Theorem
Let $\struct {R, \norm {\,\cdot\,}}$ be a normed division ring.
Let $\sequence {x_n}$ be a sequence in $R$.
Let $\sequence {x_n}$ be convergent in the norm $\norm {\,\cdot\,}$ to the following limit:
$\displaystyle \lim_{n \mathop \to \infty} x_n = l$
Then $\sequence {x_n}$ is bounded.
Proof
Let $d$ be the metric induced on $R$ by the norm $\norm {\,\cdot\,}$.
Let $\sequence {x_n}$ be convergent to the limit $l$ in $\struct {R, \norm {\,\cdot\,}}$.
By the definition of a convergent sequence in a normed division ring, $\sequence {x_n} $ is convergent to the limit $l$ in $\struct {R, d}$.
By Convergent Sequence in Metric Space is Bounded, $\sequence {x_n} $ is a bounded sequence in $\struct {R, d}$.
By Sequence is Bounded in Norm iff Bounded in Metric, $\sequence {x_n} $ is a bounded sequence in $\struct {R, \norm {\,\cdot\,} }$.
$\blacksquare$
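As a concrete illustration of the argument, take $\struct {R, \norm {\,\cdot\,}}$ to be the real numbers with the absolute value; the bound below combines a tail estimate with the maximum over the finitely many initial terms, mirroring the metric-space proof:

```python
# Illustration in a concrete normed division ring: R = real numbers with
# the absolute value as norm. The sequence x_n = l + (-1)^n / n converges
# to l, and a bound for the whole sequence is obtained by combining a
# tail estimate |x_n| <= |l| + eps (valid once 1/n < eps) with the
# maximum over the finitely many earlier terms.
l, eps = 2.0, 0.5
xs = [l + (-1) ** n / n for n in range(1, 1001)]

# Tail bound holds for all n >= N since |x_n - l| = 1/n < eps there.
N = int(1 / eps) + 1
bound = max([abs(x) for x in xs[:N - 1]] + [abs(l) + eps])
assert all(abs(x) <= bound for x in xs)
```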
|
Global Optimization
McCormick-based Algorithm for mixed-integer Nonlinear Global Optimization (MAiNGO) is a deterministic global optimization software for solving mixed-integer nonlinear programs (MINLPs), which is being developed at AVT.SVT.
Any (mixed-integer or continuous) nonlinear program with nonconvex functions can exhibit multiple local solutions. Local optimization methods can converge to any locally optimal solution and can even fail to find any feasible point for a poor choice of initial point. Heuristic methods such as genetic algorithms or simulated annealing converge to the global solution with probability one only as the runtime approaches infinity. In contrast, deterministic global methods guarantee finite convergence to the global solution given non-zero tolerances for feasibility (δ) and optimality (ϵ) specified by the user.
MAiNGO can solve MINLPs of the form
\(\begin{align*} \min_{\mathbf{x},\mathbf{y}}\,\,\, &f(\mathbf{x},\mathbf{y}) \newline \text{ s.t. } &h(\mathbf{x},\mathbf{y}) = \mathbf{0} \newline &g(\mathbf{x},\mathbf{y}) \leq \mathbf{0} \newline &\mathbf{x}\in X\subset \mathbb{I}\!\mathbb{R}^{n_x} \newline &\mathbf{y}\in Y = \{0,1\}^{n_y} \end{align*}\)
to global optimality, guaranteeing a solution that is δ-feasible and ϵ-optimal or proving that no δ-feasible point exists, where Iℝ denotes the set of closed bounded intervals of ℝ.
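To see why global methods are needed, consider a toy instance of this problem class. The objective below is invented purely for illustration (it is not from the MAiNGO documentation); its continuous part is a tilted double well with two local minima, and enumerating the binary variable together with a fine grid over the continuous one recovers the global solution:

```python
# Toy illustration (not MAiNGO code): a tiny MINLP of the form
#   min f(x, y)  s.t.  x in [-2, 2],  y in {0, 1}
# whose continuous part (x^2 - 1)^2 + 0.3*x is a tilted double well with
# local minima near x = +1 and x = -1. Enumerating y and scanning a fine
# grid over x recovers the global solution; a local method started near
# x = +1 would stall in the shallower basin.

def f(x, y):
    # y = 1 switches on a penalty centered at x = 1 (invented for illustration)
    return (x * x - 1.0) ** 2 + 0.3 * x + 0.5 * y * (x - 1.0) ** 2

def brute_force_minlp(lo=-2.0, hi=2.0, n=20001):
    best_val, best_x, best_y = float("inf"), None, None
    for y in (0, 1):
        for i in range(n):
            x = lo + (hi - lo) * i / (n - 1)
            v = f(x, y)
            if v < best_val:
                best_val, best_x, best_y = v, x, y
    return best_val, best_x, best_y

value, x_star, y_star = brute_force_minlp()
# Global minimum: y = 0 with x slightly below -1.
```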
One of the main algorithmic features of MAiNGO is that it operates in the original variable space using McCormick relaxations [McCormick1976, Mitsos2009, Tsoukalas2014] (i.e., no auxiliary variables are introduced during the optimization process), implemented via the MC++ library. Additionally, MAiNGO uses a specialized heuristic for tightening McCormick relaxations [Najman2018], as well as custom relaxations for various functions (including several functions relevant to process systems engineering) [Najman2016]. Furthermore, MAiNGO offers significant flexibility in model formulation and can handle functions whose algebraic form is not visible to the optimizer but whose function values, derivatives, relaxations, and subgradients are available at every point of the domain.
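For the bilinear term $w = xy$ on a box, the McCormick envelope has a simple closed form (McCormick, 1976). The following sketch evaluates it; it is illustrative only, not MAiNGO/MC++ code:

```python
# The classical McCormick envelope (McCormick, 1976) for a bilinear term
# w = x*y on the box [xL, xU] x [yL, yU]: the pointwise maximum of two
# affine underestimators and the pointwise minimum of two affine
# overestimators of x*y.

def mccormick_bilinear(x, y, xL, xU, yL, yU):
    under = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    over = min(xU * y + x * yL - xU * yL,
               xL * y + x * yU - xL * yU)
    return under, over

# On any box the envelope brackets the true product x*y everywhere,
# and it is exact at the four corners of the box.
```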
Applications:
Flowsheet Optimization:
Flowsheet optimization is an application for which MAiNGO allows reduced-space optimization formulations (Mitsos2009, Bongartz2017). These can be interpreted as hybrids between equation-based and sequential modular approaches. Compared to the purely equation-based approaches that have so far been used for global optimization, this can lead to significant reductions in computational time (Bongartz2017, Bongartz2017ESCAPE, Huster2017, Bongartz2018).
Optimization of Hybrid Models with Artificial Neural Networks:
The solution of problems in the original variable space has particular advantages for optimization problems with artificial neural networks embedded (Schweidtmann2018a). For example, this method enables global optimization of hybrid process models in which thermodynamics are modeled through artificial neural networks (Schweidtmann2018b). Another example is data-driven modeling and subsequent optimization of membrane properties (Rall2018).
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
|
Gevrey regularity and existence of Navier-Stokes-Nernst-Planck-Poisson system in critical Besov spaces
1. School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330032, China
2. Department of Mathematics, Northwest Normal University, Lanzhou 730070, China
J. Funct. Anal., 87 (1989), 359-369], we prove that the solutions are analytic in a Gevrey class of functions. As a consequence of the Gevrey estimates, we in particular obtain higher-order derivatives of solutions in Besov and Lebesgue spaces. Finally, we prove that there exists a positive constant $\mathbb{C}$ such that if the initial data $(u_{0}, n_{0}, c_{0})=(u_{0}^{h}, u_{0}^{3}, n_{0}, c_{0})$ satisfies
$\|(n_{0}, c_{0}, u_{0}^{h})\|_{\dot{B}^{-2+3/q}_{q, 1}\times \dot{B}^{-2+3/q}_{q, 1}\times\dot{B}^{-1+3/p}_{p, 1}}+\|u_{0}^{h}\|_{\dot{B}^{-1+3/p}_{p, 1}}^{\alpha}\|u_{0}^{3}\|_{\dot{B}^{-1+3/p}_{p, 1}}^{1-\alpha}\le 1/\mathbb{C}$
for parameters $p, q, \alpha$ with $1<p<q\le 2p<\infty$, $\frac{1}{p}+\frac{1}{q}>\frac{1}{3}$, $1<q<6$, $\frac{1}{p}-\frac{1}{q}\le\frac{1}{3}$, then the system admits a global solution.
Keywords: Nernst-Planck-Poisson system, Navier-Stokes system, Gevrey regularity, global solutions, Besov spaces.
Mathematics Subject Classification: Primary: 35Q30; Secondary: 76D03, 35E15.
Citation: Minghua Yang, Jinyi Sun. Gevrey regularity and existence of Navier-Stokes-Nernst-Planck-Poisson system in critical Besov spaces. Communications on Pure & Applied Analysis, 2017, 16 (5): 1617-1639. doi: 10.3934/cpaa.2017078
References:
[1] A. Biswas, V. Martinez and P. Silva, On Gevrey regularity of the supercritical SQG equation in critical Besov spaces.
[2] [3] H. Bae, A. Biswas and E. Tadmor, Analyticity and decay estimates of the Navier-Stokes equations in critical Besov spaces.
[4] [5] A. Biswas and D. Swanson, Gevrey regularity of solutions to the 3D Navier-Stokes equations with weighted $\ell^{p}$ initial data.
[6] [7] J.-Y. Chemin, M. Paicu and P. Zhang, Global large solutions to 3-D inhomogeneous Navier-Stokes system with one slow variable diffusion.
[8] H. Kozono and M. Yamazaki, Semilinear heat equations and the Navier-Stokes equation with distributions in new function spaces as initial data.
[9] [10] R. Danchin,
[11] [12] C. Deng, J. Zhao and S. Cui, Well-posedness of a dissipative nonlinear electrohydrodynamic system in modulation spaces.
[13] C. Deng, J. Zhao and S. Cui, Well-posedness for the Navier-Stokes-Nernst-Planck-Poisson system in Triebel-Lizorkin space and Besov space with negative indices.
[14] [15] [16] [17] H. Bahouri, J.-Y. Chemin and R. Danchin, Fourier Analysis and Nonlinear Partial Differential Equations, Springer, Berlin, 2011. doi: 10.1007/978-3-642-16830-7.
[18] J. Huang, M. Paicu and P. Zhang, Global well-posedness of incompressible inhomogeneous fluid systems with bounded density or non-Lipschitz velocity.
[19] [20] [21] [22] [23] E. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, Princeton, 1970.
[24] [25] M. Paicu and P. Zhang, Global solutions to the 3D incompressible anisotropic Navier-Stokes system in the critical spaces.
[26] [27] [28] B. Wang, Z. Huo, C. Hao and Z. Guo, Harmonic Analysis Methods for Nonlinear Evolution Equations, World Scientific, 2011. doi: 10.1142/9789814360746.
[29] [30] [31] [32] [33] [34] J. Newman and K. Thomas-Alyea, Electrochemical Systems, third ed., John Wiley & Sons, 2004.
[35] R. Ryham, An energetic variational approach to mathematical modeling of charged fluids: charge phases, simulation and well posedness (Doctoral dissertation), The Pennsylvania State University, 2006, p. 83.
[36] [37] [38] J. Xiao, Homothetic variant of fractional Sobolev space with application to Navier-Stokes system revisited.
[39] C. Zhai and T. Zhang, Global well-posedness to the 3-D incompressible inhomogeneous Navier-Stokes equations with a class of large velocity.
[40] J. Zhao, C. Deng and S. Cui, Global well-posedness of a dissipative system arising in electrohydrodynamics in negative-order Besov spaces.
[41] J. Zhao, C. Deng and S. Cui, Well-posedness of a dissipative system modeling electrohydrodynamics in Lebesgue spaces.
[42] J. Zhao, T. Zhang and Q. Liu, Global well-posedness for the dissipative system modeling electro-hydrodynamics with large vertical velocity component in critical Besov space.
|
The difference rule of derivatives is derived in differential calculus from first principles. Let $f{(x)}$ and $g{(x)}$ be two differentiable functions; their difference is written as $f{(x)}-g{(x)}$. The derivative of the difference of the two functions with respect to $x$ is written in the following mathematical form.
$\dfrac{d}{dx}{\, \Big(f{(x)}-g{(x)}\Big)}$
Take $m{(x)} = f{(x)}-g{(x)}$ and then $m{(x+\Delta x)} = f{(x+\Delta x)}-g{(x+\Delta x)}$
According to the definition of the derivative, the derivative of the function $m{(x)}$ with respect to $x$ is written as the following limit.
$\dfrac{d}{dx}{\, \Big(m{(x)}\Big)}$ $\,=\,$ $\displaystyle \large \lim_{\Delta x \,\to\, 0}{\normalsize \dfrac{m{(x+\Delta x)}-m{(x)}}{\Delta x}}$
Replace the actual functions of $m{(x)}$ and $m{(x+\Delta x)}$.
$\implies$ $\dfrac{d}{dx}{\, \Big(f{(x)}-g{(x)}\Big)}$ $\,=\,$ $\displaystyle \large \lim_{\Delta x \,\to\, 0}{\normalsize \dfrac{\Big(f{(x+\Delta x)}-g{(x+\Delta x)}\Big)-\Big(f{(x)}-g{(x)}\Big)}{\Delta x}}$
Now, take $\Delta x = h$ and simplify the right-hand side to derive the derivative of the difference of two functions by first principles.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\Big(f{(x+h)}-g{(x+h)}\Big)-\Big(f{(x)}-g{(x)}\Big)}{h}}$
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f{(x+h)}-g{(x+h)}-f{(x)}+g{(x)}}{h}}$
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f{(x+h)}-f{(x)}-\Big(g{(x+h)}-g{(x)}\Big)}{h}}$
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg[\dfrac{f{(x+h)}-f{(x)}}{h}-\dfrac{g{(x+h)}-g{(x)}}{h}\Bigg]}$
As per difference rule of limits, the limit of difference of two functions can be written as difference of their limits.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{f{(x+h)}-f{(x)}}{h}}$ $-$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{g{(x+h)}-g{(x)}}{h}}$
According to the first principle of differentiation, each term on the right-hand side of the equation represents the derivative of the respective function.
$\,\,\, \therefore \,\,\,\,\,\,$ $\dfrac{d}{dx}{\, \Big(f{(x)}-g{(x)}\Big)}$ $\,=\,$ $\dfrac{d}{dx}{\, f{(x)}}$ $-$ $\dfrac{d}{dx}{\, g{(x)}}$
In this way, the difference rule of derivatives can be derived in differential calculus mathematically from first principle.
The derivative difference rule is also written in two alternative forms by taking $u = f{(x)}$ and $v = g{(x)}$.
$(1) \,\,\,$ $\dfrac{d}{dx}{\, (u-v)}$ $\,=\,$ $\dfrac{du}{dx}$ $-$ $\dfrac{dv}{dx}$
$(2) \,\,\,$ ${d}{\, (u-v)}$ $\,=\,$ $du-dv$
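The rule can be spot-checked numerically. In this sketch $f$ and $g$ are arbitrary smooth examples and the derivatives are approximated by central differences:

```python
# Numerical spot-check of the difference rule d/dx [f(x) - g(x)] = f'(x) - g'(x)
# using a central-difference approximation; f and g are arbitrary smooth
# examples chosen for illustration.
import math

def derivative(fn, x, h=1e-6):
    # central difference: (fn(x+h) - fn(x-h)) / (2h)
    return (fn(x + h) - fn(x - h)) / (2 * h)

f = math.sin
g = math.exp

x0 = 0.7
lhs = derivative(lambda x: f(x) - g(x), x0)       # d/dx (f - g) at x0
rhs = derivative(f, x0) - derivative(g, x0)       # f'(x0) - g'(x0)
# lhs and rhs agree up to the truncation/rounding error of the scheme,
# and both approximate the exact value cos(0.7) - exp(0.7).
```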
Learn how to solve easy to difficult mathematics problems of all topics in various methods with step by step process and also maths questions for practising.
|
A trigonometric identity derived from the Pythagorean theorem is called a Pythagorean identity.
There are three possible Pythagorean identities in terms of the trigonometric functions. They are used as formulas in trigonometry, so it is important to remember them when studying trigonometry.
The sum of the squares of the sine and cosine functions at an angle is equal to one.
$\sin^2{\theta} + \cos^2{\theta} = 1$
Subtracting the square of the tangent function from the square of the secant function at an angle gives one.
$\sec^2{\theta} \,-\, \tan^2{\theta} = 1$
Subtracting the square of the cotangent function from the square of the cosecant function at an angle gives one.
$\csc^2{\theta} \,-\, \cot^2{\theta} = 1$
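These identities can be verified numerically at a few sample angles (chosen so that none of the tangent, secant, cotangent or cosecant functions is undefined):

```python
# Numerical spot-check of the three Pythagorean identities at sample
# angles (chosen so that none of tan, sec, cot, csc is undefined).
import math

for theta in (0.3, 1.0, 2.0, 4.0):
    s, c = math.sin(theta), math.cos(theta)
    assert abs(s ** 2 + c ** 2 - 1) < 1e-12              # sin^2 + cos^2 = 1
    assert abs((1 / c) ** 2 - (s / c) ** 2 - 1) < 1e-9   # sec^2 - tan^2 = 1
    assert abs((1 / s) ** 2 - (c / s) ** 2 - 1) < 1e-9   # csc^2 - cot^2 = 1
```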
|
Bézout's Lemma/Euclidean Domain Theorem
Let $\struct {D, +, \times}$ be a Euclidean domain.
Let $\nu: D \setminus \set 0 \to \N$ be the Euclidean valuation on $D$.
Let $a, b \in D$ such that $a$ and $b$ are not both equal to $0$.
Let $\gcd \set {a, b}$ be the greatest common divisor of $a$ and $b$.
Then:
$\exists x, y \in D: a \times x + b \times y = \gcd \set {a, b}$
such that $\gcd \set {a, b}$ is the element of $D$ such that:
$\forall c = a \times x + b \times y \in D: \map \nu {\gcd \set {a, b} } \le \map \nu c$
Proof
We are given that $a, b \in D$ such that $a$ and $b$ are not both equal to $0$.
Without loss of generality, suppose specifically that $b \ne 0$.
Let $S \subseteq D$ be the set defined as:
$S = \set {x \in D_{\ne 0}: x = m \times a + n \times b: m, n \in D}$
where $D_{\ne 0}$ denotes $D \setminus \set 0$.
Setting $m = 0$ and $n = 1$, for example, it is noted that $b \in S$.
Therefore $S \ne \O$.
By definition, $\nu$ has the properties:
$(1): \quad \forall a, b \in D, b \ne 0: \exists q, r \in D$ such that $a = q \times b + r$, where either $\map \nu r < \map \nu b$ or $r = 0$
$(2): \quad \forall a, b \in D, b \ne 0: \map \nu a \le \map \nu {a \times b}$
Let $\nu \sqbrk S$ denote the image of $S$ under $\nu$.
We have that:
$\nu \sqbrk S \subseteq \N$
Let $d \in S$ be such that $\map \nu d$ is the smallest element of $\nu \sqbrk S$.
By definition of $S$, we have that:
$d = u \times a + v \times b$
for some $u, v \in D$.
Let $x \in S$.
By $(1)$ above:
$x = q \times d + r$
such that either:
$\map \nu r < \map \nu d$
or:
$r = 0$
Aiming for a contradiction, suppose $r \ne 0$.
Then:
Then:
$\exists m, n \in D: x = m \times a + n \times b$
so that:
$r = x - q \times d = \paren {m \times a + n \times b} - q \paren {u \times a + v \times b} = \paren {m - q \times u} a + \paren {n - q \times v} b$
Hence:
$\paren {r \in S} \land \paren {\map \nu r < \map \nu d}$
This contradicts the choice of $d$ as the element of $S$ for which $\map \nu d$ is smallest in $\nu \sqbrk S$.
Hence $r = 0$, and therefore:
$\forall x \in S: x = q \times d$
for some $q \in D$.
That is:
$\forall x \in S: d \divides x$
where $\divides$ denotes divisibility.
In particular:
$d \divides a = 1 \times a + 0 \times b$
$d \divides b = 0 \times a + 1 \times b$
Thus:
$d \divides a \land d \divides b \implies \map \nu 1 \le \map \nu d \le \map \nu {\gcd \set {a, b} }$
However, note that as $\gcd \set {a, b}$ also divides $a$ and $b$ (by definition), we have:
$\gcd \set {a, b} \divides \paren {u \times a + v \times b} = d$
$\leadsto \quad \gcd \set {a, b} \divides d$
$\leadsto \quad \map \nu {\gcd \set {a, b} } \le \map \nu d$
Thus:
$\gcd \set {a, b} = d = u \times a + v \times b$
$\blacksquare$
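In the Euclidean domain $\Z$ (with valuation $\map \nu n = \size n$), the coefficients $x, y$ of Bézout's lemma can be computed explicitly by the extended Euclidean algorithm. The following sketch is a standard implementation, not part of the proof above:

```python
# The integers Z form a Euclidean domain with valuation nu(n) = |n|, so
# Bezout's lemma can be made constructive there via the extended Euclidean
# algorithm: it returns (g, x, y) with a*x + b*y = g = gcd(a, b).

def extended_gcd(a, b):
    old_r, r = a, b      # remainders
    old_x, x = 1, 0      # Bezout coefficient of a
    old_y, y = 0, 1      # Bezout coefficient of b
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
# 240*x + 46*y == g == gcd(240, 46) == 2
```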
Source of Name
This entry was named for Étienne Bézout.
|
Let the equations of motion be expressed in a frame with coordinates $q$. We now want to switch over to another (arbitrarily moving) frame, whose corresponding coordinates are $Q$, given by:$$Q = f(q, t)$$For example, if the frame itself is moving with position $x(t)$, we will have:$$Q = q - x(t)$$(where $x$ is not dynamic, but is completely specified in advance).
This is quite obviously, in the general case, just a point transformation that keeps changing with time; or, if you prefer, a different point transformation at different times. And that's the way one might expect it to be - this follows directly from the fact that the moving frame is moving.
This does not necessarily leave the equations of motion invariant. It's true that Euler-Lagrange equations (note that the Lagrangian must now be allowed to be time dependent)$$\frac{d}{dt}\frac{\partial L(q, \dot{q}, t)}{\partial \dot{q}} = \frac{\partial L(q, \dot{q}, t)}{\partial q}$$continue to hold, but the change in the form of the Lagrangian effected by the change of frame means that the equations of motion can 'look' different.
In the case that this point transformation is also a gauge transformation, we have a special situation. Consider the following relevant example. In classical mechanics, from an inertial frame, the Lagrangian is:$$L(q, \dot{q}) = \frac{1}{2}m\dot{q}^2 - V(q)$$The general transformation to an arbitrary moving (non-rotating, for simplicity) frame is given by $q = Q + x(t)$, so that $\dot{q} = \dot{Q} + \dot{x}(t)$, and the Lagrangian becomes:$$L(Q, \dot{Q},t) = \frac{1}{2}m\dot{Q}^2 + m\dot{Q}\dot{x}(t) + \frac{1}{2}m\dot{x}(t)^2 - V(Q+x(t)) $$The term quadratic in $\dot{x}$ produces only a pure-boundary term in the action $S = \int L dt$, and is irrelevant. The main term of interest is the second one (responsible for the fictitious forces, and part of the generalized potential, as mentioned in this answer to the question you linked), and its contribution to the action:$$S_2 = \int m\dot{Q}\dot{x}(t)\ dt$$Integrating by parts and neglecting the boundary term, we get$$S_2' = -\int mQ\ddot{x}(t) \ dt$$This readily gives the answer - for the "time-dependent point transformation", which corresponds to a change of frame, to be a gauge transformation, we must have $\ddot{x}(t) = 0$ for all time. In this case, we get the important part of the action as:$$S' = \int \left[ \frac{1}{2}m\dot{Q}^2 - V(Q, t) \right] dt$$This is hardly any different from the one we started with (the time dependence in $V$ is not an issue, and is just a reflection of the fact that the "field" would also appear to move in a moving frame; the important thing is that at a given time $t$, the particle sees the same force $-\nabla V$ at its location in both frames).
This is indeed, what makes inertial frames ($\ddot{x} = 0$ as seen from another inertial frame) special - the general point-transformation due to switching frames reduces to a gauge transformation, and the equation of motion 'looks' the same i.e. 'Galilean invariance'. That this doesn't occur in non-inertial frames leads to the fictitious forces seen in such frames.
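The calculation above can be checked symbolically. The sketch below (assuming sympy is available) forms the Euler-Lagrange equation for the moving-frame Lagrangian and confirms that the fictitious term $-m\ddot{x}$ appears in the equation of motion:

```python
# Sketch (using sympy) of the Euler-Lagrange equation in the moving frame
# Q = q - x(t): starting from L = (1/2) m (Qdot + xdot)^2 - V(Q + x),
# the equation of motion acquires the fictitious-force term -m*xddot,
# which vanishes exactly when xddot = 0, i.e. for boosts between
# inertial frames.
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
Q = sp.Function('Q')(t)
x = sp.Function('x')(t)
V = sp.Function('V')

L = sp.Rational(1, 2) * m * (Q.diff(t) + x.diff(t)) ** 2 - V(Q + x)

# Euler-Lagrange: d/dt (dL/dQdot) - dL/dQ = 0
eom = sp.diff(L, Q.diff(t)).diff(t) - sp.diff(L, Q)

# Rearranged, eom = 0 reads m*Qddot = -V'(Q + x) - m*xddot; the
# coefficient of xddot below is m, i.e. the fictitious force is -m*xddot.
coeff = sp.simplify(sp.diff(eom, sp.Derivative(x, (t, 2))))
```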
|
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seems to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example upon entering $\frac xy$ will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to particular search query? For example in Google I am able to use link such as: google.com/search?q=approach0+xyz Feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect, which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that as nowadays several pages uses LaTex syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will reply to your feedback in our chat. — Wei Zhong1 min ago
I still think that it would be useful if you added to your post where do you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago
We are an open-source project hosted on GitHub: http://github.com/approach0 Welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently there is only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 looks into expression structure and symbolic alpha-equivalence too. For now, however, $x_1$ will not match $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac \qvar{x} \qvar{y}$ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technically, the way Google does it, as you mentioned, is an HTTP GET request, but for mathematics a GET request may not be appropriate since a query has structure; usually a developer would instead use an HTTP POST request with a JSON-encoded body. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords.
@MartinSleziak Right now there are two solutions for "query link" problem you addressed. First is to use browser back/forward button to navigate among query history.
@MartinSleziak Second is to use the command line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (It just needs some extra effort though.)
@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked at the top; different symbols such as "a", "b" are ranked after exact matches.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.
@MartinSleziak Yes, you can: Greek letters are tokenized the same way as ordinary alphabet letters.
@MartinSleziak As for the upper bounds of integrals, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only workaround is to use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts from Math StackExchange. This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned.
@MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is enlarging the index and publishing.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak57 mins ago
"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that it is not clear why a question which software one uses should be a valid poll while the question which book one uses is not. — quid7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly ok with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I saw this kind of poll for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt to write English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
|
Basic Idea
The algorithm essentially counts the number of days that have passed between a fixed date (this will be 1st January 0 AD) and a chosen date (our birthday). By taking this number and finding its remainder when divided by 7, provided we know what day of the week it was on 1st January 0 AD, we will know what day we were born on.
Modular Arithmetic
We do not need to know the exact number of days that have passed between our fixed date and our birthday, just the remainder when this number is divided by 7. To make our algorithm easier to work with, then, we will be taking certain shortcuts, so that what we calculate is not actually the exact number of days, but a smaller number which has the same remainder when divided by 7 (if this is confusing, don't worry, it will make sense later).
Consequently, we'll be using the ideas of 'modular arithmetic', or 'clock arithmetic'. If you have not encountered these ideas before, you should read the NRICH article on modular arithmetic. We want to know the number of days between 1 Jan 0 and our birthday, mod 7.
The Algorithm
A date can be divided into the following four bits of information: the number of hundreds of years, the number of additional years, the number of months, and the number of days that have passed since 1 January 0 AD. Suppose our birthday is 23rd September 1989. To count the days that have passed since 1 Jan 0, we will use four steps: count the days up to 1 Jan 1900, then the days between 1 Jan 1900 and 1 Jan 1989, the days between 1 Jan 1989 and 1 Sep 1989, and finally the days between 1 Sep 1989 and 23 Sep 1989.
The number of days between 1 Sep 1989 and 23 Sep 1989 is obviously 23, or in general our number $D$. Years and months are a bit more difficult.
Years
$$365 = 52\times 7+1$$ so there are exactly 52 weeks and one day in a year. This means that our birthday moves forward by one day each year, i.e. 1 year $=$ 365 days $\equiv$ 1 day (mod 7). So, mod 7, is the number of days between 1 Jan 0 and 1 Jan 1989 just 1989?
Unfortunately, we have to take leap years into account. There are 366 days in a leap year, and our birthday moves forward 2 days. We need to add an extra 1 for each leap year on top of the total number of years. How many leap years have passed since 1 Jan 0? You probably know that we get a leap year every four years, in fact precisely on those years which are divisible by 4. However, there is one more rule you may not know: if a year is divisible by 100 but not divisible by 400 then it is not a leap year. For example, 1900 is divisible by 100, but not by 400, so 1900 AD was not a leap year.
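The leap-year rule just stated can be written down directly (a quick illustrative check in Python, not part of the original article):

```python
def is_leap(year):
    """Gregorian rule: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1900 is divisible by 100 but not by 400, so it was not a leap year;
# 2000 is divisible by 400, so it was.
```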
The number of leap years between $Y_F$01 and $Y_F$99 is always 24, so if for a moment we ignore leap years on the turn of the century, every 100 years our birthday moves forward $100 + 24 = 124$ days $= 18\times 7 - 2 \equiv -2$ (mod 7). Now we add 1 for the leap year in 0 AD and an additional 1 every 400 years up to $Y_F$00 (this number is $1+\big[ \frac{Y_F}{4}\big]$, where $[x]$ denotes the integer part of $x$). In general, then, between 1 Jan 0 and 1 Jan $Y_F$00 there are $$-2Y_F+ 1 + \Big[ \frac{Y_F}{4} \Big]$$ days mod 7. Up to 1 Jan 1900 this number is $19\times (-2) + 1 + \big[ \frac{19}{4}\big] = -33 \equiv 2$ (mod 7).
Thinking in a similar way, we can show that between 1 Jan $Y_F 00$ and 1 Jan $Y_F Y_L$ there are $$Y_L + \Big[ \frac{Y_L}{4} \Big] - 1$$ days mod 7, the $-1$ coming from the fact that we have already taken account of leap years occurring on centuries in our previous formula. Putting the two parts of the year formula together, the number of days up to 1 Jan $Y_F Y_L$ is given by: $$-2Y_F+ 1 + \Big[ \frac{Y_F}{4} \Big] + Y_L + \Big[ \frac{Y_L}{4} \Big] - 1 = Y_L - 2Y_F+ \Big[\frac{Y_L}{4} \Big] + \Big[ \frac{Y_F}{4} \Big]$$
The number of days mod 7 up to 1 Jan 1989 is $2 + 89 + \big[\frac{89}{4}\big] - 1 = 112 \equiv 0$ (mod 7).
Months
Months are tricky because they contain different numbers of days. Coming up with a formula for the number of days in the first $M$ months in terms of $M$ requires some thought. This is by far the most difficult part of the algorithm to derive. The first columns of the chart below show the months $M$, the days mod 7 in month $M$, and the days mod 7 before the $M$th month.
We want to come up with a formula that will approximate the number of days before $M$ months mod 7, in the hope that we can make it exact by taking its integer part. Our problem month is February, which at only 28 days long makes it hard to average out the months. The average number of days mod 7 in months other than February is around 2.6. So we take as a first approximation $2.6 \times M$.
| $M$ | Days mod 7 in $M$ | Days mod 7 before $M$ | $2.6M$ mod 7 | $2.6M-4$ mod 7 | $[2.6M-4]$ mod 7 | $2.6M-4.39$ mod 7 | $[2.6M-4.39]$ mod 7 |
|-----|-------------------|------------------------|--------------|----------------|------------------|-------------------|----------------------|
| 1   | 3 | 0 | 2.6 | 5.6 | 5 | 5.21 | 5 |
| 2   | 0 | 3 | 5.2 | 1.2 | 1 | 0.81 | 0 |
| 3   | 3 | 3 | 0.8 | 3.8 | 3 | 3.11 | 3 |
| 4   | 2 | 6 | 3.4 | 6.4 | 6 | 6.01 | 6 |
| 5   | 3 | 1 | 6   | 2   | 2 | 1.61 | 1 |
| 6   | 2 | 4 | 1.6 | 4.6 | 4 | 4.21 | 4 |
| 7   | 3 | 6 | 4.2 | 0.2 | 0 | 6.81 | 6 |
| 8   | 3 | 2 | 6.8 | 2.8 | 2 | 2.41 | 2 |
| 9   | 2 | 5 | 2.4 | 5.4 | 5 | 5.01 | 5 |
| 10  | 3 | 0 | 5   | 1   | 1 | 0.61 | 0 |
| 11  | 2 | 3 | 0.6 | 3.6 | 3 | 3.21 | 3 |
| 12  | 3 | 5 | 3.2 | 6.2 | 6 | 5.81 | 5 |
| 13  | 3 | 1 | 5.8 | 1.8 | 1 | 1.41 | 1 |
| 14  | 0 | 4 | 1.4 | 4.4 | 4 | 4.01 | 4 |
If we subtract 4 from our first approximation, then the integer part of this is very close to an exact formula. You will notice that if we subtract a further number between 0.2 and 0.4, then the integer part of our approximation will be correct for all months apart from Jan and Feb (months 1 and 2). However, if we extend our table and replace months 1 and 2 by 13 and 14, then our formula is exact for all months. Therefore, given that if $M$ is 1 or 2 it is replaced by 13 or 14 and $Y_L$ is decreased by 1, the month part of the algorithm is given by $[2.6M-4.39]$.
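The claim that $[2.6M-4.39]$ reproduces the "days before month $M$" column for $M = 3, \dots, 14$ can be verified with a short script (an illustrative sketch in Python, not part of the original article; month lengths are for a common year, with January and February of the following year relabelled as months 13 and 14):

```python
# Days in each month, with Jan/Feb of the next year relabelled as months 13/14.
month_lengths = {3: 31, 4: 30, 5: 31, 6: 30, 7: 31, 8: 31,
                 9: 30, 10: 31, 11: 30, 12: 31, 13: 31}

days_before = 59  # days in January + February of a common year
for m in range(3, 15):
    # The formula should give the cumulative day count mod 7 before month m.
    assert int(2.6 * m - 4.39) % 7 == days_before % 7
    days_before += month_lengths.get(m, 0)
```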
So the number of days between 1 Jan 1989 and 1 Sep 1989 is $[2.6\times 9 - 4.39] = [19.01] = 19 \equiv 5$ (mod 7).
Putting everything together
The final step is to calculate the sum of our individual parts. If we work backwards, it turns out that 1 Jan 0 AD was a Monday, so we tweak our formula and subtract 1 from our sum to ensure that 0 is Sunday instead. Our final formula, then, is $$D + [2.6M-4.39] + \Big( Y_L - 2Y_F+ \Big[ \frac{Y_L}{4} \Big] + \Big[ \frac{Y_F}{4}\Big] \Big) - 1 = D + Y_L -2Y_F + [2.6M-5.39] + \Big[ \frac{Y_L}{4}\Big] + \Big[ \frac{Y_F}{4} \Big]$$ For 23 Sep 1989 we get $23 + 5 + 0 - 1 = 27 \equiv 6$ (mod 7). So 23 Sep 1989 was a Saturday.
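Putting the whole algorithm together (an illustrative sketch in Python of the final formula, using the convention 0 = Sunday; the function name is mine):

```python
def day_of_week(D, month, year):
    """Day of week via D + Y_L - 2*Y_F + [2.6M - 5.39] + [Y_L/4] + [Y_F/4] (mod 7)."""
    # January and February are treated as months 13 and 14 of the previous year.
    if month < 3:
        month += 12
        year -= 1
    Y_F, Y_L = divmod(year, 100)  # hundreds of years, and the remaining years
    n = (D + Y_L - 2 * Y_F + int(2.6 * month - 5.39) + Y_L // 4 + Y_F // 4) % 7
    return ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"][n]
```

For 23 September 1989 this returns "Saturday", matching the worked example above.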
|
Asymptotics in parameter of solution to elliptic boundary value problem in vicinity of outer touching of characteristics to limit equation

Yu. Z. Shaygardanov, Institute of Mathematics, Ufa Scientific Center, RAS, Chernyshevsky str. 112, 450077, Ufa, Russia

Abstract: In a bounded domain $Q\subset\mathbb{R}^3$ with a smooth boundary $\Gamma$ we consider the boundary value problem $$\varepsilon Au-\frac{\partial u}{\partial x_3}=f(x),\quad u|_{\Gamma}=0.$$ Here $A$ is a second order elliptic operator and $\varepsilon$ is a small parameter. The limiting equation, for $\varepsilon=0$, is a first order equation. Its characteristics are the straight lines parallel to the axis $Ox_3$. For the domain $\overline{Q}$ we assume that each characteristic either intersects $\Gamma$ at two points or touches $\Gamma$ from outside. The set of touching points forms a closed smooth curve. In the paper we construct the asymptotics as $\varepsilon\to 0$ of the solutions to the studied problem in the vicinity of this curve. For constructing the asymptotics we employ the method of matching asymptotic expansions.

Keywords: small parameter, asymptotics, elliptic equation. UDC: 517.928. MSC: 34E05, 35J25. Received: 09.06.2017.

Citation: Yu. Z. Shaygardanov, "Asymptotics in parameter of solution to elliptic boundary value problem in vicinity of outer touching of characteristics to limit equation", Ufimsk. Mat. Zh., 9:3 (2017), 138–147; English version: Ufa Math. J., 9:3 (2017), 137–147, https://doi.org/10.13108/2017-9-3-137
|
Or Zamir: Faster k-SAT Algorithms Using Biased-PPSZ
Wednesday, September 18, 2019 - 4:00pm to 5:00pm
The PPSZ algorithm, due to Paturi, Pudlak, Saks and Zane, is currently the fastest known algorithm for the k-SAT problem, for every k>3. For 3-SAT, a tiny improvement over PPSZ was obtained by Hertli.
A Swiss-Army Knife for Nonlinear Random Matrix Theory of Deep Learning and Beyond
Wednesday, May 29, 2019 - 4:00pm to 5:00pm
The resurgence of neural networks has revolutionized artificial intelligence since 2010.
The Sample Complexity of Toeplitz Covariance Estimation
Wednesday, June 5, 2019 - 4:00pm to 5:00pm
We study the query complexity of estimating the covariance matrix T of a distribution D over d-dimensional vectors, under the assumption that T is Toeplitz.
Adversarially Robust Property Preserving Hashes
Friday, April 26, 2019 - 11:00am to 12:00pm
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x,y) of the original inputs.
Jerry Li: Nearly Optimal Algorithms for Robust Mean Estimation
Thursday, February 7, 2019 - 4:00pm to 5:00pm
Robust mean estimation is the following basic estimation question: given samples from a distribution, where an \epsilon-fraction of them have been corrupted, how well can you estimate the mean of the distribution?
Gautam Kamath: Privately Learning High-Dimensional Distributions
Tuesday, February 19, 2019 - 3:45pm to 4:45pm
We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian in R^d and learning a product distribution in {0,1}^d in total variation distance.
Brendan Juba: New Algorithms for Conditional Linear Regression
Monday, July 30, 2018 - 4:00pm to 5:00pm
The kinds of rules that we know how to fit to data, such as linear rules, are
Manuel Sabin: Fine-Grained Derandomization: From Problem-Centric to Resource-Centric Complexity
Wednesday, April 25, 2018 - 4:00pm to 5:00pm
We show that popular hardness conjectures about problems from the field of fine-grained complexity theory imply structural results for resource-based complexity classes.
Dor Minzer: An approach for 2-to-1 Games Conjecture via expansion on the Grassmann Graph
Wednesday, September 6, 2017 - 4:00pm to 5:00pm
The PCP theorem characterizes the computational class NP, so as to allow proving approximation problems are NP-hard.
Harry Lang: Coresets for k-Means-Type Problems on Streams
Wednesday, June 28, 2017 - 4:00pm to 5:00pm
Let f be a non-negative symmetric dissimilarity measure. Given sets of points P (the input data) and C (a "query"), we define F(P,C) = \sum_{p \in P} \min_{c \in C} f(p,c).
|
Fréchet space
A Fréchet space is a complete metrizable locally convex topological vector space. Banach spaces furnish examples of Fréchet spaces, but several important function spaces are Fréchet spaces without being Banach spaces. Among these are: the Schwartz space $\mathscr{S}(\R^n)$ of all infinitely-differentiable complex-valued functions on $\R^n$ that decrease at infinity, as do all their derivatives, faster than any polynomial, with the topology given by the system of semi-norms \[ p_{\alpha,\beta}(x) = \sup_{t \in \R^n} \left| t_1^{\beta_1} \cdots t_n^{\beta_n} \frac{ \partial^{\alpha_1 + \cdots + \alpha_n} x(t_1,\ldots,t_n) }{ \partial t_1^{\alpha_1} \cdots \partial t_n^{\alpha_n} } \right|, \] where $\alpha$ and $\beta$ are non-negative integer vectors; the space $H(D)$ of all holomorphic functions on some open subset $D$ of the complex plane with the topology of uniform convergence on compact subsets of $D$, etc.
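As a concrete numerical illustration of these semi-norms (my addition, not part of the original entry), the sketch below approximates a few one-dimensional semi-norms $p_{\alpha,\beta}$ of the Gaussian $x(t) = e^{-t^2}$, which lies in $\mathscr{S}(\R)$, by taking the supremum over a finite grid:

```python
import math

def gaussian(t):
    return math.exp(-t * t)

def derivative(f, t, h=1e-4):
    # Central finite difference, a numerical stand-in for d/dt.
    return (f(t + h) - f(t - h)) / (2 * h)

grid = [i * 0.01 for i in range(-1000, 1001)]  # t in [-10, 10]

# Approximate p_{alpha,beta}(x) = sup_t |t^beta d^alpha/dt^alpha x(t)|
# for alpha = 0, 1 and beta = 0, 1, 2; all values are finite and small,
# as membership in the Schwartz space requires.
for deriv in (gaussian, lambda t: derivative(gaussian, t)):
    for beta in range(3):
        p = max(abs(t ** beta * deriv(t)) for t in grid)
        assert math.isfinite(p) and p < 10
```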
A closed subspace of a Fréchet space is a Fréchet space; so is a quotient space of a Fréchet space by a closed subspace; a Fréchet space is a barrelled space, and therefore the Banach–Steinhaus theorem is true for mappings from a Fréchet space into a locally convex space. If a separable locally convex space is the image of a Fréchet space under an open mapping, then it is itself a Fréchet space. A one-to-one continuous linear mapping from a Fréchet space onto a Fréchet space is an isomorphism (an analogue of a theorem of Banach).
Fréchet spaces are so named in honour of M. Fréchet.
How to Cite This Entry:
Fréchet space.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Fr%C3%A9chet_space&oldid=27212
|
The course started yesterday with one very concrete example, followed by loads of abstractions. Last night’s lecture began the trip back into things that are a little bit more concrete.
What I Taught
I began the lecture by handing out a few pages from Larry Gonick’s fantastic The Cartoon Guide to Calculus. As I noted yesterday, the compressed summer schedule means that certain parts of the curriculum are given short shrift, including the formal definition of the limit. I would like them to at least see the ideas, so I gave them something entertaining to read, and spent a couple of minutes trying to get the general gist of the idea across.
With that out of the way, we dove straight into a rather overwhelming section on rules for finding limits. I asked them to essentially memorize six computational rules (things like “the limit of a sum is the sum of the limits”), as well as two special cases (for any constant \(c\) and any point \(a\), we have \(\lim_{x\to a} c = c\) and \(\lim_{x\to a} x = a\)). I showed how a couple of the rules could be used to derive others, and demonstrated the special cases graphically. No formal proofs were given.
Using these “limit laws,” we worked through a couple of examples involving polynomials and rational functions in detail. We noted that in these cases, it seemed like it was possible to find \(\lim_{x\to a} f(x)\) by evaluating \(f\) at \(a\). I told them that this was, in fact, the case (assuming that \(f\) is defined at \(a\)), and gave a quick sketch of the proof.
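A worked instance of the kind described (my example, not one from the lecture): by the constant, identity, sum, and product laws,

\[ \lim_{x\to 2}\left(3x^2 + 1\right) = 3\left(\lim_{x\to 2} x\right)^2 + 1 = 3\cdot 2^2 + 1 = 13, \]

which is exactly the value of the polynomial at \(x = 2\).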
I finished the lecture by relating limits to one-sided limits, stating the Squeeze Theorem, and demonstrating how the results covered in the lecture could be used to find \(\lim_{x\to 0} x\sin(\pi/x)\). In the last ten minutes of class, I opened the floor for questions about the homework.
What Worked
I think that the examples were pretty solid, and that most of the students were able to see and understand the process of applying the limit laws to compute the limits of polynomials and rational functions. One student noticed that the general idea was to use the laws to reduce all of the terms to either constant functions or the identity function, which allowed us to use the special cases. I am heartened that I didn’t have to point this out—it was “discovered” by one of the students, which I think makes it a bit more real and accessible for the entire class.
What Didn’t Work
On the other hand, this was a pretty rough lecture. Because the treatment of the material is a bit less formal and much faster, the limit laws look like a bunch of arbitrary rules that need to be memorized. I tried to motivate them and show the connections, but there was neither time nor sufficient background to really do the job justice. I’m not sure what else to do in the situation—there is a certain amount of material that I am expected to cover (and not cover) in a limited amount of time—but it seems like there might be a better way. To the three or four of you reading along, any thoughts would be appreciated.
I also made a complete and utter hash of my last example. The limit
\[ \lim_{x\to 0} \left[x\sin\left(\frac{\pi}{x}\right)\right] \] was meant to highlight a lot of ideas. Because the sine term doesn’t have a limit at zero, the previous technique of reducing the problem to terms with known limits wasn’t going to work. This requires the use of other tools, and the only other theorem presented was the Squeeze Theorem. To use this theorem, it is necessary to find two functions that bound the function of interest, and have the same limit at 0. In this case, \(|x|\) and \(-|x|\) do the job, which means finding those limits at 0. To do this, we have to take the absolute value apart piecewise, find one-sided limits, and use those to obtain the limit. All in all, there is a lot to see with this example.
Unfortunately, when I first wrote the problem on the board, I was beginning to stress about having enough time to work the homework (I had about 30 minutes left), and I wrote a 3 rather than an \(x\) for the multiplication. Obviously, this didn’t work the way I wanted, and I had to backtrack. In and of itself, no big deal—chalk is cheap, and students are generally willing to forgive one mistake, especially if it is caught early, as it was here. But I let myself get flustered, which was bad.
Because I was flustered and worried about time, I rushed through the next step. I stated, without much explanation, that
\[ -|x| \le x\sin\left(\frac{\pi}{x}\right) \le |x|. \] This is true, but I lost several students here, and had to come back to it after finishing the problem, which must have been even more confusing and disorientating for the students.
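(To spell out the justification, for the record: since \(|\sin(u)| \le 1\) for every real \(u\), we have \(\left|x\sin(\pi/x)\right| \le |x|\) for all \(x \neq 0\), which is exactly the two-sided bound above.)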
Instead of properly explaining the above inequality, I trooped on ahead, and found \(\lim_{x\to 0^-} |x|\) and \(\lim_{x\to 0^+} |x|\) from the piecewise definition. This part was, I think, pretty clear, but on the opposite side of the board because I didn’t want to erase the statement of the squeeze theorem from the center board. Bad Xander. On the bright side, this was the last piece of the puzzle, and I was able to finish things off without too much more difficulty.
Except that I had to go back and explain the inequality.
Ultimately, I think that I lost a few students with the example because I got flustered, then allowed myself to compound errors upon errors. There is a truism in flying planes that one mistake doesn’t lead to a crash—crashes happen when a pilot makes a mistake, then another, then another. That’s what I allowed myself to do.
So, here’s how to fix the problem in the future:
Stop, breathe, and relax after making a mistake. One mistake is easily corrected.
Slow down and don’t stress about time. The amount of backtracking required to correct for rushing quickly costs more than doing things right the first time.
Be willing to erase the board. For seriously.
|
Reference documentation for deal.II version Git 0491297983 2019-09-23 09:31:59 +0200
#include <deal.II/sundials/kinsol.h>
class AdditionalData
KINSOL(const AdditionalData &data = AdditionalData(), const MPI_Comm mpi_comm = MPI_COMM_WORLD)
~KINSOL()
unsigned int solve(VectorType &initial_guess_and_solution)

static ::ExceptionBase &ExcKINSOLError(int arg1)

std::function<void(VectorType &)> reinit_vector
std::function<int(const VectorType &src, VectorType &dst)> residual
std::function<int(const VectorType &src, VectorType &dst)> iteration_function
std::function<int(const VectorType &current_u, const VectorType &current_f)> setup_jacobian
std::function<int(const VectorType &ycur, const VectorType &fcur, const VectorType &rhs, VectorType &dst)> solve_jacobian_system
std::function<VectorType &()> get_solution_scaling
std::function<VectorType &()> get_function_scaling

void set_functions_to_trigger_an_assert()

static ::ExceptionBase &ExcFunctionNotProvided(std::string arg1)

AdditionalData data
void *kinsol_mem
N_Vector solution
N_Vector u_scale
N_Vector f_scale
MPI_Comm communicator
GrowingVectorMemory<VectorType> mem
KINSOL is a solver for nonlinear algebraic systems in residual form \(F(u) = 0\) or fixed point form \(G(u) = u\). It includes a Newton-Krylov solver as well as Picard and fixed point solvers, both of which can be accelerated with Anderson acceleration. KINSOL is based on the previous Fortran package NKSOL of Brown and Saad.
KINSOL’s Newton solver employs the inexact Newton method. As this solver is intended mainly for large systems, the user is required to provide their own solver function. If a solver function is not provided, the internal dense solver of KINSOL is used. Be warned that this solver computes the Jacobian approximately, and may be efficient only for small systems.
At the highest level, KINSOL implements the following iteration scheme:
Here, \(u_n\) is the \(n\)-th iterate to \(u\), and \(J(u) = \nabla_u F(u)\) is the system Jacobian. At each stage in the iteration process, a scalar multiple of the step \(\delta_n\), is added to \(u_n\) to produce a new iterate, \(u_{n+1}\). A test for convergence is made before the iteration continues.
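At its core, the scheme just described is the damped Newton iteration \(u_{n+1} = u_n + \lambda \delta_n\) with \(J(u_n)\delta_n = -F(u_n)\). The following is a minimal illustrative sketch (plain Newton with \(\lambda = 1\) and a toy 2x2 linear solve, written in Python rather than the deal.II/KINSOL C++ interface; all names are mine):

```python
def newton(F, J, u0, tol=1e-10, max_iter=50):
    """Solve F(u) = 0 via u_{n+1} = u_n + delta_n, with J(u_n) delta_n = -F(u_n)."""
    u = [float(v) for v in u0]
    for _ in range(max_iter):
        r = F(u)
        if max(abs(v) for v in r) < tol:      # convergence test
            break
        d = solve2x2(J(u), [-v for v in r])   # delta_n
        u = [ui + di for ui, di in zip(u, d)] # lambda = 1 (standard Newton step)
    return u

def solve2x2(A, b):
    """Cramer's rule for a 2x2 system (stands in for KINSOL's linear solver)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Example: F(u) = (u0^2 + u1^2 - 2, u0 - u1), with a root at (1, 1).
F = lambda u: [u[0] ** 2 + u[1] ** 2 - 2, u[0] - u[1]]
J = lambda u: [[2 * u[0], 2 * u[1]], [1.0, -1.0]]
root = newton(F, J, [2.0, 0.5])
```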
Unless specified otherwise by the user, KINSOL strives to update Jacobian information as infrequently as possible to balance the high costs of matrix operations against other costs. Specifically, these updates occur when:
KINSOL allows changes to the above strategy through optional solver inputs. The user can disable the initial Jacobian information evaluation or change the default value of the number of nonlinear iterations after which a Jacobian information update is enforced.
To address the case of ill-conditioned nonlinear systems, KINSOL allows prescribing scaling factors both for the solution vector and for the residual vector. For scaling to be used, the user may supply the function get_solution_scaling(), which returns values \(D_u\), the diagonal elements of a scaling matrix such that \(D_u u_n\) has all components of roughly the same magnitude when \(u_n\) is close to a solution, and get_function_scaling(), which supplies values \(D_F\), the diagonal scaling matrix elements such that \(D_F F\) has all components of roughly the same magnitude when \(u_n\) is not too close to a solution.
When scaling values are provided for the solution vector, these values are automatically incorporated into the calculation of the perturbations used for the default difference quotient approximations for Jacobian information if the user does not supply a Jacobian solver through the solve_jacobian_system() function.
Two methods of applying a computed step \(\delta_n\) to the previously computed solution vector are implemented. The first and simplest is the standard Newton strategy which applies the update with a constant \(\lambda\) always set to 1. The other method is a global strategy, which attempts to use the direction implied by \(\delta_n\) in the most efficient way for furthering convergence of the nonlinear problem. This technique is implemented in the second strategy, called Linesearch. This option employs both the \(\alpha\) and \(\beta\) conditions of the Goldstein-Armijo linesearch algorithm given in
J. E. Dennis and R. B. Schnabel, "Numerical Methods for Unconstrained Optimization and Nonlinear Equations" (SIAM, Philadelphia, 1996), where \(\lambda\) is chosen to guarantee a sufficient decrease in \(F\) relative to the step length, as well as a minimum step length relative to the initial rate of decrease of \(F\). One property of the algorithm is that the full Newton step tends to be taken close to the solution.
As a user option, KINSOL permits the application of inequality constraints, \(u_i > 0\) and \(u_i < 0\), as well as \(u_i \geq 0\) and \(u_i \leq 0\), where \(u_i\) is the \(i\)-th component of \(u\). Any such constraint, or no constraint, may be imposed on each component by providing the optional functions
KINSOL will reduce step lengths in order to ensure that no constraint is violated. Specifically, if a new Newton iterate will violate a constraint, the maximum step length along the Newton direction that will satisfy all constraints is found, and \(\delta_n\) is scaled to take a step of that length.
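This step-length reduction can be sketched as follows (a hedged illustration, not KINSOL's code; for simplicity only non-negativity constraints are handled, and the flags and test vectors are made up for the demo):

```python
def constrained_step(u, delta, nonneg):
    """Illustrative only: shrink the step so that components flagged
    in `nonneg` (meaning u_i >= 0 is required) stay feasible."""
    lam = 1.0
    for ui, di, flag in zip(u, delta, nonneg):
        if flag and di < 0 and ui + lam * di < 0:
            # largest step along delta keeping u_i + lam * d_i >= 0
            lam = min(lam, -ui / di)
    return [ui + lam * di for ui, di in zip(u, delta)]

# Full step would drive component 0 negative, so the whole step is scaled
u_new = constrained_step([1.0, 2.0], [-4.0, 1.0], [True, False])
```

Here the maximum feasible step length along the direction is found first, and the entire step is scaled by it, exactly as described above.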
The basic fixed-point iteration scheme implemented in KINSOL is given by:
At each stage in the iteration process, function \(G\) is applied to the current iterate to produce a new iterate, \(u_{n+1}\). A test for convergence is made before the iteration continues.
For Picard iteration, as implemented in KINSOL, we consider a special form of the nonlinear function \(F\), such that \(F(u) = Lu − N(u)\), where \(L\) is a constant nonsingular matrix and \(N\) is (in general) nonlinear.
Then the fixed-point function \(G\) is defined as \(G(u) = u − L^{-1}F(u)\). Within each iteration, the Picard step is computed then added to \(u_n\) to produce the new iterate. Next, the nonlinear residual function is evaluated at the new iterate, and convergence is checked. The Picard and fixed point methods can be significantly accelerated using Anderson’s method.
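For intuition, here is a minimal scalar sketch of Picard iteration with \(G(u) = u - L^{-1}F(u)\); the choices \(L = 2\) and \(N(u) = \cos u\) are illustrative, and no Anderson acceleration is applied:

```python
import math

# Picard iteration sketch for F(u) = L*u - N(u), with scalar L = 2
# and nonlinear part N(u) = cos(u); all choices here are illustrative.
L = 2.0
N = math.cos
F = lambda u: L * u - N(u)
G = lambda u: u - F(u) / L   # fixed-point function G(u) = u - L^{-1} F(u)

u = 0.0
for _ in range(100):
    u_next = G(u)            # one Picard step: u_{n+1} = G(u_n)
    if abs(u_next - u) < 1e-12:
        u = u_next
        break
    u = u_next
```

At convergence the nonlinear residual \(F(u) = 2u - \cos u\) vanishes, i.e. the fixed point of \(G\) solves \(F(u) = 0\).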
The user has to provide the implementation of the following std::functions:
Specifying residual() allows the user to use Newton strategies (i.e., \(F(u)=0\) will be solved), while specifying iteration_function() selects fixed point or Picard iteration (i.e., \(G(u)=u\) will be solved).
If the use of a Newton method is desired, then the user should also supply
If the solve_jacobian_system() function is not supplied, then KINSOL will use its internal dense solver for Newton methods, with an approximate Jacobian. This may be very expensive for large systems. Fixed point iteration does not require the solution of any linear system.
The following functions can also be supplied, to provide additional scaling factors for both the solution and the residual evaluation during convergence checks:
SUNDIALS::KINSOL< VectorType >::KINSOL (const AdditionalData &data = AdditionalData(), const MPI_Comm mpi_comm = MPI_COMM_WORLD)
unsigned int SUNDIALS::KINSOL< VectorType >::solve ( VectorType & initial_guess_and_solution )
std::function<void(VectorType &)> SUNDIALS::KINSOL< VectorType >::reinit_vector
std::function<int(const VectorType &src, VectorType &dst)> SUNDIALS::KINSOL< VectorType >::residual
A function object that users should supply and that is intended to compute the residual dst = F(src). This function is only used if SolutionStrategy::newton or SolutionStrategy::linesearch is specified.
This function should return:
std::function<int(const VectorType &src, VectorType &dst)> SUNDIALS::KINSOL< VectorType >::iteration_function
A function object that users should supply and that is intended to compute the iteration function G(u) for the fixed point and Picard iteration. This function is only used if the SolutionStrategy::fixed_point or SolutionStrategy::picard are specified.
This function should return:
std::function<int(const VectorType &current_u, const VectorType &current_f)> SUNDIALS::KINSOL< VectorType >::setup_jacobian
A function object that users may supply and that is intended to prepare the linear solver for subsequent calls to solve_jacobian_system().
The job of setup_jacobian() is to prepare the linear solver for subsequent calls to solve_jacobian_system(), in the solution of linear systems \(Ax = b\). The exact nature of this system depends on the SolutionStrategy that has been selected.
In the cases strategy = SolutionStrategy::newton or SolutionStrategy::linesearch, A is the Jacobian \(J = \partial F/\partial u\). If strategy = SolutionStrategy::picard, A is the approximate Jacobian matrix \(L\). If strategy = SolutionStrategy::fixed_point, then linear systems do not arise, and this function is never called.
The setup_jacobian() function may call a user-supplied function, or a function within the linear solver module, to compute Jacobian-related data that is required by the linear solver. It may also preprocess that data as needed for solve_jacobian_system(), which may involve calling a generic function (such as for LU factorization). This data may be intended either for direct use (in a direct linear solver) or for use in a preconditioner (in a preconditioned iterative linear solver).
The setup_jacobian() function is not called at every Newton iteration, but only as frequently as the solver determines that it is appropriate to perform the setup task. In this way, Jacobian-related data generated by setup_jacobian() is expected to be used over a number of Newton iterations.
current_u: Current value of u
current_f: Current value of F(u) or G(u)
This function should return:
std::function<int(const VectorType &ycur, const VectorType &fcur, const VectorType &rhs, VectorType & dst)> SUNDIALS::KINSOL< VectorType >::solve_jacobian_system
A function object that users may supply and that is intended to solve the Jacobian linear system. This function will be called by KINSOL (possibly several times) after setup_jacobian() has been called at least once. KINSOL tries to call setup_jacobian() as few times as possible. If convergence can be achieved without updating the Jacobian, then KINSOL does not call setup_jacobian() again. If, on the contrary, internal KINSOL convergence tests fail, then KINSOL calls setup_jacobian() again with updated vectors and coefficients so that successive calls to solve_jacobian_system() lead to better convergence in the Newton process.
If you do not specify a solve_jacobian_system() function, then only a fixed point iteration strategy can be used. Notice that this may not converge, or may converge very slowly.
A call to this function should store in dst the result of \(J^{-1}\) applied to src, i.e., J*dst = src. It is the user's responsibility to set up proper solvers and preconditioners inside this function.
Arguments to the function are
[in] ycur: the current \(y\) vector for the current KINSOL internal step
[in] fcur: the current value of the implicit right-hand side at ycur, \(f_I (t_n, ypred)\)
[in] rhs: the system right-hand side to solve for
[out] dst: the solution of \(A^{-1} \, rhs\)
This function should return:
std::function<VectorType &()> SUNDIALS::KINSOL< VectorType >::get_solution_scaling
std::function<VectorType &()> SUNDIALS::KINSOL< VectorType >::get_function_scaling
A function object that users may supply and that is intended to return a vector whose components are the weights used by KINSOL to compute the vector norm of the function evaluation away from the solution. The implementation of this function is optional, and it is used only if implemented.
|
Assume that $x$ is a variable, and write the natural exponential function as $e^{\displaystyle x}$, as is standard in mathematics. The indefinite integral of the natural exponential function with respect to $x$ is written in the following form in integral calculus.
$\displaystyle \int{e^{\displaystyle x} \,}dx$
Now, let us learn how to derive the proof for the integration rule of the natural exponential function.
Write the formula for the derivative of natural exponential function with respect to $x$ in mathematical form.
$\dfrac{d}{dx}{\, (e^{\displaystyle x})} \,=\, e^{\displaystyle x}$
Adding a constant to the natural exponential function does not change its derivative, because the derivative of a constant is zero.
$\implies$ $\dfrac{d}{dx}{\, (e^{\displaystyle x}+c)} \,=\, e^{\displaystyle x}$
According to the definition of integration, the collection of all primitives of the $e^{\displaystyle x}$ function is called the integral of $e^{\displaystyle x}$ with respect to $x$. It is expressed as follows.
$\displaystyle \int{e^{\displaystyle x} \,}dx$
The antiderivative or primitive of the $e^{\displaystyle x}$ function is the sum of the natural exponential function and the constant of integration ($c$).
$\dfrac{d}{dx}{(e^{\displaystyle x}+c)} = e^{\displaystyle x}$ $\,\Longleftrightarrow\,$ $\displaystyle \int{e^{\displaystyle x} \,}dx = e^{\displaystyle x}+c$
$\therefore \,\,\,\,\,\,$ $\displaystyle \int{e^{\displaystyle x} \,}dx \,=\, e^{\displaystyle x}+c$
Therefore, we have proved that the integral of the natural exponential function with respect to a variable is equal to the sum of the natural exponential function and the constant of integration.
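As a quick numerical sanity check of this rule (illustrative only), the antiderivative $e^{\displaystyle x}+c$ gives $\int_0^1 e^x\,dx = (e^1+c)-(e^0+c) = e-1$, which a simple midpoint-rule quadrature confirms:

```python
import math

# Midpoint rule for the integral of e^x over [0, 1];
# the exact value from the antiderivative is e - 1.
n = 100000
h = 1.0 / n
midpoint_sum = sum(math.exp((i + 0.5) * h) for i in range(n)) * h
exact = math.e - 1.0
```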
|
Why does Schwarz get credit for proving “Cauchy–Schwarz for integrals”? There is an easy proof of Cauchy–Schwarz that relies only on $\langle \cdot, \cdot \rangle$ being an inner product, whether defined in terms of integrals or not. And proving that $$(f,g) \mapsto \int_X f(x)\overline{g(x)}\,d\mu(x)$$ is an inner product is not hard either. Why bother looking at Schwarz’s proof, either now or back in the time when it was published?
An inner product was not defined until ~1905 by Hilbert. Orthogonality of the trigonometric functions in the Fourier expansion was experimentally discovered in the late 1700's, and Fourier built on this work with more general orthogonal function expansions. Cauchy only came up with his inequality for complex Euclidean space in the 1820's (I think that's right.) And no connection was made between orthogonality of functions with respect to an integral and the Euclidean dot product until the second half of the 1800's. Part of the reason for this disconnect may have been that there was no natural path from the sum to the integral because the Riemann integral was not defined until several decades later.
A note by Bunyakowsky appeared in a journal in 1859 that the discrete case could be generalized to integrals, but it was ignored because no applications were given. It wasn't until the next decade that Schwarz published a paper on minimal surfaces where the Schwarz inequality for the integral was rediscovered and used to measure something like a distance in order to get at a solution for PDEs. But, even then, it was not noticed that there was a general concept of a "norm" or "distance function." Cantor began developing set axioms for the foundation of Mathematics about that time, and by the turn of the 20th century, people started thinking of abstract "spaces" where a point could be an object such as a function. And that's what led to Hilbert's definition of a general inner product space, and to a great deal of modern Mathematics. The current version of the Cauchy-Schwarz inequality was proved in this context.
So, the Schwarz inequality for integrals came a few decades before the definition of an inner product. And Cauchy's inequality for the discrete case came about 80 years earlier. I believe it was Hilbert who tagged the general inner product inequality as Cauchy-Schwarz because of the work of these two Mathematicians.
|
The biorthogonality relation for discrete wavelets can be formulated as follows:
$$\sum_{n\in \mathbb Z} a_n \tilde a_{n+2m} = 2\cdot \delta_{m,0}$$
for two sequences of numbers $$\{a_{-N},\cdots,a_N\},\{\tilde a_{-N},\cdots,\tilde a_N\}$$
Usually we consider solutions for which $a_k,\tilde a_k \in \mathbb R$.
Now to the question: given that the equation system above is in essence a linear system relating $a$ and $\tilde a$ to each other, can we extend it to a larger or more general number field? I am, for example, aware that complex-valued discrete wavelets exist (although I do not know how these are usually constructed).
If so, how would we do this in practice?
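As a sanity check of the relation itself, here is a small script (illustrative; the helper name is made up) verifying it for the simplest real example, the Haar filter $a = \tilde a$ with $a_0 = a_1 = 1$. Since the sum is linear in each sequence, the same check runs unchanged with complex entries using Python's built-in complex type:

```python
def biorth(a, a_tilde, m):
    """Compute sum_n a_n * a~_{n+2m} for finitely supported
    sequences represented as dicts {index: value}."""
    return sum(v * a_tilde.get(n + 2 * m, 0.0) for n, v in a.items())

# Haar filter: a = a~ with a_0 = a_1 = 1 (a real, self-biorthogonal example)
haar = {0: 1.0, 1: 1.0}
vals = [biorth(haar, haar, m) for m in (-1, 0, 1)]  # expect 2*delta_{m,0}
```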
|
If $G$ is a group, the center of G is defined to be $Z(G)=\{ x\in G \mid x*a=a*x$ for all $a\in G \}$. Show that $Z(G)$ is a subgroup of $G$.
Solution:
By the way that $Z(G)$ is defined, all elements in $Z(G)$ must be in $G$, so $Z(G)$ is a subset of $G$.
Let $a,b \in Z(G)$.
Then for every $c\in G$, $a*c=c*a$ and $b*c=c*b$. Now, since $G$ is a group, $*$ is associative on $G$. Thus, $(b*a)*c=b*(a*c)=b*(c*a)=(b*c)*a$. Consequently, since $b*c=c*b$, $(b*a)*c=(c*b)*a=c*(b*a)$. Hence, $b*a\in Z(G)$ for all $a,b \in Z(G)$. Therefore, $Z(G)$ is closed under $*$. $\hspace{200pt} \clubsuit$
Let $a,b,c \in Z(G)$.
Then $a,b,c \in G$. Now, since $G$ is associative, $(a*b)*c=a*(b*c)$. Therefore, $*$ is associative on $Z(G)$. $\hspace{200pt} \clubsuit$
Since $G$ is a group, there exists $e\in G$ such that for all $a\in G$, $e*a=a*e=a$.
Therefore, by definition of $Z(G), e \in Z(G)$. $\hspace{200pt} \clubsuit$
Let $a\in Z(G)$.
Then $a \in G$ and for all $b\in G$, $a*b=b*a$. Now, since $G$ is a group, there exists $a'\in G$ such that $a*a'=a'*a=e$. Hence, $a'*a*b*a'=a'*b*a*a'$. Consequently, $e*b*a'=a'*b*e$. Thus, $b*a'=a'*b$. Therefore, $a' \in Z(G)$. $\hspace{200pt} \clubsuit$
Therefore, since $Z(G)$ is a subset of $G$ that contains the identity and is closed under $*$ and under taking inverses, $Z(G)\leq G$.
$\hspace{200pt} \spadesuit$
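As a concrete illustration (not part of the proof), one can compute the center of $S_3$ by brute force and check the subgroup properties directly; for $S_3$ the center is just the trivial subgroup:

```python
from itertools import permutations

# Elements of S_3 as tuples p, where p[i] is the image of i;
# composition is (p*q)(i) = p[q[i]].
G = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

# Z(G): elements commuting with everything
center = [x for x in G if all(compose(x, a) == compose(a, x) for a in G)]

e = (0, 1, 2)  # identity permutation
has_identity = e in center
closed = all(compose(a, b) in center for a in center for b in center)
has_inverses = all(any(compose(a, b) == e for b in center) for a in center)
```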
|
By using the polar form of the complex number prove that, $|z_1 z_2| = |z_1| |z_2|$ and $\left|\frac{z_1}{z_2}\right| = \frac{|z_1|}{|z_2|}$
Hint:
remember that $|e^{i\theta}|=1$ and use the polar decomposition: $$ z_1=|z_1|e^{i\theta_1} \qquad z_2=|z_2|e^{i\theta_2} $$
Using polar form, let $$z_1 = r_1(\cos{\theta_1} + i \sin{\theta_1})$$ $$z_2 = r_2(\cos{\theta_2} + i \sin{\theta_2})$$ Then, we have: $$|z_1 \cdot z_2| = |r_1r_2(\cos{\theta_1}\cos{\theta_2} - \sin{\theta_1}\sin{\theta_2} + i(\sin{\theta_1}\cos{\theta_2} + \cos{\theta_1}\sin{\theta_2}))| \\ = |r_1 r_2 (\cos(\theta_1 + \theta_2)+ i\sin(\theta_1 + \theta_2))| = r_1 r_2 $$ where I used the trignometric addition identities. Therefore, $|z_1 \cdot z_2| = |z_1| \cdot |z_2|$.
Similarly, $$ \left|\frac{z_1}{z_2}\right| = \left| \frac{r_1(\cos{\theta_1} + i \sin{\theta_1})}{r_2(\cos{\theta_2} + i \sin{\theta_2})} \right| = \left| \frac{r_1(\cos{\theta_1} + i \sin{\theta_1})(\cos{\theta_2} - i \sin{\theta_2})}{r_2(\cos^2{\theta_2} + \sin^2{\theta_2})} \right| = \left| \frac{r_1}{r_2}(\cos{\theta_1}\cos{\theta_2} + \sin{\theta_1} \sin{\theta_2} + i(\sin{\theta_1}\cos{\theta_2} - \cos{\theta_1} \sin{\theta_2})) \right| = \left| \frac{r_1}{r_2}\left(\cos(\theta_1 - \theta_2) + i\sin(\theta_1 - \theta_2)\right) \right| = \frac{r_1}{r_2} = \frac{|z_1|}{|z_2|}$$
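A quick numeric spot-check of both identities (the moduli and angles below are illustrative values, constructed in polar form):

```python
import cmath

# Build z1 and z2 from polar form: z = r * (cos(theta) + i sin(theta))
z1 = cmath.rect(3.0, 0.7)    # r1 = 3, theta1 = 0.7
z2 = cmath.rect(2.0, -1.2)   # r2 = 2, theta2 = -1.2

prod_mod = abs(z1 * z2)      # should equal |z1| * |z2| = 6
quot_mod = abs(z1 / z2)      # should equal |z1| / |z2| = 1.5
```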
The absolute value of a number is its distance from $0$. Thus, all absolute values are either positive or 0. That is, if we have a number $x$, then the absolute value of x which can be written as $|x|$ is equal to:
$x$ if $x$ is positive,
$-x$ if $x$ is negative,
$0$ if $x$ is $0$.
Also, for any real number, $x^2$ is positive or $0$ (if $x=0$). Therefore, $\sqrt{x^2}$ is:
$x$ if $x$ is positive,
$-x$ if $x$ is negative,
$0$ if $x$ is $0$.
If the $-x$ part is a bit confusing, consider, for example:
$$\sqrt{(-8)^2}=-(-8)=8$$
From the discussion above we can conclude that $|x|=\sqrt{x^2}$.
Using this fact, we are going to prove two claims in this post.
If $a$ and $b$ are two real numbers:
(1) the absolute value of their product is equal to the product of their absolute values or
$$|ab|=|a||b|$$
and
(2) the absolute value of their quotient is equal to the quotient of their absolute values or
$$\left|\frac{a}{b}\right|=\frac{|a|}{|b|}$$
Proof of (1)
From above, we know that $|x|=\sqrt{x^2}$, so
$$|ab|=\sqrt{(ab)^2}=\sqrt{a^2b^2}=\sqrt{a^2}\sqrt{b^2}$$
But from the definition above:
$$\sqrt{a^2}\sqrt{b^2}=|a||b|$$
Therefore, $|ab|=|a||b|$.
Proof of (2)
$$\left|\frac{a}{b}\right|=\sqrt{\left(\frac{a}{b}\right)^2}=\sqrt{\frac{a^2}{b^2}}=\frac{\sqrt{a^2}}{\sqrt{b^2}}$$
Since $\sqrt{x^2}=|x|$:
$$\frac{\sqrt{a^2}}{\sqrt{b^2}}=\frac{|a|}{|b|}$$
Therefore,
$$\left|\frac{a}{b}\right|=\frac{|a|}{|b|}$$
|
Wed, Dec 5, 2018
Disclaimer: the author has a background in Computer Science; the physics, chemistry, and biology anecdotes in this article were gathered while researching the topic on an on-demand basis, without formal training in the respective subjects. This article also aims to describe the pipeline so as to help readers form an initial intuition about the problem and approach. Details of the machine learning part are not within the intended scope of this article.
ELI5 Protein Structure Prediction problem
Proteins are basic building blocks of life. They themselves are messy ball-like objects, much like carelessly rolled-up shoelaces. A single shoelace is a string of amino acids. Amino acids are even smaller units that are very important to life. Interestingly, groups of amino acids form different shapes under different conditions. They are like living Lego blocks: some combinations of these blocks tend to stick together, like magnets, and almost all of them behave this way depending on whether they can relax themselves in water. We know that proteins can do a lot of different work, and we can tell what type of work a protein does by looking at its shape. The shape of a protein is like a job outfit to us humans: we can tell a person is a police officer because that person is wearing a police uniform. However, there are many, many proteins, so biology and chemistry experts documented them using a technique called "sequencing": writing down each amino acid one by one. Imagine straightening a rolled shoelace magically, without having to worry about knots. This technique is so fast that it lets us document a massive number of proteins, but at the same time we lose the shape of the protein. As a result, only a small fraction of known proteins have their shapes mapped out. Finding the rolled-up shape based only on the straightened shoelace is what we call the "protein structure prediction problem".
How hard can it be? As it turns out, quite complicated :‘(
Above is an illustration of broken down steps of how we predict a folded protein: as you can see, getting the final structure or even the tertiary structure right involves a lot of hidden knowledge from physics, chemistry and biology.
Hold your breath: Terminology and Structural Physics of the Protein
Firstly, we will use the abbreviation AA for amino acids. The "magnet" that binds AAs is called a peptide bond. We sometimes also say conformation (of an AA sub-sequence or of the whole chain) instead of shape.
The bonds bind small sequences of AAs; different orderings and types of AA form different local shapes. We call these local shapes Secondary Structures (SS); they mostly fall into two groups: the $\alpha$-helix and the $\beta$-sheet. SS generally have a certain range of lengths for their respective types: an $\alpha$-helix is usually 3 to 5 AA long, while a $\beta$-sheet can stretch to about 7 AA on average.
The conformation of "strings" of SS (the Tertiary Structure) is mainly determined by the compounded bond forces of all the AAs in the chain. On an even smaller scale, each AA's carbon atom ($C_{\alpha}$) serves as a pivot point on which the chain rotates: the positive and negative polar charges of the AAs interact (attract and repel) and define a "stable" conformation, depending on a series of rotation dynamics. To make matters even more "interesting", a known conformation of a given sub-sequence may not reappear when the same sub-sequence occurs in another part of the chain, because that part is not interacting with water.
To recap, the conformation of a protein is important because the structure reveals the protein's function. Many people equate proteins to machines, and this analogy is quite fitting: on the molecular level, a factory of proteins breaks up and assembles genes as if they were workers at different stations on a manufacturing line.
A rigorous physical rule-set describes the mechanics of proteins; thus, in principle, life itself could be understood, should we gain full knowledge of the dynamics and their functional mappings.
Methodology: a tale of two cities
Evidently, due to the complexity of the problem, traditional approaches to structure prediction involve comparison against known structures. The assumption is that evolution has done enough experiments for us to say: if a stable and working structure happened before ("happened" as in proven, not extinct), it should happen again elsewhere. As a matter of fact, the comparison method is effective. The main problem, however, is that we have nowhere near enough known templates to cover the rest of the protein sequences.
For sequences that have nearly zero matching templates, we have to seek solutions that assume almost nothing and derive from first principles. This is what we (in Computer Science and Computational Biology) call ab initio or de novo methods; both are Latin terms meaning, roughly, "from scratch". Because we at least know the physical bond properties and constraints, we should be able to compute based on energy (how likely a conformation is, or how little energy it needs, to take hold).
From the early hand-crafted rule-based simulations to the more recent machine learning approaches, ab initio methods continue to gain traction. For instance, SS prediction using bi-directional recurrent neural networks can achieve an accuracy well beyond 80%, and progress has not slowed down, with each iteration approaching the theoretical limit of about 90% accuracy.
So what does a pipeline for the ab initio Protein Structure Prediction task look like?
Pipeline for ab initio or de novo Protein Structure Prediction
Machine learning is data-driven, and more meaningful data is always better than less. Instead of taking sequences alone as input for training in a machine learning approach, we enrich them using a series of pre-processing steps. Although we have no templates to look up, we can still gain insight from sequence similarities. In Computational Biology we can build richer protein profiles by aligning sequences based on similarity. Protein sequences are coded sequences of AA: there are 20 known AAs, so we codify them with a 20-letter alphabet. Similarity includes, but is not limited to, alphabetical similarity of the sequence, sub-sequence AA substitution tables, and similarity produced by probabilistic models. The main alignment and search methods are commonly PSI-BLAST and HHblits. The resulting richer data-set is then further encoded and primed for training.
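To make the encoding step concrete, here is a hedged sketch of one-hot encoding over the standard 20 one-letter AA codes (the function names and the example sequence are illustrative, not the pipeline's actual code):

```python
# Illustrative one-hot encoding of an amino-acid sequence.
# The alphabet below is the standard set of 20 one-letter AA codes.
AA = "ACDEFGHIKLMNPQRSTVWY"
INDEX = {c: i for i, c in enumerate(AA)}

def one_hot(seq):
    """Encode a sequence as a list of 20-dimensional indicator vectors."""
    return [[1 if INDEX[c] == j else 0 for j in range(20)] for c in seq]

encoded = one_hot("MKT")  # a made-up 3-residue example
```

In a real pipeline this per-residue encoding would be concatenated with profile features from the alignment step.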
Mirroring the illustration in the first section, we design a multi-stage machine learning model: predicting SS and other properties first, then using the predicted SS features and the original profile to predict distance/contact maps of the tertiary features, and eventually combining all previous information to predict the final folded structure.
For those readers who reached this point and found this text helpful, thank you for your attention. I hope to talk about the details of my approach in a later article soon.
Sat, Feb 28, 2015
This article is inspired by slides in Algorithms, Part I by Kevin Wayne and Robert Sedgewick. Images 1 and 2 are cropped from pages 51 and 52 of the original slides (15UnionFind.PDF).
My knowledge is limited and my writing is most certainly flawed. Please don't hesitate to leave me feedback/suggestions/corrections/rants.
Percolation and the threshold
In computer science, and algorithms in particular, there is a topic that involves solving dynamic connectivity. Figuring out whether a path/route is connected to another path/route is widely applicable when engineering solutions to real-world problems.
For instance, in many physical systems we have a model of an N-by-N square grid, where the blocks inside the grid can be either open or blocked. We call the system grid percolated if and only if there is a path connecting the top open sites with the bottom open sites. Just like solving a labyrinth problem, obtaining the percolating status on a massive grid is very tiresome and ineffective if we enumerate all possibilities: checking whether two components are connected by brute force costs $O(n^2)$ time. Therefore we need an efficient algorithm to solve this percolation problem. Determining whether two subsets of the grid are connected can be modeled as connected components as in graph theory, which involves the equivalence relation from set theory:
On a set X, an operation ~ is an equivalence relation iff
a ~ a (reflexivity)
if a ~ b then b ~ a (symmetry)
if a ~ b and b ~ c then a ~ c (transitivity)
Intuitively, subgraphs (components) are "equivalent" iff they have the same root. Because we don't need to keep track of the shapes of the original subgraphs, we can model the solution with a tree-like data structure that is kept as flat as possible to guarantee look-up speed, plus a Connect (union) action to merge two subgraphs. In other words, we can choose the Union-Find algorithm to solve the percolation problem.
There is, however, a subsequent question regarding percolation that is more interesting: if every grid block has a probability p of being open (and (1-p) of being blocked), what is the likelihood that the whole system percolates? Imagine a social network with an N-by-N user grid, where each user has probability p of being able to communicate with his/her neighbors: what are the odds that the first-row users are linked with the bottom-row users?
We observe that above a certain p, the system most likely percolates. This critical threshold p is called the Percolation Threshold. It appears to be a constant, tied to the lattice structure.
There is no known direct solution for calculating the percolation threshold. But we can certainly simulate the problem, and if we run enough experiments, we gain some confidence that an approximation can be determined from the experimental results.
This article is about a demonstration of using Go and Monte Carlo Simulation to obtain an approximation of the percolation threshold of square lattice / grid.
Approach
Because simulations like Monte Carlo require iterating over a huge number of randomized experiments in order to deliver confident results, the numbers we are talking about often start in the millions. Running such simulations might be challenging if we have (1) inefficient experiments (software constraints) or (2) a slow computer (hardware constraints).
Point 1 might be overcome by using more efficient algorithms. Faster machines, however, may not be available; especially since 2000, the trend in new computing engines has been to go multi-core instead of higher frequency. This is why parallelism is often the way to go for such problems. I chose Go because, on the one hand, it is a fantastic systems language that is fast, expressive, and familiar to C programmers; on the other hand, it has first-class support for concurrent programming. Although the CSP paper by Hoare has been around for decades, very few mainstream languages adopt the concept.
Concurrent Analysis
Each experiment in the simulation is independent of the others; this is a perfect entry point for applying parallelism. Mainly we need to satisfy the condition
$$\begin{equation}P_{linear} (x) = P_{parallel} (x)\end{equation}$$
With
$$\begin{equation}P_{linear}(x) = \dfrac{\sum^n{p(x)}}{n}\end{equation}$$
where $n$ is the number of all experiments, and $p(x)$ is the probability that experiment $x$ yields a percolation.
Denoting by $w$ the workload every worker has to perform and by $c$ the count of all workers, the parallel version needs to deliver
$$\begin{equation}P_{parallel}(x) = \dfrac{\sum^c \sum^w{p(x)}}{n} \text{, with } w\cdot c = n\end{equation}$$
The experiment
Ideally, if we can have a grid constructed for us in such a manner that it is instantly set up with randomized open sites, all we need to do is call the Find function on the grid (the $p(x)$ in the previous section).
If I randomly fill up a completely blocked grid with open sites, it would take me at most SIZE-of-the-grid steps to construct a percolation. Further, if I repeat the random construction again and again and record the steps taken to produce a percolation in each and every experiment, then a histogram of these recorded steps would look like a normal distribution centered at the expected value. This expected value is exactly the wanted $p$, because the plot represents the underlying pdf (probability density function).
To recap and modify our formula.
Let $s(x)$ be the number of steps used in experiment $x$ to produce a percolation; $n$ be the number of experiments; and $g$ be the number of blocks in the grid (its size).
$$\begin{equation}P(x)=\dfrac{\sum^n{s(x)}}{n\cdot g}\end{equation}$$
Because $\dfrac{s(x)}{g} = p(x)$, we can parallelize this approach at the top level too.
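The estimator $P(x)=\dfrac{\sum^n s(x)}{n\cdot g}$ can also be sketched language-neutrally in a few dozen lines; the sketch below (Python, with illustrative names, not the article's Go code) uses the same ingredients: a weighted, path-compressed union-find, two virtual top/bottom sites, and random site opening:

```python
import random

class UnionFind:
    """Weighted quick-union with path compression."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def percolation_steps(n, rng):
    """Open sites of an n x n grid in random order; return the number of
    open sites when the top and bottom virtual sites first connect."""
    top, bottom = n * n, n * n + 1
    uf = UnionFind(n * n + 2)
    is_open = [False] * (n * n)
    steps = 0
    for site in rng.sample(range(n * n), n * n):  # random permutation
        is_open[site] = True
        steps += 1
        r, c = divmod(site, n)
        if r == 0:
            uf.union(site, top)
        if r == n - 1:
            uf.union(site, bottom)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and is_open[nr * n + nc]:
                uf.union(site, nr * n + nc)
        if uf.find(top) == uf.find(bottom):
            return steps
    return steps

rng = random.Random(42)  # fixed seed for reproducibility
n, trials = 20, 200
p_est = sum(percolation_steps(n, rng) for _ in range(trials)) / (trials * n * n)
```

On a 20-by-20 grid the estimate lands near the known square-lattice site-percolation threshold of roughly 0.59, with some finite-size bias.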
High Level Algorithm
To randomly fill up open sites on an initially blocked square grid is equivalent to obtaining a randomized permutation of the numbers $0 \ldots N-1$, where $N$ is the size of the grid. The rand package of the Go library can deliver that right out of the box.
On each site opening, I need to connect the opened site to its neighbors, iff the neighbor is also open.
Algorithm in Pseudo-code
Step := 0
LOOP number in Permutation
OBTAIN Neighbors of number
LOOP neighbor in Neighbors
IF neighbor is open
Connect / Union number with neighbor
Step++
IF Percolates / Find
RETURN Step
END IF
END IF
END LOOP
END LOOP
Construction and Union Find
Now we need to construct the grid using cheap and smart data structures that support our efficient union-find algorithm. Let the size of the grid be $Size = N^2$, where N is the length of its side. A weighted, path-compressed Union-Find can achieve both union and find in $O(N+M\lg^{*}N)$ time. This Union-Find needs two arrays of size $N^2$: one (noted P) for representing the parent relation / subgraph ownership of each element, the other (noted S) for the size of that component (used for weight comparison).
Additionally, in order to maintain the open/blocked state of the blocks, I need a boolean array of size $N^2$. Finally, to ease the modeling of openings on the top and bottom edges, we can add 2 virtual sites $N^2 + 1$ and $N^2 + 2$ to the original P array; the grid percolates iff
$$\begin{equation}N^2+1 \text{ and } N^2+2\end{equation}$$
are connected.
Code
The code consists of one package called connectivity, which contains 2 components:
unionfind and
percolation. Both components are exported, so you can use them in your own projects. To run the simulation, please see the next section.
Link
Complete source code on Github
Getting the code
go get github.com/nilbot/algo.go/connectivity
Some Snippets
A Simulator interface:
type Simulator interface {
Simulate() int64
mark(n int)
Clear()
}
Data structure used in simulation
type PercolationSimulator struct {
Size int
Marked []bool
l int
connect *WeightedCompressed
}
and its construction
func NewPercolationSimulator(N int) *PercolationSimulator {
result := &PercolationSimulator{
Size: N * N,
Marked: make([]bool, N*N),
l: N,
connect: NewWeightedCompression(N*N + 2),
}
for i := 0; i != N; i++ {
result.connect.Union(i, result.Size)
}
for i := N * (N - 1); i != N*N; i++ {
result.connect.Union(i, result.Size+1)
}
return result
}
The 2 for loops are the trick that connects the 2 extra virtual sites to the top and bottom edges, respectively.
To obtain a randomized permutation, call
(r *Rand) Perm(n int) from the math/rand package.
// return a slice of pseudo-random permutation of [0,n)
func getPermutation(n int) []int {
seed := time.Now().UnixNano() % 271828182833
r := rand.New(rand.NewSource(seed))
return r.Perm(n)
}
And marking an open site, as outlined in the pseudocode of the previous section:
// mark (paint) the block as white
func (p *PercolationSimulator) mark(n int) {
if p.Marked[n] {
return
}
p.Marked[n] = true
neighbors := getDirectNeighbours(n, p.l)
for _, adj := range neighbors {
if p.Marked[adj] {
p.connect.Union(n, adj)
}
}
}
For an overview you can also visit the godoc.
Result and Benchmarks
Run Test and Benchmark
cd $GOPATH/src/github.com/nilbot/algo.go/connectivity
go test -bench Simulate
Result
Benchmark
BenchmarkSimulate 100000 21375 ns/op
was achieved on a 2013 i7 8G MacBook Air. On the same machine, running the simulation with 1 million iterations yields
ran 1000000 simulations, got result 0.59377084
|
It sounds like all you really care about is a realistic simulation. For you, this means that you are interested in specifying a force that is "reasonable" for your robot. I say "reasonable" because it's up to you to define, but hopefully I can help you set some guidelines.
Torque, the angular force you apply to a joint, has two general uses:
static poses and dynamic motion. A lot of people try to calculate the torque they need by evaluating only the static torque, which is a poor method to use because, once the worst-case position is reached, there is no longer any overhead to accelerate (change the speed of) the joint. That is, once the joint stops at the worst-case position, if you only specified the worst-case static torque, it's not possible to resume motion.
Static torque is easily calculated: $\tau_s = FL\sin{\theta}$. Torque is the force applied, times the distance at which it's applied, times the sine of the angle the arm makes with vertical.
Dynamic torque, which I'd say is arguably the more important of the two, is a little harder to calculate because it requires the moment of inertia of the load.
Dynamic torque is $\tau_d = I\alpha$. Moment of inertia times angular acceleration. It's pretty straightforward,
once you have the moment of inertia, but again that can be tricky to calculate.
The easiest way, in my opinion, is to calculate it through the center of mass of the object, then use the parallel axis theorem to shift the axis to that of the joint.
Now, this becomes tricky when you are dealing with multibody simulation because the method I give above only gives the moment of inertia for one link. If you have multiple linkages, then you have to take the joint angle into account when calculating the moments of inertia for all subsequent linkages.
Ultimately, for your problem, you need to choose an acceptable angular acceleration $\alpha$, programmatically calculate the moments of inertia, and then use $\tau = I\alpha$ to determine the maximum forces you want to apply.
|
Vinogradov method
A new method for estimating trigonometric sums (see Trigonometric sums, method of). By Vinogradov's method one may obtain very accurate estimates of a wide class of trigonometric sums, in which the summation variable runs through a sequence of integers, prime numbers, etc. In this way many classical problems in analytic number theory can be solved (the distribution of the fractional parts of a wide class of functions, the distribution of prime numbers in series of natural numbers, additive problems such as the Waring and Goldbach problems, etc.).
One distinguishes two parts in Vinogradov's method: The method of estimating Weyl sums (cf. Weyl sum) and the method of estimating trigonometric sums with prime numbers. Both these parts utilize Vinogradov's basic idea — to wit, the idea of smoothing double trigonometric sums, which may be described as follows. Given the sum\[ W=\sum_u\sum_v\psi_1(u)\psi_2(v) e^{2\pi i\alpha uv},\]where the summation variables $u$ and $v$ run through the values of (not necessarily successive) integers in respective sets $U$ and $V$, $A<u<2A$. Let $\psi_1(u)$ and $\psi_2(v)$ be arbitrary complex-valued functions. Then
\[ \lvert W\rvert^2\leq B\sum_{A< u\leq 2A}\left\lvert \sum_v\psi_2(v)e^{2\pi i\alpha uv}\right\rvert^2,\]
where $u$ runs through the successive integers in the interval $(A,2A]$ (smoothing), and
\[ B=\sum_u\lvert \psi_1(u)\rvert^2.\]
Vinogradov's method for estimating Weyl sums.
The sums to be estimated are
\[ S=\sum_{1\leq x\leq P}e^{2\pi if(x)},\]
where $f(x)=\alpha_{n+1}x^{n+1}+\cdots+\alpha_1x$; here $\alpha_{n+1},\ldots,\alpha_1$ are real numbers. For $Y=[P^{1-n^2/4}]$ one finds
\begin{align*} S&=Y^{-1}\sum_{1\leq x\leq P}\sum_{1\leq y\leq Y}e^{2\pi if(x+y)}+2\theta Y\\ &=Y^{-1}\sum_{1\leq x\leq P}\sum_{1\leq y\leq Y}e^{2\pi i\mathfrak{A}}+2\theta Y=Y^{-1}W+2\theta Y, \end{align*}
where $\mathfrak{A}=\alpha_{n+1}x^{n+1}+A_n(y)x^n+\cdots+A_1(y)x+A_0(y)$, the letter $W$ denotes the double sum over $x$ and $y$, and $\lvert \theta\rvert\leq 1$. Moreover, letting $\mathfrak{B}$ denote the expression
\[ \alpha_{n+1}x^{n+1}+(A_n(y)+B_n)x^n+\cdots+(A_1(y)+B_1)x\]
one has
\[\lvert W\rvert\leq \sum_{1\leq y\leq Y}\left\lvert \sum_{1\leq x\leq P}e^{2\pi i\mathfrak{B}}\right\rvert+Y_nP^{1-n^2/4},\]
for arbitrary $B_n,\ldots,B_1$ from the domain
\[\lvert B_n\rvert\leq L_n=P^{-n-n^2/4},\ldots,\lvert B_1\rvert\leq L_1=P^{-1-n^2/4}.\]
For any integer $k\geq 1$:
\begin{align*} \lvert W\rvert^{2k} &\leq 2^kY^{2k-1}\sum_{1\leq y\leq Y}\int_{\lvert B_n\rvert\leq L_n}\cdots \int_{\lvert B_1\rvert\leq L_1}\left\lvert \sum_{1\leq x\leq P} e^{2\pi i\mathfrak{B}}\right\rvert\mathrm{d}B_n\cdots\mathrm{d}B_1+2^k(Y_nP^{1-n^2/4})^{2k}\\ &\leq 2^kY^{2k-1}G(Y)\int_0^1\cdots \int_0^1\left\lvert \sum_{1\leq x\leq P}e^{2\pi i\mathfrak{C}}\right\rvert^{2k}\mathrm{d}\gamma_n\cdots\mathrm{d}\gamma_1+2^k(Y_nP^{1-n^2/4})^{2k}, \end{align*}
where $\mathfrak{C}=\alpha_{n+1}x^{n+1}+\gamma_nx^n+\cdots+\gamma_1x$ and $G(Y)$ is the maximum number of cases of coincidence of points with coordinates \[ \{A_n(y)+B_n\},\{A_{n-1}(y)+B_{n-1}\},\ldots,\{A_1(y)+B_1\}.\] Here the braces denote the fractional part of the enclosed number, while $y$ varies between 1 and $Y$, and \[\lvert B_n\rvert\leq L_n,\ldots,\lvert B_1\rvert\leq L_1.\] If the coefficients $\alpha_{n+1},\ldots,\alpha_2$ of the polynomial $f(x)$ have certain arithmetical properties, it is possible to obtain the estimate $G(Y)\leq Y^{0.9}$. In addition, the last integral does not exceed the number of solutions of the system of equations: \begin{align*} x_1^n+\cdots+x_k^n &= y_1^n+\cdots+y_k^n,\\ x_1^{n-1}+\cdots+x_k^{n-1} &= y_1^{n-1}+\cdots+y_k^{n-1},\\ \vdots\\ x_1+\cdots+x_k &= y_1+\cdots+y_k, \end{align*} \[ 1\leq x_1,\ldots,y_k\leq P.\]
Vinogradov's method for estimating trigonometric sums with prime numbers.
The sums to be estimated are
\[ S' = \sum_{p\leq P}e^{2\pi i f(p) },\] where $f(p)=\alpha_np^n+\cdots+\alpha_1p$, and $\alpha_n,\ldots,\alpha_1$ are real numbers. Let $D=\prod_{p\leq H}p$, where $H\leq P^{0.25}$. Using the well-known property of the Möbius function, $S'$ is reduced to a small number of sums (this number is not larger than $\ln P/\ln H$) of the form \[W_S=\sum_{d_1\mid D}\cdots \sum_{d_s\mid D}\sum_{m_1>0}\cdots\sum_{m_s>0}\mu(d_1)\cdots\mu(d_s)e^{2\pi i f(m_1\cdots m_sd_1\cdots d_s)},\] where $m_1\cdots m_sd_1\cdots d_s\leq P$. In the multiple sum $W_S$ the variables $m_1,\ldots,m_s$ run through the entire summation intervals. The sums $W_S$ in which the summation interval over at least one variable $m$ is long are estimated by Vinogradov's method for estimating Weyl sums. Otherwise the summation interval over one of the summation variables $d$, $d\mid D$, will be long. In such a case one uses the following lemma of Vinogradov which, together with the idea of smoothing double sums, is fundamental to Vinogradov's method for estimating trigonometric sums with prime numbers. Lemma. Let $0\leq \sigma\leq 1/3$ and let $D$ be the product of all primes not larger than $x^{\sigma}$; all divisors $d$ of $D$ not larger than $x$ may then be distributed over fewer than $x^{\epsilon}$ sets with the following properties: 1) the numbers $d$ belonging to one of these sets have the same number $\beta$ of prime factors and therefore the same value of $\mu(d)=(-1)^\beta$; 2) one of these sets — the so-called simplest set — consists of the single number $d=1$. For each of the remaining sets there is a number $\phi$ such that all numbers of this set satisfy the relation
\[\phi< d\leq \phi^{1+\epsilon_1},\hspace{1cm} \epsilon_1=\epsilon_1(\epsilon);\]
3) for each set of numbers $d$ other than the simplest set there exist, for any $U$, $0\leq U\leq \phi$, two sets of numbers $d'$ and $d''$, with corresponding numbers $\phi'$ and $\phi''$ satisfying the relations
\[U<\phi'\leq Ux^{\sigma},\hspace{1cm}\phi'\phi''=\phi,\]
and such that, for a certain natural number $B$, one obtains each number in the chosen set $B$ times if, out of all the products $d'd''$ one selects only those which satisfy the relation $(d',d'')=1$.
The application of point 3) of this lemma, with a suitable value of $U$, yields
\[ W_S=\sum_u\sum_v\psi_1(u)\psi_2(v)e^{2\pi i f(uv) },\]
where the variables $u$ and $v$ run through long summation intervals. It is possible to deduce Vinogradov's estimate for trigonometric sums with prime numbers from this lemma (cf. Vinogradov estimates).
If $F(x)$ can be properly approximated, in a certain sense, by a polynomial, Vinogradov's method can be used to estimate sums of the type
\[S=\sum_{1\leq x\leq P}e^{2\pi i F(x)},\hspace{1cm} S'=\sum_{p\leq P}e^{2\pi i F(p)}\]
(see [2], ). Sums of the type
\[ \sum_{p\leq P}\chi(p+a),\hspace{1cm} \sum_{1\leq n\leq N}\mu(n)\chi(n+a)\]
and other types can also be estimated by the method. It is possible in this way to solve problems on the distribution of power residues, primitive roots, etc., in sequences of the type $p+a$, where $a>0$ is a given integer, while $p$ runs through the successive prime numbers [3], [5]. For the application of Vinogradov's method in analytic number theory see [1], [2], , [5], [6].
References
[1] I.M. Vinogradov, "The method of trigonometric sums in the theory of numbers", Interscience (1954) (Translated from Russian)
[2] I.M. Vinogradov, "Selected works", Springer (1985) (Translated from Russian)
[3] I.M. Vinogradov, "Estimates of a sum, distribution of primes in an arithmetic series", Izv. Akad. Nauk SSSR Ser. Mat., 30 (1966) pp. 481–496 (In Russian)
[4a] A.A. Karatsuba, "Estimates for trigonometric sums by Vinogradov's method, and applications", Proc. Steklov Inst. Math., 112 (1971) pp. 251–265; Trudy Mat. Inst. Steklov., 112 (1971) pp. 241–255
[4b] A.A. Karatsuba, "On some problems of prime number theory connected with I.M. Vinogradov's method", Proc. Steklov Inst. Math., 132 (1973) pp. 293–298; Trudy Mat. Inst. Steklov., 132 (1973) pp. 257–261
[5] L.-K. Hua, "Abschätzungen von Exponentialsummen und ihre Anwendung in der Zahlentheorie", Enzyklopaedie der Mathematischen Wissenschaften mit Einschluss ihrer Anwendungen, 1:2 (1959) (Heft 13, Teil 1)
[6] K. Chandrasekharan, "Arithmetical functions", Springer (1970)
[a1] R.C. Vaughan, "The Hardy–Littlewood method", Cambridge Univ. Press (1981)
How to Cite This Entry:
Vinogradov method. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Vinogradov_method&oldid=34392
|
Tolaso J Kos:
$$\mathcal{S}= \sum_{n=1}^{\infty} \left(\alpha-\frac{\lfloor n\alpha \rfloor}{n}\right)$$
diverges.
We focus on the $\alpha$'s lying in the interval $(0, 1)$; we may, because each term of the series is $1$-periodic in $\alpha$. Let $\mathbb{Z} \ni k >0$ and let $n$ be the maximal integer for which it holds that
\[k-1 <n\alpha < k\]
Since it holds that $\left \{ n \alpha \right \} \geq 1-\alpha$ as well as $n \leq \frac{k}{\alpha}$, we deduce that the series has at least one term of size at least $\displaystyle \frac{\alpha\left ( 1-\alpha \right )}{k}$. Since for every positive integer $k$ we get one such term, the series dominates a constant multiple of the harmonic series, and we conclude that it diverges.
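Spelled out, the key estimate in the argument is the following (a restatement in the post's notation):

```latex
% For each integer k \ge 1, with n maximal such that n\alpha < k, we have
% \{n\alpha\} \ge 1-\alpha and n \le k/\alpha, hence
\alpha - \frac{\lfloor n\alpha \rfloor}{n}
  \;=\; \frac{\{ n\alpha \}}{n}
  \;\ge\; \frac{1-\alpha}{k/\alpha}
  \;=\; \frac{\alpha(1-\alpha)}{k},
% so \mathcal{S} dominates \sum_{k \ge 1} \alpha(1-\alpha)/k, a constant
% multiple of the divergent harmonic series.
```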
|
I'm studying Lie algebras, and I'm struggling to prove the following result:
Let $L$ be a solvable subalgebra of $\mathfrak{gl}(V)$, $\dim V = n < \infty$. Then $L$ stabilizes some flag in $V$ (in other words, the matrices of $L$ relative to a suitable basis of $V$ are upper triangular).
The instruction to prove this is to use a preceding theorem, namely:
Let $L$ be a solvable subalgebra of $\mathfrak{gl}(V)$, $V$ finite dimensional. If $V \neq 0$, then $V$ contains a common eigenvector for all the endomorphisms in $L$.
My attempt was:
If $dimV = 1$, then the result is true.
Now suppose the result is proven for $\dim V < n$. If $\dim V = n$, by the preceding theorem, since $L$ is solvable, there is $v \in V \setminus \{0\}$ such that $v$ is an eigenvector for all $x \in L$. If $W = \operatorname{Span}(v)$, then $W$ is one-dimensional, and $\frac{V}{W}$ has dimension $n-1$. I've done this so I could use the induction hypothesis, but I feel it doesn't work, since I don't know for sure if $L$ is a subalgebra of $\mathfrak{gl}(\frac{V}{W})$.
I do think I am on the right track, because of this question. Any help?
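For what it's worth, here is a sketch of the step the question is stuck on (notation ours): $L$ itself need not be a subalgebra of $\mathfrak{gl}(V/W)$, but it maps into it through the quotient.

```latex
% W = \mathrm{Span}(v) is invariant under every x \in L, since v is a
% common eigenvector. Hence each x \in L induces an endomorphism of V/W:
\bar{x}(u + W) := x(u) + W .
% The map \varphi : L \to \mathfrak{gl}(V/W),\ x \mapsto \bar{x}, is a
% homomorphism of Lie algebras, so its image \varphi(L) is solvable (a
% homomorphic image of a solvable algebra). By the induction hypothesis,
% \varphi(L) stabilizes a flag
0 = \bar{V}_0 \subset \bar{V}_1 \subset \cdots \subset \bar{V}_{n-1} = V/W ,
% and taking preimages under the quotient map V \to V/W, together with
% 0 \subset W, yields a flag of V stabilized by L.
```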
|
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover consisting of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover.
By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact
I am currently working with the Lebesgue outer measure, though I did not know that we cannot define any measure where subsets of the rationals have nonzero measure
The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure
that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure
What I hope from such more direct computation is to get deeper rigorous and intuitve insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set
Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set.
Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
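A standard way around the $\infty - \infty$ issue raised here is to avoid $g-f$ entirely and decompose over the rationals:

```latex
\{x \mid f(x) < g(x)\}
  \;=\; \bigcup_{q \in \mathbb{Q}} \Bigl( \{x \mid f(x) < q\} \cap \{x \mid q < g(x)\} \Bigr) ,
% a countable union of intersections of measurable sets (preimages of
% rays under f and g), hence measurable; this argument works verbatim
% for functions with values in [-\infty, \infty].
```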
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$ which will fullfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure
We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$.
Now, I am still not understanding why by doing what we have done we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$?
$\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check it is convergent, and then compute its value
The above workings is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure
What I hope from such more direct computation is to get deeper rigorous and intuitve insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set
Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$
(cont.) We first observed that the above countable sum is an alternating series. Therefore, we can use some machinery in checking the convergence of an alternating series
Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by b and a respectively
Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals, and together, let the differences be $c_i = q_{n(i)}-q_{m(i)}$. These form a series that is bounded from above and below
Hence: $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$
Consider the partial sums of the above series. Note every partial sum is telescoping since in finite series, addition associates and thus we are free to cancel out. By the construction of the cover $C$ every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum by moving through the stages of the constructions of $C$ i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and:
@AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the define integral of $d_n$. So they are the same thing but re-expressed differently.
If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies, if however the function is continuous, then you can say stuff about the topologies
@Overflow2341313 Could you send a picture or a screenshot of the problem?
nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals. This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum
So there are only countably many disjoint intervals in the cover $C$
@Perturbative Okay similar problem if you don't mind guiding me in the right direction. If a function f exists, with the same setup (X, t) -> (Y,S), that is 1-1, open, and continous but not onto construct a topological space which is homeomorphic to the space (X, t).
Simply restrict the codomain so that it is onto? Making it bijective and hence invertible.
hmm, I don't understand. While I do start with an uncountable cover and using axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird...
In a schematic, we have the following, I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the cantor set:
@Perturbative Okay, last question. Think I'm starting to get this stuff now... I want to find a topology t on R such that f: (R, U) -> (R, t) defined by f(x) = x^2 is an open map, where U is the "usual" topology defined by U = {x in U | x in U implies that x in (a,b) \subseteq U}.
To do this... the smallest t can be is the trivial topology on R - {\emptyset, R}
But, we required that everything in U be in t under f?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous, bijective but do not have continuous inverse
I'm not sure if adding the additional condition that $f$ is an open map will make an difference
For those who are not very familiar about this interest of mine, besides the maths, I am also interested in the notion of a "proof space", that is the set or class of all possible proofs of a given proposition and their relationship
An element of a proof space is a proof, which consists of steps forming a path in this space. For that I have a postulate: given two paths A and B in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or else $B$ is unprovable under the current formal system
Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
|
Tool for calculating the Hermite normal form of a matrix M (with coefficients in Z) by reducing it to its row echelon form; the computation yields 2 matrices H and U such that $ H = U \cdot M $.
Hermite Normal Form Matrix - dCode
Tag(s) : Matrix
A matrix $ M $ of size $ n \times m $ with integer coefficients (natural or relative) has a Hermite decomposition if there exists an upper triangular matrix $ H $ and a unimodular matrix $ U $ such that $ H = U \cdot M $. Reminder: an upper triangular matrix $ H $ is such that $ H_{i,j} = 0 $ for $ i > j $, and a unimodular matrix is an invertible square matrix with integer coefficients whose determinant is $ \pm 1 $.
Example: $$ M = \begin{bmatrix} 3 & 2 & 1 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \Rightarrow H = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}, U = \begin{bmatrix} 0 & -1 & 1 \\ 0 & 1 & 0 \\ -1 & -1 & 3 \end{bmatrix} $$
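As a sanity check of the decomposition, one can verify $H = U \cdot M$ and $\det U = \pm 1$ directly (matrix values follow the example, with $H$ taken as the upper triangular factor and $U$ as the unimodular one; helper names are ours):

```go
package main

import "fmt"

// mul multiplies two 3×3 integer matrices.
func mul(a, b [3][3]int) [3][3]int {
	var c [3][3]int
	for i := 0; i < 3; i++ {
		for j := 0; j < 3; j++ {
			for k := 0; k < 3; k++ {
				c[i][j] += a[i][k] * b[k][j]
			}
		}
	}
	return c
}

// det computes the determinant of a 3×3 integer matrix by cofactor expansion.
func det(a [3][3]int) int {
	return a[0][0]*(a[1][1]*a[2][2]-a[1][2]*a[2][1]) -
		a[0][1]*(a[1][0]*a[2][2]-a[1][2]*a[2][0]) +
		a[0][2]*(a[1][0]*a[2][1]-a[1][1]*a[2][0])
}

func main() {
	M := [3][3]int{{3, 2, 1}, {0, 1, 0}, {1, 1, 1}}
	U := [3][3]int{{0, -1, 1}, {0, 1, 0}, {-1, -1, 3}} // unimodular
	H := [3][3]int{{1, 0, 1}, {0, 1, 0}, {0, 0, 2}}    // upper triangular
	fmt.Println(mul(U, M) == H) // true
	fmt.Println(det(U))         // 1
}
```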
There are two forms of the Hermite normal matrix: an upper triangular matrix such that $ H = UM $ (also called the row-style Hermite normal form), and a lower triangular matrix such that $ H = MU $ (also called the column-style
Hermite normal form).
dCode uses the LLL (Lenstra–Lenstra–Lovász) algorithm to calculate the Hermite decomposition (the calculation by hand is not recommended).
A matrix in Hermite normal form is the triangular reduced matrix $ H $ computed by the Hermite decomposition (above).
|
The lower attic
From Cantor's Attic
Revision as of 07:42, 30 December 2011 by Jdh
Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
$\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
stable ordinals
the ordinals of infinite time Turing machines, including $\omega_1^x$
admissible ordinals
Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
the Feferman–Schütte ordinal $\Gamma_0$
$\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$
Hilbert's hotel
$\omega$, the smallest infinity
down to the parlour, where large finite numbers dream
|
this is a mystery to me: despite having changed computers several times, and despite the website rejecting the application, the very first sequence of numbers I entered into its search window (the one that returned the same prompt to submit it for publication) appears every time. I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and its meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering its only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes", but that's what I mean by too much rope: you can't just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to who is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir, relax, I'm calm, we are all calm
A104101 is a hilarious entry as a side note; I love that Neil had to chime in in the comment section, after the big promotional message in the first part, to point out that the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a TV series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if its false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$ and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used the generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
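To make the question concrete, here is a hedged sketch of multiplication in $GF(2^3)$ with elements represented as coefficient bit vectors; the products genuinely depend on which irreducible modulus you pick, so without knowing it you cannot pin down individual products (the fields are only isomorphic, not identical element-by-element):

```go
package main

import "fmt"

// gfMul multiplies two elements of GF(2^3), represented as bit vectors
// of polynomial coefficients over Z/2Z, reducing modulo the given
// degree-3 irreducible polynomial (e.g. 0b1011 = x^3 + x + 1).
// The other degree-3 irreducible, 0b1101 = x^3 + x^2 + 1, gives
// different products for the same inputs.
func gfMul(a, b, mod uint) uint {
	var r uint
	for b > 0 {
		if b&1 == 1 {
			r ^= a // add (XOR) the current shift of a
		}
		a <<= 1
		if a&0b1000 != 0 { // degree reached 3: reduce modulo mod
			a ^= mod
		}
		b >>= 1
	}
	return r
}

func main() {
	// x · x^2 = x^3 ≡ x + 1 (mod x^3+x+1), i.e. 0b010 · 0b100 = 0b011.
	fmt.Println(gfMul(0b010, 0b100, 0b1011)) // 3
	// Same inputs, modulus x^3+x^2+1: x^3 ≡ x^2 + 1 = 0b101.
	fmt.Println(gfMul(0b010, 0b100, 0b1101)) // 5
}
```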
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit.
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha])=\Delta(\mathcal O_K)\,[\mathcal O_K:\Bbb Z[\alpha]]^2$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $f$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit. You should create a chat room for every purpose you can imagine, take full advantage of the website's functionality as I do, and leave the general-purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group which lives long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurs, where no groups generate creativity or other activity anymore
I had this kind of thought when I noticed how many forums etc. have a golden age and then die away, and, at the more personal level, how all the people who first knew me generate a lot of activity and then drift away and become distant roughly every 3 years
Well i guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert-operations AI may still exist, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other
that is, until their processing power becomes so strong that they can outdo human thinking
But, I am not worried of that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way
However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough, given the mistakes still made by them and their human owners
That is, we have become over-reliant on AI, and are not paying enough attention to whether it has interpreted our instructions correctly
That's an extraordinary amount of unreferenced rhetoric statements i could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! So there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or alternativity ($(xx)y=x(xy)$ and $(yx)x=y(xx)$). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave. Ie $a(bc)=(ab)c\iff a=c$)
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or there is a smarter way?
I realized today that Round(x^(1/2)) cannot distinguish x^(1/2) from x^(1/2+epsilon) for sufficiently small epsilon. In other words, we can always find an epsilon (small enough) such that x^(1/2) <> x^(1/2+epsilon) but at the same time have Round(x^(1/2)) = Round(x^(1/2+epsilon)). Am I right?
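A quick way to sanity-check the claim for a particular x (a hedged Python sketch; the function name is mine):

```python
def indistinguishable_after_round(x, eps):
    """Check the claim for one x and eps: x**0.5 and x**(0.5 + eps)
    are distinct floats, yet round() maps them to the same integer."""
    a = x ** 0.5
    b = x ** (0.5 + eps)
    return a != b and round(a) == round(b)

# For x = 2 and a tiny epsilon: sqrt(2) and 2**(0.5 + 1e-9) differ by
# roughly 1e-9 (far above machine epsilon at this magnitude), but both
# round to 1.
print(indistinguishable_after_round(2.0, 1e-9))  # → True
```

One caveat: if x^(1/2) lands exactly on a rounding boundary (a half-integer), an arbitrarily small epsilon can flip the rounded value, so the claim needs x^(1/2) to be away from such boundaries.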
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve $y^{n+2}$ as a function of $y^{n+1}$ ?
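To see the implicitness concretely: $y^{n+2}$ appears inside $f^{n+2} = f(t^{n+2}, y^{n+2})$, so each step requires solving an equation for $y^{n+2}$. For the linear test problem $y' = \lambda y$ the step can be solved in closed form; a hedged Python sketch (assumptions mine: exact starting value $y^1$, test problem $y' = -y$):

```python
import math

def simpson_multistep(lam, y0, h, steps):
    """Two-step Simpson (Milne-type) method for y' = lam * y.

    The scheme y^{n+2} - y^n = (h/3)(f^{n+2} + 4 f^{n+1} + f^n) is
    implicit because f^{n+2} = lam * y^{n+2} contains the unknown; for
    this linear right-hand side the step can be solved explicitly:
        y^{n+2} (1 - h*lam/3) = y^n (1 + h*lam/3) + (4*h*lam/3) y^{n+1}.
    """
    ys = [y0, y0 * math.exp(lam * h)]      # starting value y^1 taken exact
    for _ in range(steps - 1):
        yn, yn1 = ys[-2], ys[-1]
        yn2 = (yn * (1 + h * lam / 3) + (4 * h * lam / 3) * yn1) / (1 - h * lam / 3)
        ys.append(yn2)
    return ys

# y' = -y on [0, 1] with h = 0.01; compare the endpoint with exp(-1).
ys = simpson_multistep(-1.0, 1.0, 0.01, 100)
print(abs(ys[-1] - math.exp(-1.0)))        # tiny (the method is 4th order)
```

For a nonlinear $f$ the same solve would have to be done numerically (fixed-point or Newton iteration) at every step, which is exactly what makes the method implicit.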
@anakhro the energy of a graph is something studied in spectral graph theory. For a simple graph, you set up its adjacency matrix, find the eigenvalues of that matrix, and then sum the absolute values of the eigenvalues; that sum is the energy of the graph
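A small numerical illustration of that definition (a hedged sketch using numpy; the path graph $P_3$ has adjacency eigenvalues $\sqrt 2$, $0$, $-\sqrt 2$):

```python
import numpy as np

def graph_energy(adjacency):
    """Energy of a simple graph: the sum of the absolute values of the
    eigenvalues of its (symmetric) adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
    return float(np.sum(np.abs(eigenvalues)))

# Path graph on 3 vertices: eigenvalues sqrt(2), 0, -sqrt(2),
# so the energy is 2*sqrt(2) ≈ 2.828.
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
print(graph_energy(P3))
```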
|
A mathematical equation which contains logarithmic functions as terms is called a logarithmic equation.
Logarithmic terms express quantities mathematically. An expression of one or more connected log terms represents a quantity, and the same quantity can also be expressed by a different expression of one or more connected log terms.
Then, the two logarithmic expressions are equal, and the resulting mathematical equation is entirely in terms of log functions. Therefore, it is called a logarithmic equation.
$3$ is a number. It can be expressed in logarithmic form.
$\log_{2}{(8)} = 3$
The quantity $3$ is equal to $\log_{2}{8}$. So, it is a basic mathematical equation, and it can also be written in the following way.
$\implies \log_{2}{(8)}-3 = 0$
In this equation, $\log_{2}{8}$ is a logarithmic term. So, the mathematical equation is called a logarithmic equation.
Now, forget about this example temporarily and subtract $\log_{5}{(25)}$ from $\log_{4}{(1024)}$.
$\log_{4}{(1024)}-\log_{5}{(25)}$
Now, simplify this log expression.
$=\,\,\, \log_{4}{(4^5)}-\log_{5}{(5^2)}$
As per power rule of logarithms, the two terms in the expression can be simplified.
$=\,\,\, 5\log_{4}{(4)}-2\log_{5}{(5)}$
The logarithm of the base is always one, as per the log of base rule.
$=\,\,\, 5 \times 1-2 \times 1$
$=\,\,\, 5-2$
$=\,\,\, 3$
Therefore, $\log_{4}{(1024)}-\log_{5}{(25)} \,=\, 3$
But, $\log_{2}{(8)} = 3$ is also true. So, the expression $\log_{2}{(8)}$ can be equal to the logarithmic expression $\log_{4}{(1024)}-\log_{5}{(25)}$
$\,\,\, \therefore \,\,\,\,\,\, \log_{4}{(1024)}-\log_{5}{(25)}$ $\,=\,$ $\log_{2}{(8)}$
It is a mathematical equation completely in terms of logarithms, so it is also called a logarithmic equation. This is how logarithmic equations are formed in mathematics.
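The worked derivation above can be checked numerically (Python's `math.log(x, b)` computes $\log_b x$):

```python
import math

# Both sides of the derived logarithmic equation evaluate to 3
# (up to floating-point error).
lhs = math.log(1024, 4) - math.log(25, 5)   # log_4(1024) - log_5(25)
rhs = math.log(8, 2)                        # log_2(8)
print(lhs, rhs)
```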
Observe the following few more equations to have some basic knowledge about log equations.
$(1)\,\,\,\,\,\,$ $\log_{3}{[5+4\log_{3}{(x-1)}]}$ $\,=\, 2$
$(2)\,\,\,\,\,\,$ $\log{\Bigg[\dfrac{1}{2^y+y-1}\Bigg]}$ $\,=\,$ $y{(\log{5}-1)}$
$(3)\,\,\,\,\,\,$ $\log_{2a}{2}$ $+$ $\log_{a}{2}$ $+$ $\dfrac{3}{2}$ $\,=\, 0$
Now, let’s learn solving logarithmic equations from example problems.
Learn how to solve easy to difficult mathematics problems of all topics in various methods with step by step process and also maths questions for practising.
|
The effect of replacing $x$ in $\cos x$ with $x + a y - a$ is a shear. In particular, this shear can be represented by the homogeneous matrix$$ M = \begin{pmatrix} 1 & -a & a \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$since $$ \begin{pmatrix} \tilde{x} \\ \tilde{y} \\ 1 \end{pmatrix} = M \cdot \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} x - ay + a \\ y \\ 1\end{pmatrix} $$That is, given coordinates $(x,y)$ of a point of the graph of $y = \cos x$, its image under the shear satisfies $x = \tilde{x} + a\tilde{y} - a$, so $(\tilde{x}, \tilde{y})$ is a point on the graph $\tilde{y} = \cos(\tilde{x}+a\tilde{y}-a)$.
This matrix applies a linear map. The effect of a linear map on area is given by its determinant -- if the determinant is, say, $2$, the map doubles areas. So we compute the determinant of our map. Using the first column for expansion by minors should minimize computation. We find$$ \det M = 1 \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} + 0 \cdot | \dots | + 0 \cdot | \dots | = 1 \text{.} $$This says that areas are unchanged when we apply this shear. Therefore, we only need to be able to integrate the unsheared cosine, but with (reverse) sheared bounds of integration. A picture might help.
Here's a plot of $y = \cos(x + ay - a)$ with $a = 1/2$.
Say we want the integral from $1$ to $4$. The left and right edges of the area we want are vertical on this graph.
But on the unsheared graph, they are not.
The resulting area is a triangular region on the left, plus the usual integral between the points where the bounds meet the unsheared cosine graph, plus a triangular region on the right. (Notice that the left triangular region is negative if the point where it meets $\cos x$ has negative height; similarly for the right region if that line meets cosine at positive height.)
So you integrate this by finding where the unsheared bounds meet the unmodified cosine graph, computing the usual integral between those bounds, then correcting by these two triangle areas.
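A numerical sanity check of this procedure, with $a = 1/2$ and bounds $1$ to $4$ as in the plots (a hedged Python sketch; all function names are mine). It computes the area once directly on the implicitly defined sheared curve, and once via the unsheared graph; rather than the two explicit triangles, the sketch folds the corrections into a single change of variables $\tilde{x} = X(x) = x + a(1 - \cos x)$, which gives the same area:

```python
import math

A = 0.5            # shear parameter a from the plots
LO, HI = 1.0, 4.0  # integration bounds on the sheared graph
N = 4000           # trapezoid panels

def sheared_height(xt, a=A, iters=80):
    """Solve y = cos(xt + a*y - a) by fixed-point iteration; it
    contracts because |d/dy cos(xt + a*y - a)| <= a < 1."""
    y = 0.0
    for _ in range(iters):
        y = math.cos(xt + a * y - a)
    return y

def direct_area():
    """Trapezoid rule applied to the implicitly defined sheared curve."""
    h = (HI - LO) / N
    total = 0.5 * (sheared_height(LO) + sheared_height(HI))
    total += sum(sheared_height(LO + i * h) for i in range(1, N))
    return total * h

def unsheared_area():
    """Same area via the unsheared graph: the shear sends (x, cos x) to
    (X(x), cos x) with X(x) = x + a*(1 - cos x), so the area is the
    substitution integral of cos(x) * X'(x) between the preimages of
    the bounds."""
    def X(x):
        return x + A * (1.0 - math.cos(x))
    def preimage(c):
        # X is increasing here (X' = 1 + a*sin x >= 1 - a > 0), so bisect.
        lo, hi = c - 1.0, c + 1.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if X(mid) < c:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    x1, x2 = preimage(LO), preimage(HI)
    # Antiderivative of cos(x) * (1 + a*sin x) is sin x + (a/2) sin^2 x.
    F = lambda x: math.sin(x) + 0.5 * A * math.sin(x) ** 2
    return F(x2) - F(x1)

print(abs(direct_area() - unsheared_area()))   # small (quadrature error only)
```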
|
I wonder how to solve the following constrained problem
${\rm Minimize}_{\vec{A}}$ $\parallel Z\vec{A}-Y\parallel^2_2$ , $\quad\vec{A}\in\mathbb{R}^{n^2}$
such that: $A\in\mathbb{R}^{5\times 5}$ is positive definite
where $\vec{A}={\rm vec}(A)$. For the unconstrained case, which has a closed-form solution, I used the MATLAB command lsqlin(). With constraints I am stuck in MATLAB, as the constraints should be in terms of the minimization variable $\vec{A}$; however, my constraint is in terms of the matrix form $A$.
I can also say
${\rm Minimize}_{A}$ $\parallel X^{\rm T}AX-Y\parallel^2_2$ ,
such that: $A\in\mathbb{R}^{5\times 5}$ and $X^{\rm T}AX >0$ for all nonzero $X\in\mathbb{R}^5$
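Not from the original post, but one standard way to handle this constraint in any language is projected gradient descent: take a gradient step on ${\rm vec}(A)$, then project back onto the positive-definite cone by symmetrizing and clipping the eigenvalues at a small $\varepsilon > 0$. A hedged numpy sketch (all names and the synthetic data are mine):

```python
import numpy as np

def pd_least_squares(Z, Y, n=5, eps=1e-6, iters=500):
    """Minimize ||Z vec(A) - Y||_2^2 over symmetric A >= eps*I by
    projected gradient descent (eigenvalue clipping as the projection)."""
    A = np.eye(n)
    step = 0.5 / np.linalg.norm(Z.T @ Z, 2)   # = 1/L for this objective
    for _ in range(iters):
        a = A.reshape(-1)
        grad = 2.0 * Z.T @ (Z @ a - Y)        # gradient in vec(A)
        A = (a - step * grad).reshape(n, n)
        A = 0.5 * (A + A.T)                   # project onto symmetric matrices
        w, V = np.linalg.eigh(A)
        A = (V * np.maximum(w, eps)) @ V.T    # clip eigenvalues at eps
    return A

# Small synthetic instance (data made up for the demo).
rng = np.random.default_rng(1)
Z = rng.standard_normal((40, 25))
Y = rng.standard_normal(40)
A = pd_least_squares(Z, Y)
print(np.linalg.eigvalsh(A).min())   # >= eps, so A is positive definite
```

Note that this enforces the slightly stronger constraint $A \succeq \varepsilon I$, since the set of strictly positive definite matrices is open and has no projection.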
|
The roots of a quadratic equation can be real and repeated (a double root). This happens exactly when the discriminant of the quadratic equation is equal to zero.
$ax^2+bx+c = 0$ is a quadratic equation and its discriminant ($\Delta$) is $b^2-4ac$.
The roots of the quadratic equation in terms of discriminant are $\dfrac{-b + \sqrt{\Delta}}{2a}$ and $\dfrac{-b \,-\sqrt{\Delta}}{2a}$
If the discriminant of the quadratic equation is zero ($\Delta = 0$), then the roots are $\dfrac{-b + \sqrt{0}}{2a}$ and $\dfrac{-b \,-\sqrt{0}}{2a}$.
Therefore, the two roots are $\dfrac{-b}{2a}$ and $\dfrac{-b}{2a}$. In this case, the two roots are the same.
Here, the literals $a$, $b$ and $c$ are real numbers, so the two roots are also real numbers.
The property of the roots can be understood from an example quadratic equation.
$9x^2+30x+25 = 0$
Find the discriminant of the quadratic equation.
$\Delta = 30^2-4 \times 9 \times 25$
$\implies \Delta = 900-900$
$\implies \Delta = 0$
The value of the discriminant of the quadratic equation $9x^2+30x+25 = 0$ is zero. Now, evaluate the roots of this equation.
$x \,=\, \dfrac{-30 \pm \sqrt{30^2-4 \times 9 \times 25}}{2 \times 9}$
$\implies$ $x \,=\, \dfrac{-30 \pm \sqrt{0}}{18}$
$\implies$ $x \,=\, \dfrac{-30+0}{18}$ and $x \,=\, \dfrac{-30-0}{18}$
$\implies$ $x \,=\, \dfrac{-30}{18}$ and $x \,=\, \dfrac{-30}{18}$
$\,\,\, \therefore \,\,\,\,\,\,$ $x \,=\, -\dfrac{5}{3}$ and $x \,=\, -\dfrac{5}{3}$
Therefore $x \,=\, -\dfrac{5}{3}$ and $-\dfrac{5}{3}$ are the two roots of this quadratic equation. The two roots are equal real numbers; hence this is simply called a repeated or double root.
It is proved that the roots of a quadratic equation are real and equal when the discriminant of the quadratic equation is equal to zero.
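The worked example can also be checked mechanically (a short Python sketch):

```python
a, b, c = 9, 30, 25

discriminant = b * b - 4 * a * c
print(discriminant)          # → 0, so there is a repeated real root

root = -b / (2 * a)          # -30/18 = -5/3
print(root)
print(a * root ** 2 + b * root + c)   # ~0 up to floating-point error
```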
|
Dependent Choice (Fixed First Element) Theorem
Let $\mathcal R$ be a binary relation on a non-empty set $S$. Suppose that:
$\forall a \in S: \exists b \in S: a \mathrel{\mathcal R} b$
that is, that $\mathcal R$ is a left-total relation (specifically a serial relation).
Let $s \in S$.
Then there exists a sequence $\left\langle{x_n}\right\rangle_{n \in \N}$ in $S$ such that:
$x_0 = s$
$\forall n \in \N: x_n \mathrel{\mathcal R} x_{n+1}$
Proof
Let $S' = \left\{{y \in S: s \mathrel{\mathcal R^+} y}\right\}$, where $\mathcal R^+$ is the transitive closure of $\mathcal R$.
Let $\mathcal R'$ be the restriction of $\mathcal R$ to $S'$.
For each $x \in S'$, there is a $y \in S$ such that $x \mathrel{\mathcal R} y$. But then $s \mathrel{\mathcal R^+} y$, so $y \in S'$, so $x \mathrel{\mathcal R'} y$.
Thus $\mathcal R'$ is a left-total relation on $S'$.
$S'$ is non-empty: since $\mathcal R$ is left-total, there is a $t \in S$ such that $s \mathrel{\mathcal R} t$, so $s \mathrel{\mathcal R^+} t$, so $t \in S'$.
By the Axiom of Dependent Choice applied to the left-total relation $\mathcal R'$ on the non-empty set $S'$, there is a sequence $\left\langle{y_n}\right\rangle_{n \in \N}$ in $S'$ such that $y_n \mathrel{\mathcal R'} y_{n+1}$ for each $n \in \N$.
Then by the definition of restriction, $y_n \mathrel{\mathcal R} y_{n+1}$ for each $n \in \N$.
By the definition of $S'$, $s \mathrel{\mathcal R^+} y_0$.
By the definition of transitive closure, there are elements $x_0, \dots, x_m$ such that $s = x_0 \mathrel{\mathcal R} x_1 \mathrel{\mathcal R} \cdots \mathrel{\mathcal R} x_m = y_0$.
Then for $n > m$, define $x_n$ as $y_{n-m}$.
This sequence then meets the requirements.
$\blacksquare$
Axiom of Dependent Choice
This theorem depends on the Axiom of Dependent Choice.
The consensus in conventional mathematics is that it is true and that it should be accepted.
|
Chemical Society Reviews, ISSN 0306-0012, 07/2010, Volume 39, Issue 7, pp. 2354 - 2371
There has been remarkable progress in the science and technology of semiconducting polymers during the past decade. The field has evolved from the early work...
PHOTOVOLTAIC CELLS | CONDUCTING POLYMER | NOBEL LECTURE | PHOTOINDUCED ELECTRON-TRANSFER | BULK HETEROJUNCTION MATERIALS | PLASTIC SOLAR-CELLS | CONJUGATED POLYMER | METALLIC POLYMERS | 4TH GENERATION | LOW-BANDGAP POLYMER | CHEMISTRY, MULTIDISCIPLINARY
Journal Article
Energy, ISSN 0360-5442, 12/2018, Volume 164, pp. 147 - 159
This review article presents a description of contemporary developments and findings related to the different elements needed in future 4th generation district...
Smart thermal grids | Low-temperature district heating | 4th generation district heating | Meta conclusions | Smart energy systems | 4GDH | generation district heating | PUMPS | ENERGY & FUELS | WIND POWER | SAVING POTENTIALS | EUROPE | WASTE HEAT | THERMODYNAMICS | DOMESTIC HOT-WATER | SYSTEMS | 100-PERCENT RENEWABLE ENERGY | DATA CENTERS | SINGLE-FAMILY HOUSES | Heating systems | Heat sources | Costs | Integration | District heating | Alternative energy | Energy costs | Energy sources | Smart grid technology | Cost benefit analysis | Systems integration | Renewable energy sources | Renewable energy | Energy | Heating | Sustainability | Energy efficiency | Low temperature | Engineering and Technology | Teknik och teknologier | Energy Engineering | Energiteknik | Mechanical Engineering | Maskinteknik | 4 generation district heating
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 10/2007, Volume 76, Issue 7
In the light of the LHC, we revisit the implications of a fourth generation of chiral matter. We identify a specific ensemble of particle masses and mixings...
BOSON | ASTRONOMY & ASTROPHYSICS | MASS | STANDARD MODEL FAMILIES | 4TH GENERATION | RADIATIVE-CORRECTIONS | QUARK | NEUTRINOS | DECAYS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Phenomenology
Journal Article
Optics Letters, ISSN 0146-9592, 11/2018, Volume 43, Issue 22, pp. 5599 - 5602
Journal Article
Optics Express, ISSN 1094-4087, 2014, Volume 22, Issue 22, pp. 27086 - 27093
High-average-power fourth harmonic generation (4thHG) of an Nd:YAG laser has been achieved by using a KBe2BO3F2-prism-coupled device (KBBF-PCD). The highest...
HARMONIC-GENERATION | SOLID-STATE LASER | FREQUENCY | GROWTH | CRYSTAL | OPTICS | KHZ | 4TH-HARMONIC GENERATION | ULTRAVIOLET-BEAM GENERATION
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 02/2008, Volume 77, Issue 3
Journal Article
Physics Letters B, ISSN 0370-2693, 12/2018, Volume 787, pp. 1 - 7
An extension of the Standard Model is presented that leads to the possible existence of new gauge bosons with masses in the range of a few TeV. Due to the fact...
Z-prime | Multi-Higgs model | 4th fermion generation | CONSERVATION | SYMMETRY | ASTRONOMY & ASTROPHYSICS | PHYSICS, NUCLEAR | 4TH GENERATION | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Phenomenology
Journal Article
Optics Communications, ISSN 0030-4018, 03/2018, Volume 410, pp. 966 - 969
We analyse the output quantum tripartite correlations from an intracavity nonlinear optical system which uses cascaded nonlinearities to produce both second...
Entanglement | Steering | Cascaded systems | FREQUENCY | CRITERION | OPTICS | CONTINUOUS VARIABLE SYSTEMS | 4TH-HARMONIC GENERATION | Physics - Quantum Physics
Journal Article
Physical Review D - Particles, Fields, Gravitation and Cosmology, ISSN 1550-7998, 08/2009, Volume 80, Issue 3
Within the framework of supersymmetry, the particle content is extended in a way that each Higgs doublet is in a full generation. Namely, in addition to...
LOW-ENERGY SUPERSYMMETRY | ASTRONOMY & ASTROPHYSICS | ELECTROWEAK SYMMETRY-BREAKING | LEPTON NUMBER | DARK-MATTER | 4TH GENERATION | MODEL | CROSS-SECTION | COUPLING-CONSTANTS | HEAVY TOP-QUARK | GRAND UNIFICATION | PHYSICS, PARTICLES & FIELDS
Journal Article
10. Optimization of 4th generation distributed district heating system: Design and planning of combined heat and power
Renewable Energy, ISSN 0960-1481, 01/2019, Volume 130, pp. 371 - 387
This study applies a mathematical programming procedure to model the optimal design and planning of a new district which satisfies two features of the 4th...
CHP | Cogeneration | Heat and power | 4th generation | Optimization | Distributed | GREEN & SUSTAINABLE SCIENCE & TECHNOLOGY | OPERATION OPTIMIZATION | THERMAL-ENERGY STORAGE | ENERGY & FUELS | NETWORKS | MODEL | SIMULATION | COST | INTEGRATION | RENEWABLE ENERGY | EXERGY ANALYSIS | Cogeneration power plants | Case studies | Energy trading | Analysis
Journal Article
11. Comparison of 4th-Generation HIV Antigen/Antibody Combination Assay With 3rd-Generation HIV Antibody Assays for the Occurrence of False-Positive and False-Negative Results
Laboratory Medicine, ISSN 0007-5027, 05/2015, Volume 46, Issue 2, pp. 84 - 89
Objective: To assess the false-positive and false-negative rates of a 4th-generation human immunodeficiency virus (HIV) assay, the Abbott ARCHITECT, vs 2 HIV...
Western blot | 3rd-generation HIV assays | False-negative results | False-positive results | HIV | 4th-generation HIV testing | UNITED-STATES | DIAGNOSIS | VIRUS | PERFORMANCE | SEXUAL TRANSMISSION | SCREENING ASSAY | MULTICENTER EVALUATION | INFECTION | false-positive results | false-negative results | IMMUNOASSAY | MEDICAL LABORATORY TECHNOLOGY | ANTIGEN | HIV Antibodies - blood | HIV Antigens - blood | HIV-1 - immunology | Diagnostic Tests, Routine - methods | HIV Infections - diagnosis | Humans | Immunoassay - methods | Female | Male | False Positive Reactions | Serologic Tests - methods | Antigens | Automation | Immunoglobulins | Algorithms | Disease transmission | Architects | Medical laboratories | Human immunodeficiency virus--HIV | Infections | FDA approval | Index Medicus
Journal Article
Journal of Historical Sociology, ISSN 0952-1909, 03/2019, Volume 32, Issue 1, pp. 142 - 151
The study of revolution in historical sociology is conventionally divided into four ‘generations’ of scholarship, with the fourth associated with an...
ORIGINS | 4TH GENERATION | ANTHROPOLOGY | HISTORY | SOCIOLOGY | Analysis | Structuralism | Revolutions | Intervention | Historical sociology | 21st century | Cold War | Political change | Social movements | Dominance | Social change
Journal Article
The European Physical Journal C, ISSN 1434-6044, 04/2003, Volume 27, Issue 4, pp. 555 - 561
We study the effect of a sequential fourth generation on $b\to s\gamma$ , Bs mixing, $B\to K^{(*)}\ell^+\ell^-$ , $X_s\ell^+\ell^-$ and $\phi K_{\mathrm {S}}$...
STANDARD MODEL | QCD | LEADING LOGARITHMS | DOUBLET MODEL | 4TH GENERATION | DELTA-S=1 | QUARK | PHYSICS | DECAYS | B->S-GAMMA | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Phenomenology
Journal Article
14. The fourth generation Alere (TM) HIV Combo rapid test improves detection of acute infection in MTN-003 (VOICE) samples
JOURNAL OF CLINICAL VIROLOGY, ISSN 1386-6532, 09/2017, Volume 94, pp. 15 - 21
Background: Early and accurate detection of HIV is crucial when using pre-exposure prophylaxis (PrEP) for HIV prevention to avoid PrEP initiation in acutely...
HIV-1 | Acute infection | HIV confirmatory test | 4th generation rapid test | VIROLOGY | PERFORMANCE | HIV diagnostics | PREEXPOSURE PROPHYLAXIS | Pre-exposure prophylaxis
Journal Article
15. Second-harmonic generation of light at 245 nm in a lithium tetraborate whispering gallery resonator
Optics Letters, ISSN 0146-9592, 2015, Volume 40, Issue 9, pp. 1932 - 1935
Journal Article
Energies, ISSN 1996-1073, 12/2018, Volume 11, Issue 12, p. 3287
Biomass heating networks provide renewable heat using low carbon energy sources. They can be powerful tools for economy decarbonization. Heating networks can...
Biomass district heating for rural locations | emissions abatement | 4th generation district heating | Low temperature district heating system | ENERGY | INTEGRATION | ENERGY & FUELS | SYSTEMS | biomass district heating for rural locations | SIMULATION | CO2 emissions abatement | low temperature district heating system
Journal Article
17. Performance comparison of the 4th generation Bio-Rad Laboratories GS HIV Combo Ag/Ab EIA on the EVOLIS™ automated system versus Abbott ARCHITECT HIV Ag/Ab Combo, Ortho Anti-HIV 1 + 2 EIA on Vitros ECi and Siemens HIV-1/O/2 enhanced on Advia Centaur
Journal of Clinical Virology, ISSN 1386-6532, 2013, Volume 58, Issue 1, pp. e79 - e84
Abstract Background A multisite study was conducted to evaluate the performance of the Bio-Rad 4th generation GS HIV Combo Ag/Ab EIA versus Abbott 4th...
Infectious Disease | Allergy and Immunology | False positives | Specificity | Ag/Ab combination | 4th Generation | PPV | ANTIBODIES | VIRUS | SEROCONVERSION | ASSAY | ALGORITHM | SEXUAL TRANSMISSION | VIROLOGY | RECOMMENDATIONS | SYNTHETIC PEPTIDE | INFECTION | IMMUNOASSAY | HIV Antibodies - blood | HIV Antigens - blood | HIV-1 - immunology | Diagnostic Tests, Routine - methods | HIV Infections - diagnosis | Humans | Sensitivity and Specificity | False Positive Reactions | Automation, Laboratory - methods | Serologic Tests - methods | Index Medicus
Journal Article
Japanese Journal of Applied Physics, ISSN 0021-4922, 12/2018, Volume 57, Issue 12, p. 122702
Efficient second harmonic generation (SHG) of a Yb:KGW laser with the femtosecond (fs) pulsed output using lithium triborate (LiB3O4, LBO) and barium borate...
HARMONIC-GENERATION | AMPLIFIER | PHYSICS, APPLIED | 3RD | 4TH | EFFICIENT 2ND | CRYSTAL | BIBO | HIGH-POWER | RADIATION | PULSES | Femtosecond | Second harmonic generation | Barium | Crystals | Lithium | Laser beams | Conversion | Ion beams | Pulse duration | Mathematical analysis | Lasers | Efficiency | Potassium gadolinium tungstate | LBO
Journal Article
19. Continuous tuning of double resonance-enhanced second harmonic generation in a dispersive dielectric resonator
Optics Express, ISSN 1094-4087, 01/2014, Volume 22, Issue 1, pp. 557 - 562
Journal Article
|
(this post requires some rethinking) 7/29/17 -mr
It seems this still does not explain why the mass of the proton is what it is... back to drawing board! $$mr={2h\over \pi c}$$
Or, why can't the proton mass be 4% smaller, and the radius 4% bigger? It still works in the equation... (approximately. Linearization errors...)
This is because the solution is a cymatics-like resonance in the vacuum, and the mass ratio, $\phi$, is actually an information-theory area ratio divided by a geometric volumetric ratio: (Haramein's team's work):
$$\phi={\eta\over R}$$
$$\eta={Area_{objectSurface}\over A_{equatorialXsectionPSU}}$$
$$R={Volume_{object}\over Volume_{PSU}}$$
(the PSU is the Planck Spherical Unit, a sphere of diameter Planck length, ${\ell}_{\ell}$)
for the proton, a factor of 2 is needed:
$$m_p={2{\eta\over R}m_{\ell}}$$
$m_p=$ Proton mass
$m_{\ell}=$ Planck mass
When the terms of $\eta\over R$, the mass ratio, are expanded, the result is:
$$m_pr_p={2h\over \pi c}$$
$r_p$ is Proton radius.
$h=$ Planck's constant
$c=$ Speed of Light
$\pi$ is good.
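Editorial aside: the numerical agreement claimed in $m_p r_p = {2h\over \pi c}$ is easy to check (a Python sketch; the constant values below are my own CODATA-style assumptions, not from the post, and the check verifies only the arithmetic coincidence, not the derivation):

```python
import math

# Assumed CODATA-style values (not from the post):
h   = 6.62607015e-34      # Planck constant, J s
c   = 299792458.0         # speed of light, m/s
m_p = 1.67262192369e-27   # proton mass, kg
r_p = 0.8414e-15          # proton rms charge radius, m

lhs = m_p * r_p               # left-hand side of the claimed relation
rhs = 2 * h / (math.pi * c)   # right-hand side
# both come out near 1.407e-42 kg m, agreeing to better than 1%
```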
So what you have is a 2D area ratio term and a 3D volumetric ratio, which resonate similarly to cymatics-type oscillations in a substrate material, the substrate being the new superfluid-like aether of the vacuum.
It's pretty simple math, and this approach gives the same answer as the quantized angular momentum approach. Thus, the higher-level 3D approach seems to be more inclusive, as it can be used to calculate the mass of black holes, protons, and electrons. What's next?
The details of the math and derivation have been outlined previously:
http://fractalu.com/AdvancedGeometricPhysicsSolutions1.pdf
and more info here:
https://www.thenewenergyindustry.com/mass-ratio/
The Surfer, OM-IV
|
I am new to any CAS (and Mathematica, for that matter) and new to StackExchange too, so forgive me and correct me on any mistakes.
I have this function: $J_p=\sum_{m,n=1}^{\infty} \epsilon_{mn}f_{mn}\sum_{k=-\infty}^{\infty}\frac{J_k^2(\beta)(m\Omega+k\omega)}{1+(m\Omega+k\omega)^2}$ where $\epsilon_{mn}=-\frac{m n}{4\pi^2}\int_0^{2\pi}\epsilon(p_x,p_y)\exp(-i(m p_x+n p_y))\,dp_x dp_y$ and $f_{mn}=-\frac{m n}{4\pi^2}\int_0^{2\pi}\frac{\exp(-i(m p_x+n p_y))}{1+\exp(-\epsilon(p_x,p_y))}\,dp_x dp_y$ where again $\epsilon(p_x,p_y)=\sqrt{1+4\cos\left(\frac{p_y}{2}\right)\cos\left(\frac{p_x\sqrt{3}}{2}\right)+4\cos^2\left(\frac{p_y}{2}\right)}$.
Here is my Mathematica code to evaluate this:
Off[NIntegrate::ncvi];

epsilonCoeffsMMA[cl_] := Module[{reComp, imComp},
  reComp[m_, n_] := (-m n)/(4 \[Pi]^2) NIntegrate[
     Re[(1 + 4 Cos[py/2] Cos[(px Sqrt[3])/2] + 4 Cos[py/2]^2)^(1/2) Exp[-I (m px + n py)]],
     {px, 0, 2 \[Pi]}, {py, 0, 2 \[Pi]},
     Method -> "Trapezoidal", MaxRecursion -> 100];
  imComp[m_, n_] := (-m n)/(4 \[Pi]^2) NIntegrate[
     Im[(1 + 4 Cos[py/2] Cos[(px Sqrt[3])/2] + 4 Cos[py/2]^2)^(1/2) Exp[-I (m px + n py)]],
     {px, 0, 2 \[Pi]}, {py, 0, 2 \[Pi]},
     Method -> "Trapezoidal", MaxRecursion -> 100];
  emnMatrix = Table[0, {m, 1, cl}, {n, 1, cl}];
  Do[emnMatrix[[m, n]] = reComp[m, n] + I imComp[m, n], {m, 1, cl}, {n, 1, cl}];
];

boltzECoeffsMMA[cl_] := Module[{reComp, imComp},
  reComp[m_, n_] := (-m n)/(4 \[Pi]^2) NIntegrate[
     Re[Exp[-I (m px + n py)]/(1 + Exp[-(1 + 4 Cos[py/2] Cos[(px Sqrt[3])/2] + 4 Cos[py/2]^2)^(1/2)])],
     {px, 0, 2 \[Pi]}, {py, 0, 2 \[Pi]},
     Method -> "Trapezoidal", MaxRecursion -> 100];
  imComp[m_, n_] := (-m n)/(4 \[Pi]^2) NIntegrate[
     Im[Exp[-I (m px + n py)]/(1 + Exp[-(1 + 4 Cos[py/2] Cos[(px Sqrt[3])/2] + 4 Cos[py/2]^2)^(1/2)])],
     {px, 0, 2 \[Pi]}, {py, 0, 2 \[Pi]},
     Method -> "Trapezoidal", MaxRecursion -> 100];
  fmnMatrix = Table[0, {m, 1, cl}, {n, 1, cl}];
  Do[fmnMatrix[[m, n]] = reComp[m, n] + I imComp[m, n], {m, 1, cl}, {n, 1, cl}];
];

jPMMA[coeffLim_, kernLim_] := Module[{cl = coeffLim, kl = kernLim, px, py},
  epsilonCoeffsMMA[cl];
  boltzECoeffsMMA[cl];
  coeffMatrix = emnMatrix fmnMatrix;
  sumMatrix = Table[
    Sum[(BesselJ[k, \[Beta]]^2 (m \[CapitalOmega] + k \[Omega]))/(1 + (m \[CapitalOmega] + k \[Omega])^2),
      {k, -kl, kl}], {m, 1, cl}, {n, 1, cl}];
  jParaMMA = Total[coeffMatrix sumMatrix, 2];
];
This generates a function jParaMMA which I can Plot after I have made the call jPMMA[a, b] for some integers a and b. For example:
jPMMA[10, 10];
Plot[Evaluate@Re[jParaMMA /. {\[Beta] -> 2, \[Omega] -> {0, 2, 4, 6, 8}}], {\[CapitalOmega], 0, 20}, PlotRange -> Full]
for which
First[Timing[jPMMA[10, 10]]]
gives
115.437500
My question is: how can I obtain similar results, possibly with more terms (i.e. from running jPMMA[50, 60], say), in a shorter time? Thank you.
PS: I used the
Off[NIntegrate::ncvi] because I do not know how to eliminate it from my numerical integration and I'd be glad to obtain some help for that too. Also, I used the
Trapezoidal method because I noticed it gave a faster approximation even when coupled with
MaxRecursion -> 100. I have tried with the Cuba library implementation in both Mathematica and Maple, which I was led to by this post, and the approximations are appreciably close.
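Editorial note: each double integral above is, up to the factor $-mn$, a 2-D Fourier coefficient of a smooth periodic function, so all coefficients up to a given order can be obtained at once from a single 2-D FFT (on a uniform periodic grid the FFT is exactly the trapezoidal rule). A hedged Python/NumPy sketch of this idea; the grid size and variable names are my own, not from the question:

```python
import numpy as np

N = 256                                  # grid size per dimension (my choice)
p = 2 * np.pi * np.arange(N) / N         # uniform grid on [0, 2*pi)
px, py = np.meshgrid(p, p, indexing="ij")

# The band energy eps(px, py) from the question:
eps = np.sqrt(1 + 4*np.cos(py/2)*np.cos(px*np.sqrt(3)/2) + 4*np.cos(py/2)**2)

# fft2 on a periodic grid equals the trapezoidal rule, so
# c[m, n] ~ (1/4pi^2) * integral of f * exp(-i (m px + n py)) over [0,2pi]^2
c_eps = np.fft.fft2(eps) / N**2
c_f   = np.fft.fft2(1.0 / (1.0 + np.exp(-eps))) / N**2

cl = 10
m = np.arange(1, cl + 1)
emn = -(m[:, None] * m[None, :]) * c_eps[1:cl+1, 1:cl+1]   # epsilon_{mn}
fmn = -(m[:, None] * m[None, :]) * c_f[1:cl+1, 1:cl+1]     # f_{mn}
```

The Bessel sum is then a separate matrix that can be built once and reused, so the expensive quadrature disappears entirely.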
|
Closure of Subset in Subspace
Jump to navigation Jump to search
Theorem
Let $T = \struct{S, \tau}$ be a topological space.
Let $H$ be a subset of $S$.
Let $T_H = \struct{H, \tau_H}$ be the topological subspace on $H$.
Let $A$ be a subset of $H$.
Then: $\map {\operatorname{cl}_H} A = H \cap \map {\operatorname{cl}} A$
where
$\map {\operatorname{cl}_H} A$ denotes the closure of $A$ in $T_H$
$\map {\operatorname{cl}} A$ denotes the closure of $A$ in $T$
Proof
\(\displaystyle \map {\operatorname{cl}_H} A\) \(=\) \(\displaystyle \bigcap \set{K \subseteq H: A \subseteq K, K \text{ is closed in } T_H}\) (Definition of closure of subset)
\(\displaystyle \) \(=\) \(\displaystyle \bigcap \set{N \cap H: A \subseteq N, N \text{ is closed in } T}\) (Closed Set in Topological Subspace)
\(\displaystyle \) \(=\) \(\displaystyle H \cap \bigcap \set{N: A \subseteq N, N \text{ is closed in } T}\) (Intersection Distributes over Intersection of Family of Sets)
\(\displaystyle \) \(=\) \(\displaystyle H \cap \map {\operatorname{cl} } A\) (Definition of closure of subset)
$\blacksquare$
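The identity can also be checked mechanically on a small finite space (an illustrative Python sketch, not part of the proof; the particular topology is my own example):

```python
from itertools import chain, combinations

S = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), S]   # a topology on S
H = frozenset({2, 3})
tau_H = [U & H for U in tau]                                # subspace topology

def closure(A, ground, opens):
    """Intersection of all closed supersets of A (closed = complement of open)."""
    result = ground
    for U in opens:
        K = ground - U          # a closed set
        if A <= K:
            result &= K
    return result

def subsets(X):
    return map(frozenset, chain.from_iterable(
        combinations(sorted(X), r) for r in range(len(X) + 1)))

# cl_H(A) = H ∩ cl(A) for every subset A of H:
for A in subsets(H):
    assert closure(A, H, tau_H) == H & closure(A, S, tau)
```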
|
I have tried $\gcd(0,8)$ in a lot of online gcd (or hcf) calculators, but some say $\gcd(0,8)=0$, others give $\gcd(0,8)=8$, and still others give $\gcd(0,8)=1$. So which of these is correct, and why are there different conventions?
Let's recall the definition of $ $ "$\rm a $ divides $\rm b$" $ $ in a ring $\rm\,Z,\, $ often written as $\rm\ a\mid b\ \ in\ Z.$
$\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \rm\ a\mid b\ \ in\ Z\ \iff\ a\,c = b\ \ $ for some $\rm\ c\in Z$
Recall also the definition of $\rm\ gcd(a,b),\,$ namely
$(1)\rm\qquad\quad \rm gcd(a,b)\mid a,b\qquad\qquad\qquad\ $ the gcd is a
common divisor
$(2)\rm\qquad\quad\! \rm c\mid a,b\ \ \ \Longrightarrow\ \ c\mid gcd(a,b)\quad$ the gcd is a
greatest common divisor
$\ \ \ \ $ i.e. $\rm\quad\ c\mid a,b\ \iff\ c\mid gcd(a,b)\quad\,$ expressed in $\iff$ form $ $ [put $\rm\ c = gcd(a,b)\ $ for $(1)$]
Notice $\rm\quad\, c\mid a,0\ \iff\ c\mid a\,\ $ so $\rm\ gcd(a,0)\ =\ a\ $ by the prior "iff" form of the gcd definition.
Note that $\rm\ gcd(0,8) \ne 0\,$ since $\rm\ gcd(0,8) = 0\ \Rightarrow\ 0\mid 8\ $ contra $\rm\ 0\mid x\ \iff\ x = 0.$
Note that $\rm\ gcd(0,8) \ne 1\,$ else $\rm\ 8\mid 0,8\ \Rightarrow\ 8\mid gcd(0,8) = 1\ \Rightarrow\ 1/8 \in \mathbb Z. $
Therefore it makes no sense to define $\rm\ gcd(0,8)\ $to be $\,0\,$ or $\,1\,$ since $\,0\,$ is not a common divisor of $\,0,8\,$ and $\,1\,$ is not the
greatest common divisor.
The $\iff$ gcd definition is
universal - it may be employed in any domain or cancellative monoid, with the convention that the gcd is defined only up to a unit factor. This $\iff$ definition is very convenient in proofs since it enables efficient simultaneous proof of both implication directions. For example, below is a proof of this particular form for the fundamental GCD distributive law $\rm\ (ab,ac)\ =\ a\ (b,c)\ $ slightly generalized (your problem is simply $\rm\ c=0\ $ in the special case $\rm\ (a,\ \ ac)\ =\,\ a\ (1,c)\ =\ a\,$).

Theorem $\rm\quad (a,b)\ =\ (ac,bc)/c\quad$ if $\rm\ (ac,bc)\ $ exists.

Proof $\rm\quad d\mid a,b\ \iff\ dc\mid ac,bc\ \iff\ dc\mid (ac,bc)\ \iff\ d\mid (ac,bc)/c$
See here for further discussion of this property and its relationship with
Euclid's Lemma.
Recall also how this universal approach simplifies the proof of the basic GCD * LCM law:
Theorem $\rm\;\; \ (a,b) = ab/[a,b] \;\;$ if $\;\rm\ [a,b] \;$ exists.

Proof $\rm\quad d\mid a,b \;\iff\; a,b\mid ab/d \;\iff\; [a,b]\mid ab/d \;\iff\; d\mid ab/[a,b]$
For much further discussion see my many posts on GCDs.
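These definitional facts agree with standard library implementations; a small Python check (editorial, not part of the answer):

```python
import math

# gcd(a, 0) = a, including the top element gcd(0, 0) = 0:
print(math.gcd(0, 8), math.gcd(8, 0), math.gcd(0, 0))   # 8 8 0

# The "iff" characterization: c | a and c | b  <=>  c | gcd(a, b).
def divides(c, n):
    return n % c == 0   # for c > 0; note every c divides 0

for a in range(20):
    for b in range(20):
        g = math.gcd(a, b)
        for c in range(1, 20):
            assert divides(c, g) == (divides(c, a) and divides(c, b))
```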
Another way to look at it is by the divisibility lattice, where gcd is the
greatest lower bound. So 5 is the greatest lower bound of 10 and 15 in the lattice.
The counter-intuitive thing about this lattice is that the 'bottom' (the absolute lowest element) is 1 (1 divides everything), but the highest element, the one above everybody, is 0 (everybody divides 0).
So $\gcd(0, x)$ is the same as $\operatorname{glb}(0, x)$ and should be $x$, because $x$ is the lower bound of the two: they are not 'apart', and in the divisibility order $0$ is '$>$' $x$ (that is the counter-intuitive part).
In fact, the top answer can be generalized slightly: if $a \mid b$, then $\gcd(a,b)=a$ (and this holds in any algebraic structure where divisibility makes sense, e.g. a commutative, cancellative monoid).
To see why, well, it's clear that $a$ is a common divisor of $a$ and $b$, and if $\alpha$ is any common divisor of $a$ and $b$, then, of course, $\alpha \mid a$. Thus, $a=\gcd(a,b)$.
It might be partly a matter of convention. However, I believe that stating that $\gcd(8,0) = 8$ is safer. Indeed, $\frac{0}{8} = 0$, with no remainder. The check for a division is that dividend = divisor $\times$ quotient plus remainder. In our case, $0$ (dividend) $= 8$ (divisor) $\times\ 0$ (quotient), with no remainder. Now, why should $8$ be the GCD? Because, while the same check shows that $0$ has infinitely many divisors, the greatest common divisor of $0$ and $8$ cannot be greater than $8$, and for the reason given above, it is $8$.
|
Worldly cardinal

Every inaccessible cardinal is worldly. Nevertheless, the least worldly cardinal is singular and hence not inaccessible. The least worldly cardinal has cofinality $\omega$. Indeed, the next worldly cardinal above any ordinal, if any exist, has cofinality $\omega$. Any worldly cardinal $\kappa$ of uncountable cofinality is a limit of $\kappa$ many worldly cardinals.

Degrees of worldliness
A cardinal $\kappa$ is
$1$-worldly if it is worldly and a limit of worldly cardinals. More generally, $\kappa$ is $\alpha$-worldly if it is worldly and for every $\beta\lt\alpha$, the $\beta$-worldly cardinals are unbounded in $\kappa$. The cardinal $\kappa$ is hyper-worldly if it is $\kappa$-worldly. One may proceed to define notions of $\alpha$-hyper-worldly and $\alpha$-hyper${}^\beta$-worldly in analogy with the hyper-inaccessible cardinals. Every inaccessible cardinal $\kappa$ is hyper${}^\kappa$-worldly, and a limit of such kinds of cardinals.
The consistency strength of a $1$-worldly cardinal is stronger than that of a worldly cardinal, the consistency strength of a $2$-worldly cardinal is stronger than that of a $1$-worldly cardinal, etc.
The worldly cardinal terminology was introduced in lectures of J. D. Hamkins at the CUNY Graduate Center and at NYU.
Replacement Characterization
As long as $\kappa$ is an uncountable cardinal, $V_\kappa$ already satisfies $\text{ZF}^-$ ($\text{ZF}$ without the axiom schema of replacement). So, $\kappa$ is worldly if and only if $\kappa$ is uncountable and $V_\kappa$ satisfies the axiom schema of replacement. More analytically, $\kappa$ is worldly if and only if $\kappa$ is uncountable and for any function $f:A\rightarrow V_\kappa$ definable from parameters in $V_\kappa$ for some $A\in V_\kappa$, $f"A\in V_\kappa$ also.
|
10 Days Of Grad: Deep Learning From The First Principles.
Now that we have seen how neural networks work, we realize that understanding of the gradient flow is essential for survival. Therefore, we will revise our strategy on the lowest level. However, as neural networks become more complicated, calculation of gradients by hand becomes a murky business. Yet, fear not young padawan, there is a way out! I am very excited that today we will finally get acquainted with automatic differentiation, an essential tool in your deep learning arsenal. This post was largely inspired by Hacker's guide to Neural Networks. For comparison, see also the Python version.
Before jumping ahead, you may also want to check the previous posts:
The source code from this guide is available on Github. The guide is written in literate Haskell, so it can be safely compiled.
Why Random Local Search Fails
Following Karpathy's guide, we first consider a simple multiplication circuit. Well, Haskell is not JavaScript, so the definition is pretty straightforward:
forwardMultiplyGate = (*)
Or we could have written
forwardMultiplyGate x y = x * y
to make the function look more intuitively $f(x,y) = x \cdot y$. Anyway,
forwardMultiplyGate (-2) 3
returns -6. Exciting.
Now, the question: is it possible to change the input $(x,y)$ slightly in order to increase the output? One way would be to perform local random search.
_search tweakAmount (x, y, bestOut) = do
  x_try <- (x +) . (tweakAmount *) <$> randomDouble
  y_try <- (y +) . (tweakAmount *) <$> randomDouble
  let out = forwardMultiplyGate x_try y_try
  return $ if out > bestOut
             then (x_try, y_try, out)
             else (x, y, bestOut)
Not surprisingly, the function above represents a single iteration of a "for"-loop. What it does is randomly select points around the initial $(x, y)$ and check if the output has increased. If yes, then it updates the best known inputs and the maximal output. To iterate, we can use foldM :: (b -> a -> IO b) -> b -> [a] -> IO b. This function is convenient since we anticipate some interaction with the "external world" in the form of random number generation:
localSearch tweakAmount (x0, y0, out0) = foldM (searchStep tweakAmount) (x0, y0, out0) [1..100]
What the code essentially tells us is that we seed the algorithm with some initial values of x0, y0, and out0 and iterate from 1 till 100. The core of the algorithm is searchStep:
searchStep ta xyz _ = _search ta xyz
which is a convenience function that glues those two pieces together. It simply ignores the iteration number and calls _search. Now, we would like to have a random number generator within the range of [-1; 1). From the documentation, we know that randomIO produces a number between 0 and 1. Therefore, we scale the value by multiplying by 2 and subtracting 1:
randomDouble :: IO Double
randomDouble = subtract 1 . (*2) <$> randomIO
The <$> function is a synonym to fmap. What it essentially does is attaching the pure function subtract 1 . (*2), which has type Double -> Double, to the "external world" action randomIO, which has type IO Double (yes, IO = input/output) [1].
A hack for a numerical minus infinity:
inf_ = -1.0 / 0
Now, we run
localSearch 0.01 (-2, 3, inf_) several times:
(-1.7887454910045664,2.910160042416705,-5.205535653974539)
(-1.7912166830200635,2.89808308735154,-5.19109477484237)
(-1.8216809458018006,2.8372869694452523,-5.168631610010152)
In fact, we see that the outputs have increased from -6 to about -5.2. But the improvement is only about 0.8/100 = 0.008 units per iteration. That is an extremely inefficient method. The problem with random search is that each time it attempts to change the inputs in random directions. If the algorithm makes a mistake, it has to discard the result and start again from the previously known best position. Wouldn't it be nice if instead each iteration would improve the result at least by a little bit?
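For comparison (an editorial Python sketch mirroring the Haskell above, not from the original post): a single step along the exact gradient of $f(x,y) = xy$, namely $\partial f/\partial x = y$ and $\partial f/\partial y = x$, already improves the output noticeably, which is the point the next section develops.

```python
import random

def forward_multiply(x, y):
    return x * y

def local_search(x, y, tweak=0.01, iters=100, seed=0):
    """Random local search, mirroring the Haskell localSearch above."""
    rng = random.Random(seed)
    best = forward_multiply(x, y)
    for _ in range(iters):
        xt = x + tweak * (2 * rng.random() - 1)   # random point near (x, y)
        yt = y + tweak * (2 * rng.random() - 1)
        out = forward_multiply(xt, yt)
        if out > best:
            x, y, best = xt, yt, out
    return best

x0, y0 = -2.0, 3.0
best_random = local_search(x0, y0)

# A single step along the exact gradient (d(xy)/dx = y, d(xy)/dy = x):
step = 0.01
best_grad = forward_multiply(x0 + step * y0, y0 + step * x0)
```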
Automatic Differentiation
Instead of random search in random direction, we can make use of the precise direction and amount to change the input so that the output would improve. And that is exactly what the gradient tells us. Instead of manually computing the gradient every time, we can employ some clever algorithm. There exist multiple approaches: numerical, symbolic, and automatic differentiation. In his article, Dominic Steinitz explains the differences between them. The last approach, automatic differentiation is exactly what we need: accurate gradients with minimal overhead. Here, we will briefly explain the concept.
The idea behind automatic differentiation is that we explicitly define gradients only for elementary, basic operators. Then, we exploit the chain rule combining those operators into neural networks or whatever we like. That strategy will infer the necessary gradients by itself. Let us illustrate the method with an example.
Below we define both the multiplication operator and its derivative using the product rule, i.e. $\frac {d} {dt} x(t) y(t) = x(t) y'(t) + x'(t) y(t)$:
(x, x') *. (y, y') = (x * y, x * y' + x' * y)
The same can be done with addition, subtraction, division, and exponent:
(x, x') +. (y, y') = (x + y, x' + y')
x -. y = x +. (negate1 y)
negate1 (x, x') = (negate x, negate x')
(x, x') /. (y, y') = (x / y, (y * x' - x * y') / y^2)
exp1 (x, x') = (exp x, x' * exp x)
We also have
constOp for constants:
constOp :: Double -> (Double, Double)
constOp x = (x, 0.0)
Finally, we can define our favourite sigmoid $\sigma(x)$ combining the operators above:
sigmoid1 x = constOp 1 /. (constOp 1 +. exp1 (negate1 x))
Now, let us compute a neuron $f(x, y) = \sigma(a x + b y + c)$, where $x$ and $y$ are inputs and $a$, $b$, and $c$ are parameters
neuron1 [a, b, c, x, y] = sigmoid1 ((a *. x) +. (b *. y) +. c)
Now, we can obtain the gradient with respect to a at the point where $a = 1$, $b = 2$, $c = -3$, $x = -1$, and $y = 3$:
abcxy1 :: [(Double, Double)]
abcxy1 = [(1, 1), (2, 0), (-3, 0), (-1, 0), (3, 0)]
neuron1 abcxy1
(0.8807970779778823,-0.1049935854035065)
Here, the first number is the result of the neuron's output and the second one is the gradient with respect to a ($\frac d {da}$). Let us verify the math behind the result:
$$\begin{equation}\sigma(ax + by + c) | _{a=(a,1), b=(b,0), c=(c,0), x=(x,0), y=(y,0)} = \\
\sigma[(a, 1) (x, 0) + (b, 0) (y, 0) + (c, 0)] = \\ \sigma[(ax, a \cdot 0 + 1 \cdot x) + (by, 0 \cdot b + 0 \cdot y) + (c, 0)] = \\ \sigma[(ax + by + c, x)] = \\ \frac {(1, 0)} {(1, 0) + \exp \left[ -(ax + by + c, x) \right]} = \\ \frac {(1, 0)} {(1, 0) + \exp \left[ (-ax - by - c, -x) \right]} = \\ \frac {(1, 0)} {(1, 0) + (\exp (-ax - by - c), -x \exp (-ax - by - c))} = \\ \frac {(1, 0)} {(1 + \exp(-ax - by - c), -x \exp(-ax - by - c))} = \\ \left( \sigma(ax + by + c), \frac {x \exp(-ax - by -c)} {(1 + \exp(-ax - by -c))^2} \right). \end{equation} $$
The first expression is the result of neuron's computation and the second one is the exact analytic expression for $\frac d {da}$. That is all the magic behind automatic differentiation! In a similar way, we can obtain the rest of the gradients:
neuron1 [(1, 0), (2, 1), (-3, 0), (-1, 0), (3, 0)]
(0.8807970779778823,0.3149807562105195)
neuron1 [(1, 0), (2, 0), (-3, 1), (-1, 0), (3, 0)]
(0.8807970779778823,0.1049935854035065)
neuron1 [(1, 0), (2, 0), (-3, 0), (-1, 1), (3, 0)]
(0.8807970779778823,0.1049935854035065)
neuron1 [(1, 0), (2, 0), (-3, 0), (-1, 0), (3, 1)]
(0.8807970779778823,0.209987170807013)
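As an independent cross-check of these numbers (an editorial sketch in Python, outside the Haskell post), the same dual-number rules reproduce the value and the $\frac d {da}$ gradient exactly:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # function value
    dot: float   # derivative w.r.t. the chosen input

    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    def __neg__(self):
        return Dual(-self.val, -self.dot)
    def __truediv__(self, o):
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / o.val**2)

def dexp(d):
    e = math.exp(d.val)
    return Dual(e, d.dot * e)   # chain rule for exp

def neuron(a, b, c, x, y):
    one = Dual(1.0, 0.0)
    return one / (one + dexp(-(a * x + b * y + c)))

# Seed a with derivative 1 to get d/da, exactly like abcxy1 above:
out = neuron(Dual(1, 1), Dual(2, 0), Dual(-3, 0), Dual(-1, 0), Dual(3, 0))
# out.val ~ 0.8807970779778823, out.dot ~ -0.1049935854035065
```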
Introducing backprop library
The backprop library was specifically designed for differentiable programming. It provides combinators to reduce our mental overhead. In addition, the most useful operations such as arithmetics and trigonometry, have already been defined in the library. See also hmatrix-backprop for linear algebra. So all you need for differentiable programming now is to define some functions:
neuron :: Reifies s W => [BVar s Double] -> BVar s Double
neuron [a, b, c, x, y] = sigmoid (a * x + b * y + c)

sigmoid x = 1 / (1 + exp (-x))
Here, the BVar s wrapper signifies that our function is differentiable. Now, the forward pass is:
forwardNeuron = BP.evalBP (neuron . BP.sequenceVar)
We use the sequenceVar isomorphism to convert a BVar of a list into a list of BVars, as required by our neuron equation. And the backward pass is:
backwardNeuron = BP.gradBP (neuron . BP.sequenceVar)

abcxy0 :: [Double]
abcxy0 = [1, 2, (-3), (-1), 3]

forwardNeuron abcxy0
-- 0.8807970779778823

backwardNeuron abcxy0
-- [-0.1049935854035065,0.3149807562105195,0.1049935854035065,0.1049935854035065,0.209987170807013]
Note that all the gradients are in one list, matching the type of the first neuron argument.
Summary
Modern neural networks tend to be complex beasts. Writing backpropagation gradients by hand can easily become a tedious task. In this post we have seen how automatic differentiation can face this problem.
In the next posts we will apply automatic differentiation to real neural networks. We will talk about batch normalization, another crucial method in modern deep learning. And we will ramp it up to convolutional networks allowing us to solve some interesting challenges. Stay tuned!
Further reading
Visual guide to neural networks
Backprop documentation
Article on backpropagation by Dominic Steinitz

1. In fact, 64 bit double precision is not necessary for neural networks, if not an overkill. In practice you would prefer to use a 32 bit Float type.
|
Research | Open Access | Published
Minimal thinness with respect to the Schrödinger operator and its applications on singular Schrödinger-type boundary value problems
Boundary Value Problems volume 2019, Article number: 91 (2019)
Abstract
The application of the new criteria for minimally thin sets with respect to the Schrödinger operator to an approximate solution of singular Schrödinger-type boundary value problems are discussed in this study. The method is based on approximating functions and their derivatives by using the natural and weakened total energies. This study shows that the new criteria are very effective and powerful tools in solving such problems. At the end of the paper, we are also concerned with the boundary behaviors of solutions for a kind of quasilinear Schrödinger equation.
Introduction
In this paper, we further consider the following Schrödinger problem (see [1]):
where \(x \in \mathbb{R}^{n}\), \(z:\mathbb{R}\times \mathbb{R}^{n} \to \mathbb{C}\), \(a,W:\mathbb{R}^{n}\to \mathbb{R}\) is a given potential,
k is real constant, and l and h are real functions. The above quasilinear equations have been accepted as models of several physical phenomena corresponding to various types of l; we refer to [2] and the references given therein for physical applications of these problems. Specifically, we would like to mention that the superfluid film equation in plasma physics has this structure for \(l(s)=s\) (see e.g. [3, 4]), while in the case \(l(s)=(1 +s)^{1/2}\), (1) models the self-channeling of a high-power ultrashort laser in matter (see e.g. [5, 6]).
We look for standing wave solutions of (1), that is, solutions of the type \(z(t,x)=\exp (-iEt)u(x)\), where \(E \in \mathbb{R}\) and \(u>0\) is a real function. Inserting
z into (1), with \(l(s)=s\) and \(l(s)=(1 +s^{2})^{1/2}\), yields, respectively, the following equations (see e.g. [7]):
where \(x \in \mathbb{R}^{n}\) and \(V_{\infty }=W-E\).
It is well known that an unknown Borel probability measure on \(W= S\times T\) controls the sampling process, where \(T=\mathbb{R}\) and
S is a compact metric space in \(\mathbb{R}^{n}\). As in [8], the exact weak solutions of (1) can be defined by \(g_{\varrho }(s)= \int _{T} y \,d\varrho (t|s)\), where \(\varrho (\cdot |s)\) is the conditional probability measure induced by ϱ on T given \(s\in S\).
To our knowledge, the criteria for minimally thin sets with respect to the Schrödinger operator (1) was introduced for the first time in the context of the stationary Schrödinger equations in [9, 10]. In 2018, Jiang, Zhang and Li (see [11]) further improved this complex method and applied to study meromorphic solutions for the linear differential equations with analytic coefficients and obtain some applications. Recently, Zhang (see [12, 13]) defined a new type of minimal thinness with respect to the stationary Schrödinger operator, established new criteria for it and applied the result to study growth properties at infinity of the maximum modulus with respect to the Schrödinger operator.
In this paper, we will continue to apply new criteria for solutions for a kind of quasilinear Schrödinger equations. Although we are motivated here by [9,10,11,12,13], there were substantial difficulties to adapt the above approach to the present situation. Let \(\mathfrak{H}_{E}\) be the completion of the linear span of the set of functions \(\{E_{s} :=E(s,\cdot ) : s \in S \}\) equipped with (see [8, 14])
Let \(s\in S\) and \(g\in \mathfrak{H}_{E}\). Define (see [15, Remark 2.3])
where
Define (see [17])
where
and
In order to study the boundary behaviors of \(g_{\mathbf{w},\varsigma }\), we derive
The remainder of this paper is organized as follows. In Sect. 2, we will provide the main results. In Sect. 3, some basic but important estimates and properties are summarized. The proofs of main results will be given in Sect. 4. Section 5 contains the conclusions of the paper.
Main results
The integral operator \(L_{E}:L_{\varrho _{S}} ^{2}(S)\rightarrow L_{ \varrho _{S}} ^{2}(S)\) is defined by
Let \(\{\mu _{i}\} \) be the eigenvalues of \(L_{E}\) and \(\{e_{i}\}\) be the corresponding eigenfunctions. Then we define
for \(g\in L_{\varrho _{S}} ^{2}(S)\). We assume that \(g_{\varrho }\) satisfies \(L_{E}^{-r}g_{\varrho }\in L^{2}_{\varrho _{S}}\), where
r is a positive constant depending on the size of the initial data in a suitable norm.
Let \(c_{p}~(0< p<2)\) be a positive constant. Define (see [24])
where
Now we are in a position to obtain the existence of solutions for the problem (1).
Theorem 1 and Proposition 1 Let \(L_{E}^{-r}g_{\varrho }\in L^{2}_{\varrho _{S}}\), where \(r>0\). Then
and
Finally, we further study the boundary behaviors for solutions for the problem (1).
Theorem 2 Let the assumptions of Theorem 1 hold. Then where \(0<\delta <1\). Theorem 3 Let the assumptions of Theorem 1 hold. Then where \(0<\delta <1\) and Lemmas
Some basic but important estimates are needed in this section. The following lemma indicates that the natural and weakened total energies are conserved in time.
Lemma 1 We have the following estimates: Proof
Multiplying the first equation by \(g_{\varrho }'\), we obtain
It follows that
Therefore
which leads to
which is equivalent to (11).
By taking the sum of the resulting two identities we obtain
using the symmetry of the matrix \((-\partial _{g}^{2} )^{-1}\) we obtain
□
From Lemma 1, we deduce the following result.
Lemma 2 Let \(0 \leq \delta \leq \frac{\delta _{0}}{3}\). Then for a positive constant Cτ depending only on τ. Proof
We recall
and we can write
It follows from Lemma 1 that
On the other hand
Hence
Integrating this last inequality over \(t \in [0,\tau ]\) and using the fact that the energy \(\widetilde{\mathfrak{E}}_{\tau ,g}(t)\) is conservative, we deduce that
Moreover, thanks to inequality (17), we have
and inserting this last equation into (18) yields
However, since
for all \(\delta \leq \frac{\delta _{0}}{3}\), we deduce from (19) that
We complete this subsection with the following lemma.
Lemma 3 We have where \(0 \leq \delta \leq \min ( \delta _{0}, \sqrt{\delta _{0}} )\). Proof
First, we recall the following estimates:
from the proof of Lemma 2.
Taking the sum of these two inequalities, we obtain
which proves the inequality (20).
□
Proofs of main results
Now we derive the learning rates.
Proof of Theorem 1
Let \(\mathbf{{y}}=(t_{1}, t_{2}, t_{3},\ldots , t_{m})^{ \tau }\), \(K[{\mathbf{{s}}}]=(E(s_{i},s_{j}))_{i,j=1}^{m}\) and \(\mathbf{{a}}^{\mathbf{w}}=(a_{1}^{\mathbf{w}},\ldots ,a _{m}^{\mathbf{w}})\) be the coefficient of \(g_{\mathbf{w},\varsigma }\). It follows from the representation theorem (see [27, 28]) that
for \(i=1,2,\ldots ,m\).
By the Hölder inequality, we have
It follows that
from (5).
Thus
Since
we get
This yields our desired estimation. □
Proof of Theorem 2
Let
for any \(z=(u,t)\in Z\). Then
By (3) we have
Combining with (5), we have
Therefore
and
By Lemma 1,
□
Proof of Theorem 3
Consider the set of functions
We have
from (5), which yields
So
which implies that
Then we get
for any \(h_{1}\), \(h_{2}\in \mathfrak{G}_{R}\), which yields
It follows from the capacity condition (7) that
By applying Lemma 2 to \(\mathscr{G}\) with \(Q=8M^{2}\) we have
for any \(0<\delta <1\), where
If we take
then we can complete the proof of Theorem 3. □
Conclusion
The application of the new criteria for minimally thin sets with respect to the Schrödinger operator to an approximate solution of singular Schrödinger-type boundary value problems were discussed in this study. The method was based on approximating functions and their derivatives by using the natural and weakened total energies. This study showed that the new criteria were very effective and powerful tools in solving such problems. At the end of the paper, we were also concerned with the boundary behaviors of solutions for a kind of quasilinear Schrödinger equation.
References 1.
Cottle, R.: Nonlinear programs with positively bounded jacobians. Ph.D. Dissertation, Department of Mathematics, University of California, Berkeley (1964)
2.
Glowinski, R., Lions, J., Trémolières, R.: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981)
3.
Cui, Y., Ma, W., Sun, Q., Su, X.: New uniqueness results for boundary value problem of fractional differential equation. Nonlinear Anal., Model. Control
23(1), 31–39 (2018) 4.
Cui, Y., Ma, W., Wang, X., Su, X.: Uniqueness theorem of differential system with coupled integral boundary conditions. Electron. J. Qual. Theory Differ. Equ.
9, 1 (2018) 5.
Zou, Y., He, G.: A fixed point theorem for systems of nonlinear operator equations and applications to \((p1,p2)\)-Laplacian system. Mediterr. J. Math.
15(2), 74 (2018) 6.
Zhang, X., Liu, L., Wu, Y., Cui, Y.: The existence and nonexistence of entire large solutions for a quasilinear Schrödinger elliptic system by dual approach. J. Math. Anal. Appl.
464(2), 1089–1106 (2018) 7.
Lions, J., Stampaccia, G.: Variational inequalities. Commun. Pure Appl. Math.
20, 493–512 (1967) 8.
Bremermann, J.: Distributions, Complex Variables, and Fourier Transforms. Addison-Wesley, Reading (1965)
9.
Zhao, H., Ma, W.: Mixed lump-kink solutions to the KP equation. Comput. Math. Appl.
74(6), 1399–1405 (2017) 10.
Funding
This work was supported by the Post-Doctoral Applied Research Projects of Qingdao (no. 2015122) and the Scientific Research Foundation of Shandong University of Science and Technology for Recruited Talents (no. 2014RCJJ032).
Competing interests
The author declares that he has no competing interests.
|
I'm not sure how to go about this proof. I just need help getting started. Is there a way to prove it algebraically?
Take the prime-power decomposition of $m$ and $n$. We have \begin{array}{rcl} m &=& p_1^{a_1}\times p_2^{a_2} \times \ldots \times p_k^{a_k} \\ n &=& p_1^{b_1}\times p_2^{b_2}\times \ldots \times p_k^{b_k} \end{array} where each of the $p_i$ are distinct primes and each of the $a_j$ and $b_{\ell}$ are non-negative integers. For example, if $m=4$ and $n=18$ then we write $m = 2^2 \times 3^0$ and $n = 2^1 \times 3^2$.
The important part of this trick is that we write both $m$ and $n$ as a product of the same primes, even if some of the powers are zero.
By definition: \begin{array}{rcl} \text{lcm}(m,n) &=& p_1^{\max(a_1,b_1)}\times \cdots \times p_k^{\max(a_k,b_k)} \\ \text{gcd}(m,n) &=& p_1^{\min(a_1,b_1)}\times \cdots \times p_k^{\min(a_k,b_k)} \end{array} Clearly $\max(a_i,b_i) + \min(a_i,b_i) = a_i + b_i$ and hence \begin{array}{rcl} \text{lcm}(m,n) \times \gcd(m,n) &=& p_1^{a_1+b_1} \times \cdots \times p_k^{a_k+b_k} \\ &=& (p_1^{a_1} \times p_1^{b_1}) \times \cdots \times (p_k^{a_k} \times p_k^{b_k}) \\ &=& m \times n \end{array}
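This argument can also be checked numerically. Below is a Python sketch (the helper names are my own) that builds the lcm from the maxima of the prime exponents, exactly as above, and verifies $\operatorname{lcm}(m,n) \times \gcd(m,n) = m \times n$:

```python
from math import gcd
from collections import Counter

def factorize(n: int) -> Counter:
    """Prime-power decomposition of n as {prime: exponent}."""
    factors, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:
        factors[n] += 1
    return factors

def lcm_by_max(m: int, n: int) -> int:
    """lcm built from the max of each prime exponent, as in the proof."""
    fm, fn = factorize(m), factorize(n)
    out = 1
    for p in set(fm) | set(fn):
        out *= p ** max(fm[p], fn[p])
    return out

# The identity lcm(m, n) * gcd(m, n) == m * n, checked on many pairs:
for m in range(1, 60):
    for n in range(1, 60):
        assert lcm_by_max(m, n) * gcd(m, n) == m * n
```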
There are many proofs. We give two, one that uses the Unique Factorization Theorem, and another that uses Bezout's Identity.
First Proof: Let $p_1, p_2, \ldots, p_k$ be the primes that occur in the prime power factorization of $M$ or $N$ or both. Let$$M=p_1^{\alpha_1}p_2^{\alpha_2} \cdots p_k^{\alpha_k}\quad\text{and}\quad N=p_1^{\beta_1}p_2^{\beta_2} \cdots p_k^{\beta_k}.$$Note that we are allowing some of the $\alpha_i$ and $\beta_j$ to be $0$.
You may have already seen the theorem that the gcd of $M$ and $N$ is equal to $$ p_1^{\delta_1}p_2^{\delta_2} \cdots p_k^{\delta_k},$$ and their lcm is $$ p_1^{\mu_1}p_2^{\mu_2} \cdots p_k^{\mu_k},$$ where $\delta_i=\min(\alpha_i,\beta_i)$ and $\mu_i=\max(\alpha_i,\beta_i)$.
Then the theorem follows from the fact that $\delta_i+\mu_i=\alpha_i+\beta_i$. (The minimum of two numbers, plus the maximum of two numbers, is the sum of the two numbers.)
Second Proof: We use Bezout's Identity, which says that if $d$ is the gcd of $M$ and $N$, there exist integers $x$ and $y$ such that $Mx+Ny=d$.
Note that $d$ divides $MN$. Let $m=\frac{MN}{d}$. We show that $m$ is the lcm of $M$ and $N$. This will finish things.
Certainly $m$ is a common multiple of $M$ and $N$. Let $n$ be a common positive multiple of $M$ and $N$. We will show that $m$ divides $n$. That will show that $m\le n$, making $m$ the
least common multiple.
We have $$\frac{n}{m}=\frac{nd}{MN}=\frac{n(Mx+Ny)}{MN}=\frac{n}{N}x+\frac{n}{M}y.\tag{1}$$ The right-hand expression in (1) is an integer, and therefore $\frac{n}{m}$ is an integer, that is, $n$ is a multiple of $m$.
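The Bezout coefficients used in this proof are easy to compute, so the construction can be tried out directly. A Python sketch (the function name is my own) using the standard extended Euclidean algorithm:

```python
def extended_gcd(M: int, N: int):
    """Return (d, x, y) with d = gcd(M, N) and M*x + N*y == d (Bezout)."""
    if N == 0:
        return M, 1, 0
    d, x, y = extended_gcd(N, M % N)
    return d, y, x - (M // N) * y

d, x, y = extended_gcd(12, 18)
assert d == 6 and 12 * x + 18 * y == d

# m = M*N/d is a common multiple, as in the second proof:
m = 12 * 18 // d
assert m == 36 and m % 12 == 0 and m % 18 == 0
```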
Theorem 1: For any $N,M$, $$\gcd\left(\frac{N}{\gcd(N,M)},\frac{M}{\gcd(N,M)}\right)=1$$ Theorem 2: For any $N,M,K$, $$\mathrm{lcm}(NK,MK)=K\cdot\mathrm{lcm}(N,M)$$ Theorem 3: If $\gcd(N,M)=1$ then if $N|K$ and $M|K$ then $NM|K$. Corollary: If $\gcd(M,N)=1$ then $\mathrm{lcm}(N,M)=NM$.
From these, you can prove the above result.
(1) and (3) have nice proofs using Bézout's identity. (2) is a direct proof. The corollary follows from Theorem (3), and the final result follows from (1) and the corollary.
I don't know what you mean by "algebraically". I'll show you a proof.
I write $(m,n)$ for $\gcd$ and $[m,n]$ for $\operatorname{lcm}$. If $(m,n) = 1$, then $m$ and $n$ both divide some integer $r$ if and only if $mn$ divides it (easy consequence of the Euclidean algorithm); it means that $[m,n] = mn$. Otherwise, let $(m,n) = d$. Then since $(m/d, n/d) = 1$ and $[km,kn] = k[m,n]$ for any integer $k$, we have $$ [m,n] = \left[ \frac md d , \frac nd d \right] = \left[ \frac md, \frac nd \right] d = \frac md \frac nd d = \frac {mn}{d}, $$ hence $[m,n](m,n) = [m,n]d = mn$.
Please correct me if I'm wrong (I mean it!), but it seems to me that unique factorization (let alone the Euclidean algorithm or Bezout's identity) is not needed to prove this. I'll give a proof of the generalization of the theorem for an arbitrary integral domain $R$ (but see notes right after the statement explaining how to specialize the theorem to the case $R = \mathbb{Z}$).
Let $R$ be an integral domain, and let $a, b$ be elements of $R$ having both
a gcd $d$ and an lcm $m$ in $R$. Then there exists a unit $u \in R$ such that $dm = uab$. Note: as the wording suggests, uniqueness of gcds and/or lcms is not assumed here. Also, when $R = \mathbb{Z}$, the only units (aka "invertible elements") are $\pm 1$, and furthermore, if one adopts the common convention that both the $\gcd(a, b)$ and the $\mathrm{lcm}(a, b)$ are defined to be positive, then the $\gcd(a, b)$ and $\mathrm{lcm}(a, b)$ are unique, and the theorem's punchline therefore reduces to $$\gcd(a, b) \times \mathrm{lcm}(a, b) = |ab|\;.$$ Proof: Since $ab$ is a common multiple of $a$ and $b$, it follows from the definition 1 of lcm that there exists a $\delta \in R$ such that $\delta m = ab$. On the other hand, again by the definition of lcm, there exist $i,j\in R$ such that $ai = m = bj$. Multiplying through by $\delta$ yields
$$\delta ai = \delta m = \delta bj\,.$$
Replacing the middle term with $ab\;(=\delta m)$ and cancelling $a$ from the left equality, and $b$ from the right one, we get that $\delta i = b$ and $\delta j = a$. This shows that $\delta$ is a common divisor of both $a$ and $b$.
The definition of gcd now implies that there exists a $u \in R$ such that $\delta u = d$, and therefore,
$$dm = u\delta m = uab\;.$$
It remains to be shown that $u$ is a unit. Now, again by the definition of gcd, there exist $r, s \in R$ such that $dr = a$ and $ds = b$. Define $\mu = rds$. Then
$$d\mu = dr\,ds = a\,ds = dr\,b = ab\;.$$
Cancelling $d$ in the first three equalities above shows that $\mu = as = br.$ This means that $\mu$ is a common multiple of both $a$ and $b$, and therefore, by the definition of lcm, there exists $v\in R$ such that $\mu = vm$.
Substituting this expression for $\mu$ into the equality $d\mu = ab$ established above yields $d\,vm = vdm = ab$, which becomes $uvdm = uab$ after multiplying through by $u$. Putting this together with the previously obtained $dm = uab$ gives $uvdm = dm$, which, after cancelling out the $dm$ factor reduces to $uv = 1$. Hence $u$ is a unit.
1 This proof repeatedly invokes the definitions of gcd and lcm for a general integral domain; these definitions are entirely consistent with those for the gcd and lcm in $\mathbb{Z}$, but, for the sake of clarity and completeness, here they are: $d \in R$ is a divisor of $a \in R$ if there exists $r \in R$ such that $dr = a$; $d$ is a greatest common divisor (gcd) of $a, b\in R$ if (1) it is a divisor of both $a$ and $b$, and (2) any other divisor of $a$ and $b$ is also a divisor of $d$; note that, in general, for any $a, b \in R$ there may be any number of gcds—including none; if $d$ and $d^{\prime}$ are both gcds of $a$ and $b$, then they are associates (i.e. there exists a unit $u \in R$ such that $d = ud^{\prime}$); this is so because they must be divisors of each other. Similarly, $m \in R$ is a multiple of $a\in R$ if there exists $q \in R$ such that $m = aq$; $m$ is a least common multiple (lcm) of $a, b\in R$ if (1) it is a multiple of both $a$ and $b$, and (2) any other multiple of $a$ and $b$ is also a multiple of $m$. Similar considerations on uniqueness as those for gcds apply to lcms. In particular, if $m$ and $m^{\prime}$ are both lcms of $a, b\in R$ then $m$ and $m^{\prime}$ are associates.
Given $n,m$ and $d=\gcd(n,m)$, we can then denote $n=da$,$m=db$ where $a,b$ are coprime. In that language, $\operatorname{lcm}(n,m)$ would be $dab$ (it is fairly straightforward to show that they both divide each other), which gives us the claim.
CLAIM: $(a,b)[a,b]=ab$. Proof. Let $d=(a,b)$ and $e=\dfrac{ab}{[a,b]}$. We will prove that $d=e$. Recall we define $d$ as the (unique) positive number such that $d$ divides both $a$ and $b$, and if $d'$ is any other common divisor, $d'\mid d$. We wish to show $e$ has these two properties. First, note that $$\frac{a}{e} = \frac{{\left[ {a,b} \right]}}{b} \in {\Bbb Z}$$ $$\frac{b}{e} = \frac{{\left[ {a,b} \right]}}{a} \in {\Bbb Z}$$ since both $a,b$ divide $[a,b]$, so $e$ is a common divisor. Now suppose $d'$ is another common divisor. Recall that $[a,b]$ is the unique positive number such that $a,b$ both divide $[a,b]$, and whenever $a,b$ both divide some $f$, $[a,b]$ divides this $f$. We will use this to finish off the proof. Since $d'$ is a common divisor, $\dfrac{ab}{d'}$ is an integer. Moreover, both $a,b$ divide it, so it is a common multiple. Taking $f = \dfrac{ab}{d'}$, it follows that $$\frac{f}{{\left[ {a,b} \right]}} = \frac{{ab}}{{\left[ {a,b} \right]d'}} = \frac{e}{{d'}}$$ is an integer, so $d'\mid e$, whence $d=e$. $\blacktriangle$
|
The Pythagorean Theorem can be derived in algebraic form from a geometric construction. Here you will see why the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the opposite and adjacent sides in a right triangle.
$a$, $b$ and $c$ are the lengths of the sides of a right triangle, and $\alpha$ and $\beta$ are two of its angles; the third angle is of course a right angle. Let's first study the properties of the right triangle before deriving the Pythagorean Theorem in algebraic form.
According to sum of angles of a triangle rule, the sum of the interior angles of this triangle is $180^°$.
$\alpha + \beta + 90^°$ $\,=\,$ $180^°$
$\implies$ $\alpha + \beta$ $\,=\,$ $180^°-90^°$
$\implies$ $\alpha + \beta$ $\,=\,$ $90^°$
This equation expresses that the angles $\alpha$ and $\beta$ are complementary.
Remember this property and it will be used in deriving the Pythagoras Theorem mathematically.
Take four identical copies of the right triangle and join their hypotenuses so that the four triangles enclose a closed geometric shape.
It is time to study the geometric shape formed by the joining of the four right triangles.
The opposite and adjacent sides of four right triangles form a square externally and the side of the square is $a+b$. Therefore, the area of this square is ${(a+b)}^2$.
The hypotenuses of the four right triangles bound a quadrilateral internally. The length of each hypotenuse is $c$, so $c$ is also the length of each side of this quadrilateral.
Even though the length of each side of the internal quadrilateral is $c$, we don’t know exactly what kind of quadrilateral it is. It can be determined by finding the angles of this internal quadrilateral.
Each angle of the internal quadrilateral forms a straight angle together with the angles $\alpha$ and $\beta$ of the two adjacent right triangles. The same holds at the other three corners, so the four angles of the internal quadrilateral are equal geometrically; call each of them $\gamma$.
Therefore, the sum of the angles $\alpha$, $\beta$ and $\gamma$ is equal to $180^°$.
$\alpha + \beta + \gamma$ $\,=\,$ $180^°$
It is already proved that $\alpha + \beta$ $\,=\,$ $90^°$ in the previous step.
$\implies$ $90^°+\gamma$ $\,=\,$ $180^°$
$\implies$ $\gamma$ $\,=\,$ $180^°-90^°$
$\,\,\, \therefore \,\,\,\,\,\, \gamma \,=\, 90^°$
It is proved that the angle $\gamma$ is a right angle and the other three angles are also $90^°$. Therefore, it is proved that the internal quadrilateral is a square.
The hypotenuses of all four right triangles are now sides of the square. It is taken that the length of hypotenuse of each right triangle is $c$. So, the length of each side of the square is also $c$ geometrically.
Now, the area of the internal square can be calculated by using area formula of the square.
Therefore, the area of the square is $c \times c$ and it is $c^2$.
Remember the area of this internal square and it will be used in the upcoming step.
A rectangle can be formed by joining the hypotenuses of two right triangles. In our case, the four right triangles can be paired to form two rectangles.
The geometrical arrangement formed two rectangles and two squares. There is no need to find the areas of the rectangles but it is essential to find the areas of the two squares.
The length of each side of the first square is $a$, so its area is $a^2$. The length of each side of the second square is $b$, so its area is $b^2$. The sum of the areas of the two squares is $a^2+b^2$.
It can be observed that the square of area $c^2$ has been rearranged into the two squares of areas $a^2$ and $b^2$. Therefore, it is geometrically proved that the area $c^2$ is equal to the sum of the areas of the two squares.
$\therefore \,\,\,\,\,\,$ $c^2 = a^2+b^2$
Here, $c$ is the length of the hypotenuse, and $a$ and $b$ are the lengths of the adjacent and opposite sides of the right triangle.
Therefore, it is proved that the square of length of hypotenuse is equal to the sum of squares of the opposite and adjacent sides of the right triangle.
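As a quick numerical illustration (a Python sketch; the function name is my own), the classic 3-4-5 triangle satisfies the relation, and the outer square's area $(a+b)^2$ accounts for the inner square $c^2$ plus the four triangles:

```python
import math

def hypotenuse(a: float, b: float) -> float:
    # c = sqrt(a^2 + b^2), from c^2 = a^2 + b^2 derived above
    return math.sqrt(a * a + b * b)

# The classic 3-4-5 right triangle:
assert hypotenuse(3, 4) == 5

# Area accounting in the four-triangle construction: the outer square
# (a+b)^2 equals the inner square c^2 plus four triangle areas a*b/2.
a, b = 3.0, 4.0
c = hypotenuse(a, b)
assert math.isclose((a + b) ** 2, c ** 2 + 4 * (a * b / 2))
```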
|
The lower attic

From Cantor's Attic. Latest revision as of 13:37, 27 May 2018.

[[File:SagradaSpiralByDavidNikonvscanon.jpg | thumb | Sagrada Spiral photo by David Nikonvscanon]]
Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
* $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
* stable ordinals
* the ordinals of infinite time Turing machines, including admissible ordinals and relativized Church-Kleene $\omega_1^x$
* Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
* the omega one of chess:
** $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ = the supremum of the game values for white of all positions in infinite chess
** $\omega_1^{\mathfrak{Ch},c}$ = the supremum of the game values for white of the computable positions in infinite chess
** $\omega_1^{\mathfrak{Ch}}$ = the supremum of the game values for white of the finite positions in infinite chess
* the Takeuti-Feferman-Buchholz ordinal
* the Bachmann-Howard ordinal
* the large Veblen ordinal
* the small Veblen ordinal
* the Extended Veblen function
* the Feferman-Schütte ordinal $\Gamma_0$
* $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
* indecomposable ordinal
* the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$
* Hilbert's hotel and other toys in the playroom
* $\omega$, the smallest infinity
* down to the parlour, where large finite numbers dream
|
For any given load, a switcher will transfer a given amount of energy thousands of times per second. This is how the buck regulator works.
Let's say your op-amp is switching at 10kHz (because it's a slow sort of device and will have slew rate problems compared to other devices). Let's also say you are aiming to deliver 5V across a 10 ohm resistor. Resistor power is 25/10 watts = 2.5 watts.
To calculate energy per switching cycle divide this power by frequency because power = joules per second. At 10kHz, the energy you transfer per switch cycle is 250\$\mu J\$.
This energy powers your load resistor but, if you removed your load resistor, this energy gets dumped into the output capacitor and its voltage rises a little (or a lot) higher than normal.
Let's say your output capacitor is 10uF - if suddenly it was imbibed with 250\$\mu J\$, how much would it rise in voltage?
We know that capacitor energy is \$\dfrac{C V^2}{2}\$ therefore we can calculate the voltage rise and this is: -
\$\sqrt{\dfrac{250\times 10^{-6} \times 2}{10\times 10^{-6}}}\$ = 7.07V.
It's a little bit subtler than this - in the above I assumed the capacitor was being charged with energy from a zero voltage state. In fact it already has 5V across it and this means that the previously stored energy + influx energy (from the inductor) is 125\$\mu J\$ + 250\$\mu J\$ = 375\$\mu J\$.
If you do the reverse math, the peak voltage on the capacitor becomes 8.66V i.e. 3.66 volts higher than the 5V rail.
You could put an argument together to consider the losses in the diode also - this may trim half a volt off the absolute peak voltage.
So, you either need to increase the capacitance a lot or decrease the transferred energy per cycle by increasing the operating frequency. Modern switchers regularly operate at 500kHz and this means the energy per cycle reduces from 250\$\mu J\$ to 5\$\mu J\$ in this example.
Should this be the case (500kHz operation), the rogue energy from the inductor would make the capacitor's stored energy 130\$\mu J\$ and this means a peak voltage of 5.1 volts - probably quite acceptable for load dumping on a switcher.
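The numbers above can be reproduced with a small Python sketch (the function name is my own; it assumes one full cycle's energy is dumped into the output capacitor on top of what is already stored, as in the worked example):

```python
import math

def peak_voltage(v_rail, c, e_cycle):
    """Capacitor voltage after one cycle's energy e_cycle (J) is added
    to the energy already stored at v_rail, using E = C*V^2/2."""
    e_total = 0.5 * c * v_rail**2 + e_cycle
    return math.sqrt(2 * e_total / c)

C = 10e-6          # 10 uF output capacitor
V = 5.0            # regulated rail
P = V**2 / 10      # 2.5 W into the 10 ohm load

for f in (10e3, 500e3):
    e = P / f      # energy transferred per switching cycle
    print(f"{f/1e3:.0f} kHz: {e*1e6:.0f} uJ/cycle -> peak {peak_voltage(V, C, e):.2f} V")
```

At 10 kHz this reproduces the 8.66 V peak above; at 500 kHz the peak drops to about 5.10 V.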
Operating at higher frequencies requires faster silicon, but the ability to control load variations (and their repercussions) on a cyclic basis means much tighter control of the output voltage.
This is just an example to see where you might be going wrong.
|
Bernstein inequality
$ \newcommand{\expect}{\mathbb{E}} \newcommand{\prob}{\mathbb{P}} \newcommand{\abs}[1]{\left|#1\right|} $
Bernstein's inequality in probability theory is a more precise formulation of the classical Chebyshev inequality in probability theory, proposed by S.N. Bernshtein [Be2] in 1911; it permits one to estimate the probability of large deviations by a monotone decreasing exponential function. In fact, if the equations \[ \expect X_j=0,\quad \expect X_j^2=b_j,\quad j=1,\ldots,n, \] hold for the independent random variables $X_1,\ldots,X_n$ with \[ \expect\abs{X_j}^l \leq \frac{b_j}{2}H^{l-2}l! \] (where $l>2$ and $H$ is a constant independent of $j$), then the following inequality of Bernstein (where $r>0$) is valid for the sum $S_n=X_1+\cdots+X_n$: \begin{equation}\label{eq1} \prob\left( \abs{S_n} > r \right) \leq 2\exp\left( - \frac{r^2}{2(B_n + Hr)} \right), \end{equation} where $B_n = \sum b_j$. For identically-distributed bounded random variables $X_j$ ($\expect X_j = 0$, $\expect X_j^2 = \sigma^2$ and $\abs{X_j}\leq L$, $j=1,\ldots,n$) inequality \ref{eq1} takes its simplest form: \begin{equation}\label{eq2} \prob\left( \abs{S_n} > t\sigma\sqrt{n} \right) \leq 2\exp\left( - \frac{t^2}{2(1 + a/3)} \right), \end{equation} where $a = Lt/(\sqrt{n}\,\sigma)$. A.N. Kolmogorov gave a lower estimate of the probability in \ref{eq1}. The Bernstein–Kolmogorov estimates are used, in particular, in proving the law of the iterated logarithm. Some idea of the accuracy of \ref{eq2} may be obtained by comparing it with the approximate value of the left-hand side of \ref{eq2} which is obtained by the central limit theorem in the form \[ \frac{2}{\sqrt{2\pi}}\int_t^\infty \mathrm{e}^{-u^2/2}\,\mathrm{d}u = \frac{2}{\sqrt{2\pi}\,t} \left( 1-\frac{\theta}{t^2} \right) \mathrm{e}^{-t^2/2}, \] where $0<\theta<1$. Subsequent to 1967, Bernstein's inequalities were extended to include multi-dimensional and infinite-dimensional cases.
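A quick Monte Carlo sanity check of inequality \ref{eq2} — a Python sketch, not part of the original article; it takes $X_j = \pm 1$ with equal probability, so $\sigma = L = 1$:

```python
import math
import random

def bernstein_bound(t, n, sigma, L):
    """Right-hand side of inequality (2): 2*exp(-t^2 / (2*(1 + a/3)))
    with a = L*t / (sqrt(n)*sigma)."""
    a = L * t / (math.sqrt(n) * sigma)
    return 2 * math.exp(-t**2 / (2 * (1 + a / 3)))

random.seed(0)
n, sigma, L, t = 100, 1.0, 1.0, 2.0
trials = 5000
hits = sum(
    abs(sum(random.choice((-1, 1)) for _ in range(n))) > t * sigma * math.sqrt(n)
    for _ in range(trials)
)
empirical = hits / trials
# The empirical tail probability should sit below the Bernstein bound:
assert empirical <= bernstein_bound(t, n, sigma, L)
```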
References
[Be2] S.N. Bernshtein, "Probability theory", Moscow-Leningrad (1946) (In Russian)
[Be3] A.N. Kolmogooff [A.N. Kolmogorov], "Ueber das Gesetz des iterierten Logarithmus", Math. Ann., 101 (1929) pp. 126–135
[Ni] W. Hoeffding, "Probability inequalities for sums of independent random variables", J. Amer. Statist. Assoc., 58 (1963) pp. 13–30
[Yu] V.V. Yurinskii, "Exponential inequalities for sums of random vectors", J. Multivariate Anal., 6 (1976) pp. 473–499

A.V. Prokhorov
Bernstein's inequality for the derivative of a trigonometric or algebraic polynomial gives an estimate of this derivative in terms of the polynomial itself. If $T_n(x)$ is a trigonometric polynomial of degree not exceeding $n$ and if \[ M = \max_{0 \leq x \leq 2\pi} \abs{T_n(x)}, \] then the following inequalities are valid for all $x$ (cf. [Be2]): \[ \abs{T_n^{(r)}(x)} \leq Mn^r, \] where $T_n^{(r)}$ is the $r$th derivative of $T_n$. These estimates cannot be improved, since for \[ T_n(x) = \cos n(x-x_0), \] with $M=1$, the bound is attained: \[ \max_{0 \leq x \leq 2\pi} \abs{T_n^{(r)}(x)} = n^r. \] Bernstein's inequality for trigonometric polynomials is a special case of the following theorem [Be3]: If $f(x)$ is an entire function of order no greater than $\sigma$ and if \[ M = \sup_{-\infty < x < \infty} \abs{f(x)}, \] then one has \[ \sup_{-\infty < x < \infty} \abs{f^{(r)}(x)} \leq M\sigma^r \quad (r=1,2,\ldots). \] Bernstein's inequality for an algebraic polynomial has the following form [Be2]: If the polynomial \[ P_n(x) = \sum_{k=0}^n \alpha_k x^k \] satisfies the condition \[ \abs{P_n(x)} \leq M, \quad a \leq x \leq b, \] then its derivative $P_n^\prime(x)$ has the property \[ \abs{P_n^\prime(x)} \leq \frac{Mn}{\sqrt{(x-a)(b-x)}}, \quad a \leq x \leq b, \] which cannot be improved. As was noted by S.N. Bernshtein [Be2], this inequality is a consequence of the proof of the Markov inequality given by A.A. Markov.
Bernstein's inequalities are in fact employed in proving converse theorems in the theory of approximation of functions. There are a number of generalizations of Bernstein's inequality, in particular for entire functions in several variables.
References
[Be2] S.N. Bernstein [S.N. Bernshtein], "Sur l'ordre de la meilleure approximation des fonctions continues par des polynômes", Acad. R. Belgique, Cl. Sci. Mém. Coll. 4. Sér. II, 4 (1922)
[Be3] S.N. Bernstein [S.N. Bernshtein], "Sur une propriété des fonctions entières", C.R. Acad. Sci. Paris, 176 (1923) pp. 1603–1605
[Ni] S.M. Nikol'skii, "Approximation of functions of several variables and imbedding theorems", Springer (1975) (Translated from Russian)

N.P. Korneichuk, V.P. Motornyi

Comments

References

[Lo] G.G. Lorentz, "Approximation of functions", Holt, Rinehart & Winston (1966) pp. Chapt. 2
[Na] I.P. Natanson, "Constructive function theory", 1–3, F. Ungar (1964–1965) (Translated from Russian)

How to Cite This Entry:
Bernstein inequality.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Bernstein_inequality&oldid=27197
|
Bhattacharyya, T and Binding, PA and Seddighi, K (2001)
Multiparameter Sturm–Liouville Problems with Eigenparameter Dependent Boundary Conditions. In: Journal of Mathematical Analysis and Applications, 264 (2). pp. 560-570.
Abstract
A system of ordinary differential equations, $$-y_j'' + q_j y_j = \Big(\sum_{k=1}^{n} \lambda_k r_{jk}\Big) y_j, \quad j = 1, \ldots, n, \tag{0.1}$$ with real-valued and continuous coefficient functions $q_j, r_{jk}$, is studied on $[0, 1]$ subject to boundary conditions $$\frac{y_j'(0)}{y_j(0)} = \cot \beta_j \quad\text{and}\quad b_j y_j(1) - d_j y_j'(1) = e_j^T \lambda \,\big(c_j y_j'(1) - a_j y_j(1)\big) \tag{0.2}$$ for $j = 1, \ldots, n$. Here $E^T = [e_1, e_2, \ldots, e_n]$ is an arbitrary $n \times n$ matrix of real numbers and $\omega_j = a_j d_j - b_j c_j \neq 0$. A point $\lambda = [\lambda_1, \ldots, \lambda_n]^T \in \mathbb{C}^n$ satisfying (0.1) and (0.2) is called an eigenvalue of the system. Results are given on the existence and location of the eigenvalues and on completeness and oscillation of the eigenfunctions.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to Elsevier Science.
Department/Centre: Division of Physical & Mathematical Sciences > Mathematics
Depositing User: Mr. Ramesh Chander
Date Deposited: 13 Aug 2008
Last Modified: 19 Sep 2010 04:49
URI: http://eprints.iisc.ac.in/id/eprint/15541
|
I'm trying to understand why the following proposition is true:
Let $J$ be a small category and $F, G : J \to \textbf{Top}$ functors. If $\tau : F \Rightarrow G$ is a pointwise homotopy equivalence, then $\operatorname{hocolim}_J F \to \operatorname{hocolim}_J G$ is a homotopy equivalence.
This seems to be such a natural result that I'm surprised it's not mentioned at all in Riehl, Dugger or Hirschhorn's texts on homotopy theory, although I think all three mention a version of this result for weak homotopy equivalences.
For instance, Riehl has [Proposition 14.5.7, p. 259 of Categorical Homotopy Theory] that
If $X_\bullet \to Y_\bullet$ is a pointwise weak equivalence of split simplicial spaces, then $\vert X_\bullet \vert \to \vert Y_\bullet \vert$ is a weak equivalence.
The details of this are explained in Dugger [Theorem 3.5, p. 10]. Applying this to the $\operatorname{hocolim}$, after justifying a few points, one gets the desired result for weak homotopy equivalence [Dugger, Theorem 4.7, p. 17].
But this seems to be where the story ends and the fact that you actually get a homotopy equivalence doesn't seem to be that important. Can someone explain why this is the case?
Now, the only source I've found that states and proves this result is Munson and Volic's Cubical Homotopy Theory [Theorem 8.3.7, p. 409], but it's a (very technical) ten page proof, that in turn references results all over the book.
So my main question : is there a simpler way to see why this is true? If so, could you explain or point me in the right direction?
|
In the Art Gallery Problem we are given a polygon $P \subset [0,L]^2$ on $n$ vertices and a number $k$. We want to find a guard set $G$ of size $k$, such that each point in $P$ is seen by a guard in $G$. Formally, a guard $g$ sees a point $p \in P$ if the line segment $pg$ is fully contained inside the polygon $P$. History and practical findings indicate that irrational guard coordinates are a "very rare" phenomenon, and we give a theoretical explanation. Next to worst-case analysis, Smoothed Analysis has gained popularity for explaining the practical performance of algorithms, even if they perform badly in the worst case. The idea is to study the expected performance on small perturbations of the worst input, measured in terms of the magnitude $\delta$ of the perturbation and the input size. We consider four different models of perturbation. We show that the expected number of bits needed to describe optimal guard positions, per guard, is logarithmic in the input size and the magnitude of the perturbation. This shows from a theoretical perspective that rational guards with small bit-complexity are typical. Note that describing the guard positions is the bottleneck in showing NP-membership. The significance of our results is that algebraic methods are not needed to solve the Art Gallery Problem on typical instances. This is the first time an ER-complete problem has been analyzed by Smoothed Analysis.
This is joint work with Michael Dobbins and Andreas Holmsen.
May 06, 2019 | 04:00 PM s.t.
Technische Universität Berlin
Institut für Mathematik Straße des 17. Juni 136 10623 Berlin Room MA 041 (Ground Floor)
|
What is the complexity of the following recurrence? $$T(n) = T(n-1) + 1/n$$I highly suspect the answer to be $O(1)$, because your work reduces by $1$ each time, so by the $n$th time it would be $T(n-...
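A quick numeric check (a sketch, assuming the base case $T(1) = 1$; not part of the question) shows the unrolled recurrence is the harmonic series, which grows like $\ln n$ rather than staying constant:

```python
import math

def T(n: int, t1: float = 1.0) -> float:
    # Unrolled: T(n) = T(1) + sum_{k=2}^{n} 1/k, the harmonic series.
    return t1 + sum(1.0 / k for k in range(2, n + 1))

# T(n) - ln(n) approaches a constant (~0.577, the Euler-Mascheroni
# constant), so T(n) is Theta(log n), not O(1):
for n in (10, 100, 1000, 10000):
    print(n, T(n), T(n) - math.log(n))
```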
This is a question about recurrence relation that contains sum inside the recursion.I am totally stuck. Can anyone help?The problem asks to solve the following recursion $T(n)=\frac{1}{n} \sum_{i=1}^...
I've been learning master theorem in school now and have learnt how to apply it to a number of recurrence relations. However one of my assignments has the following recurrence relation:T(n) = T(n-2) ...
I'm trying to solve the recurrence relation T(n) = 3T(n-1) + n and I think the answer is O(n^3) because each new node spawns three child nodes in the recurrence tree. Is this correct? And, in terms of ...
My question here is dealing with the residual that I get. We are trying to prove $T(n) = 3T(n/3) + n$ is $O(n*\log n)$. So where I get is $T(n) \le cn[\log n - \log 3] + n$. So my residual is $-cn\log ...
|
Inverse problems for the p-Laplace type equation
Speaker
Dr. Manas Kar, Department of Mathematics and Statistics, University of Jyväskylä
When Jan 06, 2016
from 03:30 PM to 04:30 PM
Where LH 006
Abstract: Inverse problems for non-linear equations have been of great interest recently. We will discuss the $p$-Calderón problem, which is a nonlinear generalization of the inverse conductivity problem due to Calderón that involves the $p$-Laplace equation. We will consider here mainly two different types of inverse problems. First one is the enclosure method, which allows one to reconstruct the convex hull of an inclusion in the nonlinear model by using exponentially growing solutions introduced by Wolff. The second one is the interior uniqueness result for the conductivities involving slightly more general nonlinear model. In two dimensions, we show that any two conductivities satisfying $\sigma_1\geq \sigma_2$ and having the same nonlinear Dirichlet-to-Neumann map must be identical. The proof is based on a monotonicity inequality and the unique continuation principle for $p$-Laplace type equation. In higher dimensions, where unique continuation is not known, we obtain a similar result for conductivities close to constant.
|
Your friend meant that all
complex numbers can be represented by such matrices.
$$a+bi = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$$
Adding complex numbers matches adding such matrices and multiplying complex numbers matches multiplying such matrices.
This means that the collection of matrices:
$$R = \left\{ \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \;\Bigg|\; a,b \in \mathbb{R} \right\}$$
is "isomorphic" to the field of complex numbers.
Specifically,
$$i = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
Notice that for this matrix $i^2=-I_2=-1$. :)
How does this help?
It allows you to construct the complex numbers from matrices over the reals. This allows you to get at some properties of the complex numbers via linear algebra.
For example: The squared modulus of a complex number is $|a+bi|^2=a^2+b^2$. This is the same as the determinant of the corresponding matrix. Now since the determinant of a product is the product of the determinants, you get that $|z_1z_2|=|z_1|\cdot |z_2|$ for any two complex numbers $z_1$ and $z_2$.
Another nice tie, transposing matches conjugation. :)
Edit: As per request, a little about Euler's formula.
The exponential function can be defined in a number of ways. One nice way is via its MacLaurin series: $e^x = 1+x+\frac{x^2}{2!}+\cdots$. If you start thinking of $x$ as some sort of indeterminant, you might start to ask, "What can I plug into this series?" It turns out that the series:$$e^A = I+A+\frac{A^2}{2!}+\frac{A^3}{3!}+\cdots$$converges for any square matrix $A$ (you have to make sense out of "a convergent series of matrices").
Consider a "real" number, $x$, encoded as one of our matrices: $$x=\begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} \quad \mbox{then} \quad e^x = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} + \begin{pmatrix} x^2/2 & 0 \\ 0 & x^2/2 \end{pmatrix} + \cdots$$ $$= \begin{pmatrix} 1+x+x^2/2+\cdots & 0 \\ 0 & 1+x+x^2/2+\cdots \end{pmatrix} = \begin{pmatrix} e^x & 0 \\ 0 & e^x \end{pmatrix} = e^x$$
So (no surprise) the matrix exponential and the good old real exponential do the same thing.
Now one can ask, "What does the exponential of a complex number get you?" It turns out that...$$\mbox{Given } a+bi = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \quad \mbox{then} \quad e^{a+bi} = \begin{pmatrix} e^a\cos(b) & -e^a\sin(b) \\ e^a\sin(b) & e^a\cos(b) \end{pmatrix}$$...establishing this involves some intermediate linear algebra.
Anyway accepting that, we have found that $e^{a+bi} = e^a(\cos(b)+i\sin(b))$. In particular,$$e^{i\theta} = \begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{pmatrix}$$So that $$e^{i\pi} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = -1$$
We can see this way that complex exponentiation (with pure imaginary exponent) yields a rotation matrix. Thus leading us down a path to start identifying complex arithmetic with 2-dimensional geometric transformations.
Of course, there are many other ways to arrive at these various relationships. The matrix route is not the fastest/easiest route but it is an interesting one to contemplate.
I hope that helps a little bit. :)
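The correspondence above can be checked mechanically. Here is a small sketch (illustrative, not part of the original answer) verifying that multiplying these matrices matches multiplying complex numbers, that the determinant gives the squared modulus, and that $e^{i\theta}$ lands on a rotation matrix:

```python
import cmath
import math

# Small sketch (not from the original answer) checking the claims above:
# complex arithmetic matches 2x2 matrix arithmetic, the determinant gives
# the squared modulus, and e^(i*theta) corresponds to a rotation matrix.
def to_matrix(z):
    """Represent a + bi as [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

z1, z2 = 3 - 2j, -1 + 4j
assert mat_mul(to_matrix(z1), to_matrix(z2)) == to_matrix(z1 * z2)
assert abs(det(to_matrix(z1)) - abs(z1) ** 2) < 1e-9   # det = |z|^2

theta = 0.75
R = to_matrix(cmath.exp(1j * theta))   # e^(i*theta) as a matrix
assert abs(R[0][0] - math.cos(theta)) < 1e-12          # [[cos, -sin],
assert abs(R[1][0] - math.sin(theta)) < 1e-12          #  [sin,  cos]]
print("matrix model of the complex numbers verified")
```

Transposing `to_matrix(z)` yields `to_matrix(z.conjugate())`, matching the remark that transposition corresponds to conjugation.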
|
1. Because negation is applied to formulas, and $x, y, z$ are not formulas.
$x$, $y$ and $z$ are variables. Variables are
terms. Terms are those strings of symbols of the language which stand for objects. Here, the objects for which $x, y, z$ stand are numbers, so we can do $<$ etc. between them. By definition, terms can be
a variable -- e.g. $x, y, z$,
an individual constant, which stands for one particular object -- e.g. $0$ could be a constant representing the object $0$, or
a function symbol applied to some terms -- e.g. $\text{add}$ could be the addition function which adds two terms; then $\text{add}(x,y)$,
or more conveniently written as $x+y$, would also be a term, i.e. an
expression which stands for some number again.
Next to terms, there are
formulas. Formulas are those strings of symbols of the language which represent truth values. The formulas of first-order logic are inductively defined:
A predicate or relation symbol applied to a suitable tuple of terms is a formula --
so for example, since $x$ and $y$ are terms, putting
the relation symbol $<$ between them yields a formula $x < y$,
which stands for a truth value -- "true" if $x$ is strictly smaller
than $y$ and "false" otherwise.
If $\phi$ is a formula, then $\neg \phi$ is a formula.
Since we just established that $x < y$ is a formula, $\neg(x < y)$ is also a
formula.
If $\phi, \psi$ are formulas, then $\phi \land \psi, \phi \lor \psi, \phi \to \psi$ are also formulas.
If $\phi$ is a formula and $x$ a variable, then $\forall x \phi$ and $\exists x \phi$ are also formulas.
Nothing else is a formula.
The important part here is 2.: $\neg \phi$ yields a formula
if $\phi$ is a formula. But $x$ and $y$ are not formulas, but terms. So writing $\neg x$ or $\neg y$ is not just nonsensical -- because we can't negate something that doesn't have a truth value, and $x$ and $y$ don't represent truth values, but objects (e.g. numbers) -- it is simply not a formula of predicate logic at all, by the way the language of predicate logic is defined.
I would recommend you to have a second close look at the basic definitions like term or formula in your textbook and make sure you understood these definitions -- it is important to know what a formula is at all before you start evaluating complex propositions.
2. Because the De Morgan rule only changes the formula on the outermost level, and we do not pass the negation symbol arbitrarily deep down into the subformulas.
The De Morgan law applied here states that
$\neg(A \lor B) \equiv \neg A \land \neg B$
In your case, $A$ is the formula $¬(x < y)$, and $B$ is the formula $∃z (x < z ∧ z < y)$. The precise equivalence steps of your formula are
$\neg A \quad \text{(after De Morgan)}\\\equiv \neg \neg (x < y) \\\equiv x < y \quad \text{by double negation elimination}$
and
$\neg B \quad \text{(after De Morgan)}\\\equiv \neg \exists z(x < z \land z < y) \\\equiv \forall z \neg (x < z \land z < y) \quad \text{by $\neg \exists z C \equiv \forall z \neg C$}\\\equiv \forall z (\neg(x < z) \lor \neg (z < y)) \quad \text{by De Morgan on $C$}\\\equiv \forall z(x \geq z \lor z \geq y) \quad \text{negation of $<$ is $\geq$}$
which is why your formula
$\exists x \exists y \neg (A \lor B)$ eventually ends up as $\equiv \exists x \exists y (\neg A \land \neg B)\\ \equiv \exists x \exists y (x < y \land \forall z(x \geq z \lor z \geq y))$
The important point here is that the new negation $\neg$ in front of $\neg(x < y)$ introduced by the De Morgan rule is where it stops. The negation is applied to $A$, which is $\neg(x < y)$ and that's it. We do
not pass on the negation arbitrarily deep into the formula. For example, if instead of $\neg(x < y)$ we had $A :\equiv P(x) \to (Q(y) \lor R(z))$, then $\neg A$ would be $\neg(P(x) \to (Q(y) \lor R(z)))$; we would not carry the negation deeply into the formula like $(\neg P(x) \to (\neg Q(y) \lor \neg R(z)))$ or something. Any modification of the subformula inside the negation would be a different rule application, like the conversion $\neg \exists z C \equiv \forall z \neg C$ and the second De Morgan rule above, but these are different steps. The De Morgan rule above says to put another negation in front of $\neg (x < y)$ and that's it; we do not pass the negation sign deeper down into the formula except when we apply other rules on it in a new step. So even if $x$ and $y$ were formulas, there would be no reason to apply $\neg$ to them -- we already did that on the outside and that's where the De Morgan rule stops.
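As a small illustration (a hypothetical encoding, not from the answer itself), the "one level at a time" discipline can be written as a syntax-tree rewrite: formulas are nested tuples, and a single step rewrites only the outermost connective under the negation, leaving everything beneath untouched:

```python
# Minimal sketch (formulas as nested tuples; a hypothetical encoding, not a
# standard library) of the point above: one rewrite step touches only the
# outermost connective under the negation, never the subformulas below it.
def push_neg_once(f):
    """Rewrite a formula of the shape ('not', phi) by exactly one step."""
    if f[0] != 'not':
        return f                    # nothing to rewrite at this level
    phi = f[1]
    if phi[0] == 'or':              # De Morgan: not(A or B) == (not A) and (not B)
        return ('and', ('not', phi[1]), ('not', phi[2]))
    if phi[0] == 'and':             # De Morgan: not(A and B) == (not A) or (not B)
        return ('or', ('not', phi[1]), ('not', phi[2]))
    if phi[0] == 'not':             # double negation elimination
        return phi[1]
    if phi[0] == 'exists':          # not exists z C == forall z not C
        return ('forall', phi[1], ('not', phi[2]))
    if phi[0] == 'forall':          # not forall z C == exists z not C
        return ('exists', phi[1], ('not', phi[2]))
    return f                        # atomic formula such as ('lt', 'x', 'y')

# not( not(x<y) or exists z (x<z and z<y) ) -- the formula from the question
f = ('not', ('or',
             ('not', ('lt', 'x', 'y')),
             ('exists', 'z', ('and', ('lt', 'x', 'z'), ('lt', 'z', 'y')))))
step1 = push_neg_once(f)
# One De Morgan step: a conjunction whose conjuncts are untouched negations.
assert step1[0] == 'and'
assert step1[1] == ('not', ('not', ('lt', 'x', 'y')))
print("De Morgan applied at the outermost level only")
```

Further simplification (double negation elimination, the quantifier conversion) would be further calls of `push_neg_once` on the subformulas, exactly as in the step-by-step derivation above.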
|
In general if you have a bijective map $f$ of a ring $S$ to itself then you can define two new operations
$x+_fy:=f^{-1}(f(x)+f(y))$
$x*_fy:=f^{-1}(f(x)f(y))$
In this case $(S,+_f, *_f)$ is a ring, where the neutral element with respect to $+_f$ is $f^{-1}(0)$ and the neutral element with respect to $*_f$ is $f^{-1}(1)$
Example:
If you choose $f: S\to S$ such that
$f(x):=-x+1$
you have that the inverse is $g(x)=-x+1=f(x)$ and in this case we have
$g(0)=1$ and $g(1)=-1+1=0$
So $1$ is the new neutral element with respect to $+_f$ and $0$ is neutral element with respect to $*_f$ on the new ring $(S,+_f,*_f)$. The new operations are
$x+_f y=f^{-1}(-x-y+2)=x+y-1$
$x*_f y=f^{-1}((-x+1)(-y+1))=$
$f^{-1}(xy-x-y+1)=x+y-xy$
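A quick numerical sketch (assuming $S = \mathbb{R}$ and the particular $f(x) = -x+1$ above) confirms the closed forms of the transported operations and spot-checks the ring identities:

```python
# Numerical sketch for S = R with the specific f(x) = -x + 1 used above.
# Since this f is an involution, f^{-1} = f.
f = lambda x: -x + 1

def add_f(x, y):
    """x +_f y = f^{-1}(f(x) + f(y)); closed form: x + y - 1."""
    return f(f(x) + f(y))

def mul_f(x, y):
    """x *_f y = f^{-1}(f(x) * f(y)); closed form: x + y - x*y."""
    return f(f(x) * f(y))

for x, y, z in [(2.0, -3.5, 0.25), (1.5, 4.0, -2.0)]:
    assert add_f(x, 1) == x                      # 1 is neutral for +_f
    assert mul_f(x, 0) == x                      # 0 is neutral for *_f
    assert add_f(x, y) == x + y - 1              # closed form for +_f
    assert mul_f(x, y) == x + y - x * y          # closed form for *_f
    # distributivity of *_f over +_f (up to floating point)
    assert abs(mul_f(x, add_f(y, z)) - add_f(mul_f(x, y), mul_f(x, z))) < 1e-9
print("transported ring operations spot-checked")
```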
The problem is that $f: (S,+_f,*_f)\to (S,+,*)$ is an isomorphism of rings, so the two structures are essentially the same.
In your case, if you want a new structure on $R$, equal to the initial structure up to isomorphism, you can consider two bijective maps $f:R_1\to R_1$ and $g:R_2\to R_2$; in this case you have that
$(R, +_{(f,g)}, *_{(f,g)})$ is a new ring but it is isomorphic to the initial ring $(R,+,*)$
If you want a different structure you can consider a generalization of semi-direct products for Rings:
If you have a morphism
$\psi: R_2\to Aut((R_1,+,*))$
then you have that
$(a,b)+^\sim(c,d)=(a+\psi(b)(c), b+d)$
and
$(a,b)*^\sim(c,d)=(a\psi(b)(c), bd)$
are two operation on $R_1\times R_2$ such that
$(R_1\times R_2, +^\sim,*^\sim)$ is a ring different from $(R,+,*)$
|
$f(x) = 8 \cos^4 x + 6 \sin (2x + 3 \pi/4) \sin(2x - \pi/4)$.
How can I simplify into a linear combination of simple sine functions?
Maybe this?:
FourierTrigSeries[8 Cos[x]^4 + 6 Sin[2 x + 3 Pi/4] Sin[2 x - Pi/4], x, 4] /. Cos[t_] :> HoldForm[Sin][Pi/2 - t]
I'm assuming it's primarily about formatting the output in terms of sines.
I figured that if we take the Fourier transform of it to get the discrete frequencies and then invert the transform, that the expression would be simpler. It comes back in exponential form, and the leading coefficient needs to be distributed. From there, you can use Euler's equation to transform it back to trig. The remaining required transforms are trivial. I have no idea if this is how you were supposed to solve it.
FourierTransform[8 Cos[x]^4 + 6 Sin[2 x + 3 Pi/4] Sin[2 x - Pi/4], x, w];
InverseFourierTransform[%, w, t];
Distribute@%;
ExpToTrig@%
(* 4 Cos[2 t] + Cos[4 t] + 3 Sin[4 t] *)
$4 \cos (2 t) + \cos (4 t) + 3 \sin (4 t) = 4 \sin (2 t + \pi/2) + \sin (4 t + \pi/2) + 3 \sin (4 t)$
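As a sanity check independent of Mathematica, the identity $8\cos^4 x + 6\sin(2x+3\pi/4)\sin(2x-\pi/4) = 4\cos 2x + \cos 4x + 3\sin 4x$ can be verified numerically at a few sample points (the constant terms from the two summands cancel exactly):

```python
import math

# Independent numeric check of the simplification: the constant terms from
# 8 cos^4(x) and the product of sines cancel, leaving only the harmonics.
def f(x):
    return (8 * math.cos(x) ** 4
            + 6 * math.sin(2 * x + 3 * math.pi / 4) * math.sin(2 * x - math.pi / 4))

def simplified(t):
    return 4 * math.cos(2 * t) + math.cos(4 * t) + 3 * math.sin(4 * t)

for t in [0.0, 0.3, 1.1, 2.7, -0.8]:
    assert abs(f(t) - simplified(t)) < 1e-9
print("identity verified at sample points")
```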
|
The quantity and/or the base of a logarithmic term can be expressed in exponential form to find the value of the logarithm. Due to the involvement of exponents in logarithms, these logarithmic identities are simply called the power rules of logarithms.
There are three power rules in logarithms, and they are used as formulas to find the values of logarithmic terms easily.
The logarithm of a quantity in exponential form is equal to the product of the exponent and the logarithm of the base of the exponential term.
$\large \log_{b}{m^x} \,=\, x\log_{b}{m}$
The logarithm of a quantity to a base in exponential form is equal to the quotient of the logarithm of the quantity by the exponent of the base.
$\large \log_{b^y}{m} \,=\, \Big(\dfrac{1}{y}\Big)\log_{b}{m}$
The logarithm of a quantity in exponential form to a base in exponential form is equal to the product of the quotient of the exponent of the quantity by the exponent of the base, and the logarithm of the quantity.
$\large \log_{b^y} m^x = \Big(\dfrac{x}{y}\Big) \log_{b} m$
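The three rules can be spot-checked numerically; the values below are arbitrary choices for illustration:

```python
import math

# Numeric spot-check of the three power rules with arbitrarily chosen
# values b = 3, m = 7, x = 4, y = 2 (any b > 0, b != 1 and m > 0 work).
b, m, x, y = 3.0, 7.0, 4.0, 2.0
log = lambda base, q: math.log(q) / math.log(base)   # log of q to the given base

assert abs(log(b, m ** x) - x * log(b, m)) < 1e-9             # log_b(m^x) = x log_b(m)
assert abs(log(b ** y, m) - (1 / y) * log(b, m)) < 1e-9       # log_{b^y}(m) = (1/y) log_b(m)
assert abs(log(b ** y, m ** x) - (x / y) * log(b, m)) < 1e-9  # log_{b^y}(m^x) = (x/y) log_b(m)
print("all three power rules hold numerically")
```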
Learn how to solve easy to difficult mathematics problems of all topics in various methods with step by step process and also maths questions for practising.
|
Connected Subset of Union of Disjoint Open Sets Theorem
Let $T = \struct{S, \tau}$ be a topological space.
Let $A$ be a connected set of $T$.
Let $U, V \in \tau$ be disjoint open sets of $T$.
Let $A \subseteq U \cup V$.
Then either $A \subseteq U$ or $A \subseteq V$.

Proof
Let $U' = A \cap U$ and $V' = A \cap V$.
From Intersection is Empty Implies Intersection of Subsets is Empty, $U'$ and $V'$ are disjoint.
Since $U$ and $V$ are disjoint open sets, $U'$ and $V'$ are separated sets by definition.
Now
\(\displaystyle A = A \cap \paren {U \cup V}\) by Intersection with Subset is Subset
\(\displaystyle \phantom{A} = \paren {A \cap U} \cup \paren {A \cap V}\) by Intersection Distributes over Union
\(\displaystyle \phantom{A} = U' \cup V'\)
Since $A$ is connected, it is not the union of two non-empty separated sets, so $U' = \empty$ or $V' = \empty$.
Without loss of generality assume that $V' = \empty$.
Then
\(\displaystyle A = U' \cup V'\)
\(\displaystyle \phantom{A} = U' \cup \empty\)
\(\displaystyle \phantom{A} = U'\) by Union with Empty Set
\(\displaystyle \phantom{A} = A \cap U\)
\(\displaystyle \leadsto \ \ A \subseteq U\) by Intersection with Subset is Subset
$\blacksquare$
|
$x^2+y^2+z^2$ is an algebraic expression. It is given that the values of its three literals are expressed by the following three equations.
$(1) \,\,\,\,\,\,$ $x = r\cos{\alpha}\cos{\beta}$
$(2) \,\,\,\,\,\,$ $y = r\cos{\alpha}\sin{\beta}$
$(3) \,\,\,\,\,\,$ $z = r\sin{\alpha}$
We are asked to find the value of $x^2+y^2+z^2$ in this problem on the basis of the above three equations.
Replace the literals $x$, $y$ and $z$ in the algebraic expression by their respective values to find the value of the expression.
$x^2+y^2+z^2$ $\,=\,$ ${(r\cos{\alpha}\cos{\beta})}^2$ $+$ ${(r\cos{\alpha}\sin{\beta})}^2$ $+$ ${(r\sin{\alpha})}^2$
$=\,\,\,$ $r^2\cos^2{\alpha}\cos^2{\beta}$ $+$ $r^2\cos^2{\alpha}\sin^2{\beta}$ $+$ $r^2\sin^2{\alpha}$
Observe the first two terms in the expression: $r^2\cos^2{\alpha}$ is a common factor of both. So, it can be factored out of them, which not only simplifies the expression but also helps us find its value easily.
$=\,\,\,$ $r^2\cos^2{\alpha}{(\cos^2{\beta}+\sin^2{\beta})}$ $+$ $r^2\sin^2{\alpha}$
According to Pythagorean identity of sin and cos functions, the value of $\cos^2{\beta}+\sin^2{\beta}$ is equal to one.
$=\,\,\,$ $r^2\cos^2{\alpha}{(1)}$ $+$ $r^2\sin^2{\alpha}$
$=\,\,\,$ $r^2\cos^2{\alpha}$ $+$ $r^2\sin^2{\alpha}$
This time, $r^2$ is a common factor in the two terms of the expression. So, factor it out to proceed further in simplifying the trigonometric expression.
$=\,\,\,$ $r^2{(\cos^2{\alpha}+\sin^2{\alpha})}$
Now, use Pythagorean identity of sin and cos functions to get the complete solution of this problem.
$=\,\,\,$ $r^2{(1)}$
$=\,\,\,$ $r^2$
Therefore, it is solved that the value of $x^2+y^2+z^2$ is equal to $r^2$.
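The result can be spot-checked numerically with arbitrary sample values of $r$, $\alpha$ and $\beta$:

```python
import math

# Numeric spot-check with arbitrary sample values of r, alpha and beta.
r, alpha, beta = 2.5, 0.7, 1.9
x = r * math.cos(alpha) * math.cos(beta)
y = r * math.cos(alpha) * math.sin(beta)
z = r * math.sin(alpha)

assert abs(x ** 2 + y ** 2 + z ** 2 - r ** 2) < 1e-9
print("x^2 + y^2 + z^2 == r^2 confirmed numerically")
```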
|
Synchronization of positive solutions for coupled Schrödinger equations
1.
School of Mathematics and Statistics and Hubei Key Laboratory of Mathematical Sciences, Central China Normal University, Luo-Yu Road 152, Wuhan 430079, China
2.
Center for Applied Mathematics, Tianjin University, Tianjin 300072, China
3.
Department of Mathematics and Statistics, Utah State University, Logan, UT 84322, USA
$\left\{ {\begin{array}{*{20}{c}} {\Delta u - u + ({\mu _1}|u{|^p} + \beta |v{|^p})|u{|^{p - 2}}u = 0,}&{\text{in}\;{\mathbb{R}^n},} \\ {\Delta v - v + ({\mu _2}|v{|^p} + \beta |u{|^p})|v{|^{p - 2}}v = 0,}&{\text{in}\;{\mathbb{R}^n},} \end{array}} \right.$

where $2< p<\frac{n}{n-2}$ for $n\ge 3$, $2< p<+\infty$ for $n = 1, 2$, and $\mu_1, \mu_2, \beta>0$.

$n = 1$

$p = 2$

Mathematics Subject Classification: Primary: 35J20, 35J47; Secondary: 35J50.

Citation: Chuangye Liu, Zhi-Qiang Wang. Synchronization of positive solutions for coupled Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 2018, 38 (6) : 2795-2808. doi: 10.3934/dcds.2018118
|
In MSEtool, assessment models are of class
Assess. This appendix provides a brief description and references for the
Assess objects. Further details regarding parameterization, e.g., fixing parameters, and tuning, e.g., adjusting start parameters, are provided in the function documentation.
For LaTeX equation rendering, it is recommended that this vignette be viewed in an HTML browser. This can be done with the
browseVignettes function in R:
The surplus production model uses the Fletcher (1978) formulation. The biomass \(B_t\) in year \(t\) is \[B_t = B_{t-1} + P_{t-1} - C_{t-1},\] where \(C_t\) is the observed catch and \(P_t\) is the surplus production given by: \[P_t = \gamma MSY \left(\dfrac{B_t}{K}-\left[\dfrac{B_t}{K}\right]^n\right), \] where \(K\) is the carrying capacity, \(MSY\) is the estimated maximum sustainable yield, \(n\) is the parameter that controls the shape of the production curve, and \(\gamma\) is \[\gamma = \dfrac{1}{n-1}n^{n/(n-1)}.\]
By conditioning the model on observed catch, the predicted index \(\hat{I}_t\) is \[\hat{I}_t = \hat{q} \hat{B}_t \] and the harvest rate is \[\hat{F}_t = \dfrac{C_t}{\hat{B}_t}.\] The dynamics equations above use an annual time step. Optionally, smaller time steps are used in the model to approximate continuous production and fishing. Given the biomass in the start of the year and assuming a constant fishing mortality over the time steps within a year, the fishing mortality that produces the observed annual catch is solved iteratively.
The likelihood of the observed index \(I_t\), assuming a lognormal distribution, is \[\log(I_t) \sim N(\log[\hat{I}_t], \sigma^2).\]
From estimates of leading parameters \(F_{MSY}\) and \(MSY\), the biomass \(B_{MSY}\) at \(MSY\) is \[B_{MSY} = \dfrac{MSY}{F_{MSY}},\] the carrying capacity \(K\) is \[K = n^{1/(n-1)} B_{MSY} ,\] and the intrinsic rate of population increase \(r\) is \[ r = n F_{MSY}.\] The production parameter \(n\) is typically fixed and the model has a symmetric productive curve (\(B_{MSY}/K = 0.5\)) when \(n = 2\).
In the state-space version, annual biomass deviates are estimated as random effects. Similar to Meyer and Millar (1999), the biomass \(B_t\) in year \(t\) is \[B_t = (B_{t-1} + P_{t-1} - C_{t-1})\exp(\delta_t - 0.5 \tau^2),\] where \(\delta_t \sim N(0, \tau^2)\) are biomass deviations in lognormal space and \(\tau\) is the standard deviation of the biomass deviations.
The log-likelihood of the estimated deviations \(\hat{\delta}_t\) is \[\hat{\delta}_t \sim N(0, \tau^2).\]
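The dynamics and derived quantities above can be sketched as follows; this is an illustrative toy projection with made-up parameter values and catches, not MSEtool code:

```python
# Illustrative toy projection (not MSEtool code) of the Fletcher dynamics,
# using made-up values for F_MSY, MSY and the annual catches.
def surplus_production(B, K, MSY, n=2.0):
    """P_t = gamma * MSY * (B/K - (B/K)^n) with gamma = n^{n/(n-1)} / (n-1)."""
    gamma = n ** (n / (n - 1)) / (n - 1)
    return gamma * MSY * (B / K - (B / K) ** n)

# Derived quantities from the leading parameters F_MSY and MSY:
F_MSY, MSY, n = 0.2, 100.0, 2.0
B_MSY = MSY / F_MSY                # biomass at MSY
K = n ** (1 / (n - 1)) * B_MSY     # carrying capacity (B_MSY / K = 0.5 for n = 2)
r = n * F_MSY                      # intrinsic rate of population increase

# Condition on an observed constant catch of 80 and project 10 years forward.
B = K
for C in [80.0] * 10:
    B = B + surplus_production(B, K, MSY, n) - C
print(round(B_MSY), round(K), r)   # 500 1000 0.4
```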
Fletcher, R.I. 1978. On the restructuring of the Pella-Tomlinson system. Fishery Bulletin 76:515-521.
Meyer, R., and Millar, R.B. 1999. BUGS in Bayesian stock assessments. Canadian Journal of Fisheries and Aquatic Science 56:1078-1086.
|
Inertia
Revision as of 06:14, 24 August 2018
In power systems engineering, "inertia" is a concept that typically refers to rotational inertia or rotational kinetic energy. For synchronous systems that run at some nominal frequency (i.e. 50Hz or 60Hz), inertia is the energy that is stored in the rotating masses of equipment electro-mechanically coupled to the system, e.g. generator rotors, fly wheels, turbine shafts.
Derivation
Below is a basic derivation of power system rotational inertia from first principles, starting from the basics of circle geometry and ending at the definition of moment of inertia (and it's relationship to kinetic energy).
The length of a circle arc is given by:
[math] L = \theta r [/math]
where [math]L[/math] is the length of the arc (m)
[math]\theta[/math] is the angle of the arc (radians)
[math]r[/math] is the radius of the circle (m)
A cylindrical body rotating about the axis of its centre of mass therefore has a rotational velocity of:
[math] v = \frac{\theta r}{t} [/math]
where [math]v[/math] is the rotational velocity (m/s)
[math]t[/math] is the time it takes for the mass to rotate [math]L[/math] metres (s)
Alternatively, rotational velocity can be expressed as:
[math] v = \omega r [/math]
where [math]\omega = \frac{\theta}{t} = \frac{2 \pi \times n}{60}[/math] is the angular velocity (rad/s)
[math]n[/math] is the speed in revolutions per minute (rpm)
The kinetic energy of a circular rotating mass can be derived from the classical Newtonian expression for the kinetic energy of rigid bodies:
[math] KE = \frac{1}{2} mv^{2} = \frac{1}{2} m(\omega r)^{2}[/math]
where [math]KE[/math] is the rotational kinetic energy (Joules or kg.m²/s² or MW.s, all of which are equivalent)
[math]m[/math] is the mass of the rotating body (kg)
Alternatively, rotational kinetic energy can be expressed as:
[math] KE = \frac{1}{2} I\omega^{2} [/math]
where [math]I = mr^{2}[/math] is called the moment of inertia (kg.m²)

Normalised Inertia Constants
TBA
Generator Inertia
The moment of inertia for a generator is dependent on its mass and apparent radius, which in turn is largely driven by its prime mover type.
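The chain of relationships derived above (speed in rpm to angular velocity, moment of inertia, and stored kinetic energy) can be illustrated with hypothetical machine parameters; the figures below are made up for the sketch, not from any real generator datasheet:

```python
import math

# Hypothetical machine parameters illustrating the chain derived above:
# speed (rpm) -> angular velocity -> moment of inertia -> stored energy.
m = 40_000.0    # rotating mass (kg), made-up generator rotor
radius = 0.9    # apparent radius (m), made-up
n_rpm = 3000.0  # speed (rpm), e.g. a 2-pole machine on a 50 Hz system

omega = 2 * math.pi * n_rpm / 60   # angular velocity (rad/s)
I = m * radius ** 2                # moment of inertia (kg.m^2)
KE = 0.5 * I * omega ** 2          # rotational kinetic energy (J = W.s)

print(round(omega, 2))             # 314.16
print(round(KE / 1e6, 1))          # stored energy in MW.s
```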
|
Let $\alpha \geq 1$ and $X_n$ be independent random variables such that $P(X_n=2)=P(X_n=-2)=\frac 1{2n^\alpha}$ and $P(X_n=0)=1-\frac 1{n^\alpha}$. Let $S_n=\sum_{k=1}^n X_k$.
Depending on the value of $\alpha$, what are the properties of $S_n$ (convergence, asymptotic behaviour)?
$S_n$ is clearly a Markov chain on the even integers. Note that it is
not time-homogeneous or stationary.
Since $E(X_n)=0$ and $V(X_n)=\frac 4{n^\alpha}$, we have $V(S_n)=4\sum_{k=1}^n \frac{1}{k^\alpha}$.
Whether $\alpha=1$ or $\alpha >1$, we have respectively $V(S_n)=O(\log n)$ and $V(S_n)=O(1)$. In both cases, by Chebyshev's inequality, for any $\epsilon >0$ and $\delta >0$, $$P\left(\frac{|S_n|}{ n^{1/2+\epsilon}}\geq\delta\right) = P(|S_n|\geq n^{1/2+\epsilon}\delta)=O\left( \frac{\log n}{n^{1+2\epsilon}}\right)$$ and since $\sum_n \frac{\log n}{n^{1+2\epsilon}} < \infty$, the Borel-Cantelli lemma gives $S_n = o\left(n^{1/2+\epsilon} \right)$ a.s.
As noticed by Olivier in the comments, if $\alpha>1$, $\sum_n P(X_n\neq 0) = \sum_n \frac{1}{n^\alpha}<\infty$, thus by the Borel-Cantelli lemma, $P(\limsup_n (X_n\neq 0)) = 0$, i.e. $$P(\liminf_n (X_n= 0)) = 1$$ Hence almost surely, $S_n$ is eventually constant.
Olivier also noticed that $S_n$ is a martingale, and for $\alpha>1$, $E(S_n^2) = V(S_n)=O(1)$. A result from the theory of martingales implies that $S_n$ converges almost surely and also converges in $L^2$.
The characteristic function of $S_n$ is $$\prod_{k=1}^n \left(\frac 1{2k^\alpha} e^{2it} + \frac 1{2k^\alpha} e^{-2it} + 1-\frac 1{k^\alpha}\right) = \prod_{k=1}^n \left(1-\frac{1-\cos(2t)}{k^\alpha}\right)$$ When $\alpha=1$, this converges pointwise to $$t\mapsto 1_{\pi \mathbb Z}(t) $$ This limit is not continuous at $0$, so Lévy's continuity theorem does not apply; since a characteristic function must be continuous, this shows that $S_n$ does not converge in distribution when $\alpha=1$.
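A quick Monte Carlo sketch (illustrative, not part of the argument above) shows the qualitative difference: for $\alpha > 1$ the last non-zero step $X_k$ typically occurs early, so $S_n$ freezes, while for $\alpha = 1$ steps keep recurring:

```python
import random

# Monte Carlo sketch (illustrative, not part of the argument above): track
# the index of the last non-zero step X_k. For alpha > 1, Borel-Cantelli
# says the steps die out almost surely; for alpha = 1 they keep recurring.
def simulate(alpha, n=20_000, seed=1):
    random.seed(seed)
    s, last_move = 0, 0
    for k in range(1, n + 1):
        u = random.random()
        if u < 1 / (2 * k ** alpha):        # X_k = +2 with prob 1/(2 k^alpha)
            s, last_move = s + 2, k
        elif u < 1 / k ** alpha:            # X_k = -2 with prob 1/(2 k^alpha)
            s, last_move = s - 2, k
    return s, last_move

s1, t1 = simulate(alpha=2.0)
s2, t2 = simulate(alpha=1.0)
print(t1, t2)   # index of the last non-zero step for alpha = 2 vs alpha = 1
```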
|
Help:Formatting
You can format your text by using wiki markup. This consists of normal characters like asterisks, apostrophes or equal signs which have a special function in the wiki, sometimes depending on their position. For example, to format a word in
italic, you include it in two pairs of apostrophes like
''this''.
Text formatting markup
Description You type You get Character (inline) formatting – applies anywhere Italic text ''italic'' italic Bold text '''bold''' bold Bold and italic '''''bold & italic''''' bold & italic Strike text <strike>strike text</strike> Escape wiki markup <nowiki>no ''markup''</nowiki> no ''markup'' Escape wiki markup once [[Laboratory]]<nowiki/> equipment Laboratory equipment Section formatting – only at the beginning of the line Headings of different levels ==Level 2== ===Level 3=== ====Level 4==== =====Level 5===== ======Level 6====== Level 2 Level 3 Level 4 Level 5 Level 6 Horizontal rule Text before ---- Text after Text before
Text after
Bullet list * Start each line * with an asterisk (*). ** More asterisks give deeper *** and deeper levels. * Line breaks <br />don't break levels. *** But jumping levels creates empty space. Any other start ends the list.
Any other start ends the list.
Numbered list # Start each line # with a number sign (#). ## More number signs give deeper ### and deeper ### levels. # Line breaks <br />don't break levels. ### But jumping levels creates empty space. # Blank lines # end the list and start another. Any other start also ends the list.
Any other start also ends the list.
Definition list ;item 1 : definition 1 ;item 2 : definition 2-1 : definition 2-2
Begin with a semicolon. One item per line; a new line can appear before the colon, but using a space before the colon improves parsing.
Indent text : Single indent :: Double indent ::::: Multiple indent
This workaround may harm accessibility.
Mixture of different types of list # one # two #* two point one #* two point two # three #; three item one #: three def one # four #: four def one #: this looks like a continuation #: and is often used #: instead <br />of <nowiki><br /></nowiki> # five ## five sub 1 ### five sub 1 sub 1 ## five sub 2
The usage of
#: and
*: for breaking a line within an item may also harm accessibility.
Preformatted text Start each line with a space. Text is '''preformatted''' and ''markups'' '''''can''''' be done.
This way of preformatting only applies to section formatting. Character formatting markups are still effective.
Start each line with a space. Text is Preformatted text blocks <nowiki>Start with a space in the first column, (before the <nowiki>). Then your block format will be maintained. This is good for copying in code blocks: def function(): """documentation string""" if True: print True else: print False</nowiki> Start with a space in the first column, (before the <nowiki>). Then your block format will be maintained. This is good for copying in code blocks: def function(): """documentation string""" if True: print True else: print False Paragraphs and line breaks
MediaWiki ignores single line breaks. To start a new paragraph, leave an empty line:
You type You get A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the ''diff'' function (used internally to compare different versions of a page). But an empty line starts a new paragraph. When used in a list, a newline ''does'' affect the layout (see above).
A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the diff function.
But an empty line starts a new paragraph.
When used in a list, a newline does affect the layout (see above).
If necessary, you can force a line break within a paragraph with the HTML tag
<br />:
You type You get You can break lines<br /> without a new paragraph.<br /> Please use this sparingly.
You can break lines
Some HTML tags are allowed in MediaWiki, for example
<code>,
<div>,
<span> and
<font>. These apply anywhere you insert them.
Description You type You get Inserted
(Displays as underline in most browsers)
<ins>Inserted</ins> or <u>Underline</u>
or
Deleted
(Displays as strikethrough in most browsers)
<s>Struck out</s> or <del>Deleted</del>
or
Fixed width text <code>Source code</code> or <tt>Fixed width text</tt>
or
Superscripts and subscripts X<sup>2</sup>, H<sub>2</sub>O
X
Line breaks You can break lines<br /> without a new paragraph.<br /> Please use this sparingly.
You can break lines
Blockquotes Text before <blockquote>Blockquote</blockquote> Text after
Text before
Blockquote
Text after
Completely preformatted text <pre> Text is '''preformatted''' and ''markups'' '''''cannot''''' be done</pre>
For marking up of preformatted text, check the "Preformatted text" entry at the end of the previous table.
Text is '''preformatted''' and ''markups'' '''''cannot''''' be done Customized preformatted text <pre style="color: red"> Text is '''preformatted''' with a style and ''markups'' '''''cannot''''' be done </pre>
A CSS style can be named within the
style property.
Text is '''preformatted''' with a style and ''markups'' '''''cannot''''' be done
Customized preformatted text with text wrap according to available width: <pre style="white-space: pre-wrap; white-space: -moz-pre-wrap; white-space: -pre-wrap; white-space: -o-pre-wrap; word-wrap: break-word;"> This long sentence is used to demonstrate text wrapping. This additional sentence makes the text even longer. </pre>
Preformatted text with text wrap according to available width: <code> This long sentence is used to demonstrate text wrapping. This additional sentence makes the text even longer. </code>
Leading spaces to preserve formatting: Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Help:MediaWiki basics/Introduction to MediaWiki and wikis|wiki]] ''markup'' and special characters.

Links
Internal links: Here's a link to a page named [[Cell counter]]. You can even say [[cell counter]]s and the link will show up correctly. You can put formatting around a link. Example: ''[[Laboratory informatics]]''. The ''first letter'' of articles is automatically capitalized, so [[laboratory informatics]] goes to the same place as [[Laboratory informatics]]. Capitalization matters after the first letter. You can link to a page section by its title: [[Laboratory information management system#Technology]]. You can make the text appearing on an internal link different from the article title: [[Laboratory information management system#Technology|technology of LIMS]]. If you wish to link to a category, add a colon in front: [[:Category:LIMSwiki help documentation]]
External links: You can make an external link just by typing a URL: http://clinfowiki.org. You can give it a title: [http://clinfowiki.org ClinfoWiki.org]. Or leave the title blank: [http://clinfowiki.org]. Linking to an e-mail address works the same way: mailto:someone@example.com or [mailto:someone@example.com someone]
Other formatting and tools
Mathematical formulas: <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>
Comments: <!-- This is a comment --> Comments are only visible in the edit zone.
Signing talk page comments: You should "sign" your comments on talk pages:
- Three tildes gives your signature: ~~~
- Four tildes give your signature plus date/time: ~~~~
- Five tildes gives the date/time alone: ~~~~~
Page redirects: #REDIRECT [[Laboratory informatics]]
You use redirects most often on pages with incorrect or outdated page titles. You simply copy or remove the existing content, paste this code in, and change the internal link text to the title of the article you wish to automatically redirect users to.

Inserting media and tables
For more on these topics: Help:MediaWiki basics/Intermediate training
For more on these topics: Help:MediaWiki basics/Advanced training
Inserting symbols
Symbols and other special characters not available on your keyboard can be inserted through a special sequence of characters. Those sequences are called HTML entities. For example, the entity &rarr; when inserted will be shown as the HTML symbol →, and &mdash; when inserted will be shown as the HTML symbol —.
HTML symbol entities: (a full table of supported entities renders here, covering accented Latin letters such as Á and æ, Greek letters such as α and Ω, and mathematical and typographic symbols such as ∞, ∑, © and ™)
Copyright symbol: &copy; gives ©. Greek delta letter symbol: &delta; gives δ. Euro currency symbol: &euro; gives €.
See the list of all HTML entities on the Wikipedia article List of XML and HTML character entity references. Additionally, MediaWiki supports two non-standard entity reference sequences: &רלמ; and &رلم;, which are both considered equivalent to &rlm;, a right-to-left mark. (Used when combining right-to-left languages with left-to-right languages in the same page.)
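Entity decoding can be sanity-checked outside the wiki. The sketch below uses Python's standard-library html module, which decodes the same named character references that MediaWiki renders; it is a Python tool for illustration only, not part of MediaWiki itself.

```python
import html

# html.unescape() resolves named character references to the
# characters they display as, matching the entities shown above.
assert html.unescape("&copy;") == "\u00a9"   # copyright symbol ©
assert html.unescape("&delta;") == "\u03b4"  # Greek small delta δ
assert html.unescape("&euro;") == "\u20ac"   # euro currency symbol €
assert html.unescape("&rarr;") == "\u2192"   # rightwards arrow →

print(html.unescape("&copy; 2015 LIMSwiki"))  # → © 2015 LIMSwiki
```

This is also a quick way to confirm which escape a symbol came from before typing it into a page.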
&amp;euro; → €
<span style="color: red; text-decoration: line-through;">Typo to be corrected</span> → Typo to be corrected
Wrapped in <nowiki> tags, the same markup is shown as typed instead of being rendered: <span style="color: red; text-decoration: line-through;">Typo to be corrected</span>

Nowiki for HTML
<nowiki /> can prohibit (HTML) tags:
<<nowiki />pre> → <pre>
But not & symbol escapes:
&<nowiki />amp; → &
To print & symbol escapes as text, use "&amp;" to replace the "&" character (e.g. type the escape with "&amp;" in place of its leading "&", and the escape sequence itself is shown as text).
Formatting help
Beyond the text formatting markup shown on this page, here are some other formatting references:
You can find more help documentation at Category:LIMSwiki help documentation.
[1003.0299] The local B-polarization of the CMB: a very sensitive probe of cosmic defects
Authors: Juan Garcia-Bellido, Ruth Durrer, Elisa Fenu, Daniel G. Figueroa, Martin Kunz

Abstract: We present a new and especially powerful signature of cosmic strings and other topological or non-topological defects in the polarization of the cosmic microwave background (CMB). We show that even if defects contribute 1% or less to the CMB temperature anisotropy spectrum, their signature in the local $\tilde{B}$-polarization correlation function at angular scales of tens of arc minutes is much larger than that due to gravitational waves from inflation, even if the latter contribute with a ratio as big as $r\simeq 0.1$ to the temperature anisotropies. Proposed B-polarization experiments, with a good sensitivity on arcminute scales, may either detect a contribution from topological defects produced after inflation or place stringent limits on them. Even Planck should be able to improve present constraints on defect models by at least an order of magnitude, to the level of $\epsilon < 10^{-7}$. A future full-sky experiment like CMBpol, with polarization sensitivities of the order of $1\mu$K-arcmin, will be able to constrain the defect parameter $\epsilon = Gv^2$ to a few $\times 10^{-9}$, depending on the defect model.
Discussion related to specific recent arXiv papers
Topological defects can source scalar, vector and tensor modes in the early universe. The vector modes have power on small scales and can generate E and B polarization; the B signal can be quite distinctive, and used to constrain defect models with future data.
This paper appears to take some previous results for the B-mode power spectrum and multiply them by l^4, so e.g. in Fig 1 the power is very blue. Of course to be consistent you also have to multiply the noise and any other spectrum of interest by l^4 as well, so you seem to gain nothing by doing this. Is there some point I have missed?
The paper also defines a 'local' scalar [tex]\tilde{B}[/tex] by taking two derivatives of the polarization tensor. However you gain nothing by doing this; with noisy or non-band-limited data you cannot calculate derivatives on a scale L without having data available over a scale L - the non-locality just hits you in a different form (see astro-ph/0305545 and refs).
The main point is that vector components of defects' contribution to CMB polarization anisotropies peak at scales smaller than those from inflation.
On the other hand, the ordinary E- and B-modes depend non-locally on the Stokes parameters, so they cannot be used to put constraints on causal sources like defects using the angular correlation function of E- and B-modes on small scales. That is the reason why Baumann and Zaldarriaga [0901.0958] suggested using instead the local modes. Those are the true causal modes, written in terms of derivatives of the Stokes parameters.
These local B-modes then have power spectra that are much bluer than the non-local ones, and hence enhance the small scale (high-l) end of the spectrum. It is by looking at the angular correlation functions at small separations (tens of arcmin) that one has a chance to measure the defect's contribution to the local B-modes, and distinguish it from the one of inflation.
Of course, the usual white noise power spectrum for polarization will also be modified by this [tex]\ell^4[/tex] factor, but by a suitable Gaussian smoothing of the data (following Baumann & Zaldarriaga), we can indeed obtain large signal to noise ratios for binned data at small angular scales.

Baumann & Zaldarriaga looked at the model-independent signature of inflation at angles [tex]\theta>2[/tex] degrees. What we have realized is that, although model-dependent, the signal at angles [tex]\theta < 1[/tex] degrees can be much more significant. In fact, the feature at small angles is rather universal. The differences between defect models (and we considered four different ones) lie just in the height and width of the first and second oscillations in the angular correlation functions (related to the height and position of the angular power spectrum). Therefore, with sufficient angular resolution one could not only detect defects (if they are there) but also differentiate between different models.
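The rescaling both replies refer to can be written out explicitly; the relation below is a sketch only, up to normalization conventions that differ between papers:

[tex]C_\ell^{\tilde{B}\tilde{B}} = \frac{(\ell+2)!}{(\ell-2)!}\, C_\ell^{BB} \simeq \ell^4\, C_\ell^{BB} \qquad (\ell \gg 1)[/tex]

The same blue factor multiplies the white-noise polarization spectrum, so any statistical gain must come from the locality of [tex]\tilde{B}[/tex] rather than from the rescaling itself.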
I think it is clear from the normal power spectra that the sourced vector mode B-polarization peaks at much smaller scales than the gravitational wave spectrum: mostly scales sub-horizon at recombination as opposed to tensor modes which decay on sub-horizon scales. I agree that with low enough noise this is an interesting signal (and has been calculated many times before), though it needs to be distinguished from other possible vector mode sources like magnetic fields.
I thought the point of the Baumann paper was to make a nice picture showing visually the structure of the correlations. The E and B modes contain exactly the same information as the tilde versions; in the same way the WMAP7 papers make some nice plots of the polarization-temperature correlation to visually show a physical effect, but these constrain the same information as the usual power spectra. In the Gaussian limit the usual E/B spectra contain all the information on the defect power spectrum.
Only Q and U can actually be measured locally on the sky (in one pixel you cannot calculate any spatial derivatives). The two-point Q/U correlations can be calculated from the usual E and B spectra.
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Ground state solutions for asymptotically periodic modified Schrödinger-Poisson system involving critical exponent
1. School of Mathematics and Statistics, Southwest University, Chongqing, 400715, China
2. College of Mathematics and Information Sciences, Xin-Yang Normal University, Xinyang, 464000, China
$ \begin{align*} \begin{cases} -\Delta u+V(x)u-K(x)\phi|u|^8u-\Delta(u^2)u = g(x,u),\ \ \ \ &\mbox{in}\ \mathbb{R}^3,\\ -\Delta\phi = K(x)|u|^{10},\ \ \ \ &\mbox{in}\ \mathbb{R}^3, \end{cases} \end{align*} $
where the functions $ V, K, g $ are asymptotically periodic in $ x $.

Keywords: Modified Schrödinger-Poisson system, asymptotically periodic, critical nonlocal term, Nehari manifold, ground state solution. Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35. Citation: Yong-Yong Li, Yan-Fang Xue, Chun-Lei Tang. Ground state solutions for asymptotically periodic modified Schrödinger-Poisson system involving critical exponent. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2299-2324. doi: 10.3934/cpaa.2019104
Descartes' rule of signs

Statement
The rule states that if the terms of a single-variable polynomial with real coefficients are ordered by descending variable exponent, then the number of positive roots of the polynomial is either equal to the number of sign differences between consecutive nonzero coefficients, or is less than it by an even number. Multiple roots of the same value are counted separately.
Theorem: Let \( f(x) = a_nx^n + a_{n-1}x^{n-1}+ \cdots+a_0\) be a polynomial with real coefficients. Let \( s\) be the number of sign changes in the sequence \( a_n,a_{n-1},\ldots,a_0\): that is, delete the terms of the sequence that are \( 0\), and let \( s \) be the number of pairs of consecutive terms in the remaining sequence that have opposite signs. Let \( p \) be the number of positive roots of \( f(x) \) (counted with multiplicity). Then \( s-p \) is a nonnegative even number.

Proof (using induction)

Base case: For \( f(x)=ax+b \), the only root is \( x_{0}=-\frac{b}{a} \). This root is positive exactly when \( a \) and \( b \) have opposite signs, which is exactly when \( s=1 \); in either case \( s-p=0 \).

Inductive case: Suppose that all polynomials of degree \( n-1 \) satisfy the claim, i.e. \( s-p \) is an even nonnegative number. First we prove that \( s-p \) is an even number. WLOG assume that \( a_{n} \) is positive. We split into cases according to the sign of \( f(0) \). If \( a_{0}:=f(0)>0 \), then we claim that there are an even number of sign changes and an even number of positive roots.
We have the following cases for the products \(a_{n}\cdot a_{n-1}\) and \(a_{1}\cdot a_{0}\). If these have the same parity (a sign change at both ends, or no sign change at either end), then we reduce to a degree-\(n-1\) polynomial by ignoring the \(a_{n}\) and \(a_{0}\) terms. If they have different parity (a sign change at one end but not the other), then we ignore the end where the sign change happens to get a reduced polynomial. In particular, suppose that \(a_{n-1}<0\); then we ignore the \(a_{n}\) term and all the lower-order terms \(a_{0},a_{1},\ldots,a_{d}\) that are nonnegative until we reach a negative term \(a_{d+1}<0\), and so we reduce to a lower-order polynomial. Therefore, s is even.
Since \(a_{n},a_{0}>0\), the function is positive at 0 and eventually grows to positive infinity. Counted with multiplicity, it therefore has either no positive roots or an even number of them. Therefore, p is even.
The case \(a_{0}:=f(0)<0\) is similar and shows that s is odd. To see that p is odd: f(0)<0 and the polynomial grows to positive infinity, so it must have at least one positive root, and by the same crossing argument an odd number of them.
Second, we prove that \(s\geq p\). We will use that the derivative f' is a degree-(n-1) polynomial; let s' be the number of its sign changes and p' the number of its positive roots. The coefficients of f' have the same signs as the corresponding coefficients of f (each is multiplied by a positive exponent), and f' merely drops the constant term \(a_{0}\), so \(s\geq s'\). If f has p positive roots, then by Rolle's theorem f' has at least p-1 positive roots. Therefore, together we have\begin{align*}s\geq s'\stackrel{Induction}{\geq }p'\geq p-1.\end{align*}So \(s-p\geq -1\), but since s-p is an even number, we obtain \(s\geq p\).
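The counting in the theorem can also be checked mechanically. A minimal Python sketch (using numpy's root finder; the example polynomial and the tolerance are arbitrary choices):

```python
# Numerical check of Descartes' rule of signs: s - p is a nonnegative even
# number, where s counts sign changes in the coefficient sequence and p
# counts positive real roots (with multiplicity).
import numpy as np

def sign_changes(coeffs):
    """Count sign changes in the coefficient sequence, ignoring zeros."""
    signs = [np.sign(c) for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def positive_roots(coeffs, tol=1e-9):
    """Count (numerically) real positive roots via numpy.roots."""
    roots = np.roots(coeffs)
    return sum(1 for r in roots if abs(r.imag) < tol and r.real > tol)

# f(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3): s = 3, p = 3
coeffs = [1, -6, 11, -6]
s, p = sign_changes(coeffs), positive_roots(coeffs)
assert s == 3 and p == 3 and (s - p) % 2 == 0 and s >= p
```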
|
How can I prove that at least one of the following is less than or equal to 1/4? $$\forall a,b,c\in \mathbb R^+, \ a(1-b)\leq 1/4 \lor b(1-c) \leq 1/4 \lor c(1-a) \leq 1/4.$$ Help, please!
We can assume $1-a, 1-b, 1-c \geq 0$, since otherwise we are done.
By the AM-GM inequality (see http://en.wikipedia.org/wiki/AM-GM_inequality), we have $abc(1-a)(1-b)(1-c) \leq (\frac{a+b+c+(1-a)+(1-b)+(1-c)}{6})^6= (\frac{1}{2})^6 = \frac{1}{64}$.
Then, if $a(1-b)> 1/4, b(1-c) > 1/4$ and $ c(1-a) > 1/4$, multiplying together we get $abc(1-a)(1-b)(1-c)> (\frac{1}{4})^3 = \frac{1}{64}$, which is a contradiction, and thus the result follows.
EDIT:
If you do not want to use AM-GM: Let $x \in \mathbb{R}^+ $. We have $0\leq (\sqrt x - \sqrt{(1-x)})^2 = x +(1-x) -2\sqrt{x(1-x)}$, and thus $ 2\sqrt{x(1-x)} \leq 1$, which implies $x(1-x) \leq \frac{1}{4}$. Apply this for $a,b$ and $c$, multiply together, and you get the inequality of the first paragraph.
If you don't want to use AM-GM you can do it this way.
Let's assume without loss of generality that $a\leq b\leq c$.
Trivially, If $1 \leq c$ then $b(1-c) \leq 0 \leq 1/4$.
If $a\leq 1/2 \leq b$ or $b\leq 1/2 \leq c$ then $a(1-b)\leq 1/4$ or $b(1-c)\leq 1/4$. (respectively)
If $1/2 \leq a$ then let $a' = a-1/2$ and $b' = 1/2 - (1-b) = b - 1/2$. Note that $a' \leq b'$.
Then $a(1-b) = (1/2 + a')(1/2 - b') = 1/4 - (b'-a')/2 -a'b' \leq 1/4$.
Similarly, if $c \leq 1/2$, let $a' = 1/2 - a$ and $b' = (1-b) - 1/2 = 1/2 - b$. Note that this time, $a' \geq b'$.
Then $a(1-b) = (1/2 - a')(1/2 + b') = 1/4 - (a'-b')/2 -a'b' \leq 1/4$.
First assume $a, b, c \lt 1$.
without loss of generality, assume $a \le b$
Now consider the quadratic
$$f(x) = x^2 - x + a(1-b)$$
$f(0) = a(1-b) \gt 0$
$f(b) = b^2 - b + a(1-b) = (a-b)(1-b) \le 0$
Thus the quadratic has a real root (ok, the intermediate value theorem is used, but elementary proofs exist), and thus the discriminant $1 - 4a(1-b) \ge 0$, which implies $a(1-b) \le \frac{1}{4}$.
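Before (or after) hunting for a proof, the statement can be sanity-checked numerically. A quick randomized sketch in Python (the sampling range and count are arbitrary; this is evidence, not a proof):

```python
# Randomized check of the claim: for positive a, b, c, at least one of
# a(1-b), b(1-c), c(1-a) is <= 1/4. If any variable is >= 1, one product
# is already <= 0, so sampling beyond (0,1) is harmless.
import random

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(0, 2) for _ in range(3))
    assert min(a * (1 - b), b * (1 - c), c * (1 - a)) <= 0.25
```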
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Huge
Revision as of 06:33, 26 November 2017
Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is a $\sigma$-complete $\aleph_2$-saturated ideal on $\omega_1$". [1]
Definitions
Their formulation is similar to that of superstrong cardinals. More precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which, for some large cardinal properties $n$-$P_0$ and $n$-$P_1$, $n$-$P_0$ has less consistency strength than $n$-$P_1$, which has less consistency strength than $(n+1)$-$P_0$, and so on. As of modern set-theoretic concerns, this phenomenon is seen only around the $n$-fold variants. [2]
Although they are very large, there is a first-order definition which is equivalent to $n$-hugeness, so the $\theta$-th $n$-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.
Elementary embedding definitions

The elementary embedding definitions are somewhat standard. Let $j:V\rightarrow M$ be a nontrivial elementary embedding with critical point $\kappa$. Then:

*$\kappa$ is almost $n$-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).
*$\kappa$ is $n$-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).
*$\kappa$ is almost $n$-huge iff it is almost $n$-huge with target $\lambda$ for some $\lambda$.
*$\kappa$ is $n$-huge iff it is $n$-huge with target $\lambda$ for some $\lambda$.
*$\kappa$ is super almost $n$-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost $n$-huge with target $\lambda$ (that is, the target can be made arbitrarily large).
*$\kappa$ is super $n$-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is $n$-huge with target $\lambda$.
*$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost $1$-huge, $1$-huge, etc. respectively.

Ultrafilter definition
The first-order definition of $n$-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is $n$-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2...<\lambda_{n-1}<\lambda_n=\lambda$ such that:
$$\forall i<n\forall x\subseteq\lambda(ot(x\cap\lambda_{i+1})=\lambda_i\rightarrow x\in U)$$
where $ot(X)$ is the order-type of the poset $(X,\in)$. [1] This definition is, more intuitively, making $U$ very large, like most ultrafilter characterizations of large cardinals (supercompact, strongly compact, etc.). $\kappa$ is then super $n$-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is $n$-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. [1]
If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses $n$-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$; i.e., it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$: all members of the $\lambda_k$ sequence are.
Consistency strength and size
Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the $n$-fold variants) known as the double helix. This phenomenon is when, for one $n$-fold variant, letting a cardinal be called $n$-$P_0$ iff it has the property, and another variant, $n$-$P_1$, $n$-$P_0$ is weaker than $n$-$P_1$, which is weaker than $(n+1)$-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest):

*measurable = $0$-superstrong = almost $0$-huge = super almost $0$-huge = $0$-huge = super $0$-huge
*$n$-superstrong
*$n$-fold supercompact
*$(n+1)$-fold strong, $n$-fold extendible
*$(n+1)$-fold Woodin, $n$-fold Vopěnka
*$(n+1)$-fold Shelah
*almost $n$-huge
*super almost $n$-huge
*$n$-huge
*super $n$-huge
*$(n+1)$-superstrong
All huge variants lie at the top of the double helix restricted to some natural number $n$, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of $n$-huge cardinals, for all $n$. [1]
Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every $(n+1)$-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super $n$-huge". [1]
In terms of size, however, the least $n$-huge cardinal is smaller than the least supercompact cardinal. Assuming both exist, for any $\kappa$ which is supercompact and has an $n$-huge cardinal above it, there are $\kappa$ many $n$-huge cardinals less than $\kappa$. [1]
Every $n$-huge cardinal is $m$-huge for every $m\leq n$. Similarly with almost $n$-hugeness, super $n$-hugeness, and super almost $n$-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost-huge cardinal implies the consistency of Vopěnka's principle). [1]
References

1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)
2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
|
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
Megaupload.com was the 72nd most visited site on the Internet. It was headed by Kim Dotcom; at least that's what most people called Kim Schmitz (originally from Germany), probably because he resembles a dotcom bubble.
American authorities decided to arrest Mr Dotcom a few weeks ago and the dream came true in New Zealand today. I must proudly add that the most important collaborator of Mr Dotcom is Július "Juice" Benčko [Yoo-lee-yoos "Juice" Bench-kaw], a webdesigner born in [Czecho]Slovakia in 1977. This author of the Megaupload.com graphics managed to earn about $1 million in the last year. Not bad. More precisely, it is very bad.
When I read the Wikipedia page, it erased all my doubts that Dotcom is a villain. These people have undoubtedly lived as parasites. Dotcom has done many financial things in the past that are either illegal or at least flagrantly immoral. Benčko may receive up to 55 years in prison.
The copyright-holding companies are saying that they have lost more than $0.5 billion in revenue because of Megaupload.com. I suppose that this figure is obtained by adding the prices of all the copies of the movies etc. that were distributed via Megaupload.com. Well, I find such calculations tendentious or misleading because if the people who have used Megaupload.com couldn't have used it, most of them would never buy the movies or music or whatever the people were getting there. So I am convinced that the profit of the companies owning the copyrights would be much smaller than those $0.5 billion. In other words, a transaction has two sides and any calculation that assumes the price to be determined by one side only is skewed.
In the past, we could hear about some poor U.S. students who were suddenly caught and ordered to pay tens of millions of dollars etc. I feel almost sure that many other people are doing similar things and those unlucky folks were chosen as scapegoats – although what some of those folks have done seemed extraordinary to me, too.
In my opinion, when the owners of the copyrights compute the losses, a fairer formula should be
\[ {\rm Loss}_{\rm eff} = \sum_{i}^{\rm products} \frac{{\rm Price}_i\times N_{{\rm copies\,would\,be\,bought},i} }{\rm Probability(caught)_i} \] The summation goes over different kinds of products. For each product, the price, as required by the seller, is multiplied by the estimated number of copies that would be bought if they were not offered by the copyright violators. However, each term should also be amplified by the inverse probability that a similar culprit gets caught.
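To illustrate the arithmetic of this proposed formula, a small Python sketch; every figure in it (prices, would-be-bought counts, catch probabilities) is hypothetical:

```python
# A sketch of the "fairer" loss formula from the text:
#   Loss_eff = sum_i Price_i * N_wouldbuy_i / P(caught)_i
# All numbers below are made up, purely to show the computation.
products = [
    # (price per copy, copies that would actually have been bought, P(caught))
    (10.0, 1_000, 0.5),
    (25.0,   200, 0.25),
]

loss_eff = sum(price * n_bought / p_caught
               for price, n_bought, p_caught in products)
assert loss_eff == 10.0 * 1_000 / 0.5 + 25.0 * 200 / 0.25  # 40000.0
```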
Quite generally, I think that the fairer formula above would be yielding smaller amounts than what the copyright owners claim but they could still be big in many cases. You probably understand the reasoning behind my "fairer" formula; its aim is to balance the flow of money at least at a macroscopic level. (The accounting based on debits and credits is also known as "he should give, he has given" in Czechia, but this type of accounting was replaced by "they've stolen from us, we have stolen" [ukradli nám, ukradli jsme] during socialism.) Much more generally, I think that people should adopt a fairer formula. Copyright infringement shouldn't be something that allows the copyright holder to effectively kill the copyright violator. When someone steals a chocolate in the supermarket, it doesn't give the supermarket manager the right to kill the thief, either.
Things should be fair and balanced. This is not just an aesthetic requirement. I believe that an excessive copyright law – in either direction – creates an instability because too many people may want to abolish it completely, sometimes for their legitimate or justifiable feeling that the law is too cruel and the copyright holders have too much power and too convenient a life.
However, I want to tell those foes of any copyright laws that artists are legitimate workers, too, like workers in the other occupations that their business needs. They create some values and they have to get their money for the work (unless we want to live in a society which only produces worthless arts – coming from people who aren't good at anything else and wouldn't earn any money with it in their free time, anyway). If someone creates values that are considered high by the viewers or listeners, he should naturally get more money for that. But the production of intellectual assets is work like any other, it may be hard work, and some people may be extremely good at it.
If someone says that artists etc. have no right to demand any money for their creative works or protect the mechanisms that are needed for them to get some money, he is effectively saying that he demands everyone to respect the idea that the market value of intellectual assets and creative works is zero. But that's not what the market says. The market says that it's not zero. People are ready to pay for certain things, which proves that the value isn't zero. There's a supply of money to be paid on the demand side. Once you admit it isn't zero, then I think you should also agree that the money that the consumers are willing to pay should end up in the pockets of the genuine owners of this immaterial but nonzero type of wealth.
And in my opinion, it is obvious that the artists and the people who directly cooperate with them, and not people like Kim Dotcom, deserve to be given the money that people are willing to pay for the movies and other things that were being, and still are being, stolen by various websites. Kim Dotcom and Július Benčko only created a simple website that only became important through its content, and most of the content has been posted against the will of the primary originators of the content, e.g. the artists. It makes a difference. The copying of the files at/from Megaupload.com isn't really what we want to pay millions of dollars for, is it?
Mark Zuckerberg also created just a rather simple website (relative to its price of over $50 billion) whose value depends on the content that people contribute – but in this case, one could say that the people contribute the content voluntarily (well, in most cases). I think that these differences are self-evident to most readers; I still need to emphasize them because they're being deliberately overlooked by those who believe that the artists or inventors or authors have no right to assume that they're the owners of a monopoly to deal with their inventions or other creative works. Those folks use the loaded word "monopoly" to explain why they hate any copyright law. But any full-fledged ownership is a monopoly; it doesn't mean that we should share everything.
If I return to the title, this successful raid of the FBI against Megaupload.com shows that no special new legislation is needed. Because the FBI attacked a server whose violations of the copyright law seem obvious to most people, it's likely that the courts will confirm that it was legitimate. Most of the voters won't harass the current U.S. administration for this event, either. Some of them surely will (Alexa says that 1% of the Internet users visited Megaupload.com often). But if the FBI did a similar assault on a relatively innocent server, people would protest, courts could declare the raid illegal, and voters could even punish the government doing such things.
These are the checks and balances that do exist now and that should exist in the future. With a SOPA-like law that gives the copyright owners a total power that may precede any decision by the court and that may even circumvent it, the checks and balances would be destroyed.
Mullahs fight against Barbie
Today, the mullahs closed all shops that were selling Barbies. That proves that the mullahs are nothing else than feminists on steroids. Britain's media supervisory body, Ofcom, retaliated against the harassment of Barbie by Iran by terminating the license for the Iranian Press TV in the U.K. which was run by hardcore British Marxists and environmentalists.
Some decades ago, feminists only managed to ban one type of Barbie, one that admitted that the math class is tough. However, young U.S. girls today already tend to understand that Barbie has always been right. She mentions new research showing the non-psychological origin of the girls-boys math IQ gap, which was even reported on the Huffington Post. Geary and Stoet invalidate claims that there existed "evidence" that the gap was due to the girls' low self-esteem. All the papers that promoted the low self-esteem theory had general flaws; for example, none of them ever applied the same tests to a male sample group. See the University of Missouri press release.
Too bad that such obvious research couldn't have been routinely published when Larry Summers was in hot water.
|
Reflecting cardinals
Reflection is a fundamental motivating concern in set theory. The theory of ZFC can be equivalently axiomatized over the very weak Kripke-Platek set theory by the addition of the reflection theorem scheme, below, since instances of the replacement axiom will follow from an instance of $\Delta_0$-separation after reflection down to a $V_\alpha$ containing the range of the defined function. Several philosophers have advanced philosophical justifications of large cardinals based on ideas arising from reflection.
Reflection theorem
The Reflection theorem is one of the most important theorems in Set Theory, being the basis for several large cardinal notions. The Reflection theorem is in fact a "meta-theorem": a theorem about proving theorems. It intuitively encapsulates the idea that we can find sets resembling the class $V$ of all sets.
Theorem (Reflection): For every set $M$ and formula $\phi(x_0...x_n,p)$ ($p$ is a parameter) there exists some limit ordinal $\alpha$ with $V_\alpha\supseteq M$ such that $\phi^{V_\alpha}(x_0...x_n,p)\leftrightarrow \phi(x_0...x_n,p)$ (we say $V_\alpha$ reflects $\phi$). Assuming the Axiom of Choice, we can find some countable $M_0\supseteq M$ that reflects $\phi(x_0...x_n,p)$.
Note that by conjunction the theorem extends to any finite family of formulas $\phi_0...\phi_n$, as $V_\alpha$ reflects $\phi_0...\phi_n$ if and only if $V_\alpha$ reflects $\phi_0\land...\land\phi_n$. Another important fact is that the truth predicate for $\Sigma_n$ formulas is $\Sigma_{n+1}$, and so we can find a (club class of) ordinals $\alpha$ such that $(V_\alpha,\in, {T_{\Sigma_n}}\restriction{V_\alpha})\prec(V,\in, T_{\Sigma_n})$, where $T_{\Sigma_n}$ is the truth predicate for $\Sigma_n$ formulas; and so $ZFC\vdash Con(ZFC(\Sigma_n))$ for every $n$, where $ZFC(\Sigma_n)$ is $ZFC$ with Replacement and Separation restricted to $\Sigma_n$ formulas.
Lemma: If $W_\alpha$ is a cumulative hierarchy, there are arbitrarily large limit ordinals $\alpha$ such that $\phi^{W_\alpha}(x_0...x_n,p)\leftrightarrow \phi^W(x_0...x_n,p)$.

Reflection and correctness
For any class $\Gamma$ of formulas, an inaccessible cardinal $\kappa$ is $\Gamma$-reflecting if and only if $H_\kappa\prec_\Gamma V$, meaning that for any $\varphi\in\Gamma$ and $a\in H_\kappa$ we have $V\models\varphi[a]\iff H_\kappa\models\varphi[a]$. For example, an inaccessible cardinal is $\Sigma_n$-reflecting if and only if $H_\kappa\prec_{\Sigma_n} V$. In the case that $\kappa$ is not necessarily inaccessible, we say that $\kappa$ is $\Gamma$-correct if and only if $H_\kappa\prec_\Gamma V$.

A simple Löwenheim-Skolem argument shows that every infinite cardinal $\kappa$ is $\Sigma_1$-correct. For each natural number $n$, the $\Sigma_n$-correct cardinals form a closed unbounded proper class of cardinals, as a consequence of the reflection theorem. This class is sometimes denoted by $C^{(n)}$ and the $\Sigma_n$-correct cardinals are also sometimes referred to as the $C^{(n)}$-cardinals. Every $\Sigma_2$-correct cardinal is a $\beth$-fixed point and a limit of such $\beth$-fixed points, as well as an $\aleph$-fixed point and a limit of such. Consequently, we may equivalently define for $n\geq 2$ that $\kappa$ is $\Sigma_n$-correct if and only if $V_\kappa\prec_{\Sigma_n} V$.
A cardinal $\kappa$ is correct, written $V_\kappa\prec V$, if it is $\Sigma_n$-correct for each $n$. This is not expressible by a single assertion in the language of set theory (since if it were, the least such $\kappa$ would have to have a smaller one inside $V_\kappa$ by elementarity). Nevertheless, $V_\kappa\prec V$ is expressible as a scheme in the language of set theory with a parameter (or constant symbol) for $\kappa$.
Although it may be surprising, the existence of a correct cardinal is equiconsistent with ZFC. This can be seen by a simple compactness argument, using the fact that the theory ZFC+"$\kappa$ is correct" is finitely consistent, if ZFC is consistent, precisely by the observation about $\Sigma_n$-correct cardinals above.
A cardinal $\kappa$ is reflecting if it is inaccessible and correct. Just as with the notion of correctness, this is not first-order expressible as a single assertion in the language of set theory, but it is expressible as a scheme. The existence of such a cardinal is equiconsistent with the assertion that ORD is Mahlo.

$\Sigma_2$-correct cardinals
The $\Sigma_2$-correct cardinals are a particularly useful and robust class of cardinals, because of the following characterization: $\kappa$ is $\Sigma_2$-correct if and only if for any $x\in V_\kappa$ and any formula $\varphi$ of any complexity, whenever there is an ordinal $\alpha$ such that $V_\alpha\models\varphi[x]$, then there is $\alpha\lt\kappa$ with $V_\alpha\models\varphi[x]$. The reason this is equivalent to $\Sigma_2$-correctness is that assertions of the form $\exists \alpha\ V_\alpha\models\varphi(x)$ have complexity $\Sigma_2(x)$, and conversely all $\Sigma_2(x)$ assertions can be made in that form.
It follows, for example, that if $\kappa$ is $\Sigma_2$-correct, then any feature of $\kappa$ or any larger cardinal than $\kappa$ that can be verified in a large $V_\alpha$ will reflect below $\kappa$. So if $\kappa$ is $\Sigma_2$-reflecting, for example, then there must be unboundedly many inaccessible cardinals below $\kappa$. Similarly, if $\kappa$ is $\Sigma_2$-reflecting and measurable, then there must be unboundedly many measurable cardinals below $\kappa$.
The Feferman theory

This is the theory, expressed in the language of set theory augmented with a new unary class predicate symbol $C$, asserting that $C$ is a closed unbounded class of cardinals, and every $\gamma\in C$ has $V_\gamma\prec V$. In other words, the theory consists of the following scheme of assertions: $$\forall\gamma\in C\ \forall x\in V_\gamma\ \bigl[\varphi(x)\iff\varphi^{V_\gamma}(x)\bigr]$$ as $\varphi$ ranges over all formulas. Thus, the Feferman theory asserts that the universe $V$ is the union of a chain of elementary substructures $$V_{\gamma_0}\prec V_{\gamma_1}\prec\cdots\prec V_{\gamma_\alpha}\prec\cdots \prec V$$ Although this may appear at first to be a rather strong theory, since it seems to imply at the very least that each $V_\gamma$ for $\gamma\in C$ is a model of ZFC, this conclusion would be incorrect. In fact, the theory does not imply that any $V_\gamma$ is a model of ZFC, and does not prove $\text{Con}(\text{ZFC})$; rather, the theory implies for each axiom of ZFC separately that each $V_\gamma$ for $\gamma\in C$ satisfies it. Since the theory is a scheme, there is no way to prove from that theory that any particular $\gamma\in C$ has $V_\gamma$ satisfying more than finitely many axioms of ZFC. In particular, a simple compactness argument shows that the Feferman theory is consistent provided only that ZFC itself is consistent, since any finite subtheory of the Feferman theory is true by the reflection theorem in any model of ZFC. It follows that the Feferman theory is actually conservative over ZFC, and proves no new facts about sets that are not already provable in ZFC alone.
The Feferman theory was proposed as a natural theory in which to undertake the category-theoretic uses of Grothendieck universes, but without the large cardinal penalty of a proper class of inaccessible cardinals. Indeed, the Feferman theory offers the advantage that the universes are each elementary substructures of one another, which is a feature not generally true under the universe axiom.
Maximality Principle
The existence of an inaccessible reflecting cardinal is equiconsistent with the boldface maximality principle $\text{MP}(\mathbb{R})$, which asserts of any statement $\varphi(r)$ with parameter $r\in\mathbb{R}$ that if $\varphi(r)$ is forceable in such a way that it remains true in all subsequent forcing extensions, then it is already true; in short, $\text{MP}(\mathbb{R})$ asserts that every possibly necessary statement with real parameters is already true. Hamkins showed that if $\kappa$ is an inaccessible reflecting cardinal, then there is a forcing extension with $\text{MP}(\mathbb{R})$, and conversely, whenever $\text{MP}(\mathbb{R})$ holds, then there is an inner model with an inaccessible reflecting cardinal.
|
Sample Quantiles
The generic function quantile produces sample quantiles corresponding to the given probabilities. The smallest observation corresponds to a probability of 0 and the largest to a probability of 1.
Keywords: univar

Usage
quantile(x, …)
# S3 method for default
quantile(x, probs = seq(0, 1, 0.25), na.rm = FALSE, names = TRUE, type = 7, …)
Arguments

x
numeric vector whose sample quantiles are wanted, or an object of a class for which a method has been defined (see also ‘details’). NA and NaN values are not allowed in numeric vectors unless na.rm is TRUE.
probs
numeric vector of probabilities with values in \([0,1]\). (Values up to 2e-14 outside that range are accepted and moved to the nearby endpoint.)
na.rm
logical; if true, any NA and NaN's are removed from x before the quantiles are computed.
names
logical; if true, the result has a names attribute. Set to FALSE for speedup with many probs.
type
an integer between 1 and 9 selecting one of the nine quantile algorithms detailed below to be used.
…
further arguments passed to or from other methods.
Details
A vector of length length(probs) is returned; if names = TRUE, it has a names attribute.
The default method works with classed objects sufficiently like numeric vectors that sort and (not needed by types 1 and 3) addition of elements and multiplication by a number work correctly. Note that as this is in a namespace, the copy of sort in base will be used, not some S4 generic of that name. Also note that there is no check on the ‘correctly’, and so e.g. quantile can be applied to complex vectors which (apart from ties) will be ordered on their real parts.
Types
quantile returns estimates of underlying distribution quantiles based on one or two order statistics from the supplied elements in x at probabilities in probs. One of the nine quantile algorithms discussed in Hyndman and Fan (1996), selected by type, is employed.
All sample quantiles are defined as weighted averages of consecutive order statistics. Sample quantiles of type \(i\) are defined by: $$Q_{i}(p) = (1 - \gamma)x_{j} + \gamma x_{j+1}$$ where \(1 \le i \le 9\), \(\frac{j - m}{n} \le p < \frac{j - m + 1}{n}\), \(x_{j}\) is the \(j\)th order statistic, \(n\) is the sample size, the value of \(\gamma\) is a function of \(j = \lfloor np + m\rfloor\) and \(g = np + m - j\), and \(m\) is a constant determined by the sample quantile type.
Discontinuous sample quantile types 1, 2, and 3
For types 1, 2 and 3, \(Q_i(p)\) is a discontinuous function of \(p\), with \(m = 0\) when \(i = 1\) and \(i = 2\), and \(m = -1/2\) when \(i = 3\).
Type 1
Inverse of empirical distribution function. \(\gamma = 0\) if \(g = 0\), and 1 otherwise.
Type 2
Similar to type 1 but with averaging at discontinuities. \(\gamma = 0.5\) if \(g = 0\), and 1 otherwise.
Type 3
SAS definition: nearest even order statistic. \(\gamma = 0\) if \(g = 0\) and \(j\) is even, and 1 otherwise.
Continuous sample quantile types 4 through 9
For types 4 through 9, \(Q_i(p)\) is a continuous function of \(p\), with \(\gamma = g\) and \(m\) given below. The sample quantiles can be obtained equivalently by linear interpolation between the points \((p_k,x_k)\) where \(x_k\) is the \(k\)th order statistic. Specific expressions for \(p_k\) are given below.
Type 4
\(m = 0\). \(p_k = \frac{k}{n}\). That is, linear interpolation of the empirical cdf.
Type 5
\(m = 1/2\). \(p_k = \frac{k - 0.5}{n}\). That is a piecewise linear function where the knots are the values midway through the steps of the empirical cdf. This is popular amongst hydrologists.
Type 6
\(m = p\). \(p_k = \frac{k}{n + 1}\). Thus \(p_k = \mbox{E}[F(x_{k})]\). This is used by Minitab and by SPSS.
Type 7
\(m = 1-p\). \(p_k = \frac{k - 1}{n - 1}\). In this case, \(p_k = \mbox{mode}[F(x_{k})]\). This is used by S.
Type 8
\(m = (p+1)/3\). \(p_k = \frac{k - 1/3}{n + 1/3}\). Then \(p_k \approx \mbox{median}[F(x_{k})]\). The resulting quantile estimates are approximately median-unbiased regardless of the distribution of x.
Type 9
\(m = p/4 + 3/8\). \(p_k = \frac{k - 3/8}{n + 1/4}\). The resulting quantile estimates are approximately unbiased for the expected order statistics if x is normally distributed.
Further details are provided in Hyndman and Fan (1996) who recommended type 8. The default method is type 7, as used by S and by R < 2.0.0.
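As an illustrative sketch (not part of the R sources; `quantile_type7` is a hypothetical helper name), the default type-7 rule above, with \(p_k = (k-1)/(n-1)\), reduces in 0-based indexing to interpolating at \(h = (n-1)p\):

```python
import math

def quantile_type7(data, p):
    """Type-7 sample quantile (R's default): linear interpolation
    between order statistics with p_k = (k - 1) / (n - 1)."""
    x = sorted(data)
    n = len(x)
    h = (n - 1) * p          # 0-based fractional index
    j = math.floor(h)
    g = h - j                # gamma: the fractional part of h
    if j + 1 >= n:           # p == 1: return the largest observation
        return float(x[-1])
    return (1 - g) * x[j] + g * x[j + 1]
```

For `data = [1, 2, 3, 4, 5]`, `quantile_type7(data, 0.25)` gives 2.0 and `quantile_type7(data, 0.5)` gives 3.0, matching `quantile(1:5, c(0.25, 0.5))` in R.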
References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
Hyndman, R. J. and Fan, Y. (1996) Sample quantiles in statistical packages, American Statistician 50, 361--365. doi:10.2307/2684934.

Aliases: quantile, quantile.default

Examples
library(stats)
# NOT RUN {
quantile(x <- rnorm(1001)) # Extremes & Quartiles by default
quantile(x, probs = c(0.1, 0.5, 1, 2, 5, 10, 50, NA)/100)

### Compare different types
quantAll <- function(x, prob, ...)
  t(vapply(1:9, function(typ) quantile(x, prob = prob, type = typ, ...),
           quantile(x, prob, type = 1)))
p <- c(0.1, 0.5, 1, 2, 5, 10, 50)/100
signif(quantAll(x, p), 4)

## for complex numbers:
z <- complex(re = x, im = -10*x)
signif(quantAll(z, p), 4)
# }
Documentation reproduced from package stats, version 3.5.0, License: Part of R 3.5.0
|
$a+b\sin{x}$ and $b+a\sin{x}$ are two algebraic trigonometric expressions, where $a$ and $b$ are constants and $x$ is a variable representing an angle. In this derivative problem, the derivative of the quotient of $a+b\sin{x}$ by $b+a\sin{x}$ has to be calculated with respect to $x$.
$= \,\,\,$ $\dfrac{d}{dx}{\, \Bigg(\dfrac{a+b\sin{x}}{b+a\sin{x}}\Bigg)}$
The given function is in quotient form. So, the differentiation of the given function can be calculated by the quotient rule of derivatives. According to Leibniz’s notation, the quotient rule of differentiation can be written in the following form.
$\dfrac{d}{dx}{\, \Big(\dfrac{u}{v}\Big)}$ $\,=\,$ $\dfrac{v\dfrac{du}{dx}-u\dfrac{dv}{dx}}{v^2}$
Take $u = a+b\sin{x}$ and $v = b+a\sin{x}$, then start differentiating the given function with respect to $x$.
$= \,\,\,$ $\dfrac{(b+a\sin{x}) \times \dfrac{d}{dx}{\, (a+b\sin{x})}-(a+b\sin{x}) \times \dfrac{d}{dx}{\, (b+a\sin{x})}}{(b+a\sin{x})^2}$
The derivative of the sum of two functions can be found by the sum rule of derivatives.
$= \,\,\,$ $\dfrac{(b+a\sin{x}) \times \Big(\dfrac{d}{dx}{\,(a)}+\dfrac{d}{dx}{\,(b\sin{x})\Big)}-(a+b\sin{x}) \times \Big(\dfrac{d}{dx}{\,(b)}+\dfrac{d}{dx}{\,(a\sin{x})\Big)}}{(b+a\sin{x})^2}$
The derivative of a constant is always zero as per the derivative of constant rule.
$= \,\,\,$ $\dfrac{(b+a\sin{x}) \times \Big(0+\dfrac{d}{dx}{\,(b\sin{x})\Big)}-(a+b\sin{x}) \times \Big(0+\dfrac{d}{dx}{\,(a\sin{x})\Big)}}{(b+a\sin{x})^2}$
$= \,\,\,$ $\dfrac{(b+a\sin{x}) \times \dfrac{d}{dx}{\,(b\sin{x})}-(a+b\sin{x}) \times \dfrac{d}{dx}{\,(a\sin{x})}}{(b+a\sin{x})^2}$
In both $b\sin{x}$ and $a\sin{x}$ functions, the factors $a$ and $b$ are constants. So, they can be separated from trigonometric functions by the constant multiple rule of differentiation.
$= \,\,\,$ $\dfrac{(b+a\sin{x}) \times b \times \dfrac{d}{dx}{\,\sin{x}}-(a+b\sin{x}) \times a \times \dfrac{d}{dx}{\,\sin{x}}}{(b+a\sin{x})^2}$
$= \,\,\,$ $\dfrac{b(b+a\sin{x}) \times \dfrac{d}{dx}{\,\sin{x}}-a(a+b\sin{x}) \times \dfrac{d}{dx}{\,\sin{x}}}{(b+a\sin{x})^2}$
According to the derivative of sin function, the derivative of $\sin{x}$ with respect to $x$ is equal to $\cos{x}$.
$= \,\,\,$ $\dfrac{b(b+a\sin{x}) \times \cos{x}-a(a+b\sin{x}) \times \cos{x}}{(b+a\sin{x})^2}$
The differentiation for the given function is successfully completed and now it is time to simplify the function.
$= \,\,\,$ $\dfrac{b(b+a\sin{x})\cos{x}-a(a+b\sin{x})\cos{x}}{(b+a\sin{x})^2}$
In the numerator, $\cos{x}$ is a common factor of both terms of the expression, so it can be factored out.
$= \,\,\,$ $\dfrac{\cos{x}\Big(b(b+a\sin{x})-a(a+b\sin{x})\Big)}{(b+a\sin{x})^2}$
As per the distributive property of multiplication over addition, each constant can be multiplied by its respective factor in the numerator.
$= \,\,\,$ $\dfrac{\cos{x}\Big(b^2+ab\sin{x}-(a^2+ab\sin{x})\Big)}{(b+a\sin{x})^2}$
Now, simplify the whole function to find the differentiation of the given function mathematically.
$= \,\,\,$ $\dfrac{\cos{x}\Big(b^2+ab\sin{x}-a^2-ab\sin{x}\Big)}{(b+a\sin{x})^2}$
$= \,\,\,$ $\dfrac{\cos{x}\Big(b^2-a^2+ab\sin{x}-ab\sin{x}\Big)}{(b+a\sin{x})^2}$
$= \,\,\,$ $\require{cancel} \dfrac{\cos{x}\Big(b^2-a^2+\cancel{ab\sin{x}}-\cancel{ab\sin{x}}\Big)}{(b+a\sin{x})^2}$
$= \,\,\,$ $\dfrac{\cos{x}(b^2-a^2)}{(b+a\sin{x})^2}$
$= \,\,\,$ $\dfrac{(b^2-a^2)\cos{x}}{(b+a\sin{x})^2}$
Therefore, it is calculated in this calculus problem that the derivative of the ratio of $a+b\sin{x}$ to $b+a\sin{x}$ with respect to $x$ is equal to the quotient of $(b^2-a^2)\cos{x}$ by the square of $b+a\sin{x}$.
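The closed form can be sanity-checked numerically: the sketch below (with arbitrarily chosen sample values $a = 2$, $b = 3$) compares a central-difference approximation of the derivative against the formula derived above.

```python
import math

def f(x, a=2.0, b=3.0):
    # the original quotient (a + b sin x) / (b + a sin x)
    return (a + b * math.sin(x)) / (b + a * math.sin(x))

def f_prime_formula(x, a=2.0, b=3.0):
    # (b^2 - a^2) cos x / (b + a sin x)^2, the result derived above
    return (b * b - a * a) * math.cos(x) / (b + a * math.sin(x)) ** 2

x, h = 0.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
assert abs(numeric - f_prime_formula(x)) < 1e-6
```

At $x = 0$ the formula gives $(b^2-a^2)/b^2 = 5/9$ for these sample constants, which the numerical derivative reproduces.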
Learn how to solve easy to difficult mathematics problems of all topics in various methods with step by step process and also maths questions for practising.
|
$a, b, c$ are positives such that $a + b + c = 1$. Determine the maximal value of $$\large \sum_{cyc}\frac{1}{a(b + c)} - \frac{a^2 + b^2 + c^2}{2abc}$$
This is a problem in a recent exam, I got $3/20$ points (and also almost everybody did worse). I didn't know how to solve this problem, then our teacher went on our group chat and said that Sum Of Square works. Thanks.
Here was my attempt (during the time taking the exam). We have that
$$\sum_{cyc}\frac{1}{a(b + c)} - \frac{a^2 + b^2 + c^2}{2abc} = \sum_{cyc}\frac{1}{a(1 - a)} - \frac{1 - 2(bc + ca + ab)}{2abc}$$
$$ = \sum_{cyc}\frac{1}{a(1 - a)} + \left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c}\right) - \dfrac{1}{2abc} = 2\left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c}\right) + \left(\frac{1}{1 - a} + \frac{1}{1 - b} + \frac{1}{1 - c}\right) - \dfrac{1}{2abc}$$

$$ = 3 + 2\left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c}\right) + \left(\frac{a}{1 - a} + \frac{b}{1 - b} + \frac{c}{1 - c}\right) - \dfrac{1}{2abc}$$
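The substitution step (using $b + c = 1 - a$ and $a^2+b^2+c^2 = 1 - 2(ab+bc+ca)$) can be checked numerically; the sketch below samples arbitrary triples with $a+b+c=1$, and also evaluates the expression exactly at the symmetric point, where it equals 9 (the candidate maximum; whether 9 really is the maximum is what the SOS argument would establish).

```python
import random
from fractions import Fraction as F

def lhs(a, b, c):
    # the original expression
    return (1/(a*(b + c)) + 1/(b*(c + a)) + 1/(c*(a + b))
            - (a*a + b*b + c*c) / (2*a*b*c))

def rhs(a, b, c):
    # the form after substituting a + b + c = 1 (so b + c = 1 - a, etc.)
    q = a*b + b*c + c*a
    return (1/(a*(1 - a)) + 1/(b*(1 - b)) + 1/(c*(1 - c))
            - (1 - 2*q) / (2*a*b*c))

random.seed(0)
for _ in range(100):
    x, y, z = (random.uniform(0.05, 1.0) for _ in range(3))
    s = x + y + z
    a, b, c = x/s, y/s, z/s           # a + b + c == 1
    assert abs(lhs(a, b, c) - rhs(a, b, c)) < 1e-6

# exact arithmetic at the symmetric point a = b = c = 1/3
print(lhs(F(1, 3), F(1, 3), F(1, 3)))  # 9
```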
Below is the solution I can come up with after taking the exam. I'm disappointed in myself.
|
Tool to make calculations with time dilation. Time dilation is an effect of the special relativity which states that time is going slower if an object is moving.
Time Dilation - dCode
Tag(s) : Physics-Chemistry, Date and Time
The dilation of time is a consequence of Einstein's theory of special relativity: the perceived flow of time differs according to the speed of an object relative to an observer.
The calculation involves the Lorentz factor, via this formula:
$$ \Delta{t} = \frac{\Delta\tau}{\sqrt{1 - \frac{v^2}{c^2}}}\ $$
Example: A trip lasting 1 hour for a person (A) traveling at 99% of the speed of light appears to last about 7 hours for a stationary person (B).
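The formula can be sketched in a few lines of Python (the function name is illustrative), reproducing the 99%-of-$c$ example:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def dilated_time(proper_time, v):
    """Time measured by a stationary observer for a clock moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)  # Lorentz factor
    return gamma * proper_time

# 1 hour of proper time at 99% of c is seen as about 7.09 hours
print(dilated_time(1.0, 0.99 * C))
```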
According to Albert Einstein's theory of relativity, time is defined in each frame of reference. Associating a clock with each frame, an observer outside a clock's frame of reference sees the moving clock tick more slowly than a stationary one (time dilation).
An experiment was carried out by flying atomic clocks on airplanes and comparing them, on return to Earth, with clocks that stayed on the ground: the observed shift matched the one predicted by the theory.
The speed of light in a vacuum is 299792458 m/s (an order of magnitude of 300000 km/s), or about 1079252848 km/h.
The Twin Paradox is a thought experiment presented by Paul Langevin, which is at first sight paradoxical / contradictory.
Take twins, one of them travels at a speed close to that of light, then returns to Earth.
According to the time dilation phenomenon, the twin who stayed on Earth has lived longer than the one who left for space (who measured fewer seconds during the trip), and so the traveling twin has become younger than his twin on Earth.
However, by changing the frame of reference, the traveling twin may consider that he remained motionless and that it is his brother who traveled at a speed close to that of light, and thus conclude that his brother should be the younger one.
The atomic clock experiments have clarified the matter: it is indeed the traveling twin who ends up younger.
|
I'm going to try to design an algorithm to find all the rational roots of a polynomial equation in the range [a, b]. Can someone please tell me which algorithm currently solves the problem with the lowest worst-case complexity? This algorithm will be for a general purpose computer (Turing machine).
The paper Computing Real Roots of Real Polynomials by Sagraloff and Mehlhorn from 2015 provides an almost optimal algorithm and references for simpler algorithms that might be used in practice. The CGAL library (in version 4.9) for example uses the method developed by Arno Eigenwillig in his PhD thesis
Real Root Isolation for Exact and Approximate Polynomials Using Descartes' Rule of Signs.
If you only want to find all rational roots, you can simply use the rational root theorem. This theorem states that, given a polynomial $a_n x^n + a_{n-1}x^{n-1} + \ldots + a_1x+a_0$, for any rational root $x=p/q$, where $p,q\in \mathbb N$ and $GCD(p,q)=1$, we have: $p$ is a divisor of $a_0$ and $q$ is a divisor of $a_n$.
So, one possible algorithm is to factorise $a_0$ and $a_n$ to get all possible $p,q$ and simply 'fill in' the combinations as a ratio to see if it is a root. This way, we find all possible roots. The complexity of the root finding is negligible to the factorisation, so the complexity of this method is the complexity of factorising $a_0$ and $a_n$, which will take a long time for large $a_0$ and $a_n$ (but is fast for small $a_0$ and $a_n$, independent of the rest of the equation!)
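A direct sketch of this method in Python (the function names and the optional `lo`/`hi` range filter are illustrative; trial-division factoring is used, so this is only sensible for small constant and leading coefficients, and it assumes `coeffs[0] != 0` — otherwise factor out $x$ first):

```python
from fractions import Fraction
from math import gcd, isqrt

def divisors(n):
    """All positive divisors of |n| by trial division (assumes n != 0)."""
    n = abs(n)
    ds = set()
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            ds.update((d, n // d))
    return ds

def rational_roots(coeffs, lo=None, hi=None):
    """All rational roots of sum(coeffs[i] * x**i), optionally within [lo, hi]."""
    a0, an = coeffs[0], coeffs[-1]
    roots = set()
    for p in divisors(a0):
        for q in divisors(an):
            if gcd(p, q) != 1:
                continue
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if (lo is None or cand >= lo) and (hi is None or cand <= hi):
                    if sum(c * cand**i for i, c in enumerate(coeffs)) == 0:
                        roots.add(cand)
    return roots

# 2x^3 - 3x^2 - 3x + 2 has rational roots -1, 1/2 and 2
print(rational_roots([2, -3, -3, 2]))
```

Exact `Fraction` arithmetic avoids any floating-point false positives when testing candidates.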
There is a speedup, however. If a root $p/q\in [a,b]$, this means that $p\in [aq,bq]$ and $q\in [p/b,p/a]$. If $a_0$ is small, but $a_n$ is large, we can find all divisors $p_i$ of $a_0$ and test for all integers in the range $[p_i/b,p_i/a]$ whether they divide $a_n$. If $a_n$ is large and $[a,b]$ not too big, this will be a lot faster than factoring $a_n$. This means that we only have to do one factorisation and can do it on the smallest of $a_0$ and $a_n$.
So, to get a complete overview of the worst case complexity for the methods described, define $a_{\max}=\max\{a_0,a_n\}$ and $a_{\min} = \min\{a_0,a_n\}$. Assume $b\geq a>1$ (another worst case exists when $a,b<1$, but that will have the same running time, only with $1/a$ and $1/b$). We will factor $a_{\min}$ and consider all its divisors, of which there are $O(\log n)$ on average. (The actual worst case upper bound is $\exp(O(\frac{\log n}{\log\log n}))$, but this factor will likely be dominated anyway, so I'd rather keep it simple. A derivation and more is given here.)
All divisors of $a_{\min}$ are in the range $[1,\sqrt{a_{\min}}]$, so we do at most $\lceil (b-a)\sqrt{a_{\min}} \rceil$ divisor tests per factor of $a_\min$. Since we know that any factor of $n$ must be in $[1,\sqrt{n}]$, we have that $b-a\leq \sqrt{a_\max}$ to be useful (if not, replace $[a,b]$ by $[1,\sqrt{a_\max}]$). So, we do at most $\lceil \sqrt{a_{\min}a_\max} \rceil$ divisor tests. Testing whether a number is a divisor of $a_\max$ takes $O(\log a_\max)$ time, using the Euclidean algorithm.
Factoring $a_\min$ takes $O(F(a_\min))$, where $F(n):=\exp ((\log n)^{1/3}(\log \log n)^{2/3})$.
So, in total, this algorithm has a worst case complexity of $O(F(a_\min) + (b-a)\sqrt{a_\min}\log{a_\min}\log{a_\max})$ time. Since we can assume $(b-a)\leq a_\max$, the factoring is the only non-polynomial (in $a_\min$ or $a_\max$) part of this formula, so we get that the complexity is simply $O(F(a_\min))$.
I highly doubt that it is possible to find all rational roots within a range without factoring at least one of the coefficients, because that would mean (by the rational root theorem), that we have found a more efficient algorithm for factoring! In that case, the algorithm I gave is asymptotically optimal, as it is the cost of factoring the smallest of the coefficients $a_0$ and $a_n$.
|
In every Hilbert space $H \neq \{0 \}$, there exists a total orthonormal set.
I think I've understood the proof given by Erwin Kreyszig in
Introductory Functional Analysis With Applications.
The following questions arise in my mind:
Is there a total orthonormal set in *every* inner product space?
Is there a total proper subset in every normed (or Banach) space?
A subset $M$ of a normed space $X$ is said to be total in $X$ if span of $M$ is dense in $X$.
If $X$ is an inner product space and if $M (\neq \emptyset) \subset X$ is total in $X$, then $$M^\perp \colon= \{ \ x \in X \ \colon \ \langle x, v \rangle = 0 \ \mbox{ for all } \ v \in M \ \} = \{0 \}.$$
If $X$ is a Hilbert space and if $M^\perp = \{0 \}$, then $M$ is total in $X$.
|
Wikipedia says that if we can prove $\forall x_1...\forall x_n \exists! y . \phi(y,x_1,...,x_n)$, then introducing a function symbol $f$ and the axiom $\forall x_1...\forall x_n.\phi(f(x_1,...,x_n),x_1,...,x_n)$ gives a conservative extension of the original theory. I'd like to understand the importance of the uniqueness requirement. Specifically,
The "0-ary" case: If we've proved $\exists x.\phi(x)$ without the uniqueness part, is it safe (i.e. conservative) to introduce a constant symbol $c$ and an axiom $\phi(c)$? We seem to allow this in natural-language proofs (as in "at least one element satisfies $\phi$, so let $c$ be one of them").
If we start with ZF set theory and allow function symbol extensions without the uniqueness requirement, is the resulting proof system equivalent to ZFC in some sense? (It seems like it would prove the axiom of choice. Is it stronger than ZFC?)
Edit: I came across the conservativity theorem, which suggests that the uniqueness requirement is
not necessary. Now I'm wondering: Does the proof of the conservativity theorem require the axiom of choice (in the metatheory)? What's wrong with this argument: By introducing a choice function symbol (as described in the second part of this question), we can prove AC from ZF. By the conservativity theorem, we conclude that AC is a consequence of ZF. This contradicts the independence of AC.
|
The question I have been given is
Given that $z=2e^{i\frac{\pi}{7}}$, find the smallest positive integer of $k$ such that $z\times{z^2}\times{z^3}\times{...}\times{z^k}$ is real, and state the value of $|z\times{z^2}\times{z^3}\times{...}\times{z^k}|$ in this case.
A previous part of the question had me show that for any complex number $z=re^{i\theta}$, $z\times{z^2}\times{z^3}\times{...}\times{z^k}=(re^{i\theta})^{\frac{k(k+1)}{2}}$, which I achieved using de Moivre's Theorem
Here's what I've tried doing;
Using Euler's formula, I expanded the product to
$2^{\frac{k(k+1)}{2}}\left(\cos{\left(\frac{k(k+1)\frac{\pi}{7}}{2}\right)}+i\sin{\left(\frac{k(k+1)\frac{\pi}{7}}{2}\right)}\right)$
For a number to be real, I know that its imaginary component must equal zero (the modulus factor is a positive real, so it does not matter here), so I figured that I would have to find the smallest positive integer $k$ that satisfies the equation
$$\sin{\left(\frac{k(k+1)\frac{\pi}{7}}{2}\right)}=0$$
From here, I tried the following; $$\therefore \frac{k(k+1)\frac{\pi}{7}}{2}=\arcsin{0}=0+a\pi\qquad a\in\mathbb{Z} $$ $$\therefore k(k+1)=\frac{2a\pi}{\frac{\pi}{7}}$$ $$\therefore k^2+k=14a$$ $$\therefore k^2+k-14a=0$$ $$\therefore k=\frac{-1\pm\sqrt{1+56a}}{2}$$ I'm not sure where to go from here. If $a$ were a defined constant, I could easily find $k$; however, it isn't, and I'm trying to find the smallest positive integer. According to the textbook, the solution is $k=6$, giving a modulus of $2^{21}$, but I do not understand how they came to this answer. I think I've gone about this the wrong way and there is probably a simpler method.
Edit: Fixed my dumb $\arcsin{0}$ mistake..
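The condition $\sin\!\big(k(k+1)\pi/14\big)=0$ is just $14 \mid k(k+1)$, which can be brute-forced directly (a small sketch, not the intended exam method):

```python
# smallest positive k with k(k+1)/2 * (pi/7) a multiple of pi,
# i.e. with k(k+1) divisible by 14
k = 1
while (k * (k + 1)) % 14 != 0:
    k += 1
print(k)                           # smallest valid k
print(2 ** (k * (k + 1) // 2))     # modulus r^(k(k+1)/2) with r = 2
```

This yields $k = 6$ (since $6 \cdot 7 = 42 = 3 \cdot 14$) and modulus $2^{21}$, matching the textbook answer.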
|
A trigonometric identity to expand a trigonometric function having difference of two angles is called the angle difference identity. In trigonometry, there are four angle difference trigonometric identities and they’re used as formulas in mathematics. Let’s start to study all the angle difference identities with proofs.
$(1) \,\,\,\,$ $\sin{(A-B)}$ $\,=\,$ $\sin{A}\cos{B}$ $-$ $\cos{A}\sin{B}$
$(2) \,\,\,\,$ $\sin{(x-y)}$ $\,=\,$ $\sin{x}\cos{y}$ $-$ $\cos{x}\sin{y}$
$(3) \,\,\,\,$ $\sin{(\alpha-\beta)}$ $\,=\,$ $\sin{\alpha}\cos{\beta}$ $-$ $\cos{\alpha}\sin{\beta}$
$(1) \,\,\,\,$ $\cos{(A-B)}$ $\,=\,$ $\cos{A}\cos{B}$ $+$ $\sin{A}\sin{B}$
$(2) \,\,\,\,$ $\cos{(x-y)}$ $\,=\,$ $\cos{x}\cos{y}$ $+$ $\sin{x}\sin{y}$
$(3) \,\,\,\,$ $\cos{(\alpha-\beta)}$ $\,=\,$ $\cos{\alpha}\cos{\beta}$ $+$ $\sin{\alpha}\sin{\beta}$
$(1) \,\,\,\,$ $\tan{(A-B)}$ $\,=\,$ $\dfrac{\tan{A}-\tan{B}}{1+\tan{A}\tan{B}}$
$(2) \,\,\,\,$ $\tan{(x-y)}$ $\,=\,$ $\dfrac{\tan{x}-\tan{y}}{1+\tan{x}\tan{y}}$
$(3) \,\,\,\,$ $\tan{(\alpha-\beta)}$ $\,=\,$ $\dfrac{\tan{\alpha}-\tan{\beta}}{1+\tan{\alpha}\tan{\beta}}$
$(1) \,\,\,\,$ $\cot{(A-B)}$ $\,=\,$ $\dfrac{\cot{B}\cot{A}+1}{\cot{B}-\cot{A}}$
$(2) \,\,\,\,$ $\cot{(x-y)}$ $\,=\,$ $\dfrac{\cot{y}\cot{x}+1}{\cot{y}-\cot{x}}$
$(3) \,\,\,\,$ $\cot{(\alpha-\beta)}$ $\,=\,$ $\dfrac{\cot{\beta}\cot{\alpha}+1}{\cot{\beta}-\cot{\alpha}}$
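The sine and cosine difference identities above can be verified numerically at random angle pairs (a sketch; the tangent and cotangent identities then follow by dividing, away from the poles):

```python
import math
import random

random.seed(1)
for _ in range(200):
    A = random.uniform(-3.0, 3.0)
    B = random.uniform(-3.0, 3.0)
    # sin(A - B) = sin A cos B - cos A sin B
    assert math.isclose(math.sin(A - B),
                        math.sin(A)*math.cos(B) - math.cos(A)*math.sin(B),
                        abs_tol=1e-12)
    # cos(A - B) = cos A cos B + sin A sin B
    assert math.isclose(math.cos(A - B),
                        math.cos(A)*math.cos(B) + math.sin(A)*math.sin(B),
                        abs_tol=1e-12)
print("sin/cos angle-difference identities hold at 200 random angle pairs")
```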
|
Mathematically, the exact value of cot of $45$ degrees can be derived in three different methods. One of the three methods is a trigonometric approach, while the remaining two are slightly different geometric methods. Study all of them here to know how to find the $\cot{(50^g)}$ value in trigonometry.
On the basis of the direct relation between the adjacent and opposite sides, the value of $\cot{\Big(\dfrac{\pi}{4}\Big)}$ is derived by a theoretical geometric method.
$\cot{(45^°)}$ $\,=\,$ $\dfrac{Length \, of \, Adjacent \, side}{Length \, of \, Opposite \, side}$
$\implies \cot{(45^°)} \,=\, \dfrac{PR}{QR}$
The lengths of adjacent and opposite sides are equal when angle of right triangle is $\dfrac{\pi}{4}$ radians. Therefore, the length of both opposite and adjacent sides is denoted by $l$ in this case.
$\implies \cot{(45^°)} \,=\, \dfrac{l}{l}$
$\implies \cot{(45^°)} \,=\, \require{cancel} \dfrac{\cancel{l}}{\cancel{l}}$
$\,\,\, \therefore \,\,\,\,\,\, \cot{(45^°)} \,=\, 1$
You can even find the exact value of cot of $\dfrac{\pi}{4}$ radians on your own by constructing a right triangle with $45$ degrees angle by using geometrical tools. Here, you are going to learn how to find it geometrically.
The $\Delta MKL$ is a right triangle with $45$ degrees angle. Now, let’s find the exact value of $\cot{(50^g)}$ from this triangle.
$\cot{(45^°)} = \dfrac{Length \, of \, Adjacent \, side}{Length \, of \, Opposite \, side}$
$\implies \cot{(45^°)} \,=\, \dfrac{KM}{LM}$
Actually, the lengths of adjacent and opposite sides are unknown but they can be measured by a ruler.
If you measure them with a ruler, you will see that the lengths of both the adjacent side ($KM$) and the opposite side ($LM$) are equal, each approximately $7.1 \, cm$.
$\implies \cot{(45^°)} \,=\, \dfrac{KM}{LM} = \dfrac{7.1}{7.1}$
$\implies \cot{(45^°)} \,=\, \require{cancel} \dfrac{\cancel{7.1}}{\cancel{7.1}}$
$\,\,\, \therefore \,\,\,\,\,\, \cot{(45^°)} \,=\, 1$
The value of cotangent of $45$ degrees can be exactly evaluated in trigonometry by the reciprocal identity of tan function.
$\cot{(45^°)} \,=\, \dfrac{1}{\tan{(45^°)}}$
Now, substitute the value of tan of $45$ degrees to get the $\cot{\Big(\dfrac{\pi}{4}\Big)}$ value.
$\implies \cot{(45^°)} \,=\, \dfrac{1}{1}$
$\,\,\, \therefore \,\,\,\,\,\, \cot{(45^°)} \,=\, 1$
According to proofs of cot of $45$ degrees from above three methods, the exact value of $\cot{\Big(\dfrac{\pi}{4}\Big)}$ is equal to one.
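The agreement of the ratio-based and reciprocal-identity computations can also be checked numerically (a small sketch; results match 1 up to floating point):

```python
import math

angle = math.pi / 4          # 45 degrees = pi/4 radians = 50 gradians
by_ratio = math.cos(angle) / math.sin(angle)   # adjacent/opposite ratio
by_reciprocal = 1 / math.tan(angle)            # reciprocal identity of tan
print(by_ratio, by_reciprocal)  # both approximately 1.0
```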
|
Let $Z^t = (Y_1,\ldots,Y_t)$ be a sequence of random variables each taking values in $Y$. The random variables are not necessarily i.i.d but we know the joint distributions. i.e for every $z = (z_1,...,z_t)$ we know $P^{Z^t}(z)$
The min-entropy of a random variable $X$ is defined as $-\log_2(\max P(x) )$ for x in the value set of $X$.
Finally, we define a sequence of values $H^\infty(t) = -\log_2(\max P^{Z^t}(z))$ for $Z^t$ defined as above.
How can we show that $H^\infty(t)$ is monotonically increasing in t? i.e for $t' \geq t$ it is the case that $H^\infty(t') \geq H^\infty(t)$.
What I am trying without success is to show that $\max P^{Z^t}(z^t) \geq \max P^{Z^{t'}}(z^{t'})$ where $t \leq t'$.
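One way to see the inequality: for $t' \geq t$ and any $z^{t'}$, we have $P^{Z^{t'}}(z^{t'}) \leq P^{Z^t}(z^{t'}_1,\ldots,z^{t'}_t)$, because the prefix marginal is a sum over all extensions of that prefix. Taking maxima over $z$ gives $\max P^{Z^{t'}} \leq \max P^{Z^t}$, and applying $-\log_2$ flips the inequality. A small numeric illustration (the joint distribution below is arbitrary and non-i.i.d.):

```python
import math
from collections import defaultdict

# an arbitrary (non-i.i.d.) joint distribution over length-3 sequences
joint = {
    (0, 0, 0): 0.40, (0, 0, 1): 0.05, (0, 1, 0): 0.05,
    (1, 0, 1): 0.20, (1, 1, 0): 0.10, (1, 1, 1): 0.20,
}
assert abs(sum(joint.values()) - 1.0) < 1e-12

def min_entropy_prefix(joint, t):
    marg = defaultdict(float)
    for z, p in joint.items():
        marg[z[:t]] += p          # marginal of the first t coordinates
    return -math.log2(max(marg.values()))

H = [min_entropy_prefix(joint, t) for t in (1, 2, 3)]
print(H)
assert all(H[i] <= H[i + 1] for i in range(len(H) - 1))  # monotone in t
```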
|
Let $V$ be a finite-dimensional vector space, $\:A\:$ a matrix (or linear transformation) from $V$ to $W$ (which sends a vector $v \in V$ to $Av \in W$), $x,y \in Rowspace(A),\: x \neq y$. $$\text{If } Ax = Ay \:\text{then}\: Ax-Ay=A(x-y)=0, \:\text{so}\: x-y \in Nullspace(A)$$ Since $Rowspace(A)$ is a linear subspace of $V$, every linear combination of $x,y$ is in $Rowspace(A)$, in particular $x-y$, so $x-y \in Rowspace(A)$ and $x-y \in Rowspace(A) \cap Nullspace(A)$.
The last step uses the "Rank-Nullity theorem" (you can look for it in any linear algebra book, for example, Friedberg's "Linear Algebra"), which in terms of matrices says:
$$ \dim(Rowspace(A)) + \dim(Nullspace(A)) = \dim(V)$$
Moreover, for a real matrix the row space and the null space are orthogonal complements in $V$, so the sum is direct, $V = Rowspace(A) \oplus Nullspace(A)$, and in particular $Rowspace(A) \cap Nullspace(A) = \{0\}$. By this last statement, $x-y=0$, so $x=y$, a contradiction of the hypothesis $x\neq y$.
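For real matrices, the fact $Rowspace(A) \cap Nullspace(A) = \{0\}$ comes from orthogonality: if $x = A^T v$ and $Ax = 0$, then $x^T x = v^T A x = 0$, so $x = 0$. A tiny pure-Python illustration with a hand-picked rank-1 example:

```python
def matvec(A, x):
    # matrix-vector product over plain lists
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2],
     [2, 4]]          # rank 1

row = [1, 2]          # spans Rowspace(A)
null = [2, -1]        # spans Nullspace(A)

assert matvec(A, null) == [0, 0]   # null is in the null space
assert dot(row, null) == 0         # row space is orthogonal to null space
# hence a vector in both spaces is orthogonal to itself, i.e. zero
```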
If you want to really learn Linear Algebra I suggest you expand your learning experience from a book.
|
Equivalence of Definitions of Equivalent Division Ring Norms

Theorem
Let $R$ be a division ring.
Let $\norm {\,\cdot\,}_1: R \to \R_{\ge 0}$ and $\norm {\,\cdot\,}_2: R \to \R_{\ge 0}$ be norms on $R$.
$\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ are equivalent if and only if for all sequences $\sequence {x_n}$ in $R$: $\sequence {x_n}$ converges to $l$ in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ converges to $l$ in $\norm{\,\cdot\,}_2$

$\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ are equivalent if and only if for all sequences $\sequence {x_n}$ in $R$: $\sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_2$

$\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ are equivalent if and only if $\forall x \in R: \norm{x}_1 \lt 1 \iff \norm{x}_2 \lt 1$

$\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ are equivalent if and only if $\exists \alpha \in \R_{\gt 0}: \forall x \in R: \norm{x}_1 = \norm{x}_2^\alpha$

$\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ are equivalent if and only if for all sequences $\sequence {x_n}$ in $R$: $\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2$

Proof
Topologically Equivalent implies Convergently Equivalent

Let $d_1$ and $d_2$ be topologically equivalent metrics.
Let $\sequence {x_n}$ converge to $l$ in $\norm {\,\cdot\,}_1$.
Let $\epsilon \in \R_{\gt 0}$ be given.
By the definition of an open set in a metric space then:
$\exists \delta \in \R_{\gt 0}: \map {B_\delta^1} l \subseteq \map {B_\epsilon^2} l$
Hence:
$\forall x \in R: \norm {x - l}_1 < \delta \implies \norm {x - l}_2 < \epsilon$

Since $\sequence {x_n}$ converges to $l$ in $\norm{\,\cdot\,}_1$ then:

$\exists N \in \N: \forall n \ge N: \norm {x_n - l}_1 < \delta$
Hence:
$\exists N \in \N: \forall n \ge N: \norm {x_n - l}_2 < \epsilon$

Since $\sequence {x_n}$ and $\epsilon \gt 0$ were arbitrary, it has been shown that for all sequences $\sequence {x_n}$ in $R$:

$\sequence {x_n}$ converges to $l$ in $\norm {\,\cdot\,}_1 \implies \sequence {x_n}$ converges to $l$ in $\norm {\,\cdot\,}_2$

By a similar argument it is shown that for all sequences $\sequence {x_n}$ in $R$:

$\sequence {x_n}$ converges to $l$ in $\norm {\,\cdot\,}_2 \implies \sequence {x_n}$ converges to $l$ in $\norm {\,\cdot\,}_1$
The result follows.
$\Box$
Convergently Equivalent implies Null Sequence Equivalent

Let $\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ satisfy:
for all sequences $\sequence {x_n}$ in $R$: $\sequence {x_n}$ converges to $l$ in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ converges to $l$ in $\norm{\,\cdot\,}_2$

Let $0_R$ be the zero of $R$. Then:

$\sequence {x_n}$ converges to $0_R$ in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ converges to $0_R$ in $\norm{\,\cdot\,}_2$
Hence:
$\sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_2$
$\Box$
Null Sequence Equivalent implies Open Unit Ball Equivalent

Let $\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ satisfy:
for all sequences $\sequence {x_n}$ in $R$: $\sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_1 \iff \sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_2$

Let $x \in R$.
Let $\sequence {x_n}$ be the sequence defined by: $\forall n: x_n = x^n$.
\(\displaystyle \norm{x}_1 \lt 1\) \(\iff\) $\sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_1$ (Sequence of Powers of Number less than One in Normed Division Ring)

\(\iff\) $\sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_2$ (by assumption)

\(\iff\) $\norm{x}_2 \lt 1$ (Sequence of Powers of Number less than One in Normed Division Ring)

$\Box$
Open Unit Ball Equivalent implies Norm is Power of Other Norm

Let $\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ satisfy:
$\forall x \in R: \norm{x}_1 \lt 1 \iff \norm{x}_2 \lt 1$

Case 1
Suppose that every $x \in R$ with $x \neq 0_R$ satisfies $\norm{x}_1 \ge 1$.
Then:
$\norm{\,\cdot\,}_1$ is the trivial norm.
By assumption, for all $x \in R, x \neq 0_R$, then $\norm{x}_2 \ge 1$.
Similarly $\norm{\,\cdot\,}_2$ is the trivial norm.
Hence $\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ are equal.
For $\alpha = 1$ the result follows.
$\Box$
Case 2
Let $x_0 \in R$ such that $x_0 \neq 0_R$ and $\norm{x_0}_1 \lt 1$.
By assumption then $\norm{x_0}_2 \lt 1$.
Let $\alpha = \dfrac {\log \norm {x_0}_1 } {\log \norm {x_0}_2 }$.
Then $\norm{x_0}_1 = \norm{x_0}_2^\alpha$.
Since $\norm{x_0}_1, \norm{x_0}_2 \lt 1$ then:

$\log \norm {x_0}_1 < 0$

$\log \norm {x_0}_2 < 0$
So $\alpha \gt 0$.
A standard argument comparing $\norm x_1$ and $\norm x_2$ against powers of $x_0$ then extends this to every element of $R$:

$\forall x \in R: \norm{x}_1 = \norm{x}_2^\alpha$

$\Box$
Norm is Power of Other Norm implies Topologically Equivalent

Let $\norm {\,\cdot\,}_1$ and $\norm {\,\cdot\,}_2$ satisfy:
$\exists \alpha \in \R_{\gt 0}: \forall x \in R: \norm x_1 = \norm x_2^\alpha$
Let $x \in R$ and $\epsilon \in \R_{\gt 0}$
Then for $y \in R$:
\(\displaystyle \norm {y - x}_1 < \epsilon \leadstoandfrom \norm {y - x}_2^\alpha < \epsilon \leadstoandfrom \norm {y - x}_2 < \epsilon^{1 / \alpha}\)
Hence:
$\map {B^1_\epsilon} x = \map {B^2_{\epsilon^{1 / \alpha} } } x$
where:
$\map {B^1_\epsilon} x$ is the open ball in $d_1$ centered on $x$ with radius $\epsilon$

$\map {B^2_{\epsilon^{1 / \alpha} } } x$ is the open ball in $d_2$ centered on $x$ with radius $\epsilon^{1 / \alpha}$

Since $x$ and $\epsilon$ were arbitrary, every open ball of $d_1$ is an open ball of $d_2$.

Similarly, for $y \in R$:
\(\displaystyle \norm {y - x}_2 < \epsilon \leadstoandfrom \norm {y - x}_2^\alpha < \epsilon^\alpha \leadstoandfrom \norm {y - x}_1 < \epsilon^\alpha\)
So:

$\map {B^2_\epsilon} x = \map {B^1_{\epsilon^\alpha} } x$

and every open ball of $d_2$ is an open ball of $d_1$. Hence $d_1$ and $d_2$ are topologically equivalent.

$\Box$
Norm is Power of Other Norm implies Cauchy Sequence Equivalent

Let $\norm{\,\cdot\,}_1$ and $\norm{\,\cdot\,}_2$ satisfy:
$\exists \alpha \in \R_{\gt 0}: \forall x \in R: \norm{x}_1 = \norm{x}_2^\alpha$
Let $\sequence {x_n}$ be a Cauchy sequence in $\norm{\,\cdot\,}_1$.
Let $\epsilon \gt 0$ be given.
Since $\sequence {x_n}$ is a Cauchy sequence then: $\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_1 \lt \epsilon^\alpha$
Then:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2^\alpha \lt \epsilon^\alpha$
Hence:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2 \lt \epsilon$

So $\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2$.

It follows that for all sequences $\sequence {x_n}$ in $R$:

$\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1 \implies \sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2$
$\Box$
Let $\sequence {x_n}$ be a Cauchy sequence in $\norm{\,\cdot\,}_2$.
Let $\epsilon \gt 0$ be given.
Since $\sequence {x_n}$ is a Cauchy sequence then:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2 \lt \epsilon^{1/\alpha}$
Then:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_2^\alpha \lt \epsilon$
Hence:
$\exists N \in \N: \forall n,m \ge N: \norm {x_n - x_m}_1 \lt \epsilon$
So $\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1$.
It follows that for all sequences $\sequence {x_n}$ in $R$:
$\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_2 \implies \sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1$
The result follows.
$\Box$
The contrapositive is proved.
Let there exist $x \in R$ such that $\norm{x}_1 \lt 1$ and $\norm{x}_2 \ge 1$.
Let $\sequence {x_n}$ be the sequence defined by: $\forall n: x_n = x^n$.
By Sequence of Powers of Number less than One in Normed Division Ring then $\sequence {x_n}$ is a null sequence in $\norm{\,\cdot\,}_1$.
By convergent sequence in normed division ring is a Cauchy sequence then $\sequence {x_n}$ is a Cauchy sequence in $\norm{\,\cdot\,}_1$.
By norm of unity and the assumption $\norm{x}_1 \lt 1$ then $x \neq 1_R$.
Then $x - 1_R \neq 0_R$.
By norm axiom (N1) (Positive Definiteness) then $\norm {x - 1_R}_2 \gt 0$.
Let $\epsilon = \dfrac {\norm {x - 1_R}_2} 2$.
Then $\norm {x - 1_R}_2 \gt \epsilon$.
Since $\norm{x}_2 \ge 1$, for all $n \in \N$:
$\norm{x_n}_2 = \norm{x^n}_2 = \norm{x}_2^n \ge 1$ by the definition of $x_n$ and norm axiom (N2) (Multiplicativity).
Also, for all $n \in \N$:
$\norm {x_{n+1} - x_n}_2 = \norm {x^{n+1} - x^n}_2 = \norm {x^n x - x^n}_2 = \norm {x^n \paren {x - 1_R} }_2 = \norm {x^n}_2 \norm {x - 1_R}_2 \ge \norm {x - 1_R}_2 \gt \epsilon$
using norm axiom (N2) (Multiplicativity) and $\norm {x^n}_2 \ge 1$.
So $\sequence {x_n}$ is not a Cauchy sequence in $\norm{\,\cdot\,}_2$.
The theorem now follows by the Rule of Transposition.
$\blacksquare$
|
$\log_{3}{(5x-2)}$ $-$ $2\log_{3}{\sqrt{3x+1}}$ $\,=\,$ $1-\log_{3}{4}$ is a logarithmic equation in which every logarithm is taken to base $3$.
The square root of $3x+1$ in the second term can be eliminated by moving the factor $2$ inside the logarithm as an exponent of $\sqrt{3x+1}$, using the power rule of logarithms.
$\implies$ $\log_{3}{(5x-2)}$ $-$ $\log_{3}{{(\sqrt{3x+1})}^2}$ $\,=\,$ $1-\log_{3}{4}$
$\implies$ $\log_{3}{(5x-2)}$ $-$ $\log_{3}{(3x+1)}$ $\,=\,$ $1-\log_{3}{4}$
Rearrange the equation so that all logarithmic terms are on one side and the constant term is on the other side.
$\implies$ $\log_{3}{(5x-2)}$ $-$ $\log_{3}{(3x+1)}$ $+$ $\log_{3}{4}$ $\,=\,$ $1$
The minus sign between the first two log terms indicates a subtraction of logarithms, so they can be combined using the quotient rule of logarithms.
$\implies$ $\log_{3}{\Bigg(\dfrac{5x-2}{3x+1}\Bigg)}$ $+$ $\log_{3}{4}$ $\,=\,$ $1$
The plus sign between the log terms indicates an addition of logarithms, so they can be merged into a single logarithmic term using the product rule of logarithms.
$\implies$ $\log_{3}{\Bigg(\dfrac{4(5x-2)}{3x+1}\Bigg)}$ $\,=\,$ $1$
Write the logarithmic equation in exponential form using the relation between logarithms and exponents.
$\implies$ $\dfrac{4(5x-2)}{3x+1}$ $\,=\,$ $3^1$
$\implies$ $\dfrac{4(5x-2)}{3x+1}$ $\,=\,$ $3$
Cross-multiply to clear the fraction, then solve the resulting linear equation for $x$.
$\implies$ $4(5x-2)$ $\,=\,$ $3(3x+1)$
$\implies$ $4 \times 5x-4 \times 2$ $\,=\,$ $3 \times 3x + 3 \times 1$
$\implies$ $20x-8$ $\,=\,$ $9x+3$
$\implies$ $20x-9x$ $\,=\,$ $3+8$
$\implies$ $11x$ $\,=\,$ $11$
$\implies$ $x$ $\,=\,$ $\dfrac{11}{11}$
$\implies$ $x$ $\,=\,$ $\require{cancel} \dfrac{\cancel{11}}{\cancel{11}}$
$\,\,\, \therefore \,\,\,\,\,\, x$ $\,=\,$ $1$
Thus, the log equation $\log_{3}{(5x-2)}$ $-$ $2\log_{3}{\sqrt{3x+1}}$ $\,=\,$ $1-\log_{3}{4}$ is solved by using the properties of logarithms.
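As a quick numerical check of the solution (plain Python, not part of the original method):

```python
from math import log, sqrt, isclose

# Check that x = 1 satisfies the original equation
# log_3(5x-2) - 2*log_3(sqrt(3x+1)) = 1 - log_3(4).
def log3(v):
    return log(v, 3)

x = 1
lhs = log3(5 * x - 2) - 2 * log3(sqrt(3 * x + 1))
rhs = 1 - log3(4)
assert isclose(lhs, rhs)   # both sides agree at x = 1
print(lhs, rhs)
```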
|
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
|
If $c$ is odd, say $c=2k+1$:
(i) $c^2=(2k+1)^2=4k(k+1)+1=8\frac{k(k+1)}{2}+1=8y+1$ for some integer $y$, since $\frac{k(k+1)}{2}$ is an integer.
$c^4=(c^2)^2=(8y+1)^2=64y^2+16y+1=1+16(y+4y^2)=1+16z$ for some integer $z$.
(ii) $c^4=(2k+1)^4=(2k)^4+ ^4C_1(2k)^3+ ^4C_2(2k)^2+ ^4C_3(2k)+ 1$
$=16k^4+32k^3+24k^2+8k+1\equiv 8k^2+8k+1\pmod {16}$, and $8k^2+8k+1=16\frac{k(k+1)}{2}+1\equiv 1\pmod {16}$
(iii)we have already found $8\mid(c^2-1)$
Now $2\mid(c^2+1)$ as $c$ is odd, so $8\cdot 2\mid(c^2-1)(c^2+1)$, i.e., $16\mid(c^4-1)$.
(iv) Using this, $\lambda(16)=\frac{\phi(16)}{2}$ as $16$ is a power $2^k$ of $2$ with $k\ge 3$, so $\lambda(16)=4$, hence $c^4\equiv 1\pmod {16}$ if $(c,16)=1$, i.e., if $c$ is odd.
So, in all four ways we have proved that $c^4$ leaves remainder $1$ when divided by $16$ if $c$ is odd.
So, $a^4$ and $b^4$ will each leave remainder $1$ when divided by $16$.
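A brute-force check of the conclusion (plain Python, independent of the four proofs above):

```python
# For every odd c, c^4 leaves remainder 1 when divided by 16;
# residues mod 16 repeat with period 16, so a finite range suffices.
for c in range(1, 201, 2):
    assert c ** 4 % 16 == 1

# For even c = 2k, c^4 = 16*k^4 is divisible by 16.
for c in range(2, 201, 2):
    assert c ** 4 % 16 == 0

print("c^4 = 1 (mod 16) for all odd c tested")
```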
|
I am reading an article entitled "Diffuse Interface Models on Graphs for Classification of High Dimensional Data." Seems like the idea is to use the Ginzburg-Landau functional, in association with graph partitioning methods to apply classification on high-dimensional data.
The Ginzburg-Landau functional looks like: $$ GL(u) = \frac{\epsilon}{2}\int |\nabla u|^2dx + \frac{1}{\epsilon}\int W(u)dx $$
Where the function $W(u)$ is a double well potential like $\frac{1}{4}(u^2 - 1)^2$. So the first term in the functional is just the familiar Dirichlet energy which applies smoothness on the solution. But the second term is the double well potential which looks like a $w$ with wells corresponding to -1 and +1.
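To make the trade-off concrete for myself, I wrote a small discrete 1-D sketch of the functional (my own code and parameter choices, not from the paper):

```python
import numpy as np

# Discrete 1-D version of GL(u) = (eps/2) * int |u'|^2 + (1/eps) * int W(u),
# with the double-well W(u) = (u^2 - 1)^2 / 4.
def gl_energy(u, eps, dx):
    grad = np.diff(u) / dx                                    # forward differences
    dirichlet = 0.5 * eps * np.sum(grad ** 2) * dx
    double_well = (1.0 / eps) * np.sum(0.25 * (u ** 2 - 1) ** 2) * dx
    return dirichlet + double_well

x = np.linspace(-1, 1, 201)
dx = x[1] - x[0]
eps = 0.1

sharp = np.sign(x)           # hard -1/+1 jump at the origin
smooth = np.tanh(x / eps)    # diffuse interface of width ~eps

# The diffuse profile pays a little well energy but far less gradient
# energy, so its total GL energy is much lower than the hard jump's.
print(gl_energy(sharp, eps, dx), gl_energy(smooth, eps, dx))
```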
The paper makes a couple of claims.
That the GL functional can be used instead of the Total Variation norm, which is $\int |\nabla u|dx$.
That minimizing the functional aids in some image processing applications like segmentation.
So I was trying to understand both of these claims.
First, why can the GL functional be a replacement for the Total Variation norm, when the total variation norm has no double well? The paper says something about Gamma convergence, but I was not clear on what that meant.
Second, can someone explain the intuition or motivation behind using the GL functional for this type of classification. I mean I understand the benefit of the Dirichlet energy in applications like image denoising and such. The double well potential $W(u)$ is not clear. This is not a difference operator like the gradient. So the function $W(u)$ will just try and adjust the interface between the two wells so that pixels fall at lower energy points either in the -1 or +1 wells. But I don't understand what this kind of separation or partitioning does. Is the resulting GL functional used to calculate the edge weights in a graph partitioning scheme, like spectral clustering?
Any insights would be appreciated.
|
I am trying to recreate the Bayesian Hierarchical Clustering algorithm using Python. The example in section two requires evaluating the following double integral (univariate case):
\begin{align} p(D_k|H_k) &= \int_\theta p(D_k | \theta) p(\theta | \theta_0) d\theta \\ &= \int_\mu \int_\phi \prod_{i = 1}^n \mathcal{N}(x_i | \mu, \phi) \mathcal{N}(\mu | \mu_0, \tau^2) \mathcal{G}(\phi | \alpha, \beta) \thinspace d\phi \thinspace d\mu \\ &= \int_\mu \int_\phi \prod_{i = 1}^n \frac{\phi^{\frac{1}{2}}}{ \sqrt{2 \pi} } \exp \left({\frac{\phi(x_i - \mu)^2}{-2}} \right) \frac{\beta^\alpha}{\Gamma(\alpha)}{\phi^{\alpha - 1} } e^{ - \beta \phi} \frac{\tau^{\frac{1}{2}}}{ \sqrt{2 \pi} } \exp \left({\frac{\tau(\mu - \mu_0)^2}{-2}} \right) \thinspace d\phi \thinspace d\mu \\ &= \int_\mu \left[ \int_\phi \prod_{i = 1}^n \frac{\phi^{\frac{1}{2}}}{ \sqrt{2 \pi} } \exp \left({\frac{\phi(x_i - \mu)^2}{-2}} \right) \frac{\beta^\alpha}{\Gamma(\alpha)}{\phi^{\alpha - 1} } e^{ - \beta \phi} \thinspace d\phi \thinspace \right] \frac{\tau^{\frac{1}{2}}}{ \sqrt{2 \pi} } \exp \left({\frac{\tau(\mu - \mu_0)^2}{-2}} \right) \thinspace \thinspace d\mu \\ \end{align}
Note that $\beta$, $\alpha$, $\mu_0$, and $\tau$ are priors that I have set (i.e. they are known).
I think it is best to use importance sampling here starting with the integral with respect to $\phi$. I am teaching myself importance sampling, but I do not understand a few things for this case:
1) How do I make sure to capture the $x_i$ product term in the inner integral? Do I need to evaluate the density at sampled values of $\mu$ and $\phi$? If so, what is the best method for taking these samples and then performing the importance sampling?
2) More broadly, is importance sampling in fact the best method for computing this integral? Is there a way to check that my results are valid for a complicated expression like this one?
Thank you for your time.
EDIT:
I have used the notion of Normal-Gamma conjugacy to reduce the integral above to the following:
$$ \int_\mu \int_\phi \mathcal{G}(\phi | \frac{n + \alpha}{2}, \frac{\beta \Sigma_i(x_i - \mu)^2}{2}) \thinspace d\phi \mathcal{N}(\mu | \mu_0, \tau^2) \thinspace d\mu \\ $$
But I am still unsure about the importance sampler due to the unknown $\mu$.
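Here is where I am so far: since the priors are proper and easy to sample, the simplest starting point seems to be Monte Carlo with the priors as the proposal (importance sampling with unit weights), handling the product over $i$ by summing log densities. This is my own sketch with made-up data and prior values, not a verified implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and priors (my own illustrative values, not from the paper)
x = rng.normal(0.0, 1.0, size=20)
alpha, beta = 2.0, 2.0          # Gamma prior on the precision phi (rate = beta)
mu0, tau = 0.0, 1.0             # Normal prior on mu (tau treated as a precision)

# Draw (mu, phi) from the priors and average the likelihood
# prod_i N(x_i | mu, 1/phi); work in log space to avoid underflow.
S = 100_000
mu_s = rng.normal(mu0, 1.0 / np.sqrt(tau), size=S)
phi_s = rng.gamma(alpha, 1.0 / beta, size=S)    # numpy's scale = 1/rate

loglik = (0.5 * len(x) * np.log(phi_s / (2 * np.pi))
          - 0.5 * phi_s * ((x[:, None] - mu_s[None, :]) ** 2).sum(axis=0))
m = loglik.max()
log_evidence = m + np.log(np.exp(loglik - m).mean())   # log-sum-exp average
print("log p(D|H) estimate:", log_evidence)
```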
|
In this tutorial, we write an Alphabets (or Alpha, for now the two are synonymous) program, starting from a mathematical equation for LU decomposition. Then we will generate code to execute the alpha program, and test the generated code for correctness.
The equation for LU Decomposition, derived from first principles using simple algebra in Foundations (pg. 3), is as follows: $$ U_{i,j}=\begin{cases} A_{i,j} & 1=i\le j\\ A_{i,j}-\sum_{k=1}^{i-1}L_{i,k}U_{k,j} & 1<i\le j \end{cases}\\ L_{i,j}=\begin{cases} \dfrac{A_{i,j}}{U_{j,j}} & 1=j\le i\\ \dfrac{1}{U_{j,j}}\left(A_{i,j}-\sum_{k=1}^{j-1}L_{i,k}U_{k,j}\right) & 1<j\le i \end{cases} $$
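As a reference while developing, the recurrences above can be transcribed directly into NumPy (an independent cross-check implementation, not code generated by AlphaZ; there is no pivoting, so it assumes all leading minors of A are nonzero):

```python
import numpy as np

# Doolittle-style LU from the recurrence (1-based indices shifted to 0-based).
def lu_decompose(A):
    n = A.shape[0]
    L = np.eye(n)                       # unit diagonal, as in the equations
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):           # U_{i,j} for i <= j
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):       # L_{j,i} for column i below the diagonal
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4., 3., 2.], [6., 3., 1.], [8., 5., 7.]])
L, U = lu_decompose(A)
assert np.allclose(L @ U, A)            # reconstruction check
```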
Let's start from an empty alpha file, with LUD as the name of the system, and a positive integer N as its parameter. A system (Affine System) takes its name from system of affine recurrence equations, and represents a block of computation. An Alpha program may contain multiple systems.
Caveat: Remember the phrase, “It's not a bug, it's a feature”? Well, in a tutorial, a feature is called a “learning opportunity.”
Parameters are runtime constants represented with some symbol in the code. In this example, parameter N will be used to define the size of the matrices, which is not known until runtime.
affine LUD {N|N>0} .
In most cases, a computation uses some inputs and produces outputs. Such variables must be declared with a name, a data type, and a shape/size. In Alpha, the shape/size is represented with polyhedral domains.

For this example, the A matrix is given, and we are computing two triangular matrices L and U. A is an NxN square matrix. The declaration for A looks as follows:

float A {i,j|1<=(i,j)<=N}; //starting from 1 to be consistent with the equation in the notes

Similarly, L is a lower triangular matrix of size N (with unit diagonals, implicit) and U is an upper triangular matrix of size N. The declarations should look like the following:

// The convention is that i is the vertical axis going down, and j is the horizontal axis
float L {i,j|1<i<=N && 1<=j<i}; // Note that the diagonal elements of L are not explicitly declared
float U {i,j|1<=j<=N && 1<=i<=j};

Now these variable declarations need to be placed at appropriate places to specify whether they are input/output/local. input/given is the keyword for input, output/returns is the keyword for output, and local/using is the keyword for local variables.
affine LUD {N|N>0}
  input  float A {i,j|1<=(i,j)<=N};
  output float L {i,j|1<=j<i<=N};
         float U {i,j|1<=i<=j<=N};
.
Polyhedral domains are represented as { "index names" | "affine constraints using indices and parameters" }, where constraints can be intersected with "&&". Sometimes constraints can be expressed with short-hand notation like "a<b<c" or "(b,c)<0". Unions of such domains can be expressed as "{ a,b | constraints on a and b } || { c,d | constraints on c and d }". One important point about Alphabets domains is that the names given to indices are only for textual representation. Internally, all analysis/transformation/code generation tools only care about which dimension the constraint applies to. For example, a domain { i,j | 0<=i<j<N } is equivalent to { x,y | 0<=x<y<N }, because i and x are both names given to the first dimension, and j and y are names given to the second dimension.
Now the only remaining step before a complete Alphabets program is writing the equations. After a little experience, the connection from mathematical equations (of a certain form) to Alphabets equations should become increasingly clear. There are two slightly different syntactic conventions for writing equations, one is called the “Show syntax” and the other is called “AShow syntax”. Show syntax is closer to the internal representation of Alphabets programs, and is more expressive when writing complex programs. AShow syntax uses “array notation” so that it is easier for people used to imperative programs.
We will first write the equation for U in AShow syntax, and then move on to Show as we write the equation for L.
In this equation, U is on the left hand side, and the right hand side should define U for each point in the declared domain of the U variable. In AShow syntax, the names for the indices used appear on the LHS of the equation. For this example, the following LHS for U gives i, j as the names for the first and second dimensions to be used when writing the expressions in the RHS. These names do not have to match the names used in the variable declaration; you could use x,y instead of i,j if desired.

U[i,j] = RHSexpr;
The first thing you notice in the definition of U in the mathematical equation is the branch based on the values of i and j. This branching is expressed with a CaseExpression in Alphabets. A CaseExpression starts with the keyword "case", ends with the keyword "esac", and has a list of ";"-delimited expressions, called "clauses", as its children. Often (but not always), each child of a case is a RestrictExpression (whose syntax is "domain : expr"), which restricts the expression to the specified domain. Using the above, the branching of the definition of U is as follows:

U[i,j] = case
  {|1==i} : expr1;
  {|1<i}  : expr2;
esac;

Note that because index names are already declared in the context (equation LHS), there is nothing to the left of the | in the AShow syntax.
Moving on to the definitions in each case, the first case is simply $A_{i,j}$. This is written as A[i,j] in AShow syntax, similar to accessing an array. A variable without a square bracket is treated either as a scalar variable (as in X[i,j] = 0) or as an access with the identity dependence function (i.e., X[i] = A[i] would be the same as X[i] = A).
The last piece missing before completing the definition of U is the summation in the second branch. Mathematically, a reduction is an operation that applies an associative-commutative operator (in general, the operator may only be associative, but in Alphabets we have only associative-commutative operators) to a set of values, such as summation (sum over a set of numbers). Reductions are expressed with the following syntax:

reduce(operator, projection, expr);

operator: the operator to be applied (+, *, max, min, and, or)
In the mathematical equation, a summation with one new index k is used. For each value of k, the expression L[i,k]*U[k,j] is computed and added up to produce the result U[i,j]. Thus, the projection function is (i,j,k -> i,j): from the three dimensional space indexed by i,j,k, all values computed at [i,j,k] are used to compute U[i,j] in the two dimensional space indexed by i,j; i.e., the k is 'projected out'.
When the projection function is canonic (e.g., (i,j,k->i,j)), then the projection function can be replaced with a simpler syntax (the AShow syntax for reductions) that specifies the names of the new dimensions surrounded by square brackets. For example, the projection (i,j,x,y->i,j) can be expressed as [x,y].
Using the above, summation in the original equation can be written as the following Alphabets fragment.
reduce(+, [k], L[i,k]*U[k,j]);
Putting all this together, the final equation for U is:

U[i,j] = case
  {|1==i} : A[i,j];
  {|1<i}  : A[i,j] - reduce(+, [k], L[i,k]*U[k,j]);
esac;
This is exactly like the original equation.
Now we will write the equation for L, but this time in Show syntax. Unlike the AShow syntax, Show syntax does not rely on the context for the naming of indices. Index names can be different in every (sub)expression if it makes sense to do so. Because of this, the LHS does not have square brackets; all we need is the variable name.

L = RHSexpr; //Show syntax
CaseExpression and RestrictExpression are the same as in AShow syntax. However, since index names are no longer deduced from the context where they occur, they must be explicitly named everywhere. While this may seem cumbersome, it allows expressions to have compositional semantics. In our example, the index names used in the domain of the RestrictExpression have to be made explicit. The branch in the definition of L becomes the following Alphabets:

L = case
  {i,j|1==j} : expr1;
  {i,j|1<j}  : expr2;
esac;
In the array notation in AShow syntax, a DependenceExpression was implicit: just add expressions within square brackets to access variables. In the Show syntax, a DependenceExpression is used to explicitly specify which value of a variable is required for a computation. The syntax of DependenceExpression is "(affine_function)@expr", where affine_function is of the form (list_of_indices -> list_of_affine_expressions). For example, the dependence (i,j->i-1,i+j)@A means that at index point (i,j) this computation evaluates to the value of A at index point (i-1,i+j). The child of a DependenceExpression can be any Alphabets expression, possibly another DependenceExpression. For example, (i,j->i,j,i+j,0)@(a,b,c,d->a,c-a)@A is a perfectly legal Alphabets expression.
Reductions in Show syntax are exactly like in the AShow syntax, except that the projection function is specified in the dependence syntax. This is all you need in order to write the rest of the equation in Show syntax.

L = case
  {i,j|1==j} : A / (i,j->j,j)@U;
  {i,j|1<j}  : (A - reduce(+, (i,j,k->i,j), (i,j,k->i,k)@L*(i,j,k->k,j)@U)) / (i,j->j,j)@U;
esac;
Combine all of the above, and you will get the Alphabets program for LU decomposition. Don't forget the keyword let/through before the equations and the period at the end (since our example has no local variables). Notice how we can mix and match Show and AShow syntax within the program, but each equation must, obviously, be consistent.

affine LUD {N|N>0}
  input  float A {i,j|1<=(i,j)<=N};
  output float L {i,j|1<i<=N && 1<=j<i};
         float U {i,j|1<=j<=N && 1<=i<=j};
let
  U[i,j] = case
    {|1==i} : A[i,j];
    {|1<i}  : A[i,j] - reduce(+, [k], L[i,k]*U[k,j]);
  esac;
  L = case
    {i,j|1==j} : A / (i,j->j,j)@U;
    {i,j|1<j}  : (A - reduce(+, (i,j,k->i,j), (i,j,k->i,k)@L*(i,j,k->k,j)@U))/(i,j->j,j)@U;
  esac;
.
Analyses, transformations, and code generation of Alphabets programs are performed using the AlphaZ system. The normal interface for using AlphaZ is the scripting interface called compiler scripts. Given below is an example script for that does several things using the LUD program we wrote above.
# read program and store the internal representation in variable prog
prog = ReadAlphabets("./LUD.ab");
# store string (corresponding to system name) to variable system
system = "LUD";
# store output directory name to variable outDir
outDir = "./test-out/"+system;
# print out the program using Show syntax
Show(prog);
# print out the program using AShow syntax
AShow(prog);
# prints out the AST of the program (commented out)
#PrintAST(prog);
# generate codes (this is demand-driven, memoized code)
generateWriteC(prog, system, outDir);
generateWrapper(prog, system, outDir);
generateMakefile(prog, system, outDir);

Save this script with a .cs extension, place the alphabets file in the same directory as the script, and then right click on the editor and select "Run As → Compiler Script" to run the script.
If you get some error message, try looking at the first line of the error messages to find out what it is about. Common problems are:
- the path to the Alphabets file is wrong (a FileNotFoundException in this case)
- the system name is wrong (xxx does not exist)
In this tutorial, we use two basic code generators, without going into too much detail. The two types of codes generated are WriteC and Wrapper. WriteC code may not be efficient, but it can be generated without any additional specification beyond the program. Wrapper code is a wrapper around other generated codes that allocates/frees memory for input and output variables, and it also has different options for testing. Note: the current implementation of the Wrapper prints out the bounding box of the domain of the output variable. generateMakefile produces a Makefile that should compile the generated codes. You can make with different options.
Congratulations!! You are nearly at the end. Now, you will actually make and execute the code (in a separate terminal window).
- The default option compiles the code and produces an executable xxx (where xxx is the system name) that executes the program with a default input that is 1 everywhere. Compiling with this option does not test very much, but it will test whether the code compiles, runs, and produces no errors.
- The check option compiles the code and produces an executable xxx.check (xxx is the system name) that prompts the user for all values of the input variables. After executing, it prints out all values of the output variables. This option should be used for testing small to mid-sized input data.
- The verify option compiles the code with another code named xxx_verify.c that defines a function xxx_verify (xxx is the system name). Users can provide a different program as xxx_verify to compare outputs.
- A randomized variant is the same as verify, except the inputs are generated randomly.
You will see that when you execute the code, it will produce an error. You may be able to easily fix the error in your Alpha program and regenerate correctly executing C code, or you may want a bit of help. In either case, we would like to know. Please email Sanjay.Rajopadhye@colostate.edu with the error message that is produced.
|
Heat loss within a control volume
1 Attachment(s)
Hello,
I have the following problem:
I got an electrical component within a chamber and the chamber has an opening, where air can be supplied from and an outlet. Attached you can see an illustration of the problem.
What I want to know is:
The first thing I attempted by calculating the mass flow with the use of
$\displaystyle Q = \alpha A (T_{surface} - T_1 ) $
where I can rewrite the equation to the mass flow by rewriting the heat transfer coefficient alpha with the Nusselt correlation. Furthermore, in order to solve for the mass flow rate I assumed the surface temperature to be 70°C as this is my maximum.
Is this the correct way to solve it or do I have to consider something else as well?
I also had an attempt to set up the energy balance by
$\displaystyle E_{in} = E_{out}$
$\displaystyle W_{in, electrical} + m h_1 = \alpha A (T_2 - T_1) + m h_2$
thus, $\displaystyle T_2 = W_{in, electrical} / (\alpha A + m c_p) + T_1 $
which I can calculate for different mass flow rates. I assumed the surface temperature to be equal to $\displaystyle T_2$.
I think, that there is a mistake in this equation but do not know exactly how to set it up differently.
*The Q, W and m are all rates, thus they should be written with a dot above them (don't know how to insert it)
I have also made the assumption to keep it simple at first to just consider convection - can that assumption be made?
I hope it is clear.
Thanks for the help in advance!
let's assume insulated walls and a control volume (CV) surrounding the whole chamber:
For steady state and using the lumped formulation of 1st law of thermodynamics:
$\displaystyle 0=\dot{Q}_{gen}+\dot{m}_{in}h_{in}-\dot{m}_{out}h_{out}$
For $\displaystyle \dot{m}_{in}=\dot{m}_{out}$
$\displaystyle \dot{Q}_{gen}=\dot{m}\Delta h$
Assuming constant pressure:
$\displaystyle \dot{Q}_{gen}=\dot{m}c_p\Delta T$
and now you can calculate $\displaystyle T_{out}$
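As a numeric illustration of that last step (values illustrative only: 150 W heater, 30 C inlet air, cp of air taken as roughly 1005 J/(kg K)):

```python
# Steady-state balance Q_gen = m_dot * cp * (T_out - T_in), solved for T_out.
cp_air = 1005.0          # J/(kg*K), roughly constant near room temperature
Q_gen = 150.0            # W, heat dissipated by the component
T_in = 30.0              # deg C, inlet air temperature

for m_dot in (0.005, 0.010, 0.020):              # kg/s
    T_out = T_in + Q_gen / (m_dot * cp_air)
    print(f"m_dot = {m_dot:.3f} kg/s -> T_out = {T_out:.1f} C")
```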
In this problem, in general, the temperature profile is of the form T(t, x, y). I'm not sure how deep this problem is, but it's possible to assume $\displaystyle T_{chamber}=T_{out}$.
You could calculate the Prandtl and Reynolds numbers to find the Nusselt number and use an external flow approach, but I don't know how deep your problem is.
We have heat output 150W .... input air temp 30C ... what will be the air output temp at different flow rates???
specific heat of air is close to 1 J/gmC
so for an output temp of 50C ... 150/20 = 7.5 liters /sec
............ output temp of 40C ......... 15 liters /sec
These are quite modest air flow rates ........... you haven't given the diameters of the inlet and outlet holes ...
I would just buy a computer fan to suit the hole size ... measure the outlet temp .. the flow will be so fast it won't be above 50C and will keep your component plenty cool ..
https://i.ebayimg.com/thumbs/images/...vAi/s-l225.jpg
this one costs $1 inc delivery , eBay ... will suit a hole 35mm dia
Thanks for the answers!
|
turicreate.recommender.item_similarity_recommender.ItemSimilarityRecommender

class turicreate.recommender.item_similarity_recommender.ItemSimilarityRecommender(model_proxy)
A model that ranks an item according to its similarity to other items observed for the user in question.
Creating an ItemSimilarityRecommender
This model cannot be constructed directly. Instead, use
turicreate.recommender.item_similarity_recommender.create() to create an instance of this model. A detailed list of parameter options and code samples are available in the documentation for the create function.
Notes
Model Definition
This model first computes the similarity between items using the observations of users who have interacted with both items. Given a similarity between item \(i\) and \(j\), \(S(i,j)\), it scores an item \(j\) for user \(u\) using a weighted average of the user’s previous observations \(I_u\).
There are three choices of similarity metrics to use: ‘jaccard’, ‘cosine’ and ‘pearson’.
Jaccard similarity is used to measure the similarity between two sets of elements. In the context of recommendation, the Jaccard similarity between two items is computed as \[\mbox{JS}(i,j) = \frac{|U_i \cap U_j|}{|U_i \cup U_j|}\]
where \(U_{i}\) is the set of users who rated item \(i\). Jaccard is a good choice when one only has implicit feedback on items (e.g., whether people rated them or not), or when one does not care how many stars items received.
If one needs to compare the ratings of items, Cosine and Pearson similarity are recommended.
The Cosine similarity between two items is computed as \[\mbox{CS}(i,j) = \frac{\sum_{u\in U_{ij}} r_{ui}r_{uj}} {\sqrt{\sum_{u\in U_{i}} r_{ui}^2} \sqrt{\sum_{u\in U_{j}} r_{uj}^2}}\]
where \(U_{i}\) is the set of users who rated item \(i\), and \(U_{ij}\) is the set of users who rated both items \(i\) and \(j\). A problem with Cosine similarity is that it does not consider the differences in the mean and variance of the ratings made to items \(i\) and \(j\).
Another popular measure that compares ratings where the effects of means and variance have been removed is Pearson Correlation similarity: \[\mbox{PS}(i,j) = \frac{\sum_{u\in U_{ij}} (r_{ui} - \bar{r}_i) (r_{uj} - \bar{r}_j)} {\sqrt{\sum_{u\in U_{ij}} (r_{ui} - \bar{r}_i)^2} \sqrt{\sum_{u\in U_{ij}} (r_{uj} - \bar{r}_j)^2}}\]
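As a toy illustration of the three metrics above (this is not Turi Create's implementation; the dictionary layout of `ratings` is purely for exposition, and division by zero when items share no users is not handled):

```python
import math

# ratings maps item -> {user: rating}
def jaccard(ratings, i, j):
    ui, uj = set(ratings[i]), set(ratings[j])
    return len(ui & uj) / len(ui | uj)

def cosine(ratings, i, j):
    common = set(ratings[i]) & set(ratings[j])
    num = sum(ratings[i][u] * ratings[j][u] for u in common)
    ni = math.sqrt(sum(r * r for r in ratings[i].values()))
    nj = math.sqrt(sum(r * r for r in ratings[j].values()))
    return num / (ni * nj)

def pearson(ratings, i, j):
    common = set(ratings[i]) & set(ratings[j])
    mi = sum(ratings[i].values()) / len(ratings[i])  # item i's mean rating
    mj = sum(ratings[j].values()) / len(ratings[j])  # item j's mean rating
    num = sum((ratings[i][u] - mi) * (ratings[j][u] - mj) for u in common)
    di = math.sqrt(sum((ratings[i][u] - mi) ** 2 for u in common))
    dj = math.sqrt(sum((ratings[j][u] - mj) ** 2 for u in common))
    return num / (di * dj)

ratings = {
    "A": {"u1": 5, "u2": 3, "u3": 4},
    "B": {"u1": 4, "u2": 2},
}
print(round(jaccard(ratings, "A", "B"), 3))  # 0.667 (2 shared users of 3 total)
print(round(cosine(ratings, "A", "B"), 3))   # 0.822
print(round(pearson(ratings, "A", "B"), 3))  # 1.0
```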
The predictions of items depend on whether a target is specified. When the target is absent, a prediction for item \(j\) is made via \[y_{uj} = \frac{\sum_{i \in I_u} \mbox{SIM}(i,j) }{|I_u|}\]
Otherwise, predictions for jaccard and cosine similarities are made via \[y_{uj} = \frac{\sum_{i \in I_u} \mbox{SIM}(i,j) r_{ui} }{\sum_{i \in I_u} \mbox{SIM}(i,j)}\]
Predictions for pearson similarity are made via \[y_{uj} = \bar{r}_j + \frac{\sum_{i \in I_u} \mbox{SIM}(i,j) (r_{ui} - \bar{r}_i) }{\sum_{i \in I_u} \mbox{SIM}(i,j)}\]
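The two target-present scoring rules can be sketched the same way (again a toy, with a hypothetical `sim` callable standing in for any of the similarity metrics; not Turi Create's code):

```python
def score_unrated(sim, user_items, j):
    # Target absent: plain average of similarities to the user's items I_u.
    return sum(sim(i, j) for i in user_items) / len(user_items)

def score_rated(sim, user_ratings, j):
    # Target present (jaccard/cosine): similarity-weighted average of r_ui.
    num = sum(sim(i, j) * r for i, r in user_ratings.items())
    den = sum(sim(i, j) for i in user_ratings)
    return num / den

# Illustrative hand-picked similarities between items A/B and candidate C.
sim = lambda i, j: {("A", "C"): 0.5, ("B", "C"): 0.25}[(i, j)]
print(score_unrated(sim, ["A", "B"], "C"))   # (0.5 + 0.25) / 2 = 0.375
print(score_rated(sim, {"A": 4, "B": 2}, "C"))  # (0.5*4 + 0.25*2) / 0.75
```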
For more details of item similarity methods, please see, e.g., Chapter 4 of [Ricci_et_al].
References
[Ricci_et_al] Francesco Ricci, Lior Rokach, and Bracha Shapira. Introduction to recommender systems handbook. Springer US, 2011.
Methods
ItemSimilarityRecommender.evaluate(self, dataset)
Evaluate the model’s ability to make rating predictions or recommendations.
ItemSimilarityRecommender.evaluate_precision_recall(…)
Compute a model’s precision and recall scores for a particular dataset.
ItemSimilarityRecommender.evaluate_rmse(…)
Evaluate the prediction error for each user-item pair in the given data set.
ItemSimilarityRecommender.export_coreml(…)
Export the model in Core ML format.
ItemSimilarityRecommender.get_num_items_per_user(self)
Get the number of items observed for each user.
ItemSimilarityRecommender.get_num_users_per_item(self)
Get the number of users observed for each item.
ItemSimilarityRecommender.get_similar_items(self)
Get the k most similar items for each item in items.
ItemSimilarityRecommender.get_similar_users(self)
Get the k most similar users for each entry in users.
ItemSimilarityRecommender.predict(self, dataset)
Return a score prediction for the user ids and item ids in the provided data set.
ItemSimilarityRecommender.recommend(self[, …])
Recommend the k highest scored items for each user.
ItemSimilarityRecommender.recommend_from_interactions(…)
Recommend the k highest scored items based on the given interactions.
ItemSimilarityRecommender.save(self, location)
Save the model.
ItemSimilarityRecommender.summary(self[, output])
Print a summary of the model.
|
The point is not to know whether it's easier or smarter to look at the dual of G = Gal(L/K) instead of G itself. To understand the motivation, I think one should take the « experimental » point of view: given any Galois extension L/K, how does one describe its Galois group? In exercises in Galois theory, generators of the extension L over K are usually given, and the student is asked to determine G using these generators. In this situation, one is « philosophically » convinced that one can do it, even if this could be quite non-obvious, e.g. for L = $Q(\sqrt {p_1}, \dots , \sqrt {p_n})$, where the $p_i$'s are distinct primes. But in a research problem, things do not happen this way. What is given is the base field K, together with some desired properties of the extension L, and one must manage to describe G in order to go on. The setting of Kummer theory is a perfect example: K contains $\mu_n$, with n prime to the characteristic of K (1), and L/K is an abelian extension of exponent dividing n. Kummer's main theorem says: first, that L is generated over K by the $\sqrt [n] a$ for elements $a \in K^*$; second, that the elements $\sigma$ of G are determined by their action on these $\sqrt [n] a$, and this action is obviously given by $\sigma (\sqrt [n] a) /\sqrt [n] a$ = an n-th root of 1 (2).
It is striking that the answer is entirely « contained » in the base field, but the limitation of Kummer theory is the requirement that K contain $\mu_n$, so one cannot catch abelian extensions of higher exponent. One could perhaps say that CFT was developed to overcome this difficulty: CFT gives a complete, much more elaborate description of the abelian extensions of a global field (number field or function field of positive characteristic), again with parameters entirely contained in the base field. However this is no longer abstract field theory, but number theory.
(1) If char K divides n, the analog of Kummer theory is the Artin-Schreier-Witt theory
(2) To complete Stahl's hint, I give here a sleek cohomological proof of Kummer's theorem. Let L/K be Galois with group G, and take the cohomology of the exact sequence of G-modules 1 --> $\mu_n$ --> $L^*$ --> $L^{*n}$ --> 1, the middle map being the n-th power map. This yields 1 --> $\mu_n (K)$ --> $K^*$ --> $K^* \cap L^{*n}$ --> $H^1(G, \mu_n )$ --> $H^1(G,L^*)$. The last term is trivial by Hilbert 90, so the "Kummer radical" $(K^* \cap L^{*n}) / K^{*n}$ is isomorphic to $H^1(G,\mu_n )$, which is $Hom (G, \mu_n)$ if $K$ contains $\mu_n$. Note that these isomorphisms are explicit; they are exactly as described above. NB: in the formula given by Shoutre, one takes for L the maximal abelian extension of exponent dividing n.
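To make the explicit isomorphism concrete (a standard restatement under the assumption $\mu_n \subset K$, not part of the original answer), everything above can be packaged as the perfect Kummer pairing:

```latex
G \times \bigl(K^* \cap L^{*n}\bigr)/K^{*n} \longrightarrow \mu_n,
\qquad
(\sigma,\; a\,K^{*n}) \longmapsto \frac{\sigma(\sqrt[n]{a})}{\sqrt[n]{a}},
```

which identifies $G$ with $\mathrm{Hom}\bigl((K^* \cap L^{*n})/K^{*n},\, \mu_n\bigr)$ and, dually, the Kummer radical with $\mathrm{Hom}(G, \mu_n)$.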
|
Browse Dissertations and Theses - Mathematics by Title
Now showing items 80-99 of 1147
(1997) Dynamical properties of the baker's transformation B with integer base $b\ge 2$ and several related maps on discrete subsets of the domain are studied. The baker's transformation with base $b,\ B: \lbrack 0,1)\times\lbrack ...
(1972)
(2017) "The apple of my eye" is something I cherish above all others. As a mathematician I study minimally degenerate Poisson manifolds. You can think of a Poisson manifold as the skin of a fruit. Imagine an apple. You probably ...
(1998) The definition of the Baum-Connes map is extended to the case of any smooth groupoid. Then we prove that the Baum-Connes map is invariant under groupoid equivalence.
(2012-09-18) This dissertation is divided into three main sections. The main result of Section 1 is that, for $a,b>1$, irrational, the quantity $\log (a/b)$ is ``not too far'' from the series of fractional parts $$ \sum_{n=1}^{\i ...
(2003) Let A be an excellent local normal domain and $\{f_n\}_{n=1}^{\infty}$ a sequence of prime elements lying in successively higher powers of the maximal ideal, such that each hypersurface $A/f_nA$ satisfies R1. We establish the map ...
(2006) This thesis is a study of the axiomatics of quantum vertex algebras based on the bicharacter construction suggested by R. Borcherds in [Bor01]. One of the goals is to use the ideas of [Bor01] to incorporate the examples ...
(2019-04-02) In this thesis, we consider nonlinear Schrödinger equations with double well potentials with attractive and repelling nonlinearities. We discuss bifurcations along bound states, especially ground states and the first excited ...
(2010-05-14) This dissertation involves two topics. The first is on the theory of partitions, which is discussed in Chapters 2-5. The second is on covering systems, which are considered in Chapters 6-8. In 2000, Farkas and Kra ...
(1982) This thesis considers the notions of decomposable operators in the sense of C. Foias, and a certain analytic condition, called condition ($\beta$), due to Errett Bishop. It is shown that the concepts are related, and that ...
(1989) We use R. Knorr's theory of virtually irreducible lattices to study the blocks of a finite group.
(1971) Boolean-Valued Models of Set Theory When the Boolean Algebra Is a Proper Class in the Ground Model
(1989) In this paper the Boolean valued method is used to develop a theory closely resembling the theory of probabilistic metric spaces. In this development the complete Boolean algebra used must have the form of the quotient ...
(2015-12-10) We study the bound states of the 1+1 dimensional Dirac equation with a scalar potential, which can also be interpreted as a position dependent "mass", analytically as well as numerically. We derive a Prüfer-like representation ...
(2019-04-10) Although the hierarchically hyperbolic space boundary is a generalization of the Gromov boundary, we will show there are fundamental differences between the two. First, we provide negative answers to questions posed by ...
(1969)
(1958)
|
I am reading Kolenkow and Kleppner's Classical Mechanics, and they have tried to calculate the gravitational force between a uniform thin spherical shell of mass $M$ and a particle of mass $m$ located at a distance $r$ from the center.
The shell has been divided into narrow rings. $R$ is the radius of the shell with thickness $t$ ($t \ll R$). The ring at angle $\theta$ which subtends angle $d\theta$ has circumference $2\pi R\sin\theta$. Its volume is $dV=2\pi R^2 t\sin \theta\, d\theta$ and its mass is $\rho\, dV=2\pi R^2 t\rho\sin\theta\, d\theta$. If $\alpha$ is the angle between the force vector and the line of centers, $$dF=\frac{Gm\rho\, dV}{r'^2}\cos\alpha$$ where $r'$ is the distance of each part of the ring from $m$.
Next, an integration has been carried out using $\cos\alpha=\frac{r-R\cos\theta}{r'}$ and $r'=\sqrt{r^2+R^2-2rR\cos\theta}$.
Question: I would like to avoid these calculations and I was wondering if there exists a better solution.
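Not the shortcut asked for, but the ring integral above can at least be sanity-checked numerically against Newton's shell result $F = GMm/r^2$ for a point outside the shell (a sketch only; the function name, test values, and midpoint rule are my choices):

```python
import math

def shell_force(G, m, rho, R, t, r, n=20000):
    # Midpoint-rule integration of dF over the rings, using
    # cos(alpha) = (r - R cos(theta)) / r' and the law of cosines for r'.
    dtheta = math.pi / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * dtheta
        rp = math.sqrt(r * r + R * R - 2 * r * R * math.cos(theta))
        dV = 2 * math.pi * R * R * t * math.sin(theta) * dtheta
        total += G * m * rho * dV * (r - R * math.cos(theta)) / rp ** 3
    return total

G, m, rho, R, t, r = 1.0, 1.0, 1.0, 1.0, 0.01, 3.0
M = 4 * math.pi * R * R * t * rho   # mass of the thin shell
print(abs(shell_force(G, m, rho, R, t, r) - G * M * m / r ** 2) < 1e-6)  # True
```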
|