Let $\Omega \subset \mathbb R^d$ be an open set and $u,v\in W^{1,p}(\Omega )\cap L^\infty (\Omega )$. Prove that $uv\in W^{1,p}(\Omega )$ and $$\partial _i(uv)=u\partial _iv+v\partial _iu.$$ In fact I don't really understand why we want $u,v\in L^\infty (\Omega )$. But here is the proof: Let $K\subset \subset \Omega $ and let $\varphi_n$ be a standard mollifier. We denote $$u_n=\varphi_n*u\quad \text{and}\quad v_n=\varphi_n*v.$$ We know that $u_n,v_n\in W^{1,p}(\Omega )$ with $\partial_i u_n=\varphi_n*\partial_i u$ and $\partial _i v_n=\varphi_n*\partial _iv,$ and thus $$u_n\to u\quad \text{and}\quad v_n\to v$$ in $W^{1,p}(\Omega )$. Moreover $$\|u_n\|_{L^\infty (K)}\leq \|u\|_{L^\infty (\Omega )}\quad \text{and}\quad \|v_n\|_{L^\infty (K)}\leq \|v\|_{L^\infty (\Omega )}.$$ Suppose WLOG that $u_n\to u$ and $\partial _i u_n\to \partial_i u$ a.e. Then $$\partial _i(u_nv_n)=u_n\partial _i v_n+v_n\partial _i u_n\to u\partial _iv+v\partial _iu \in L^p(\Omega ).$$ Q1) Is this limit in $L^p(\Omega )$ or pointwise? Then if $\zeta \in \mathcal C^1_0(\Omega )$, $$-\int_\Omega uv\partial _i \zeta =\lim_{n\to \infty }-\int_\Omega u_nv_n\partial _i\zeta =\lim_{n\to \infty }\int \partial _i(u_nv_n)\zeta =\lim_{n\to \infty }\int_\Omega (u_n\partial _iv_n+v_n\partial _iu_n)\zeta =\int_\Omega (u\partial_i v+v\partial _i u)\zeta $$ and thus $uv\in W^{1,p}(\Omega )$ and $\partial_i (uv)=u\partial_i v+v\partial_i u$. Q2) Where did we use the fact that $u,v\in L^\infty $, and how do we get $\partial _i(u_nv_n)=u_n\partial _i v_n+v_n\partial _i u_n\to u\partial _iv+v\partial _iu \in L^p(\Omega )$? Q3) In what way is the compact $K$ important? I don't have the impression we used it.
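Regarding Q2, one way to see where the $L^\infty$ bounds enter (a sketch of the standard estimate, not necessarily the intended argument) is the following bound behind the $L^p$ convergence of $u_n\partial_i v_n$:

```latex
% Sketch: where the L^infty bound is used (Q2).
\begin{align*}
\|u_n\partial_i v_n - u\partial_i v\|_{L^p}
  &\le \|u_n(\partial_i v_n - \partial_i v)\|_{L^p}
     + \|(u_n - u)\,\partial_i v\|_{L^p} \\
  &\le \|u\|_{L^\infty}\,\|\partial_i v_n - \partial_i v\|_{L^p}
     + \|(u_n - u)\,\partial_i v\|_{L^p}.
\end{align*}
% The first term tends to 0 since v_n -> v in W^{1,p}.  The second tends
% to 0 by dominated convergence: |u_n - u|^p |\partial_i v|^p is bounded
% by (2\|u\|_{L^\infty})^p |\partial_i v|^p \in L^1, and u_n -> u a.e.
% (along a subsequence).
```

Note that the uniform bound $\|u_n\|_{L^\infty(K)}\le\|u\|_{L^\infty(\Omega)}$ is only available on sets $K\subset\subset\Omega$, where the mollification is controlled for large $n$; this is where the compact $K$ of Q3 plays a role.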
I found this question here but it does not fully answer my question. The answer there was that "composite bosons can occupy the same state when the state is spatially delocalized on a scale larger than the scale of the wavefunction of the fermions inside". Let's say we make a BEC with bosonic atoms (for example in a harmonic trap). The BEC means that a huge number of atoms will occupy the same energy level. This cannot be exactly true because the atoms are made out of fermions. So I guess that "the" energy level is actually a collection of many different energy levels that originate somehow from the internal structure of the atoms. This effectively creates a degeneracy of "the" energy level. I think this is what he meant by "spatially delocalized on a larger scale than the scale of the wavefunction of the fermions inside". I have a few questions regarding this: Is this correct? Where do these extra energy levels come from (there must be a huge number of them)? If there is a huge number of internal energy states, it should greatly enhance the density of states. Since many thermodynamic quantities depend on the density of states (for instance the particle number), shouldn't this change the thermodynamics of a gas (not only at small temperatures but also at higher ones)? EDIT: This edit is about Chiral Anomaly's answer. I would like to make this a bit more quantitative. Consider a sodium atom. Its Hamiltonian (like for the H-atom) can be decomposed into a rest-frame part (which will become the spatial wavefunction of the atom later) and an internal part. The internal part has a hydrogen-like spectrum. The quantum numbers of these states are what you called $n$. If the electrons have $k$ accessible states then there are $\binom{k}{11}$ possibilities to arrange the 11 electrons. For 20 million atoms (as in here) you need about 34 internal states (these are all states up to $n \leq 4$). For rubidium you need all states up to $n \leq 5$.
I'm not fully convinced by your argument, for several reasons: This would imply that all of the atoms in a BEC are excited. You need a specific electronic configuration for cooling and (even more importantly) trapping the atoms (i.e. you need one electron in a specific state). So all of those excited configurations where this state is not occupied would simply fall out of the trap. One observes the BEC by shining light with some transition frequency on the atoms. If all internal states are occupied there cannot be a transition. EDIT 2: Let's assume for a moment an idealized world. The nucleus and the electrons form an atom whose wavefunction splits into an internal part $\psi_i$ (with $k$ discrete states) and an external wavefunction $\psi(x)$. We put those atoms in a harmonic potential. Now assume that the internal structure is not affected by the potential and that there is no residual interaction between the atoms. So we can write the total Hamiltonian as $H = H_{ext} + H_{in}$, where $H_{ext} = p^2/2m + V(x)$ has spectrum $\hbar \omega (n+\frac{1}{2})$ and $H_{in}$ is just the (independent) internal Hamiltonian. Let's choose the ground state of the harmonic trap to create a BEC. If the atoms were fundamental bosons the degeneracy of this energy level would be 1 (which is no problem here). But now we have composite bosons, so for the fermions this state has a degeneracy of $1 \times k$. So we can put at most $k$ atoms into this state. (I think we both agree on this.) Now turn on interactions. There are many different things changing. The internal structure is affected by the potential (this is fine since it does not change the number of states). The atoms interact with each other. This will lift the $k$-fold degeneracy of the ground state (i.e. different atoms will have a different $e^{-iEt}$ time dependence). If the interaction is small the splitting will be small, so the time dependence of the atoms will be nearly equal.
If we run our experiment only for a short time it will look like all the atoms have the same time dependence (BEC). If the interactions are not negligible the level splitting will be of order $\hbar \omega$. Then it will not look like all atoms occupy the ground state but rather the two lowest states (no BEC). However, now we can put $2k$ atoms into our gas because we are treating two (unperturbed) states as equal. But I doubt that this solves the problem because, as I said, there won't be a BEC anymore. Now comes the complicated part. The internal and external wavefunctions (even of different atoms) can mix. This is hard to analyze. But we know two things: 1. The overall number of states does not change. 2. The resulting gas must be able to form a BEC (i.e. you need enough states which have (nearly) the same time dependence). If you arbitrarily mix some high-energy states into low-energy states the nice time dependence will get lost. Also, in this case all the BEC analysis would be completely wrong (since it does not account for such mixing). So I think this must be negligible. All in all, turning on interactions will not create extra states. Therefore if you see a BEC you have at most $k$ atoms in it.
In particle physics, scalar potentials have to be bounded from below in order for the physics to make sense. Precise expressions for checking the lower boundedness of scalar potentials are essential; these amount to analytical criteria for the copositivity and positive definiteness of the tensors given by such scalar potentials. Because the tensors given by general scalar potentials are 4th order and symmetric, our work mainly focuses on finding precise expressions to test copositivity and positive definiteness of 4th order tensors in this paper. First of all, an analytical necessary and sufficient condition for positive definiteness is provided for 4th order 2 dimensional symmetric tensors. For 4th order 3 dimensional symmetric tensors, we give two analytical sufficient conditions for (strict) copositivity by using a proof technique that reduces the order or dimension of such a tensor. Furthermore, an analytical necessary and sufficient condition for copositivity is shown for 4th order 2 dimensional symmetric tensors. We also give several distinct analytical sufficient conditions for (strict) copositivity of 4th order 2 dimensional symmetric tensors. Finally, we apply these results to check the lower boundedness of scalar potentials, and to present analytical vacuum stability conditions for potentials of two real scalar fields and the Higgs boson. Based on the inverse power method with damping blocks and a subspace projection algorithm, this paper designs a generalized conjugate gradient algorithm for solving eigenvalue problems and implements the corresponding software package. The algorithm and the computational process are then optimized in a series of steps to improve stability, computational efficiency, and parallel scalability, making the algorithm suitable for computing eigenvalues of large sparse matrices in parallel environments. The resulting package is designed in a Matrix-Free and Vector-Free fashion and can be applied to arbitrary matrix-vector structures. Tests on several typical matrices show that the algorithm and package are not only numerically stable but also 2-6 times more efficient than the LOBPCG and Jacobi-Davidson solvers in the SLEPc package. Package URL: https://github.com/pase2017/GCGE-1.0. From the standpoint of neuroethology and cognitive psychology, a general theoretical framework for intelligent systems is presented in this paper by means of the principle of relative-entropy minimization.
The core of this theoretical framework is to state and prove a basic principle of intelligent systems: in an intelligent system, entropy increases or decreases together with intelligence. This basic principle is of great theoretical and practical significance. From it one can not only derive two kinds of learning algorithms (statistical simulated-annealing algorithms and mean-field-approximation annealing algorithms) for training a large class of stochastic neural networks, but also thoroughly dispel the misgivings that the second law of thermodynamics has created in people's minds, and hence face life with full confidence, because human society, the natural world, and even the universe are all intelligent systems. This paper studies the scheduling of a rail-guided vehicle (RGV) in a single-procedure intelligent machining system, part of Problem B of the 2018 China Undergraduate Mathematical Contest in Modeling. The system consists of one RGV, several computer numerical control (CNC) machines, and other components; the RGV operates multiple CNCs to process multiple workpieces, and the RGV scheduling scheme determines the efficiency of the system. Taking the RGV's movement path as the decision variable, the completion times of the RGV's operations on the CNCs as time nodes, and the remaining processing time of each workpiece as the state variable, a mathematical model of the problem is given; however, some parameters of the model are indexed by decision variables. By defining new variables and constraints, the model is reformulated as a nonlinear mixed-integer program free of variable subscripts and piecewise functions. Finally, a numerical example illustrates the correctness and practicality of the model. This paper proposes a new click-through-rate prediction method, called circumscribed-circle regression, as an attempted replacement for the widely used factorization machine (FM). Circumscribed-circle regression pieces together a closed convex polytope from hyperplanes to enclose the positive samples; it has an intuitive geometric interpretation and converges to the global optimum from any initial value in a single pass. The fitted surface is Lipschitz continuous and varies smoothly. On hand-designed star-ring, two-pile, and two-moon datasets, circumscribed-circle regression surpasses FM in classification accuracy, interpretability, and smoothness across the board. With a comparable number of parameters and comparable computation, its AUC exceeds FM's on the Avazu and Criteo datasets. The three frameworks for theories of consciousness taken most seriously by neuroscientists are that consciousness is a biological state of the brain, the global-workspace perspective, and the higher-order-state perspective. In the present article, consciousness is discussed from the viewpoint of entropy theory and the partition of complex systems: the human brain self-organizingly and adaptively implements partition, aggregation, and integration, and consciousness emerges. Abstract. In studying a class of random neural networks, some researchers have proposed Markov models of neural networks in which the Markov property of the network is merely assumed.
To reveal the mechanism by which the Markov property arises in neural networks, this paper studies how an infinite-dimensional random neural network (IDRNN) forms an inner Markov representation of environmental information. Because of the equivalence between the Markov property and Gibbsianness, our conclusion is that knowledge is ultimately expressed in an IDRNN by extreme Gibbs probability measures, i.e. ergodic Gibbs measures. This conclusion also applies at the quantum-mechanical level of the IDRNN; hence one can see that "concepts" and "consciousness" are generated at the particle (ion) level in the brain and experienced at the level of the neurons. We also discuss the ergodicity of IDRNNs with random neural potentials. This paper systematically investigates the general composition rules of the Hodge star operator and the exterior derivative acting on arbitrary differential forms. First, two composite operators that preserve the degree of a differential form are found, and a new operator is obtained as their linear combination. Second, for compositions of arbitrarily many Hodge star operators and exterior derivatives, unified expressions are derived for all formally distinct composite operators; these expressions are built from the single Hodge star operator, the exterior derivative, and the nonzero compositions of any two of them. On this basis, the interrelations among all these operators are analyzed, and the operators are classified according to how they change the degree of a differential form. Finally, as an application, it is discussed in detail how Maxwell's equations for the electromagnetic field can be constructed from linear combinations of differential forms of the same degree. In order to eliminate at the root the various paradoxes in the foundations of mathematics and place mathematics on a highly reliable footing, it is observed that formal logic can only be applied within domains of discourse (called feasible domains) in which the three laws of identity, non-contradiction, and the excluded middle all hold; otherwise various errors, including paradoxes, arise. Within the feasible domain where formal logic applies, as long as the premises are reliable and the derivations rigorous, no paradoxes occur. On the basis of this conclusion, the causes of some famous historical paradoxes, such as the liar paradox and the barber paradox, are analyzed; some logical errors in the application of the Peano axioms in the foundations of mathematics and in the proofs of Cantor's theorem, nested intervals, and the diagonal argument are pointed out; and a unified way of defining the natural, rational, and irrational numbers that avoids these errors is proposed. The aim of this paper is to study the heterogeneous optimization problem \begin{align*} \mathcal {J}(u)=\int_{\Omega}(G(|\nabla u|)+qF(u^+)+hu+\lambda_{+}\chi_{\{u>0\}} )\text{d}x\rightarrow\text{min}, \end{align*} in the class of functions $ W^{1,G}(\Omega)$ with $ u-\varphi\in W^{1,G}_{0}(\Omega)$, for a given function $\varphi$, where $W^{1,G}(\Omega)$ is the class of weakly differentiable functions with $\int_{\Omega}G(|\nabla u|)\text{d}x<\infty$. The functions $G$ and $F$ satisfy structural conditions of Lieberman's type that allow for a different behavior at $0$ and at $\infty$.
Given functions $q,h$ and a constant $\lambda_+\geq 0$, we establish several regularity results for minimizers of $\mathcal {J}(u)$, including local $C^{1,\alpha}$ continuity and local Log-Lipschitz continuity for minimizers of $\mathcal {J}(u)$ with $\lambda_+=0$ and $\lambda_+>0$ respectively. We also establish the growth rate near the free boundary for each non-negative minimizer of $\mathcal {J}(u)$ with $\lambda_+=0$ and $\lambda_+>0$ respectively. Furthermore, under the additional assumption that $F\in C^1([0,+\infty); [0,+\infty))$, local Lipschitz regularity is obtained for non-negative minimizers of $\mathcal {J}(u)$ with $\lambda_{+}>0$.
I am having some trouble with a question regarding a joint distribution with several normal distributions. The question is put up like this: $X_1 , X_2$ are two normally distributed random variables with distribution $N(0,I_1) $. $Z_1 = \frac{X_1 +X_2}{2}, Z_2 = \frac{X_1 -X_2}{2}, Z_3 = \frac{X_2 -X_1}{2}$. Find the joint distribution of $(Z_1, Z_2, Z_3)$. What I did so far was to try to find the distributions of $Z_1, Z_2, Z_3$, where I used the rules for sums of normal distributions. This gives me the result: $Z_1 \sim N(\frac{1}{2}\cdot 0 + \frac{1}{2}\cdot 0, \frac{1}{2}^2\cdot I_1 +\frac{1}{2}^2\cdot I_1 ) = N(0, \frac{1}{2}I_1)$ $Z_2 \sim N(\frac{1}{2}\cdot 0 + \frac{-1}{2}\cdot 0, \frac{1}{2}^2\cdot I_1 +\frac{-1}{2}^2\cdot I_1 ) = N(0, \frac{1}{2}I_1)$ $Z_3 \sim N(\frac{-1}{2}\cdot 0 + \frac{1}{2}\cdot 0, \frac{-1}{2}^2\cdot I_1 +\frac{1}{2}^2\cdot I_1 ) = N(0, \frac{1}{2}I_1)$ I then try using the rules for multivariate normal distributions to find the joint distribution of $Z_1, Z_2, Z_3 $. This gives: $ Z \sim N(\mu , \Sigma) $ where $\mu = [E[Z_1],E[Z_2], E[Z_3]]$ $\Sigma = [Cov[Z_i,Z_j]] \; i = 1,2,3 \; j = 1,2,3$ This however just gives me a vector with zeroes and a matrix with zeroes with my calculations. What am I doing wrong in my assumptions?
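A quick numerical sanity check of the covariance computation (a sketch, assuming $X_1, X_2$ are independent standard normals): write $(Z_1, Z_2, Z_3)^T = A\,(X_1, X_2)^T$, so that $\operatorname{Cov}(Z) = A A^T$.

```python
import numpy as np

# Z = A @ X with X ~ N(0, I_2), so Cov(Z) = A @ A.T
A = np.array([[ 0.5,  0.5],   # Z1 = (X1 + X2)/2
              [ 0.5, -0.5],   # Z2 = (X1 - X2)/2
              [-0.5,  0.5]])  # Z3 = (X2 - X1)/2
cov = A @ A.T
print(cov)
# Diagonal: the variances 1/2, as computed above.
# Off-diagonal: NOT all zero -- Cov(Z2, Z3) = -1/2.
```

Note that $Z_3 = -Z_2$, so the covariance matrix is singular and the joint distribution of $(Z_1, Z_2, Z_3)$ is a degenerate multivariate normal concentrated on a plane.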
Institute of Mathematical Statistics Lecture Notes - Monograph Series Escape of mass in zero-range processes with random rates Abstract We consider zero-range processes in $\mathbb{Z}^d$ with site-dependent jump rates. The rate for a particle jump from site $x$ to $y$ in $\mathbb{Z}^d$ is given by $\lambda_x g(k) p(y-x)$, where $p(\cdot)$ is a probability in $\mathbb{Z}^d$, $g(k)$ is a bounded nondecreasing function of the number $k$ of particles at $x$ and $\lambda = \{\lambda_x\}$ is a collection of i.i.d. random variables with values in $(c,1]$, for some $c>0$. For almost every realization of the environment $\lambda$ the zero-range process has product invariant measures $\{\nu_{\lambda,v}: 0\le v \le c\}$ parametrized by $v$, the average total jump rate from any given site. The density of a measure, defined by the asymptotic average number of particles per site, is an increasing function of $v$. There exists a product invariant measure $\nu_{\lambda,c}$, with maximal density. Let $\mu$ be a probability measure concentrating mass on configurations whose number of particles at site $x$ grows less than exponentially with $\|x\|$. Denoting by $S_{\lambda}(t)$ the semigroup of the process, we prove that all weak limits of $\{\mu S_{\lambda}(t), t\ge 0 \} $ as $t \to \infty$ are dominated, in the natural partial order, by $\nu_{\lambda,c}$. In particular, if $\mu$ dominates $\nu_{\lambda,c}$, then $\mu S_{\lambda}(t)$ converges to $\nu_{\lambda,c}$. The result is particularly striking when the maximal density is finite and the initial measure has a density above the maximal one.
Chapter information Source Asymptotics: Particles, Processes and Inverse Problems: Festschrift for Piet Groeneboom (Beachwood, Ohio, USA: Institute of Mathematical Statistics, 2007) Dates First available in Project Euclid: 4 December 2007 Permanent link to this document https://projecteuclid.org/euclid.lnms/1196797071 Digital Object Identifier doi:10.1214/074921707000000300 Mathematical Reviews number (MathSciNet) MR2459934 Zentralblatt MATH identifier 1205.60169 Rights Copyright © 2007, Institute of Mathematical Statistics Citation Ferrari, Pablo A.; Sisko, Valentin V. Escape of mass in zero-range processes with random rates. Asymptotics: Particles, Processes and Inverse Problems, 108--120, Institute of Mathematical Statistics, Beachwood, Ohio, USA, 2007. doi:10.1214/074921707000000300. https://projecteuclid.org/euclid.lnms/1196797071
How to treat $\epsilon$ and '\$' in a top-down parser using a predict table? The construction of the predict table Given a production $X \rightarrow w$, row $X$ and column $t$ -Mark $X \rightarrow w$ for each $t \in FIRST(w)$ -If $NULLABLE(w)$, then mark $X \rightarrow w$ for each $t \in FOLLOW(X)$ as well says to create columns for all terminal symbols. $\epsilon$ is a terminal symbol, so it's naturally added as a column. However, I've somehow interpreted that I could/should add '\$' as a terminal symbol as well, basically because '\$' is used in the FOLLOW sets (and the FOLLOW sets don't contain $\epsilon$). But does this create redundancy, since the table would then hold basically the same predict rules for '\$' and $\epsilon$ (at least in the implementation I have here)? The rules given here also treat '\$' and $\epsilon$ as if they were separate: http://www.jambe.co.nz/UNI/FirstAndFollowSets.html The FOLLOW sets basically use '\$' in place of $\epsilon$, but the predict table uses $\epsilon$, because it's a terminal symbol.
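As a concrete sketch (a toy grammar of my own, not from the post): the table's columns are the terminals plus the end-marker `$`; $\epsilon$ never gets a column of its own, because a nullable production is instead predicted on the tokens in the FOLLOW set of its left-hand side.

```python
# Toy LL(1) predict table for the grammar (hypothetical example):
#   S -> a S | epsilon
# FIRST(aS) = {'a'}; S is nullable and FOLLOW(S) = {'$'}.
EPS = ()  # the empty right-hand side; epsilon is NOT a terminal/column

grammar = {'S': [('a', 'S'), EPS]}
first = {('a', 'S'): {'a'}, EPS: set()}
follow = {'S': {'$'}}

def build_predict_table(grammar, first, follow):
    table = {}  # keyed by (nonterminal, lookahead terminal or '$')
    for lhs, productions in grammar.items():
        for prod in productions:
            for t in first[prod]:          # predict on FIRST(rhs)
                table[(lhs, t)] = prod
            if prod == EPS:                # nullable rhs (here only epsilon)
                for t in follow[lhs]:      # ...predict on FOLLOW(lhs),
                    table[(lhs, t)] = prod #    which is where '$' enters
    return table

table = build_predict_table(grammar, first, follow)
print(table)  # {('S', 'a'): ('a', 'S'), ('S', '$'): ()}
```

So in this representation there is no `$`-vs-`ε` redundancy: `$` is an ordinary lookahead column, and `ε` only appears as the body of a production stored in a cell.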
This document is part of the “DrBats” project, whose goal is to implement exploratory statistical analysis on large sets of data with uncertainty. The idea is to visualize the results of the analysis in a way that explicitly illustrates the uncertainty in the data. The “DrBats” project applies a Bayesian Latent Factor Model. This project involves the following persons, listed in alphabetical order: Bénédicte Fontez (aut), Nadine Hilgert (aut), Susan Holmes (aut), Gabrielle Weinrott (cre, aut). \(\mathbf{Y}\) : observed response variable; \(y_1\), …, \(y_i\), …, \(y_N\) are the rows, each of length \(P\) \(\mathbf{W}\) : a low-rank matrix, with \(W_1\), …, \(W_D\) columns of length \(P\), \(D \leq P\) \(\mathbf{\beta}\) : a \(D \times N\) matrix of factor loadings, where \(\beta_i\) are the factor loadings of the \(i^{th}\) observed individual \(\epsilon\) : the \(P \times N\) matrix of errors, with variance \(\sigma^2\) We want to visualize \(\mathbf{Y}\) in a lower dimension. The \(\mathbf{\beta}\) are the coordinates of the observations in the lower dimension, \(\mathbf{W}\) is the transition matrix, and \(\epsilon\) represents the difference between the low-rank representation and the actual data. Without loss of generality, we assume that \(\mathbf{Y}\) is centered. The Latent Factor Model is therefore: \(\mathbf{Y_i}^T = \mathbf{W}\beta_i + \epsilon_i\) For identifiability reasons, we will estimate a \(P \times D\) positive lower triangular (PLT) matrix instead of the full matrix \(\mathbf{W}\). We can decompose \(\mathbf{W}\) as the product of an orthogonal matrix and an upper-triangular matrix, \(\mathbf{W} = \mathbf{Q} \mathbf{R}\).
As such, we estimate the PLT matrix \(\mathbf{R}^T = \left(r_{j, k}\right)_{j = 1:P, k = 1:D}\) with the rotation matrix \(\mathbf{Q}\) known (for instance, estimated from the classical PCA): \[ \mathbf{R}^T = \begin{pmatrix} r_{1, 1} & 0 & \cdots & 0\\ r_{2, 1} & r_{2, 2} & \ddots & 0 \\ r_{3, 1} & r_{3, 2} & \ddots & \vdots \\ \vdots & & \ddots & r_{P, D} \end{pmatrix} \] We assume for now that all rows of \(\mathbf{Y}\) have the same variance, that is \(\sigma^2 \mathbf{1}_P\). The full Bayesian model is:\[\begin{eqnarray}\label{model} \mathbf{Y_i}^T|\mathbf{R}^T, \beta_i, \sigma^2 &\overset{i.i.d.}{\sim}& \mathcal{N}_P(\mathbf{R}^T\beta_i, \sigma^2 \mathbf{1}_P) \end{eqnarray}\] \[\begin{eqnarray*} \beta_i & \overset{i.i.d.}{\sim}& \mathcal{N}_D(0, \mathbf{1}_D) \\ r_{j, k} & \overset{i.i.d.}{\sim}& \mathcal{N}(0, \tau^2) \\ \sigma^2, \tau^2 &\sim& IG(0.001, 0.001) \end{eqnarray*}\] We assume that the non-null entries of the PLT matrix are independent, centered and normal, with same variance \(\tau^2\). To integrate information about uncertainty for interval-valued data, we can put an informative prior on the variance of each individual \(\mathbf{Y_i}\). This could be in the form of a weight matrix, for instance \(\sigma^2 \Phi_i\) where \(\Phi_i\) is either fixed by the user, or estimated. Finally, contrary to classical Principal Component Analysis, in this model the factor loadings \(\beta_i\) for each individual are random variables. This allows for uncertainty of projection, resulting in non-elliptical confidence regions around the estimated factor loadings. We can simulate data using \(\mathbf{Y_i}^T = \mathbf{R}^T\beta_i + \epsilon_i\) with the function drbats.simul(). We choose a matrix \(\mathbf{R}\), and then build \(\mathbf{Y_i}\) by simulating \(\beta_i\) and \(\epsilon_i\) for each individual. To obtain the full matrix \(\mathbf{Y}\), we stack the rows \(\mathbf{Y_i}\). 
The matrix \(\mathbf{R}\) built in this package is, as previously stated, the result of the matrix decomposition of a full low-rank matrix \(\mathbf{W}\). To choose an \(\mathbf{R}\) that resembles something out of an agronomic dataset, we build \(\mathbf{W}\) to be the low-rank matrix of an extreme case of data found in agronomy: oftentimes observations are signals that trend over time, with peaks at certain moments over the observation period. In addition, the number of observations can be small when these peaks occur. To build \(\mathbf{W}\) and subsequently \(\mathbf{R}\), we first simulate bi-modal signals observed unevenly over time.

suppressPackageStartupMessages(require(DrBats))
set.seed(45)
toydata <- drbats.simul(N = 5, P = 150, t.range = c(0, 1000), b.range = c(0.2, 0.4), c.range = c(0.6, 0.8), b.sd = 5, c.sd = 5, y.range = c(-5, 5), sigma2 = 0.2, breaks = 8, data.type = 'sparse.tend')
matplot(t(toydata$t), t(toydata$X), type = 'l', lty = 1, lwd = 1, xlab = 'Time', ylab = ' ')
points(t(toydata$t), t(toydata$X), pch = '.')

For details please refer to the simul_and_project.pdf vignette. We set the dimensionality of the low-rank matrix as the number of axes retained after Principal Component Analysis:

barplot(toydata$proj.pca$lambda.perc, ylim = c(0, 1), col = mycol[1:length(toydata$proj.pca$lambda.perc)])
## [1] "Number of retained axes: 2"

If you want to use the PCA rotation to anchor the latent factors, the function wlu() does an LU matrix decomposition of the matrix of latent factors. See the modelFit.pdf vignette for details.

fit <- modelFit(model = "PLT", var.prior = "IG", prog = "stan", Xhisto = toydata$Y.simul$Y, nchains = 4, nthin = 50, niter = 10000, D = toydata$wlu$D)

The main.modelFit() function outputs an object called fit, containing the posterior estimates for the parameters of the model. For evaluation, we can convert the object to an mcmc.list to apply the diagnostic tests in the coda package manual.
Our package also works on mcmc.lists for coherence. We can plot the histogram of the posterior density of the data:

post <- postdens(codafit, Y = toydata$Y.simul$Y, D = toydata$wlu$D, chain = 1)
hist(post, main = "Histogram of the posterior density", xlab = "Density")

It’s possible to visualize the projection of the observations onto the lower-dimensional space with the function visbeta(). We can project onto the latent factors of our choice; here we chose the first and second (we didn’t have a choice actually, since there are only two latent factors in the toy example). The uncertainty envelope at \(95 \%\) is also plotted if we choose quant = c(0.05, 0.95).

beta.res <- visbeta(codafit, toydata$Y.simul$Y, toydata$wlu$D, chain = 1, axes = c(1, 2), quant = c(0.05, 0.95))
ggplot2::ggplot() + ggplot2::geom_path(data = beta.res$contour.df, ggplot2::aes(x = x, y = y, colour = ind)) + ggplot2::geom_point(data = beta.res$mean.df, ggplot2::aes(x = x, y = y, colour = ind)) + ggplot2::ggtitle("Convex hull of Score Estimates")

W.res <- visW(codafit, toydata$Y.simul$Y, toydata$wlu$D, chain = 1, factors = c(1, 2))
W.df <- data.frame(time = 1:9, W.res$res.W)
ggplot2::ggplot() + ggplot2::geom_step(data = W.df, ggplot2::aes(x = time, y = Estimation, colour = Factor)) + ggplot2::geom_step(data = W.df, ggplot2::aes(x = time, y = Lower.est, colour = Factor), linetype = 3) + ggplot2::geom_step(data = W.df, ggplot2::aes(x = time, y = Upper.est, colour = Factor), linetype = 3) + ggplot2::ggtitle("Latent Factor Estimations")

G. Weinrott, B. Fontez, N. Hilgert & S. Holmes, “Modèle Bayésien à facteurs latents pour l’analyse de données fonctionnelles”, Actes des JdS 2016.
I'm trying to find all conformal automorphisms of the upper half plane $\{\Im[z] \gt 0\}$, known to be $f(z) = \frac{az + b}{cz + d}$ where $a, b, c, d$ are real and $ad - bc \gt 0$. The main work is to show that the automorphisms are rational functions with real coefficients. The thing I'm having trouble with is steps (1.) and (2.) below. Show that, taking the limit as $z$ approaches the real line, we get an automorphism of $\{\Im[z] \ge 0\} \cup \{\infty\}$. Then the real line + $\infty$ maps to the real line + $\infty$. Extend to an automorphism of the entire complex plane + infinity by Schwarz reflection. $\infty$ maps to a finite real number, or else we have an entire function; the only automorphisms of the complex plane are $f(z) = az + b$, and we are done because $f(0)$ must be real and $f(1)$ must be real by (2.). So for one real number $r$, $f(r) = \infty$. $r$ cannot be an essential singularity or else we violate injectivity. So $f$ has a pole of order 1 at $r$, because any higher order would violate injectivity again. In the end we have a meromorphic function in the entire plane with one pole on the real line, bounded at infinity. By Mittag-Leffler we know that $f$ is a rational function. By injectivity, we know its degree is 1. Since it's real on the real line, we know its coefficients can be taken real. Here we can appeal to the power series expansion of $f(z)(z-r)$ around $r$, which must have real coefficients since they correspond to derivatives of $f(z)(z-r)$. But such derivatives must be real since we can view the function as a real function of a real variable. I know there is another way to do this with mappings of the unit disk, but I want to see if this way can work out.
Am I doing this right? I split the problem up into the cases of 2 the same, 3 the same, and 4 the same, but I feel like something special has to be done for 2 of the same, because what if there are 2 pairs (like two 3's and two 4's)? This is what I have: For 2 of the same: $5\times 5\times 6\times {4\choose 2}=900$ For 3 of the same: $5\times 6\times {4\choose 3}=120$ For 4 of the same: $6\times {4\choose 4}=6$ Combined: $900+120+6=1026$ Total possibilities: $6^4=1296$ Probability of at least 2 dice the same: $\frac {1026}{1296}\approx 79.17$% Confirmation that I'm right, or pointing out where I went wrong, would be appreciated. Thanks! Sorry if the formatting could use work, still getting the hang of it.
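One way to check the count (a brute-force sketch, not part of the original post): enumerate all $6^4$ ordered outcomes and count those with at least one repeated value.

```python
from itertools import product

# Count ordered 4-tuples of die faces with at least two equal values
total = 0
at_least_two_same = 0
for roll in product(range(1, 7), repeat=4):
    total += 1
    if len(set(roll)) < 4:  # some value repeats
        at_least_two_same += 1

print(at_least_two_same, total)   # 936 1296
print(at_least_two_same / total)  # about 0.7222
```

The complement count (all four faces distinct) is $6\cdot 5\cdot 4\cdot 3 = 360$, so the enumeration should agree with $1296 - 360 = 936$.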
I came across the following problems on null sequences during my self-study of real analysis. Let $x_n = \sqrt{n+1}- \sqrt{n}$. Is $(x_n)$ a null sequence? Consider $y_n = \sqrt{n+1}+ \sqrt{n}$. Then $x_{n}y_{n} = 1$ for all $n$. So either $(x_n)$ or $(y_n)$ is not a null sequence. It seems $(y_n)$ is not a null sequence. I think $(x_n)$ is a null sequence because $\sqrt{n+1} \approx \sqrt{n}$ for large $n$, which implies that $x_n \approx 0$ for large $n$. If $(x_n)$ is a null sequence and $y_n = (x_1+ x_2+ \dots + x_n)/n$, then $(y_n)$ is a null sequence. Suppose $|x_n| \leq \epsilon$ for all $n >N$. If $n>N$, then $y_n = y_{N}(N/n)+ (x_{N+1}+ \dots+ x_n)/n$. From here what should I do? If $p: \mathbb{R} \to \mathbb{R}$ is a polynomial function without constant term and $(x_n)$ is a null sequence, then $p((x_n))$ is null. We know that $|x_n| \leq \epsilon$ for all $n>N$. We want to show that $|p(x_n)| \leq \epsilon$ for all $n>N_1$. We know that $p(x) = a_{d}x^{d} + \cdots+ a_{1}x$. So $$|p(x_n)| \leq |a_{d}| \epsilon^{d} + \cdots+ |a_{1}| \epsilon$$ for all $n>N$.
This is the fifth post in a series on bubbles in the U.S. Equity Market. The first part can be found here. Overview In Part 2, we discussed the test developed by Phillips et al. to detect bubbles. As I mentioned in that post, the ADF test is basically looking for extreme return persistence. This gave me an idea - in the presence of a “bubble”, returns should become more predictable. This post sets up a simple model and tests it empirically with the Nasdaq “bubble” discussed in Part 3. Simple Model Consider the classical asset pricing model, where dividends are the only fundamentals:\begin{equation}p_t = E_t \left [\sum\limits_{j=1}^\infty \beta^j \frac{u’(c_{t+j})}{u’(c_t)} d_{t+j} \right] + B_t\end{equation}Now, suppose the process for dividends follows a random walk without drift:\begin{equation}d_{t+1}=d_t + \epsilon_t\end{equation}where $\epsilon_t$ is white noise. Further, suppose the bubble term $B_t = 0$. Under these assumptions, the stock price follows a random walk as well, so $E_t[p_{t+1}] = p_t$. In other words, our best forecast of tomorrow’s price, $p_{t+1}$, is the price today, $p_t$. Now suppose $B_t \neq 0$ and $E_t[B_{t+1}] = (1+r)B_t$. With dividends still following a random walk without drift, our best forecast of tomorrow’s price is $E_t[p_{t+1}] = p_t + rB_t$. As long as the bubble persists, the bubble term should dominate the asset’s returns, and on average, the price should grow at rate $r$ (assuming the variance of $\epsilon_t$ is sufficiently small). To convince you of this, consider a simplified model with linear utility.
Then with no bubble, and using the fact that $E_t[d_{t+j}]=d_t$: \begin{equation} p_t=E_t \Big[\beta d_{t+1} + \beta^2 d_{t+2} + \dots \Big] = \frac{\beta}{1-\beta} d_t \end{equation} Applying the law of iterated expectations: \begin{equation} E_t[p_{t+1}]=E_t \Big[\beta d_{t+2} + \beta^2 d_{t+3} + \dots \Big] = \frac{\beta}{1-\beta} d_t=p_t \end{equation} Now suppose we have a bubble: \begin{equation} p_t=E_t \Big[\beta d_{t+1} + \beta^2 d_{t+2} + \dots \Big] + B_t= \frac{\beta}{1-\beta} d_t + B_t \end{equation} Again applying the law of iterated expectations: \begin{equation} E_t[p_{t+1}]=E_t \Big[\beta d_{t+2} + \beta^2 d_{t+3} + \dots \Big] + E_t[B_{t+1}] = \frac{\beta}{1-\beta} d_t +(1+r)B_t=p_t+rB_t \end{equation} Let $TR_6$ denote the total return of an asset over the past 6 periods. Then, in the presence of a bubble: \begin{equation} (TR_6+1)^{\frac{1}{6}}-1 =\hat{r} \approx r \end{equation} To test for a bubble, construct two forecasts: 1) Under no bubble: $\hat{p}_{t+1}=p_t$ 2) With a bubble: $\hat{p}_{t+1}=(1+\hat{r})p_t$ Finally, (mis)use the Diebold-Mariano test to compare the forecasts (I know this is not the intended use for the Diebold-Mariano test - this will be the topic of a future blog post when I review Diebold (2013)). Empirics Relative Forecast Error Before getting into the test described above, I wanted to make a first pass with some “eyeball econometrics” (a great quote from Uhlig (2005)). Note - I present all my results here, not just the ones that worked the way I wanted. Only reporting positive results and data snooping is a much bigger issue, and it will be the topic of a future blog post. Daily Data For now, forget the model. Suppose we only use the idea that returns become more predictable in a bubble. Using daily data, run the following regression: \begin{equation} r_t = \alpha + \sum\limits_{i=1}^{p} \beta_i r_{t-i} + \epsilon_t \end{equation} where $r_t$ is returns and $P_t$ is the level (for example - the value of the Nasdaq or S&P 500 index). Use this to forecast $r_{t+1}$, and then forecast $P_{t+1}$.
The exact procedure I used is: 1) Select the number of lags $p$ based on AIC and BIC 2) Use a 30 day rolling window to run the regression and calculate a one-step-ahead forecast 3) Compute the relative forecast error 4) Take a 30 day moving average of the forecast error to remove noise In the figure below, the first observation is normalized to 100. The result here is counterintuitive. Although we think returns become more predictable during a bubble, the forecasting actually gets worse! A possible explanation is that in a bubble, growth is exponential, rather than linear, so I repeated the exercise with log levels. The results are very similar. Given these two failures, I was worried that the data might be too noisy at the daily frequency to get good predictions (recall in Part 3 I discussed how the volatility of the index was very high at the time). The next section repeats the exercise with monthly data. Monthly Data The plot below is computed using the same methodology as above, except I did not smooth the error: You can see there is still a large relative error during the bubble, but it is also very volatile. Smoothing the error at the quarterly frequency (using the two prior months and the current month) we get: Now, this is interesting - we get a dip in the forecasting error in the middle of 1999. I don’t think it’s worth reading into this too much (as I had to do several transformations before finding this result), but there may be something here. Forecasting Performance Given the results above, it doesn’t seem like returns will be much more predictable during a bubble. That being said, I still think it is a worthwhile exercise to test the model described above. The procedure I used to calculate forecasts is as follows: 1) Use 6 months of data to compute $\hat{r}$ 2) Compute two forecasts, one being the random walk $\hat{p}_{t+1}=p_t$ and the other the bubble model $\hat{p}_{t+1}=(1+\hat{r})p_t$ 3) Use the Diebold-Mariano test to compare the two forecasts Doing this for 1998-2001 (inclusive), the random walk is better (at the 5% level) for forecasting the S&P 500 than the bubble model.
For the Nasdaq, the forecasts are not significantly different at the 5% level (although the random walk model is better at the 10% level). As above, there is not much of a difference, but there is a difference. I don’t think it makes sense to go too much further down this route without better theoretical justification, as at some point we are just data snooping. Conclusion Although the results were not what I expected, they are unsurprising. During a bubble we don’t just have high returns, but volatile returns as well. Even if returns become more predictable, this is obscured by high volatility.
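To make the procedure concrete, here is a rough sketch of the bubble-vs-random-walk comparison on simulated data. Everything here, the price process, the bubble growth rate, the sample length, and the hand-rolled Diebold-Mariano statistic, is my own illustrative choice, not the data or code behind the results above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a price with an exponentially growing bubble component
# (illustrative parameters, not the Nasdaq calibration from the post)
T, r = 500, 0.01
fundamental = np.cumsum(rng.normal(0, 1, T)) + 100.0  # random-walk fundamental part
bubble = (1 + r) ** np.arange(T)
p = fundamental + bubble

# Forecast 1 (no bubble): p_hat = p_t ; Forecast 2 (bubble): p_hat = (1 + r_hat) p_t
r_hat = (p[-1] / p[-7]) ** (1 / 6) - 1        # 6-period estimate of the growth rate
e1 = p[1:] - p[:-1]                            # random-walk forecast errors
e2 = p[1:] - (1 + r_hat) * p[:-1]              # bubble-model forecast errors

# Hand-rolled Diebold-Mariano statistic on squared-error loss
d = e1**2 - e2**2
dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
print(f"DM statistic: {dm:.2f}  (positive favours the bubble model)")
```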
I'm wondering if someone can provide a clarification between 2 seemingly opposing definitions from reputable sources on dynamical systems! My Russian textbook, "Dynamical Systems I: Ordinary Differential Equations and Smooth Dynamical Systems" by Anosov, Arnol'd, Aronson, et al., says the following to determine whether a singular point of a dynamical system is asymptotically stable: Theorem 4.2: If all eigenvalues of the linear part of a vector field $v$ at a singular point have negative real part, then the singular point is asymptotically stable. To me, this means for any arbitrary dynamical system, say, $\dot{x} = f(x)$, where $x \in \mathbb{R}^{n}$, one can find where $f(x) = 0$, and compute the eigenvalues of the corresponding Jacobian to determine stability. Further, if one finds that $\operatorname{Re}\lambda_{i} < 0$, for $i = 1,2,...,n$, then this point is locally stable, by this theorem. But, is this theorem now suggesting that this point is asymptotically stable as well? Almost every single textbook on ODEs that I have checked says that to determine whether an equilibrium point is asymptotically stable, some more general method is required, like constructing Lyapunov functions, determining limit sets, etc... Why is there such a difference? Is there a difference?
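For what it's worth, the eigenvalue check the theorem describes is easy to automate. The system below (a damped linear oscillator) and the finite-difference Jacobian are my own illustration, not from either textbook:

```python
import numpy as np

def jacobian(f, x0, h=1e-6):
    """Central finite-difference Jacobian Df(x0) of f: R^n -> R^n."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    return J

# Example system with an equilibrium at the origin: a damped linear oscillator
def f(x):
    return np.array([x[1], -x[0] - 0.5 * x[1]])

eigs = np.linalg.eigvals(jacobian(f, np.zeros(2)))
print(eigs)
# Theorem 4.2: all real parts negative => asymptotically stable equilibrium
assert np.all(eigs.real < 0)
```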
I want to calculate the total electrostatic energy of the Cavendish Experiment (two concentric spheres of radii $R_{1,2}$ which are connected, the outer one gets charged, the connection is then removed, and after removing the outer sphere and measuring the charge of the inner one, it's zero). The formula for this is: $E_{tot} = \frac{1}{2} \sum_{i,j = 1,2} E_{ij}$ with $E_{ij} = \frac{\sigma_i \sigma_j}{4 \pi \epsilon_0} \int_{S_i} d^2r \int_{S_j} d^2 r' \frac{1}{|\vec{r} - \vec{r}'|} e^{-\mu |\vec{r}-\vec{r}'|} $ where $S_{i,j}$ denotes integrating over the sphere $i$ or $j$. I have the sample solution for this exercise and it massively confuses me. It simply states: "We calculate: $|\vec{r}-\vec{r}'| = \sqrt{r^2 + r'^2 - 2\vec{r}\cdot\vec{r}'} = \sqrt{R_i^2 + R_j^2 - 2R_i R_j \cos \theta'}$" Now everything would be fine here if $\cos \theta '$ were simply the angle between $\vec{r}$ and $\vec{r}'$. But they actually go on calculating the integral and use $\theta '$ as the polar angle in the $r'$ spherical coordinate system. How is this true? This greatly simplifies the integral, such that we can immediately solve $\int_{S_i} d^2r$ to give $4 \pi R_i^2$, and with the substitution $x = \cos \theta'$ the integral over sphere $j$ gets easy as well. But I can't make sense of why that formula is true. If I directly compute the scalar product of $\vec{r}'$ and $\vec{r}$ this gives me: \begin{pmatrix} R_j \sin\theta' \cos \phi'\\ R_j \sin \theta' \sin\phi' \\ R_j \cos\theta' \end{pmatrix} dotted with \begin{pmatrix} R_i \sin\theta \cos \phi\\ R_i \sin \theta \sin\phi \\ R_i \cos\theta \end{pmatrix} $= R_i R_j (\sin\theta' \cos \phi'\sin\theta \cos \phi + \sin \theta' \sin\phi'\sin \theta \sin\phi+ \cos\theta'\cos\theta)$ This never equals simply $R_i R_j \cos \theta'$. Where did I go wrong here? I really hope someone can clear this up for me
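The resolution is that the inner integral over sphere $j$ is invariant under rotations (sphere $j$ is spherically symmetric), so one may choose the primed polar axis along $\vec{r}$; then the angle between $\vec{r}$ and $\vec{r}'$ coincides with the polar angle $\theta'$. A numerical sketch of this symmetry (my own check, with $\mu=0$ and made-up radii $R_i=2$, $R_j=1$):

```python
import numpy as np

def shell_integral(r_vec, Rj, n_theta=400, n_phi=400):
    """Midpoint-rule value of the integral of dA'/|r - r'| over a sphere of radius Rj."""
    th = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    ph = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    rp = Rj * np.stack([np.sin(TH) * np.cos(PH),
                        np.sin(TH) * np.sin(PH),
                        np.cos(TH)])
    dist = np.linalg.norm(rp - np.asarray(r_vec, dtype=float).reshape(3, 1, 1), axis=0)
    dA = Rj**2 * np.sin(TH) * (np.pi / n_theta) * (2 * np.pi / n_phi)
    return np.sum(dA / dist)

Ri, Rj = 2.0, 1.0
I_z = shell_integral([0, 0, Ri], Rj)                                   # r on the polar axis
I_tilted = shell_integral(np.array([1, 1, 1]) / np.sqrt(3) * Ri, Rj)   # r tilted away

# Both agree (and match the known shell value 4*pi*Rj^2/Ri): only |r| matters
print(I_z, I_tilted, 4 * np.pi * Rj**2 / Ri)
```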
Expected Discounted Utility Expected discounted utility is one of the most common ways to represent preferences over risky consumption plans. Consider an agent, sitting at time $t$, who will receive a consumption stream $c=(c_t,\ldots,c_T)$ until time $T$:\begin{equation}U_t(c)= E_t \left[ \sum \limits_{s=t}^T \beta^{s-t}u_s(c_s)\right]\end{equation}where $\beta$ is the discount factor and $u_s$ is a within-period utility function. A problem with expected discounted utility is that it cannot separate preferences for smoothing over time, and smoothing across states. Consider the following example: You are stranded on an island at $t=0$. A man comes in a boat and offers you a choice of two deals (1) Every morning he comes and flips a coin, if it comes up heads, you get a bushel of bananas that day (2) He flips a coin today, if it comes up heads you get a bushel of bananas every day until time $T$, and if it comes up tails you get no bananas until time $T$. It’s intuitive that plan 2 is riskier than plan 1, but under expected discounted utility, for any $\beta$ and $u$ the agent is indifferent between the two plans: \begin{equation} U(Plan 1) = \sum\limits_{t=0}^T \beta^t \frac{u_t(1)+u_t(0)}{2}= U(Plan 2) \end{equation} Recursive Utility The only way to even partially separate preferences for smoothing over time, and preferences for smoothing across states is to use recursive utility (see Skiadas 2009 for a complete proof - this is an if and only if relationship). Recursive utility has two ingredients, the aggregator, which determines preferences over deterministic plans (time smoothing) and the conditional certainty equivalent (state smoothing). The steps below formulate expected discounted utility as recursive utility. For simplicity, drop the dependence of all functions on time, so we can remove all the subscript $s$’s. Now, propose a desirable property for the utility function - normalization. Consider any deterministic plan $x$ (constant consumption $x$ every period); then a utility is normalized if $U_t(x)=x$.
Normalize utility $U_t$, the expected discounted utility defined above, as $V_t=\psi_t^{-1}(U_t)$, where $\psi_t(x)=\sum_{s=t}^T \beta^{s-t}u(x)$. Basically, $\psi_t(x)$ gives the discounted utility of deterministic plan $x$, so $\psi_t^{-1}(U_t(c))$ gives the deterministic $x$ required to make the agent indifferent between potentially risky plan $c$ and deterministic plan $x$. For expected discounted utility, the aggregator is: $f(t,x,y)=\psi_t^{-1}\bigl(u(x)+\beta\psi_{t+1}(y)\bigr)$. The intuition is that with expected discounted utility, the agent’s utility from plan $c$ is a weighted average of their consumption today, and the utility of the equivalent deterministic plan from $t+1$ until $T$. For utility to be normalized, the aggregator must satisfy $f(t,x,x)=x$ for any deterministic plan $x$. Put this into the equation above to solve for $\psi$: $\psi_t^{-1}\bigl(u(x)+\beta\psi_{t+1}(x)\bigr)=x$. Then, apply $\psi_t$ to both sides: \begin{equation} u(x) + \beta \psi_{t+1} (x) = \psi_t(x) \end{equation} Fix $x$, and interpret terminal consumption value as consuming $x$ for the rest of time (equivalently, imagine letting $T$ go to infinity). This implies we can drop the subscripts on the $\psi$’s: \begin{equation} u(x)=\psi(x)-\beta\psi(x) \end{equation} Rearranging yields $\psi(x)=\frac{u(x)}{1-\beta}$ and $\psi^{-1}(z)=u^{-1}\bigl((1-\beta)z\bigr)$. Putting this back into our expression above for $f$ implies: \begin{equation} f(t,x,y)=u^{-1}((1-\beta)u(x)+\beta u(y)) \end{equation} Given the way the aggregator is defined, we can see that $f$ depends on the curvature of $u$ - in other words, the within period utility function will influence preferences for smoothing over time. This also gives intuition for how to make an agent not indifferent between deal (1) and deal (2) described above - the conditional certainty equivalent needs to be defined independently of $u$ (or $f$). Conclusion Recursive utility is a general framework, with expected discounted utility as a special case. For a deeper look at recursive utility, see Asset Pricing Theory by Costis Skiadas.
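The island example's indifference result is easy to verify numerically. The choices $u(c)=\sqrt{c}$, $\beta=0.95$, and $T=30$ below are arbitrary illustrations, not values from the post:

```python
# Expected discounted utility of the two banana deals from the island example.
# Assumed ingredients for illustration: u(c) = sqrt(c), beta = 0.95, T = 30.
beta, T = 0.95, 30
u = lambda c: c ** 0.5

discount = sum(beta**t for t in range(T + 1))

# Plan 1: independent coin each morning -> expected within-period utility (u(1)+u(0))/2
U_plan1 = discount * (u(1) + u(0)) / 2

# Plan 2: one coin today -> bananas every day (heads) or never (tails)
U_plan2 = 0.5 * discount * u(1) + 0.5 * discount * u(0)

print(U_plan1, U_plan2)
assert abs(U_plan1 - U_plan2) < 1e-12  # indifference, despite plan 2 being riskier
```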
Yes, the question you meant to ask is true, although (as pointed out already) it is not quite correct as stated. To keep things clear, I will start by restating your question in a more precise form. Let $n> 1$ be an integer and let $a_1,\ldots,a_n$ be positive real numbers, all between $0$ and $1$. Prove that$$\frac{\sum_{i=1}^{n-1}a_i}{1-\prod_{i=1}^{n-1}(1-a_i)}\leq \frac{\sum_{i=1}^{n}a_i}{1-\prod_{i=1}^{n}(1-a_i)}.$$ Proof. Let $S=\sum_{i=1}^{n-1}a_i$ and let $P=\prod_{i=1}^{n-1}(1-a_i)$. Since the $a_i$ are positive, both denominators are also positive so we can multiply through and substitute $S$ and $P$ to obtain the equivalent inequality$$\frac{S}{1-P}\leq \frac{S+a_n}{1-(1-a_n)P}\iff S\bigl(1-(1-a_n)P\bigr)\leq (1-P)(S+a_n).$$Expanding this out and collecting like terms leads to the equivalent inequality$$a_nSP \leq a_n(1-P),$$which (since $a_n$ is positive) is equivalent to $SP\leq 1-P$, which is the same as $P(1+S)\leq 1$. But this is true, since$$P(1+S)=\prod_{i=1}^{n-1}(1-a_i) \ \cdot \ \Bigl(1+\sum_{i=1}^{n-1}a_i\Bigr)\leq \prod_{i=1}^{n-1}(1-a_i)\ \cdot \ \prod_{i=1}^{n-1}(1+a_i)=\prod_{i=1}^{n-1}(1-a_i^2)\leq 1.$$This establishes the desired inequality. $\square$
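A randomized numerical check of the inequality (no substitute for the proof above, just a sanity test):

```python
import random

random.seed(1)

def ratio(a):
    """The quantity sum(a_i) / (1 - prod(1 - a_i)) for a list of a_i in (0, 1)."""
    s = sum(a)
    prod = 1.0
    for ai in a:
        prod *= 1 - ai
    return s / (1 - prod)

# The ratio should be non-decreasing as we append one more a_n in (0, 1)
for _ in range(1000):
    n = random.randint(2, 8)
    a = [random.uniform(1e-6, 1 - 1e-6) for _ in range(n)]
    assert ratio(a[:-1]) <= ratio(a) + 1e-12
print("all checks passed")
```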
How would I solve the following. An algorithm that is $O(n^2)$ takes 10 seconds to execute on a particular computer when $n=100$; how long would you expect it to take when $n=500$? Can anyone help me answer this? Since it is $O(n^2)$, then $t \leq c n^2$. Therefore, since $t = 10$ and $n = 100$, $c \geq t / n^2 = 10 / 10000$. Therefore, at $n = 500$ you have $t \leq (10 / 10000) \times 500^2 = 250$. But wait, the definition of the $O$-notation claims that the above formula works for large $n$'s only. I assumed that it applies here. Although honestly, I don't like this question. It is a bit weird. Formally, there is absolutely no way to tell. $O(\cdot)$ notation is about the limiting behavior of a function (in this case, the running time of an algorithm) as its argument (in this case, the input size) grows to infinity. Without more information, it is absolutely impossible, even in principle, to predict behavior in the limit from a finite number of function values.
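Taking the question in the spirit presumably intended, pretend the running time is exactly $t = cn^2$, and the arithmetic is:

```python
# Assume t = c * n^2 exactly (a leap of faith: O() alone can't justify this)
t1, n1, n2 = 10.0, 100, 500
t2 = t1 * (n2 / n1) ** 2   # scale by the ratio of n^2 values: 10 * 5^2
print(t2)                  # 250.0 seconds
```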
I'm learning about Measure Theory and need some help with this problem: Let $0 < \alpha < 1$. We construct a set $C_\alpha$ (Cantor type) as follows: In the first step we remove from the interval $[0, 1]$ a "middle" open interval of length $(1 - \alpha)3^{-1}$. In the $n$th step we remove $2^{n-1}$ open intervals of length $(1 - \alpha)3^{-n}$. Find the Lebesgue measure of $C_\alpha$. My work and thoughts: If we remove the set $C_\alpha$ from the closed interval $[0, 1]$ we are left with a union of pairwise disjoint intervals. In more traditional formulaic notation, we can write: $$[0, 1] \setminus C_\alpha = E_1 \cup E_2 \cup E_3 \cup \ldots$$ Therefore, taking the Lebesgue measure on both sides of the previous equality we get: $$\mu \left([0, 1] \setminus C_\alpha \right) = \mu \left( \bigcup_{n=1}^{+\infty} E_n \right) = \sum_{n = 1}^{+\infty} \mu(E_n).$$ I need to find a way to express $\mu(E_n)$ and calculate the above series. If I can do so then the result is immediate since: $$\mu(C_\alpha) = \mu([0, 1]) - \mu([0, 1]\setminus C_\alpha)$$ where $\mu([0, 1]) = 1$.
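One way forward: at step $n$ the total length removed is $2^{n-1}\cdot(1-\alpha)3^{-n}$, and the resulting geometric series sums to $1-\alpha$, suggesting $\mu(C_\alpha)=\alpha$. A numerical sketch of the partial sums:

```python
# Total length removed after N steps: 2^(n-1) intervals of length (1-alpha)/3^n each
def removed_length(alpha, N):
    return sum(2 ** (n - 1) * (1 - alpha) / 3**n for n in range(1, N + 1))

for alpha in [0.25, 0.5, 0.9]:
    approx = 1 - removed_length(alpha, 200)   # partial sum -> mu(C_alpha)
    print(alpha, approx)                      # converges to alpha itself
```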
If I understood the Riemann–Lebesgue lemma well, it says that the Fourier transform $\hat{f}(\xi)$ of a function $f(x)$, $x\in\mathbb{R}$, decays to zero as $\xi\to\infty$. Furthermore, the more continuous derivatives $f$ has, the faster its Fourier transform goes to zero. In particular, a discontinuous function, e.g. $f(x)=\text{Heaviside}(a^2-x^2)$, will go as $|\hat{f}(\xi)|\propto \xi^{-1}$. A continuous function with discontinuous first derivative, $f(x)=e^{-|x|}$, will go as $|\hat{f}(\xi)|\propto \xi^{-2}$. For a $C^\infty(\mathbb{R})$ function the decay is faster than any power law. Now I believe that the Paley–Wiener theorem says that for an analytic function $f$, $|\hat{f}(\xi)|= O( e^{-\alpha \xi})$ for some $\alpha$. (Here I am not sure it is valid on $\mathbb{R}$.) My question is how to distinguish the smoothness/regularity of two analytic functions on $\mathbb{R}$? For example $g(x)=e^{-x^2/(2\sigma^2)}$ and $h(x)=1/(1+(x/\sigma)^2)$. From the Fourier transform we know that $\hat{g}\propto e^{-(\sigma\xi)^2/2}$ and $\hat{h}\propto e^{-\sigma\xi}$. So does this mean that the Gaussian function $g$ is "more analytic" than the Cauchy function $h$? How to understand this very fast decay in terms of the Paley–Wiener theorem or the Riemann–Lebesgue lemma? Are there other means to measure the regularity of functions than the Fourier transform? I have read about the modulus of continuity $w$ (see one example of definition), but so far I have not managed to compute it (analytically or numerically) for $g$ or $h$ (numerically I always find $w(1/\xi)\propto 1/\xi$, which does not reflect the Fourier transform).
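The two decay rates can be seen directly by numerically evaluating the cosine transforms (both functions are even). This is only an illustration with $\sigma=1$; the trapezoid-rule values for the Gaussian bottom out at rounding-error level, so only the contrast between the two transforms is meaningful:

```python
import numpy as np

x = np.linspace(0.0, 60.0, 200001)     # grid on [0, 60]; both functions are even
dx = x[1] - x[0]
g = np.exp(-x**2 / 2)                  # Gaussian, sigma = 1
h = 1.0 / (1.0 + x**2)                 # Cauchy/Lorentzian, sigma = 1

def ft(f, xi):
    """Cosine transform 2 * int_0^inf f(x) cos(xi x) dx via the trapezoid rule."""
    w = f * np.cos(xi * x)
    return 2.0 * dx * (w.sum() - 0.5 * (w[0] + w[-1]))

for xi in [2.0, 5.0, 10.0]:
    print(xi, ft(g, xi), ft(h, xi))    # the Gaussian transform dies off far faster

# Analytic values for comparison: ghat = sqrt(2 pi) exp(-xi^2/2), hhat = pi exp(-|xi|)
```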
Let's say I have a noncyclic multiplicative group $G_m = \{j < m, \gcd(j,m) = 1\}$, with multiplication $\bmod m$. This has order $\varphi(m)$, and suppose I can determine its factorization. Now suppose I have some prime $p_1 \in G_m$ which has order $n_1$ such that $n_1 < \varphi(m)$ (since $G_m$ is noncyclic) and $n_1 \mid \varphi(m)$ (Lagrange's theorem). Given another prime $p_2 \in G_m$, is there a way to tell whether $p_2 \in \left<p_1\right>$, that is, $p_2 = {p_1}^k \pmod m$ for some $k$ --- without trying all possible values of $k$? If $p_2 \notin \left<p_1\right>$, is there a way to determine the order of the multiplicative group $H_{p_1,p_2} = \{{p_1}^{k_1}{p_2}^{k_2} \mod m\}$ without exhaustively trying to enumerate all elements? I know I can determine the order of $p_1$ or $p_2$ by computing $p_1^{\varphi(m)/j}$ for various primes $j \mid \varphi(m)$; if the result is 1 then I should be able to confirm that $K$ = some product of powers of those primes $j$ is the largest possible value such that $p_1^{\varphi(m)/K} =1$, and therefore the order of $p_1$ is $\varphi(m)/K$. Not sure how to determine the order of a group that has a generating set of more than one element, though.
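For the order computation, here is a small-modulus sketch ($m=63$ chosen because $\mathbb{Z}_{63}^*$ is noncyclic). Note that deciding membership $p_2 \in \langle p_1 \rangle$ efficiently is a discrete-logarithm-type problem in general; the subgroup routine below is exhaustive, which is exactly what one would like to avoid for large $m$:

```python
from math import gcd

def factorize(n):
    """Trial-division factorization (fine for the small moduli in this sketch)."""
    f = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def element_order(a, m, phi):
    """Order of a mod m: start from phi and strip prime factors while a^(n/p) == 1."""
    n = phi
    for p in factorize(phi):
        while n % p == 0 and pow(a, n // p, m) == 1:
            n //= p
    return n

def subgroup_order(gens, m):
    """Order of <gens> by closure under multiplication -- exhaustive, small m only."""
    seen = {1}
    frontier = [1]
    while frontier:
        v = frontier.pop()
        for g in gens:
            y = (v * g) % m
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return len(seen)

m = 63                      # Z_63^* is noncyclic, phi(63) = 36
phi = sum(1 for j in range(1, m) if gcd(j, m) == 1)
print(phi, element_order(2, m, phi), subgroup_order([2, 5], m))
```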
Answer See the answer below. Work Step by Step The second quantum number must satisfy $0\leq\ell\leq n-1$: $\begin{smallmatrix} &n &\ell & Valid\ ?\\ \hline\\ 3p&3&1&\surd\\ 4s&4&0&\surd\\ 2f&2&3&\times\\ 1p&1&2&\times \end{smallmatrix}$
I found an unclear step for me in Zorich, Mathematical Analysis I, sec. 5.5, p. 270. We are trying to find all $z \in\mathbb{C}$ for which the series: $$c_0+c_1(z-z_0)+c_2(z-z_0)^2+...$$ converges. To do this, we try to understand when it converges absolutely, applying the Cauchy criterion to the series: $$|c_0|+|c_1(z-z_0)|+|c_2(z-z_0)^2|+...$$ obtaining that it converges if: $$|z-z_0|< \frac{1}{\overline{\lim}_{n\to\infty}\sqrt[n]{|c_n|}} $$ and all is ok. Now, Zorich says (referring to the series with absolute values): the general term does not tend to zero if $|z-z_0| \geq \frac{1}{\overline{\lim}_{n\to\infty}\sqrt[n]{|c_n|}} $. I can't completely understand this sentence, in particular the reason why he also includes the case $|z-z_0| =\frac{1}{\overline{\lim}_{n\to\infty}\sqrt[n]{|c_n|}} $. In this case I would have: $$|c_0|+\frac{|c_1|}{\left (\overline{\lim}_{n\to\infty}\sqrt[n]{|c_n|} \right )} +\frac{|c_2|}{\left( \overline{\lim}_{n\to\infty}\sqrt[n]{|c_n|}\right )^2} +...$$ and so I would verify that: $$\lim_{k\to\infty}\frac{|c_k|}{\left( \overline{\lim}_{n\to\infty}\sqrt[n]{|c_n|}\right )^k}\neq 0$$ but I can't. How does Zorich manage to say that for all $z\in\mathbb{C}$, such that $|z-z_0| =\frac{1}{\overline{\lim}_{n\to\infty}\sqrt[n]{|c_n|}} $, the series with absolute values diverges? Thanks.
Revision as of 18:13, 12 February 2018

ACMS Abstracts: Spring 2018

Thomas Fai (Harvard)

The Lubricated Immersed Boundary Method

Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid.
The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics. Michael Herty (RWTH-Aachen) Opinion Formation Models and Mean field Games Techniques Mean-Field Games are games with a continuum of players that incorporate the time dimension through a control-theoretic approach. Recently, simpler approaches relying on reply strategies have been proposed. Based on an example in opinion formation modeling we explore the link between differentiability notions and mean-field game approaches. For numerical purposes a model predictive control framework is introduced consistent with the mean-field game setting that allows for efficient simulation. Numerical examples are also presented as well as stability results on the derived control. 
Lee Panetta (Texas A&M) Traveling waves and pulsed energy emissions seen in numerical simulations of electromagnetic wave scattering by ice crystals The numerical simulation of single particle scattering of electromagnetic energy plays a fundamental role in remote sensing studies of the atmosphere and oceans, and in efforts to model aerosol "radiative forcing" processes in a wide variety of models of atmospheric and climate dynamics. I will briefly explain the main challenges in the numerical simulation of single particle scattering and describe how work with 3-d simulations of scattering of an incident Gaussian pulse, using a Pseudo-Spectral Time Domain method to numerically solve Maxwell’s Equations, led to an investigation of episodic bursts of energy that were observed at various points in the near field during the decay phase of the simulations. The main focus of the talk will be on simulations in dimensions 1 and 2, simple geometries, and a single refractive index (ice at 550 nanometers). The periodic emission of pulses is easy to understand and predict on the basis of Snell’s laws in the 1-d case considered. In much more interesting 2-d cases, simulations show traveling waves within the crystal that give rise to pulsed emissions of energy when they interact with each other or when they enter regions of high surface curvature. The time-dependent simulations give a more dynamical view of "photonic nanojets" reported earlier in steady-state simulations in other contexts, and of energy release in "morphology-dependent resonances." Francois Monard (UC Santa Cruz) Inverse problems in integral geometry and Boltzmann transport The Boltzmann transport (or radiative transfer) equation describes the transport of photons interacting with a medium via attenuation and scattering effects.
Such an equation serves as the model for many imaging modalities (e.g., SPECT, Optical Tomography) where one aims at reconstructing the optical parameters (absorption/scattering) or a source term, out of measurements of intensities radiated outside the domain of interest. In this talk, we will review recent progress on the inversion of some of the inverse problems mentioned above. In particular, we will discuss an interesting connection between the inverse source problem (where the optical parameters are assumed to be known) and a problem from integral geometry, namely the tensor tomography problem (or how to reconstruct a tensor field from knowledge of its integrals along geodesic curves). Haizhao Yang (National University of Singapore) A Unified Framework for Oscillatory Integral Transform: When to use NUFFT or Butterfly Factorization? This talk introduces fast algorithms of the matvec $g=Kf$ for $K\in \mathbb{C}^{N\times N}$, which is the discretization of the oscillatory integral transform $g(x) = \int K(x,\xi) f(\xi)d\xi$ with a kernel function $K(x,\xi)=\alpha(x,\xi)e^{2\pi i\Phi(x,\xi)}$, where $\alpha(x,\xi)$ is a smooth amplitude function, and $\Phi(x,\xi)$ is a piecewise smooth phase function with $O(1)$ discontinuous points in $x$ and $\xi$. A unified framework is proposed to compute $Kf$ with $O(N\log N)$ time and memory complexity via the non-uniform fast Fourier transform (NUFFT) or the butterfly factorization (BF), together with an $O(N)$ fast algorithm to determine whether NUFFT or BF is more suitable. This framework works for two cases: 1) explicit formulas for the amplitude and phase functions are known; 2) only indirect access to the amplitude and phase functions is available.
Especially in the case of indirect access, our main contributions are: 1) an $O(N\log N)$ algorithm for recovering the amplitude and phase functions is proposed based on a new low-rank matrix recovery algorithm; 2) a new stable and nearly optimal BF with amplitude and phase functions in the form of a low-rank factorization (IBF-MAT) is proposed to evaluate the matvec $Kf$. Numerical results are provided to demonstrate the effectiveness of the proposed framework. Eric Keaveny (Imperial College London) Linking the micro- and macro-scales in populations of swimming cells Swimming cells and microorganisms are as diverse in their collective dynamics as they are in their individual shapes and swimming mechanisms. They are able to propel themselves through simple viscous fluids, as well as through more complex environments where they must interact with other microscopic structures. In this talk, I will describe recent simulations that explore the connection between dynamics at the scale of the cell with that of the population in the case where the cells are sperm. In particular, I will discuss how the motion of the sperm’s flagella can greatly impact the overall dynamics of their suspensions. Additionally, I will discuss how in complex environments, the density and stiffness of structures with which the cells interact impact the effective diffusion of the population. Molei Tao (Georgia Tech) Explicit high-order symplectic integration of nonseparable Hamiltonians: algorithms and long time performance Symplectic integrators preserve the phase-space volume and have favorable performances in long time simulations. Methods for an explicit symplectic integration have been extensively studied for separable Hamiltonians (i.e., H(q,p)=K(p)+V(q)), and they lead to both accurate and efficient simulations. However, nonseparable Hamiltonians also model important problems, such as non-Newtonian mechanics and nearly integrable systems in action-angle coordinates.
Unfortunately, implicit methods had been the only available symplectic approach for general nonseparable systems. This talk will describe a recent result that constructs explicit and arbitrary high-order symplectic integrators for arbitrary Hamiltonians. Based on a mechanical restraint that binds two copies of phase space together, these integrators have good long time performance. More precisely, based on backward error analysis, KAM theory, and some additional multiscale analysis, a pleasant error bound is established for integrable systems. This bound is then demonstrated on a conceptual example and the Schwarzschild geodesics problem. For nonintegrable systems, some numerical experiments with the nonlinear Schrodinger equation will be discussed. Boualem Khouider (UVic) Title TBA Abstract TBA
Suppose that there are natural numbers $a$ and $b$. Now we compute $c = a^2 + b^2$. This time, $c$ is even. Will this $c$ have only one possible pair of $a$ and $b$? edit: what happens if $c$ is an odd number? Not necessarily. For example, note that $50=1^2+7^2=5^2+5^2$, and $130=3^2+11^2=7^2+9^2$. For an even number with more than two representations, try $650$. We can produce odd numbers with several representations as a sum of two squares by taking a product of several primes of the form $4k+1$. To get even numbers with multiple representations, take an odd number that has multiple representations, and multiply by a power of $2$. To help you produce your own examples, the following identity, often called the Brahmagupta Identity, is quite useful:$$(a^2+b^2)(x^2+y^2)=(ax\pm by)^2 +(ay\mp bx)^2.$$ If $a,b$ are both even or both odd, $c$ is even; $c$ will be odd iff $a$ and $b$ have opposite parity. Let $a=2A$ and $b=2B+1$; then $c\equiv 1 \pmod 4$. So if $c\equiv 3\equiv -1\pmod 4$, there will be no solution. Another approach: let $(a,b)=d$; then $d^2\mid c$, so write $c=d^2C$ with $C=A^2+B^2$, where $\frac{a}{A}=\frac{b}{B}=d$ and clearly $(A,B)=1$. Then $A$ and $B$ are not both even. The case of $A,B$ having opposite parity has been dealt with above. If $A$ and $B$ are both odd, let $A=2m+1$, $B=2n+1$. Then $A^2+B^2=(2m+1)^2+(2n+1)^2=2(p^2+q^2)$ (say) $=(p+q)^2+(p-q)^2$, with $p=m+n+1$, $q=m-n$, and $p+q=2m+1$ is odd, so $p$ and $q$ are of opposite parity. So the problem boils down to finding $A,B$ such that $(A,B)=1$ and $A,B$ are of opposite parity. Clearly, $C\equiv 1\pmod 4$ is a necessary condition for solubility.
The Brahmagupta identity can be used to prove that if $C$ is a product of $n$ distinct primes of the form $4r+1$, it can be represented as a sum of two squares in $2^{n-1}$ ways.
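The examples above can be checked by brute force. Here is a small Python sketch (my own illustration, not part of the original post) that enumerates all representations $c = a^2 + b^2$ with $1 \le a \le b$:

```python
import math

# Brute-force enumeration of representations c = a^2 + b^2 with
# 1 <= a <= b, checking the examples from the answer above.
def two_square_reps(c):
    reps = []
    for a in range(1, math.isqrt(c // 2) + 1):  # a^2 <= c/2 forces a <= b
        b2 = c - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            reps.append((a, b))
    return reps

print(two_square_reps(50))    # [(1, 7), (5, 5)]
print(two_square_reps(130))   # [(3, 11), (7, 9)]
print(two_square_reps(650))   # three representations
```

Running this confirms that $650 = 5^2+25^2 = 11^2+23^2 = 17^2+19^2$ has more than two representations.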
There are a thousand apps for organising your life: calendars, todo lists, note trackers. But the big kahuna, the true Swiss army knife, is org-mode. Out of the box, org-mode understands LaTeX and code snippets, todo lists and bookmarks, projects and agendas. Best of all, it comes with a powerful text editor, Emacs! This week, Ben gave us a live demo of org-mode in Emacs. You can get the original org-mode file from here.

What is Org mode?
- Text-based way to organise notes, links, code, and more
- Time management tool: TODO lists, tasks, projects
- Table editor, spreadsheet
- Interactive code notebook
- Can be exported to LaTeX, markdown, HTML, ...
- All in Emacs!

The Org file format
A simple text format like Markdown. Anything can be edited, fixed easily by hand if needed. You can start simple, and discover extra features. [Try Alt right/left, Shift right/left, TAB to collapse/expand]

* Some section
** A subsection
*** Subsection
**** Subsubsection
- list
- another item
  - sub-items
** Another subsection

Links
Links to files, URLs, shell commands, elisp, DOIs, ... <10.5281/zenodo.2530733>

LaTeX
Standard LaTeX formulae are recognised and rendered, inline like this \(e^{i\pi} = -1\) or displayed:
\[ e^{i\theta} = \cos\left(\theta\right) + i\sin\left(\theta\right) \]
Toggle equation view: C-c C-x C-l

Tables
Start creating a table with "| headings |" then TAB. Alt + arrow keys move columns and rows. Functions can also manipulate cells:

| b | a       | c |
|   |         |   |
|   | kjdhfsf |   |

Source code, notebooks
Type "<s TAB" (other shortcuts for examples, quotes, ...). Supports many languages; C-c C-c runs the code block:

#+BEGIN_SRC python :results output
print("hello")
#+END_SRC

#+RESULTS:
: hello

Tables and code blocks
Tables can be used as input and output to code blocks. This provides a way to pass data between languages:

#+NAME: cxx-generate
#+BEGIN_SRC C++ :includes <iostream>
for(int i = 0; i < 5; i++) {
  std::cout << i << ", " << i*i*i - 2*i << "\n";
}
#+END_SRC

#+RESULTS: cxx-generate
| 0 |  0 |
| 1 | -1 |
| 2 |  4 |
| 3 | 21 |
| 4 | 56 |

#+BEGIN_SRC python :var data=cxx-generate
import matplotlib.pyplot as plt
import numpy as np
d = np.array(data)
plt.plot(d[:, 0], d[:, 1])
plt.show()
#+END_SRC

#+RESULTS:
: None

Task management
Creating tasks in org-mode:
- Add "TODO" to the start of a (sub)section (or S-right)
- C-c C-d to choose a deadline from calendar
- C-c a to see agenda views

** Project1
*** TODO thing1
DEADLINE: <2019-02-22 Fri>
*** TODO thing2
DEADLINE: <2019-02-20 Wed>
** Project2
*** TODO do something
DEADLINE: <2019-02-27 Wed>
*** TODO send email

More task management
Once tasks are done they can be marked "DONE". Other states can be customised: NEXT, WAITING, CANCELLED, ... S-right cycles between states; type C-c C-t or just write it yourself.

** DONE that thing
CLOSED: [2019-02-18 Mon 10:28]
- State "DONE" from "WAITING" [2019-02-18 Mon 10:28]
- State "WAITING" from "DONE" [2019-02-18 Mon 10:27] \\ waiting for X

This can be customised to fit your preferred way of working: Getting Things Done (GTD).

Time management
How much time do you spend on each task?
- C-c C-x C-i: clock in
- C-c C-x C-o: clock out
- C-c a c: agenda clock view
- C-c C-x C-r: insert / update clock table

Presentations!
This presentation is Org mode with org-show. Files can be exported to many other formats with C-c C-e; e.g. LaTeX -> PDF is C-c C-e l p. Can be used as an alternative to writing raw LaTeX.
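Because the org file format is plain text, task states are easy to process outside Emacs too. A tiny Python sketch (my own illustration, not from the demo) that tallies task-state keywords on org headlines:

```python
import re

# Minimal sketch (not from the demo): tally org-mode task states by
# matching headline keywords such as "*** TODO ..." / "** DONE ...".
ORG = """\
** Project1
*** TODO thing1
*** TODO thing2
** Project2
*** DONE that thing
"""

def task_counts(text):
    # A headline is one or more '*' followed by a space; the state keyword
    # (if any) comes right after it.
    states = re.findall(r"^\*+ (TODO|DONE|NEXT|WAITING|CANCELLED)\b",
                        text, flags=re.MULTILINE)
    counts = {}
    for s in states:
        counts[s] = counts.get(s, 0) + 1
    return counts

print(task_counts(ORG))  # {'TODO': 2, 'DONE': 1}
```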
For disjoint sets $(A_n)_{n \in \mathbb{N}}$, $P(\cup_n A_n) = \sum_n P(A_n)$. Is that really "disjoint" rather than "pairwise disjoint"? If we have events $A, B, C$ such that $A \cap B = \emptyset$, $A \cap C = \emptyset$, $B \cap C \neq \emptyset$ and $P(B \cap C) > 0$, then $A$, $B$ and $C$ are disjoint but not pairwise disjoint... I think? (*) I don't think it follows that $P(A \cup B \cup C) = P(A) + P(B) + P(C)$; rather, $P(A \cup B \cup C) = P(A) + P(B) + P(C \setminus B)$? (*) From what I remember in advanced probability class: the $\{A_n\}_n$ are disjoint if $\cap_n A_n = \emptyset$; the $\{A_n\}_n$ are pairwise disjoint if $A_i \cap A_j = \emptyset$ for distinct indices $i,j$. From Larsen and Marx (the book used in my elementary probability class): I find this strange. If "disjoint" and "pairwise disjoint" are equivalent (i.e. disjoint does not mean what I said above), why even say that $A_i \cap A_j = \emptyset$ for distinct indices $i,j$? Why not just say disjoint? On the other hand, disjointness is used to justify the $P(\cup_n A_n) = \sum_n P(A_n)$ statements later on. Seems kind of inconsistent.
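The distinction can be made concrete on a small uniform probability space. In this toy example (my own, not from the question), $A \cap B \cap C = \emptyset$ but $B \cap C \neq \emptyset$, so the events are "disjoint" in the weak sense yet not pairwise disjoint, and naive additivity overcounts:

```python
from fractions import Fraction

# Uniform probability on omega = {1,...,6}; A = {1}, B = {2,3}, C = {3,4}.
omega = set(range(1, 7))

def P(E):
    return Fraction(len(E), len(omega))

A, B, C = {1}, {2, 3}, {3, 4}
assert A & B & C == set()   # "disjoint" in the weak (total-intersection) sense
assert B & C == {3}         # ...but not pairwise disjoint

print(P(A | B | C))            # 2/3
print(P(A) + P(B) + P(C))      # 5/6 -- overcounts B ∩ C
print(P(A) + P(B) + P(C - B))  # 2/3, the formula suggested in (*)
```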
I have a question referring to the first isomorphism theorem. The theorem states that $$\dot G \cong \frac{G}{\ker(\varphi)},$$ i.e. the co-domain $\dot G$ is isomorphic to the quotient of $G$ by $\ker(\varphi)$. As I understand it, $\frac{G}{\ker(\varphi)}$ contains all equivalence classes (cosets) generated by the subgroup $\ker(\varphi)$; denote them $\overline e$ (which is $\ker(\varphi)$ itself), $\overline a, \overline b,\dots,\overline n$. Since $\ker(\varphi)$ must be a normal subgroup, every coset contains exactly the same number of elements as $\ker(\varphi)$. Also we know that $(\ast)\ \ker(\varphi)=\{g\in G \mid \varphi(g)=e_{\dot G}\}$; in other words, every element of $\ker(\varphi)$ is mapped to the identity element of the co-domain $\dot G$. Now my confusion comes from this: the theorem says that all the elements in the same coset are mapped to the same element of $\dot G$. The proof is that if we take an element of the coset $\overline a$, it has the form $$a \circ n, \quad\text{where } n\in \ker(\varphi),\ a\in \overline a.$$ Using the homomorphism property we get $$\varphi(a \circ n) =\varphi(a) \circ\varphi(n),$$ which sends the result of the composition in the domain to the result of the composition in the co-domain, if I am understanding it right. And $\varphi(n)$ is obviously the identity element, because $(\ast)$ states that every element of the kernel is mapped to the identity, and we took $n\in \ker(\varphi)$. So $$\varphi(a \circ n)= \varphi(a) \circ e_{\dot G}=\varphi(a).$$ This proves that these two elements are mapped to the same element in the co-domain. But how does this prove that all elements of one coset end up at just one element of the co-domain?
Because we prove this by taking two elements, one from the subgroup and the other from the coset, which means these two elements are not both from the coset $\overline a$ itself. Or am I missing something? Thank you for any help in advance.

Note that an element of the group $\dfrac{G}{\ker\varphi}$ has the form $[a]=\{a\cdot n\mid n\in\ker\varphi\}$ for some $a\in G$. Now every element of $[a]$ gets mapped to the same element, namely $\varphi(a)$. So $\varphi$ restricted to $[a]$ is constant. Hence you can define a map $\tilde{\varphi}:\dfrac{G}{\ker\varphi}\to H$ by $\tilde{\varphi}([a])=\varphi(a)$, which is well defined.
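The answer's point ($\varphi$ is constant on each coset $[a]$) can be verified exhaustively on a small example. Here is a sketch with the hypothetical homomorphism $\varphi:\mathbb{Z}_{12}\to\mathbb{Z}_4$, $\varphi(x)=x \bmod 4$ (my own choice of example, not from the post):

```python
# Sanity check: phi: Z12 -> Z4, phi(x) = x mod 4, is a homomorphism
# (since 4 divides 12) with kernel {0, 4, 8}. Every element of a coset
# a + ker(phi) is mapped to the same value phi(a).
n, m = 12, 4

def phi(x):
    return x % m

kernel = {x for x in range(n) if phi(x) == 0}
assert kernel == {0, 4, 8}

for a in range(n):
    coset = {(a + k) % n for k in kernel}          # the coset [a]
    assert {phi(x) for x in coset} == {phi(a)}     # phi constant on [a]

print("phi is constant on each of the", n // len(kernel), "cosets")
```

The loop takes a generic element $a + k$ of the coset, exactly as in the proof, and checks that its image never depends on $k$; that is why the argument covers all pairs of elements of the same coset, not just one pair.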
Kaj Hansen As an undergraduate, I studied mathematics at the University of Georgia. In early 2014, I put together a short series of expository videos on Ramsey theory that can be viewed here (produced and published by my good friend Eddie Beck). I'm active primarily in the point-set topology and various abstract algebra tags. Here's a handful of my less run-of-the-mill contributions to this site: Finding primitive elements for finite extensions of $\mathbb{Q}$: a Galois-theoretic technique Constructing connected spaces with arbitrarily many path components On symmetric polynomials Square-and-multiply: an algorithm for computationally efficient exponentiation in a given semigroup On the infinite dihedral group $D_{\infty}$ Finding Galois groups $\cong S_n$ via generating sets: an example If $\text{Gal}(p) \cong G_1$ and $\text{Gal}(q) \cong G_2$, when is $\text{Gal}(pq) \cong G_1 \times G_2$? Visualizing ring homomorphisms The intersection of two compact sets need not be compact A continuous function $f:[a,b] \to \mathbb{R}$ is Riemann integrable Quotient spaces are ill-behaved with respect to separation axioms The box topology on infinite products: problems with continuity The Galois group of an irreducible, rational, cubic polynomial is determined by its discriminant A one-point connectification of any topological space A simple application of the Banach fixed-point theorem Proving that closed & open subsets of locally compact Hausdorff spaces are locally compact, without too much machinery Friendly logarithms: Example 1, Example 2, Example 3 Outside of math, I am interested in existentialism, Christianity, and exploring the nature of consciousness. I also hold music in the highest regard. I (at least try to) listen to a wide variety of genres, from minimalist / ambient to folk to psychedelic rock, with a particular soft spot for "extreme" metal, especially death and black.
If you want to connect with me elsewhere, I play chess here under the username Kaj_Hansen and Starcraft II (main-racing as Terran) under the BattleTag MementoMori#11653. Feel free to add me. Athens, GA
When I asked this question, the teacher thought I was questioning the concept of integrals, whereas I do nearly fully understand integrals; I just don't see how they are used to find the area under a curve.

Not sure what you are asking. There could be two interpretations of this question: (1) How to use integration to find the area under a curve? or (2) Why does integration give the area under a curve? It looks like you are asking about (1). The area under a curve is the definite integral of the curve. This sounds vague, so let's go with a few examples.

Let's find the area of the rectangle $ABCD$ in two ways. You know that the area of the rectangle is $hw$. Let's use integration to find the area between the $x$-axis and the line $y=h$, from $x=0$ to $x=w$. Note that $y=h$ represents the curve $DC$. The area between this curve and the $x$-axis, bounded by $x=0$ and $x=w$, is the same as the area of the rectangle $ABCD$. Now, this area can be written as $\int_0^w y\,dx = \int_0^w h\,dx$. Why is this the case? We are computing the area under the curve $y=h$ from $x=0$ to $x=w$; this is the definite integral with those end-points. And $\int_0^w h\,dx = hw - h\cdot 0 = hw$, the area of the rectangle, as expected.

Let's look at a triangle now. The area of the triangle $ABC$ is $\frac{1}{2}bh$. Let's see if integration gives us the same. We first need to find the equation of the curve $AC$ and then compute the definite integral of the curve from $0$ to $b$. The equation of $AC$ is $y=\frac{h}{b}x$: the slope is $h/b$ and the $y$-intercept is $0$. The area of $ABC$ is $$\int_0^b y\,dx=\int_0^b\frac{h}{b}x\,dx=\frac{h}{b}\int_0^b x\,dx=\frac{h}{b}\cdot\frac{1}{2}\left(b^2-0^2\right)=\frac{1}{2}\cdot\frac{h}{b}\cdot b^2=\frac{1}{2}bh.$$

Essentially, the definite integral of a curve from $a$ to $b$ represents the area between the curve and the $x$-axis from $x=a$ to $x=b$. This applies to any curve, even three-dimensional surfaces. Try it with a trapezium and see if it works. If your question was "why does integration lead to the area under the curve?", please ask another question.
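The triangle computation can also be sanity-checked numerically, since the definite integral is the limit of Riemann sums. A short sketch (my own illustration, with hypothetical values $b=3$, $h=2$):

```python
# Approximate the integral of y = (h/b) x over [0, b] by a midpoint
# Riemann sum and compare with the triangle area (1/2) b h.
def riemann(f, a, b, n=100_000):
    dx = (b - a) / n
    # Sample f at the midpoint of each of the n subintervals.
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

b, h = 3.0, 2.0
area = riemann(lambda x: (h / b) * x, 0.0, b)
print(area)  # approximately 3.0, which equals (1/2) * b * h
```

For a linear integrand the midpoint rule is exact up to floating-point error, so the sum agrees with $\frac{1}{2}bh$ essentially to machine precision.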
J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity $|y| < 0.8$ ...

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}$, $K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, $|y| \le 0.8$, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}$, $K^{\pm}$, $K^0_S$, $p(\bar{p})$ and $\Lambda(\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...

Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...

Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...

Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...

Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN = 5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Probability Seminar Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements, please send an email to join-probsem@lists.wisc.edu.

January 31, Oanh Nguyen, Princeton
Title: Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title: When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

February 14, Timo Seppäläinen, UW-Madison
Title: Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

February 21, Diane Holcomb, KTH
Title: On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process.
In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process: a random walk in a balanced random environment in the integer lattice $\mathbb{Z}^d$. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Title: Functional Limit Laws for Recurrent Excited Random Walks
Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that determines many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model, the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison
Title: Harmonic Analysis on $GL_n$ over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group $G$ in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$\text{trace}(\rho(g))/\dim(\rho),$$ for an irreducible representation $\rho$ of $G$ and an element $g$ of $G$. For example, Diaconis and Shahshahani stated a formula of this type for analyzing $G$-biinvariant random walks on $G$. It turns out that, for classical groups $G$ over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

April 4, Philip Matchett Wood, UW-Madison
Title: Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers $m$, we consider the product of $m$ independent $n \times n$ random matrices with iid entries in the limit as $n$ tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the $m$-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded-rank deterministic error. However, the bounded-rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations.
Joint work with Natalie Coston and Sean O'Rourke.

April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.

April 18, Andrea Agazzi, Duke
Title: Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence, these results also allow for the estimation of transition times between metastable states of this class of processes.

April 25, Kavita Ramanan, Brown
Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph, in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles.
When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.

Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title: Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer's theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.

Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
System of generalized mixed nonlinear ordered variational inclusions
Salahuddin, Department of Mathematics, Jazan University, Jazan, 45142, KSA

In this paper, we consider a system of generalized mixed nonlinear ordered variational inclusions in partially ordered Banach spaces and suggest an algorithm for a solution of the considered system. We prove an existence and convergence result for the solution of the system of generalized mixed nonlinear ordered variational inclusions.

Keywords: Algorithm, convergence, sequences, resolvent operators, solution, system, partially ordered Banach spaces.

Mathematics Subject Classification: Primary: 49J40, 47H09; Secondary: 47J20.

Citation: Salahuddin. System of generalized mixed nonlinear ordered variational inclusions. Numerical Algebra, Control & Optimization, doi: 10.3934/naco.2019026
A subset $A$ of a topological space $(X,\tau)$ is said to be dense if $\overline A=X$. Prove that if $A\cap O\neq\varnothing$ for each nonempty open set $O$, then $A$ is dense in $X$. Could you please check whether the following is right? Thanks! A characterization of $\overline A$ is $$\overline A=\bigcap_{\alpha\in I} F_\alpha$$ where $\{F_\alpha\}_{\alpha\in I}$ is the family of all closed sets containing $A$. (Here $C(Y)$ denotes the complement of a set $Y$.) Then $$C(\overline A)=C\left(\bigcap_{\alpha\in I} F_\alpha\right)=\bigcup_{\alpha\in I}C(F_\alpha),$$ where each $O_\alpha:=C(F_\alpha)$ is open. Since $A\subset F_\alpha$ for all $\alpha\in I$, we have $O_\alpha\subset C(A)$, i.e. $O_\alpha\cap A=\varnothing$. If some $O_\alpha$ were nonempty, the hypothesis would give $O_\alpha\cap A\neq\varnothing$, a contradiction; hence $O_\alpha=\varnothing$ for each $\alpha$, and thus $$C(\overline A)=\bigcup_{\alpha\in I}\varnothing =\varnothing.$$ This leads to $\overline A=C(\varnothing)=X$.
It should be little surprise that this can be attacked via contour integration. As always, the challenge is to find the contour and the function to integrate over that contour. At the risk of presenting a deus ex machina, I am simply going to present these and demonstrate how they provide what we need. Consider the following integral: $$\oint_C dz \;f(z) $$ where $$f(z) = \frac{e^{i \pi z^2}}{\sinh{\left (\sqrt{3} \pi z\right )} \left [ 2 \cosh{\left ( \frac{2 \pi}{\sqrt{3}} z\right )}-1\right ]}$$ and $C$ is the rectangle with vertices in the complex plane $\pm R\pm i \sqrt{3}/2$. This contour integral is thus equal to $$\int_{-R}^R dx \; f\left (x-i \frac{\sqrt{3}}{2}\right ) + i \int_{-\sqrt{3}/2}^{\sqrt{3}/2} dy \, f(R+i y) \\ + \int_R^{-R} dx\; f\left (x+i \frac{\sqrt{3}}{2}\right ) +i \int_{\sqrt{3}/2}^{-\sqrt{3}/2} dy \, f(-R+i y)$$ The second and fourth integrals vanish as $R\to\infty$, decaying like $\pi e^{-\pi R}$. In this limit, then, the contour integral is equal to $$\int_{-\infty}^{\infty} dx \; \left [ f\left (x-i \frac{\sqrt{3}}{2}\right ) - f\left (x+i \frac{\sqrt{3}}{2}\right )\right ] = i 2 e^{-i 3 \pi/4}\int_{-\infty}^{\infty} dx \frac{e^{i \pi x^2}}{2 \cosh{\left ( \frac{2 \pi}{\sqrt{3}} x\right )}+1}$$ I chose not to clutter up this space with the algebra involved in producing this last equation; the reader, however, should verify that it is indeed correct. By the residue theorem, the contour integral is also equal to $i 2 \pi$ times the sum of the residues of the poles of $f$ inside $C$. I leave it to the reader to verify that these poles are at $z=0$, $z=\pm i \sqrt{3}/3$, and $z=\pm i\sqrt{3}/6$.
The residues at these poles are straightforward to compute because the poles are simple: $$\operatorname*{Res}_{z=0} f(z) = \frac1{\sqrt{3} \pi}$$$$\operatorname*{Res}_{z=\pm i \sqrt{3}/3} f(z) = \frac{e^{-i \pi/3}}{2 \sqrt{3} \pi}$$$$\operatorname*{Res}_{z=\pm i \sqrt{3}/6} f(z) = -\frac{e^{-i \pi/12}}{2 \pi}$$ So the residue theorem states, equivalently, that $$\begin{align}\int_{-\infty}^{\infty} dx \frac{e^{i \pi x^2}}{2 \cosh{\left ( \frac{2 \pi}{\sqrt{3}} x\right )}+1} &= \frac{e^{i 3 \pi/4}}{\sqrt{3}} + \frac{e^{i 5 \pi/12}}{\sqrt{3}} - e^{i 2 \pi/3}\\ &= \frac1{\sqrt{6}} (-1+i) + \frac1{2 \sqrt{6}} \left [ (\sqrt{3}-1) + i (\sqrt{3}+1)\right ] + \frac12 (1-i \sqrt{3})\end{align}$$ The integral of interest here is equal to $1/2$ the real part of the above, which is $$\int_0^{\infty} dx \frac{\cos{\pi x^2}}{2 \cosh{\left ( \frac{2 \pi}{\sqrt{3}} x\right )}+1} = \frac{2+\sqrt{2}-\sqrt{6}}{8} $$ which was to be shown. ADDENDUM We may as well reap the reward of the imaginary part we get for free: $$\int_0^{\infty} dx \frac{\sin{\pi x^2}}{2 \cosh{\left ( \frac{2 \pi}{\sqrt{3}} x\right )}+1} = \frac{\sqrt{2}+\sqrt{6}-2 \sqrt{3}}{8} $$
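For the skeptical reader, both closed forms can be checked numerically. The sketch below (assuming scipy is available) truncates the integrals at $x=8$, where the $\cosh$ denominator makes the integrand utterly negligible:

```python
# Numerical check of the two closed forms, using adaptive quadrature.
import numpy as np
from scipy.integrate import quad

def denom(x):
    return 2.0 * np.cosh(2.0 * np.pi * x / np.sqrt(3.0)) + 1.0

# Integrals over [0, 8]: beyond x = 8 the integrand is below 1e-12.
re_val, _ = quad(lambda x: np.cos(np.pi * x**2) / denom(x), 0.0, 8.0, limit=200)
im_val, _ = quad(lambda x: np.sin(np.pi * x**2) / denom(x), 0.0, 8.0, limit=200)

re_closed = (2 + np.sqrt(2) - np.sqrt(6)) / 8           # cosine integral
im_closed = (np.sqrt(2) + np.sqrt(6) - 2 * np.sqrt(3)) / 8  # sine integral
```

Both quadratures agree with the closed forms to well beyond six decimal places.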
Can biological enzymes catalyze thermodynamically unfavorable reactions? I read that an enzyme lowers the activation energy of a reaction by offering an alternative reaction pathway with a lower activation energy, while the ΔG of the reaction is unchanged. If enzymes can do this, then how exactly does it work? Enzymes can catalyze a thermodynamically unfavorable reaction by coupling it with a thermodynamically favorable one. Most often, enzymes use the ATP hydrolysis reaction (energetically favorable) as a source of energy (in simple terms) to drive the unfavorable reaction forward. One important point to keep in mind here is that enzymes don't drive a reaction equilibrium forward or backward; they just help in reaching the equilibrium faster. Note also that the enzyme itself does not change the thermodynamics of a reaction (as you say, the ΔG of the reaction in question remains the same, except at equilibrium); coupling a favorable reaction with an unfavorable one only helps in making the overall reaction favorable. This phenomenon, of one reaction changing the rate of another reaction, is called induced catalysis and has nothing to do with the enzyme itself. I will explain how enzymes do this using examples, while the other answer addresses the semantics. An example which will help you understand this concept is ATP synthase. It is an enzyme found in the inner mitochondrial membrane which allows the movement of H$^+$ from the intermembrane space toward the mitochondrial matrix, using the energy of this movement (which is favorable and exergonic) to generate ATP.
The reaction would look like this: $$\ce{ADP + P_i + 3 H^+_{IMS} \rightarrow ATP + 3 H^+_{matrix}}$$ (remember that exactly 3 H$^+$ are not required; this is a simplified version). But ATP synthase does not drive the reaction forward; it just helps in reaching the following equilibrium faster: $$\ce{H^+_{IMS} \leftrightharpoons H^+_{matrix}}$$ i.e. equalizing the amount of H$^+$ in the intermembrane space and the matrix. This is why a constant gradient is required to keep it in working condition. For this, the electron transport chain continuously pumps H$^+$ into the intermembrane space to maintain an electrochemical gradient. Along with this, the enzymes ATP-ADP translocase and the phosphate carrier continuously take ATP out of the matrix and bring in ADP and P$_i$. However, this lowers the electrochemical gradient (the translocase exchanges matrix ATP$^{4-}$ for ADP$^{3-}$ from the intermembrane space, while the phosphate carrier catalyzes an electroneutral reaction), making the final reaction: $$\ce{ADP + P_i + 4 H^+_{IMS} \leftrightharpoons ATP + 4 H^+_{matrix}}$$ However, since the enzyme just helps in maintaining equilibrium, it can also work backwards, i.e. it can transport H$^+$ against the electrochemical gradient (unfavorable) by coupling this with ATP hydrolysis (favorable) to reach equilibrium. The enzyme is then called ATPase and the reaction becomes: $$\ce{ATP + 4 H^+_{matrix} \rightarrow ADP + P_i + 4 H^+_{IMS}}$$ There are many more such enzymes which couple ATP hydrolysis (or other favorable reactions) to catalyze reactions which are either unfavorable or very slow (i.e. require a source of activation energy) under normal cellular conditions. Another factor, as the other answer explains, which helps in this is changing the concentration of products.
As seen above, ATP-ADP translocase and the phosphate carrier continuously decrease the concentration of the product (ATP) and increase the concentration of the reactants (ADP and P$_i$), so that the overall equilibrium shifts forward and the enzyme keeps driving the reaction forward (this, again, has nothing to do with the enzyme itself; it is Le Chatelier's Principle at work). Again, the other answer focuses on this issue. Can enzymes catalyze thermodynamically unfavourable reactions? Enzymes don't change the equilibrium of a reaction, but the fact that an equilibrium exists means that the reaction proceeds in both the forward and reverse directions. Before equilibrium is attained, ΔG for the reaction is not 0. Thus, by definition, one direction is thermodynamically favourable (ΔG<0) while the other is thermodynamically unfavourable (ΔG>0). Since an enzyme lowers the activation energy of a reaction, it is catalysing both the forward and reverse reactions. This is a semantic argument, since the reaction will still proceed in the direction which is thermodynamically favourable. How can enzymes catalyze thermodynamically unfavourable reactions? Other than in the sense I described above, enzymes do not catalyze thermodynamically unfavourable reactions. However, biological systems can make unfavourable reactions favourable. In addition to the reaction coupling discussed in the other answer (sometimes referred to as pushing), thermodynamically unfavourable reactions can be made favourable by pulling. In this case, for reactions where ΔG°>0 (i.e. the reaction under standard conditions is unfavourable), the reaction can be made favourable by decreasing the concentration of products relative to reactants, such as by immediately using the products in a subsequent, thermodynamically favourable reaction.
Mathematically, the relationship between the free energy change of the reaction under standard conditions and the actual reaction is given by: $$\Delta G=\Delta G^{\circ} +R T\ln\left(\frac{[P]}{[R]}\right)$$ As the ratio of product to reactant concentrations decreases, the actual free energy change of the reaction decreases and can be made thermodynamically favourable. To be clear, it is not the enzyme that makes an unfavourable reaction favourable (again, it only lowers activation energy). Rather, the enzyme is catalyzing a reaction that is now thermodynamically favourable.
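To make the pulling effect concrete, here is a toy calculation; the numbers are invented for illustration (a reaction with $\Delta G^\circ = +10$ kJ/mol at roughly physiological temperature):

```python
# Illustrative only: an unfavourable reaction (dG0 = +10 kJ/mol) becomes
# favourable when the product/reactant ratio is kept low enough.
import math

R = 8.314   # J / (mol K), gas constant
T = 310.0   # K, roughly physiological temperature

def delta_g(dg0_joules, ratio_p_over_r):
    """Actual free energy change: dG = dG0 + R*T*ln([P]/[R])."""
    return dg0_joules + R * T * math.log(ratio_p_over_r)

dg_standard = delta_g(10_000.0, 1.0)    # ratio 1    -> dG = dG0, unfavourable
dg_pulled   = delta_g(10_000.0, 1e-3)   # products removed -> dG < 0, favourable
```

With the products held at one-thousandth of the reactant concentration, the $RT\ln$ term contributes about $-17.8$ kJ/mol, more than offsetting the $+10$ kJ/mol standard term.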
A minimal, if boring, example is that of two uncoupled 1D linear ODEs (with incommensurate constants) on a torus, such as can be found in Hasselblatt & Katok: $$\left\{\begin{aligned}\dot x &= \omega_1\\\dot y &= \omega_2,\end{aligned}\right. \tag{4.2.3}$$ whose trivial solution is $$\left\{\begin{aligned}x &= x_0 + \omega_1t\\y &= y_0 + \omega_2t.\end{aligned}\right. \tag{4.2.4}$$ The simplest nontrivial example I could find is the forced van der Pol oscillator (still simpler than the example of two of them coupled, mentioned in the OP), found, e.g., in Guckenheimer's 1980 paper (e-print): $$\left\{\begin{aligned}\dot x &= y - \epsilon(x^3/3 -x)\\\dot y &= -x+b\cos(\omega t),\end{aligned}\right.$$ which can be written as an autonomous system in 3D: $$\left\{\begin{aligned}\dot x &= y - \epsilon(x^3/3 -x)\\\dot y &= -x + b\cos z\\\dot z &= \omega.\end{aligned}\right.$$ A numerically integrated quasiperiodic trajectory of this system, in normalized variables, can be found in this paper (e-print).
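For readers who want to reproduce such a trajectory, here is a minimal integration of the 3D autonomous form with scipy; the parameter values are illustrative, not the ones from Guckenheimer's paper:

```python
# Sketch: integrate the autonomous 3D form of the forced van der Pol system.
# Parameters eps, b, omega are illustrative choices, not from the cited paper.
import numpy as np
from scipy.integrate import solve_ivp

eps, b, omega = 1.0, 0.5, 1.1

def rhs(t, u):
    x, y, z = u
    return [y - eps * (x**3 / 3 - x),   # dx/dt
            -x + b * np.cos(z),          # dy/dt
            omega]                       # dz/dt (the "time" variable)

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0, 0.0], max_step=0.05)
# Plot sol.y[0] against sol.y[1] (mod 2*pi in z) to see the trajectory.
```

The trajectory stays bounded (the van der Pol nonlinearity confines it near its limit cycle), which is a quick sanity check on the integration.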
Osmotic pressure

Osmotic pressure is the minimum pressure which needs to be applied to a solution to prevent the inward flow of water across a semipermeable membrane. [1] It is also defined as the measure of the tendency of a solution to take in water by osmosis. The phenomenon of osmotic pressure arises from the tendency of a pure solvent to move through a semi-permeable membrane into a solution containing a solute to which the membrane is impermeable. This process is of vital importance in biology, as the cell's membrane is selective toward many of the solutes found in living organisms. To visualize this effect, imagine a U-shaped clear tube with equal amounts of water on each side, separated at its base by a membrane that is impermeable to sugar molecules (made from dialysis tubing), with sugar added to the water on one side. The height of the water on each side will change in proportion to the pressure of the solutions. Osmotic pressure causes the height of the water in the compartment containing the sugar to rise, due to movement of pure water from the compartment without sugar into the compartment containing the sugar water. This process stops once the pressures of the water and sugar water on the two sides of the membrane become equal (see osmosis).

Theory

Thermodynamic explanation

Consider the system at the point it has reached equilibrium. The condition for this is that the chemical potential of the solvent (since only it is free to flow toward equilibrium) is equal on both sides of the membrane. The compartment containing the pure solvent has a chemical potential of $\mu^0(l,p)$.
On the other side, the compartment containing the solute has an additional contribution from the solute (factored as the mole fraction of the solvent, $x_s < 1$), but there also appears an addition in pressure. The balance is therefore: $$\mu_s^0(l,p)=\mu_s(l,x_s,p+\Pi)$$ where $p$ denotes the external pressure, $l$ the solvent, $x_s$ the mole fraction of the solvent and $\Pi$ the osmotic pressure exerted by the solutes. The addition of solute decreases the chemical potential (an entropic effect), while the pressure increases the chemical potential, and thus a balance is reached. Note that the presence of the solute decreases the potential because $x_s$ is smaller than 1.

Derivation of osmotic pressure

In order to find $\Pi$, the osmotic pressure, we consider equilibrium between a solution containing solute and pure water: $$\mu_s(l,x_s,p+\Pi)=\mu_s^0(l,p)$$ We can write the left-hand side as $$\mu_s(l,x_s,p+\Pi)=\mu_s^0(l,p+\Pi)+RT\ln(\gamma_s x_s)$$ where $\gamma_s$ is the activity coefficient of the solvent. The product $\gamma_s x_s$ is also known as the activity of the solvent, which for water is the water activity $a_w$. The addition to the pressure is expressed through the expression for the energy of expansion: $$\mu_s^0(l,p+\Pi)=\mu_s^0(l,p)+\int_p^{p+\Pi}\! V \, \mathrm{d}p$$ where $V$ is the molar volume (m³/mol). Inserting the expressions above into the chemical potential equation for the entire system and rearranging, we arrive at $$-RT\ln(\gamma_s x_s)=\int_p^{p+\Pi}\! V \, \mathrm{d}p$$ If the liquid is incompressible, the molar volume is constant and the integral becomes $\Pi V$. Hence, to high accuracy, $$\Pi=-(RT/V)\ln(\gamma_s x_s)$$ For pure substances the activity coefficient can be found as a function of concentration and temperature, but in the case of mixtures we are often forced to assume it is 1.0, so $$\Pi=-(RT/V)\ln(x_s)$$ For aqueous solutions, when determining the mole fraction of water, it is necessary to take into account the ionisation of salts.
For example, 1 mole of NaCl ionises to 2 moles of ions, and the mole fraction of water reduces accordingly. Historically, chemists found it time-consuming to calculate natural logs, so they used molal concentrations for dilute solutions as shown below. With spreadsheets the equations above are easy to use and offer much greater accuracy over a wider range of concentrations.

Morse equation

$$\Pi = i M R T,$$ where

- $i$ is the dimensionless van 't Hoff factor
- $M$ is the molarity
- $R = 0.08205746$ L atm K$^{-1}$ mol$^{-1}$ is the gas constant
- $T$ is the thermodynamic (absolute) temperature

This equation gives the pressure on one side of the membrane; the total pressure on the membrane is given by the difference between the pressures on the two sides. Note the similarity of the above formula to the ideal gas law, and also that osmotic pressure is not dependent on particle charge. This equation was derived by van 't Hoff.

Tonicity

Hypertonicity is the presence of a solution that causes cells to shrink. Hypotonicity is the presence of a solution that causes cells to swell. Isotonicity is the presence of a solution that produces no change in cell volume. When a biological cell is in a hypotonic environment, water flows across the cell membrane into the cell, causing it to expand. In plant cells, the cell wall restricts the expansion, resulting in pressure on the cell wall from within called turgor pressure.

Applications

Osmotic pressure is the basis of filtering ("reverse osmosis"), a process commonly used to purify water. The water to be purified is placed in a chamber and put under an amount of pressure greater than the osmotic pressure exerted by the water and the solutes dissolved in it. Part of the chamber opens to a differentially permeable membrane that lets water molecules through, but not the solute particles. The osmotic pressure of ocean water is about 27 atm. Reverse osmosis desalinates fresh water from ocean salt water.
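As a quick sanity check of the figures above, the Morse equation roughly reproduces the quoted ~27 atm for ocean water. The seawater composition below is a simplifying assumption (0.55 M NaCl with $i = 2$, ignoring the other dissolved ions):

```python
# Back-of-envelope osmotic pressure of seawater via the Morse equation.
# Approximating seawater as 0.55 M NaCl (i = 2) is an assumption.
R = 0.08205746   # L atm / (K mol), gas constant
i = 2            # van 't Hoff factor for fully dissociated NaCl
M = 0.55         # mol/L, approximate NaCl molarity of seawater
T = 298.15       # K

pi_atm = i * M * R * T   # osmotic pressure in atm, roughly 27
```

The result, about 26.9 atm, matches the "about 27 atm" figure in the text.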
Osmotic pressure is necessary for many plant functions. It is the resulting turgor pressure on the cell wall that allows herbaceous plants to stand upright, and it is how plants regulate the aperture of their stomata. In animal cells, which lack a cell wall, excessive osmotic pressure can result in cytolysis. Osmotic pressure is the preferred colligative property for the calculation of molecular weight.

Potential osmotic pressure

Potential osmotic pressure is the maximum osmotic pressure that could develop in a solution if it were separated from distilled water by a selectively permeable membrane. It is the number of solute particles in a unit volume of the solution that directly determines its potential osmotic pressure. If one waits for equilibrium, the osmotic pressure reaches the potential osmotic pressure.

References

1. Voet, Donald; Judith G. Voet; Charlotte W. Pratt (2001). Fundamentals of Biochemistry (Rev. ed.). New York: Wiley. p. 30.
2. Mansoor M. Amiji, Beverly J. Sandmann (2002). Applied Physical Pharmacy. McGraw-Hill Professional. pp. 54–57.
What does it mean for the ratio of the lengths of the sides of a rectangle to be rational, say $\frac{5}{3}$? It means that if the long side of the rectangle is divided into five equal parts, and if one counts out three of those parts, then the length of the resulting line segment equals the length of the short side of the rectangle. The short side can now be divided into three parts all equal to the parts of the long side. Hence the rectangle can be tiled as a $3\times5$ array of squares. Conversely, an $m\times n$ array of squares forms a rectangle whose ratio of sides is the rational number $\frac{n}{m}$. So to say that the ratio of sides of a rectangle is rational is the same as to say that the rectangle can be tiled as an array of squares. The question now is, can every rectangle be tiled as an array of squares? The answer isn't obvious. One might imagine that as long as we make the squares small enough, it can always be done. Before answering this question, let's imagine that someone has told you that a rectangle can be tiled as an array of squares but hasn't told you what size square to use. How would you go about finding the size of the square? To approach this, notice that if you manage to find a square that tiles a given rectangle, the same square will tile the shorter rectangle you get by chopping off a square section from the long end of the rectangle. On the other hand, if the rectangle is, in fact, a square, then the rectangle is a square tiling of itself (using only one tile). Because of these two properties, we can find the square we want by chopping square sections off of the rectangle until the remainder rectangle is itself a square. By reversing the process, this remainder square will tile the original rectangle, as the following image should make clear. At this point, the idea that every rectangle can be tiled as an array of squares may seem much less plausible than it did previously.
In order for a tiling with squares to exist, the remainder rectangle in the chopping-off process must eventually be a square. But it is not clear that this always has to happen. Why couldn't it be the case that, for certain rectangles, the chopping-off process continues forever, never resulting in a square? After all, for the remainder rectangle to be a square, its two sides have to be precisely equal. That the two sides should always be at least slightly unequal seems much more probable, from a random starting rectangle, than that they should ever be exactly the same. These misgivings are all well-founded, but the true situation can actually be even worse than this. For certain starting rectangles, the sides of the remainder rectangle never even get close to approximate equality, much less exact equality. This happens, for example, when the sequence of remainder rectangles falls into a repeating pattern, which occurs when a remainder rectangle is geometrically similar to an earlier remainder rectangle in the sequence. The simplest example of such a rectangle is the golden rectangle, which is defined by the property that chopping a square section off of the long end of the rectangle results in a rectangle that is similar to the starting rectangle. The ratio of the long side to the short side is known as the golden ratio and has decimal expansion $1.61803\ldots$. All remainder rectangles in the sequence have side lengths in this ratio, and hence none is ever close to being square. As a consequence, the golden rectangle cannot be tiled as a rectangular array of squares, and therefore the ratio of its side lengths is not rational. The defining property of the golden ratio implies that its value is $(1+\sqrt{5})/2$. 
It turns out that similar numbers involving square roots, that is, numbers of the form $r+s\sqrt{d}$ where $d$ is a natural number that is not a perfect square and $r$ and $s$ are rational numbers, always fall into a repeating, but generally more complicated, pattern of remainder rectangles. None of these numbers are rational. These examples are but the simplest way to see the phenomenon of irrationality. An interesting irrational number has decimal expansion $1.433127\ldots$. If the chopping-off process is carried out on a rectangle whose horizontal side has this length and whose vertical side has length $1$ then, after chopping a square off the long side, the vertical side becomes the long side; after chopping two squares off the long side, the horizontal side again becomes the long side; after chopping three squares off the long side, the vertical side becomes the long side; after chopping four squares off the long side, the horizontal side becomes the long side; and so on. Hence among the remainder rectangles are rectangles that become progressively longer and longer relative to their width. Assuming that this continues, it follows that the remainder rectangle is never a square and hence that the original rectangle does not have a whole-number ratio of sides. Actually proving that this pattern continues for this particular ratio (which is $I_0(2)/I_1(2)$, where the functions $I_n(z)$ are things called modified Bessel functions of the first kind) is considerably more work than in the case of the golden ratio. A more familiar number that exhibits a similar, but more complicated, pattern is $e$, which is therefore also irrational. The chopping-off pattern in this case is $[2,1,2,1,1,4,1,1,6,1,1,8,\ldots]$, where the numbers represent how many squares get chopped off with each change in orientation of the long edge. Unlike the examples we have looked at so far, however, most irrational numbers exhibit a rather unpredictable chopping-off pattern.
For example, $\pi$ has the pattern $[3,7,15,1,292,1,1,1,2,\ldots]$. In any case, it is only when the chopping-off process terminates that the side lengths have a whole-number ratio. Incidentally, the chopping-off process is usually called the Euclidean algorithm, and the sequences of numbers representing squares chopped off with each change in orientation of the long side are called the coefficients of the continued fraction. Examples of continued fraction coefficients for various numbers, some of which were discussed in this post, are listed below.$$\begin{aligned}5/3&=[1,1,2]\\22/9&=[2,2,4]\\99/34&=[2,1,10,3]\\(1+\sqrt{5})/2&=[1,1,1,\ldots]\\\sqrt{2}&=[1,2,2,2,\ldots]\\(6+\sqrt{10})/4&=[2,3,2,3,1,3,2,3,1,3,2,3,1,\ldots]\\I_0(2)/I_1(2)&=[1,2,3,4,5,6,\ldots]\\e&=[2,1,2,1,1,4,1,1,6,1,1,8,1,1,\ldots]\\\sqrt[3]{2}&=[1,3,1,5,1,1,4,1,1,8,1,14,1,10,2,1,\ldots]\\\pi&=[3,7,15,1,292,1,1,1,2,1,3,1,14,2\ldots]\end{aligned}$$
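The chopping-off process translates directly into code. The sketch below computes continued fraction coefficients: exactly for `Fraction` inputs, and approximately (only the first several terms are trustworthy) for floats:

```python
# Continued fraction coefficients via the Euclidean / chopping-off process.
# Each coefficient counts the squares removed before the long side of the
# rectangle changes orientation.
import math
from fractions import Fraction

def continued_fraction(x, max_terms=10):
    terms = []
    for _ in range(max_terms):
        a = math.floor(x)
        terms.append(int(a))
        frac = x - a
        # Exact termination for rationals; tolerance cutoff for floats.
        if frac == 0 or (isinstance(x, float) and frac < 1e-12):
            break
        x = 1 / frac
    return terms

# continued_fraction(Fraction(5, 3))  -> [1, 1, 2]
# continued_fraction(math.pi, 5)      -> [3, 7, 15, 1, 292]
```

Running it on the table's entries reproduces, e.g., $[1,1,2]$ for $5/3$, $[2,2,4]$ for $22/9$, and the start of $\pi$'s pattern.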
Suppose I want to estimate the following VAR(1) model: $$ Y_t = \mu + \Phi Y_{t-1} + \varepsilon_t $$ where $Y_t=(y_{1t}, y_{2t},\ldots,y_{kt})'$, $\mu=(\mu_1,\ldots,\mu_{k})'$ and $\Phi$ is a matrix of coefficients. I'm interested in obtaining the coefficients in $\Phi$ such that the resulting vector of predicted values $\hat{Y}_t = (\hat{y}_{1t}, \hat{y}_{2t},\ldots,\hat{y}_{kt})'$ obeys some constraints. Just to give an unrealistic example, I want to estimate the $\Phi$ matrix via least squares such that $3\hat{y}_{1t} + 2\hat{y}_{2t}\geq 0$ and $\hat{y}_{3t}+\hat{y}_{4t}+\hat{y}_{5t}\geq0$. How can I do it? In particular, how can I implement it in MATLAB? EDIT: so far my approach has been to minimise the sum of squared errors obtained from every equation of the VAR(1). Suppose I have a bivariate VAR(1); my problem has been: $$ \min_{\mu,\Phi} e_1'e_1 + e_2'e_2 $$ $$ \text{s.t. constraint 1,2,3...} $$ which I tried to solve with the fseminf function in MATLAB. Is there some better way? EDIT 2: Notice that the constraint is on the fitted values, not on the estimated coefficients.
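One possible approach, sketched in Python with scipy rather than MATLAB's fseminf: treat the problem as a smooth nonlinear program and hand the fitted-value inequalities to SLSQP as a vector constraint (one inequality per observation). The data, the toy bivariate constraint $3\hat y_{1t}+2\hat y_{2t}\ge 0$, and all constants below are invented for illustration:

```python
# Constrained least-squares estimation of a bivariate VAR(1) where the
# inequality constraints apply to the fitted values, not the coefficients.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
k, T = 2, 60
Y = np.cumsum(rng.normal(size=(T, k)) * 0.1, axis=0)   # fake data
X, Ynext = Y[:-1], Y[1:]                               # lagged / current

def unpack(theta):
    return theta[:k], theta[k:].reshape(k, k)          # mu, Phi

def fitted(theta):
    mu, Phi = unpack(theta)
    return mu + X @ Phi.T                              # Yhat_t = mu + Phi Y_{t-1}

def sse(theta):
    return np.sum((Ynext - fitted(theta)) ** 2)

# One inequality per time period: 3*yhat1 + 2*yhat2 >= 0.
cons = {'type': 'ineq',
        'fun': lambda th: fitted(th) @ np.array([3.0, 2.0])}

res = minimize(sse, np.zeros(k + k * k), method='SLSQP', constraints=[cons])
mu_hat, Phi_hat = unpack(res.x)
```

Since the objective is a convex quadratic and the fitted-value constraints are linear in the parameters, this is actually a quadratic program; a dedicated QP solver (e.g. `quadprog` in MATLAB) would be the more efficient choice for large systems.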
One excellent resource is to try Kaggle and to examine some of the competitions, some of which are specifically on the application of machine learning to credit scoring: https://www.kaggle.com/c/GiveMeSomeCredit You will see that the winning solution is made public, including source code and output: https://github.com/IdoZehori/Credit-Score/blob/master/...

You cannot do it. It is an under-determined problem. That is to say, a whole multitude (a subspace of $\mathbb{R}^{N\times N}$) of migration matrices will agree with any given table of default probabilities. Say you want to find a transition matrix for 2 states (IG, HY) plus default $$\left(\begin{matrix}p_{11} & p_{12} & p_{1D} \\p_{21} &...

One option to do it is a heatmap. Not sure which software you are using, but in MATLAB it is extremely simple to do and powerful to tweak. Below is an example. Let's assume there are 30 periods $t$ to $t+30$ and 21 ratings. Then you could run: rating = {'Aaa'; 'Aa1';'Aa2';'Aa3';'A1';'A2';'A3';'Baa1';'Baa2';'Baa3';'Ba1';'Ba2';'Ba3';'B1';'B2';'B3';'Caa1';'...

Firstly, it's good to straighten out our goal. You correctly say that IFRS 9 requires analysis of expected losses. There are two components of expected losses: 1) the expected probability of a default event; 2) the expected recovery rate. So, not only do we need the probability but also the recovery rate. Luckily, both are approximated by the credit spread, which ...

The U.S. Government DID save American International Group (AIG) from bankruptcy, since it was considered too big to fail; actually, a lot of financial institutions were insured by AIG. This Investopedia page is a nice summary of the topic of AIG's bailout. Here (Investopedia again) about Lehman Brothers, which became really too leveraged and exposed to ... This is an interesting question.
I'll make a guess at what may be the driving factors for "ratings inflation" based on these assumptions: rating agencies compete among themselves to conduct bond rating business with issuers, since they are paid for their services by the issuer; bond issuers choose the agency that promises the highest rating, since the ...

(P) prefix: As a service to the market and typically at the request of an issuer, Moody's will assign a provisional rating when it is highly likely that the rating will become final after all documents are received, or an obligation is issued into the market. A provisional rating is denoted by placing a (P) in front of the rating. Such ratings may also be ...

Actually, there is a practical way to do it. You can use your PoD estimates to assign a credit rating to your securities and then use a published transition matrix for your purposes. Or you can estimate transition probabilities by linear interpolation based on the PoD values that you have. Here is a publication containing transition matrices from Moody'...

Most of the papers concern CDS spreads, which you will need to convert to a PD. Paper using country-specific fundamentals: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2517018 This paper uses leverage: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2361872 Another one that decomposes them against peer groups: http://papers.ssrn.com/sol3/papers....

Yes, you can have two different ratings. The issuer has one credit rating, but the individual issues, even if they are both senior unsecured/secured with the same maturity, coupon, etc., can have different ratings. The key factor is going to be the structure/provisions of the issue itself. For example, an issue with a sinking fund is going to be viewed as ...
Reuters uses a proprietary model, the StarMine structural/SmartRatios Credit Risk model, that has been developed in-house and is provided with the Reuters data service. There does not exist a formal definition or paper about the model explaining how the score is obtained; Reuters simply explains roughly what it is on its website without going into ...

You are right, the rules to time-scale a $T$-year transition matrix $M_T$ are: $M_{k\cdot T} = M_T^k$ and $M_{T/k} = \sqrt[k]{M_T}$. The root of a matrix $M$ can be obtained using the spectral decomposition: $M = P\cdot D\cdot P^{-1} \Longrightarrow M^k = P\cdot D^k\cdot P^{-1}$, where $P$ and $D$ are the eigenvector and eigenvalue matrices of $M_T$. Note: the Perron-Frobenius tells ...

Depending upon how much data you have, you might find Violi (2004) useful. Nickell et al. (2000), while principally considering time-dependent stability tests, refers a bit to significance testing between the matrices of different agencies and might also provide some insight.

I believe that your problem can be formulated as: find the PD matrix that is as close as possible to a given PD matrix (the result of some previous calibration, or the matrix computed using an average hazard rate, or any other "target", or a penalty on non-smoothness) subject to the following constraints: the values that are given must be matched exactly...

Yes of course, credit rates depend on interest rates (e.g. https://en.wikipedia.org/wiki/Libor), which are set by some group of banks in almost every country. Going further, bankers analyze the market situation and also national interest rates, which are set by central bankers in every country which has a central bank (https://en.wikipedia.org/wiki/... You can do this using the optim function in R.
One possible solution is as follows:

base <- c(0.9190, 0.0739, 0.0072, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
          0.0113, 0.9126, 0.0709, 0.0031, 0.0021, 0.0000, 0.0000, 0.0000,
          0.0010, 0.0256, 0.9119, 0.0533, 0.0062, 0.0021, 0.0000, 0.0000,
          0.0000, 0.0021, 0.0536, 0.8794, 0....

Regarding how the rating agencies gave AAA ratings to CDOs and the like that clearly did not deserve those ratings: straightforward answer. The SEC licences all the ratings agencies as "nationally recognized statistical rating organizations" (NRSRO). It is blindingly obvious that the SEC was not actually overseeing the rating organizations that it was ...

I am also not aware of any papers in this area. But having developed many such models, I can list the important steps:

- Decide on the target variable: usual choices are historical default data, agency ratings and expert rankings
- Create a sample containing the possible predictors
- Reduce the list with the help of some expert, e.g. exclude all the predictors ...

The "issuer-pay" model works like this: The rating agency goes to the issuer and says "We heard that you are going to issue bonds. We can give you a rating if you pay us XXX dollars. It will help you a lot to have our rating". The issuer of course is free to refuse this offer (after all, this is just a rating agency, not the Cosa Nostra). In this case the ...

The Merton model will be a bit more quantitative. Z-Score is an option, as is Ohlson. In the end you are going to want some non-defaulted -> defaulted transition mapping based on factors you identify as meaningful.

Yes, you can. Also, do not use Altman's Z. The extreme scores are predictive, but a load of empirical research shows the intermediate values are not predictive. The best solution is a Bayesian solution because you are gambling money. Bayesian methods are coherent. Coherence is the statistical property by which fair gambles can be placed. Frequentist ...
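The spectral-decomposition recipe for time-scaling a transition matrix, from an earlier answer in this thread, can be sketched numerically. The 2-state matrix below is made up purely for illustration, not real rating data:

```python
import numpy as np

# Illustrative 2-state annual transition matrix (made-up numbers).
M1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])

# Spectral decomposition: M = P D P^{-1}
eigvals, P = np.linalg.eig(M1)

def matrix_power(eigvals, P, k):
    """Raise the matrix to power k (fractional k gives a matrix root)
    via its eigendecomposition: M^k = P D^k P^{-1}."""
    return P @ np.diag(eigvals ** k) @ np.linalg.inv(P)

M2 = matrix_power(eigvals, P, 2)       # 2-year matrix: same as M1 @ M1
M_half = matrix_power(eigvals, P, 0.5) # 6-month matrix: square root of M1

# The square root squared recovers the annual matrix.
assert np.allclose(M_half @ M_half, M1)
assert np.allclose(M2, M1 @ M1)
```

As the answer's Perron-Frobenius remark hints, this only behaves well when the eigenvalues are such that the fractional power stays a real, valid stochastic matrix; with real data that has to be checked.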
Mapping ordinal data to interval data is arbitrary. The ranking of rating agencies is ordinal data, so only the comparison operators > or < can be applied. The data can be sorted, and as a central tendency you can calculate the median. The main aspect of ordinal data is that it allows for rank order, but it does not allow for the relative degree of ...

Assuming your outcome/dependent variable is the rating agency's rating category, say 10 to 20 rating categories, you can use ordinal logistic regression, which is more natural for this kind of problem. So the model will predict the rating category. If your dependent variable is the internal default flag, then you can have your model predict the default rate and ...

What you're looking for looks to be more in the realm of a mathematical model (specific to the company's size, available liquidity, and industry). Credit Risk Pricing Models may provide a decent overview of how to build such a model. Unfortunately, duration/convexity will only help you capture the interest rate risk on your bonds, and not any of the ...

Bloomberg has a Default Risk model, which is similar to what you are querying. You can see a screenshot in this PDF. There you can also see the kind of variables they use. You can access it by typing DRSK at the CDS screen in Bloomberg. (If the screenshot in the PDF is not clear enough, let me know and I can post one with better resolution from Bbg.) This ...

I would think it is because: it can be bounded between 2 points; it can assume a wide range of shapes; it fits the data empirically (as you said). On a related note, some time back I read a paper which might give you a more formal reason. It is for estimating and simulating recovery rates. I haven't used it to model credit migration probabilities. But I think one ...
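The first answer's point, that ordinal ratings admit a median but not a meaningful mean, can be illustrated with a toy example (the rating scale and sample data here are my own, purely for illustration):

```python
# Ratings are ordinal: rank comparisons are meaningful, arithmetic is not.
# So summarise with rank-based statistics (the median), not a mean over
# an arbitrary numeric encoding.
ratings = ["AA", "A", "BBB", "A", "AA", "B", "A"]
scale = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]  # best to worst

ranks = sorted(scale.index(r) for r in ratings)
median_rank = ranks[len(ranks) // 2]
print(scale[median_rank])  # -> A
```

Any monotone re-labelling of the scale (AAA=1 or AAA=100) leaves this median unchanged, which is exactly why it is the appropriate central tendency here.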
Which is the best way to put function plots into a LaTeX document?

To extend the answer from Mica, pgfplots can do calculations in TeX:

\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[
      xlabel=$x$,
      ylabel={$f(x) = x^2 - x + 4$}
    ]
    \addplot {x^2 - x + 4};
  \end{axis}
\end{tikzpicture}
\end{document}

or using GNUplot (requires --shell-escape):

\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[
      xlabel=$x$,
      ylabel=$\sin(x)$
    ]
    \addplot gnuplot[id=sin]{sin(x)};
  \end{axis}
\end{tikzpicture}
\end{document}

You can also pre-calculate values using another program, for example a spreadsheet, and import the data. This is all detailed in the manual.

With version 3 of PGF/TikZ the datavisualization library is available for plotting data or functions. Here are a couple of examples adapted from the manual (see part VI, Data Visualization).

\documentclass[border=2mm,tikz]{standalone}
\usepackage{tikz}
\usetikzlibrary{datavisualization}
\usetikzlibrary{datavisualization.formats.functions}
\begin{document}
\begin{tikzpicture}
\datavisualization [school book axes, visualize as smooth line,
    y axis={label={$y=x^2$}},
    x axis={label} ]
data [format=function] {
  var x : interval [-1.5:1.5] samples 7;
  func y = \value x*\value x;
};
\end{tikzpicture}
\begin{tikzpicture}
\datavisualization [scientific axes=clean,
    y axis=grid,
    visualize as smooth line/.list={sin,cos,tan},
    style sheet=strong colors,
    style sheet=vary dashing,
    sin={label in legend={text=$\sin x$}},
    cos={label in legend={text=$\cos x$}},
    tan={label in legend={text=$\tan x$}},
    data/format=function ]
data [set=sin] {
  var x : interval [-0.5*pi:4];
  func y = sin(\value x r);
}
data [set=cos] {
  var x : interval [-0.5*pi:4];
  func y = cos(\value x r);
}
data [set=tan] {
  var x : interval [-0.3*pi:.3*pi];
  func y = tan(\value x r);
};
\end{tikzpicture}
\end{document}

tikz + gnuplot (see the manual for details).
Here's a "live" example used in a lecture (using beamer) to illustrate the convergence of a series of square-integrable functions.

\begin{tikzpicture}[domain=-1:1,yscale=2,xscale=4,smooth]
\fill[gray] (-1.2,-1.2) rectangle (1.2,2.5);
\draw[very thin] (-1.1,-1.1) grid[step=.5] (1.1,2.4);
\draw[thick,->] (-1.2,0) -- (1.2,0);
\draw[thick,->] (0,-1.2) -- (0,2.5);
\draw[color=red] plot[id=1] function{cos(pi*x)};
\draw<2->[color=blue,thick] plot[id=2] function{cos(pi*x)+cos(2*pi*x)/2};
\draw<3->[color=green!50!black,thick] plot[id=3] function{cos(pi*x) + cos(2*pi*x)/2 + cos(3*pi*x)/3};
\draw<4->[color=yellow,thick] plot[id=4] function{cos(pi*x) + cos(2*pi*x)/2 + cos(3*pi*x)/3 + cos(4*pi*x)/4};
\draw<5->[color=cyan,thick] plot[id=5] function{cos(pi*x) + cos(2*pi*x)/2 + cos(3*pi*x)/3 + cos(4*pi*x)/4 + cos(5*pi*x)/5};
\end{tikzpicture}

OK, here's a non-TikZ answer for balance (you'd think TikZ is the second coming on SE!)

\documentclass{minimal}
\usepackage{pstricks-add}
\begin{document}
\psset{xunit=7cm,yunit=0.6cm}
\def\xlim{1}
\def\ylim{16}
\begin{pspicture*}(-\xlim,-\ylim)(\xlim,\ylim)
\psaxes[Dx=0.5,Dy=5]{<->}(0,0)(-\xlim,-\ylim)(\xlim,\ylim)
\psplot[plotpoints=500,showpoints=false,algebraic]{-1}{1}{sin(1/x)/x}
\end{pspicture*}
\end{document}

Vincent Zoonekynd gives an example for this, from his long list of Metapost examples:

beginfig(166)
  ux:=2mm; uy:=5mm;
  numeric xmin, xmax, ymin, ymax, M;
  xmin := -6.3; xmax := 12.6;
  ymin := -2; ymax := 2;
  M := 100;
  draw (ux*xmin,0) -- (ux*xmax,0);
  draw (0,uy*ymin) -- (0,uy*ymax);
  pair a[];
  for i=0 upto M:
    a[i] := ( xmin + (i/M)*(xmax-xmin),
              sind(180/3.14*( xmin + (i/M)*(xmax-xmin) )) ) xscaled ux yscaled uy;
  endfor;
  draw a[0] for i=1 upto M: --a[i] endfor;
endfig;

gives

This is much longer than the other examples, because it does everything from scratch, but it would be easy to put some functions for creating axes and scaling the graph, so that specifying the plot was some boilerplate plus the function definition. I might do that later...
Is there a specific reason you need to graph the function within LaTeX? Wouldn't it be better to use something like R or MATLAB to generate a PDF that you can then \includegraphics? This will generally speed up compilation, and graphs thus generated are probably more customisable, and so on. If you absolutely have to generate the graph inside LaTeX, then consider using the standalone package: this will save some time when compiling big documents... Then of course, there is Sweave...

R and Sweave were already mentioned, but I couldn't pass up the opportunity to mention tikzDevice (yes, again TikZ). I have successfully been using it to generate .tex documents with R, for example:

options(tikzLatex='/path to TeX distribution on computer')
require(tikzDevice)
tikz("~/some destination/rgraph.tex", width = 5, height = 5.5)
Some R code
dev.off()

Usually I point it to the same folder as the working LaTeX document, and put it in the document with \input{rgraph}. I feel this gives me much needed control over my graphs, although I'll have to try some other solutions here before I decide which solution is the most comfortable for me. Just thought I'd add something (hopefully) of value.

The latest version of gnuplot itself also has a tikz output terminal.

xyplot is nice. edit: Oops — I thought you meant graph-theory graphs, not plots of f(x) versus x. I would use R and Sweave to make the graphs in LaTeX.
I guess I can't avoid the temptation. Try:

(Schematic created using CircuitLab; see the original answer to simulate the circuit.)

In both examples above, you want to use an ultra low current supply, rail-to-rail i/o opamp, with low offset voltage and current. I think the LT1494 may be at least interesting here. The opamp needs to be in control with the inputs near the top rail (the bottom rail won't matter much.) The opamp supply's current will be just equal to the current in \$R_2\$ (plus its own stray requirements, which is why you want a very low supply current opamp), sinking it through \$R_3\$. Since the input node voltages are approximately equal to each other, this means the current in \$R_2\$ is \$I_{LOAD}\cdot \frac{R_1}{R_2}\$. So the left side's voltage across \$R_3\$ is just a proportional output where \$V_{prop}\approx I_{LOAD}\cdot\frac{R_1\cdot R_3}{R_2}\$. The exact value depends. But in this case I don't think precision is terribly important to you.

The right side uses this developed voltage to drive a BJT indicator output. This output is the exact opposite of what you say you want. But I'm sure you can work out how to invert it. [I selected \$R_5\$'s value to be pretty high (mostly because I'm thinking about keeping the added circuit's current load fairly low and I didn't want to add much load to the voltage developed across \$R_3\$.)]

I've set this to generate an active output when the load current rises over about \$10\:\textrm{mA}\$. You specified zero, but you know that is impossible here. Some reasonable value has to be used. I picked this. You can pick something else, if you want. Of course, the dedicated ICs (such as those mentioned by Ali Chen) are probably a better way to go.
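A quick sanity check of the proportional-output formula above. The answer's schematic values are not reproduced in the text, so the resistor values here are assumed example values, not the actual parts; only the 10 mA trip point comes from the answer:

```python
# Hypothetical component values to illustrate V_prop ≈ I_LOAD * R1 * R3 / R2.
R1 = 0.1        # ohms, current-sense shunt (assumed)
R2 = 1_000.0    # ohms (assumed)
R3 = 100_000.0  # ohms (assumed)
I_load = 10e-3  # 10 mA: the trip point mentioned in the answer

V_prop = I_load * R1 * R3 / R2
print(V_prop)  # -> 0.1 (volts across R3 at the trip point, with these values)
```

With these assumed values the gain works out to 10 V per ampere of load current, which shows how the R3/R2 ratio amplifies the tiny shunt drop into something a comparator stage can use.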
Let $A=\mathbb K[X_1,\ldots,X_n]$ be a polynomial ring over some field $\mathbb K$. Let $\mathfrak p\subseteq A$ be a prime ideal. Let $Z(\mathfrak p)=\{ \mathfrak m\subset A\text{ maximal}\mid \mathfrak p\subseteq\mathfrak m\}$ be the set of maximal ideals of $A$ that lie over $\mathfrak p$. My intuition says that $$ \bigcap_{\mathfrak m\in Z(\mathfrak p)} \mathfrak m^s=\mathfrak p^s$$ because instead of counting a subvariety $s$ times, I can count each of its points $s$ times. Is this true? If no, what if $\mathbb K$ is algebraically closed? If yes, does it hold for a certain larger class of rings $A$?

For $s=1$, as Martin Brandenburg correctly states, your condition is exactly that of being a Jacobson ring, and hence is true for any factor ring of a polynomial ring in finitely many variables over a field or over $\mathbb{Z}$. However, for $s>1$, your condition should generally fail even in polynomial rings over an algebraically closed field. Indeed, the Zariski-Nagata theorem says that if $k$ is an algebraically closed field, $R=k[X_1, \dotsc, X_n]$, and $\mathfrak{p} \in \operatorname{Spec} R$, then the intersection you give is the $s$th symbolic power $\mathfrak{p}^{(s)}$ of $\mathfrak{p}$. By definition $\mathfrak{p}^{(s)} = \mathfrak{p}^s R_{\mathfrak{p}} \cap R$, which in general can be larger than $\mathfrak{p}^s$.

In an article of Huneke in Math. Ann. from 1986, for example, he gives a way to construct 3-generated prime ideals $\mathfrak p$ of height two in $k[[X,Y,Z]]$ (which can easily be altered to give such examples in $k[X,Y,Z]$) such that $\mathfrak p^{(2)}$ has arbitrarily many generators. (That is, pick a number; he can then give you a 3-generated prime ideal whose second symbolic power has at least that many generators.) But it's obvious that any such $\mathfrak p^2$ can be generated by at most 9 elements (namely, the pairwise products of the original generators of $\mathfrak p$).
Please tell me where I've gone wrong (if I did in fact make a mistake). I'm pricing a long forward on a stock. The usual setup applies:

This has payoff $S(T) - K$ at time $T$. We are at $t$ now.
$S(T) = S(t)e^{(r-\frac12 \sigma^2)(T-t)+\sigma(W(T)-W(t))}$.
$W(t)$ is a Wiener process.
$K \in \mathbb{R}_+$.
$Q$ is the risk-neutral measure.
$\beta(t) = e^{rt}$ is the domestic savings account, a tradable asset.
$r$ is the constant riskless rate.

My Attempt:

$f(t,S) = E^Q[\frac{\beta(t)}{\beta(T)}(S(T)-K)|\mathscr{F}_t]$
$= E^Q [\frac{\beta(t)}{\beta(T)}S(T)|\mathscr{F}_t] - E^Q [\frac{\beta(t)}{\beta(T)}K|\mathscr{F}_t]$
$= E^{P_S}[\frac{\beta(t)}{\beta(T)}S(T) \frac{\beta(T)S(t)}{\beta(t)S(T)}|\mathscr{F}_t] - \frac{\beta(t)}{\beta(T)}K$ (changing to the stock numeraire in the first term)
$= S(t) - K\frac{\beta(t)}{\beta(T)}$
$= S(t) - Ke^{-r(T-t)}$

This isn't graded homework or an assignment. (It is ungraded homework.)
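The closed-form result $S(t) - Ke^{-r(T-t)}$ can be sanity-checked by Monte Carlo directly under the risk-neutral measure, without the numeraire change. The parameters below are illustrative assumptions, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the question); tau = T - t.
S_t, K, r, sigma, tau = 100.0, 95.0, 0.05, 0.2, 1.0

# Simulate S(T) under Q: S(T) = S(t) exp((r - sigma^2/2) tau + sigma sqrt(tau) Z)
n = 1_000_000
Z = rng.standard_normal(n)
S_T = S_t * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)

# Discounted expected payoff vs the closed form derived above.
mc_price = np.exp(-r * tau) * np.mean(S_T - K)
closed_form = S_t - K * np.exp(-r * tau)

print(mc_price, closed_form)  # the two should agree to Monte Carlo error (~0.02)
```

The agreement confirms the derivation: discounting the linear payoff needs no measure change at all, since $e^{-r\tau}E^Q[S(T)] = S(t)$ by the martingale property of the discounted stock.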
In these lessons, we will learn how to determine if the given vectors are parallel. A vector is a quantity that has both magnitude and direction.

What are Parallel Vectors?

Vectors are parallel if they have the same direction. Both components of one vector must be in the same ratio to the corresponding components of the parallel vector.

Example: How to define parallel vectors? Two vectors are parallel if they are scalar multiples of one another. If u and v are two non-zero vectors and u = cv, then u and v are parallel. The following diagram shows several vectors that are parallel.

How to determine if the given 3-dimensional vectors are parallel? Example: Determine which vectors are parallel to v = <-3, -2, 5>.

What are the conditions for two lines to be parallel given their vector equations? Lines are parallel if their direction vectors are in the same ratio.

Example: If the lines l 1 : \(r = \left( {\begin{array}{*{20}{c}}1\\{ - 5}\\7\end{array}} \right) + \lambda \left( {\begin{array}{*{20}{c}}{a - 1}\\{ - a - 1}\\b\end{array}} \right)\) and l 2 : \(r = \left( {\begin{array}{*{20}{c}}9\\3\\{ - 8}\end{array}} \right) + \mu \left( {\begin{array}{*{20}{c}}{2a}\\{3 - 5a}\\{15}\end{array}} \right)\) are parallel, find the values of a and b.

What is a vector, how to add vectors, and how to prove vectors are parallel and collinear? Examples: (1) A, B, C are midpoints of their respective lines. Find the vector OB. (2) N = midpoint of OB, M = midpoint of OA. Show that MN is parallel to AB. (3) Given the vectors, prove that the three given points are collinear.

How to answer a question that involves Vectors - Lines, Parallel, Perpendicular & Intersection?
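The scalar-multiple test for parallelism described above is easy to automate; for 3-dimensional vectors, "components in the same ratio" is equivalent to a vanishing cross product. A small sketch (`are_parallel` is my own helper name):

```python
import numpy as np

def are_parallel(u, v, tol=1e-9):
    """Two non-zero vectors are parallel iff one is a scalar multiple of the
    other, i.e. their component ratios agree; equivalently, for 3-vectors,
    their cross product vanishes."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.linalg.norm(np.cross(u, v)) < tol

v = np.array([-3, -2, 5])
print(are_parallel(v, 2 * v))        # True: a scalar multiple of v
print(are_parallel(v, [-3, -2, 4]))  # False: last components break the ratio
```

Using the cross product avoids the divide-by-zero cases that a naive component-ratio comparison runs into when a component is 0.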
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover consisting of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$, and which admits no finite subcover. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact.

I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure.

The above workings are basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly, without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure; that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure.

What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of a given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set.

Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set.

Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: if $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$, which will fulfil the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure.

We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why, by doing what we have done, we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$?

$\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check that it is convergent, and then compute its value.

The above workings are basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly, without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure. What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of a given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set.

Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$ (cont.)

We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series.

Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively.

Each term in brackets is also nonnegative, by the Lebesgue outer measure of open intervals; together, let the differences be $c_i = q_{n(i)} - q_{m(i)}$. These form a series that is bounded from above and below.

Hence:

$$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$

Consider the partial sums of the above series. Note every partial sum is telescoping, since in a finite series addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, by moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and:

@AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing, re-expressed differently.

If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$, you can't conclude anything about the topologies; if however the function is continuous, then you can say stuff about the topologies.

@Overflow2341313 Could you send a picture or a screenshot of the problem?

nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals; thus they are empty and do not contribute to the sum. So there are only countably many disjoint intervals in the cover $C$.

@Perturbative Okay, similar problem, if you don't mind guiding me in the right direction. Suppose a function $f$ exists, with the same setup $(X, t) \to (Y, S)$, that is 1-1, open, and continuous but not onto; construct a topological space which is homeomorphic to the space $(X, t)$. Simply restrict the codomain so that it is onto? Making it bijective and hence invertible.

hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird...

In a schematic, we have the following. I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set:

@Perturbative Okay, last question. Think I'm starting to get this stuff now... I want to find a topology $t$ on $\mathbb{R}$ such that $f: (\mathbb{R}, U) \to (\mathbb{R}, t)$ defined by $f(x) = x^2$ is an open map, where $U$ is the "usual" topology ($x \in U$ implies $x \in (a,b) \subseteq U$ for some interval $(a,b)$). To do this... the smallest $t$ can be is the trivial topology on $\mathbb{R}$, namely $\{\emptyset, \mathbb{R}\}$. But we required that everything in $U$ be in $t$ under $f$?
@Overflow2341313 Also, for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse. I'm not sure if adding the additional condition that $f$ is an open map will make a difference.

For those who are not very familiar with this interest of mine: besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships. An element of a proof space is a proof, which consists of steps and forms a path in this space. For that I have a postulate: given two paths $A$ and $B$ in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or else $B$ is unprovable under the current formal system.

Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
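The asymptotic equivalence in that last question can at least be probed numerically (a quick check, not a proof; log-gamma is used to avoid the enormous integers the binomial coefficients produce). Shown here for $c=2$, $\alpha=1$:

```python
from math import lgamma, exp, log

def log_comb(K, n):
    # log of the binomial coefficient via log-gamma (avoids huge integers)
    return lgamma(K + 1) - lgamma(n + 1) - lgamma(K - n + 1)

def lhs(K, c, z):
    lz = log(z)
    return sum(exp(log_comb(K, n) + log_comb(K, n + c) + (n + c / 2) * lz)
               for n in range(K - c + 1))

def rhs(K, z):
    lz = log(z)
    return sum(exp(2 * log_comb(K, n) + n * lz) for n in range(K + 1))

c, alpha = 2, 1.0
ratios = []
for K in (50, 200, 800):
    z = K ** -alpha
    ratios.append(lhs(K, c, z) / rhs(K, z))
print(ratios)  # the ratio should drift toward 1 as K grows
```

Heuristically, both sums are dominated by $n \approx \sqrt{K z_K}\,K^{?}$-scale terms where the term-by-term ratio $K^{c/2}\,n!/(n+c)!$ is close to 1, which is consistent with the slow numerical convergence this sketch shows.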
Let $R$ be a commutative ring with unit element. Suppose that it has only a finite number of ideals. Show that it is a field.

My reasoning is as follows: Let $a \in R\setminus\{0\}$. Since $R$ is a commutative ring, it can be shown that $Ra$ is an ideal of $R$. It follows that $Ra^n$ is also an ideal of $R$ for every $n \in \mathbb N^*$, and $Ra \supseteq Ra^2 \supseteq \cdots \supseteq Ra^n \supseteq \cdots$. But $R$ has only a finite number of ideals, so there must exist $m \in \mathbb N^*$ such that $Ra^n=Ra^{n+1}=(Ra)a^n$ for every $n \geq m$. Now, $(Ra)a^n=\{(ba)a^n : b \in R\}$. If $ba \neq 1$ for all $b \in R$, then $1\cdot a^n \notin (Ra)a^n = Ra^n$, a contradiction because $1 \in R$. In other words, there exists $b \in R$ such that $ba=1$, so $R$ is a field.

Is my reasoning correct? Can we conclude that $R=Ra$, and thus $R=Ra^n$, for every $n \in \mathbb N^*$?
How many kids?

This is the toughest factor to find. While Santa isn't specific to a single religion (traditions differ in various parts of the world, and belief rates in Santa Claus are not tracked by the WHO), looking at Christians seemed to be a reasonable, if imperfect, start. According to 2010 estimates, there are 2.2 billion Christians in the world. I have no idea how many of them are kids, but we're all kids at heart, aren't we? So I'll just go with 2.2 billion kids in my calculations. That might sound like a huge overestimate, but, as you'll see, even if that number is off by a factor of ten, it won't make much difference. No matter how you look at it, though, that's a lot of chimneys.

How many cookies?

Assuming Santa heeds his doctor's advice and doesn't eat the entire plate of cookies (he just eats one—he has to keep up appearances, after all!): excluding the obligatory sip of milk, by the time the night's over, Jolly Ol' Saint Nick would have ingested food energy (it's mostly empty carbs anyway!) equivalent to the yield of a 200 kT thermonuclear bomb!

Naughty or Nice list: ~205 TiB

Accounting for names (100 characters), identifying information (200 characters), address/coordinates (200 characters), and a photograph (100 KB) (he has to be able to recognize the little tykes on sight if one of them happens to wake up in the middle of the night), Santa would need storage for around 100 KB on each kid.

$$100 \text{ KiB} \cdot 2.2\times10^9 \approx 205 \text{ TiB}$$

Very roughly speaking, that means Santa's carrying around about a hundred 2 TB external HDDs (2015 budget: US$7,500, chump change for Kris Kringle!). That's a fair amount of storage for one man to carry around, for sure, but compared to Santa's other challenges, a hundred hard drives would be the least of his worries.

Cargo

This one's easy: 2.2 billion Red Ryder Lever Action BB Guns at 1 kg each is 2.2 million metric tons.
Volume-wise, let's say you can get about 10 presents per cubic meter (don't want to smash the pretty bow ribbon!). For a sense of scale, check this query:

44% of the volume of Sydney Harbor
69% of the total annual volume of oil transported by oil tankers worldwide

If I were Santa and refused to delegate the actual home visits, I would at least mobilize a huge global shipping network months in advance, to ship the presents to local warehouses and perhaps even neighborhood depots from there, for staging purposes. That way on Christmas Eve itself, he has much less work to do, and much, much less cargo to drag around the globe.

Time estimate

This is where it starts to get rough. According to the map on the Wikipedia page linked above, Christians are spread out east-west more or less across the entire globe. Very roughly speaking, but precisely enough for our purposes here, let's say Santa has 24 hours to deliver all of the presents. Can one man do it?

$$24\ \text{h} / (2.2\times10^9\ \text{kids}) = 1.091\times 10^{-8}\ \text{h/kid} = 39.27\ \mu\text{s/kid}$$

I hope Santa's a fast eater!

Forces on Santa

It's hard to get a precise estimate of the distance Santa will be traveling, or how much time will be spent taking off, flying, and landing, versus just going door-to-door in apartments. While acceleration during takeoff would surely be very bad, why don't we just consider the best possible case for Santa's delivery: every kid in the world is beamed with a transporter, semi-comatose, into Russia, in a single line, shoulder to shoulder, with their arms outstretched. Santa would then just need to take a step, stop, hand out a present, and repeat. Let's say taking a step and stopping take 2/3 of the time (1/3 accelerating, 1/3 decelerating), and handing out the present takes the other 1/3. Santa therefore has $39.27/3 = 13.09\ \mu s$ to accelerate through roughly half a metre.
From the constant-acceleration kinematics equations, we know:

$$s = v_0t + \frac{1}{2} at^2$$

With a little rearranging, and $s = 0.5\ \text{m}$, $t = 13.09\ \mu\text{s}$, $v_0 = 0$ (he accelerates from a standstill), we get:

$$a = \frac{2s}{t^2} = 5.84 \times 10^9\ \text{m/s}^2 = 5.95\times10^8\ g$$

That's 595,000,000 g. You can check this page for the gory details of human tolerance, but, really, don't bother. Most humans wouldn't survive 40–50 g or so. At 595 million g, Santa is now, at best, a buttery sweet-smelling liquid. But there's another, worse problem than that! (Worse than Liquid Santa? This can't be good...)

No matter what kind of fuel you use, moving Santa in one direction requires an equal force in the opposite direction. As it turns out, that will amount to a lot of energy. How much? $6.4\times10^{20}\ \text{J}$, which is about 11% of the world's total energy (oil) reserves. But what happens when you release all that heat in a short amount of time? Liquid Santa is now setting off a continuous chain of fireballs, with a total yield equivalent to $1.5\times 10^{11}$ tons of TNT, which is more than enough to vaporize every child on Earth, and plunge the rest of us into a nuclear winter severe enough to have a deadly White Christmas, all year round!

Anyway, I think that takes care of your cargo problem!
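The back-of-the-envelope numbers above are easy to reproduce with the same inputs as the post (2.2 billion kids, 24 hours, a half-metre step, a third of the per-kid time spent accelerating):

```python
kids = 2.2e9
seconds = 24 * 3600
t_per_kid = seconds / kids   # time budget per kid
t_accel = t_per_kid / 3      # 1/3 of that time spent accelerating

s = 0.5                      # metres covered while accelerating
a = 2 * s / t_accel**2       # from s = v0*t + a*t^2/2 with v0 = 0
g = a / 9.81                 # expressed in multiples of standard gravity

print(t_per_kid * 1e6)  # ≈ 39.27 µs per kid
print(g)                # ≈ 5.9e8 g
```

Even generous rounding of the inputs leaves the answer in the hundreds of millions of g, which is the post's point: the conclusion is robust to an order of magnitude either way on the kid count.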
NTS Abstracts, Fall 2015

Sep 03: Kiran Kedlaya, "On the algebraicity of (generalized) power series"

A remarkable theorem of Christol from 1979 gives a criterion for detecting whether a power series over a finite field of characteristic p represents an algebraic function: this happens if and only if the coefficient of the n-th power of the series variable can be extracted from the base-p expansion of n using a finite automaton. We will describe a result that extends this result in two directions: we allow an arbitrary field of characteristic p, and we allow "generalized power series" in the sense of Hahn-Mal'cev-Neumann. In particular, this gives a concrete description of an algebraic closure of a rational function field in characteristic p (and corrects a mistake in my previous attempt to give this description some 15 years ago).

Sep 10: Sean Rostami, "Fixers of Stable Functionals"

The epipelagic representations of Reeder-Yu, a generalization of the "simple supercuspidals" of Gross-Reeder, are certain low-depth supercuspidal representations of reductive algebraic groups G. Given a "stable functional" f, which is a suitably 'generic' linear functional on a vector space coming from a Moy-Prasad filtration for G, one can create such a representation. It is known that the representations created in this way are compactly induced from the fixer in G of f, and it is important to identify explicitly all the elements that belong to this fixer. This work is in progress.

Sep 17: David Zureick-Brown, "Tropical geometry and uniformity of rational points"

Let X be a curve of genus g over a number field F of degree d = [F:Q]. The conjectural existence of a uniform bound N(g,d) on the number #X(F) of F-rational points of X is an outstanding open problem in arithmetic geometry, known to follow from the Bombieri-Lang conjecture. We prove a special case of this conjecture: we give an explicit uniform bound when X has Mordell-Weil rank r ≤ g-3.
This generalizes recent work of Stoll on uniform bounds for hyperelliptic curves. Using the same techniques, we give an explicit, unconditional uniform bound on the number of F-rational torsion points of J lying on the image of X under an Abel-Jacobi map. We also give an explicit uniform bound on the number of geometric torsion points of J lying on X when the reduction type of X is highly degenerate. Our methods combine Chabauty-Coleman's p-adic integration, non-Archimedean potential theory on Berkovich curves, and the theory of linear systems and divisors on metric graphs. This is joint work with Joe Rabinoff and Eric Katz.

Sep 22: Joseph Gunther, "Embedding Curves in Surfaces and Stabilization of Hypersurface Singularity Counts"

We'll present two new applications of Poonen's closed point sieve over finite fields. The first is that the obvious local obstruction to embedding a curve in a smooth surface is the only global obstruction. The second is a proof of a recent conjecture of Vakil and Wood on the asymptotic probability of hypersurface sections having a prescribed number of singularities.

Sep 24: Brandon Alberts, "The Moments Version of Cohen-Lenstra Heuristics for Nonabelian Groups"

Cohen-Lenstra heuristics posit the distribution of unramified abelian extensions of quadratic fields. A natural question to ask is how to get an analogous heuristic for nonabelian groups. In this talk I build on and extend recent work in the area of unramified extensions of imaginary quadratic fields, and bring it all together under one Cohen-Lenstra style heuristic.

Oct 08: Ana Caraiani, "On vanishing of torsion in the cohomology of Shimura varieties"

I will discuss joint work in progress with Peter Scholze showing that torsion in the cohomology of certain compact unitary Shimura varieties occurs in the middle degree, under a genericity assumption on the corresponding Galois representation.
Oct 15: Valentin Blomer, "Arithmetic, geometry and analysis of a senary cubic form". We establish an asymptotic formula (with power saving error term) for the number of rational points of bounded height for a certain cubic fourfold, thereby proving a strong form of Manin's conjecture for this algebraic variety by techniques of analytic number theory.

Oct 22: Brian Cook, "Configurations in dense subsets of Euclidean spaces". A result of Katznelson and Weiss states that a suitably dense (measurable) subset of the Euclidean plane realizes every sufficiently large distance; that is, for every prescribed (sufficiently large) real number the set contains two elements whose distance is this number. The analogue of this statement for finding three equally spaced points on a line, i.e. for finding three-term arithmetic progressions, in a given set is false, and in fact false in every dimension. In this talk we revisit the case of three-term progressions when the standard Euclidean metric is replaced by other metrics.

Oct 29: Aaron Levin, "Integral points and orbits in the projective plane". We will discuss the problem of classifying the behavior of integral points on affine subsets of the projective plane. As an application, we will examine the problem of classifying endomorphisms of the projective plane with an orbit containing a Zariski dense set of integral points (with respect to some plane curve). This is joint work with Yu Yasufuku.

Nov 12: Vlad Matei, "A geometric perspective on Landau's theorem over function fields". We revisit the recent result of Lior Bary-Soroker. It deals with a function field analogue of Landau's classical result about the asymptotic density of numbers which are sums of two integer squares. The results obtained are just in the large characteristic and large degree regime.
We obtain a characterization as q

Dec 17: Tonghai Yang, "A generating function of arithmetic divisors in a unitary Shimura variety: modularity and application". ... is a very important step, where $Z(m)$ are the Heegner divisors and $[\xi]$ is the rational canonical divisor of degree 1 (associated to the Hodge bundle). In the proof, they actually use the arithmetic version in the calculation: $[\hat{\xi}] + \sum_m \hat{Z}(m) q^m \in \widehat{\mathrm{CH}}^1(X) \otimes \mathbb{C}$, which is also modular. In this talk, we define a generalization of this arithmetic generating function to unitary Shimura varieties of type (n, 1) and prove that it is modular. It has applications to the Colmez conjecture and to Gross-Zagier type formulas.

Dec 17: Nathan Kaplan. Coming soon...
Here $\mathbb N = \{2,3,4,\dots\}$ with the binary operation of addition. If $m \in \mathbb N$ we denote by $G_{\mathbb N} (m)$ the semigroup generated by $m$.

Definition: A number $p$ is said to be prime if for all $m \lt p$, $\;p \notin G_{\mathbb N} (m)$.

We denote the set of non-empty finite subsets of $\mathbb N$ by $\mathcal F (\mathbb N)$. Let $\mathtt E$ be a function $\mathtt E: \mathbb N \to \mathcal F (\mathbb N)$ satisfying the following for all $n \in \mathbb N$:

$\tag 0 \mathtt E (2) = \{2\}$

$\tag 1 \text{ If } (\forall \text{ prime } p \lt n) \; n \notin G_{\mathbb N} (p) \text{ then } \mathtt E (n) = \{n\}$

$\tag 2 \text{ If } \, (\exists \text{ prime } p \lt n) \; n \in G_{\mathbb N} (p) \text{ then } \mathtt E (n) \text{ is the set of all such primes}$

$\tag 3 \mathtt E (n+1) \cap \mathtt E (n) = \emptyset$

We have the following result:

Theorem 1: There exists one and only one function $\mathtt E$ satisfying $\text{(0)}$ through $\text{(2)}$; it will also satisfy $\text{(3)}$. Moreover, for every $n$, all the numbers in the set $\mathtt E (n)$ are prime (the prime 'factors').

Question: Can the theorem be proved in this $(\mathbb N,+)$ setting? If yes, we can continue.

Theorem 2: The set of all prime numbers is an infinite set.

Proof: If $a_1$ is any number, consider the 'next further out' number $\tag 4 a_2 = \sum_{i=1}^{a_1+1}\, a_1 = \sum_{i=1}^{a_1}\,( a_1 + 1).$ A simple argument using $\text{(3)}$ shows that $\mathtt E (a_1) \subsetneq \mathtt E (a_2)\;$ (c.f. Bill Dubuque's remark). Employing recursion we get a sequence $a_1, a_2, a_3,\dots$ with a corresponding chain of strictly increasing sets $\quad \mathtt E (a_1) \subsetneq \mathtt E (a_2) \subsetneq \mathtt E (a_3) \subsetneq \dots$ So there are sets of prime numbers with more elements than any finite set.
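The definitions above translate directly into a short computational sketch (not part of the question, just an illustration): membership in $G_{\mathbb N}(m)$ is exactly divisibility by $m$, and rules (1) and (2) then determine $\mathtt E$. Function names here are my own.

```python
def in_G(n, m):
    # n lies in the additive semigroup generated by m  <=>  m divides n
    return n % m == 0

def is_prime(p):
    # p is prime iff p is in no G(m) for any m with 2 <= m < p
    return all(not in_G(p, m) for m in range(2, p))

def E(n):
    # rule (2): the set of primes p < n with n in G(p); rule (1): else {n}
    divisors = {p for p in range(2, n) if is_prime(p) and in_G(n, p)}
    return divisors if divisors else {n}

# E(12) = {2, 3}, E(13) = {13}; property (3) holds since consecutive
# integers share no prime 'factor'.
```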
$\blacksquare$

My Work

Please see

Note that the proof supplied by Filip Saidak has most likely been known for many years; see Bill Dubuque's answer to the math.stackexchange.com question "Is there an intuitionist (i.e., constructive) proof of the infinitude of primes?"
I'm having a little difficulty understanding the definition of monoidal category. I intuitively understand what the axioms are expressing, but I'm having a bit of difficulty knowing which functors the natural isomorphisms are actually relating. My weak understanding of the situation is that $\alpha$ is a natural isomorphism between the functor $F: \mathcal{C}\times\mathcal{C}\times\mathcal{C}\rightarrow\mathcal{C}$ given by $\otimes(1\times\otimes)$ and the functor $G: \mathcal{C}\times\mathcal{C}\times\mathcal{C}\rightarrow\mathcal{C}$ given by $\otimes(\otimes\times 1)$. Likewise, $\lambda$ is a natural isomorphism between $H: \mathcal{C}\rightarrow\mathcal{C}$, given by fixing $I$ in the left argument of $\otimes$, and $Id_\mathcal{C}$ (and a similar story for $\rho$). Is this correct?

The functors under consideration are most apparent if you express the axioms for a monoidal category by commutative diagrams. By analogy, recall that a monoid (in the category of sets) is an object $M$ together with an identity map $i:\ast \to M$ from the one-point set $\ast$, together with a multiplication map $\mu: M\times M \to M$ such that certain diagrams commute. One such diagram expresses the fact that $i$ is a left unit for $\mu$; here the map $\pi$ is the projection. There is a similar diagram for right-unitality, and one for associativity.

Now for a monoidal category $(\mathcal{C}, \otimes, I)$, replace $M$ above by $\mathcal{C}$, $\mu$ by $\otimes$ and $i$ by the constant functor at $I$. Then $\mathcal{C}$ is a monoidal category if the (relabelled) diagrams above commute up to natural isomorphism, and some additional axioms hold (the pentagon identity, etc.).
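The set-level diagrams can be checked pointwise, which may make the analogy concrete. A minimal sketch (my own example, with $M$ a sample of strings under concatenation and $i$ the empty string):

```python
# Pointwise check of the monoid axioms that the commutative diagrams encode:
# left/right unitality and associativity for (M, mu, i).
M = ["", "a", "b", "ab", "ba"]   # a finite sample of elements
i = ""                           # image of the identity map * -> M
mu = lambda x, y: x + y          # multiplication = string concatenation

# unitality: mu(i, x) = x = mu(x, i)
assert all(mu(i, x) == x == mu(x, i) for x in M)
# associativity: mu(mu(x, y), z) = mu(x, mu(y, z))
assert all(mu(mu(x, y), z) == mu(x, mu(y, z)) for x in M for y in M for z in M)
```

In the monoidal-category version these equalities are relaxed to the natural isomorphisms $\lambda$, $\rho$, $\alpha$.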
Mathematical symbols are used to perform various operations. The symbols make it easier to refer to mathematical quantities and help in easy denotation. It is interesting to note that the whole of maths is completely based on numbers and symbols. The math symbols not only refer to different quantities but also represent the relationship between two quantities. The tables below provide you with a list of all the common symbols in maths and examples of how to read and operate with them.

Basic Math Symbols List with Names

This is a list of commonly used symbols in the stream of mathematics.

| Symbol | Symbol Name | Meaning or Definition | Example |
|---|---|---|---|
| ≠ | not equal sign | inequality | 10 ≠ 6 |
| = | equals sign | equality | 3 = 1 + 2 |
| < | strict inequality | less than | 7 < 10 |
| > | strict inequality | greater than | 6 > 2 |
| ≤ | inequality | less than or equal to | x ≤ y means x is less than or equal to y |
| ≥ | inequality | greater than or equal to | a ≥ b means a is greater than or equal to b |
| [ ] | brackets | calculate expression inside first | [2 × 5] + 7 = 17 |
| ( ) | parentheses | calculate expression inside first | 3 × (3 + 7) = 30 |
| − | minus sign | subtraction | 5 − 2 = 3 |
| + | plus sign | addition | 4 + 5 = 9 |
| ∓ | minus-plus | both minus and plus operations | 1 ∓ 4 = −3 and 5 |
| ± | plus-minus | both plus and minus operations | 5 ± 3 = 8 and 2 |
| × | times sign | multiplication | 4 × 3 = 12 |
| * | asterisk | multiplication | 2 * 3 = 6 |
| ÷ | division sign / obelus | division | 15 ÷ 5 = 3 |
| ∙ | multiplication dot | multiplication | 2 ∙ 3 = 6 |
| – | horizontal line | division / fraction | 8/2 = 4 |
| / | division slash | division | 6 ⁄ 2 = 3 |
| mod | modulo | remainder calculation | 7 mod 3 = 1 |
| a^b | power | exponent | 2^4 = 16 |
| Symbol | Symbol Name | Meaning or Definition | Example |
|---|---|---|---|
| . | period | decimal point, decimal separator | 4.36 = 4 + 36/100 |
| √a | square root | √a · √a = a | √9 = 3 |
| a^b | caret | exponent | 2 ^ 3 = 8 |
| ⁴√a | fourth root | ⁴√a · ⁴√a · ⁴√a · ⁴√a = a | ⁴√16 = 2 |
| ³√a | cube root | ³√a · ³√a · ³√a = a | ³√343 = 7 |
| % | percent | 1% = 1/100 | 10% × 30 = 3 |
| ⁿ√a | n-th root (radical) | ⁿ√a · ⁿ√a · … (n times) = a | for n = 3, ³√8 = 2 |
| ppm | per-million | 1 ppm = 1/1000000 | 10 ppm × 30 = 0.0003 |
| ‰ | per-mille | 1‰ = 1/1000 = 0.1% | 10‰ × 30 = 0.3 |
| ppt | per-trillion | 1 ppt = 10⁻¹² | 10 ppt × 30 = 3×10⁻¹⁰ |
| ppb | per-billion | 1 ppb = 1/1000000000 | 10 ppb × 30 = 3×10⁻⁷ |

Note that the radical symbols denote the principal (non-negative) root, so √9 = 3.

Maths Logic Symbols

| Symbol | Symbol Name | Meaning or Definition | Example |
|---|---|---|---|
| ^ | caret / circumflex | and | x ^ y |
| · | dot | and | x · y |
| + | plus | or | x + y |
| & | ampersand | and | x & y |
| \| | vertical line | or | x \| y |
| ∨ | reversed caret | or | x ∨ y |
| x̄ | bar | not / negation | x̄ |
| ' | single quote | not / negation | x' |
| ! | exclamation mark | not / negation | !x |
| ¬ | not | not / negation | ¬x |
| ~ | tilde | negation | ~x |
| ⊕ | circled plus / oplus | exclusive or (xor) | x ⊕ y |
| ⇔ | equivalent | if and only if (iff) | |
| ⇒ | implies | | |
| ∀ | for all | | |
| ↔ | equivalent | if and only if (iff) | |
| ∄ | there does not exist | | |
| ∃ | there exists | | |
| ∵ | because / since | | |
| ∴ | therefore | | |

Calculus and Analysis Symbols in Maths

| Symbol | Symbol Name | Meaning or Definition | Example |
|---|---|---|---|
| ε | epsilon | represents a very small number, near zero | ε → 0 |
| lim x→a | limit | limit value of a function | lim x→a (3x+1) = 3a + 1 |
| y′ | derivative | derivative, Lagrange's notation | (5x³)′ = 15x² |
| e | e constant / Euler's number | e = 2.718281828… | e = lim (1+1/x)^x as x→∞ |
| y⁽ⁿ⁾ | nth derivative | n times derivation | nth derivative of 3xⁿ is 3·n! |
| Symbol | Symbol Name | Meaning or Definition | Example |
|---|---|---|---|
| y″ | second derivative | derivative of derivative | (4x³)″ = 24x |
| d²y/dx² | second derivative | derivative of derivative | d²/dx² (6x³ + x² + 3x + 1) = 36x + 2 |
| dy/dx | derivative | derivative, Leibniz's notation | d/dx (5x) = 5 |
| dⁿy/dxⁿ | nth derivative | n times derivation | |
| ÿ = d²y/dt² | second derivative of time | derivative of derivative | |
| ẏ | single derivative of time | derivative by time, Newton's notation | |
| D²x | second derivative | derivative of derivative | |
| Dx | derivative | derivative, Euler's notation | |
| ∫ | integral | opposite to derivation | |
| ∂f(x,y)/∂x | partial derivative | derivative with respect to one variable | ∂(x²+y²)/∂x = 2x |
| ∭ | triple integral | integration of a function of 3 variables | |
| ∬ | double integral | integration of a function of 2 variables | |
| ∯ | closed surface integral | | |
| ∮ | closed contour / line integral | | |
| [a,b] | closed interval | [a,b] = {x \| a ≤ x ≤ b} | |
| ∰ | closed volume integral | | |
| (a,b) | open interval | (a,b) = {x \| a < x < b} | |
| z* | complex conjugate | z = a+bi → z* = a−bi | z = 3+2i → z* = 3−2i |
| i | imaginary unit | i ≡ √−1 | z = 3 + 2i |
| ∇ | nabla / del | gradient / divergence operator | ∇f(x,y,z) |
| z̄ | complex conjugate | z = a+bi → z̄ = a−bi | z = 3+2i → z̄ = 3−2i |
| x⃗ | vector | V⃗ = x î + y ĵ + z k̂ | |
| x * y | convolution | y(t) = x(t) * h(t) | |
| ∞ | lemniscate | infinity symbol | |
| δ | delta function | | |

Combinatorics Symbols in Mathematics

Combinatorics is a stream of mathematics that concerns the study of combinations of finite discrete structures.
Some of the most important symbols are:

Greek Alphabet Letters Used in Maths

| Upper Case | Lower Case | Greek Letter Name | English Equivalent | Pronunciation |
|---|---|---|---|---|
| Β | β | Beta | b | be-ta |
| Α | α | Alpha | a | al-fa |
| Δ | δ | Delta | d | del-ta |
| Γ | γ | Gamma | g | ga-ma |
| Ζ | ζ | Zeta | z | ze-ta |
| Ε | ε | Epsilon | e | ep-si-lon |
| Θ | θ | Theta | th | te-ta |
| Η | η | Eta | h | eh-ta |
| Κ | κ | Kappa | k | ka-pa |
| Ι | ι | Iota | i | io-ta |
| Μ | μ | Mu | m | m-yoo |
| Λ | λ | Lambda | l | lam-da |
| Ξ | ξ | Xi | x | x-ee |
| Ν | ν | Nu | n | noo |
| Ο | ο | Omicron | o | o-mee-c-ron |
| Π | π | Pi | p | pa-yee |
| Σ | σ | Sigma | s | sig-ma |
| Ρ | ρ | Rho | r | row |
| Υ | υ | Upsilon | u | oo-psi-lon |
| Τ | τ | Tau | t | ta-oo |
| Χ | χ | Chi | ch | kh-ee |
| Φ | φ | Phi | ph | f-ee |
| Ω | ω | Omega | o | o-me-ga |
| Ψ | ψ | Psi | ps | p-see |

Common Numeral Symbols

| Name | European | Roman | Hindu-Arabic | Hebrew |
|---|---|---|---|---|
| zero | 0 | | 0 | |
| one | 1 | I | ١ | א |
| two | 2 | II | ٢ | ב |
| three | 3 | III | ٣ | ג |
| four | 4 | IV | ٤ | ד |
| five | 5 | V | ٥ | ה |
| six | 6 | VI | ٦ | ו |
| seven | 7 | VII | ٧ | ז |
| eight | 8 | VIII | ٨ | ח |
| nine | 9 | IX | ٩ | ט |
| ten | 10 | X | ١٠ | י |
| eleven | 11 | XI | ١١ | יא |
| twelve | 12 | XII | ١٢ | יב |
| thirteen | 13 | XIII | ١٣ | יג |
| fourteen | 14 | XIV | ١٤ | יד |
| fifteen | 15 | XV | ١٥ | טו |
| sixteen | 16 | XVI | ١٦ | טז |
| seventeen | 17 | XVII | ١٧ | יז |
| eighteen | 18 | XVIII | ١٨ | יח |
| nineteen | 19 | XIX | ١٩ | יט |
| twenty | 20 | XX | ٢٠ | כ |
| thirty | 30 | XXX | ٣٠ | ל |
| forty | 40 | XL | ٤٠ | מ |
| fifty | 50 | L | ٥٠ | נ |
| sixty | 60 | LX | ٦٠ | ס |
| seventy | 70 | LXX | ٧٠ | ע |
| eighty | 80 | LXXX | ٨٠ | פ |
| ninety | 90 | XC | ٩٠ | צ |
| one hundred | 100 | C | ١٠٠ | ק |

These were some of the most important and commonly used symbols in mathematics. It is important to get completely acquainted with all the maths symbols to be able to solve maths problems efficiently. It should be noted that without knowing the maths symbols, it is extremely difficult to grasp certain concepts on a universal scale. Some of the key reasons why maths symbols are important are summarized below.

Importance of Maths Symbols

- Helps in denoting quantities
- Establishes relationships between quantities
- Helps to identify the type of operation
- Makes reference easier
- Maths symbols are universal and break the language barrier
$$ \lim_{(x,y)\to (0,0)} \frac {x^3y^2}{x^4+y^6} $$ Does this limit exist? I've tried substituting $y=x^{1/2}$ and $y=x^{2/3}$, both of which go to 0.

Let $A>0$ be a parameter. Consider the curve $C_A$ determined by the equation $$ A=x^4+y^6. $$ This curve is not at all circular - it becomes quite oblong when $A\to0$. Anyway, let's use these instead of circles, as polar coordinates don't seem to lead to a convincing argument. Because both exponents are even, on the curve $C_A$ we have $|x|\le A^{1/4}$ and $|y|\le A^{1/6}$. Therefore $$ |x^3y^2|\le A^{3/4+1/3}=A\cdot A^{1/12} $$ for all points $(x,y)\in C_A$. This means that $$ |f(x,y)|=\frac{|x^3y^2|}{x^4+y^6}\le A^{1/12} $$ for all the points $(x,y)\in C_A$. This is all well, because $A^{1/12}\to0$ when $A\to0$. The interiors of the curves $C_A$ actually form a basis of neighborhoods of the origin, so we could already conclude that the limit exists and is equal to zero. To make this more precise, consider a disk $B((0,0);r)$ of radius $r>0$ around the origin. Inside that disk we have $x^4\le r^4$ and $y^6\le r^6$, so we see that $B((0,0);r)$ is contained in the union of the curves $C_A$ with $A$ ranging over the interval $0\le A\le r^4+r^6$. This means that for a point $(x,y)\in B((0,0);r)$ we have the estimate $$ |f(x,y)|\le (r^4+r^6)^{1/12}. $$ Because $$ \lim_{r\to0+}(r^4+r^6)^{1/12}=0 $$ the sandwich principle then implies that $$ \lim_{(x,y)\to(0,0)}f(x,y)=0. $$

We have $$ \frac{|x|^3y^2}{ x^4 + y^6 } \leq c \sqrt{ |y|}.$$ Indeed, it suffices to show that $$ |x|^3y^2\leq c \sqrt{ |y|} (x^4 + y^6), $$ which follows by AM-GM applied to $$ x^4/3 +x^4/3 +x^4/3+y^6 \geq C|x|^3\sqrt{|y|^3}. $$
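As a numerical sanity check (not a proof) of the bound $|f(x,y)|\le A^{1/12}$ with $A=x^4+y^6$ derived above, one can sample several approach paths to the origin, including the two from the question:

```python
def f(x, y):
    return x**3 * y**2 / (x**4 + y**6)

# paths: y = x, y = x^(1/2), y = x^(2/3), and (x, y) = (t^3, t^2);
# along each, |f| stays below A^(1/12) and shrinks toward 0
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    for x, y in [(t, t), (t, t**0.5), (t, t**(2.0/3)), (t**3, t**2)]:
        A = x**4 + y**6
        assert abs(f(x, y)) <= A**(1.0/12) + 1e-12
```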
Solve the following PDE, $$ u_t(x,t)=ku_{xx}(x,t)-bu(x,t)$$ where $b>0$, with boundary conditions $$u(0,t)=u(c,t)=0. $$

My attempt: Assume $u(x,t)=X(x)T(t)$ and plug into the differential equation: $$X(x)T'(t)=kX''(x)T(t)-bX(x)T(t)$$ $$\dfrac{T'(t)}{T(t)}=k\dfrac{X''(x)}{X(x)}-b=-\lambda$$ Then solving for $X(x)$ gives the ODE $$kX''(x)+(\lambda-b)X(x)=0$$ with boundary conditions $$X(0)=X(c)=0.$$ How to go on from here?

Edit: Setting up the characteristic equation, $$r^2+\dfrac{\lambda-b}{k}=0,$$ gives (assuming $\lambda>b$) $$r=\pm i\mkern1mu\sqrt{\dfrac{\lambda-b}{k}},$$ which gives the general solution $$X(x)= c_1\cos\bigg(\sqrt{\dfrac{\lambda-b}{k}}x\bigg)+c_2\sin\bigg(\sqrt{\dfrac{\lambda-b}{k}}x\bigg).$$ And from here I don't know how to proceed.
Let $G$ be a finite group with $|G|>2$. Prove that Aut($G$) contains at least two elements. We know that Aut($G$) contains the identity function $f: G \to G: x \mapsto x$. If $G$ is non-abelian, look at $\varphi_g : G \to G: x \mapsto gxg^{-1}$ for some non-central $g$ (which exists since $G$ is non-abelian). This is an inner automorphism unequal to the identity function, so we have at least two elements in Aut($G$). Now assume $G$ is abelian. Then the only inner automorphism is the identity function. Now look at the mapping $\varphi: G \to G : x \mapsto x^{-1}$. This is a homomorphism because $\varphi (xy) = (xy)^{-1} = y^{-1} x^{-1} = x^{-1} y^{-1} = \varphi (x) \varphi (y)$. Here we use the fact that $G$ is abelian. This mapping is clearly bijective, and thus an automorphism. This automorphism is unequal to the identity function only if there exists an element $x \in G$ such that $x \neq x^{-1}$. In other words, there must be an element of order greater than $2$. Now assume $G$ is abelian and every non-identity element has order $2$. By Cauchy's theorem we know that the group must have order $2^n$. I got stuck at this point. I've looked at this other post, $|G|>2$ implies $G$ has non trivial automorphism, but I don't know what they do in the last part (when they start talking about vector spaces). How should this proof be finished, without resorting to vector spaces if possible? Thanks in advance
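For intuition in the remaining case (abelian, every element of order 2), the smallest example is the Klein four-group, where a brute-force search already finds plenty of non-trivial automorphisms. This is only an illustration, not a proof; the group is encoded by bitwise XOR:

```python
from itertools import permutations

# Klein four-group: elements 0..3, operation = bitwise XOR
elems = [0, 1, 2, 3]
op = lambda a, b: a ^ b

autos = []
for perm in permutations(elems):
    if perm[0] != 0:          # an automorphism must fix the identity
        continue
    f = dict(zip(elems, perm))
    if all(f[op(a, b)] == op(f[a], f[b]) for a in elems for b in elems):
        autos.append(f)

print(len(autos))  # 6: any permutation of the three involutions works
```

Here Aut(G) permutes the three non-identity elements freely, which is exactly the vector-space picture the linked post uses (Aut = GL over the field with two elements).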
We will establish the identity by counting the elements of the same set in two different ways. Begin by considering the trinomial coefficient for $(m, n-m, k)$; i.e., the coefficient of the $a^m b^{n-m} c^k$ term in the expansion of $(a+b+c)^{n+k}$. This coefficient can be calculated by counting the number of corresponding terms in the full expansion of $(a+b+c)^{n+k}$. The number of combinations of positions for the $m$ $a$ factors is ${n+k \choose m}$. For each of these, there are ${n-m+k \choose k}$ ways to arrange the $b$ and $c$ factors, for a total of ${n+k \choose m}{n-m+k \choose k}$. Also, note that in each of these terms, there are $k+1$ stretches of only $a$ and $b$ factors (possibly empty); i.e., before the first $c$ factor, between each pair of subsequent $c$ factors, and after the last $c$ factor. Each combination of (positioning of the $k$ $c$ factors, partition of the $m$ $a$ factors into $m_i$ $a$ factors in the $i$th stretch) corresponds to $\prod_{i=0}^{k} {n_i \choose m_i}$ terms in the full expansion, where the $i$th stretch of only $a$ and $b$ factors has $m_i$ $a$'s and $(n_i - m_i)$ $b$'s; hence $\sum_{i=0}^{k} m_i = m$ and $\sum_{i=0}^{k} n_i = n$. (Note that $n_i$, the length of the $i$th only-$a$-and-$b$ stretch, is determined by the positioning of the $c$'s.) The total coefficient is obtained by summing the contributions over all such combinations of positionings of the $k$ $c$ factors and partitions of the $m$ $a$ factors, i.e. $\sum \left[\prod_{i=0}^{k} {n_i \choose m_i}\right]$, where $\sum_{i=0}^{k} m_i = m$ and $\sum_{i=0}^{k} n_i = n$. Now, we wish to count the elements of a subset of the terms described above: only those terms where there are an even number of $b$ factors before the first $c$ factor, between each pair of subsequent $c$ factors, and after the last $c$ factor. (As we are given that $n \equiv m \pmod 2$, $n-m$ is even, so we know this is possible.)
Again, the number of combinations of positions for the $m$ $a$ factors is ${n+k \choose m}$. For the remaining factors, consider $bb$ as a unit, as we know the $b$'s must occur in pairs among the $c$'s. So the number of eligible arrangements of $b$'s and $c$'s is ${\frac{n - m}{2} + k \choose k}$, for a total count of ${n+k \choose m}{\frac{n - m}{2} + k \choose k}$. Counting the other way, each eligible combination of (positioning of the $k$ $c$ factors, partition of the $m$ $a$ factors into $m_i$ $a$ factors in the $i$th stretch) again corresponds to $\prod_{i=0}^{k} {n_i \choose m_i}$ terms. To be eligible, $(n_i - m_i)$ must be even for all $i$, which is equivalent to saying $n_i \equiv m_i \pmod 2$ for all $i$. So, the count is $$\sum_{\forall i,\; m_i \equiv n_i \pmod 2} \left[\prod_{i=0}^{k} {n_i \choose m_i}\right]$$ A similar argument shows that given non-negative integers $k$, $m$, $n$, and $q$ where $n \geq m$, $q > 0$, and $n \equiv m \pmod q$, $${n + k \choose m} {\frac{n - m}{q} + k \choose k} = \sum_{\forall i,\; m_i \equiv n_i \pmod q} \left[\prod_{i=0}^{k} {n_i \choose m_i}\right]$$
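The identity (including the generalized mod-$q$ form) is easy to spot-check numerically by enumerating all compositions $n_0+\dots+n_k=n$ and $m_0+\dots+m_k=m$ with the congruence constraint. This sketch is my own verification, not part of the proof:

```python
from math import comb

def compositions(s, parts):
    """All tuples of `parts` non-negative integers summing to s."""
    if parts == 1:
        yield (s,)
        return
    for first in range(s + 1):
        for rest in compositions(s - first, parts - 1):
            yield (first,) + rest

def lhs(n, m, k, q):
    return comb(n + k, m) * comb((n - m) // q + k, k)

def rhs(n, m, k, q):
    total = 0
    for ns in compositions(n, k + 1):
        for ms in compositions(m, k + 1):
            if all(ni >= mi and (ni - mi) % q == 0
                   for ni, mi in zip(ns, ms)):
                p = 1
                for ni, mi in zip(ns, ms):
                    p *= comb(ni, mi)
                total += p
    return total

# requires n >= m, q > 0, n ≡ m (mod q)
for n, m, k, q in [(4, 2, 1, 2), (5, 1, 2, 2), (6, 0, 2, 3), (5, 2, 1, 3)]:
    assert lhs(n, m, k, q) == rhs(n, m, k, q)
```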
Definition:Standard Representation of Simple Function Definition Let $\left({X, \Sigma}\right)$ be a measurable space. Let $f: X \to \R$ be a simple function. A standard representation of $f$ consists of: a finite sequence $a_1, \ldots, a_n$ of real numbers a partition $E_0, E_1, \ldots, E_n$ of $\Sigma$-measurable sets subject to: $f = \displaystyle \sum_{j \mathop = 0}^n a_j \chi_{E_j}$ where $a_0 := 0$, and $\chi$ denotes characteristic function.
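For a simple function on a finite set, a standard representation can be computed directly: take the distinct non-zero values as the $a_j$ and their preimages as the $E_j$, with $E_0 = f^{-1}(0)$ and $a_0 = 0$. A small sketch (the function name is my own):

```python
def standard_representation(f_values):
    """f_values: dict mapping points of a finite X to values of a simple f.
    Returns (a, E) with a[0] = 0 and E[j] = f^{-1}(a[j]), a partition of X."""
    levels = sorted(set(f_values.values()) - {0})
    a = [0] + levels
    E = [{x for x, v in f_values.items() if v == aj} for aj in a]
    return a, E

f = {1: 0.0, 2: 2.5, 3: 2.5, 4: 7.0}
a, E = standard_representation(f)
# f = sum_j a_j * chi_{E_j} holds pointwise
assert all(sum(aj for aj, Ej in zip(a, E) if x in Ej) == f[x] for x in f)
```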
Now showing items 1-10 of 27

Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02). The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02). Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08). The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03). The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03). Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07). The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03). The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09). The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12). We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05). Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Here are some crazy ideas, with absolutely no guarantee that they'll find a good solution -- but ideas you could try.

One possible heuristic approach might be to use quadratic programming combined with randomized rounding and some more heuristics. In particular, introduce variables $x_i$ (for $i=1,2,\dots,120$); the idea is that $x_i=1$ means that $i$ is one of the 12 items you select, and $x_i=0$ means that it is not. Now we have linear constraints $0 \le x_i \le 1$ and $x_1+x_2+\dots+x_{120} = 12$. We could fix some constant $U$, introduce the constraint $\sum_i C_i x_i \le U$, and maximize the quadratic function $(\sum_i A_i x_i) \times (\sum_i B_i x_i)$. Now the values of $x_i$ you get back won't necessarily be 0 or 1 (they'll probably be fractions), but you could use randomized rounding to convert them to a 0/1-solution. Now sweep over a range of possible values of $U$ (increasing each time by a multiplicative factor of, say, 1.1 times the previous value), and keep the best solution you find. This might work, or it might not, but it's something you could try.

You could also try formulating this as a semidefinite programming problem or quadratic programming with quadratic constraints (with each $x_i$ representing whether you take item $i$ or not), then apply randomized rounding to account for the fact that the $x_i$'s should all be 0 or 1. In particular, if you work on the decision problem (does there exist a solution where the objective function exceeds $r$?), then you can phrase everything using quadratic constraints -- the requirement that $(\sum_i A_i x_i) (\sum_i B_i x_i)/(\sum_i C_i x_i) \ge r$ is equivalent to $(\sum_i A_i x_i) (\sum_i B_i x_i) - r (\sum_i C_i x_i) \ge 0$, and the left-hand side is a quadratic function in the $x_i$'s. You might be able to express this as a semidefinite programming problem.

Another heuristic might be to try splitting this into sub-problems and then trying to merge, using a divide-and-conquer style approach.
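The randomized-rounding step can be sketched as follows: given a fractional solution, sample exactly 12 items without replacement, with each item's selection probability proportional to its fractional value. This is one of several reasonable rounding schemes, not the only one:

```python
import random

def randomized_round(x_frac, pick=12):
    """Round a fractional solution (entries in [0,1] summing to roughly
    `pick`) to a 0/1 vector with exactly `pick` ones, choosing items with
    probability proportional to their fractional values."""
    n = len(x_frac)
    chosen = set()
    for _ in range(pick):
        remaining = [i for i in range(n) if i not in chosen]
        total = sum(x_frac[i] for i in remaining)
        r = random.random() * total
        picked = remaining[-1]        # fallback against float round-off
        for i in remaining:
            r -= x_frac[i]
            if r <= 0:
                picked = i
                break
        chosen.add(picked)
    return [1 if i in chosen else 0 for i in range(n)]
```

After rounding, one would evaluate the true objective $(\sum A_i)(\sum B_i)/(\sum C_i)$ on the resulting 12-subset, repeat several times, and keep the best.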
Randomly partition the 120 items into two sets of 60, namely sets $L,R$ such that $|L|=|R|=60$ and $L\cup R=\{1,2,\dots,120\}$. Enumerate all $60 \choose 6$ ways to choose 6 items from $L$, and compute the value of $(\sum A_i) (\sum B_i)/(\sum C_i)$ for each such subset of 6 items. This gives you a list of ${60 \choose 6} \approx 2^{26}$ different 6-subsets, and an objective value for each. Sort that list by decreasing value. Do the same for $R$. Now try to combine these two lists. For instance, you could throw away all but the top 5% of each of the two lists, then try all ways of pairing up one subset of 6 items from the first list with one subset of 6 items from the second list; compute the value of your objective function for each such subset of 12 items, and keep track of the best you've seen so far. Repeat many times with many different ways of partitioning the 120 items into two sets of 60. Again, I have no clue whether it will work well or not. There's no reason to believe it will necessarily give the best solution.
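The split-and-merge idea above can be sketched like this (a toy implementation, scaled-down parameters; no optimality guarantee, exactly as the text warns):

```python
import random
from itertools import combinations

def objective(subset):
    # (sum A)(sum B)/(sum C) for a collection of (A, B, C) triples
    sa = sum(a for a, _, _ in subset)
    sb = sum(b for _, b, _ in subset)
    sc = sum(c for _, _, c in subset)
    return sa * sb / sc

def split_and_merge(items, pick, keep_frac=0.05, rounds=10, seed=0):
    """Split the items in half, rank the (pick/2)-subsets of each half by
    the objective, then pair up the top fraction of each list."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    half_pick = pick // 2
    for _ in range(rounds):
        shuffled = items[:]
        rng.shuffle(shuffled)
        mid = len(shuffled) // 2
        tops = []
        for side in (shuffled[:mid], shuffled[mid:]):
            subs = sorted(combinations(side, half_pick),
                          key=objective, reverse=True)
            tops.append(subs[:max(1, int(len(subs) * keep_frac))])
        for l in tops[0]:
            for r in tops[1]:
                val = objective(l + r)
                if val > best_val:
                    best, best_val = l + r, val
    return best, best_val
```

For the full problem the per-half enumeration is the $\binom{60}{6}$ step described above, so each round is expensive but feasible; the toy sizes here just make the idea easy to test.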
(For easier discussion, I suggest you to read the introduction of Schroder's equation and the section on 'Conjugacy' of iterated function, in case you are not familiar with these topics.) Let $f(x)=\frac12\arctan x$, and $f_n(x)$ be the $n$th iteration of $f$. Let us reduce functional iteration to multiplication: if we can solve the corresponding Schroder's equation$$\Psi(f(x))=s\Psi(x)$$ then it is well known (and also straightforward) that $$f_n(x)=\Psi^{-1}(f'(a)^n\cdot\Psi(x))$$ where $a$ is a fixed point of $f$. For the moment, let us focus on $\Psi(f(x))=s\Psi(x)$. Clearly, in our case, $a=0$, and $s=f'(a)=\frac12$. For $a = 0$, if $h$ is analytic on the unit disk, fixes $0$, and $0 < |h′(0)| < 1$, then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) $\Psi$ satisfying Schröder's equation $\Psi(h(x))=s\Psi(x)$. Thus, $\Psi$ is analytic. A few more observations: $\Psi(0)=0$. $\Psi'(0)$ is up to our choice, since if a function $\psi$ is a solution to the Schroder's equation, then so is $k\cdot \psi$ for any constant $k$. For convenience, set $\Psi'(0)=1$. All other Taylor series coefficients of $\Psi$ are then uniquely determined, and can be found recursively. (The method will be illustrated below.) By Lagrange inversion theorem, $\Psi$ is invertible in a neighbourhood of $0$, and $\Psi^{-1}(z)=0+\frac1{\Psi'(0)}z+o(z)\implies \Psi^{-1}(z)\sim z\quad(z\to 0)$. Therefore, $f_n(x)=\Psi^{-1}(f'(a)^n\cdot\Psi(x))=\Psi^{-1}(2^{-n}\Psi(x))\sim 2^{-n}\Psi(x)$ as $n\to\infty$. Hence, for the limit the OP wanted to evaluate, $$\ell:=\lim_{n\to\infty}2^nf_n(x_0)=\Psi(x_0)$$ We shall now determine all the Taylor series coefficients of $\Psi(x)$ (valid only for $|x|<1$), since it can be assumed $0\le x_0<1$. Obviously, $\Psi$ is an odd function. Let$$\Psi(x)=x+\sum^\infty_{k=2}\phi_{2k-1} x^{2k-1}$$ The basic idea is to repeatedly differentiate both sides of $\Psi(f(x))=s\Psi(x)$ and substitute in $x=0$, then recursively solve for the coefficients. 
For example, differentiating both sides three times and substitute in $x=0$, we obtain$$-\Psi'(0)+\frac18\Psi'''(0)=\frac12\Psi'''(0)\implies\phi_3=-\frac49$$ Slightly modifying the notations of our respectable MSE user @Sangchul Lee, for $\lambda=(\lambda_1,\lambda_2,\cdots,\lambda_n)$ a $n$-tuple of non-negative integers: write $\lambda \vdash n$ if $\sum^n_{i=1}(2i-1)\lambda_i=2n-1$. write $|\lambda| = \sum_{i=1}^{n} \lambda_i$. define the tuple factorial as $\lambda !=\frac{|\lambda|!}{\lambda_1!\cdot\lambda_2!\cdots\lambda_n !}$. I will state, without proof, Faà di Bruno's formula for odd inner function: $$(\Psi\circ f)^{(2n-1)}=(2n-1)!\sum_{\lambda \vdash n}\lambda!\cdot\phi_{|\lambda|}\prod^n_{i=1}\left(\frac{f^{(2i-1)}(0)}{(2i-1)!}\right)^{\lambda_i}$$ $$\implies \frac12\phi_{2n-1}=\sum_{\lambda \vdash n}\lambda!\cdot\phi_{|\lambda|}\prod^n_{i=1}\left(\frac{(-1)^{i+1}}{2(2i-1)}\right)^{\lambda_i}$$ Further simplifications lead to the final result: $$\ell=\Psi(x_0)=\sum^\infty_{k=1}\phi_{2k-1} x_0^{2k-1} \qquad{\text{where}}\qquad \phi_1=1$$ $$\phi_{2n-1}=\frac{(-1)^{n}}{2^{-1}-2^{1-2n}}\sum_{\substack{\lambda \vdash n \\ \lambda_1\ne 2n-1}}\phi_{|\lambda|}\frac{\lambda! (-1)^{(|\lambda|+1)/2}}{2^{|\lambda|}}\prod^n_{i=1}\frac1{(2i-1)^{\lambda_i}}$$ Yeah, I know it’s ugly. But that’s the best we can obtain. If anyone have a nice math software, please help me calculate the first few Taylor coefficients.
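Since the recursion is painful by hand, here is a short script (plain Python with exact rational arithmetic, no special software needed) that solves $\Psi(f(x)) = \frac12\Psi(x)$ order by order; it reproduces $\phi_3 = -\frac49$ from the worked example above:

```python
from fractions import Fraction

N = 9  # keep odd powers through x^N

def mul(a, b):
    """Truncated product of two series given as {power: coefficient} dicts."""
    c = {}
    for p, x in a.items():
        for q, y in b.items():
            if p + q <= N:
                c[p + q] = c.get(p + q, Fraction(0)) + x * y
    return c

# f(x) = (1/2) arctan x = sum_{i>=0} (-1)^i x^{2i+1} / (2(2i+1))
f = {2*i + 1: Fraction((-1)**i, 2*(2*i + 1)) for i in range((N + 1) // 2)}

# powers f^k for odd k, via f^(k) = f^(k-2) * f^2
fpow = {1: f}
f2 = mul(f, f)
for k in range(3, N + 1, 2):
    fpow[k] = mul(fpow[k - 2], f2)

phi = {1: Fraction(1)}            # normalization Psi'(0) = 1
for m in range(3, N + 1, 2):
    # [x^m] Psi(f(x)) = sum_k phi_k [x^m] f^k  must equal  phi_m / 2
    known = sum(phi[k] * fpow[k].get(m, Fraction(0)) for k in phi)
    lead = fpow[m][m]             # = 2^{-m}
    phi[m] = known / (Fraction(1, 2) - lead)

print(phi)  # phi_3 = -4/9, phi_5 = 224/675, ...
```

The partial sums $\sum_k \phi_{2k-1} x_0^{2k-1}$ then approximate $\ell = \Psi(x_0)$ for $|x_0| < 1$ (convergence of the truncated series near the boundary is an assumption, not checked here).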
Hello, I don't know if this is a good place for exposing my problem but I'll try... I have a gauge theory with action: $S=\int\;dt L=\int d^4 x \;\epsilon^{\mu\nu\rho\sigma} B_{\mu\nu\;IJ} F_{\mu\nu}^{\;\;IJ} $ Where $B$ is an antisymmetric tensor of rank two and $F$ is the curvature of a connection $A$ i.e: $F=dA+A\wedge A$, $\mu,\nu...$ are space-time indices and $I,J...$ are Lie Algebra indices (internal indices) I would like to find its symmetries. So I rewrite the Lagrangian by splitting time and space indices $\{\mu,\nu...=0..3\}\equiv \{O; i,j,...=1..3\}$ I find: $L = \int d^3 x\;(P^i_{\;IJ}\dot{A}_i+B_i^{\,IJ}\Pi^i_{\,IJ}+A_0^{\;IJ}\Pi_{IJ})$ Where $\dot{A}_i = \partial_0 A_i$, $P^i_{\;IJ} = 2\epsilon^{ijk}B_{jk\,IJ}$ is hence the conjugate momentum of $A_i^{\,IJ}$ $B_i^{\,IJ}$ and $A_0^{\;IJ}$ being Lagrange multipliers we obtain respectively two primary and two secondary constraints: $\Phi_{IJ} = P^0_{\;IJ} \approx0$ $\Phi_{\;\;IJ}^{\mu\nu} = P^{\mu\nu}_{\;\;IJ} \approx0$ $\Pi^i_{\,IJ} = 2\epsilon^{ijk}F_{jk\,IJ} \approx0$ $\Pi_{IJ}=(D_i P^i)_{IJ} \approx0$ Where $P^0_{\;IJ}$ are the conjugate momentums of $A_0^{\,IJ}$ and $P^{\mu\nu}_{\;\;IJ}$ those of $B_{\mu\nu}^{\;\;IJ}$. Making these constraints constant in time produces no further constraints. Whiche gives us a general constraint: $\Phi = \int d^3 x \;(\epsilon^{IJ}P^0_{\,IJ}+\epsilon_{\mu\nu}^{IJ}\;P^{\mu\nu}_{\;\;IJ}+\eta^{IJ}\Pi_{IJ}+\eta_i^{IJ}\Pi^i_{\;IJ})$ Each quantity $F$ have thus a Gauge transformation $\delta F = \{F,\Phi\}$ where $\{...\}$ denotes the Poisson bracket. Knowing that this theory have the following Gauge symmetry: $\delta A = D\omega$ $\delta B = [B,\omega]$ Where $\omega$ is a 0-form, I would like to retrieve these transformations using the relation below. (where $\Phi$ is considered as the generator of the Gauge symmetry) but my problem is that I don't know how to proceed, I already did this with a Yang-Mills theory and it worked... 
but for this theory it seems to be intractable! Could someone guide me?
The existence of the exponential on $\mathbb{C}$ has a very basic, yet very strong consequence: $(\mathbb{C}^*,\cdot)$ is a quotient of $(\mathbb{C},+)$. This question is concerned with fields $K$ such that $K^*$ is a quotient of $K$; that is, with the existence of a surjective group morphism $K\to K^*$. I will refer to such morphisms as "exponentials" on $K$. (I know that the notion of exponential fields exists, but I only consider the surjective case.) The existence of such a map is not benign: since the additive group $K$ is $q$-divisible for all $q\neq \operatorname{char}(K)$, it implies that $K^*$ is also $q$-divisible. In characteristic $0$, this actually implies that $K$ is algebraically closed (I suspect that in characteristic $p$ this implies that $K$ is separably closed, but I'll focus on characteristic zero for now). EDIT: That was false, but it doesn't really matter, since Eric's answer gives a construction when $K^*$ is divisible. Question 1: It's a basic fact that algebraically closed fields of characteristic zero with the same transcendence degree over $\mathbb{Q}$ are isomorphic. So $\mathbb{C}_p \simeq \mathbb{C}$ for all $p$, and thus $\mathbb{C}_p$ admits (at least) one exponential map. On the other hand, isomorphisms $\mathbb{C}\to \mathbb{C}_p$ are highly non-constructible objects, so this does not give us any clue about what an exponential on $\mathbb{C}_p$ may look like. Can such an exponential map $\mathbb{C}_p\to \mathbb{C}_p^*$ be explicitly defined? In particular, does it exist without the axiom of choice (or with only a weak version)? Question 2: If we put the axiom of choice back on: for any countable cardinal $\kappa$, do algebraically closed fields of transcendence degree $\kappa$ over $\mathbb{Q}$ admit a surjective exponential? (In particular, what about $\overline{\mathbb{Q}}$?) 
We already know from $\mathbb{C}$ that this is true for $\kappa = 2^{\aleph_0}$. Since "algebraically closed field of characteristic zero with a surjective exponential" may be countably axiomatized in first-order logic, Löwenheim-Skolem implies that there are models of every infinite cardinality, so in particular question 2 has a positive answer for all uncountable $\kappa$, and for some countable $\kappa$. But since all countable $\kappa$ give models of the same cardinality $\aleph_0$, we cannot use that remark to answer the question.
Some explanations first The substitution in the question introduces the reduced wave function $u(r)$ by solving the original radial equation in polar coordinates, $$-\frac{1}{2}\left(R''(r)+\frac {1}{r}R'(r)\right) - \frac{1}{r}R(r) + \frac {m^2}{2r^2}R(r) = E R(r)$$ using the ansatz $$R(r)\equiv \frac{1}{\sqrt{r}}u(r)$$ The apparently divergent prefactor is cancelled by the fact that $u(r)$ is then found to go to zero at $r\to 0$ with a power larger than, or equal to, $\frac12$. The equal sign occurs exactly for the case $m=0$, as one can prove analytically. This is essential because the new radial equation for the reduced wave function still contains a strongly divergent term even at $m=0$. The equation reads $$-\frac{1}{2}u''(r)- \frac{1}{r}u(r) + \frac {m^2-\frac14}{2r^2}u(r) = E u(r)$$ The reason this form is often used is that it is in the form of a Sturm-Liouville equation (with no first derivative), and the analytic solution for $m=0$ yields $u(r)\propto \sqrt{r}$ as $r\to 0$, which means that the original wave function approaches a finite limit. It doesn't go to zero, so Dirichlet boundary conditions don't apply to $R(r)$ at the origin. The problem with the substitution in the question is that it introduces an effective potential that diverges at $r\to 0$ even when the angular momentum is zero. This is difficult to treat numerically, and you can only get marginally closer to the correct value by decreasing MaxCellMeasure. A similar centrifugal term in the effective potential for $m>0$ is less problematic numerically, because then it enters with a larger prefactor. Since the centrifugal term always leads to a suppression of the wave amplitude near the origin independently of the boundary conditions, the limit of vanishing amplitude as $r\to 0$ is approached smoothly for nonzero $m$. But for $m=0$ the amplitude has to fall off with $\sqrt{r}$, and that means NDEigensystem has to deliver a result for $u(r)$ whose slope ideally diverges. 
This is why I think this formulation of the problem is not the right one for a numerical solution. Below, I therefore use the unmodified radial equation, denoting the radial wave function by $\psi(r)$ instead of $R(r)$. You'll understand why if you try to read $R(r)$ out loud. The Dirichlet boundary condition at $r=0$ that was needed for $u(r)$ is still correct for $R(r)$ at $m>0$ because these functions vanish as $r^m$ there. But the centrifugal potential is pretty much able to enforce this condition all by itself (see caveat at the end), so the main condition that leads to the quantization of the eigenvalues is the Dirichlet condition at $r\to\infty$. The remaining issue in the radial equation is that the $r\to\infty$ boundary condition then needs to be faked by choosing a large but finite $r$ at which you expect the wave function to have decayed to zero. This distance can in principle be estimated from the classical turning points of the Coulomb potential. However, in my approach you don't need to do that, because I transform $r$ to a different variable defined on a finite interval, so that the large-$r$ variations (which are slow) get compressed into that interval, and the boundary condition can be applied at the finite point to which $r\to\infty$ has been mapped. Suggested numerical approach The correct eigenvalues are: e[n_] := -(1/(2 (n - 1/2)^2)) N[Table[e[n], {n, 0, 10}]] (* ==> {-2., -2., -0.222222, -0.08, -0.0408163, -0.0246914, -0.0165289, -0.0118343, -0.00888889, -0.00692042, -0.00554017} *) To reproduce this numerically, I would choose the same substitution of variables that I proposed in the linked answer: $r=\tan(\xi)$. This leads to the following modification of the radial equation: Clear[f, r, ξ, ψ, radialξ]; radialEq = -(1/r) f[r] - 1/2 f''[r] - 1/(2 r) f'[r] + m^2/(2 r^2) f[r]; radialξ[m_] = Simplify[radialEq /. f -> (ψ[ArcTan[#]] &) /. 
r -> (Tan[ξ]), Pi/2 > ξ > 0] (* ==> 1/4 Cot[ξ] (2 (-2 + m^2 Cot[ξ]) ψ[ξ] - Cos[ξ]^2 (2 Cos[2 ξ] Derivative[1][ψ][ξ] + Sin[2 ξ] (ψ^′′)[ξ])) *) Now we can't impose a Dirichlet boundary condition at the origin when the angular momentum (called here m instead of l for clarity) vanishes. But I find that this causes no problem. I just leave a free boundary at the origin. The resulting eigenvalues are in very good agreement with expectation, up to roughly the tenth eigenvalue: With[{max = 20, shift = 10, m = 0}, {ev, ef} = NDEigensystem[{radialξ[m] + shift ψ[ξ], DirichletCondition[ψ[ξ] == 0, ξ == Pi/2]}, ψ[ξ], {ξ, 0, Pi/2}, max, Method -> {"SpatialDiscretization" -> {"FiniteElement", {"MeshOptions" -> {"MaxCellMeasure" -> 0.001}}}, "Eigensystem" -> {"Arnoldi", MaxIterations -> 40000}}]; evNew = ev - shift] (* ==> {-2., -0.222222, -0.08, -0.0408163, -0.0246914, -0.0165289, \ -0.0118343, -0.00888878, -0.00691999, -0.00553879, -0.0045312, \ -0.00377284, -0.00318483, -0.00267246, -0.00233131, -0.00196882, \ -0.00167769, -0.00141128, -0.000979967, -0.000749858} *) With[{n = 4, d = 10, amplitudes = {-1, 1, 1, 1}}, Plot[Evaluate[ Table[evNew[[i]] + amplitudes[[i]] (ef[[i]] /. 
ξ -> ArcTan[r]), {i, n}]], {r, 0, d}, PlotRange -> {{0, d}, {-5, 5}}, Epilog -> {Gray, Dashed, Table[Line[{{0, evNew[[i]]}, {d, evNew[[i]]}}], {i, n}]}]] With[{max = 20, shift = 10, m = 4}, {ev, ef} = NDEigensystem[{radialξ[m] + shift ψ[ξ], DirichletCondition[ψ[ξ] == 0, ξ == Pi/2]}, ψ[ξ], {ξ, 0, Pi/2}, max, Method -> {"SpatialDiscretization" -> {"FiniteElement", \ {"MeshOptions" -> {"MaxCellMeasure" -> 0.001}}}, "Eigensystem" -> {"Arnoldi", MaxIterations -> 40000}}]; evNew = ev - shift] (* ==> {-0.0246914, -0.0165289, -0.0118343, -0.00888886, \ -0.00692024, -0.00553945, -0.00453286, -0.00377563, -0.00318396, \ -0.00269907, -0.00235954, -0.00196281, -0.00168251, -0.0014004, \ -0.00102958, -0.000964247, -0.000478284, 0.00014322, 0.00186455, \ 0.0042127} *) With[{n = 4, d = 130, amplitudes = {-1, 1, 1, 1}/1000}, Plot[Evaluate[ Table[evNew[[i]] + amplitudes[[i]] (ef[[i]] /. ξ -> ArcTan[r]), {i, n}]], {r, 0, d}, PlotRange -> {{0, d}, All}, Epilog -> {Gray, Dashed, Table[Line[{{0, evNew[[i]]}, {d, evNew[[i]]}}], {i, n}]}]] The plots are for $m = 0$ (top) and $m = 4$ (bottom). The reason why I didn't have to specify a boundary condition for $r=0$ is that whenever $m>0$ there is a centrifugal barrier in the effective potential that suppresses the solution near $r=0$ anyway. However, this suppression is a cheat because it doesn't enforce exact zero wave function, only exponential suppression. So a slightly more accurate solution is obtained if you replace the DirichletCondition above by DirichletCondition[ψ[ξ] == 0, If[m == 0, ξ == Pi/2, True]]
Alles, B., D'Elia, M. and Di Giacomo, A. (2005) Analyticity in theta on the lattice and the large volume limit of the topological susceptibility. Physical Review D: Particles, fields, gravitation, and cosmology, 71, 034503. ISSN 1550-7998 Abstract: Non-analyticity of QCD with a $\theta$ term at $\theta=0$ may signal a spontaneous breaking of both parity and time-reversal invariance. We address this issue by investigating the large volume limit of the topological susceptibility $\chi$ in pure SU(3) gauge theory. We obtain an upper bound for the symmetry-breaking order parameter $\langle Q\rangle$ and, as a byproduct, the value $\chi=(173.4\,(\pm 0.5)(\pm 1.2)(^{+1.1}_{-0.2})\ \mathrm{MeV})^4$ at $\beta=6$ ($a \approx 0.1$ fm). The errors are, respectively, the statistical error from our data, the one derived from the value used for $\Lambda_L$, and an estimate of the systematic error. Item Type: Article URI: http://eprints.adm.unipi.it/id/eprint/1529
The Reynolds number, with $\rho$ the density, $u$ the velocity magnitude, $\mu$ the viscosity and $L$ some characteristic length scale (e.g. channel height or pipe diameter), is given by$$\text{Re}=\frac{\rho~u~L}{\mu}.$$This is a dimensionless measure of the ratio of inertial forces ($\rho u u$) to viscous forces ($\mu\frac{u}{L}$). It therefore signifies the relative importance of inertial forces to viscous forces. In the laminar regime, viscous forces are dominant (i.e. $\text{Re}\ll 1$), while in the turbulent regime, inertial forces are dominant (i.e. $\text{Re}\gg 1$). In the transition from laminar to turbulent flow, inertial forces start to overtake viscous forces, which simply means that viscosity can no longer smooth out velocity gradients into smooth laminar flow (except near a boundary, where it is still important), and the inertia of the flow causes it to 'trip' over itself, producing vortices and in general the chaotic behaviour associated with turbulence. The form of the Reynolds number follows from a dimensional analysis of the hydrodynamic equations which govern the flow (i.e. the Navier-Stokes equations). Let's assume a steady flow (i.e. $\partial_t\mathbf{u}=0$):$$\rho~\mathbf{u}\cdot\mathbf{\nabla}\mathbf{u}=-\mathbf{\nabla}p + \mu~\mathbf{\nabla}^2\mathbf{u}.$$ Non-dimensionalizing this by defining $\bar{x}=\frac{x}{L}$, $\bar{\mathbf{u}}=\frac{\mathbf{u}}{U}$ and $\bar{p}=\frac{p}{P}$, where $U$ and $P$ are characteristic velocity and pressure scales respectively, we get: $$\rho~\frac{U^2}{L}~\bar{\mathbf{u}}\cdot\bar{\mathbf{\nabla}}\bar{\mathbf{u}}=-\frac{P}{L}~\bar{\mathbf{\nabla}}\bar{p} + \mu \frac{U}{L^2}~\bar{\mathbf{\nabla}}^2\bar{\mathbf{u}}$$ We can simplify this by dividing through by $\mu\frac{U}{L^2}$ and defining $P=\mu\frac{U}{L}$ to get: $$\text{Re}~\bar{\mathbf{u}}\cdot\bar{\mathbf{\nabla}}\bar{\mathbf{u}}=-\bar{\mathbf{\nabla}}\bar{p} + \bar{\mathbf{\nabla}}^2\bar{\mathbf{u}}$$ which reveals the Reynolds number. 
For $\text{Re}\ll 1$, where viscosity dominates, we see that the convective term on the left becomes negligible compared to the pressure gradient and viscous stress tensor on the right. For $\text{Re}\gg 1$ we can do the same, except we then need to divide by $\rho\frac{U^2}{L}$ and define $P=\rho U^2$ to get: $$\bar{\mathbf{u}}\cdot\bar{\mathbf{\nabla}}\bar{\mathbf{u}}=-\bar{\mathbf{\nabla}}\bar{p} + \frac{1}{\text{Re}}\bar{\mathbf{\nabla}}^2\bar{\mathbf{u}}$$ Now the viscous stress tensor on the right becomes negligible compared to the pressure gradient and the convection term on the left. Note that the characteristic pressure scale $P$ was defined on a viscous or an inertial scale depending on which regime we were in. This is necessary because the dimensionless pressure gradient must be of the same order as at least one other term. Note also that real turbulence is inherently unsteady; my treatment above of the steady Navier-Stokes equations for the different regimes was meant to focus on the role of the Reynolds number while keeping things as short as possible.
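As a quick worked example of the definition (the fluid properties below are assumed round numbers for room-temperature water, not values from this answer):

```python
# Hypothetical worked example (values assumed, not from the answer above):
# Reynolds number for water at ~20 °C flowing at 1 m/s through a 5 cm pipe.
def reynolds(rho, u, L, mu):
    """Re = rho * u * L / mu, all quantities in SI units."""
    return rho * u * L / mu

Re = reynolds(rho=998.0, u=1.0, L=0.05, mu=1.002e-3)
print(f"Re = {Re:.0f}")  # Re >> 1, so inertial forces dominate here
```

With these numbers Re is of order $5\times 10^4$, firmly in the inertia-dominated regime of the non-dimensionalized equation above.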
What causes it and how does it occur? If you do post some mathematics, please explain what each term means too please. Quantum fluctuations are a popular buzzword for the statistical triviality that the variance (the spread of values) of a random variable $A$ (in the context of quantum physics, this could be the position of a particle or the amount of energy that it has) with zero mean is typically not zero - except that $A$ is now an operator. Some people, therefore, think that this deserves a much more mysterious name. Taken from the section ''Does the vacuum fluctuate?'' in Chapter A8: Virtual particles and vacuum fluctuations of A theoretical physics FAQ. Fluctuations about the mean are also simply called fluctuations; the variance (the second moment of the distribution) gives a notion of how reliable the mean value is. Any quantity that we are uncertain about will have that uncertainty encoded in a probability distribution. Quantum mechanics is no different in that respect than any other theory of inference; it is only different in that it claims that the uncertainty is intrinsic, whereas other theories of inference simply assume that the data is observable in principle but not in practice. We use the term 'quantum fluctuation' therefore to attach the idea of fluctuations to physical variables that we classically thought of as being exact and obtainable, such as position and momentum. A quick calculation in free scalar field theory gives an interesting example of 'quantum fluctuations': $$\langle \phi(x)\rangle_0=0$$ $$\operatorname{Var}(\phi)_0=\langle\phi(x)^2\rangle_0-\langle \phi(x)\rangle_0^2=\langle\phi(x)^2\rangle_0 =\int \frac{d^3k}{(2\pi)^3}\frac{1}{\sqrt{\vec k^2+m^2}}\rightarrow\infty$$ The average value of the field is vanishing, but when we ask to what extent this result can be trusted, it cannot be: our ignorance is infinite.
First let us fix the terminology. The space (1) is known in General Topology as the Golomb space. More precisely, the Golomb space $\mathbb G$ is the set $\mathbb N$ of positive integers, endowed with the topology generated by the base consisting of the arithmetic progressions $a+b\mathbb N_0$, where $a,b$ are relatively prime natural numbers and $\mathbb N_0=\{0\}\cup\mathbb N$. Let us call the space (2) the rational projective space and denote it by $\mathbb QP^\infty$. Both spaces $\mathbb G$ and $\mathbb QP^\infty$ are countable, connected and Hausdorff, but they are not homeomorphic. A topological property distinguishing these spaces will be called oo-regularity. Definition. A topological space $X$ is called oo-regular if for any non-empty disjoint open sets $U,V\subset X$ the subspace $X\setminus(\bar U\cap\bar V)$ of $X$ is regular. Theorem. 1. The rational projective space $\mathbb QP^\infty$ is oo-regular. 2. The Golomb space $\mathbb G$ is not oo-regular. Proof. The statement 1 is relatively easy, so it is left to the interested reader. The proof of 2. In the Golomb space $\mathbb G$ consider the two basic open sets $U=1+5\mathbb N_0$ and $V=2+5\mathbb N_0$. It can be shown that $\bar U=U\cup 5\mathbb N$ and $\bar V=V\cup 5\mathbb N$, so $\bar U\cap\bar V=5\mathbb N$. We claim that the subspace $X=\mathbb N\setminus (\bar U\cap\bar V)=\mathbb N\setminus 5\mathbb N$ of the Golomb space is not regular. Consider the point $x=1$ and its neighborhood $O_x=(1+4\mathbb N)\cap X$ in $X$. Assuming that $X$ is regular, we can find a neighborhood $U_x$ of $x$ in $X$ such that $\bar U_x\cap X\subset O_x$. We can assume that $U_x$ is of the basic form $U_x=1+4^i5^jb\mathbb N_0$ for some $i\ge 1$, $j\ge 1$ and $b\in\mathbb N\setminus(2\mathbb N_0\cup 5\mathbb N_0)$. Since the numbers $4$, $5^j$, and $b$ are relatively prime, by the Chinese Remainder Theorem the intersection $(1+5^j\mathbb N_0)\cap (2+4\mathbb N_0)\cap b\mathbb N_0$ contains some point $y$. It is clear that $y\in X\setminus O_x$. 
We claim that $y$ belongs to the closure of $U_x$ in $X$. We need to check that each basic neighborhood $O_y:=y+c\mathbb N_0$ of $y$ intersects the set $U_x$. Replacing $c$ by $5^jc$, we can assume that $c$ is divisible by $5^j$ and hence $c=5^jc'$ for some $c'\in\mathbb N_0$. Observe that $O_y\cap U_x=(y+c\mathbb N_0)\cap(1+4^i5^jb\mathbb N_0)\ne\emptyset$ if and only if $y-1\in 4^i5^jb\mathbb N_0-5^jc'\mathbb N_0=5^j(4^ib\mathbb N_0-c'\mathbb N_0)$. The choice of $y\in 1+5^j\mathbb N_0$ guarantees that $y-1=5^jy'$ for some $y'\in\mathbb N_0$. Since $y\in 2\mathbb N_0\cap b\mathbb N_0$ and $c$ is relatively prime with $y$, the number $c'=c/5^j$ is relatively prime with $4^ib$. So, by the Euclidean algorithm, there are numbers $u,v\in\mathbb N_0$ such that $y'=4^ibu-c'v$. Then $y-1=5^jy'=5^j(4^ibu-c'v)$ and hence $1+4^i5^jbu=y+5^jc'v\in (1+4^i5^jb\mathbb N_0)\cap(y+c\mathbb N_0)=U_x\cap O_y\ne\emptyset$. So, $y\in\bar U_x\setminus O_x$, which contradicts the choice of $U_x$. Remark. Another well-known example of a countable connected space is the Bing space $\mathbb B$. This is the rational half-plane $\mathbb B=\{(x,y)\in\mathbb Q\times \mathbb Q:y\ge 0\}$, endowed with the topology generated by the base consisting of the sets $$U_{\varepsilon}(a,b)=\{(a,b)\}\cup\{(x,0)\in\mathbb B:|x-(a-\sqrt{2}b)|<\varepsilon\}\cup\{(x,0)\in\mathbb B:|x-(a+\sqrt{2}b)|<\varepsilon\}$$ where $(a,b)\in\mathbb B$ and $\varepsilon>0$. It is easy to see that the Bing space $\mathbb B$ is not oo-regular, so it is not homeomorphic to the rational projective space $\mathbb QP^\infty$. Problem 1. Is the Bing space homeomorphic to the Golomb space? Remark. It is clear that the Bing space has many homeomorphisms distinct from the identity. So, the answer to Problem 1 would be negative if the answer to the following problem is affirmative. Problem 2. Is the Golomb space $\mathbb G$ topologically rigid? Problem 3. Is the Bing space topologically homogeneous? 
Since the last two problems are quite interesting, I will ask them as separate questions on MathOverflow. Added in an edit. Problem 1 has a negative solution. The Golomb space and the Bing space are not homeomorphic since: 1) For any non-empty open sets $U_1,\dots,U_n$ in the Golomb space (or in the rational projective space) the intersection $\bigcap_{i=1}^n\bar U_i$ is not empty. 2) The Bing space contains three non-empty open sets $U_1,U_2,U_3$ such that $\bigcap_{i=1}^3\bar U_i$ is empty. Added in a later edit. Problem 2 has a partial affirmative solution: $1$ is a fixed point of any homeomorphism of $\mathbb G$. This implies that $\mathbb G$ is not homeomorphic to the Bing space or the rational projective space (which do not have such a fixed point). Problem 3 has an affirmative solution: the Bing space is topologically homogeneous.
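As a side note, the Chinese Remainder step in the proof above is easy to sanity-check numerically. The sketch below brute-forces a witness $y\in(1+5^j\mathbb N_0)\cap(2+4\mathbb N_0)\cap b\mathbb N_0$; the concrete parameters $j=1$, $b=3$ are my own illustrative choice and play no role in the proof.

```python
# Brute-force sanity check (my own, illustrative) of the CRT step:
# find y in (1 + 5^j N_0) ∩ (2 + 4 N_0) ∩ b N_0.
def crt_witness(j, b, bound=10**6):
    m = 5 ** j
    for y in range(1, bound):
        if y % m == 1 and y % 4 == 2 and y % b == 0:
            return y
    return None

print(crt_witness(j=1, b=3))  # a point of (1+5N_0) ∩ (2+4N_0) ∩ 3N_0
```

Since $4$, $5^j$ and $b$ are pairwise coprime, such a $y$ always exists, and the smallest one is below $4\cdot 5^j b$.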
Boundedness and a priori estimates of solutions to elliptic systems with Dirichlet-Neumann boundary conditions 1. Mathematical Institute, Slovak Academy of Sciences, Štefánikova 49, 84173 Bratislava, Slovak Republic 2. Institute of Applied Mathematics and Statistics, Comenius University, Mlynská dolina, 84248 Bratislava Let us consider the borderline in the $(p,q)$-plane between the region where all very weak solutions are bounded and the region where unbounded solutions exist. It turns out that this borderline coincides with the corresponding borderline for the system with the Neumann boundary conditions $\partial_\nu u=\partial_\nu v = 0$ on $\partial\Omega$ if $p\leq N/(N-2)$, while it coincides with the borderline for the system with the Dirichlet boundary conditions $u=v=0$ on $\partial\Omega$ if $p\geq(N+1)/(N-2)$. If $p\in (N/(N-2),(N+1)/(N-2))$ then the borderline for the Dirichlet-Neumann problem lies strictly between the borderlines for the systems with pure Neumann and pure Dirichlet boundary conditions. Our proofs are based on some new $L^p-L^q$ estimates in weighted $L^p$-spaces. Mathematics Subject Classification: Primary: 35J55, 35J65; Secondary: 35B33, 35B45, 35B6. Citation: Sándor Kelemen, Pavol Quittner. Boundedness and a priori estimates of solutions to elliptic systems with Dirichlet-Neumann boundary conditions. Communications on Pure & Applied Analysis, 2010, 9 (3): 731-740. doi: 10.3934/cpaa.2010.9.731
I figured I'd answer a self-contained post here for anyone that's interested. This will be using the notation described here. Introduction The idea behind backpropagation is to have a set of "training examples" that we use to train our network. Each of these has a known answer, so we can plug them into the neural network and find how much it was wrong. For example, with handwriting recognition, you would have lots of handwritten characters alongside what they actually were. Then the neural network can be trained via backpropagation to "learn" how to recognize each symbol, so when it's later presented with an unknown handwritten character it can identify it correctly. Specifically, we input some training sample into the neural network, see how well it did, then "trickle backwards" to find how much we can change each node's weights and bias to get a better result, and then adjust them accordingly. As we continue to do this, the network "learns". There are also other steps that may be included in the training process (for example, dropout), but I will focus mostly on backpropagation since that's what this question was about. Partial derivatives A partial derivative $\frac{\partial f}{\partial x}$ is a derivative of $f$ with respect to some variable $x$. For example, if $f(x, y)=x^2 + y^2$, then $\frac{\partial f}{\partial x}=2x$, because $y^2$ is simply a constant with respect to $x$. Likewise, $\frac{\partial f}{\partial y}= 2y$, because $x^2$ is simply a constant with respect to $y$. The gradient of a function, designated $\nabla f$, is a function containing the partial derivative for every variable in $f$. Specifically: $$\nabla f(v_1, v_2, ..., v_n) = \frac{\partial f}{\partial v_1 }\mathbf{e}_1 + \cdots + \frac{\partial f}{\partial v_n }\mathbf{e}_n$$ where $\mathbf{e}_i$ is a unit vector pointing in the direction of variable $v_i$. 
Now, once we have computed $\nabla f$ for some function $f$, if we are at position $(v_1, v_2, ..., v_n)$, we can "slide down" $f$ by going in the direction $-\nabla f(v_1, v_2, ..., v_n)$. With our example of $f(x, y)=x^2 + y^2$, the unit vectors are $e_1=(1, 0)$ and $e_2=(0, 1)$, because $v_1=x$ and $v_2=y$, and those vectors point in the direction of the $x$ and $y$ axes. Thus, $\nabla f(x, y) = 2x (1, 0) + 2y(0, 1)$. Now, to "slide down" our function $f$, let's say we are at the point $(-2, 4)$. Then we would need to move in the direction $-\nabla f(-2, 4)= -(2 \cdot -2 \cdot (1, 0) + 2 \cdot 4 \cdot (0, 1)) = -((-4, 0) + (0, 8))=(4, -8)$. The magnitude of this vector tells us how steep the hill is (higher values mean the hill is steeper). In this case, we have $\sqrt{4^2+(-8)^2}\approx 8.944$. Hadamard Product The Hadamard product of two matrices $A, B \in R^{n\times m}$ is just like matrix addition, except instead of adding the matrices element-wise, we multiply them element-wise. Formally, while matrix addition is $A + B = C$, where $C \in R^{n \times m}$ such that $$C^i_j = A^i_j + B^i_j$$ the Hadamard product $A \odot B = C$, where $C \in R^{n \times m}$, is such that $$C^i_j = A^i_j \cdot B^i_j$$ Computing the gradients (most of this section is from Nielsen's book). We have a set of training samples, $(S, E)$, where $S^r$ is a single input training sample, and $E^r$ is the expected output value of that training sample. We also have our neural network, composed of weights $W$ and biases $B$. $r$ is used to prevent confusion from the $i$, $j$, and $k$ used in the definition of a feedforward network. Next, we define a cost function, $C(W, B, S^r, E^r)$, that takes in our neural network and a single training example, and outputs how good it did. 
Normally what is used is quadratic cost, which is defined by $$C(W, B, S^r, E^r) = 0.5\sum\limits_j (a^L_j - E^r_j)^2$$ where $a^L$ is the output of our neural network, given input sample $S^r$. Then we want to find $\frac{\partial C}{\partial w^i_{jk}}$ and $\frac{\partial C}{\partial b^i_j}$ for each node in our feedforward neural network. We can call this the gradient of $C$ at each neuron because we consider $S^r$ and $E^r$ as constants, since we can't change them when we are trying to learn. And this makes sense - we want to move in a direction relative to $W$ and $B$ that minimizes cost, and moving in the negative direction of the gradient with respect to $W$ and $B$ will do this. To do this, we define $\delta^i_j=\frac{\partial C}{\partial z^i_j}$ as the error of neuron $j$ in layer $i$. We start by computing $a^L$ by plugging $S^r$ into our neural network. Then we compute the error of our output layer, $\delta^L$, via $$\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma^{ \prime}(z^L_j)$$ which can also be written as $$\delta^L = \nabla_a C \odot \sigma^{ \prime}(z^L)$$ Next, we find the error $\delta^i$ in terms of the error in the next layer $\delta^{i+1}$, via $$\delta^i=((W^{i+1})^T \delta^{i+1}) \odot \sigma^{\prime}(z^i)$$ Now that we have the error of each node in our neural network, computing the gradient with respect to our weights and biases is easy: $$\frac{\partial C}{\partial w^i_{jk}}=\delta^i_j a^{i-1}_k \quad\text{(in matrix form, } \delta^i(a^{i-1})^T\text{)}$$ $$\frac{\partial C}{\partial b^i_j} = \delta^i_j$$ Note that the equation for the error of the output layer is the only equation that's dependent on the cost function, so, regardless of the cost function, the last three equations are the same. As an example, with quadratic cost, we get $$\delta ^L = (a^L - E^r) \odot \sigma ^ {\prime}(z^L)$$ for the error of the output layer. 
and then this equation can be plugged into the second equation to get the error of the $L-1^{\text{th}}$ layer: $$\delta^{L-1}=((W^{L})^T \delta^{L}) \odot \sigma^{\prime}(z^{L-1})$$$$=((W^{L})^T ((a^L - E^r) \odot \sigma ^ {\prime}(z^L))) \odot \sigma^{\prime}(z^{L-1})$$ We can repeat this process to find the error of any layer with respect to $C$, which then allows us to compute the gradient of any node's weights and bias with respect to $C$. I could write up an explanation and proof of these equations if desired, though one can also find proofs of them here. I'd encourage anyone that is reading this to prove these themselves though, beginning with the definition $\delta^i_j=\frac{\partial C}{\partial z^i_j}$ and applying the chain rule liberally. For some more examples, I made a list of some cost functions alongside their gradients here. Gradient Descent Now that we have these gradients, we need to use them to learn. In the previous section, we found how to "slide down" a curve from some point. In this case, because the gradient of some node is taken with respect to the weights and bias of that node, our "coordinate" is the current weights and bias of that node. Since we've already found the gradients with respect to those coordinates, those values are already how much we need to change. We don't want to slide down the slope at a very fast speed, otherwise we risk sliding past the minimum. To prevent this, we use some "step size" $\eta$. Then, since we have already computed the gradient with respect to the current weights and biases, the amount we should modify each weight and bias by is $$\Delta w^i_{jk}= -\eta \frac{\partial C}{\partial w^i_{jk}}$$ $$\Delta b^i_j = -\eta \frac{\partial C}{\partial b^i_j}$$ Thus, our new weights and biases are $$w^i_{jk} = w^i_{jk} + \Delta w^i_{jk}$$$$b^i_j = b^i_j + \Delta b^i_j$$ Using this process on a neural network with only an input layer and an output layer is called the Delta Rule. 
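The four equations above can be transcribed almost line-for-line into code. The following is a sketch, not a canonical implementation: the layer sizes, the sigmoid activation and the random initialization are arbitrary choices of mine, and indexing is 0-based rather than the 1-based notation of the text.

```python
import numpy as np

# Toy feedforward network matching the equations above (sigmoid activation,
# quadratic cost, one hidden layer); sizes and seed are arbitrary.
rng = np.random.default_rng(0)

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigma(z):
    return sigma(z) * (1.0 - sigma(z))

sizes = [3, 4, 2]   # input, hidden and output layer widths
W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [rng.standard_normal((m, 1)) for m in sizes[1:]]

def backprop(x, y):
    """Gradients (dC/dW, dC/db) for one sample, C = 0.5 * ||a^L - y||^2."""
    # Forward pass, storing weighted inputs z^i and activations a^i.
    a, zs, activations = x, [], [x]
    for Wi, bi in zip(W, b):
        z = Wi @ a + bi
        zs.append(z)
        a = sigma(z)
        activations.append(a)
    # Output-layer error: delta^L = (a^L - E^r) ⊙ sigma'(z^L).
    delta = (activations[-1] - y) * dsigma(zs[-1])
    dW, db = [None] * len(W), [None] * len(b)
    dW[-1], db[-1] = delta @ activations[-2].T, delta
    # Backwards: delta^i = ((W^{i+1})^T delta^{i+1}) ⊙ sigma'(z^i).
    for i in range(len(W) - 2, -1, -1):
        delta = (W[i + 1].T @ delta) * dsigma(zs[i])
        dW[i], db[i] = delta @ activations[i].T, delta
    return dW, db
```

A useful sanity check is to compare any entry of the returned gradients against a central finite difference of the cost; the two should agree to many decimal places.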
Stochastic Gradient Descent Now that we know how to perform backpropagation for a single sample, we need some way of using this process to "learn" our entire training set. One option is simply performing backpropagation for each sample in our training data, one at a time. This is pretty inefficient though. A better approach is Stochastic Gradient Descent. Instead of performing backpropagation for every sample, we pick a small random sample (called a batch) of our training set, then perform backpropagation for each sample in that batch. The hope is that by doing this, we capture the "intent" of the data set, without having to compute the gradient of every sample. For example, if we had 1000 samples, we could pick a batch of size 50, then run backpropagation for each sample in this batch. The hope is that we were given a large enough training set that it represents the distribution of the actual data we are trying to learn well enough that picking a small random sample is sufficient to capture this information. However, doing backpropagation for each training example in our mini-batch isn't ideal, because we can end up "wiggling around", where training samples modify weights and biases in such a way that they cancel each other out and prevent us from getting to the minimum we are trying to reach. To prevent this, we want to go to the "average minimum", because the hope is that, on average, the samples' gradients are pointing down the slope. So, after choosing our batch randomly, we create a mini-batch, which is a small random sample of our batch. Then, given a mini-batch with $n$ training samples, we only update the weights and biases after averaging the gradients of those $n$ samples. 
Formally, we do $$\Delta w^{i}_{jk} = \frac{1}{n}\sum\limits_r \Delta w^{ri}_{jk}$$ and $$\Delta b^{i}_{j} = \frac{1}{n}\sum\limits_r \Delta b^{ri}_{j}$$ where $\Delta w^{ri}_{jk}$ is the computed change in weight for sample $r$, and $\Delta b^{ri}_{j}$ is the computed change in bias for sample $r$. Then, like before, we can update the weights and biases via: $$w^i_{jk} = w^i_{jk} + \Delta w^{i}_{jk}$$$$b^i_j = b^i_j + \Delta b^{i}_{j}$$ This gives us some flexibility in how we want to perform gradient descent. If the function we are trying to learn has lots of local minima, this "wiggling around" behavior is actually desirable, because it means that we're much less likely to get "stuck" in one local minimum, and more likely to "jump out" of one local minimum and hopefully fall into another that is closer to the global minimum. Thus we want small mini-batches. On the other hand, if we know that there are very few local minima and gradient descent generally heads toward the global minimum, we want larger mini-batches, because this "wiggling around" behavior will prevent us from going down the slope as fast as we would like. See here. One option is to pick the largest mini-batch possible, considering the entire batch as one mini-batch. This is called Batch Gradient Descent, since we are simply averaging the gradients of the batch. This is almost never used in practice, however, because it is very inefficient.
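The averaging step above can be sketched directly; the per-sample updates $\Delta w^{r}$ below are made-up values standing in for what backpropagation would produce:

```python
# Average the per-sample updates over a mini-batch of size n,
# as in Δw = (1/n) Σ_r Δw^r, then apply the averaged update once.
def average_update(per_sample_deltas):
    n = len(per_sample_deltas)
    return sum(per_sample_deltas) / n

deltas = [-0.2, 0.1, -0.3, 0.0]    # Δw^r for each sample r (illustrative values)
delta_w = average_update(deltas)   # the averaged update, here -0.1
w = 1.0 + delta_w                  # w <- w + Δw
```

Note how the second sample's positive gradient partially cancels the others; averaging first prevents the weight from "wiggling" through four separate updates.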
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover, consisting of intervals converging towards $a$, $b$ and $\frac{a+b}{2}$, with no finite subcover. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact. I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of the rationals have nonzero measure. The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure; that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure. What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set. Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$, which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure. We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why, by doing what we have done, we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$? $\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} \left[(b-q_i) + \sum_{k=1}^i (q_{n(k)}-q_{m(k)}) + (q_{i+1}-a)\right]$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check that it is convergent, and then compute its value. The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure. What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set. (cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series. Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively. Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals; together, let the differences be $c_i = q_{n(i)} - q_{m(i)}$. These form a series that is bounded from above and below. Hence: $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$ Consider the partial sums of the above series. Note every partial sum is telescoping, since in a finite series addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence, for any partial sum, by moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and: @AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing but re-expressed differently. If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies; if however the function is continuous, then you can say stuff about the topologies. @Overflow2341313 Could you send a picture or a screenshot of the problem? nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum. So there are only countably many disjoint intervals in the cover $C$. @Perturbative Okay, similar problem, if you don't mind guiding me in the right direction. If a function f exists, with the same setup (X, t) -> (Y,S), that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space (X, t). Simply restrict the codomain so that it is onto? Making it bijective and hence invertible. hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well-order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following; I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set: @Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology t on R such that f: (R, U) -> (R, t) defined by f(x) = x^2 is an open map, where U is the "usual" topology defined by U = {x in U | x in U implies that x in (a,b) \subseteq U}. To do this... the smallest t can be is the trivial topology on R, {\emptyset, R}. But, we required that everything in U be in t under f?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse. I'm not sure if adding the additional condition that $f$ is an open map will make any difference. For those who are not very familiar with this interest of mine: besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships. An element of a proof space is a proof, which consists of steps and forms a path in this space. For that I have a postulate: given two paths A and B in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or $B$ is unprovable under the current formal system. Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
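Not a proof, but the claimed asymptotic can be sanity-checked numerically in log-space (to avoid overflow of the binomial coefficients); the parameter choices $K$, $c$, $\alpha$ below are arbitrary:

```python
import math

# Log-space sanity check (not a proof) of the claimed asymptotic equivalence of
# sum C(K,n) C(K,n+c) z^(n+c/2) and sum C(K,n)^2 z^n, with z = K^(-alpha).
def log_binom(k, m):
    return math.lgamma(k + 1) - math.lgamma(m + 1) - math.lgamma(k - m + 1)

def log_sum_exp(logs):
    # numerically stable log of a sum of exponentials
    mx = max(logs)
    return mx + math.log(sum(math.exp(x - mx) for x in logs))

def ratio(K, c, alpha):
    log_z = -alpha * math.log(K)
    lhs = log_sum_exp([log_binom(K, n) + log_binom(K, n + c) + (n + c / 2) * log_z
                       for n in range(K - c + 1)])
    rhs = log_sum_exp([2 * log_binom(K, n) + n * log_z for n in range(K + 1)])
    return math.exp(lhs - rhs)

r = ratio(10000, 1, 1.0)   # expected to be close to 1 for large K
```

Heuristically, both sums are dominated by $n \approx K^{1-\alpha/2}$, where the per-term factor $\binom{K}{n+c}z_K^{c/2}/\binom{K}{n} \approx ((K-n)/n)^c K^{-\alpha c/2}$ is close to $1$, which is consistent with the ratio tending to $1$.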
ä is in the extended latin block and n is in the basic latin block, so there is a transition there, but you would have hoped \setTransitionsForLatin would not have inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle 12 secs ago @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe; it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary. The package isn't clear about what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor whether anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g. worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know or not agree with a "-1" policy. I don't think it was ever really that regularly enforced, just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies: when you're new and your question gets downvoted too much this might cause the wrong impressions.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion-based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer); that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes, but many people just join for a while and come from other sites where downvoting is more common, so I think it is impossible to expect there is no multiple downvoting; the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r". @AlanMunn definitions.net/definition/describe gives a Websters definition of: to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle. If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com and confirm to create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have the idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences in word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin are kind of a hobby. Needless to say, more than once the contemporary meanings didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx:

\documentclass{book}
\usepackage{showidx}
\usepackage{imakeidx}
\makeindex
\begin{document}
Test\index{xxxx}
\printindex
\end{document}

generates the error:

! Undefined control sequence.
<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{...

@EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date, not the date it was last run through tex. and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again, and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached. Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in the INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well, the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben" (engl. "describe") comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means as much as "construct". That in turn comes from the original meaning of describing: "making a curved movement". This language is used in the literary style of the 19th and 20th centuries and in the GDR. You can have that in English too: scribe (verb): score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe', which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which subtends the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude to the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the apex. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the vertex. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described by $S$ of radius…
this might be a stupid question, but I wanted to know that if a,b,c are positive reals and $$ab+bc+ca\geq 3$$ Can we say that $$3 \sqrt[3]{(abc)^2} \geq 3$$ By applying am-gm? I'm confused about whether we can do this or not. I'd really appreciate it if someone could explain. I believe we cannot arrive at this conclusion, because of the following counterexample: Let $a=3$, $b=1-\epsilon$, and $c=\frac{1}{3}$. Then $ab+bc+ca\geq 4-3\epsilon\geq 3$, but $3(abc)^{2/3}<3$. So for any sufficiently small $\epsilon$ our conclusion doesn't hold.
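For what it's worth, the counterexample is easy to check numerically (with, say, $\epsilon = 0.01$):

```python
# Numeric check of the counterexample a = 3, b = 1 - eps, c = 1/3.
eps = 0.01
a, b, c = 3.0, 1.0 - eps, 1.0 / 3.0
lhs = a * b + b * c + c * a           # satisfies the hypothesis: >= 3
rhs = 3 * (a * b * c) ** (2.0 / 3.0)  # the desired conclusion fails: < 3
```

Here $abc = 1 - \epsilon < 1$, so $3(abc)^{2/3} < 3$ even though $ab+bc+ca > 3$, confirming that AM-GM runs in the wrong direction for this claim.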
Given real vectors $d = (d_1, \ldots, d_n)$ and $\lambda = (\lambda_1, \ldots, \lambda_n)$, where I will assume that their coefficients are arranged in non-increasing order, the Schur-Horn theorem says that there exists a Hermitian matrix with diagonal given by $d$ and spectrum given by $\lambda$ if and only if $d \prec \lambda$, with $\prec$ denoting majorization (i.e. $\sum_{i=1}^n d_i = \sum_{i=1}^n \lambda_i$, and $\sum_{i=1}^k d_i \leq \sum_{i=1}^k \lambda_i$ for all $1 \leq k < n$). Given a vector of eigenvalues, the Schur-Horn theorem then tells us all of the possible diagonal values that Hermitian matrices with such eigenvalues can take. However, it does not take into consideration the eigenvectors of the matrices, and the subspace in which they lie. My question is about an extension of the theorem which does. Given a vector of eigenvalues $\lambda = (\lambda_1, \ldots, \lambda_r, 0, \ldots, 0)$ and a subspace $S$ representing the span of the eigenvectors corresponding to non-zero eigenvalues, what diagonal values can Hermitian matrices with this set of eigenvalues and with this column space take? Does this set admit an easy characterization?
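As a quick numeric illustration of the majorization condition (the classical Schur direction: the diagonal of any Hermitian matrix is majorized by its spectrum), here is a sketch with numpy; the matrix is just a random example:

```python
import numpy as np

# Illustration of the majorization condition d ≺ λ from the Schur-Horn theorem:
# the sorted diagonal of a Hermitian matrix is majorized by its sorted spectrum.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = (A + A.conj().T) / 2                       # a random Hermitian matrix

d = np.sort(np.real(np.diag(H)))[::-1]         # diagonal, non-increasing
lam = np.sort(np.linalg.eigvalsh(H))[::-1]     # spectrum, non-increasing

# partial sums of d never exceed those of lam, and the totals agree (trace)
partial_ok = np.all(np.cumsum(d)[:-1] <= np.cumsum(lam)[:-1] + 1e-10)
totals_ok = abs(d.sum() - lam.sum()) < 1e-10
majorized = bool(partial_ok) and totals_ok
```

This only exercises the "only if" direction; the constructive Horn direction (building a matrix realizing a given majorized diagonal) is the part the question asks to refine with column-space constraints.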
Consider a single particle (a single qubit if you will) in some arbitrary state $|\psi\rangle$ and an eigenvector $|\lambda\rangle$ corresponding to the eigenvalue $\lambda.$ Consider the time evolution of this system over some infinitesimal time $\epsilon$ to be given by a unitary operator $U$: $|\psi(\epsilon)\rangle = U|\psi(0)\rangle$. Time-evolution preserving the inner product: Consider the following statements holding that time evolution preserves the inner product $\langle\psi|\lambda\rangle$. I think $\lambda$ is non-evolvable, i.e. $\lambda(\epsilon) = \lambda(0)$, or $U$ does nothing to it. Then the following are true: $\langle\psi(\epsilon)| = \langle\psi(0)|U^{\dagger}$ $\implies$ $\langle\psi(\epsilon)|\lambda(\epsilon)\rangle = \langle\psi(0)|U^{\dagger}U|\lambda(0)\rangle = \langle\psi(0)|\lambda(0)\rangle$. So when you measure $|\psi(\epsilon)\rangle$, you get $|\lambda\rangle$ with probability $|\langle\psi(\epsilon)|\lambda(\epsilon)\rangle|^{2}$, which is equal to $|\langle\psi(0)|\lambda(0)\rangle|^{2}$. Superposition If you start with $|\psi(0)\rangle = |0\rangle$ and apply the Hadamard operation to it, you get $|\psi(\epsilon)\rangle = \frac{|0\rangle + |1\rangle}{2^{1/2}}$. If you consider $|\lambda(0)\rangle = |\lambda(\epsilon)\rangle = |0\rangle$, then $|\langle\psi(0)|\lambda(0)\rangle|^{2} = 1$ and $|\langle\psi(\epsilon)|\lambda(\epsilon)\rangle|^{2} = \frac{1}{2}$. Question Have I done something wrong or is there some problem in my understanding of the time evolution of a quantum system? Is Hadamard-ing a state not considered in the class of operations that qualify as time evolution of a quantum system? In short, why are these probabilities different?
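For concreteness, the two probabilities in the question can be computed explicitly with numpy (as in the question, $|\lambda\rangle = |0\rangle$ is held fixed while only $|\psi\rangle$ is evolved by the Hadamard gate):

```python
import numpy as np

# Reproduce the two overlap probabilities from the question: the qubit state
# |psi> is evolved by the Hadamard gate while |lambda> = |0> is held fixed.
ket0 = np.array([1.0, 0.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

psi0 = ket0              # |psi(0)> = |0>
psi_eps = H @ psi0       # |psi(eps)> = (|0> + |1>)/sqrt(2)

p0 = abs(np.vdot(ket0, psi0)) ** 2        # |<lambda(0)|psi(0)>|^2
p_eps = abs(np.vdot(ket0, psi_eps)) ** 2  # |<lambda(eps)|psi(eps)>|^2
```

Note the code applies $U$ to $|\psi\rangle$ only, exactly as stated in the question; the invariance argument above instead inserts $U^{\dagger}U$ between the two states, which is a different quantity when $|\lambda\rangle$ is not evolved.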
The soft-photon theorem is the following statement due to Weinberg: Consider an amplitude ${\cal M}$ involving some incoming and some outgoing particles. Now, consider the same amplitude with an additional soft-photon ($\omega_{\text{photon}} \to 0$) coupled to one of the particles. Call this amplitude ${\cal M}'$. The two amplitudes are related by $$ {\cal M}' = {\cal M} \frac{\eta q p \cdot \epsilon}{p \cdot p_\gamma - i \eta \varepsilon} $$ where $p$ is the momentum of the particle that the photon couples to, $\epsilon$ is the polarization of the photon and $p_\gamma$ is the momentum of the soft-photon. $\eta = 1$ for outgoing particles and $\eta = -1$ for incoming ones. Finally, $q$ is the charge of the particle. The most striking thing about this theorem (to me) is the fact that the proportionality factor relating ${\cal M}$ and ${\cal M}'$ is independent of the type of particle that the photon couples to. It seems quite amazing to me that even though the coupling of photons to scalars, spinors, etc. takes such a different form, you still end up getting the same coupling above. While I can show that this is indeed true for all the special cases of interest, my question is: Is there a general proof (or understanding) that describes this universal coupling of soft-photons?
AI News, Sequence Classification with LSTM Recurrent Neural Networks in Python with Keras On Tuesday, March 6, 2018 Sequence classification is a predictive modeling problem where you have some sequence of inputs over space or time and the task is to predict a category for the sequence. What makes this problem difficult is that the sequences can vary in length, be comprised of a very large vocabulary of input symbols and may require the model to learn the long-term context or dependencies between symbols in the input sequence. In this post, you will discover how you can develop LSTM recurrent neural network models for sequence classification problems in Python using the Keras deep learning library. The data was collected by Stanford researchers and was used in a 2011 paper where a 50-50 split of the data was used for training and test. We will map each movie review into a real vector domain, a popular technique when working with text called word embedding. This is a technique where words are encoded as real-valued vectors in a high-dimensional space, where similarity between words in terms of meaning translates to closeness in the vector space. Finally, the sequence length (number of words) in each review varies, so we will constrain each review to be 500 words, truncating long reviews and padding the shorter reviews with zero values. Let's start off by importing the classes and functions required for this model and initializing the random number generator to a constant value to ensure we can easily reproduce the results. The model will learn that the zero values carry no information, so while the sequences are not the same length in terms of content, same-length vectors are required to perform the computation in Keras.
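The truncation-and-padding step just described can be sketched as a standalone function. Keras provides this as keras.preprocessing.sequence.pad_sequences; this sketch pads and truncates at the front, which I believe matches that function's defaults, though that should be checked against the Keras docs:

```python
# Standalone analogue of the padding/truncation step described above:
# constrain each review (a list of word indices) to a fixed length.
def pad_truncate(seq, maxlen=500, pad_value=0):
    if len(seq) >= maxlen:
        return seq[-maxlen:]                        # keep the last maxlen indices
    return [pad_value] * (maxlen - len(seq)) + seq  # left-pad with zeros

short_review = [5, 12, 7]        # word indices of a 3-word review
long_review = list(range(600))   # a 600-word review
fixed = [pad_truncate(r) for r in (short_review, long_review)]
# both sequences now have length exactly 500
```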
Finally, because this is a classification problem we use a Dense output layer with a single neuron and a sigmoid activation function to make 0 or 1 predictions for the two classes (good and bad) in the problem. For example, we can modify the first example to add dropout to the input and recurrent connections as follows: The full code listing with more precise LSTM dropout is listed below for completeness. Dropout is a powerful technique for combating overfitting in your LSTM models and it is a good idea to try both methods, but you may get better results with the gate-specific dropout provided in Keras. The IMDB review data does have a one-dimensional spatial structure in the sequence of words in reviews, and the CNN may be able to pick out invariant features for good and bad sentiment.

Getting started with the Keras Sequential model You can create a Sequential model by passing a list of layer instances to the constructor: You can also simply add layers via the .add() method: The model needs to know what input shape it should expect. For this reason, the first layer in a Sequential model (and only the first, because following layers can do automatic shape inference) needs to receive information about its input shape. There are several possible ways to do this: As such, the following snippets are strictly equivalent: Before training a model, you need to configure the learning process, which is done via the compile method.

Deep Learning for NLP Best Practices While many existing Deep Learning libraries already encode best practices for working with neural networks in general, such as initialization schemes, many other details, particularly task- or domain-specific considerations, are left to the practitioner. While many of these features will be most useful for pushing the state-of-the-art, I hope that wider knowledge of them will lead to stronger evaluations, more meaningful comparison to baselines, and inspiration by shaping our intuition of what works.
I will then outline practices that are relevant for the most common tasks, in particular classification, sequence labelling, natural language generation, and neural machine translation. The optimal dimensionality of word embeddings is mostly task-dependent: a smaller dimensionality works better for more syntactic tasks such as named entity recognition (Melamud et al., 2016) [44] or part-of-speech (POS) tagging (Plank et al., 2016) [32], while a larger dimensionality is more useful for more semantic tasks such as sentiment analysis (Ruder et al., 2016) [45]. First let us assume a one-layer MLP, which applies an affine transformation followed by a non-linearity \(g\) to its input \(\mathbf{x}\): \(\mathbf{h} = g(\mathbf{W}\mathbf{x} + \mathbf{b})\) A highway layer then computes the following function instead: \(\mathbf{h} = \mathbf{t} \odot g(\mathbf{W} \mathbf{x} + \mathbf{b}) + (1-\mathbf{t}) \odot \mathbf{x} \) where \(\odot\) is elementwise multiplication, \(\mathbf{t} = \sigma(\mathbf{W}_T \mathbf{x} + \mathbf{b}_T)\) is called the transform gate, and \((1-\mathbf{t})\) is called the carry gate. Residual connections are even more straightforward than highway layers and learn the following function: \(\mathbf{h} = g(\mathbf{W}\mathbf{x} + \mathbf{b}) + \mathbf{x}\) which simply adds the input of the current layer to its output via a short-cut connection. Dense connections Rather than just adding layers from each layer to the next, dense connections (Huang et al., 2017) [7] (best paper award at CVPR 2017) add direct connections from each layer to all subsequent layers. They have also been found to be useful for Multi-Task Learning of different NLP tasks (Ruder et al., 2017) [49], while a residual variant that uses summation has been shown to consistently outperform residual connections for neural machine translation (Britz et al., 2017) [27].
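The highway layer defined above can be sketched directly in numpy; the dimensions and random weights here are illustrative assumptions:

```python
import numpy as np

# Numpy sketch of the highway layer h = t * g(Wx + b) + (1 - t) * x,
# with transform gate t = sigmoid(W_T x + b_T).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway(x, W, b, W_T, b_T, g=np.tanh):
    t = sigmoid(W_T @ x + b_T)                 # transform gate
    return t * g(W @ x + b) + (1.0 - t) * x    # (1 - t) is the carry gate

rng = np.random.default_rng(1)
d = 4
x = rng.normal(size=d)
W = rng.normal(size=(d, d))
W_T = rng.normal(size=(d, d))
b = np.zeros(d)
b_T = np.zeros(d)

h = highway(x, W, b, W_T, b_T)
# with a strongly negative gate bias, t ~ 0 and the input is carried through
h_carry = highway(x, W, b, W_T, b_T - 100.0)
```

The second call illustrates how the carry gate lets the layer pass its input through unchanged, which is the mechanism the highway/residual discussion above relies on.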
While batch normalisation in computer vision has made other regularizers obsolete in most applications, dropout (Srivastava et al., 2014) [8] is still the go-to regularizer for deep neural networks in NLP. Recurrent dropout has been used for instance to achieve state-of-the-art results in semantic role labelling (He et al., 2017) and language modelling (Melis et al., 2017) [34]. While we can already predict surrounding words in order to pre-train word embeddings (Mikolov et al., 2013), we can also use this as an auxiliary objective during training (Rei, 2017) [35]. Using attention, we obtain a context vector \(\mathbf{c}_i\) based on hidden states \(\mathbf{s}_1, \ldots, \mathbf{s}_m\) that can be used together with the current hidden state \(\mathbf{h}_i\) for prediction. The context vector \(\mathbf{c}_i\) at position \(i\) is calculated as an average of the previous states weighted with the attention scores \(\mathbf{a}_i\): \(\mathbf{c}_i = \sum\limits_j a_{ij}\mathbf{s}_j\), where \(\mathbf{a}_i\) is the softmax of the alignment scores \(f_{att}(\mathbf{h}_i, \mathbf{s}_j)\). Additive attention The original attention mechanism (Bahdanau et al., 2015) [15] uses a one-hidden-layer feed-forward network to calculate the attention alignment: \(f_{att}(\mathbf{h}_i, \mathbf{s}_j) = \mathbf{v}_a{}^\top \text{tanh}(\mathbf{W}_a[\mathbf{h}_i; \mathbf{s}_j])\) Analogously, we can also use matrices \(\mathbf{W}_1\) and \(\mathbf{W}_2\) to learn separate transformations for \(\mathbf{h}_i\) and \(\mathbf{s}_j\) respectively, which are then summed: \(f_{att}(\mathbf{h}_i, \mathbf{s}_j) = \mathbf{v}_a{}^\top \text{tanh}(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{s}_j) \) Multiplicative attention Multiplicative attention (Luong et al., 2015) [16] simplifies the attention operation by calculating the following function: \(f_{att}(\mathbf{h}_i, \mathbf{s}_j) = \mathbf{h}_i^\top \mathbf{W}_a \mathbf{s}_j \) Additive and multiplicative attention are similar in complexity, although multiplicative attention is faster and more space-efficient in practice as it can be implemented more efficiently
using matrix multiplication. Attention can not only be used to attend to encoder or previous hidden states, but also to obtain a distribution over other features, such as the word embeddings of a text, as used for reading comprehension (Kadlec et al., 2017) [37]. Self-attention Without any additional information, however, we can still extract relevant aspects from the sentence by allowing it to attend to itself using self-attention (Lin et al., 2017) [18]. Self-attention, also called intra-attention, has been used successfully in a variety of tasks including reading comprehension (Cheng et al., 2016) [38], textual entailment (Parikh et al., 2016) [39], and abstractive summarization (Paulus et al., 2017) [40]. We can simplify additive attention to compute the unnormalized alignment score for each hidden state \(\mathbf{h}_i\): \(f_{att}(\mathbf{h}_i) = \mathbf{v}_a{}^\top \text{tanh}(\mathbf{W}_a \mathbf{h}_i) \) In matrix form, for hidden states \(\mathbf{H} = \mathbf{h}_1, \ldots, \mathbf{h}_n\) we can calculate the attention vector \(\mathbf{a}\) and the final sentence representation \(\mathbf{c}\) as follows: \(\mathbf{a} = \text{softmax}(\mathbf{v}_a \text{tanh}(\mathbf{W}_a \mathbf{H}^\top))\), \(\mathbf{c} = \mathbf{H}^\top \mathbf{a}\). In practice, we enforce the following orthogonality constraint to penalize redundancy and encourage diversity in the attention vectors, in the form of the squared Frobenius norm: \(\Omega = \|\mathbf{A}\mathbf{A}^\top - \mathbf{I} \|^2_F \) Key-value attention Finally, key-value attention (Daniluk et al., 2017) [19] is a recent attention variant that separates form from function by keeping separate vectors for the attention calculation.
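A minimal numpy sketch of multiplicative attention followed by the context-vector computation described above (dimensions and random values are illustrative):

```python
import numpy as np

# Sketch of multiplicative attention f_att(h_i, s_j) = h_i^T W_a s_j and the
# context vector c_i = sum_j a_ij s_j.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multiplicative_attention(h_i, S, W_a):
    scores = np.array([h_i @ W_a @ s_j for s_j in S])  # one score per state s_j
    a = softmax(scores)                                # attention distribution
    c = a @ S                                          # weighted average of states
    return a, c

rng = np.random.default_rng(0)
dim, m = 4, 3
h_i = rng.normal(size=dim)        # current hidden state
S = rng.normal(size=(m, dim))     # states s_1, ..., s_m (one per row)
W_a = rng.normal(size=(dim, dim))
a, c = multiplicative_attention(h_i, S, W_a)
```

The softmax guarantees the attention weights form a distribution over the states, so the context vector is just their weighted average, matching the formulas above.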
While predicting with an ensemble is expensive at test time, recent advances in distillation allow us to compress an expensive ensemble into a much smaller model (Hinton et al., 2015). Recent advances in Bayesian Optimization have made it an ideal tool for the black-box optimization of hyperparameters in neural networks (Snoek et al., 2012) [56] and far more efficient than the widely used grid search. Rather than clipping each gradient independently, clipping the global norm of the gradient (Pascanu et al., 2013) [58] yields more significant improvements (a Tensorflow implementation can be found here). While many of the existing best practices are with regard to a particular part of the model architecture, the following guidelines discuss choices for the model's output and prediction stage. Using IOBES and BIO yields similar performance (Lample et al., 2017). CRF output layer If there are any dependencies between outputs, such as in named entity recognition, the final softmax layer can be replaced with a linear-chain conditional random field (CRF). If attention is used, we can keep track of a coverage vector \(\mathbf{c}_i\), which is the sum of attention distributions \(\mathbf{a}_t\) over previous time steps (Tu et al., 2016; See et al., 2017) [64, 65]: \(\mathbf{c}_i = \sum\limits^{i-1}_{t=1} \mathbf{a}_t \) This vector captures how much attention we have paid to all words in the source.
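A minimal sketch of the coverage accumulation (hypothetical helper name; assumes the attention distributions are plain Python lists):

```python
def coverage_vectors(attention_dists):
    # c_i = sum over t < i of a_t: how much attention each source word
    # has already received before the current decoding step
    n = len(attention_dists[0])
    cov, out = [0.0] * n, []
    for a in attention_dists:
        out.append(list(cov))   # coverage *before* attending at this step
        cov = [c + ai for c, ai in zip(cov, a)]
    return out
```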
We can now condition additive attention additionally on this coverage vector in order to encourage our model not to attend to the same words repeatedly: \(f_{att}(\mathbf{h}_i,\mathbf{s}_j,\mathbf{c}_i) = \mathbf{v}_a{}^\top \text{tanh}(\mathbf{W}_1 \mathbf{h}_i + \mathbf{W}_2 \mathbf{s}_j + \mathbf{W}_3 \mathbf{c}_i )\) In addition, we can add an auxiliary loss that captures the task-specific attention behaviour that we would like to elicit: for NMT, we would like to have a roughly one-to-one alignment. Beam search strategy Medium beam sizes around \(10\) with a length normalization penalty of \(1.0\) (Wu et al., 2016) yield the best performance (Britz et al., 2017). BPE iteratively merges frequent symbol pairs, which eventually results in frequent character n-grams being merged into a single symbol, thereby effectively eliminating out-of-vocabulary words. While it was originally meant to handle rare words, a model with sub-word units outperforms full-word systems across the board, with 32,000 being an effective vocabulary size for sub-word units (Denkowski

On Monday, September 23, 2019

How to Do Sentiment Analysis - Intro to Deep Learning #3 In this video, we'll use machine learning to help classify emotions! The example we'll use is classifying a movie review as either positive or negative via TF Learn in 20 lines of Python. ...

Deep Learning Lecture 13: Applying RNNs to Sentiment Analysis Get my larger machine learning course at We'll practice using recurrent neural networks...

Lesson 5: Practical Deep Learning for Coders INTRO TO NLP AND RNNS We start by combining everything we've learned so far to see what that buys us; and we discover that we get a Kaggle-winning result! One important point: in this lesson...

Lecture 15: Coreference Resolution Lecture 15 covers what is coreference via a working example. Also includes research highlight "Summarizing Source Code", an introduction to coreference resolution and neural coreference resolution....
Lesson 6: Practical Deep Learning for Coders BUILDING RNNS This lesson starts by introducing a new tool, the MixIterator, which will (finally!) allow us to fully implement the pseudo-labeling technique we learnt a couple of lessons ago....

Lecture 18: Tackling the Limits of Deep Learning for NLP Lecture 18 looks at tackling the limits of deep learning for NLP followed by a few presentations. Natural Language...
Hyperbolic tangent function \({\rm tanh}\) is often used to generate a stretched structured grid. In this blog post, I will introduce some examples I have found in the references. Example #1 [1] \begin{equation}y_j = \frac{1}{\alpha}{\rm tanh} \left[\xi_j {\rm tanh}^{-1}\left(\alpha\right)\right] + 1\;\;\;\left( j = 0, \ldots, N_2 \right), \tag{1}\end{equation}with\begin{equation}\xi_j = -1 + 2\frac{j}{N_2}, \tag{2}\end{equation}where \(\alpha\) is an adjustable parameter of the transformation \((0<\alpha<1)\) and \(N_2\) is the number of grid intervals in that direction. As shown in the following figure, the grid points are more clustered towards both ends as the parameter \(\alpha\) approaches 1. Example #2 [2] \begin{equation}y_j = 1 -\frac{{\rm tanh}\left[ \gamma \left( 1 - \frac{2j}{N_2} \right) \right]}{{\rm tanh} \left( \gamma \right)}\;\;\;\left( j = 0, \ldots, N_2 \right), \tag{3}\end{equation}where \(\gamma\) is the stretching parameter and \(N_2\) is the number of grid intervals in that direction. Grid Images Coming soon. References [1] H. Abe, H. Kawamura and Y. Matsuo, Direct Numerical Simulation of a Fully Developed Turbulent Channel Flow With Respect to the Reynolds Number Dependence. J. Fluids Eng 123(2), 382-393, 2001. [2] J. Gullbrand, Grid-independent large-eddy simulation in turbulent channel flow using three-dimensional explicit filtering. Center for Turbulence Research Annual Research Briefs, 2003. This report documents the results of a study to address the long range, strategic planning required by NASA’s Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC).
Specifically, the “Vision 2030” CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

Jets can be classified by:

- the existence of objects: free jet and impinging jet
- the differences of physical properties between a projected fluid and an ambient fluid: submerged jet and unsubmerged jet
- the geometry of a nozzle: round jet and slot jet

and so on.

Free Jet The following video visualizes the flow pattern of a submerged free jet (created by Bjarke Ove Andersen and Mathies Hjorth Jensen of Technical University of Denmark): Flow Regions of Impinging Jet [1, 2] Region Ⅰ is the region of flow establishment. It extends from the nozzle exit to the apex of the potential core. The so-called potential core is the central portion of the flow in which the velocity remains constant and equal to the velocity at the nozzle exit. Region Ⅱ is a region of established flow in the direction of the jet beyond the apex of the potential core; it is characterized by a dissipation of the centerline jet velocity and by a spreading of the jet in the transverse direction. Region Ⅲ is that region in which the jet is deflected from the axial direction.
Region Ⅳ is known as the wall jet region, where the directed flow increases in thickness as the boundary layer builds up along the solid surface. I found an interesting presentation by the University of Oxford published on the 6th AIAA CFD Drag Prediction Workshop site. The other presentations in this workshop are also available at this link.
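Going back to the two tanh stretching transformations, formulas (1)-(3) can be sketched as follows (function names are made up; both map j = 0..N2 onto [0, 2] and cluster points toward the ends):

```python
import math

def stretch_tanh_abe(N2, alpha):
    # Example 1: y_j = (1/alpha) * tanh(xi_j * artanh(alpha)) + 1,
    # with xi_j = -1 + 2 j / N2 and 0 < alpha < 1
    ys = []
    for j in range(N2 + 1):
        xi = -1.0 + 2.0 * j / N2
        ys.append(math.tanh(xi * math.atanh(alpha)) / alpha + 1.0)
    return ys

def stretch_tanh_gullbrand(N2, gamma):
    # Example 2: y_j = 1 - tanh(gamma * (1 - 2 j / N2)) / tanh(gamma)
    return [1.0 - math.tanh(gamma * (1.0 - 2.0 * j / N2)) / math.tanh(gamma)
            for j in range(N2 + 1)]
```

In both cases the spacing near the walls shrinks as the stretching parameter grows, which is exactly the clustering behaviour described above.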
Construct a hyperbolic triangle from $A$, $B$, and the center of the unit circle, $C$. Let $a$ be the length of $BC$, $b$ the length of $AC$, and $c$ the length of $AB$. Let $\alpha$ and $\gamma$ be the values of $\angle CAB$ and $\angle ACB$, respectively. The value of $c$ is given as the distance $d$:$$c = d$$ The value of $\alpha$ is determined from the angle of $A$ relative to the unit circle, $\epsilon$, and the angle corresponding to the given direction of travel, $\delta$:$$\alpha = |\pi - |\epsilon - \delta||$$ The value of $b$ can be determined from the Euclidean distance from the center of the unit circle to $A$, $d_A$:$$b = 2 \operatorname{arctanh}(d_A)$$ The value of $a$, the hyperbolic distance between $B$ and the center of the unit circle, can now be determined by the hyperbolic law of cosines:$$\cosh(a) = \cosh(b)\cosh(c) - \sinh(b)\sinh(c)\cos(\alpha)$$$$a = \operatorname{arccosh}(\cosh(b)\cosh(c) - \sinh(b)\sinh(c)\cos(\alpha))$$ The value of $\gamma$ can also be determined by the hyperbolic law of cosines:$$\cos(\gamma) = \frac{\cosh(a)\cosh(b) - \cosh(c)}{\sinh(a)\sinh(b)}$$$$\gamma = \arccos\left(\frac{\cosh(a)\cosh(b) - \cosh(c)}{\sinh(a)\sinh(b)}\right)$$ Now, add (or subtract) $\gamma$ from $\epsilon$ to determine the angle of $B$ relative to the unit circle, $\theta$. The Euclidean distance between $B$ and the center of the unit circle, $d_B$, can be determined using the exponential function:$$d_B = \frac{e^a - 1}{e^a + 1}$$ Using this distance, the coordinates $x_B$ and $y_B$ of $B$ can now be determined using basic trigonometric functions:$$x_B = d_B \cos(\theta)$$$$y_B = d_B \sin(\theta)$$ The coordinates of the destination point $B$ are $(x_B, y_B)$.
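The construction above can be collected into one routine. This is a sketch under the stated conventions: the sign choice for adding or subtracting $\gamma$ is fixed to one of the two mirror images, and $A$ must not coincide with the centre (otherwise $\sinh(b) = 0$):

```python
import math

def destination(A, delta, d):
    # B = point reached from A after travelling hyperbolic distance d in
    # direction delta, following the triangle construction above.
    x, y = A
    d_A = math.hypot(x, y)            # Euclidean distance of A from C
    eps = math.atan2(y, x)            # angle of A relative to the unit circle
    b = 2.0 * math.atanh(d_A)         # hyperbolic length of AC
    c = d                             # hyperbolic length of AB
    alpha = abs(math.pi - abs(eps - delta))
    # hyperbolic law of cosines for the side a = BC
    cosh_a = (math.cosh(b) * math.cosh(c)
              - math.sinh(b) * math.sinh(c) * math.cos(alpha))
    a = math.acosh(max(1.0, cosh_a))  # clamp guards against roundoff
    # law of cosines again for the angle gamma = angle ACB
    cos_g = ((math.cosh(a) * math.cosh(b) - math.cosh(c))
             / (math.sinh(a) * math.sinh(b)))
    gamma = math.acos(max(-1.0, min(1.0, cos_g)))
    theta = eps + gamma               # "add" branch of the two mirror images
    d_B = (math.exp(a) - 1.0) / (math.exp(a) + 1.0)   # = tanh(a/2)
    return d_B * math.cos(theta), d_B * math.sin(theta)
```

A quick sanity check: starting at $(0.5, 0)$ and travelling distance $0.5$ straight towards the centre should land on the same diameter, exactly $0.5$ closer in the hyperbolic metric.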
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover which consists of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$ and we can similarly construct a countable cover that has no finite subcover. By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact. I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure. The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{Q}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure; that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure. What I hope to get from such a more direct computation is deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set. Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$ which will fulfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure. We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$. Now, I am still not understanding why, by doing what we have done, we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$? $\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check it is convergent, and then compute its value. The above working is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$, where $\lambda^*$ is the Lebesgue outer measure. What I hope from such a more direct computation is to get deeper rigorous and intuitive insight into what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set. Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$ (cont.) We first observed that the above countable sum is an alternating series.
Therefore, we can use some machinery for checking the convergence of an alternating series. Next, we observed that the terms in the alternating series are monotonically increasing and bounded from above and below by $b$ and $a$ respectively. Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals, and together, let the differences be $c_i = q_{n(i)} - q_{m(i)}$. These form a series that is bounded from above and below. Hence: $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$ Consider the partial sums of the above series. Note every partial sum is telescoping since in a finite series addition associates and thus we are free to cancel out. By the construction of the cover $C$, every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum, by moving through the stages of the construction of $C$, i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable series is also telescoping and: @AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the definite integral of $d_n$. So they are the same thing but re-expressed differently. If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies; if however the function is continuous, then you can say stuff about the topologies. @Overflow2341313 Could you send a picture or a screenshot of the problem? nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals.
This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum. So there are only countably many disjoint intervals in the cover $C$. @Perturbative Okay, similar problem if you don't mind guiding me in the right direction. If a function f exists, with the same setup (X, t) -> (Y, S), that is 1-1, open, and continuous but not onto, construct a topological space which is homeomorphic to the space (X, t). Simply restrict the codomain so that it is onto? Making it bijective and hence invertible. hmm, I don't understand. While I do start with an uncountable cover and use the axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird... In a schematic, we have the following; I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the Cantor set: @Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology t on R such that f: (R, U) -> (R, t) defined by f(x) = x^2 is an open map, where U is the "usual" topology defined by U = {x in U | x in U implies that x in (a,b) \subseteq U}. To do this... the smallest t can be is the trivial topology on R - {\emptyset, R}. But, we required that everything in U be in t under f?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous and bijective but do not have a continuous inverse. I'm not sure if adding the additional condition that $f$ is an open map will make a difference. For those who are not very familiar with this interest of mine, besides the maths, I am also interested in the notion of a "proof space", that is, the set or class of all possible proofs of a given proposition and their relationships. An element of a proof space is a proof, which consists of steps forming a path in this space. For that I have a postulate that given two paths $A$ and $B$ in proof space with the same starting point and a proposition $\phi$: if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or $B$ is unprovable under the current formal system. Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
I have been trying to find the Gaussian curvature of a pseudosphere. I assumed the parametrization: X(u,v) = (cos(u)*sech(v), sin(u)*sech(v), v - tanh(v)). I know that it's a surface of revolution obtained from the tractrix. I could find it by calculating the coefficients of the first and second fundamental forms. But I observed that its curvature should be the product of the principal curvatures, that is, the curvature of the circle of radius sech(v) and of the tractrix, which, as I discovered, is csch(v). But I should find that this product is -1, that is, the curvature of the pseudosphere. What is wrong with my reasoning? Your calculation of the curvature of the tractrix is fine. The problem comes from the other part: the plane in which the circle $(\cos{u}\sech{v},\sin{u}\sech{v},v-\tanh{v})$ (with constant $v$) lies is not one of the planes normal to the surface at the point where you want to measure the curvature (this is easy to see in the case of a surface with radius decreasing rapidly: the plane must depend on the gradient of the radial component). Instead, we need to look at the curvature of a curve $\gamma(t)$ in a plane parallel to the surface normal, which is easily found to be $$ n=(\cos{u}\tanh{v},\sin{u}\tanh{v},\sech{v}). $$ If the curve is in both the surface and a plane parallel to this vector, it is easy to see that the normal vector to the curve must be parallel to the surface normal. (E.g., the curve's tangent vector is perpendicular to the surface normal by definition of the surface normal, the binormal is constant (and perpendicular to the plane and thus the surface normal) as the curve is planar, so the curve's normal vector, being perpendicular to both of these, lies in the plane and is perpendicular to the perpendicular of the surface normal, and hence parallel to it.)
Therefore, all we have to do is find $ (\gamma(t)-\gamma(0)) \cdot n$, where $\dot{\gamma}(0)$ is parallel to the circle $v= \text{const}$., and expand to get to the first nonzero term near $t=0$, which is easily seen to be: $$ \begin{align} (\gamma(t)-\gamma(0)) \cdot n &= \gamma'(0) \cdot n \, t + \gamma''(0) \cdot n \frac{t^2}{2} +O(t^3) \\ &= s'(0)T(0) \cdot n t + (s''(0)T(0)+ s'(0)^2\kappa(0) N(0) ) \cdot n \frac{t^2}{2} + O(t^3) \\ &= s'(0)^2 \kappa \frac{t^2}{2} + O(t^3), \end{align} $$ by the definition of curvature, where $s$ is the arclength parameter. The symmetry shows that we only need to do the calculation for $u=0$, so $$ n=(\tanh{v},0,\sech{v}), $$ and some boring calculation later, we find that $$ s'(0)^2 \kappa(0) = \sech{v}\tanh{v} \, (-U'(0)^2+V'(0)^2). $$ Of course, $s'(0)^2$ is the square of the length of $\gamma'(0)$, which is also easy to calculate, as $$ \lVert \gamma'(0) \rVert^2 = \sech^2{v} \, (U'(0)^2+\sinh^2{v} \, V'(0)^2) $$ (you can easily fill this in yourself, with enough differentiation and application of trigonometric and hyperbolic identities). Therefore, $$ \kappa(0) = \sinh{v} \, \frac{-U'(0)^2+V'(0)^2}{U'(0)^2+\sinh^2{v} \, V'(0)^2}. $$ To find the principal curvatures, one has to maximise and minimise this homogeneous function, but in this case, it's easy, with the minimum obviously when $V'(0)=0$ (i.e., parallel to the circle), the maximum when $U'(0)=0$ (parallel to the tractrix), with values $-\sinh{v}$ and $1/\sinh{v}$ respectively. The latter you have already, and the product is $-1$ as it should be. Gauss curvature is the product of the principal curvatures of the meridian and its perpendicular line. This is not the curvature of the radius of the pseudosphere in a cylindrical coordinate system, as you incorrectly found. $v$ is the angle of rotation of a point on the asymptotic line (zero normal curvature) around the symmetry axis.
Parametrization of an asymptotic line in space is: $$ [\text{sech } v \cos v, \text{sech } v \sin v , ( v- \text{tanh } v)]$$ You have correctly included $u$ to describe the pseudospherical surface obtained by rotating the above asymptotic line about the z-axis. Euler's theorem: $$ k_n= k_1\cos^2 \psi+ k_2\sin^2 \psi $$ Principal radii of curvature: $$ (R_1,R_2)= ( -\cot \phi ,\ \tan \phi) $$ where $\phi $ is the angle of slope to the axis of symmetry. Also some properties: $$ \phi =\psi ; \quad v = s /a ;$$ when the asymptotic line starts from the x-axis in projection. $a$ is the radius of the cuspidal equator, sometimes referred to as the radius of torsion.
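As a numerical sanity check of the answers above, one can compute $K = (LN - M^2)/(EG - F^2)$ for the corrected parametrization by finite differences and verify $K = -1$ away from the cusp $v = 0$. This is a sketch, not part of either answer:

```python
import math

def X(u, v):
    # pseudosphere parametrization from the question, with v (not u)
    # in the third component
    s, t = 1.0 / math.cosh(v), math.tanh(v)
    return (math.cos(u) * s, math.sin(u) * s, v - t)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def gaussian_curvature(u, v, h=1e-4):
    # central finite differences for the first and second derivatives
    Xu  = [(p - q) / (2*h) for p, q in zip(X(u+h, v), X(u-h, v))]
    Xv  = [(p - q) / (2*h) for p, q in zip(X(u, v+h), X(u, v-h))]
    Xuu = [(p - 2*q + r) / h**2 for p, q, r in zip(X(u+h, v), X(u, v), X(u-h, v))]
    Xvv = [(p - 2*q + r) / h**2 for p, q, r in zip(X(u, v+h), X(u, v), X(u, v-h))]
    Xuv = [(p - q - r + s) / (4*h*h) for p, q, r, s in
           zip(X(u+h, v+h), X(u+h, v-h), X(u-h, v+h), X(u-h, v-h))]
    E, F, G = _dot(Xu, Xu), _dot(Xu, Xv), _dot(Xv, Xv)
    n = _cross(Xu, Xv)
    norm = math.sqrt(_dot(n, n))
    n = tuple(c / norm for c in n)
    L, M, N = _dot(Xuu, n), _dot(Xuv, n), _dot(Xvv, n)
    return (L*N - M*M) / (E*G - F*F)
```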
How can I show this is the case? Since you have full specification of the sampling distribution of your observations, you can get the explicit form of the log-likelihood. Treating $\sigma$ as fixed and removing additive constants we have: $$\ell_\mathbf{x}(\theta) = -\frac{1}{2 \sigma^2} \sum_{i=1}^n (x_i - \theta)^2 \quad \quad \quad \text{for all } \theta \in \mathbb{R}.$$ From this function it is possible to derive the score function, the information function and the MLE, which means that you should be able to directly verify the equation by substituting all these items. (I will leave this work as an exercise.) Isn't the score of the MLE always zero? To understand when the score of the MLE is zero, think back to your early calculus classes. When you maximise a continuous differentiable function, this often gives a maximising value at a critical point of the function. But the maximising value is not always at a critical point. In some cases it may be at a boundary point of the function. Now, in the context of maximum-likelihood, it is common for the log-likelihood function to be strictly concave, so that there is a unique MLE at the critical point of the function --- i.e., when the score function equals zero. However, we still need to be careful that this is the case, and it is possible in some cases that the MLE will occur at a boundary point. Remember that there is nothing special about maximum likelihood analysis --- mathematically it is just a standard optimisation problem involving a log-likelihood function, and it is solved via ordinary optimisation techniques. Now, in this particular case, it turns out that the above log-likelihood function is strictly concave (show this by looking at its second-derivative) and so the MLE occurs at the unique critical point of the function. Thus, in this case, it is indeed correct that we find the MLE by setting the score function to zero (and so obviously the score of the MLE is equal to zero in this case). 
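A quick numerical sketch of the exercise left in the answer (simulated data, made-up names): the score evaluated at the MLE, the sample mean, vanishes, and the log-likelihood is maximal there.

```python
import math, random

# simulated data from the assumed normal model (sigma treated as fixed)
random.seed(0)
sigma = 2.0
data = [random.gauss(5.0, sigma) for _ in range(1000)]

def log_likelihood(theta):
    # additive constants dropped, as in the answer
    return -sum((x - theta) ** 2 for x in data) / (2 * sigma ** 2)

def score(theta):
    # first derivative of the log-likelihood in theta
    return sum(x - theta for x in data) / sigma ** 2

# The second derivative is -n/sigma^2 < 0, so the log-likelihood is strictly
# concave and the MLE sits at the unique critical point: the sample mean.
theta_hat = sum(data) / len(data)
```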
When statisticians deal with maximum-likelihood theory, they often assume "regularity conditions", which are the conditions required to allow the log-likelihood to be expanded into a Taylor expansion, and to ensure that the MLE falls at a critical point. So if you read material on the properties of MLEs, you will often find that they are of the form, "Under such-and-such regularity conditions, such-and-such a result occurs". Do these results depend on the data actually being normally distributed? In these kinds of problems, the log-likelihood function is taken to be derived from the distribution we think the data follows. So even if the distribution of the data turns out not to be normal, the context of the problem suggests that we think it is normal, so this is the log-likelihood function we use for our analysis. Similarly, we derive the MLE as if the data were normal, even if they turn out not to be. In this particular case, all of the relevant equations you have should follow directly from the assumed form of the log-likelihood function, for all possible outcomes of the data. However, it is important to remember that the MLE is a function of the data, and so its probabilistic behaviour depends on the true distribution of the data, which might not be our assumed form. Thus, if you were to make some probabilistic statement about the MLE (e.g., that it will fall within a certain interval with a certain probability) then this would generally depend on the behaviour of the data, which would depend on its true distribution.
In my class it was said that "A tangent vector $X \in T_p(\mathbb{R}^n)$ acts on a one-form to give a real number" and "A one-form acts on a tangent vector to give a real number". Now the 'tangent space' $T_p(\mathbb{R}^n)$ is an $n$-dimensional vector space and the elements of $T_p(\mathbb{R}^n)$, which we call tangent vectors, are actually derivations, which are linear maps $w : C^{\infty}(\mathbb{R}^n) \to \mathbb{R}$ satisfying a product rule. One-forms are elements of the dual vector space $T_p^*(\mathbb{R}^n)$, which we call the cotangent space. They are, by definition of a dual vector space, linear maps from $T_p(\mathbb{R}^n)$ to $\mathbb{R}$, e.g. $f : T_p(\mathbb{R}^n) \to \mathbb{R}$. From this it is easy to see that a one-form takes as input a tangent vector and outputs a real number. However I'm having trouble seeing how a tangent vector (derivation) takes as input a one-form to output a real number, since its domain isn't even $T_p^*(\mathbb{R}^n)$.
You are correct that with a greedy target policy $\pi$, $\pi(s_t, a_t)$ always equals either $1$ or $0$. This does not mean the algorithm cannot learn though. It only means that the algorithm can only learn from the sequence of steps up until an action was taken that the target policy (which can be greedy) would never take, because only from that point on you start multiplying by $0$. In Section 4, it is described that the sum of all updates that occur during an episode can be written as: \begin{equation}\sum_{t=0}^{T-1} \alpha (\bar{R}_t^\lambda - \theta^T \phi_t)\phi_t c_t,\end{equation} where: \begin{equation}c_t = \sum_{k=0}^{t} g_k \prod_{j=k+1}^{t} \rho_j\end{equation} Suppose you want the target policy $\pi$ to be the greedy policy. Suppose, for example, that we took actions that match the greedy action at $t=0$ and $t=1$. This means that $c_0 > 0$ and $c_1 > 0$. Suppose that we took a non-greedy action at $t=2$. Then, from $t=2$ onwards, every $c_t$ will involve a multiplication by $0$, and therefore equal $0$. However, we still have non-zero $c_0$ and $c_1$, so the sum of all updates during the entire episode still consists of non-zero terms for $t=0$ and $t=1$, and we can still learn something from those steps. This obviously does still mean that learning with a greedy target policy is often quite slow though, because sooner or later you'll have a multiplication by $0$ and you'll be unable to continue learning from that sequence of actions (that is: you'll be unable to continue learning more about the value of the first state-action pair in that sequence. You can still start learning again about another state-action pair, treat it as the beginning of a new sequence). This is not a problem just with a greedy target policy though. The method has high variance whenever the target policy and the policy used to select actions are very different from each other. 
Whenever the policies are very different from each other, the multiplications can rapidly get very close to $0$ if you keep playing actions that are unlikely according to the target policy, or rapidly become way too large if, by chance, you happen to keep playing many actions that were unlikely according to the policy $b$, but coincidentally are all really likely according to the target policy. The assumption that Sean was talking about holds for the policy used to select actions (the policy $b$), this assumption is not required for the target policy $\pi$. $b(s_t, a_t)$ needs to be nonzero for any $s_t$, $a_t$ pair, because otherwise you get a division by zero.
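A tiny sketch of the point about the running product of importance ratios $\rho_j = \pi/b$ (generic off-policy importance sampling, not the paper's exact $c_t$): once an action the greedy target would never take occurs, every later product is zero.

```python
def running_importance_ratios(target_probs, behaviour_probs):
    # rho_t = pi(s_t, a_t) / b(s_t, a_t); the running product multiplies the
    # contribution of everything learned about the start of the sequence
    out, prod = [], 1.0
    for p, b in zip(target_probs, behaviour_probs):
        prod *= p / b
        out.append(prod)
    return out
```

Note that the ratios before the non-greedy step can also grow large (here 2, then 4), which is the variance problem mentioned above.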
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Yes, if $X$ is smooth and $\bar X\cap \mathbb P^{n-1}$ is smooth in the scheme-theoretic sense, then $\bar X$ is indeed smooth. In other words if $s\in \bar X$ is singular, so is $s\in \bar X\cap H $ for any hyperplane $s\in H\subset \mathbb P^{n}$. Beware however that you have to take scheme-theoretic intersections. For example if $\bar X$ is the curve $z^2y=x^3$ in $\mathbb P^{2} $, then that curve is smooth in the affine plane $z\neq 0$ and its intersection with the line at infinity $H$ given by $z=0$ is the single point $s=[0:1:0]$. So, naïvely one might think that since a single point is a smooth variety one may conclude by the above that $\bar X$ is smooth. In reality $\bar X$ has a singularity at $s$. The mistake was to not see that the intersection of $\bar X$ with $H$ is not the reduced point $s$ but $s$ with a nilpotent structure. Edit: detailed calculation Indeed the intersection of $\bar X$ with the line at infinity $H$ is best computed in the affine plane $\mathbb A^2_{x,z}=Spec (k[x,z])$ (=the points of $\mathbb P^{2}$ where we may choose $y=1$), whose coordinates are $(x,z)=[x:1:z]$. The point $s$ then has coordinates $x=0,z=0$, $\bar X$ has equation $z^2=x^3$, $H$ has equation $z=0$ and the ideal in $k[x,z]$ of the intersection $\bar X \cap H$ is $(z^2-x^3,z)=(z,x^3)$. So the intersection $\bar X \cap H$ is the affine subscheme $Spec (k[x,z]/(z,x^3)) \subset \mathbb A^2_{x,z}$, which is clearly isomorphic to $A=Spec(k[x]/(x^3))$, hence non reduced and thus singular. Geometrically $\bar X$ is a cusp cut by every projective line in three points. The line at infinity however cuts it in what was classically known as a "triple point" before the introduction of schemes. Second edit Let me add a few words to the second sentence of my answer in order to address Daniel's comment. The problem is local at $s$, so we may assume that $s\in X$ is a singularity and we must show that $X\cap H $ is singular at $s$ too.
For simplicity assume that $X$ is a hypersurface. It thus has equation $f(x_1,...,x_n)=0$, with $f(x_1,...,x_n)=q_2(x_1,...,x_n)+q_3(x_1,...,x_n)+...$ where $q_i(x_1,...,x_n)$ is homogeneous of degree $i$. The crucial point is that there is no linear term $q_1$: this is equivalent to $X$ being singular at $s$. If the hyperplane $H$ has equation $x_n=l(x_1,...,x_{n-1})$ ($l$ linear) the intersection $X\cap H$ is given by $q_2(x_1,...,l(x_1,...,x_{n-1}))+q_3(x_1,...,l(x_1,...,x_{n-1}))+...=0$ in the affine coordinates $x_1,...,x_{n-1}$ and is thus also singular since it begins with a quadratic term.
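The crucial point (no linear term at the singularity, and still none after restricting to a hyperplane) can be checked numerically on the cusp from the answer. This is a sketch; the line $z = tx$ plays the role of $H$:

```python
# In the chart y = 1 the cusp z^2 y = x^3 becomes f(x, z) = z^2 - x^3,
# which has no linear term at the origin, so the curve is singular there.
def f(x, z):
    return z * z - x ** 3

h = 1e-6
fx = (f(h, 0.0) - f(-h, 0.0)) / (2 * h)   # ~0: no linear term in x
fz = (f(0.0, h) - f(0.0, -h)) / (2 * h)   # ~0: no linear term in z

def g(x, t=2.0):
    # restriction of f to the line z = t*x: t^2 x^2 - x^3, quadratic onwards
    return f(x, t * x)

gx = (g(h) - g(-h)) / (2 * h)             # ~0: the restriction is singular too
```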
Here is one proof. Lemma. Let $X$ be a cell complex, $G$ a finite group and $G\times X\to X$ an action. Then for every field $F$ of characteristic zero,$$H^*(X/G; F)\cong H^*(X;F)^{G},$$ where the right-hand side is the ring of invariants under the $G$-action on $H^*(X)$. This lemma is an application of the "transfer"; you can find its proof for instance in Bredon's book "Compact Transformation Groups" or in these freely available notes by Alan Edmonds. (Edmonds treats only the case $F={\mathbb Q}$ but the general case is no different.) Proposition. Let $G={\mathbb Z}_2$ act as complex conjugation on an irreducible smooth complex-projective curve $X$ defined over ${\mathbb R}$. (More generally, one can allow a singular irreducible curve $X$ such that none of its singular points is real.) Then the dimension of the $G$-invariant subspace in $H^1(X; {\mathbb Q})$ is half of the dimension of $H^1(X; {\mathbb Q})$. Proof. Let $Y:= X/G$. I first consider the case when $G$ has nonempty fixed-point set in $X$. Then $X$ (as a topological space) is obtained by gluing two copies of $Y$ along a disjoint union of circles $F\subset Y$ (the projection of the fixed-point set of the action of $G$ on $X$). Then, since $\chi(F)=0$, $$\chi(X)=2\chi(Y). $$ Since $H^2(Y; {\mathbb Q})=0$, we obtain that $$\chi(X)= 2- \dim H^1(X; {\mathbb Q})= 2- 2 \dim H^1(Y; {\mathbb Q})$$ and, hence (by the Lemma), $$ \dim H^1(X; {\mathbb Q})= 2 \dim H^1(Y; {\mathbb Q})= 2 \dim H^1(X; {\mathbb Q})^G.$$ If $G$ acts freely on $X$, the proof is essentially the same. Since the Euler characteristic is multiplicative under covering maps and taking into account that $Y=X/G$ is nonorientable and, hence, has $H^2(Y; {\mathbb Q})=0$, we again obtain $$2- \dim H^1(X; {\mathbb Q})= \chi(X)= 2\chi(Y)= 2- 2 \dim H^1(Y; {\mathbb Q}).$$ Then, proceed as before. qed If $X$ has an odd number of singular points, then $\dim H^1(X; {\mathbb Q})$ is odd and, hence, the proposition cannot be true.
I leave it to somebody else to sort out the case of general complex-projective curves. Edit. Here are answers to your questions. Almost all of this one learns in a graduate course in algebraic or differential topology (plus a complex analysis class). First of all, it is a general fact of differential topology that if $G$ is a compact group acting smoothly on a manifold $M$ then the fixed-point set $Fix_M(G)$ of the action is a smooth submanifold (this was discussed several times at MSE, for instance here). If $M$ is compact then $Fix_M(G)$ is also compact. (The latter is actually a fact of general topology: a closed subset of a compact space is compact.) In the case when $M$ is a Riemann surface $X$ and $G$ is cyclic orientation-reversing (antiholomorphic), then by linearizing the action at its fixed points you see that the generator $g$ (in holomorphic coordinates near any fixed point) has the form $z\mapsto \bar{z}$. Then $Fix_M(G)$ is a 1-dimensional manifold. By the classification of 1-dimensional manifolds, every nonempty connected compact 1-dimensional manifold (without boundary) is a circle; hence, every compact 1-dimensional manifold is a finite (possibly empty) union of circles. If $Fix_X(G)$ is nonempty and $X$ is connected (which is always the case if $X$ is an irreducible smooth complex projective curve) then $Y=X/G$ is a connected manifold with nonempty boundary (the projection of the fixed-point set of $G$). Hence, $H^2(Y; {\mathbb Q})=0$. You can see this, for instance, by observing that $Y$ is homotopy-equivalent to a bouquet of circles. (One can derive this for instance from the classification of compact surfaces with boundary.) If $G$ acts freely then $Y=X/G$ is a connected manifold. (This is a general fact about properly discontinuous free group actions on manifolds: the quotient is always a manifold and the quotient map is a covering.) 
The fact that the image of a connected space under a continuous map is connected is a fact of general topology which one typically learns in a general topology class. In particular, $X/G$ cannot be a disjoint union of two homeomorphic copies of anything. If $X$ is an orientable connected manifold and $X\to X/G$ is a quotient by a free properly discontinuous action which does not preserve orientation then $Y=X/G$ is a nonorientable connected manifold. (This was discussed many times at MSE, e.g. here, here, here,....) In particular, $H^2(Y; {\mathbb Q})=0$. Lastly, consider the complex manifold $M={\mathbb C}P^n$ and let $g: M\to M$ be the complex conjugation$$(z_0: z_1:...:z_n)\mapsto (\bar{z}_0: \bar{z}_1:...:\bar{z}_n). $$Then for every complex line $L$ in $T_pM$ the map $dg_p: L\to dg_p(L)\subset T_pM$ is orientation-reversing. This is a fact of linear algebra. (Consider the lift of $g$ to ${\mathbb C}^{n+1}$.) In particular, if $X\subset M$ is a Riemann surface and $g(X)= X$, then $g: X\to X$ reverses orientation on $X$ (the orientation is induced by the complex structure of $X$).
Consider the parameter integral $$I(a)=\int_0^1\frac{\log(a+t^2)}{1+t^2}\mathrm dt\tag1$$ where $\log$ denotes the natural logarithm and $a\in\mathbb{C}$. I am struggling to evaluate this integral in closed form. I am not even sure whether such an expression exists. However, first of all let us concentrate on some particular values of $a$ for which I was actually able to evaluate the integral exactly: $$\begin{align} &a=0:&&\int_0^1\frac{\log(t^2)}{1+t^2}\mathrm dt=-2G\\ &a=1:&&\int_0^1\frac{\log(1+t^2)}{1+t^2}\mathrm dt=\frac{\pi}2\log(2)-G \end{align}$$ Here $G$ denotes Catalan's constant. The first case is just one of many integral definitions of Catalan's constant, whereas the second case can be reduced to integrals of this type by the substitution $t=\tan(y)$. Furthermore, WolframAlpha is capable of providing a closed form for the case $a=-1$: $$a=-1:\int_0^1\frac{\log(t^2-1)}{t^2+1}\mathrm dt=\frac{\pi}4\log(2)+\frac{i\pi^2}4-G$$ It seems like the general anti-derivative for the case $a=-1$ can be expressed in terms of the polylogarithm $($the term can be found within the given link but is far too complicated to be included here$)$. For other values of $a$ I was not able to get anything done. I tried to expand the $\log$ and respectively the denominator as a series, which ended up in an infinite summation of hypergeometric functions $($of the kind $_2F_1(1,k+1;k+2;-1/3)$, paired with a denominator depending on $k$$)$ that I was not able to express explicitly. Furthermore I tried to apply Feynman's trick, i.e. differentiating w.r.t. $a$ in order to get rid of the $\log$. The resulting integral was easily evaluated using partial fraction decomposition; anyway, I did not manage to find suitable bounds for the integration w.r.t. $a$ afterwards. Applying a trigonometric substitution $($to be precise $t=\tan(x)$$)$ led to the logarithmic term $\log(1+\cos^2(x))$, which I was not sure how to handle without invoking several powers of the cosine function $($i.e. 
by using the Taylor series expansion of the natural logarithm$)$. The first approach as well as the last one resulted in an infinite double summation. My knowledge about double sums, especially their evaluation, is quite weak. Maybe someone else is able to finish this up. I have doubts that it is possible to derive an explicit closed-form expression for $I(a)$. Nevertheless, for the case that the upper bound is given by $\infty$ instead of $1$ there actually exists a closed-form expression, which makes me curious: $$I(a,b,c,g)=\int^\infty_0 \frac{\log(a^2+b^2x^2)}{c^2+g^2x^2}\mathrm dx = \frac{\pi}{cg}\log\left(\frac{ag+bc}{g}\right)\tag2$$ I am not familiar with the way this quite elegant relation was deduced, since I just stumbled upon it within this post. Anyway, let me get this straight: I would highly appreciate an explicit expression for $I(a)$, maybe similar to the one given for $(2)$, even though I am not sure whether such a term exists. However, I am especially interested in the case $a=3$ for another integral I am working on right now. In addition I would be glad if someone could provide a link or a source for $(2)$, since I have absolutely no idea how to prove this formula. Thanks in advance!
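As a quick numerical sanity check of the $a=1$ value quoted above (my own snippet, not part of the question; Catalan's constant is hard-coded):

```python
import numpy as np

# Verify I(1) = (pi/2) log 2 - G by composite-trapezoid quadrature on [0, 1].
G = 0.915965594177219015  # Catalan's constant

def I(a, N=2_000_000):
    t = np.linspace(0.0, 1.0, N + 1)
    f = np.log(a + t**2) / (1 + t**2)
    h = 1.0 / N
    return h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

target = np.pi / 2 * np.log(2) - G
assert abs(I(1) - target) < 1e-10
print(round(I(1), 6))  # 0.172827
```

The same routine gives a numeric value for the $a=3$ case of interest, even without a closed form.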
Let $\mathcal{N}$ and $\mathcal{M}$ be algebras of sets on $S$ and $T$ respectively. Let $\mathcal{N}\times\mathcal{M}$ be the algebra generated by the rectangles in $S\times T$ (i.e. the sets of the form $A\times B$ where $A\in\mathcal{N}$ and $B\in\mathcal{M}$). We denote by $\mathcal{N}\triangle\mathcal{M}$ the $\sigma$-algebra generated by the algebra $\mathcal{N}\times\mathcal{M}$. I want to show that, if $\mathcal{N'}$ and $\mathcal{M'}$ are the $\sigma$-algebras generated by $\mathcal{N}$ and $\mathcal{M}$ respectively, then $$ \mathcal{N}\triangle\mathcal{M}=\mathcal{N'}\triangle\mathcal{M'}. $$ It is easy to see that $\mathcal{N}\triangle\mathcal{M}\subseteq\mathcal{N'}\triangle\mathcal{M'}$. I have trouble with the other direction. My idea is to show that $\mathcal{N'}\times\mathcal{M'}\subseteq\mathcal{N}\triangle\mathcal{M}$, but I don't know how to prove this last statement. Can someone give me a hint?
You might consider 241162 and 102575. This is actually a history of science question, on a half-century old landmark in the intellectual history of 20th century physics, whose importance cannot be overstated. Perhaps it belongs to another site. For a quartic potential, following Schwinger, the σ mass may be sent to infinity, thereby also introducing the nonlinear σ model, as the authors do in section 6. Your question of the "experimental origin" of contemplating the chiral symmetry of two flavors (before quarks or an inkling of the underlying dynamics, or effective lagrangians, which this represents, of course!) misreads the significance of the paper: it is not a phenomenological fit to data; it is a creative grand synthesis conjecture of several physical facts and ideas coming from all over the place. The stated purpose of the paper is to explain/rationalize the baffling Goldberger-Treiman formula of the time and other low energy theorems of hadronic physics in terms of PCAC, all without an inkling of QCD or quarks. G-M & L adapted this simple model to illustrate how the 3 vector currents of isospin were "almost" conserved (CVC), while the other 3 currents you could make out of the Dirac spinors of the nucleons, the axial ones, were "partially conserved" (PCAC): their divergences cannot vanish, since then the pions would not decay, but they "nearly vanish", as Feynman discovered by fiddling with weak axial currents, i.e. these are proportional to the pion fields, the hallmark of spontaneous chiral symmetry breaking. Specifically, with arrows denoting isospin indices, $$\vec{A}_\mu \sim \sigma \partial_\mu \vec{\pi} + g_A \bar{N} \gamma_\mu \gamma_5 \frac{\vec{\tau}}{2} N ~,$$ and shifting $\sigma \to f_\pi + \sigma'$ would yield a leading term of $f_\pi \partial \vec{\pi}$ for the current, so $$\langle 0| A^a_\mu(x) | \pi^b(p)\rangle \sim \delta^{ab}~ f_\pi ~ p_\mu ~e^{-ip\cdot x}, $$ whose divergence would then go like $m_\pi^2$ and so vanish for vanishing pion masses. "Nearly"... 
This, then, would embolden good theorists to theorize about the limit of vanishing pion masses, today called the "chiral limit", and perturb around it, in what is now called "chiral perturbation theory" in the explicit breaking masses... Conserved Vs and As would then be trivially unscrambled to the standard current algebra of $SU(2)_L\times SU(2)_R$ that you mention, V-A = L, V+A = R, with all Rs commuting with all Ls. Feynman and G-M and others had already deciphered the L, V-A nature of the weak interactions, pion decay, etc... Now for the Goldberger-Treiman relation, the pièce de résistance of that paper. By positing a form of the axial current related to the nucleon mass (= f !), you could relate pion decay ($f_\pi$, the weak decay constant) and the pion-nucleon Yukawa coupling ($g_A$) to each other. At the time, the confluence of weak, strong, and current-algebraic quantities into a tight relationship appeared almost miraculous, and a model that rationalized them all in the context of elegant global spontaneous symmetry breaking a godsend! Nowadays, they are mere footnotes in a QFT text, like M. Schwartz's, for instance. You might read up in Georgi's classic text. Supplementary pedantic edit on chiral perturbation: It struck me this might well offer a gratuitous teachable moment on the explicit breaking term $-c\sigma$, which you appear to appreciate, anyway, but some of us never tire of it when teaching the course... While the rest of the model is invariant under the 3 isospin and the 3 axial transformations (admire the concerted subtle invariance of the first, fermion, term under the axials and hence $SU(2)_L \times SU(2)_R$), the $-c\sigma$ term is not invariant under the axials: it shifts by $\propto c\, \vec{\theta}_A\cdot \vec{\pi}$. To lowest order in $c$, then, this extra perturbation shifts $\langle\sigma\rangle$ from $f_\pi$ to $f_\pi (1-\frac{c}{8\lambda f_\pi^3}+...)$ and so the mass of the π from 0 to $m_\pi^2 \sim c/2f_\pi$ (Dashen's theorem). 
So taking the 4-divergence of the axial current $\vec{A}_\mu(x)$ on shell produces the $m_\pi^2$ of PCAC. In contemporary QCD language, $c=m_q \Lambda^3/f_\pi$, where $\Lambda$ is a chiral condensate scale and $m_q$ is some average of Gell-Mann-Oakes-Renner quark masses. A final aside, just like theirs. On p. 708, their "note added in proof" introduces the Cabibbo angle, $\arctan (\epsilon/\sqrt{1-\epsilon^2})$, three years before Cabibbo's paper (which references this one): the fine print behind the other constants of the G-T relation mumbled about above. They relate the coupling strengths of the strange and non-strange hadronic currents to the weak leptonic one, as the sides of a right triangle.
I apologize for contributing yet another question asking about an application of CS. Here it is: Suppose $p_1, \dots ,p_n$ and $a_1,\dots,a_n$ are real numbers such that $p_i \geq 0$, $a_i > 0$ for all $i$, and $p_1 + \dots + p_n = 1$. Then $$(p_1a_1+ \dots + p_na_n)\left(\frac{p_1}{a_1}+ \dots + \frac{p_n}{a_n}\right) \geq 1$$ The author of my textbook gives the following proof: Apply Cauchy's inequality to the sequences $\sqrt{p_1a_1}, \dots, \sqrt{p_na_n}$ and $\sqrt{\frac{p_1}{a_1}}, \dots, \sqrt{\frac{p_n}{a_n}}$. (That's it.) In trying to fill in the blanks I obtained the following $$\sqrt{p_1a_1} +\dots +\sqrt{p_na_n} \leq \sqrt{p_1+\dots+p_n}\sqrt{a_1+...+a_n} = \sqrt{a_1+...+a_n}$$ and $$\sqrt{\frac{p_1}{a_1}} +\dots +\sqrt{\frac{p_n}{a_n}} \leq \sqrt{p_1+\dots+p_n}\sqrt{\frac{1}{a_1}+...+\frac{1}{a_n}} = \sqrt{\frac{1}{a_1}+...+\frac{1}{a_n}}$$ I'm not entirely sure where to go from here. Perhaps I have misunderstood what he meant by "apply Cauchy's inequality to the sequences...". Another idea I had was to note that $$(p_1a_1+ \dots + p_na_n) \leq M_a(p_1+ \dots + p_n)$$ where $M_a$ is the largest $a_i$. And, that $$\left(\frac{p_1}{a_1}+ \dots + \frac{p_n}{a_n}\right) \leq \frac{1}{m_a}(p_1+\dots+p_n)$$ where $m_a$ is the smallest $a_i$. Therefore, since $\frac{M_a}{m_a} \geq 1$ the inequality follows. I am not very confident in the correctness of this method though and would like to understand how to prove the inequality via CS as my book suggests.
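One way to read the book's hint (plus a quick numerical sanity check, neither of which is from the textbook): Cauchy-Schwarz says $(\sum_i u_iv_i)^2 \le (\sum_i u_i^2)(\sum_i v_i^2)$, and with $u_i=\sqrt{p_ia_i}$, $v_i=\sqrt{p_i/a_i}$ the inner product is $\sum_i u_iv_i = \sum_i p_i = 1$.

```python
import random

# Numerical check (not a proof): for random p_i >= 0 summing to 1 and
# random a_i > 0, verify (sum p_i a_i)(sum p_i / a_i) >= 1.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    w = [random.random() + 1e-9 for _ in range(n)]
    total = sum(w)
    p = [x / total for x in w]                 # p_i >= 0, sum p_i = 1
    a = [random.uniform(0.1, 10.0) for _ in range(n)]
    lhs = sum(pi * ai for pi, ai in zip(p, a)) * sum(pi / ai for pi, ai in zip(p, a))
    assert lhs >= 1 - 1e-12                    # allow tiny floating-point slack
```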
I want to find the sum of the first $n$ terms of this sequence: $$2,5,13,35,97,275,793,\dots\\s_n=2+5+13+35+97+\dots$$ What is the closed form formula for $s_n$? If you look at the numbers in your problem again, they are $$2,5,13,35,97,275,793,\dots\\1+1,2+3,4+9,8+27,16+81,32+243,\dots$$ so the sequence is $$2^0+3^0,\,2^1+3^1,\,2^2+3^2,\dots\implies a_n=2^{n-1}+3^{n-1}$$ $s_n$ is the sum of two geometric progressions: $$s_n=\sum_{k=0}^{n-1}(2^{k}+3^{k})=\sum_{k=0}^{n-1}2^{k}+\sum_{k=0}^{n-1}3^{k}=\frac{2^{n}-1}{2-1}+\frac{3^{n}-1}{3-1}$$ Alternatively, the sequence is: $$2,3\cdot 2-1,3\cdot 5-2,3\cdot 13-4,3\cdot 35-8,\dots$$ It satisfies the recurrence relation: $$a_n=3a_{n-1}-2^{n-2},\qquad a_1=2.$$ Divide it by $2^{n}$: $$\frac{a_n}{2^{n}}=\frac{3a_{n-1}}{2^{n}}-\frac14.$$ Denote $b_n=\frac{a_n}{2^n}$ to get: $$b_n=\frac{3}{2}b_{n-1}-\frac{1}{4},\qquad b_1=1.$$ The solution is: $$b_n=\frac{1}{3}\left(\frac{3}{2}\right)^n+\frac12.$$ Hence: $$a_n=2^nb_n=2^{n-1}+3^{n-1}.$$ Now the sum $s_n$ is calculated in the same way as in the previous solution.
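A quick numerical check of the closed forms (my own snippet, not in the original answers):

```python
# Verify a_n = 2^(n-1) + 3^(n-1), the recurrence a_n = 3 a_{n-1} - 2^(n-2),
# and s_n = (2^n - 1) + (3^n - 1)/2 against the listed terms.

def a(n):
    """n-th term, n >= 1."""
    return 2 ** (n - 1) + 3 ** (n - 1)

def s(n):
    """Sum of the first n terms via the geometric-series formula."""
    return (2 ** n - 1) + (3 ** n - 1) // 2   # 3^n - 1 is always even

terms = [2, 5, 13, 35, 97, 275, 793]
assert [a(n) for n in range(1, 8)] == terms
assert all(s(n) == sum(terms[:n]) for n in range(1, 8))
assert all(a(n) == 3 * a(n - 1) - 2 ** (n - 2) for n in range(2, 8))
print(s(7))  # 1220
```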
Last edited by Andrew Munsey, updated on June 15, 2016 at 2:10 am. Amplitude modulation (AM) is a technique used in electronics, most commonly for transmitting information wirelessly via a radio carrier wave. Amplitude modulation works by varying the strength of the transmitted signal in relation to the information being sent; for example, changes in the signal strength can be used to reflect sounds being reproduced by a speaker, or light intensity for a television pixel. (Contrast this with frequency modulation, in which the frequency of the carrier signal is varied.) In the mid-1870s, a form of amplitude modulation, initially called "undulatory currents", was the first method to successfully produce quality audio over telephone lines. Beginning in the early 1900s, it was also the original method used for radio transmissions, and it remains in use by some forms of radio communication; "AM" is often used to refer to the mediumwave broadcast band (see AM radio). As originally developed for the electric telephone, amplitude modulation was used to add audio information to the low-powered direct current flowing from a telephone transmitter to a receiver. As a simplified explanation, at the transmitting end, a telephone microphone was used to vary the strength of the transmitted current, according to the frequency and loudness of the sounds received. Then, at the receiving end of the telephone line, the transmitted electrical current affected an electromagnet, which strengthened and weakened in response to the strength of the current. In turn, the electromagnet produced vibrations in the receiver diaphragm, thus reproducing the frequency and loudness of the sounds originally heard at the transmitter. In contrast to the telephone, in radio communication what is modulated is a continuous wave radio signal (the carrier wave) produced by a radio transmitter. 
In its basic form, amplitude modulation produces a signal with power concentrated at the carrier frequency and in two adjacent sidebands. Each sideband is equal in bandwidth to that of the modulating signal and is a mirror image of the other. Thus, most of the power output by an AM transmitter is effectively wasted: half the power is concentrated at the carrier frequency, which carries no useful information (beyond the fact that a signal is present); the remaining power is split between two identical sidebands, only one of which is needed. To increase transmitter efficiency, the carrier can be removed (suppressed) from the AM signal. This produces a double-sideband suppressed carrier (DSBSC) signal. If the carrier is only partially suppressed, a double-sideband reduced carrier (DSBRC) signal results. DSBSC and DSBRC signals need their carrier to be regenerated (by a beat frequency oscillator, for instance) to be demodulated using conventional techniques. Even greater efficiency is achieved, at the expense of increased transmitter and receiver complexity, by completely suppressing both the carrier and one of the sidebands. This is single-sideband modulation (SSB), widely used in amateur radio due to its efficient use of both power and bandwidth. A simple form of AM, often used for digital data, represents information as the presence or absence of a carrier wave. This is commonly used at radio frequencies to transmit Morse code, referred to as continuous wave (CW) operation. 
In 1982, the International Telecommunication Union (ITU) designated the various types of amplitude modulation as follows:

A3E - double-sideband full carrier (the basic AM modulation scheme)
R3E - single-sideband reduced carrier
H3E - single-sideband full carrier
J3E - single-sideband suppressed carrier
B8E - independent sideband emission
C3F - vestigial sideband
Lincompex - linked compressor and expander

Suppose we wish to modulate a simple sine wave on a carrier wave. The equation for the carrier wave of frequency $\omega_c$, taking its phase to be a reference phase of zero, is $$c(t) = C \sin(\omega_c t).$$ The equation for the simple sine wave of frequency $\omega_m$ (the signal we wish to broadcast) is $$m(t) = M \sin(\omega_m t + \phi),$$ with $\phi$ its phase offset relative to $c(t)$. Amplitude modulation is performed simply by adding $m(t)$ to $C$. The amplitude-modulated signal is then $$y(t) = (C + M \sin(\omega_m t + \phi)) \sin(\omega_c t).$$ The formula for $y(t)$ above may be written $$y(t) = C \sin(\omega_c t) + \frac{M}{2}\cos(\phi + (\omega_m - \omega_c) t) - \frac{M}{2}\cos(\phi + (\omega_m + \omega_c) t).$$ The broadcast signal consists of the carrier wave plus two sinusoidal waves each with a frequency slightly different from $\omega_c$, known as sidebands. For the sinusoidal signals used here, these are at $\omega_c + \omega_m$ and $\omega_c - \omega_m$. As long as the broadcast (carrier wave) frequencies are sufficiently spaced out so that these sidebands do not overlap, stations will not interfere with one another. (This rewriting relies on the product-to-sum trigonometric identities.) 
Consider now a general modulating signal $m(t)$, which can be anything at all. The same basic rules apply: $$y(t) = [C + m(t)]\cos(\omega_c t).$$ Or, in exponential form: $$y(t) = [C + m(t)]\frac{e^{j\omega_c t} + e^{-j\omega_c t}}{2}.$$ Taking Fourier transforms, we get: $$|Y(\omega)| = \pi C\delta(\omega - \omega_c) + \frac{1}{2}M(\omega - \omega_c) + \pi C\delta(\omega + \omega_c) + \frac{1}{2}M(\omega + \omega_c),$$ where $\delta(x)$ is the Dirac delta function (a unit impulse at $x$) and capital functions indicate Fourier transforms. This has two components: one at positive frequency (centered on $+\omega_c$) and one at negative frequency (centered on $-\omega_c$). There is nothing mathematically wrong with negative frequencies, and they need to be considered here; otherwise one of the sidebands will be missing. A graphical representation of the above equation would show the modulating signal's frequency spectrum on top, followed by the full spectrum of the modulated signal, making clear the two sidebands that this modulation method yields, as well as the carrier signals (the impulses) that go with them. Clearly, an AM signal's spectrum consists of its original (2-sided) spectrum shifted up to the carrier frequency. The negative frequencies are a mathematical nicety, but are essential, since otherwise we would be missing the lower sideband in the original spectrum! 
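The sideband structure can be seen numerically (parameters below are made up for illustration, not from the article): modulating a 1000 Hz carrier with a 100 Hz tone produces spectral lines at 900, 1000, and 1100 Hz.

```python
import numpy as np

# AM signal y(t) = (C + M sin(2 pi f_m t)) sin(2 pi f_c t); its spectrum
# should have lines at the carrier f_c and the sidebands f_c - f_m, f_c + f_m.
fs = 8000.0                      # sample rate, Hz (one full second of signal)
t = np.arange(0, 1.0, 1 / fs)
f_c, f_m = 1000.0, 100.0         # carrier and modulating frequencies, Hz
C, M = 1.0, 0.5
y = (C + M * np.sin(2 * np.pi * f_m * t)) * np.sin(2 * np.pi * f_c * t)

spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
# the three dominant spectral lines:
peaks = sorted(float(f) for f in freqs[np.argsort(spec)[-3:]])
print(peaks)  # [900.0, 1000.0, 1100.0]
```

Because the window is exactly one second, each tone lands on an integer frequency bin and there is no spectral leakage.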
As already mentioned, the improvement from suppressing the carrier and one sideband can be seen clearly in the spectrum: with the carrier suppressed there are no impulses, and with a sideband suppressed the transmission bandwidth is reduced back to the original, baseband, bandwidth, a significant improvement in spectrum usage when multiple signals are to be transmitted in this way (by frequency-division multiplexing). An analysis of the power consumption of AM reveals that DSB-AM with its carrier has an efficiency of about 33%, which is very poor. The benefit of this system is that receivers are cheaper to produce. The forms of AM with suppressed carriers are found to be 100% power efficient, since no power is wasted on the carrier signal, which conveys no information. As with other modulation indices, in AM this quantity, also called modulation depth, indicates by how much the modulated variable varies around its original level. For AM, it relates to the variations in the carrier amplitude and is defined as $$h = \frac{\mathrm{peak\ value\ of\ } m(t)}{C}.$$ So if $h=0.5$, the carrier amplitude varies by 50% above and below its unmodulated level, and for $h=1.0$ it varies by 100%. Modulation depth greater than 100% is generally to be avoided; practical transmitter systems will usually incorporate some kind of limiter circuit to ensure this. (In a series of waveform plots of increasing percentage modulation, the maximum amplitude grows from one image to the next.) A wide range of different circuits have been used for AM, but one of the simplest uses anode or collector modulation applied via a transformer. 
In general, valves (tubes) are able to easily yield RF powers far in excess of what can be achieved using solid state; most high-power broadcast stations still use valves. Modulation circuit designs can be broadly divided into low and high level. In low-level modulation, a small audio stage is used to modulate a low-power stage; the output of this stage is then amplified using a linear RF amplifier. Advantages: The advantage of using a linear RF amplifier is that the smaller early stages can be modulated, which only requires a small audio amplifier to drive the modulator. Disadvantages: The great disadvantage of this system is that the amplifier chain is less efficient, because it has to be linear to preserve the modulation. Hence efficient Class C amplifiers cannot be employed. An approach which marries the advantages of low-level modulation with the efficiency of a Class C power amplifier chain is to arrange a feedback system to compensate for the substantial distortion of the AM envelope. A simple detector at the transmitter output (which can be little more than a loosely coupled diode detector) recovers the audio signal, and this is used as negative feedback to the audio modulator stage. The overall chain then acts as a linear amplifier as far as the actual modulation is concerned, though the RF amplifier itself still retains the Class C efficiency. This approach is widely used in practical medium power transmitters, such as AM broadcast transmitters. Advantages: One advantage of using Class C amplifiers in a broadcast AM transmitter is that only the final stage needs to be modulated, and that all the earlier stages can be driven at a constant level. 
These Class C stages will be able to generate the drive for the final stage for a smaller direct-current power input. However, in many designs, in order to obtain better quality AM the penultimate RF stages will need to be subject to modulation as well as the final stage. Disadvantages: A large audio amplifier will be needed for the modulation stage, at least equal to the power of the transmitter output itself. Traditionally the modulation is applied using an audio transformer, and this can be bulky. Direct coupling from the audio amplifier is also possible, though this usually requires quite a high DC supply voltage (say 30 V or more), which is not suitable for mobile units. AM radio broadcasting almost universally uses AM modulation, with narrow FM occurring above 25 MHz. See also: the Amplitude Modulation Signalling System, a digital system for adding low bitrate information to an AM signal; sidebands; and the emission types designated by the ITU. References: Newkirk, David and Karlquist, Rick (2004). Mixers, modulators and demodulators. In D. G. Reed (ed.), The ARRL Handbook for Radio Communications (81st ed.), pp. 15.1-15.36. Newington: ARRL. ISBN 0-87259-196-4. Amplitude modulation, Wikipedia: The Free Encyclopedia. Wikimedia Foundation.
I have reduced my solution of a 1D heat equation boundary value problem to the following: $$W(z, t) = \sum_{n=1}^\infty b_n \sin(\lambda_n z) e^{-\lambda_n^2 \alpha t}$$ To get the coefficients $b_n$, I apply the initial condition $W(z, 0) = T_0$, which gives the Fourier sine series: $$\sum_{n=1}^\infty b_n \sin(\lambda_n z) = T_0$$ My question is how to obtain the coefficients $b_n$ for my problem here using the integral formula for the Fourier sine series. Namely, if $$f(x) = \sum_{n = 1}^\infty b_n \sin(nx)$$ then $$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\,\mathrm dx$$ The argument of the sine function in my problem is $\lambda_n$ (some function of $n$, and not explicitly equal to $n$ as in the integral formula above). Is there a way that I am supposed to transform the argument so that the formula can be applied? Thanks kindly in advance,
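For a common special case (an assumption on my part, since the question leaves $\lambda_n$ unspecified): with Dirichlet conditions on a rod of length $L$ one gets $\lambda_n = n\pi/L$, and the $\sin(\lambda_n z)$ are orthogonal on $[0,L]$, giving $b_n = \frac{2}{L}\int_0^L T_0\sin(n\pi z/L)\,dz$. A quick numerical check of that formula:

```python
import numpy as np

# Assuming lambda_n = n*pi/L, orthogonality on [0, L] gives
#   b_n = (2/L) * integral_0^L T0 sin(n pi z / L) dz
#       = 2 T0 (1 - cos(n pi)) / (n pi)    (4 T0/(n pi) for odd n, 0 for even n)
L, T0 = 1.0, 100.0

def b(n):
    return 2 * T0 * (1 - np.cos(n * np.pi)) / (n * np.pi)

# The truncated series should reproduce T0 away from the endpoints:
z = np.linspace(0.05, 0.95, 19)
series = sum(b(n) * np.sin(n * np.pi * z / L) for n in range(1, 2001))
assert np.allclose(series, T0, rtol=1e-2)
```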
We are interested in estimating Conditional Quantile Treatment Effects on the Treated (QTT) with two periods of panel data (or repeated cross sections) under a Difference in Differences assumption. These are defined by \[ CQTT_x(\tau) = F^{-1}_{Y_{1t}|X=x,D=1}(\tau) - F^{-1}_{Y_{0t}|X=x,D=1}(\tau) \] for \(\tau \in (0,1)\), where \(Y_{1t}\) are treated potential outcomes in period \(t\), \(Y_{0t}\) are untreated potential outcomes in period \(t\), and \(D\) indicates whether an individual is a member of the treated group or not. We are also thinking about the case where \(X\) is discrete. The identification challenge is to obtain the counterfactual conditional distribution of untreated potential outcomes for the treated group: \(F_{Y_{0t}|X=x, D=1}(y)\). This method is built for the standard DID case where a researcher has access to two periods of data, no one is treated in the first period \(t-1\), and the treated group is treated in the last period \(t\). Assumption 1 (Distributional Difference in Differences) \[ \Delta Y_{0t} \perp D | X\] This is an extension of the conditional mean DID assumption (\(E[\Delta Y_{0t}|X=x, D=1] = E[\Delta Y_{0t}|X=x,D=0]\)) to full independence. Relative to DID assumptions that are not conditional on \(X\), this assumption is nice as it allows the path of outcomes to depend on covariates. For example, suppose \(Y\) is earnings. The path of earnings, in the absence of some treatment, is likely to depend on covariates such as education and age. If these are distributed differently across the treated and untreated groups, then an unconditional DID assumption is unlikely to hold, but Assumption 1 will. Alone, Assumption 1 is not strong enough to identify the CQTT. We also impose the following additional assumption. 
Assumption 2 (Copula Invariance Assumption) \[ C_{\Delta Y_{0t}, Y_{0t-1} | X=x,D=1}(u,v) = C_{\Delta Y_{0t}, Y_{0t-1} | X=x,D=0}(u,v) \] This assumption says that the dependence between the change in outcomes and the initial level of outcomes is the same for the treated group as for the untreated group. To make things concrete, consider the earnings example again. The Copula Invariance assumption says that if we observe the biggest gains in earnings for the untreated group going to those with the highest initial earnings, then, in the absence of treatment, we would observe the same thing for the treated group. Under Assumption 1 and Assumption 2, \[ F_{Y_{0t}|X=x,D=1}(y) = E[1\{\Delta Y_t + F^{-1}_{Y_{t-1}|X=x,D=1}(F_{Y_{t-1}|X=x,D=0}(Y_{t-1})) \leq y\} | X=x, D=0] \] and then we can invert this to obtain the CQTT. The ddid2 function contains the code to implement this method. Here is an example.

## load the package
library(qte)

## Registered S3 methods overwritten by 'ggplot2':
##   method         from
##   [.quosures     rlang
##   c.quosures     rlang
##   print.quosures rlang

## load the data
data(lalonde)

## Run the ddid2 method with no covariates
dd1 <- ddid2(re ~ treat, t=1978, tmin1=1975, tname="year",
             data=lalonde.psid.panel, idname="id", se=FALSE,
             probs=seq(0.05, 0.95, 0.05))
summary(dd1)

##
## Quantile Treatment Effect:
##
##   tau       QTE
##  0.05  10616.61
##  0.1    5019.83
##  0.15   2388.12
##  0.2    1033.23
##  0.25    485.23
##  0.3     943.05
##  0.35    931.45
##  0.4     945.35
##  0.45   1205.88
##  0.5    1362.11
##  0.55   1279.05
##  0.6    1618.13
##  0.65   1834.30
##  0.7    1326.06
##  0.75   1586.35
##  0.8    1256.09
##  0.85    723.10
##  0.9     251.36
##  0.95  -1509.92
##
## Average Treatment Effect: 2326.51
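For readers without R, here is a rough numpy sketch of the counterfactual construction (my own, not the qte package), ignoring covariates and inference; the data below are simulated purely for illustration:

```python
import numpy as np

# Control-group changes are added to a rank-preserving transform of
# first-period outcomes, then treated and counterfactual quantiles differ.
def qtt(y1_treat, y0_treat, y1_ctrl, y0_ctrl, taus):
    """Estimate QTT(tau); y1_* are period-t outcomes, y0_* are period-(t-1)."""
    n = len(y0_ctrl)
    # empirical CDF of control first-period outcomes, evaluated at themselves
    ranks = np.searchsorted(np.sort(y0_ctrl), y0_ctrl, side="right") / n
    # map those ranks through the treated group's first-period quantiles
    transformed = np.quantile(y0_treat, ranks)
    counterfactual = (y1_ctrl - y0_ctrl) + transformed
    return np.quantile(y1_treat, taus) - np.quantile(counterfactual, taus)

rng = np.random.default_rng(0)
n = 50_000
y0_c = rng.normal(0, 1, n)
y1_c = y0_c + 1 + rng.normal(0, 0.1, n)        # common trend of +1
y0_t = rng.normal(2, 1, n)                     # treated group starts higher
y1_t = y0_t + 1 + 3 + rng.normal(0, 0.1, n)    # trend +1, treatment effect +3
est = qtt(y1_t, y0_t, y1_c, y0_c, np.array([0.25, 0.5, 0.75]))
assert np.all(np.abs(est - 3.0) < 0.1)         # recovers the constant QTT of 3
```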
Can the Ricci curvature tensor be obtained by a 'double contraction' of the Riemann curvature tensor? For example $R_{\mu\nu}=g^{\sigma\rho}R_{\sigma\mu\rho\nu}$. I'm not sure what you mean by 'double contraction', but the Ricci tensor in local coordinates is given by \begin{align} R_{\mu \nu} = R^\rho_{~~\mu \rho \nu}, \end{align} which is the same as $g^{\sigma \rho} R_{\sigma \mu \rho \nu}$, exactly what you have written. Yes. The expression for the Ricci tensor is often written as (see here) $$ R_{\mu\nu} = R^{\alpha}_{\phantom\alpha \mu\alpha\nu}, $$ but the right hand side is precisely what you wrote since the metric simply raises the first index.
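As a concrete check of the equivalence (my own snippet, using sympy and the convention $R^a{}_{bcd}=\partial_c\Gamma^a{}_{bd}-\partial_d\Gamma^a{}_{bc}+\Gamma^a{}_{ce}\Gamma^e{}_{bd}-\Gamma^a{}_{de}\Gamma^e{}_{bc}$), one can verify on the unit 2-sphere that the trace $R^\rho{}_{\mu\rho\nu}$ and the double contraction $g^{\sigma\rho}R_{\sigma\mu\rho\nu}$ agree:

```python
import sympy as sp

# Unit 2-sphere, g = diag(1, sin^2(theta)); known result: R_{mu nu} = g_{mu nu}.
th, ph = sp.symbols('theta phi')
x = [th, ph]
n = 2
g = sp.Matrix([[1, 0], [0, sp.sin(th) ** 2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad}(d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                         + sp.diff(g[d, c], x[b])
                                         - sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riem(a, b, c, d):
    r = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    r += sum(Gamma[a][c][e] * Gamma[e][b][d] - Gamma[a][d][e] * Gamma[e][b][c]
             for e in range(n))
    return sp.simplify(r)

# Ricci as a trace: R_{mu nu} = R^rho_{mu rho nu}
ricci = sp.Matrix(n, n, lambda m, v: sp.simplify(sum(riem(r, m, r, v) for r in range(n))))
# Ricci as a double contraction: g^{sigma rho} g_{sigma a} R^a_{mu rho nu}
ricci2 = sp.Matrix(n, n, lambda m, v: sp.simplify(
    sum(ginv[s, r] * g[s, a] * riem(a, m, r, v)
        for s in range(n) for r in range(n) for a in range(n))))

assert sp.simplify(ricci - ricci2) == sp.zeros(n, n)
assert sp.simplify(ricci - g) == sp.zeros(n, n)  # unit sphere: R_{mu nu} = g_{mu nu}
```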
===== Rational numbers =====

==== Framework ====

$\dots,\,-\frac{4}{3},\,0,\,\frac{1}{17},\,1,\,7.528,\,9001,\dots$

The rational numbers ${\mathbb{Q}}$ can be defined as the field of characteristic 0 which has no proper sub-field. In less primitive notions, it's the field of fractions for the integral domain of [[natural numbers]]. The second order theory of rationals (see the note below) describes a countable collection.

The rationals can also be set up straightforwardly from tuples of natural numbers.

For all $m$

^ $\dfrac{1}{1-x}=\dfrac{1}{1-x\cdot x^{m}}\sum_{k=0}^m x^k$ ^
^ $\dfrac{1}{y}=\dfrac{1}{1-(1-y)\cdot(1-y)^{m}}\sum_{k=0}^m (1-y)^k$ ^

== Logic ==
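The first identity in the table is the finite geometric sum in disguise, since $1-x\cdot x^{m}=1-x^{m+1}$. A quick numerical sanity check (a Python sketch; the sample values $x=0.37$, $m=9$ are arbitrary):

```python
# Check 1/(1-x) = (1/(1 - x * x**m)) * sum_{k=0}^m x**k
# for arbitrary sample values of x and m.
x, m = 0.37, 9
lhs = 1 / (1 - x)
rhs = sum(x ** k for k in range(m + 1)) / (1 - x * x ** m)
assert abs(lhs - rhs) < 1e-12  # agree up to floating-point error
```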
Could somebody explain to me where these two formulas come from as applications of the binomial theorem? $$\sum_{k=0}^n {n \choose k}(-1)^kk^r=0$$ for non-negative integers $r\lt n$. And $$\sum_{k=0}^n {n \choose k}(-1)^kk^n=(-1)^nn!$$

They don’t come from the binomial theorem: they come from the inclusion-exclusion principle. This is easier to see if you multiply them by $(-1)^n$ to get $$\sum_{k=0}^n\binom{n}k(-1)^{n-k}k^r=\begin{cases} 0,&\text{if }r<n\\ n!,&\text{if }r=n\;. \end{cases}$$ The left-hand side counts surjections from $[r]=\{1,\ldots,r\}$ to $[n]=\{1,\ldots,n\}$. Of course this is $0$ when $r<n$ and $n!$ when $r=n$. The left-hand side can be rewritten as follows: $$\begin{align*} \sum_{k=0}^n\binom{n}k(-1)^{n-k}k^r&=\sum_{k=0}^n\binom{n}{n-k}(-1)^{n-k}k^r\\ &=\sum_{k=0}^n\binom{n}k(-1)^k(n-k)^r\\ &=n^r-\binom{n}1(n-1)^r+\binom{n}2(n-2)^r-+\ldots\;. \end{align*}$$ The first term, $n^r$, is the number of functions from $[r]$ to $[n]$. For each $k\in[n]$ there are $(n-1)^r$ functions from $[r]$ to $[n]\setminus\{k\}$, and there are $n$ possible choices for $k$; subtracting $\binom{n}1(n-1)^r$ throws out these non-surjective functions from $[r]$ to $[n]$. However, functions whose ranges miss (at least) two elements of $[n]$ get thrown out (at least) twice and have to be added back in, giving $$n^r-\binom{n}1(n-1)^r+\binom{n}2(n-2)^r\;.$$ This is now an overcount, since functions whose ranges miss (at least) three elements of $[n]$ have now been counted once, removed three times, and recounted three times: on net they’ve been counted once and need to be thrown away again. The inclusion-exclusion principle ensures that the full summation correctly accounts for everything and therefore really does give the number of surjections from $[r]$ to $[n]$.

A Proof Using the Binomial Theorem: We prove the result by induction. The binomial theorem says $$(x-1)^n = \sum_{k}{n\choose k}(-1)^{n-k} x^k.$$ Setting $x=1$ gives a proof for $r=0$. Suppose the statement is true for all exponents less than $r$.
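Both identities are easy to check numerically before proving them. A short Python sketch (purely an illustration, not part of either argument):

```python
from math import comb, factorial

# Compute sum_{k=0}^n C(n,k) (-1)^k k^r directly.
def alternating_power_sum(n, r):
    return sum(comb(n, k) * (-1) ** k * k ** r for k in range(n + 1))

n = 6
# First identity: the sum vanishes for every non-negative r < n.
assert all(alternating_power_sum(n, r) == 0 for r in range(n))
# Second identity: for r = n the sum equals (-1)^n n!.
assert alternating_power_sum(n, n) == (-1) ** n * factorial(n)
print("both identities hold for n =", n)
```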
Suppose $r\leq n$. Take the $r$th derivative of the formula above; we get $$\begin{eqnarray}n(n-1)\ldots (n-r+1)(x-1)^{n-r} &=& \sum_{k}{n\choose k}(-1)^{n-k}k(k-1)\ldots (k-r+1)x^{k-r}\\ &=& \sum_{k}{n\choose k}(-1)^{n-k}P(k)x^{k-r},\end{eqnarray}$$ where $P(k)=k^r+$ lower terms. Set $x=1$. By the induction hypothesis, the sums coming from the lower-order terms of $P(k)$ vanish, and so RHS $=\sum_k{n\choose k}(-1)^{n-k}k^r$. If $r < n$, then LHS $=0$. If $r=n$, then LHS $=n!$. This completes the proof.

Remark: A slick way to prove it is to count the number of surjections from $[r]$ to $[n]$. By the inclusion-exclusion principle, we get the number of surjections equal to $$\sum_k {n\choose k}(-1)^{n-k}k^r.$$ However, when $r<n$, there are no surjections, whereas, when $r=n$, there are $n!$ many.

Consider $\binom{k}{j}$ as a degree $k$ polynomial (combinatorial polynomial) in $k$: $$ \binom{k}{j}=\frac{k(k-1)(k-2)\cdots(k-j+1)}{j!}\tag{1} $$ It is not too difficult to see that we can write any polynomial of degree $m$ as a sum of combinatorial polynomials of degree $m$ or less. In particular, we have $$ \newcommand{\stirtwo}[2]{\left\{{#1}\atop{#2}\right\}} k^m=\sum_{j=0}^mj!\stirtwo{m}{j}\binom{k}{j}\tag{2} $$ where $\stirtwo{m}{j}$ is a Stirling Number of the Second Kind. Since $k^m$ can be written as a sum of combinatorial polynomials of degree $m$ or less, $$ n\gt m\implies\stirtwo{m}{n}=0\tag{3} $$ Furthermore, since the coefficient of $k^m$ in $m!\binom{k}{m}$ is $1$, $$ \stirtwo{m}{m}=1\tag{4} $$ Using $(2)$ in your sum yields $$ \begin{align} \sum_{k=0}^n\binom{n}{k}(-1)^kk^m &=\sum_{k=0}^n\binom{n}{k}(-1)^k\sum_{j=0}^mj!\stirtwo{m}{j}\binom{k}{j}\\ &=\sum_{j=0}^mj!\stirtwo{m}{j}\sum_{k=0}^n(-1)^k\binom{n}{k}\binom{k}{j}\\ &=\sum_{j=0}^mj!\stirtwo{m}{j}\sum_{k=0}^n(-1)^k\binom{n}{j}\binom{n-j}{k-j}\\ &=\sum_{j=0}^mj!\stirtwo{m}{j}\binom{n}{j}(-1)^j(1-1)^{n-j}\\ &=(-1)^nn!\stirtwo{m}{n}\tag{5} \end{align} $$ Equations $(3)$, $(4)$, and $(5)$ give the results sought.
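Identities $(2)$ and $(5)$ can both be verified numerically. A Python sketch (the Stirling numbers are computed from the standard recurrence, not from a library):

```python
from math import comb, factorial

# Stirling numbers of the second kind via the recurrence
# S(m,j) = j*S(m-1,j) + S(m-1,j-1), with S(0,0) = 1.
def stirling2(m, j):
    if m == j:
        return 1
    if j == 0 or j > m:
        return 0
    return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

m = 5

# Identity (2): k^m = sum_j j! S(m,j) C(k,j)
for k in range(10):
    assert k ** m == sum(factorial(j) * stirling2(m, j) * comb(k, j)
                         for j in range(m + 1))

# Identity (5): sum_k C(n,k) (-1)^k k^m = (-1)^n n! S(m,n);
# note the right side vanishes for n > m, matching (3).
for n in range(8):
    lhs = sum(comb(n, k) * (-1) ** k * k ** m for k in range(n + 1))
    assert lhs == (-1) ** n * factorial(n) * stirling2(m, n)
print("identities (2) and (5) verified")
```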
Rather than try to interpret this as a direct application of the binomial formula, I think it is better to recognise summations of the form $\sum_k(-1)^k\binom nkf(x+k)$ or $\sum_k(-1)^k\binom nkf(x-k)$ as coming from repeated finite differences of $f$. In your example it is $f(x)=x^r$ for fixed $0\leq r\leq n$, taken eventually at $x=0$. However this also occurs in different guises in this question and another and one similar to this one (and maybe in others I failed to find).

On the space of functions defined at integer (or non-negative integer) arguments, define the forward difference operator $\Delta$ by $$ \Delta(f)=\bigl(x\mapsto f(x+1)-f(x)\bigr) \qquad \text{for any $f:\Bbb Z\to\Bbb R$} $$ Then one has $\Delta=S-I$ where $S$ is the shift operator $f\mapsto\bigl(x\mapsto f(x+1)\bigr)$ and $I$ is the identity $f\mapsto \bigl(x\mapsto f(x)\bigr)=f$; since these operators commute one can apply the binomial formula to get $$ \Delta^n(f) = \sum_{k=0}^n\binom nk(-I)^{n-k}S^k(f) = \left(x\mapsto \sum_{k=0}^n(-1)^{n-k}\binom nkf(x+k) \right) . $$ For the purpose of recognition it is useful to have a variant where the exponent of $-1$ matches the lower index in the binomial coefficient: $$ \sum_{k=0}^n(-1)^k\binom nkf(x+k) = (-1)^n\Delta^n(f)(x) $$

Now the point that makes this easy to compute in certain situations, like that of the question, is that $\Delta$ lowers the degree of polynomial functions, killing constant ones, and multiplies the leading coefficient by the degree just like differentiation does. This means that with $f:x\mapsto x^r$ and $0\leq r\leq n$ one has $\Delta^n(f)=(x\mapsto 0)$ when $r<n$, while $\Delta^n(f)=(x\mapsto n!)$ when $r=n$. This gives your two equations.
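The finite-difference viewpoint translates directly into code. A Python sketch that iterates the forward difference operator and confirms the two cases (an illustration, not part of the argument):

```python
# Forward difference operator Δ: f ↦ (x ↦ f(x+1) - f(x))
def delta(f):
    return lambda x: f(x + 1) - f(x)

# Apply Δ n times.
def delta_n(f, n):
    for _ in range(n):
        f = delta(f)
    return f

n = 5
# r < n: Δ^n annihilates x^r, so the alternating sum is 0.
for r in range(n):
    assert delta_n(lambda x, r=r: x ** r, n)(0) == 0
# r = n: Δ^n turns x^n into the constant n!, here 5! = 120.
assert delta_n(lambda x: x ** n, n)(0) == 120
```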
Let $f\in \mathcal R[0,1]$, and let $g:\mathbb R \to \mathbb R$ be continuous and periodic with period $1$. Is it true that $\lim_{n \to \infty}\int_0 ^1 f(x)g(nx)dx=\Big(\int_0^1f(x)dx\Big)\Big(\int_0^1 g(x)dx\Big)$?

Put $\displaystyle u_n=\int_0^1f(x)g(nx)dx$. We have by the change of variable $u=nx$: $$u_n=\int_0^n f(\frac{u}{n})g(u)\frac{du}{n}=\frac{1}{n}\sum_{k=0}^{n-1}\int_k^{k+1}f(\frac{u}{n})g(u)du$$ But as $g$ has period $1$: $$\int_k^{k+1}f(\frac{u}{n})g(u)du=\int_0^1f(\frac{t+k}{n})g(t+k)dt=\int_0^1f(\frac{t+k}{n})g(t)dt$$ Put $\displaystyle T_n(t)=\frac{1}{n}\sum_{k=0}^{n-1}f(\frac{t+k}{n})$. We have hence $\displaystyle u_n=\int_0^1 T_n(t)g(t)dt$. Now $T_n(t)$ is a Riemann sum for $f$ (On $\displaystyle I_k=[\frac{k}{n},\frac{k+1}{n}]$, we have $ {\rm Inf}_{u\in I_k}f(u)\leq f(\frac{t+k}{n})\leq {\rm Max}_{u\in I_k}(f(u))$). Hence $\displaystyle T_n(t)\to L=\int_0^1 f(t)dt$. Now there exists $M$ such that $|f(u)|\leq M$ for all $u$, hence we get $\displaystyle |T_n(t)g(t)|\leq M|g(t)|$, and by the dominated convergence theorem, we get $\displaystyle u_n\to \int_0^1Lg(t)dt$, and we are done.
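The limit can be illustrated numerically. A Python sketch with the arbitrary choices $f(x)=x$ and $g(x)=\sin^2(2\pi x)$ (these specific functions are assumptions for illustration, not part of the question), using a plain midpoint rule:

```python
from math import sin, pi

# Midpoint-rule approximation of the integral of h over [0, 1].
def midpoint(h, N=100_000):
    return sum(h((i + 0.5) / N) for i in range(N)) / N

f = lambda x: x                        # integral 1/2
g = lambda x: sin(2 * pi * x) ** 2     # period 1, integral 1/2

target = midpoint(f) * midpoint(g)     # product of the two integrals, 1/4
u_50 = midpoint(lambda x: f(x) * g(50 * x))

print(abs(u_50 - target))              # already tiny at n = 50
```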
I started to study metric spaces at uni and am confused by the definitions of open and closed sets. It seems to me that being an open set always satisfies the definition of a closed set.

A topology $\tau$ on a space $X$ is just a collection of sets which are specified as being open (by definition). When $X$ is a metric space, a set $U$ being open is equivalent to the existence, for any $x\in U$, of an epsilon ball $B(x,\epsilon)\subset U$ centered at $x$ and contained in $U$. Always, for any topological space (whether metric or not), the complement of an open set is defined to be closed. It follows immediately that the complement of a closed set is open.

In metric topology every point in an open set is the center of an open ball which is contained in the open set. That is, points come with an open ball around them. On the other hand, a set is closed if its complement is open. Thus the complements of open sets are closed sets and the complements of closed sets are open sets. For example, the open interval $(-1,1)$ is open and its complement $$(-\infty , -1] \cup [1, \infty )$$ is closed. The closed interval $[-1,1]$ is closed and its complement $$(-\infty , -1) \cup (1, \infty )$$ is open.
In chemistry, the mass fraction w_i is the ratio of the mass m_i of one substance to the mass of the total mixture m_{tot}, defined as [1]

w_i = \frac {m_i}{m_{tot}}

The sum of all the mass fractions is equal to 1:

\sum_{i=1}^{N} m_i = m_{tot} ; \sum_{i=1}^{N} w_i = 1

Mass fraction can also be expressed, with a denominator of 100, as percentage by mass (frequently, though erroneously, called percentage by weight, abbreviated wt%). It is one way of expressing the composition of a mixture as a dimensionless quantity; mole fraction (percentage by moles, mol%) and volume fraction (percentage by volume, vol%) are others.

For elemental analysis, mass fraction (or "mass percent composition") can also refer to the ratio of the mass of one element to the total mass of a compound. It can be calculated for any compound using its empirical formula [2] or its chemical formula. [3]

Terminology

"Percent concentration" does not refer to this quantity. This improper name persists, especially in elementary textbooks. In biology, the unit "%" is sometimes (incorrectly) used to denote mass concentration, also called "mass/volume percentage." A solution with 1 g of solute dissolved in a final volume of 100 mL of solution would be labeled as "1 %" or "1 % m/v" (mass/volume). This is incorrect because the unit "%" can only be used for dimensionless quantities. Instead, the concentration should simply be given in units of g/mL.

"Percent solution" or "percentage solution" are thus terms best reserved for "mass percent solutions" (m/m = m% = mass solute/mass total solution after mixing), or "volume percent solutions" (v/v = v% = volume solute per volume of total solution after mixing). The very ambiguous terms "percent solution" and "percentage solution" with no other qualifiers continue to be encountered occasionally.

In thermal engineering, vapor quality is used for the mass fraction of vapor in the steam.
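As a worked example of mass percent composition from a chemical formula, the following Python sketch computes the elemental mass fractions of water; the atomic masses used are rounded standard values.

```python
# Elemental mass fractions ("mass percent composition") of water, H2O.
# Atomic masses in g/mol are rounded standard values.
masses = {"H": 1.008, "O": 15.999}
formula = {"H": 2, "O": 1}

M_total = sum(n * masses[el] for el, n in formula.items())
w = {el: n * masses[el] / M_total for el, n in formula.items()}

print({el: round(wi, 4) for el, wi in w.items()})  # H ≈ 0.1119, O ≈ 0.8881
assert abs(sum(w.values()) - 1.0) < 1e-12          # fractions sum to 1
```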
In alloys, especially those of noble metals, the term fineness is used for the mass fraction of the noble metal in the alloy.

Properties

The mass fraction is independent of temperature.

Related quantities

Mass concentration

The mass fraction of a component in a solution is the ratio of the mass concentration \rho_i of that component (density of that component in the mixture) to the density of the solution \rho:

w_i = \frac {\rho_i}{\rho}

Molar concentration

The relation to molar concentration follows from the above by substituting the relation between mass and molar concentration:

w_i = \frac {\rho_i}{\rho}=\frac {c_i M_i}{\rho}

Mass percentage

Multiplying the mass fraction by 100 gives the mass percentage. It is sometimes called weight percent (wt%) or weight-weight percentage.

Mole fraction

The mole fraction x_i can be calculated using the formula

x_i = w_i \cdot \frac {M}{M_i}

where M_i is the molar mass of the component i and M is the average molar mass of the mixture. Replacing the expression for the average molar mass produces:

x_i = \frac {\frac{w_i}{M_i}}{\sum_i \frac{w_i}{M_i}}

Spatial variation and gradient

In a spatially non-uniform mixture, the mass fraction gradient triggers the phenomenon of diffusion.
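The last formula converts mass fractions to mole fractions directly. A Python sketch for an assumed ethanol–water mixture (the composition and molar masses below are illustrative values, not data from this article):

```python
# Mass fractions -> mole fractions via x_i = (w_i/M_i) / sum_j (w_j/M_j).
# The mixture composition and molar masses (g/mol) are assumed values.
w = {"ethanol": 0.40, "water": 0.60}   # mass fractions; must sum to 1
M = {"ethanol": 46.07, "water": 18.015}

denom = sum(w[i] / M[i] for i in w)
x = {i: (w[i] / M[i]) / denom for i in w}

print({i: round(xi, 4) for i, xi in x.items()})
assert abs(sum(x.values()) - 1.0) < 1e-12  # mole fractions also sum to 1
```

Because water is much lighter than ethanol, its mole fraction ends up well above its mass fraction, which is exactly what the formula predicts.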