Here is the main body of the question: suppose $\mu$ is a regular Radon measure on $\mathbb R^n$, and let $B(x,r)$ denote the closed ball in $\mathbb R^n$. (1) Prove that $$\limsup_{y\to x} \mu(B(y,r)) \le \mu (B(x,r)).$$ (2) Give an example showing that the "$\le$" can be a strict "$<$". Here is my confusion: what is a regular Radon measure? We all know that a Radon measure is inner regular and locally finite, and "regular" means inner and outer regular; does that mean a regular Radon measure is actually a regular Borel measure? Also, I don't know how to prove this inequality.
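For what it's worth, here is a sketch of one way to approach both parts (my own outline, under the reading that "regular Radon" means outer and inner regular and locally finite):

```latex
% Sketch (my own, not part of the question).
% (1): closed balls are monotone in the radius, so for every y,
%   B(y,r) \subseteq B(x, r + |x-y|), hence
\limsup_{y\to x}\, \mu\big(B(y,r)\big)
  \;\le\; \inf_{\delta>0} \mu\big(B(x,r+\delta)\big)
  \;=\; \mu\big(B(x,r)\big),
% where the last equality is continuity from above, using
% B(x,r) = \bigcap_{\delta>0} B(x,r+\delta) and local finiteness.
%
% (2): on \mathbb{R} take \mu = \delta_{-r} + \delta_{r} and x = 0:
%   \mu(B(0,r)) = 2, but every ball B(y,r) with y \neq 0 contains
%   only one of the two atoms, so the limsup equals 1 < 2.
```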
In general, structural time series (STS) models (either frequentist or Bayesian) can be written as a system of equations. The simplest possible example of an STS model is the local level model, given by $$\begin{array}{l} {\mu _t} = {\mu _{t - 1}} + {\xi _t}\\ {y_t} = {\mu _t} + {\varepsilon _t} \end{array}$$ where ${\xi _t} \sim {\cal N}(0,\sigma _\xi ^2)$ and ${\varepsilon _t} \sim {\cal N}(0,\sigma _\varepsilon ^2)$. Here, the first equation is called the state equation and ${\mu _t}$ is an unobserved variable. Though ${\mu _t}$ is not observed, the second equation, called the observation equation, depends on ${\mu _t}$ and contains the variable ${y_t}$, which is based on observed data (roughly speaking, ${\mu _t}$ can be interpreted as a time-dependent version of the intercept of a simple linear regression). The general form of the BSTS model is more convoluted, so let us consider the following case: $$\begin{array}{l} {y_t} = {\mu _t} + {\tau _t} + {\beta ^T}{{\vec x}_t} + {\varepsilon _t}\\ {\mu _t} = {\mu _{t - 1}} + {\delta _{t - 1}} + {\eta _t}\\ {\delta _t} = {\delta _{t - 1}} + {\omega _t}\\ {\tau _t} = - \sum\nolimits_{s = 1}^{S - 1} {\tau _{t - s}} + {\gamma _t} \end{array}$$ where the errors have properties similar to those in the local level model. The variables have the following meanings: ${\mu _t}$ is the local level (trend), ${\delta _t}$ is the slope of the trend, and ${\tau _t}$ is a seasonal component with $S$ seasons. The last variable, namely ${{\vec x}_t}$, is of crucial importance in this discussion. It is a vector time series that can be used to predict ${y_t}$, and the coefficients $\beta$ are estimated via the spike-and-slab Bayesian method of feature selection. To use this system of equations to predict BP stock prices after the oil spill, we need to identify seasonal effects and trends and choose ${{\vec x}_t}$. An important condition that must be obeyed is that ${{\vec x}_t}$ must (ideally) be highly correlated with ${y_t}$, but cannot have been impacted by the same factors that impacted ${y_t}$. Let us apply the CausalImpact package without going into much detail for now.
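To make the local level model concrete, here is a minimal simulation sketch in R; the sample size, seed and the two variances are illustrative assumptions, not values from any fitted model:

```r
# Local level model: mu_t = mu_{t-1} + xi_t (state), y_t = mu_t + eps_t (observation)
set.seed(42)                           # illustrative seed
n         <- 200
sigma_xi  <- 0.1                       # state noise s.d. (assumed)
sigma_eps <- 0.5                       # observation noise s.d. (assumed)
mu <- cumsum(rnorm(n, 0, sigma_xi))    # random-walk level, starting from 0
y  <- mu + rnorm(n, 0, sigma_eps)      # noisy observations of the level
```

Fitting such a model then amounts to estimating $\sigma_\xi^2$ and $\sigma_\varepsilon^2$ and filtering the unobserved $\mu_t$ out of the observed $y_t$.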
First, let us set things up. Some of the packages we need to install are:

- devtools, to be able to download packages from GitHub;
- magrittr, for code readability and maintainability;
- dplyr, for data manipulation;
- rga, to obtain data from the Google Analytics (GA) APIs.

We also need to provide the id corresponding to our view from GA.

# install.packages("devtools")
# install.packages("magrittr")
# install.packages("dplyr")
# install.packages("curl")
# install_github("skardhamar/rga")
# install.packages(c("bitops", "jsonlite", "httr"), repos = 'http://cran.us.r-project.org')
# install.packages("tidyverse")
# install.packages("tidyr")
library(devtools)
library(magrittr)
library(dplyr)
library(lubridate)
library(curl)
library(tidyverse)
library(tidyr)

Reading the csv file containing the BP stock prices:

stocks <- read.csv('stocks.csv')
head(stocks)

date        bp    nasdaq
2009-01-02  47.0  15.79
2009-01-05  48.3  16.21
2009-01-06  48.9  16.42
2009-01-07  48.0  16.22
2009-01-08  48.3  15.90
2009-01-09  47.9  16.17

The oil spill date is indicated by the dashed vertical line.

par(mfrow = c(2, 1))
plot(stocks$nasdaq, col = "red", type = "l", xlim = c(200, 500), ylim = c(20, 26))
abline(v = 326, lwd = 1, lty = 2)
plot(stocks$bp, col = "blue", type = "l", xlim = c(200, 500))
abline(v = 326, lwd = 1, lty = 2)

The CausalImpact package is based on BSTS models. These are used to build synthetic controls, i.e. scenarios that would have occurred without the intervention. This technique allows one to estimate the causal effect of the intervention and its subsequent time evolution. We give the package the following two inputs: the response series (the BP stock price) and the control series (the NASDAQ index), together with the pre- and post-intervention periods. CausalImpact then builds a BSTS model which is used to predict the counterfactual. Note that one must be careful to choose a control series unaffected by the intervention.
library(CausalImpact)
stocks <- stocks[, c('bp', 'nasdaq')]
pre.period <- c(1, 325)
post.period <- c(326, 504)

impact <- CausalImpact(data = stocks, pre.period = pre.period, post.period = post.period)
summary(impact)

Posterior inference {CausalImpact}

                          Average        Cumulative
Actual                    41             7250
Prediction (s.d.)         57 (2.7)       10146 (480.6)
95% CI                    [51, 62]       [9195, 11113]

Absolute effect (s.d.)    -16 (2.7)      -2896 (480.6)
95% CI                    [-22, -11]     [-3863, -1945]

Relative effect (s.d.)    -29% (4.7%)    -29% (4.7%)
95% CI                    [-38%, -19%]   [-38%, -19%]

Posterior tail-area probability p: 0.00106
Posterior prob. of a causal effect: 99.89384%

For more details, type: summary(impact, "report")

par(mfrow = c(1, 2))
plot(impact)

Warning message: “Removed 504 rows containing missing values (geom_path).”
Warning message: “Removed 1008 rows containing missing values (geom_path).”

The difference between the data and the prediction post-intervention is the causal effect of the intervention. The three panels show, respectively:

- original: the observed data together with the counterfactual prediction for the post-intervention period;
- pointwise: the difference between the observed data and the counterfactual prediction, i.e. the pointwise causal effect;
- cumulative: the cumulative causal effect, obtained by summing up the pointwise effects.
$\newcommand{\t}{[\text{time}]}\newcommand{\e}{[\text{energy}]}\newcommand{\a}{[\text{angle}]}\newcommand{\l}{[\text{length}]}\newcommand{\d}[1]{\;\mathrm{d} #1}$ Dimensions vs Units: I want to take an educated guess as to why angles are considered to be dimensionless when doing a dimensional analysis. Before doing that, you should note that angles do have units; they are just dimensionless. The definition of a unit of measurement is as follows: A unit of measurement is a definite magnitude of a physical quantity, defined and adopted by convention or by law, that is used as a standard for measurement of the same physical quantity. There are in fact a lot of units for measuring angles, such as the radian, the degree, the minute of arc, the second of arc, etc. You can take a look at this Wikipedia page for more information about units of angles. The dimension of a quantity is an abstract notion, independent of how you measure that quantity. For example, the unit of force is the newton, which is simply $kg \cdot m/s^2$. However, the dimensions of force are $$[F] = [\text{mass}] \frac{ [\text{length}]} {\t^2}$$ sometimes denoted as $$[F] = [M] \frac{[X]}{[T]^2}$$ but I'll stick to the first convention. The difference between units and dimensions is basically that the dimensions of a quantity are unique and define what that quantity is, whereas the units of the same quantity may differ, e.g. the units of force may perfectly well be $ounce \cdot inch / ms^2$. Angles as Dimensionless Quantities: As to why we like to consider angles as dimensionless quantities, I'll give two examples and consider the consequences of angles having dimensions. As you know, the angular frequency is given by $$\omega = \frac{2 \pi} T \;,$$ where $T$ is the period of the oscillation. Let's do a dimensional analysis as if angles had dimensions. I'll denote the dimension of a quantity with square brackets $[\cdot]$, as I did above.
$$[\omega] \overset{\text{by definition}}{=} \frac{[\text{angle}]}{[\text {time}]}$$ However, using the formula above we have $$[\omega] = \frac{[2\pi]}{[T]} = \frac{1}{[\text{time}]} \; , \tag 1$$ since a constant is considered to be dimensionless, so I discarded the $2\pi$ factor. This is somewhat of an inconvenience for dimensional analysis: on the one hand we have $[\text{angle}]/\t$, on the other hand we have only $1/\t$. You could say that the $2\pi$ carries the dimensions of angle, so that what I did in equation (1), i.e. discarding the constant $2\pi$ as a dimensionless number, is simply wrong. However, the story doesn't end here. Some factors of $2\pi$ show up so often in equations that we define new constants for them, e.g. the reduced Planck constant, defined by $$\hbar \equiv \frac{h}{2\pi} \; ,$$ where $h$ is the Planck constant. The Planck constant has dimensions $\e \cdot \t$. Now if you say that $2\pi$ has dimensions of angle, then this would also imply that the reduced Planck constant has dimensions $\e \cdot \t / \a$, which is close to nonsense, since it is only a matter of convenience that we write $\hbar$ instead of $h/2\pi$, not because it has anything to do with angles, as was the case with angular frequency. To sum up: dimensions and units are not the same. Dimensions are unique and tell you what a quantity is, whereas units tell you how you have measured that particular quantity. If angles had dimensions, then we would have to assign a dimension to the number $2\pi$, which has neither a unit nor a dimension; this is not something we would like to do, because it can lead to misunderstandings, as in the case of $\hbar$. Edit after Comments/Discussion in Chat with Rex: If you didn't buy the above approach or find it a little bit circular, here is a better approach. Angles are nasty quantities and they don't play as nicely as we would like. We always plug an angle into a trigonometric function such as sine or cosine.
Let's see what happens if angles had dimensions. Take the sine function as an example and approximate it by its Taylor series: $$\sin(x) \approx x - \frac {x^3} 6$$ Now, we have said that $x$ has dimensions of angle, so that leaves us with $$[\sin(x)] \approx \a - \frac{\a^3} 6$$ Note that we have to combine $\a$ with $\a^3$, which doesn't make any physical sense; it would be like adding $\t$ to $\e$. Since there is no way around this problem, we like to declare $\sin(x)$ to be dimensionless, which forces us to make the angle dimensionless as well. Another example of a similar problem comes from polar coordinates. As you may know, the line element in polar coordinates is given by $$\d s^2 = \d r^2+ r^2 \d \theta^2$$ A mathematician has no problem with this equation because s/he doesn't care about dimensions; however, a physicist, who cares deeply about dimensions, can't sleep at night if s/he wants angles to have dimensions, because, as you can easily verify, the dimensional analysis breaks down: $$[\d s^2]= \l^2 = [\d r^2] + [r^2] [\d\theta^2] = \l^2 + \l^2 \cdot \a^2$$ You would have to add $\l^2$ to $\l^2 \cdot \a^2$ and set the result equal to $\l^2$, which is not something you do in physics; it is like adding tomatoes and potatoes. For more on why you shouldn't add quantities with different units, read this question and the answers given to it. Upshot: we choose to say that angles have no dimensions because otherwise they would cause us too much headache whilst doing a dimensional analysis.
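As a quick sanity check, $\sin(x)$ is indeed well approximated by $x - x^3/6$ for small $x$ (note the minus sign in the Taylor series); a one-line illustration:

```r
# Cubic Taylor approximation of sine near 0: sin(x) ~ x - x^3/6
x <- 0.3
c(exact = sin(x), taylor = x - x^3 / 6)   # both about 0.2955
```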
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)

@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
}

Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not support such a model, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.

The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.

Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?

By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic.
The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models. We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$. Question. Which topological spaces support a topological model of arithmetic? In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic. Let me state the main theorem and briefly sketch the proof. Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output. 
\begin{equation*}\small\begin{array}{rcr} \cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*} This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$. Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. 
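The school-child observation can be checked directly: working modulo $10^k$ (or modulo $2^k$ for binary digits), the last $k$ digits of a sum or product depend only on the last $k$ digits of the inputs. A small illustrative snippet, using the base-10 computation displayed above:

```r
# Last three decimal digits of x + y and x * y depend only on the
# last three digits of x and y (here x = ...1261, y = ...153)
m <- 10^3
x <- 1261; y <- 153
((x %% m) + (y %% m)) %% m   # 414, matching ...414 above
((x %% m) * (y %% m)) %% m   # 933, matching ...933 above
(x + y) %% m                 # also 414
(x * y) %% m                 # also 933
```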
The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metrizable by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpiński, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired. But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired. Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero).
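The final-digits order just defined can be sketched as a small comparison function; the function name is my own, and the logic follows the definition above (the first disagreement from the right decides, and when one representation is exhausted, the longer number is lower exactly when its next digit is $0$):

```r
# Is n "before" m in the final-digits order?
final_digits_less <- function(n, m) {
  while (n > 0 && m > 0) {
    dn <- n %% 2; dm <- m %% 2
    if (dn != dm) return(dn < dm)   # first disagreement (from the right): 0 is lower
    n <- n %/% 2; m <- m %/% 2      # drop the agreed final digit
  }
  if (n == m) return(FALSE)         # same number
  if (m > 0) return(m %% 2 == 1)    # m is longer: m is higher iff its next digit is 1
  n %% 2 == 0                       # n is longer: n is lower iff its next digit is 0
}
final_digits_less(2, 1)   # TRUE: evens sit to the left of the odds
final_digits_less(4, 0)   # TRUE: positive evens lie below 0
final_digits_less(0, 3)   # TRUE: 0 sits below the odds, in the middle
```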
Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order. The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$. We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$.
Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$

The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.

Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable. The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$.
Even the successor function $x\mapsto x+1$ is not continuous with respect to this order. Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order. Go to the article to read more.
The aim of a phase I dose-escalation study is to estimate the maximum tolerated dose (MTD) of a novel drug or treatment. In practice this often means identifying a dose for which the probability of a patient developing a dose-limiting toxicity (DLT) is close to a prespecified target level, typically between 0.20 and 0.33 in cancer trials. Zhou & Whitehead (2003) described a Bayesian model-based decision procedure to estimate the MTD. It uses logistic regression of the form \[\log\left(\frac{\text{P(toxicity)}}{1 - \text{P(toxicity)}}\right) = \text{a} + \text{b} \times \log(\text{dose})\] to model the relationship between the dose and the probability of observing a DLT, and a ‘gain function’ (a decision rule) to determine which dose to recommend for the next cohort of patients or as the MTD at the end of the study. The method is Bayesian in the sense that it uses accumulating study data to continually update the dose-toxicity model. The purpose of this Shiny app is to facilitate the use of the Bayesian model-based decision procedure in phase I dose-escalation studies. It has two parts: Select a locally stored CSV design file obtained from the ‘Design’ app, and it will be automatically uploaded and processed. Check the ‘Design’ tab for a summary of the design parameters and prior information. The design file should be used as downloaded from the ‘Design’ module and not manipulated by hand. Patient data can either be uploaded as a CSV file, or manually entered via a spreadsheet interface. When uploading a data file in CSV format, make sure it contains one row per patient and three columns with the following information: The CSV file may contain further columns, but these will be ignored.
Here is an example:

Cohort  Dose  Toxicity
1       1.50  0
1       1.50  0
1       1.50  0
2       1.50  0
2       1.50  0
2       1.50  0
3       2.25  0
3       2.25  1
3       2.25  0

Specify whether there are column headlines in the first row of the CSV file, and which characters are used as column and decimal separators; the latter will usually depend on the locale of the computer used to create the data file. Select which columns of the dataset contain the cohort, dose, and response variable, respectively. Note that the column headlines in the CSV file do not necessarily have to be ‘Cohort’, ‘Dose’, and ‘Response’. The uploaded or manually entered dataset will be displayed under the ‘Dataset’ tab. If the table and/or graphics look messed up, check whether the right column and decimal separators were selected and the columns were specified correctly. Alternatively, tick the box to enter data manually into a spreadsheet. By default, a 3x3 table pops up that is populated with some arbitrary values. Click on any cell to change its entry. In the ‘Event’ column, tick a box to indicate that this patient has experienced a DLT. Add additional rows by right-clicking anywhere on the table and selecting ‘Insert row above’ or ‘Insert row below’. Similarly, delete rows by right-clicking on the specific row and selecting ‘Remove row’. Two tables give an overview of the design parameters and the prior information as specified in the design file. A table displays the full dataset as uploaded or entered into the spreadsheet. The table is fully searchable and can be sorted by column in ascending or descending order. Two plots show which patients received which doses and whether they experienced a DLT or not (left), and how often each dose was administered over the course of the study (right). A warning is issued if the dataset contains doses that are not among those prespecified in the design.
Based on the design parameters, prior information, and study data, one of the following recommendations is given for the next cohort to enter the study: Stopping may be recommended for one of the following reasons: Note that multiple reasons may apply at the same time, for example when the MTD estimate reaches sufficient accuracy at the envisaged end of the study. A plot shows the estimated dose-toxicity curves and corresponding MTDs based on: The idea behind the latter is to obtain a purely data-based estimate of the MTD. While the red and blue curves may look very different, their MTD estimates are usually very similar, especially with not-too-small sample sizes. All curves are presented alongside pointwise 95% normal approximation confidence bands. A table summarises the intercept and slope parameters of the models. In some cases the study may be terminated despite the recommendation being to continue. Tick the box to indicate that the study has been stopped in order to display the final model estimates. A PDF report summarising the design, prior information, study data, and recommendation is available for download. Yinghui Zhou & John Whitehead (2003) Practical implementation of Bayesian dose-escalation procedures. Drug Information Journal, 37(1), 45-59. DOI: 10.1177/009286150303700108
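As a minimal illustration of the dose-toxicity model at the heart of the procedure (this is not the app's actual implementation, and the intercept/slope values below are purely hypothetical), here is a sketch of the logistic curve and of the dose at which it crosses a target DLT rate:

```python
import math

def p_toxicity(dose, a, b):
    """P(DLT) under the logistic model: logit(P) = a + b * log(dose)."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log(dose))))

def mtd_estimate(target, a, b):
    """Dose at which P(DLT) equals the target rate, obtained by inverting the model."""
    return math.exp((math.log(target / (1.0 - target)) - a) / b)

# Hypothetical intercept and slope, e.g. posterior means after a few cohorts.
a, b = -3.0, 1.5
for dose in (1.0, 1.5, 2.25):
    print(f"dose {dose}: P(DLT) = {p_toxicity(dose, a, b):.3f}")
print(f"MTD estimate for a 0.30 target: {mtd_estimate(0.30, a, b):.2f}")
```

Because the model is monotone in dose (for b > 0), the MTD is simply the unique dose where the fitted curve crosses the target toxicity level.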
Problem 17 of Chapter 6 of Rudin's Principles of Mathematical Analysis asks us to prove the following: Suppose $\alpha$ increases monotonically on $[a,b]$, $g$ is continuous, and $g(x)=G'(x)$ for $a \leq x \leq b$. Prove that, $$\int_a^b\alpha(x)g(x)\,dx=G(b)\alpha(b)-G(a)\alpha(a)-\int_a^bG\,d\alpha.$$ It seems to me that the continuity of $g$ is not necessary for the result above. It is enough to assume that $g$ is Riemann integrable. Am I right in thinking this? I have thought as follows: $\int_a^bG\,d\alpha$ exists because $G$ is differentiable and hence continuous. $\alpha(x)$ is integrable with respect to $x$ since it is monotonic. If $g(x)$ is also integrable with respect to $x$, then $\int_a^b\alpha(x)g(x)\,dx$ also exists. To prove the given formula, I start from the hint given by Rudin$$\sum_{i=1}^n\alpha(x_i)g(t_i)\Delta x_i=G(b)\alpha(b)-G(a)\alpha(a)-\sum_{i=1}^nG(x_{i-1})\Delta \alpha_i$$where $g(t_i)\Delta x_i=\Delta G_i$ for some $t_i\in[x_{i-1},x_i]$, by the mean value theorem. Now the sum on the right-hand side converges to $\int_a^bG\,d\alpha$. The sum on the left-hand side would have converged to $\int_a^b\alpha(x)g(x)\,dx$ if it had been $$\sum_{i=1}^n \alpha(x_i)g(x_i)\Delta x_i.$$ The absolute difference between this and what we have is bounded above by $$\max(|\alpha(a)|,|\alpha(b)|)\sum_{i=1}^n |g(x_i)-g(t_i)|\Delta x_i,$$ and this can be made arbitrarily small because $g(x)$ is integrable with respect to $x$: indeed, $|g(x_i)-g(t_i)|\leq M_i-m_i$, the oscillation of $g$ on $[x_{i-1},x_i]$, and $\sum_{i=1}^n (M_i-m_i)\Delta x_i$ tends to $0$ as the partition is refined, by Riemann's criterion for integrability.
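As a quick numerical sanity check of the integration-by-parts formula (with hypothetical choices $\alpha(x)=x^2$, which is monotonically increasing on $[1,2]$, and $G=\sin$, $g=\cos$), both sides can be approximated by a Riemann sum and a Riemann-Stieltjes sum on a fine partition:

```python
import math

a, b, n = 1.0, 2.0, 200_000
alpha = lambda x: x * x      # monotonically increasing on [a, b]
G, g = math.sin, math.cos    # g = G'

xs = [a + (b - a) * i / n for i in range(n + 1)]
# Left side: Riemann sum for the integral of alpha(x) * g(x) dx.
lhs = sum(alpha(xs[i]) * g(xs[i]) * (xs[i + 1] - xs[i]) for i in range(n))
# Riemann-Stieltjes sum for the integral of G d(alpha).
stieltjes = sum(G(xs[i]) * (alpha(xs[i + 1]) - alpha(xs[i])) for i in range(n))
rhs = G(b) * alpha(b) - G(a) * alpha(a) - stieltjes
print(lhs, rhs)  # the two sides agree up to the discretization error
```

This of course only checks the identity, not the sharper claim that continuity of $g$ can be weakened to integrability.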
Given the asymptotic distribution of $\hat{\theta}_1$, namely $$\frac{\hat{\theta}_1-\theta}{\theta/\sqrt{n}} \;\approx\; N(0,1),$$ construct a 95% confidence interval for $\theta$ for large samples. I know that the confidence interval will be: $-1.96 \leq \frac{\hat{\theta}_1-\theta}{\frac{\theta}{\sqrt{n}}} \leq 1.96$ and hence: $\frac{\hat{\theta}_1}{1+\frac{1.96}{\sqrt{n}}} \leq \theta \leq \frac{\hat{\theta}_1}{1-\frac{1.96}{\sqrt{n}}}$ Question: How to get from $-1.96 \leq \frac{\hat{\theta}_1-\theta}{\frac{\theta}{\sqrt{n}}} \leq 1.96$ to $\frac{\hat{\theta}_1}{1+\frac{1.96}{\sqrt{n}}} \leq \theta \leq \frac{\hat{\theta}_1}{1-\frac{1.96}{\sqrt{n}}}$? I am aware it is just inequality manipulation; however, I am not able to solve it. Please provide detailed steps, as my mathematical background is rather weak.
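For completeness, the manipulation the question asks about goes through in three steps (assuming $\theta > 0$ and $n > 1.96^2$, so that both $\theta/\sqrt{n}$ and $1 - 1.96/\sqrt{n}$ are positive):

```latex
\begin{align*}
-1.96 \le \frac{\hat{\theta}_1-\theta}{\theta/\sqrt{n}} \le 1.96
&\iff -\frac{1.96\,\theta}{\sqrt{n}} \le \hat{\theta}_1-\theta \le \frac{1.96\,\theta}{\sqrt{n}}
&&\text{(multiply through by } \theta/\sqrt{n} > 0\text{)}\\
&\iff \theta\Bigl(1-\tfrac{1.96}{\sqrt{n}}\Bigr) \le \hat{\theta}_1 \le \theta\Bigl(1+\tfrac{1.96}{\sqrt{n}}\Bigr)
&&\text{(add } \theta \text{ to each part)}\\
&\iff \frac{\hat{\theta}_1}{1+\frac{1.96}{\sqrt{n}}} \le \theta \le \frac{\hat{\theta}_1}{1-\frac{1.96}{\sqrt{n}}}
&&\text{(divide each bound by its positive coefficient)}
\end{align*}
```

Note that in the last step the two inequalities swap sides: the upper bound $\hat{\theta}_1 \le \theta(1+1.96/\sqrt{n})$ becomes the lower bound for $\theta$, and vice versa.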
Last edited by Andrew Munsey, updated on June 15, 2016 at 1:43 am. It is well known that Heaviside was the first to introduce magnetic charges into Maxwell’s electrodynamics. The introduction of magnetic charges and currents is justified by the fact that each permanent magnet may be seen as a system of two magnetic charges, which are the magnet’s poles. Accordingly, the movement of magnets may be considered as the passage of magnetic current. We may also note that monopoles and magnetic currents have lately been observed experimentally [1]. Book [2] discusses some problems involving electric and magnetic charges, and book [3] the application of these problems to the description of energy generators with permanent magnets. Let us consider a system of symmetrical Maxwell equations in Cartesian coordinates [2]. Let us denote: : ~E - electric field strength, : ~H - magnetic field strength, : ~\mu - absolute permeability, : ~\epsilon - absolute dielectric constant, : ~\vartheta - electric conductivity, : ~\varsigma - magnetic conductivity, : ~\varphi - electric scalar potential, : ~\phi - magnetic scalar potential, : ~\rho - electric charge density, : ~\sigma - magnetic charge density. 
This system looks as follows: : \frac{\partial H_z}{\partial y} - \frac{\partial H_y}{\partial z} - \epsilon \frac{\partial E_x}{\partial t} + \vartheta \frac{d \varphi}{dx} =0 , : \frac{\partial H_x}{\partial z} - \frac{\partial H_z}{\partial x} - \epsilon \frac{\partial E_y}{\partial t} + \vartheta \frac{d \varphi}{dy} =0 , : \frac{\partial H_y}{\partial x} - \frac{\partial H_x}{\partial y} - \epsilon \frac{\partial E_z}{\partial t} + \vartheta \frac{d \varphi}{dz} =0 , : \frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z} + \mu \frac{\partial H_x}{\partial t} - \varsigma \frac{d \phi}{dx} =0 , : \frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x} + \mu \frac{\partial H_y}{\partial t} - \varsigma \frac{d \phi}{dy} =0 , : \frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} + \mu \frac{\partial H_z}{\partial t} - \varsigma \frac{d \phi}{dz} =0 , : \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z} - \frac{\rho}{\epsilon} =0 , : \frac{\partial H_x}{\partial x} + \frac{\partial H_y}{\partial y} + \frac{\partial H_z}{\partial z} - \frac{\sigma}{\mu} =0 . Let us note some characteristic features of this system of equations: the existence of magnetic charges and currents is assumed; instead of electric and magnetic currents, the scalar potentials and conductivities, both electric and magnetic, are introduced. We shall consider a system containing magnetic and electric charges, whose density distributions are described by the functions: : ~\rho(x,y,z,t)= \rho_o \cosh (\beta z + vt) \cosh (\theta y) \delta (x), : ~\sigma(x,y,z,t)= \sigma_o \cosh (\beta z + vt) \cosh (\theta y) \delta (x), where : ~\rho_o, \sigma_o - amplitudes, : ~\beta, \theta, v - known constants, : ~\delta (x) - Dirac delta function. It is shown in [3] that such functions can describe the charge distribution in magnet motors. 
In this case the x-axis is directed along the axis of the permanent magnet, and the z-axis is directed along the velocity of the permanent magnet. The Dirac delta function describes a layer of charges at the end (x=0) of the permanent magnet, and the function cosh describes the charge distribution along the diameter of the permanent magnet end. If the charge density distribution functions have (for x>0) the form above, then the solution of the symmetrical Maxwell system is as follows: : ~E_x(x,y,z,t)=e_x \cosh (\beta z + vt) \sinh (\theta y) \cos(\chi x), : ~E_y(x,y,z,t)=e_y \cosh (\beta z + vt) \cosh (\theta y) \sin(\chi x), : ~E_z(x,y,z,t)=e_z \sinh (\beta z + vt) \sinh (\theta y) \sin(\chi x), : ~H_x(x,y,z,t)=h_x \cosh (\beta z + vt) \cosh (\theta y) \cos(\chi x), : ~H_y(x,y,z,t)=h_y \cosh (\beta z + vt) \sinh (\theta y) \sin(\chi x), : ~H_z(x,y,z,t)=h_z \sinh (\beta z + vt) \cosh (\theta y) \sin(\chi x), : ~\varphi(x,y,z,t)=\varphi_o \sinh (\beta z + vt) \sinh (\theta y) \sin(\chi x), : ~\phi(x,y,z,t)=\phi_o \sinh (\beta z + vt) \cosh (\theta y) \cos(\chi x), where : ~e_x, e_y, e_z, h_x, h_y, h_z, \varphi_o, \phi_o - coefficients depending on ~\rho_o, \sigma_o, \beta, \theta, and : ~\chi=\sqrt {\beta^2+\theta^2}. For fixed values of the variables we have: : ~E_x(x,t)=e_x^\prime \cosh (vt) \cos(\chi x), : ~E_y(x,t)=e_y^\prime \cosh (vt) \sin(\chi x), : ~E_z(x,t)=e_z^\prime \sinh (vt) \sin(\chi x), : ~H_x(x,t)=h_x^\prime \cosh (vt) \cos(\chi x), : ~H_y(x,t)=h_y^\prime \cosh (vt) \sin(\chi x), : ~H_z(x,t)=h_z^\prime \sinh (vt) \sin(\chi x). It is seen that the vectors satisfy the definition of Energy-dependent Electromagnetic Wave. :1. S.T. Bramwell, S.R. Giblin, S. Calder, R. Aldus, D. Prabhakaran, T. Fennell. Measurement of the charge and current of magnetic monopoles in spin ice. Nature 461, 956-959 (15 October 2009) | doi: 10.1038/nature08500. Received 18 June 2009, accepted 14 September 2009. :2. Khmelnik S.I. 
Variational Principle of Extremum in electromechanical and electrodynamic Systems. Published by “MiC”, printed in USA, Lulu Inc., ID 1142842, Israel, 2010, second edition, ISBN 978-0-557-08231-5. :3. Khmelnik S.I. Energy processes in free-full electromagnetic generators. Published by “MiC”, printed in USA, Lulu Inc., ID 10292524, Israel, 2011, second edition, ISBN 978-1-257-08919-2.
The action which describes Brans-Dicke theory is given by, $$S=\frac{1}{16\pi G}\int d^4x \, \sqrt{|g|} \left( -\Phi R + \frac{\omega}{\Phi}\partial_\mu \Phi \partial^\mu \Phi \right)$$ which features a scalar field $\Phi$ coupling to gravity through the Ricci scalar, and with its own kinetic term. To obtain the equations of motion, we vary the action with respect to the scalar and the metric, like so, $$\delta S = \frac{1}{16\pi G} \int d^4x \, \delta \Phi \left( -R - \frac{2\omega}{\Phi} \square \Phi + \frac{\omega}{\Phi^2} \partial_\mu \Phi \partial^\mu\Phi\right)$$$$-\delta g^{\mu\nu} \left(\Phi G_{\mu\nu}-\frac{\omega}{\Phi} \partial_\mu \Phi \partial_\nu \Phi + \frac{1}{2}g_{\mu\nu}\frac{\omega}{\Phi}\partial_\lambda \Phi \partial^\lambda \Phi\right) + \Phi (\nabla_\mu\nabla_\nu \delta g^{\mu\nu}-\square g_{\mu\nu}\delta g^{\mu\nu})$$where we have already performed an integration by parts. From the variation, we may deduce,$$\Phi G_{\mu\nu} - \nabla_\mu \nabla_\nu \Phi + g_{\mu\nu} \square \Phi - \frac{\omega}{\Phi} \left( \partial_\mu \Phi \partial_\nu \Phi - \frac{1}{2}g_{\mu\nu} (\nabla\Phi)^2\right) = 8\pi T_{\mu\nu}$$ for some background matter with stress-energy tensor $T_{\mu\nu}$. There is an additional equation of motion due to the scalar field, namely, $$\Phi R + 2\omega \square \Phi - \frac{\omega}{\Phi} (\nabla\Phi)^2 = 0$$ which vanishes provided that the scalar field does not couple to the background matter. We can now take a trace of the first equation with respect to the metric, obtaining, $$-\Phi R+3\square \Phi + \frac{\omega}{\Phi}(\nabla \Phi)^2 = 8\pi T$$ presuming $d=4$, where $T \equiv T^\mu_\mu$. Adding this equation to the previous, we find, $$(3+2\omega) \square \Phi = 8\pi T.$$ The parameter $\omega$ controls how strongly matter sources the scalar field: the larger $\omega$, the smaller $\square\Phi$ for a given $T$. 
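The trace step can be spelled out term by term; in $d=4$ one has $g^{\mu\nu}g_{\mu\nu}=4$ and $g^{\mu\nu}G_{\mu\nu}=R-2R=-R$, so that

```latex
g^{\mu\nu}\left[\Phi G_{\mu\nu} - \nabla_\mu\nabla_\nu\Phi + g_{\mu\nu}\square\Phi
- \frac{\omega}{\Phi}\left(\partial_\mu\Phi\,\partial_\nu\Phi - \tfrac{1}{2}g_{\mu\nu}(\nabla\Phi)^2\right)\right]
= -\Phi R + (-1+4)\,\square\Phi - \frac{\omega}{\Phi}\left((\nabla\Phi)^2 - 2(\nabla\Phi)^2\right)
= -\Phi R + 3\,\square\Phi + \frac{\omega}{\Phi}(\nabla\Phi)^2 .
```

Setting this equal to $8\pi T$ reproduces the traced equation in the text.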
We can rewrite the 'Einstein' field equations as, $$R_{\mu\nu}-\frac{1}{\Phi}\nabla_\mu \nabla_\nu \Phi + \frac{1}{\Phi}g_{\mu\nu} \square \Phi - \frac{\omega}{\Phi^2}\partial_\mu \Phi \partial_\nu \Phi = \frac{8\pi}{\Phi}T_{\mu\nu} - g_{\mu\nu}\frac{\omega}{\Phi} \square \Phi$$ by expanding the Einstein tensor and substituting the relation between the Ricci scalar and the field. Moving the $g_{\mu\nu}\square\Phi$ terms to the right-hand side and eliminating $\square\Phi$ via $(3+2\omega)\square\Phi = 8\pi T$, we obtain a relation between the Ricci tensor, the field and the stress-energy tensor, namely, $$R_{\mu\nu}-\frac{1}{\Phi}\nabla_\mu \nabla_\nu \Phi - \frac{\omega}{\Phi^2}\partial_\mu \Phi \partial_\nu \Phi = \frac{8\pi}{\Phi} \left( T_{\mu\nu}-\frac{(\omega+1)}{(3+2\omega)}T g_{\mu\nu} \right)$$This post imported from StackExchange Physics at 2014-12-31 12:12 (UTC), posted by SE-user JamalS
I'll summarize the comments from chi, and sketch the proof that:
1. There can be no proof of $\mathrm{False}:=\forall X:*.X$ in the CoC in head normal form. Furthermore, this fact can be proven in a weak theory, say Peano Arithmetic (and the excluded middle is not required).
2. This fact implies that if the CoC is normalizing, then it is consistent, and furthermore this implication does not use classical logic.
For 1., we proceed by induction on the term structure of a hypothetical closed term $t$ of type $\mathrm{False}$. Terms in head normal form must be of the form: $$ \lambda x_1:T_1\ldots\lambda x_n:T_n.y\ t_1\ldots t_m$$ where $n$ and $m$ may be $0$. If $n$ is zero, then $t=y\ t_1\ldots t_m$, which is not possible, since $t$ is closed (typed in the empty context). Otherwise, we may apply inversion and conclude that $T_1=*$ (and $x_1=X$), and we get$$ X:*\vdash\lambda x_2\ldots\lambda x_n.y\ t_1\ldots t_m\ :\ X$$to be derivable in the CoC. Now since $X$ is not a $\Pi$-type, we can apply inversion again to conclude that $n=1$, and in fact the above term is simply$$ X:*\vdash y\ t_1\ldots t_m\ :\ X$$Inversion yet again shows that$$X:*\vdash y\ :\ \Pi y_1:U_1\ldots\Pi y_m:U_m.X $$but $y$ must be $X$, since it is the only variable around! Therefore $m=0$ and $X=*$, which is impossible, a contradiction. Now all this reasoning is intuitionistic, as I've proven a negative (there can be no proof of...), and proofs of negations are always constructive. I do rely heavily on inversion, and you'll just have to take it on faith that this also can be proven in arithmetic, without the excluded middle, which is non-trivial. Now for 2., we define consistency to mean "does not prove $\mathrm{False}$"! Again a negative statement. Now if the CoC is normalizing, one can take any normal form of a proof of $\mathrm{False}$ and use the above argument to get a contradiction. Again a constructive argument! Finally, to tie it all together. 
Now suppose you had enough arithmetic to carry out the above arguments in CoC. Note that this is almost possible: you actually need to add the axiom $0\neq 1$ to get anything off the ground. You can then prove that normalization implies consistency within the CoC, and you also have enough arithmetic for the second incompleteness theorem to apply. Therefore you cannot (if CoC is consistent!) prove normalization, as then you would have a full proof of consistency within CoC.
This is not a complete answer, but maybe helpful nevertheless: As you noted, it is a necessary condition that $\chi_{U_n}$ (the characteristic function/indicator function of $U_n$) fulfils $\chi_{U_n} \to \chi_B$ pointwise. But conversely, if this is the case, then $$\mu(B \Delta U_n) = \int |\chi_B - \chi_{U_n} | \, d\mu \to 0$$ by dominated convergence (the integrand is dominated by the constant $2$, which is integrable because $\mu$ is a finite measure). Hence, what you are effectively asking is whether for every Borel set $B$ there is some sequence $(U_n)_n$ of open sets with $\chi_{U_n} \to \chi_B$ pointwise. Now note that every open set $U$ fulfils $\chi_U = \lim_n f_n$ pointwise for a suitable sequence $(f_n)_n$ of continuous functions (basically, take $U = \bigcup_n K_n$ with $K_n$ compact and $K_n \subset K_{n+1}$, and use (e.g.) Urysohn's Lemma to construct $f_n \in C_c(U)$ with $f_n \equiv 1 $ on $K_n$). Hence, every function $\chi_U$ is of Baire class 1 (see http://en.wikipedia.org/wiki/Baire_function). So if what you are asking were true, then every indicator function $\chi_B$ with $B$ Borel would be of Baire class (at most) two. I highly doubt that this is true, but have not found an explicit counterexample/source where this is stated. EDIT: Also, the pointwise limit $\chi_B = \lim_n \chi_{U_n}$ implies that for $x \in B$, we have $\chi_{U_n}(x) \to 1$. Because of $\chi_{U_n}(x) \in \{0,1\}$, this yields some $n_x \in \Bbb{N}$ with $\chi_{U_n}(x) = 1$ for all $n \geq n_x$, and hence $x \in \bigcap_{n\geq n_x} U_n$. Hence, $$x \in \bigcup_{k \geq 1}\bigcap_{n \geq k} U_n.$$ A similar argument shows that if $x \notin \bigcup_{k} \bigcap_{n \geq k} U_n$, then $x \notin B$. Hence, $$B = \bigcup_{k \geq 1} \bigcap_{n \geq k}U_n,$$ which implies that $B$ is a $G_{\delta, \sigma}$ set, or (other notation, same statement) a $\Sigma_3^0$ set (see http://en.wikipedia.org/wiki/Borel_hierarchy and http://en.wikipedia.org/wiki/G%CE%B4_set for the notation used here). 
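The set identity at the end, $B = \bigcup_{k}\bigcap_{n\geq k} U_n$, says exactly that $B$ is the liminf of the sets $U_n$. Here is a finite toy illustration of just that identity (the ambient set and the sequence $U_n$ are made up for the demonstration; the $U_n$ here are not open sets of any topology):

```python
# Toy check: if the indicators chi_{U_n} converge pointwise to chi_B,
# then B equals the union over k of the intersection of U_n for n >= k.
X = set(range(20))
B = {x for x in X if x % 2 == 0}
N = 50
# U_n agrees with B except on the points >= n, so every point of X is
# eventually classified correctly (pointwise convergence of indicators).
U = [B ^ {m for m in X if m >= n} for n in range(N)]

liminf = set()
for k in range(N):
    common = X.copy()
    for n in range(k, N):
        common &= U[n]
    liminf |= common

assert liminf == B
print(sorted(liminf))
```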
The article http://en.wikipedia.org/wiki/Borel_hierarchy#Boldface_Borel_hierarchy claims (the last bullet point in that paragraph): If $X$ is an uncountable Polish space, it can be shown that $\mathbf{\Sigma}^0_\alpha$ is not contained in $\mathbf{\Pi}^0_\alpha$ for any $\alpha < \omega_1$, and thus the hierarchy does not collapse. This implies that not every Borel set is a $G_{\delta, \sigma}$ set. Hence, your claim is false.
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review) @ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, } Abstract.Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open. The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers. Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic? By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. 
The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models. We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$. Question. Which topological spaces support a topological model of arithmetic? In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic. Let me state the main theorem and briefly sketch the proof. Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output. 
\begin{equation*}\small\begin{array}{rcr} \cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*} This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$. Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. 
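The school-child's observation is precisely the statement that addition and multiplication commute with reduction modulo $\text{base}^k$. A quick check, including the base-10 worked example above (the helper name `final_digits` is mine):

```python
def final_digits(x, k, base=2):
    """The number formed by the final k digits of x in the given base."""
    return x % base ** k

# The worked example above, in base 10: the final three digits of the
# inputs determine the final three digits of the sum and the product.
assert final_digits(1261 + 153, 3, base=10) == 414
assert final_digits(1261 * 153, 3, base=10) == 933

# In general, reduction mod base**k is a ring homomorphism, which is
# exactly the continuity of + and * in the final-digits topology.
for k in range(1, 12):
    for x, y in [(1261, 153), (6, 22), (2 ** 15 + 7, 3)]:
        fx, fy = final_digits(x, k), final_digits(y, k)
        assert final_digits(x + y, k) == final_digits(fx + fy, k)
        assert final_digits(x * y, k) == final_digits(fx * fy, k)
```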
The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metrizable by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired. But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired. Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). 
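A sketch of the $2$-adic distance, together with a check that a basic open set $U_s$ really is an open ball in it (the helper name `dist2` is mine):

```python
import itertools

def dist2(a, b):
    """2-adic distance on the natural numbers: 2**(-k), where 2**k is the
    largest power of 2 dividing a - b; the distance is 0 when a == b."""
    if a == b:
        return 0.0
    d = abs(a - b)
    k = (d & -d).bit_length() - 1  # exponent of the largest power of 2 dividing d
    return 2.0 ** (-k)

# U_s for s = '110' is the set of numbers whose binary form ends in 110,
# i.e. those congruent to 6 mod 8: exactly the ball of radius 2**-2 about 6.
ball = {n for n in range(200) if dist2(n, 6) < 2 ** -2}
assert ball == {n for n in range(200) if n % 8 == 6}

# The distance is in fact an ultrametric: d(a,c) <= max(d(a,b), d(b,c)).
for a, b, c in itertools.product(range(16), repeat=3):
    assert dist2(a, c) <= max(dist2(a, b), dist2(b, c))
```

The trick `d & -d` isolates the lowest set bit of `d`, whose bit length minus one is the exponent of $2$ in the factorization of the difference.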
Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order. The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$. We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. 
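The final-digits order as just defined can be implemented directly on the reversed binary strings (a sketch; the names `rbin` and `fd_less` are mine):

```python
def rbin(n):
    """Binary digits of n, least significant first; 0 is the empty string."""
    return bin(n)[2:][::-1] if n > 0 else ''

def fd_less(n, m):
    """The final-digits order n ◁ m on the natural numbers."""
    a, b = rbin(n), rbin(m)
    for x, y in zip(a, b):
        if x != y:
            return x == '0'   # first disagreement from the right decides
    if len(a) == len(b):
        return False          # identical representations, so n == m
    # No disagreement: the longer number is lower iff its next digit is 0.
    next_digit = (a if len(a) > len(b) else b)[min(len(a), len(b))]
    return (next_digit == '0') if len(a) > len(b) else (next_digit == '1')

# Sanity checks matching the description above:
assert fd_less(8, 2)   # highly even numbers sit further to the left
assert all(fd_less(e, 0) for e in (2, 4, 6))   # evens below 0...
assert all(fd_less(0, o) for o in (1, 3, 5))   # ...and odds above it
assert fd_less(22, 6)  # unlike the order that pads missing digits with 0s
```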
Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$ The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$. Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable. The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. 
Even the successor function $x\mapsto x+1$ is not continuous with respect to this order. Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order. Go to the article to read more. A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review) @ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, }
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)

@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
}

Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.

The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.

Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?

By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic.
The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models. We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$. Question. Which topological spaces support a topological model of arithmetic? In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic. Let me state the main theorem and briefly sketch the proof. Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output. 
\begin{equation*}\small\begin{array}{rcr} \cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*} This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representation ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.

Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$.
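The school-child observation underlying this continuity can be checked directly: the last $k$ binary digits of $x+y$ and $x\cdot y$ are determined by the last $k$ binary digits of $x$ and $y$ alone, since this is just arithmetic modulo $2^k$. A small sketch (the numbers chosen are arbitrary):

```python
def last_k_bits(n, k):
    """The final k binary digits of n, i.e. n modulo 2^k."""
    return n % (1 << k)

k = 5
x, y = 1261, 153
# Change x and y above their last k digits; the sums and products
# still agree on the final k digits.
x2 = x + 7 * (1 << k)
y2 = y + 9 * (1 << k)
assert last_k_bits(x2, k) == last_k_bits(x, k)
assert last_k_bits(x + y, k) == last_k_bits(x2 + y2, k)
assert last_k_bits(x * y, k) == last_k_bits(x2 * y2, k)
```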
The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired. But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired. Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). 
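The comparison just described is easy to implement; here is a sketch (the function name is ours) that also exhibits the shape of the order on the first few natural numbers:

```python
from functools import cmp_to_key

def final_digits_less(n, m):
    """True when n comes before m in the final-digits order: compare binary
    digits from the right; at the first disagreement 0 precedes 1, and if one
    representation runs out, the longer number is lower when its next digit
    is 0 and higher when it is 1."""
    while True:
        if n == 0:                 # n's digits are exhausted, m is longer
            return m & 1 == 1      # m is higher iff its next digit is 1
        if m == 0:                 # m's digits are exhausted, n is longer
            return n & 1 == 0      # n is lower iff its next digit is 0
        if (n & 1) != (m & 1):     # first disagreement, from the right
            return n & 1 == 0
        n >>= 1
        m >>= 1

# Evens occupy the left half, odds the right, with 0 directly in the middle.
order = sorted(range(16), key=cmp_to_key(
    lambda a, b: -1 if final_digits_less(a, b) else 1))
assert all(x % 2 == 0 for x in order[:8])
assert all(x % 2 == 1 for x in order[8:])
assert order[7] == 0   # 0 sits at the top of the evens, in the middle
```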
Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order. The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.

We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$.
Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$ The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$. Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable. The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. 
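To see this failure concretely, one can implement the more familiar order (the helper name is ours): at the lowest binary digit where two numbers differ, with missing initial digits taken as $0$, the number with $0$ there comes first. The set of numbers with final digits $110$ then has the least element $6$:

```python
def alt_less(n, m):
    """The order that pads missing initial digits with 0: for distinct n, m,
    compare the lowest binary digit where they differ; 0 there means lower."""
    d = n ^ m
    b = d & -d          # the lowest bit where n and m differ
    return n & b == 0

# Numbers with final digits 110 are exactly those congruent to 6 mod 8.
# The number 6 precedes all the others, so this set has a least element
# and hence is not open in the order topology.
assert all(alt_less(6, x) for x in range(14, 200, 8))
```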
Even the successor function $x\mapsto x+1$ is not continuous with respect to this order. Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order. Go to the article to read more.
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That's what I'm thinking about; I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good. It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors). Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11... $\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ trailing zeros for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}.\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18(1-\ln(8\pi x))\right)>0\end{align*} for $x>0$, as $\min\{x+\frac18(1-\ln(8\pi x))\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! With $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-1)$, which means $m_1$ is even.
We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to the equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is anticipated that there will be far fewer solutions for incr...
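The trailing-zero count from Legendre's formula used in the Corollary above, together with the bound $k<n/4$, can be sanity-checked numerically; a quick sketch:

```python
from math import factorial

def trailing_zeros(n):
    """Number of trailing zeros of n!, via Legendre's formula:
    the power of 5 dividing n!, i.e. sum of floor(n / 5^i) for i >= 1."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

for n in range(1, 200):
    k = trailing_zeros(n)
    s = str(factorial(n))
    assert k == len(s) - len(s.rstrip('0'))  # matches a direct count
    assert k < n / 4                         # the bound k < n/4
```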
I'm doing some research into the RSA cryptosystem but I just need some clarity on how it worked when it was published in the 70s. Now I know that it works with public keys, but did it also work with private keys back then, or did it use a shared public key first and then private keys were introduced later? RSA never was intended as a symmetric/secret-key cryptosystem, or extensively used as such. Public/private key pairs have been used for RSA from day one. A very close cousin of RSA, also using public/private key pairs, was known (but not published) at the GCHQ significantly before RSA was published. See Clifford Cocks's declassified A Note on 'Non-secret Encryption' (1973). Public-key cryptography was theorized at the GCHQ even before; see James Ellis's declassified The Possibility of Secure Non-Secret Encryption (1969), and his account in the declassified The History of Non-Secret Encryption (1987). See this question for more references on these early works. As kindly reminded by poncho: the Pohlig-Hellman exponentiation cipher is a symmetric analog of textbook RSA. It uses as public parameter a large prime $p$ with $(p-1)/2$ also prime, and two random odd secret exponents $e$ and $d$ with the relation $e\cdot d\equiv1\pmod{(p-1)}$; encryption of $m$ with $1<m<p$ is $c\gets m^e\bmod p$, and decryption is $m\gets c^d\bmod p$. By incorporating the computation of $d$ from the encryption key $e$ into the decryption, and using cycle walking to coerce down the message space to bitstrings and remove a few fixed points, it becomes a full-blown block cipher by the modern definition of that. Security is related to the discrete logarithm problem in $\mathbb Z_p^*$. An algorithm for solving that problem is the main subject of the article, and is what Pohlig-Hellman now designates. The encryption algorithm has little practical interest, because it is very slow for a symmetric-only algorithm. It never caught on in practice, and I believe never was intended to.
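A toy sketch of the exponentiation cipher just described (the parameters here are absurdly small and chosen only for illustration; real use would need a large safe prime):

```python
p = 23                  # a prime with (p - 1)/2 = 11 also prime
e = 9                   # a random odd secret exponent coprime to p - 1
d = pow(e, -1, p - 1)   # matching secret exponent: e * d ≡ 1 (mod p - 1)

def encrypt(m):
    assert 1 < m < p
    return pow(m, e, p)   # c ← m^e mod p

def decrypt(c):
    return pow(c, d, p)   # m ← c^d mod p

assert (e * d) % (p - 1) == 1
assert decrypt(encrypt(7)) == 7
```

Both exponents must stay secret here, which is exactly what makes this a symmetric analog of RSA rather than a public-key scheme.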
I found no earlier reference than: Stephen C. Pohlig and Martin E. Hellman, An Improved Algorithm for Computing Logarithms over $GF(p)$ and Its Cryptographic Significance, published in IEEE Transactions on Information Theory, Volume 24, Issue 1, January 1978. RSA was clearly known to the authors when they submitted this correspondence. They make explicit reference to: 1. Ronald L. Rivest, Adi Shamir, and Leonard Adleman, On Digital Signatures and Public-Key Cryptosystems, Technical Memo MIT/LCS/TM-82, dated April 1977 (received by the Defense Documentation Center on May 3, 1977; publication date unknown). 2. Ronald L. Rivest, Adi Shamir, and Leonard Adleman, A Method for Obtaining Digital Signatures and Public-Key Cryptosystems, published in Communications of the ACM, Volume 21, Issue 2, February 1978 (received April 4, 1977; revised September 1, 1977). Note: I discovered (1.) only because it is referenced by Pohlig and Hellman; it has a number of rough edges fixed in (2.), including a byzantine and unnecessary complication in the handling of messages not coprime with the public modulus, that are telling of the novelty. I refer to poncho's account on chronology.
TransmissionEigenvalues

class TransmissionEigenvalues(configuration=None, energy=None, k_point=None, energy_zero_parameter=None)

    Class for representing the transmission eigenvalues for a given configuration and calculator.

    Parameters:
        configuration (DeviceConfiguration) – The device configuration with attached calculator for which the transmission eigenvalues should be calculated.
        energy (PhysicalQuantity of type energy) – The energy for which the transmission eigenvalues should be calculated. Default: 0.0*eV
        k_point (list of two floats) – The 2-dimensional k-point in fractional coordinates for which the transmission eigenvalues should be calculated (x,y), e.g. [0.8, 0.2]. Default: [0.0, 0.0]
        energy_zero_parameter (AverageFermiLevel | AbsoluteEnergy) – Specifies the choice for the energy zero. Default: AverageFermiLevel

    electrodeFermiLevels()
        Returns: The left and right electrode Fermi levels in absolute energies.
        Return type: PhysicalQuantity

    energy()
        Returns: The energy used in this transmission eigenvalue calculation.
        Return type: PhysicalQuantity of type energy

    energyZeroParameter()
        Returns: The specified choice for the energy zero.
        Return type: AverageFermiLevel | AbsoluteEnergy

    evaluate(spin=None)
        Obtain the calculated transmission eigenvalues.
        Parameters: spin (Spin.Up | Spin.Down) – The spin component to evaluate. Default: Spin.Up
        Returns: The requested transmission eigenvalues.
        Return type: numpy.array

    kPoint()
        Returns: The two-dimensional fractional k-point used in this transmission eigenvalue calculation.
        Return type: list of two floats

    metatext()
        Returns: The metatext of the object, or None if no metatext is present.
        Return type: str | unicode | None

    nlprint(stream=None)
        Print a string containing an ASCII table useful for plotting the AnalysisSpin object.
        Parameters: stream (python stream) – The stream the table should be written to. Default: NLPrintLogger()

    setMetatext(metatext)
        Set a given metatext string on the object.
        Parameters: metatext (str | unicode | None) – The metatext string that should be set. A value of None can be given to remove the current metatext.

Usage Examples

eigenvalues = TransmissionEigenvalues(device_configuration, 0.0*eV)
nlprint(eigenvalues)

Notes

The TransmissionEigenvalues is an analysis option which finds the eigenvalues of the transmission matrix. The transmission matrix, for a given energy, \(E\), and k-point, \(\mathbf{k}\), is given by

\[T_{nm}(E, \mathbf{k}) = \sum_\ell t_{n\ell}(\mathbf{k})\, t_{m\ell}^*(\mathbf{k}),\]

where \(t_{n\ell}(\mathbf{k})\) is the transmission amplitude from Bloch state \(\psi_n(\mathbf{k})\) in the left electrode to Bloch state \(\psi_\ell(\mathbf{k})\) in the right electrode. The transmission coefficient is given by the trace of the transmission matrix,

\[T(E, \mathbf{k}) = \mathrm{Tr}\left[T_{nm}(E, \mathbf{k})\right].\]

Below, we will suppress the indices \(E\) and \(\mathbf{k}\) in most cases, but keep in mind that all quantities depend on these quantum numbers parametrically. The transmission eigenvalues \(\lambda_\alpha\) are the eigenvalues of the transmission matrix \(T_{nm}\). It follows from the invariance of the trace of a matrix that the transmission eigenvalues sum up to the transmission coefficient,

\[\sum_\alpha \lambda_\alpha = T.\]

The transmission eigenvalues are, in QuantumATK, in the range [0,1] for each spin channel. See also TransmissionEigenstate.
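The trace identity \(\sum_\alpha \lambda_\alpha = \mathrm{Tr}\,T\) is easy to illustrate with a random, unphysical amplitude matrix; a minimal numpy sketch (not using QuantumATK itself):

```python
import numpy as np

# Random transmission amplitudes t_{nl} -- a toy matrix, not from any
# physical calculation.
rng = np.random.default_rng(0)
t = 0.1 * (rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4)))

T = t @ t.conj().T              # T_{nm} = sum_l t_{nl} t*_{ml}, Hermitian
lam = np.linalg.eigvalsh(T)     # eigenvalues of the toy transmission matrix

# The eigenvalues are real and non-negative, and sum to the trace.
assert np.all(lam >= -1e-12)
assert np.isclose(lam.sum(), np.trace(T).real)
```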
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (roughly: "The 'path' comes into being only because we observe it.") Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric.
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. Since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap). I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition. Why is the graviton spin 2, beyond hand-waving? Sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Given a deterministic partial-information zero-sum game with only finitely many states, whose possible outcomes are [lose,draw,win] with values [-1,0,+1] respectively, what is the complexity of approximating the value of such a game additively within $\epsilon$? In particular, I can't come up with any algorithm whatsoever for doing that. The rest of this post is devoted entirely to giving a more thorough description of the problem, so if you can already figure out what the question at the top of this post means, then there's no reason for you to read the rest of this post. Given a referee machine with states $\{1,2,3,...,S\}$, with a designated initial state $s_0$, a state $s_a$ whose score pair is $[-1,+1]$, a state $s_b$ whose score pair is $[+1,-1]$, and states of the form $[\mbox{p1_info,p2_info,num_of_choices,player_to_move,next_state_table}]$ where: $\mbox{player_to_move} \in \{1,2\}$ $\mbox{next_state_table}$ is a function from $\{1,2,3,...,\mbox{num_of_choices}\} \to \{1,2,3,...,S\}$ $\mbox{p1_info},\mbox{p2_info}, \mbox{num_of_choices} \geq 1$ When the machine is in a state of that form, it: sends $\mbox{p1_info}$ to Player 1 and sends $\mbox{p2_info}$ to Player 2, sends $\mbox{num_of_choices}$ to the indicated player, waits for an element of $\{1,2,3,...,\mbox{num_of_choices}\}$ as input from that player, then goes to the state indicated by $\mbox{next_state_table}$. When the machine enters one of the other two states $s_a$ or $s_b$, it halts with that state's score pair as its output. There is a natural two-player game: the referee machine is started in state $s_0 = 1$, the players provide the inputs that the referee machine waits for, if the referee machine halts then Player 1 scores the first value of the machine's output pair and Player 2 scores the second value of the machine's output pair, otherwise both players score 0. What is the complexity of the following problem?
Given such a referee machine and a positive integer N, output a rational number that is (additively) within 1/N of the value of the natural game for Player 1. As mentioned earlier in this question, I can't come up with any algorithm whatsoever for doing that.
Verify Simulations with the Method of Manufactured Solutions How do we check if a simulation tool works correctly? One approach is the Method of Manufactured Solutions. The process involves assuming a solution, obtaining source terms and other auxiliary conditions consistent with the assumption, solving the problem with those conditions as inputs to the simulation tool, and comparing the results with the assumed solution. The method is easy to use and very versatile. For example, researchers at Sandia National Laboratories have used it with several in-house codes. Verification and Validation Before using a numerical simulation tool to predict outcomes from previously unforeseen situations, we want to build trust in its reliability. We can do this by checking whether the simulation tool accurately reproduces available analytical solutions or whether its results match experimental observations. This brings us to the two closely related topics of verification and validation. Let’s clarify what these two terms mean in the context of numerical simulations. To numerically simulate a physical problem, we take two steps: Construct a mathematical model of the physical system. This is where we account for all of the factors (inputs) that influence observed behavior (outputs) and postulate the governing equations. The result is often a set of implicit relations between inputs and outputs. This is frequently a system of partial differential equations with initial and boundary conditions that collectively are referred to as an initial boundary value problem (IBVP). Solve the mathematical model to obtain the outputs as explicit functions of the inputs. However, such closed-form solutions are not available for most problems of practical interest. In this case, we use numerical methods to obtain approximate solutions, often with the help of computers to solve large systems of generally nonlinear algebraic equations and inequalities.
There are two situations where errors can be introduced. First, they can occur in the mathematical model itself. Potential errors include overlooking an important factor or assuming an unphysical relationship between variables. Validation is the process of making sure such errors are not introduced when constructing the mathematical model. Verification, on the other hand, is to ascertain that the mathematical model is accurately solved. Here, we are ensuring that the numerical algorithm is convergent and the computer implementation is correct, so that the numerical solution is accurate. In brief, during validation we ask if we posed the appropriate mathematical model to describe the physical system, whereas in verification we investigate if we are obtaining an accurate numerical solution to the mathematical model. Now, we will dive deeper into the verification of numerical solutions to initial boundary value problems (IBVPs). Different Verification Approaches How do we check if a simulation tool is accurately solving an IBVP? One possibility is to choose a problem that has an exact analytical solution and use the exact solution as a benchmark. The method of separation of variables, for example, can be used to obtain solutions to simple IBVPs. The utility of this approach is limited by the fact that most problems of practical interest do not have exact solutions — the raison d’être of computer simulation. Still, this approach is useful as a sanity check for algorithms and programming. Another approach is to compare simulation results with experimental data. To be clear, this is combining validation and verification in one step, which is sometimes called qualification. It is possible but unlikely that experimental observations are matched coincidentally by a faulty solution through a combination of a flawed mathematical model and a wrong algorithm or a bug in the programming. 
Barring such rare occurrences, a good match between a numerical solution and an experimental observation vouches for the validity of the mathematical model and the veracity of the solution procedure. The Application Libraries in COMSOL Multiphysics contain many verification models that use one or both of these approaches. They are organized by physics areas. Verification models are available in the Application Libraries of COMSOL Multiphysics. What if we want to verify our results in the absence of exact mathematical solutions and experimental data? We can turn to the method of manufactured solutions. Implementing the Method of Manufactured Solutions The goal of solving an IBVP is to find an explicit expression for the solution in terms of independent variables, usually space and time, given problem parameters such as material properties, boundary conditions, initial conditions, and source terms. Common forms of source terms include body forces such as gravity in structural mechanics and fluid flow problems, reaction terms in transport problems, and heat sources in thermal problems. In the Method of Manufactured Solutions (MMS), we flip the script and start with an assumed explicit expression for the solution. Then, we substitute the solution into the differential equations and obtain a consistent set of source terms, initial conditions, and boundary conditions. This usually involves evaluating a number of derivatives. We will soon see how the symbolic algebra routines in COMSOL Multiphysics can help with this process. Similarly, we evaluate the assumed solution at time t = 0 and at the boundaries to obtain the initial and boundary conditions. Next comes the verification step. Given the source terms and auxiliary conditions just obtained, we use the simulation tool to obtain a numerical solution to the IBVP and compare it to the original assumed solution with which we started. Let us illustrate the steps with a simple example.
Verifying 1D Heat Conduction Consider a 1D heat conduction problem in a bar of length L with initial condition and fixed temperatures at the two ends given by The coefficients A_c, \rho, C_p, and k stand for the cross-sectional area, mass density, heat capacity, and thermal conductivity, respectively. The heat source is given by Q. Our goal is to verify the solution of this problem using the method of manufactured solutions. First, we assume an explicit form for the solution. Let’s consider the temperature distribution where \tau is a characteristic time, which for this example is an hour. We introduce a new variable u for the assumed temperature to distinguish it from the computed temperature T. Next, we find the source term consistent with the assumed solution. We can hand calculate partial derivatives of the solution with respect to space and time and substitute them in the differential equation to obtain Q. Alternatively, since COMSOL Multiphysics is able to perform symbolic manipulations, we will use that feature instead of hand calculating the source term. In the case of uniform material and cross-sectional properties, we can declare A_c, \rho, C_p, and k as parameters. The general heterogeneous case requires variables, as do time-dependent boundary conditions. Notice the use of the operator d(), one of the built-in differentiation operators in COMSOL Multiphysics, shown in the screenshot below. The symbolic algebra routine in COMSOL Multiphysics can automate the evaluation of partial derivatives. We perform this symbolic manipulation with the caveat that we trust the symbolic algebra. Otherwise, any errors observed later could be from the symbolic manipulation and not the numerical solution. Of course, we can plot a hand-calculated expression for Q alongside the result of the symbolic manipulation shown above to verify the symbolic algebra routine. Next, we compute the initial and boundary conditions. 
The initial condition is the assumed solution evaluated at t = 0. The values of the temperature at the two ends of the bar are g_1(t) = g_2(t) = 500 K. Next, we obtain the numerical solution of the problem using the source term, as well as the initial and boundary conditions we have just calculated. For this example, let us use the Heat Transfer in Solids physics interface. Add initial values, boundary conditions, and sources derived from the assumed solution. For the final step, we compare the numerical solution with the assumed solution. The plots below show the temperature after a time period of one day. The first solution is obtained using linear elements, whereas the second is obtained using quadratic elements. For this type of problem, COMSOL Multiphysics chooses quadratic elements by default. The solution computed using the manufactured solution with linear elements (left) and quadratic elements (right). Checking Different Parts of the Code The MMS gives us the flexibility to check different parts of the code. In the example above, for the sake of simplicity we have intentionally left many parts of the IBVP unchecked. In practice, every term in the equation should be checked in its most general form. For example, to check whether the code accurately handles nonuniform cross-sectional areas, we need to define a spatially variable area before deriving the source term. The same is true for other coefficients such as material properties. A similar check should be made for all boundary and initial conditions. If, for example, we want to specify the flux on the left end instead of the temperature, we first evaluate the flux corresponding to the manufactured solution, i.e., -n\cdot(-A_c k \nabla u), where n is the outward unit normal. For the assumed solution in this example, the inward flux at the left end becomes \frac{A_c k}{L}\frac{t}{\tau}\times 1\,\mathrm{K}.
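The source-term derivation can also be sketched outside COMSOL with a symbolic algebra package. The manufactured solution below is a hypothetical form (the exact expression in the original appears only as an image), chosen so that it reproduces the stated initial condition (500 K), the boundary values g_1 = g_2 = 500 K, and the quoted left-end flux (A_c k / L)(t/τ)·1 K:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
L, tau, Ac, rho, Cp, kc = sp.symbols('L tau A_c rho C_p k', positive=True)

# Hypothetical manufactured temperature (in kelvin), consistent with
# u(x, 0) = 500, u(0, t) = u(L, t) = 500, and the quoted left-end flux.
u = 500 + (t / tau) * (x / L) * (1 - x / L)

# 1D heat equation: rho*Cp*Ac*du/dt = d/dx(Ac*k*du/dx) + Q  =>  solve for Q.
Q = rho * Cp * Ac * sp.diff(u, t) - sp.diff(Ac * kc * sp.diff(u, x), x)

# Initial and boundary conditions implied by the assumed solution.
ic = u.subs(t, 0)                      # initial temperature field
g1, g2 = u.subs(x, 0), u.subs(x, L)   # fixed end temperatures

# Inward flux at the left end: -n.(-Ac k grad u) with n = -x_hat.
flux_left = (Ac * kc * sp.diff(u, x)).subs(x, 0)

print(sp.simplify(Q))
print(sp.simplify(flux_left))
```

Everything downstream of the assumed `u` is derived mechanically, which is exactly the role the symbolic algebra routines play in the workflow described above.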
In COMSOL Multiphysics, the default boundary condition for heat transfer in solids is thermal insulation. What if we want to verify the handling of thermal insulation on the left end? We would need to manufacture a new solution whose derivative vanishes on the left end, that is, one whose spatial profile has zero slope there. Note that during verification, we are checking if the equations are being correctly solved. We are not concerned with whether the solution corresponds to physical situations. Remember that once we manufacture a new solution, we have to recalculate the source term, initial conditions, and boundary conditions according to the assumed solution. Of course, when we use the symbolic manipulation tools in COMSOL Multiphysics, we are exempt from the tedium! Convergence Rate As shown in the graph above, the solutions obtained with linear and quadratic elements converged as the mesh size was reduced. This qualitative convergence gives us some confidence in the numerical solution. We can further scrutinize the numerical method by studying its rate of convergence, which provides a quantitative check of the numerical procedure. For example, for the stationary version of the problem, the standard finite element error estimate, with the error measured in the m-order Sobolev norm, is \|u-u_h\|_m \leq Ch^{p+1-m}\|u\|_{p+1}, where u and u_h are the exact and finite element solutions, h is the maximum element size, p is the order of the approximation polynomials (shape functions), and C is a mesh-independent constant. For m = 0, this gives the error estimate \|u-u_h\|_0 \leq Ch^{p+1}\|u\|_{p+1}. Returning to the method of manufactured solutions, this implies that the solution with linear elements (p = 1) should show second-order convergence when the mesh is refined. If we plot the norm of the error with respect to mesh size on a log-log plot, the slope should asymptotically approach 2. If this does not happen, we will have to check the code or the accuracy and regularity of inputs such as material and geometric properties.
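A convergence-rate study of this kind can be illustrated with a minimal self-contained sketch (not COMSOL, and not the heat conduction model above): a 1D stationary stand-in problem with a manufactured solution, linear elements on a uniform mesh, and a log-log slope estimate of the L2 error:

```python
import numpy as np

def solve_manufactured(n):
    """Linear-element FEM (equivalent to central differences on a uniform
    mesh) for -u'' = f on (0, 1), u(0) = u(1) = 0, with manufactured
    solution u = sin(pi x), hence f = pi^2 sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Tridiagonal stiffness matrix for the interior nodes.
    main = 2.0 / h * np.ones(n - 1)
    off = -1.0 / h * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    b = h * np.pi**2 * np.sin(np.pi * x[1:-1])  # lumped load vector
    uh = np.zeros(n + 1)
    uh[1:-1] = np.linalg.solve(A, b)
    err = np.sqrt(h * np.sum((uh - np.sin(np.pi * x))**2))  # discrete L2 norm
    return h, err

hs, errs = zip(*(solve_manufactured(n) for n in (8, 16, 32, 64)))
slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
print(f"observed convergence rate ~ {slope:.2f}")  # expect ~2 for p = 1, m = 0
```

Halving h should divide the L2 error by roughly four; the fitted log-log slope is the observed order, to be compared with the theoretical p + 1 = 2.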
As the figures below show, the numerical solution converges at the theoretically expected rate. Left: Use integration operators to define norms. The operator intop1 is defined to integrate over the domain. Right: Log-log plot of error versus mesh size shows second-order convergence in the L_2-norm (m = 0) for linear elements, which is consistent with theoretical prediction. While we should always check convergence, the theoretical convergence rate can only be checked for those problems like the one above where a priori error estimates are available. When you have such problems, remember that the method of manufactured solutions can help you verify if your code shows the correct asymptotic behavior. Nonlinear Problems and Coupled Problems In case of constitutive nonlinearity, the coefficients in the equation depend on the solution. In heat conduction, for example, thermal conductivity can depend on the temperature. In such cases, the coefficients need to be derived from the assumed solution. Coupled (multiphysics) problems have more than one governing equation. Once solutions are assumed for all the fields involved, source terms have to be derived for each governing equation. Uniqueness Note that the logic behind the method of manufactured solutions holds only if the governing system of equations has a unique solution under the conditions (source term, boundary, and initial conditions) implied by the assumed solution. For example, in the stationary heat conduction problem, uniqueness proofs require positive thermal conductivity. While this is straightforward to check for in the case of isotropic uniform thermal conductivity, in the case of temperature dependent conductivity or anisotropy more thought should be given when manufacturing the solution to not violate such assumptions. When using the method of manufactured solutions, the solution exists by construction. 
In addition, uniqueness proofs are available for a much larger class of problems than those for which we have exact analytical solutions. Thus, the method gives us more room to work with than searching for exact solutions starting from source terms and initial and boundary conditions. Try It Yourself The built-in symbolic manipulation functionality of COMSOL Multiphysics makes it easy to implement the MMS for code verification as well as for educational purposes. While we do extensive testing of our codes, we welcome scrutiny on the part of our users. This blog post introduced a versatile tool that you can use to verify the various physics interfaces. You can also verify your own implementations when using equation-based modeling or the Physics Builder in COMSOL Multiphysics. If you have any questions about this technique, please feel free to contact us! Resources For an extensive discussion of the method of manufactured solutions, including relative strengths and limitations, see this report from Sandia National Laboratories. The report details a set of blind tests in which one author planted a series of code mistakes unbeknownst to the second author, who had to mine-sweep using the method described in this blog post. For a broader discussion of verification and validation in the context of scientific computing, check out W. J. Oberkampf and C. J. Roy, Verification and Validation in Scientific Computing, Cambridge University Press, 2010. Standard error estimates for the finite element method are available in texts such as Thomas J. R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Dover Publications, 2000, and B. Daya Reddy, Introductory Functional Analysis: With Applications to Boundary Value Problems and Finite Elements, Springer-Verlag, 1997.
Learning Objectives By the end of this section, you will be able to: Describe how Tycho Brahe and Johannes Kepler contributed to our understanding of how planets move around the Sun Explain Kepler’s three laws of planetary motion At about the time that Galileo was beginning his experiments with falling bodies, the efforts of two other scientists dramatically advanced our understanding of the motions of the planets. These two astronomers were the observer Tycho Brahe and the mathematician Johannes Kepler. Together, they placed the speculations of Copernicus on a sound mathematical basis and paved the way for the work of Isaac Newton in the next century. Tycho Brahe’s Observatory Three years after the publication of Copernicus’ De Revolutionibus, Tycho Brahe was born to a family of Danish nobility. He developed an early interest in astronomy and, as a young man, made significant astronomical observations. Among these was a careful study of what we now know was an exploding star that flared up to great brilliance in the night sky. His growing reputation gained him the patronage of the Danish King Frederick II, and at the age of 30, Brahe was able to establish a fine astronomical observatory on the North Sea island of Hven (Figure 1). Brahe was the last and greatest of the pre-telescopic observers in Europe. At Hven, Brahe made a continuous record of the positions of the Sun, Moon, and planets for almost 20 years. His extensive and precise observations enabled him to note that the positions of the planets varied from those given in published tables, which were based on the work of Ptolemy. These data were extremely valuable, but Brahe didn’t have the ability to analyze them and develop a better model than what Ptolemy had published. He was further inhibited because he was an extravagant and cantankerous fellow, and he accumulated enemies among government officials. When his patron, Frederick II, died in 1588, Brahe lost his political base and decided to leave Denmark.
He took up residence in Prague, where he became court astronomer to Emperor Rudolf of Bohemia. There, in the year before his death, Brahe found a most able young mathematician, Johannes Kepler, to assist him in analyzing his extensive planetary data. Johannes Kepler Johannes Kepler was born into a poor family in the German province of Württemberg and lived much of his life amid the turmoil of the Thirty Years’ War (see Figure 1). He attended university at Tübingen and studied for a theological career. There, he learned the principles of the Copernican system and became converted to the heliocentric hypothesis. Eventually, Kepler went to Prague to serve as an assistant to Brahe, who set him to work trying to find a satisfactory theory of planetary motion—one that was compatible with the long series of observations made at Hven. Brahe was reluctant to provide Kepler with much material at any one time for fear that Kepler would discover the secrets of the universal motion by himself, thereby robbing Brahe of some of the glory. Only after Brahe’s death in 1601 did Kepler get full possession of the priceless records. Their study occupied most of Kepler’s time for more than 20 years. Through his analysis of the motions of the planets, Kepler developed a series of principles, now known as Kepler’s three laws, which described the behavior of planets based on their paths through space. The first two laws of planetary motion were published in 1609 in The New Astronomy. Their discovery was a profound step in the development of modern science.
Next to the circle, the ellipse is the simplest kind of closed curve, belonging to a family of curves known as conic sections (Figure 2). You might recall from math classes that in a circle, the center is a special point. The distance from the center to anywhere on the circle is exactly the same. In an ellipse, the sum of the distance from two special points inside the ellipse to any point on the ellipse is always the same. These two points inside the ellipse are called its foci (singular: focus), a word invented for this purpose by Kepler. This property suggests a simple way to draw an ellipse (Figure 3). We wrap the ends of a loop of string around two tacks pushed through a sheet of paper into a drawing board, so that the string is slack. If we push a pencil against the string, making the string taut, and then slide the pencil against the string all around the tacks, the curve that results is an ellipse. At any point where the pencil may be, the sum of the distances from the pencil to the two tacks is a constant length—the length of the string. The tacks are at the two foci of the ellipse. The widest diameter of the ellipse is called its major axis. Half this distance—that is, the distance from the center of the ellipse to one end—is the semimajor axis, which is usually used to specify the size of the ellipse. For example, the semimajor axis of the orbit of Mars, which is also the planet’s average distance from the Sun, is 228 million kilometers. The shape (roundness) of an ellipse depends on how close together the two foci are, compared with the major axis. The ratio of the distance between the foci to the length of the major axis is called the eccentricity of the ellipse. If the foci (or tacks) are moved to the same location, then the distance between the foci would be zero. This means that the eccentricity is zero and the ellipse is just a circle; thus, a circle can be called an ellipse of zero eccentricity. In a circle, the semimajor axis would be the radius. 
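The defining property behind the string construction can be checked numerically: for every point on an ellipse with semimajor axis a and eccentricity e, the summed distances to the two foci equal the string length, 2a. A small sketch (the Mars-like numbers are just for illustration):

```python
import numpy as np

# Ellipse with Mars-like parameters: semimajor axis a (in AU), eccentricity e.
a, e = 1.52, 0.1
b = a * np.sqrt(1 - e**2)   # semiminor axis
c = a * e                   # distance from center to each focus (the "tacks")

# Sample points around the ellipse.
theta = np.linspace(0, 2 * np.pi, 1000)
px, py = a * np.cos(theta), b * np.sin(theta)

# Distance from each point to the two foci at (+c, 0) and (-c, 0).
d1 = np.hypot(px - c, py)
d2 = np.hypot(px + c, py)
string_length = d1 + d2

print(string_length.min(), string_length.max())  # both equal 2a = 3.04
```

The constant sum is exactly the taut string of the tack-and-pencil construction, and it is why a single pair (a, e) completely specifies the ellipse.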
Next, we can make ellipses of various elongations (or extended lengths) by varying the spacing of the tacks (as long as they are not farther apart than the length of the string). The greater the eccentricity, the more elongated is the ellipse, up to a maximum eccentricity of 1.0, when the ellipse becomes “flat,” the other extreme from a circle. The size and shape of an ellipse are completely specified by its semimajor axis and its eccentricity. Using Brahe’s data, Kepler found that Mars has an elliptical orbit, with the Sun at one focus (the other focus is empty). The eccentricity of the orbit of Mars is only about 0.1; its orbit, drawn to scale, would be practically indistinguishable from a circle, but the difference turned out to be critical for understanding planetary motions. Kepler generalized this result in his first law and said that the orbits of all the planets are ellipses. Here was a decisive moment in the history of human thought: it was not necessary to have only circles in order to have an acceptable cosmos. The universe could be a bit more complex than the Greek philosophers had wanted it to be. Kepler’s second law deals with the speed with which each planet moves along its ellipse, also known as its orbital speed. Working with Brahe’s observations of Mars, Kepler discovered that the planet speeds up as it comes closer to the Sun and slows down as it pulls away from the Sun. He expressed the precise form of this relationship by imagining that the Sun and Mars are connected by a straight, elastic line. When Mars is closer to the Sun (positions 1 and 2 in Figure 4), the elastic line is not stretched as much, and the planet moves rapidly. Farther from the Sun, as in positions 3 and 4, the line is stretched a lot, and the planet does not move so fast. As Mars travels in its elliptical orbit around the Sun, the elastic line sweeps out areas of the ellipse as it moves (the colored regions in our figure).
Kepler found that in equal intervals of time (t), the areas swept out in space by this imaginary line are always equal; that is, the area of the region B from 1 to 2 is the same as that of region A from 3 to 4. If a planet moves in a circular orbit, the elastic line is always stretched the same amount and the planet moves at a constant speed around its orbit. But, as Kepler discovered, in most orbits the speed of a planet orbiting its star (or moon orbiting its planet) tends to vary because the orbit is elliptical. Kepler’s Third Law Kepler’s first two laws of planetary motion describe the shape of a planet’s orbit and allow us to calculate the speed of its motion at any point in the orbit. Kepler was pleased to have discovered such fundamental rules, but they did not satisfy his quest to fully understand planetary motions. He wanted to know why the orbits of the planets were spaced as they are and to find a mathematical pattern in their movements—a “harmony of the spheres” as he called it. For many years he worked to discover mathematical relationships governing planetary spacing and the time each planet took to go around the Sun. In 1619, Kepler discovered a basic relationship to relate the planets’ orbits to their relative distances from the Sun. We define a planet’s orbital period, (P), as the time it takes a planet to travel once around the Sun. Also, recall that a planet’s semimajor axis, a, is equal to its average distance from the Sun. The relationship, now known as Kepler’s third law, says that a planet’s orbital period squared is proportional to the semimajor axis of its orbit cubed, or [latex]{P}^{2}\propto {a}^{3}[/latex] When P (the orbital period) is measured in years, and a is expressed in a quantity known as an astronomical unit (AU), the two sides of the formula are not only proportional but equal. One AU is the average distance between Earth and the Sun and is approximately equal to 1.5 × 10^8 kilometers.
In these units, [latex]{P}^{2}={a}^{3}[/latex] Kepler’s third law applies to all objects orbiting the Sun, including Earth, and provides a means for calculating their relative distances from the Sun from the time they take to orbit. Let’s look at a specific example to illustrate how useful Kepler’s third law is. For instance, suppose you time how long Mars takes to go around the Sun (in Earth years). Kepler’s third law can then be used to calculate Mars’ average distance from the Sun. Mars’ orbital period (1.88 Earth years) squared, or P², is 1.88² = 3.53, and according to the equation for Kepler’s third law, this equals the cube of its semimajor axis, or a³. So what number must be cubed to give 3.53? The answer is 1.52 (since 1.52 × 1.52 × 1.52 ≈ 3.53). Thus, Mars’ semimajor axis in astronomical units must be 1.52 AU. In other words, to go around the Sun in a little less than two years, Mars must be about 50% (half again) as far from the Sun as Earth is. Example 1: Calculating Periods Imagine an object is traveling around the Sun. What would be the orbital period of the object if its orbit has a semimajor axis of 50 AU? From Kepler’s third law, we know that (when we use units of years and AU) [latex]{P}^{2}={a}^{3}[/latex] If the object’s orbit has a semimajor axis of 50 AU (a = 50), we can cube 50 and then take the square root of the result to get P: [latex]\begin{array}{cc}\hfill P& =\sqrt{{a}^{3}}\hfill \\ \hfill P& =\sqrt{50\times 50\times 50}=\sqrt{125,000}=353.6\text{ years}\hfill \end{array}[/latex] Therefore, the orbital period of the object is about 350 years. This would place our hypothetical object beyond the orbit of Pluto. Check Your Learning What would be the orbital period of an asteroid (a rocky chunk between Mars and Jupiter) with a semimajor axis of 3 AU?
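The arithmetic in Example 1 and the Check Your Learning question can be reproduced with a few lines of code; the units are years and AU, so that P² = a³ holds with equality:

```python
import math

def period_years(a_au):
    """Kepler's third law, P^2 = a^3: period in years from semimajor axis in AU."""
    return math.sqrt(a_au**3)

def semimajor_axis_au(p_years):
    """Invert P^2 = a^3 to recover the semimajor axis from the period."""
    return p_years ** (2.0 / 3.0)

print(period_years(50))         # ~353.6 years: the Example 1 object
print(period_years(3))          # ~5.2 years: the Check Your Learning asteroid
print(semimajor_axis_au(1.88))  # ~1.52 AU: Mars, from its 1.88-year period
```

The asteroid at 3 AU thus takes about 5.2 years to circle the Sun, and inverting the law recovers Mars’ 1.52 AU semimajor axis from its observed period.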
Kepler’s three laws of planetary motion can be summarized as follows: Kepler’s first law: Each planet moves around the Sun in an orbit that is an ellipse, with the Sun at one focus of the ellipse. Kepler’s second law: The straight line joining a planet and the Sun sweeps out equal areas in space in equal intervals of time. Kepler’s third law: The square of a planet’s orbital period is directly proportional to the cube of the semimajor axis of its orbit. Kepler’s three laws provide a precise geometric description of planetary motion within the framework of the Copernican system. With these tools, it was possible to calculate planetary positions with greatly improved precision. Still, Kepler’s laws are purely descriptive: they do not help us understand what forces of nature constrain the planets to follow this particular set of rules. That step was left to Isaac Newton. Example 2: Applying Kepler’s Third Law Using the orbital periods and semimajor axes for Venus and Earth that are provided here, calculate P² and a³, and verify that they obey Kepler’s third law. Venus’ orbital period is 0.62 year, and its semimajor axis is 0.72 AU. Earth’s orbital period is 1.00 year, and its semimajor axis is 1.00 AU. P² = 0.62 × 0.62 = 0.38 year and a³ = 0.72 × 0.72 × 0.72 = 0.37 AU (rounding numbers sometimes causes minor discrepancies like this). The orbital period (0.38 year) approximates the semimajor axis (0.37 AU). Therefore, Venus obeys Kepler’s third law. For Earth, P² = 1.00 × 1.00 = 1.00 year and a³ = 1.00 × 1.00 × 1.00 = 1.00 AU. The orbital period (1.00 year) approximates (in this case, equals) the semimajor axis (1.00 AU). Therefore, Earth obeys Kepler’s third law. Check Your Learning Using the orbital periods and semimajor axes for Saturn and Jupiter that are provided here, calculate P² and a³, and verify that they obey Kepler’s third law. Saturn’s orbital period is 29.46 years, and its semimajor axis is 9.54 AU.
Jupiter’s orbital period is 11.86 years, and its semimajor axis is 5.20 AU. P² = 29.46 × 29.46 = 867.9 years and a³ = 9.54 × 9.54 × 9.54 = 868.3 AU. The orbital period (867.9 years) approximates the semimajor axis (868.3 AU). Therefore, Saturn obeys Kepler’s third law. Key Concepts and Summary Tycho Brahe’s accurate observations of planetary positions provided the data used by Johannes Kepler to derive his three fundamental laws of planetary motion. Kepler’s laws describe the behavior of planets in their orbits as follows: (1) planetary orbits are ellipses with the Sun at one focus; (2) in equal intervals, a planet’s orbit sweeps out equal areas; and (3) the relationship between the orbital period (P) and the semimajor axis (a) of an orbit is given by P² = a³ (when a is in units of AU and P is in units of Earth years). Glossary astronomical unit (AU): the unit of length defined as the average distance between Earth and the Sun; this distance is about 1.5 × 10^8 kilometers eccentricity: in an ellipse, the ratio of the distance between the foci to the major axis ellipse: a closed curve for which the sum of the distances from any point on the ellipse to two points inside (called the foci) is always the same focus: (plural: foci) one of two fixed points inside an ellipse from which the sum of the distances to any point on the ellipse is constant Kepler’s first law: each planet moves around the Sun in an orbit that is an ellipse, with the Sun at one focus of the ellipse Kepler’s second law: the straight line joining a planet and the Sun sweeps out equal areas in space in equal intervals of time Kepler’s third law: the square of a planet’s orbital period is directly proportional to the cube of the semimajor axis of its orbit major axis: the maximum diameter of an ellipse orbit: the path of an object that is in revolution about another object or point orbital period (P): the time it takes an object to travel once around the Sun orbital speed: the speed at which an
object (usually a planet) orbits around the mass of another object; in the case of a planet, the speed at which each planet moves along its ellipse semimajor axis: half of the major axis of a conic section, such as an ellipse
Suppose that we assume the version of Radon-Nikodym for finite measures: if $X$ is some set, $\mathcal{A}$ is a $\sigma$-algebra on $X$, and $\mu$ and $\nu$ are finite measures on $X$ where $\nu$ is absolutely continuous with respect to $\mu$, then we can find a nonnegative integrable function $g$ such that $$\nu(E) = \int_E gd{\mu}.$$ I was wondering if we could argue as follows in the case where $\mu$ and $\nu$ are $\sigma$-finite: Let $\{A_n\}_n$ be a sequence over $\mathcal{A}$ such that $A_n\subseteq A_{n+1}$, $\mu(A_{n+1})<\infty$, $\nu(A_{n+1})<\infty$ and $\cup_n A_n = X$. Then by applying the finite version of Radon-Nikodym to the case where we restrict our measures to $A_n$ we obtain a sequence of functions $\{g_n\}_n$ where $g_n:A_n\rightarrow \mathbf{R}$ and for any $\mathcal{A}\ni E\subset A_n$ $$\nu(E) = \int_E g_n d\mu.$$ We extend each $g_n$ to be zero outside of $A_n$. Claim: $g_n\leq g_{n+1}$ almost everywhere. Let $E = \{x\in A_n\mid g_n(x)>g_{n+1}(x)\}$; then $$\nu(E) =\int_{E}g_{n}d\mu>\int_{E}g_{n+1}d\mu=\nu(E)$$ which is impossible unless $\mu(E) = 0$ (and hence $\nu(E) =0$). Now define $$g = \lim_n g_n$$ then by the monotone convergence theorem $$\nu(E) = \lim_n \int_E g_nd\mu = \int_E gd\mu.$$ My question is whether this reasoning is valid?
Problem: For positive integers $n,k$, let $$S(n,k)=\sum_{i=1}^{n}i^k$$ and for positive integers $m,b$ with $b>1$, let $D(m,b)$ be the sum of the base-$b$ digits of $m$. Q$1$: Show that if $k\in\{1,2,3\}$ and $a$ is a positive integer such that $a{\,|\,}S(a,k)$, then $D(S(b,k),b)=b$, where $b=a+1$; and show that this does not hold for all $k > 3$. Q$2$: Show that $D((p')^{t}-D((p')^{2k+1}-S(p',2k),p'),p')=p'$, where $p$ is prime, $p'=p+1$, $p>2k+1$, $(p')^{t} \ge D((p')^{2k+1}-S(p',2k),p')>(p')^{t-1}$, and $k,t \in \mathbb{N}$. Note: half of Question $1$ (the case $k\in\{1,2,3\}$) is essentially proved in this link: proof for $k\in\{1,2,3\}$. I have already mentioned the observation behind Question $2$ in a different manner in this link: reference for Q$2$.
The short answer is: group velocity and phase velocity are just terms that help describe how frequency depends on wavelength in a material, and in specific instances can help give us information about how waves propagate in said material. However, at the end of the day, they're just mathematical quantities that aren't under any special obligation to have a neat physical interpretation. Now, for the slightly longer answer. As you might already be aware, purely sinusoidal waves are in reality a poor way of modeling real signals, since they're infinite both in time and space. Luckily for us, we can express any real life signal that has some spatial confinement as an integral of sinusoidal functions, and these sinusoidal functions are in many ways easier to handle. The tool that lets us do this is the Fourier transform, which basically says that given an arbitrary wave $\alpha(x,t)$ that depends on position and time, we can rewrite it as $$\alpha(x,t)=\int_{-\infty}^{\infty}A(k)e^{i(kx-\omega t)}dk$$ where $k$ is the wavenumber (basically the reciprocal of wavelength), $A(k)$ is the Fourier transform of the waveform at $t=0$ (which basically tells us how much of each wavelength the initial signal packet contains), and $\omega=\omega(k)$ is some function of the wavenumber (notation here shamelessly stolen from the wikipedia page on group velocity). So far, this is pure math-- all we've done is write a function in a different way. Now, remembering that $e^{i\theta}=\cos(\theta)+i\sin(\theta)$, you might realize that the integrand looks like an infinite sinusoidal wave traveling to the right at velocity $\omega / k$ for any given value of $k$ that we happen to be integrating over. This speed is the phase velocity $v_p$, and since $\omega$ is a function of $k$, $v_p$ is as well. 
The important thing to note is that there isn't necessarily a clean physical interpretation of this quantity, since the thing we physically observe is the integral of the sinusoids, not any individual components of this integral. About all we can say in general about the phase velocity is that it tells us how fast the crest of an infinite sinusoid of definite frequency would travel in our medium. But infinite sinusoids don't really transfer information, given that they're already present everywhere, so the phase velocity doesn't tell us anything about the rate of information transfer in any generality. So, it's perfectly possible for $v_p$ to be greater than $c$ for some specific value of $k$ as long as $\omega (k)$ is a function such that no signal can propagate faster than $c$. That being said, there are a few specific cases where phase velocity does have a physical interpretation. Namely, if $\omega/k$ is a constant, then waves will travel at the phase velocity undistorted so that the phase velocity is in fact the rate of information transfer. Aside from EM waves in a vacuum, this is rarely the case in physics-- $\omega$ is rarely proportional to $k$ and thus the phase velocity ceases to have a single value or simple physical meaning. Finally, group velocity is defined as $\frac{\partial \omega}{\partial k}$ and so it doesn't really have much meaning for a single sinusoidal wave since derivatives depend on values around a point, not just at it. The group velocity is useful if our $\omega (k)$ is nearly linear, in which case $v_g$ gives the approximate rate of information transfer (this is exact if the dispersion is exactly linear, as with EM waves in a vacuum). Like before, this isn't true for all materials and almost every material will exhibit non-linear dispersion if pushed into an extreme enough regime. 
It can also be useful if the packet doesn't contain a large spread of frequencies or doesn't travel a long distance (basically, it's useful whenever we can readily approximate $\omega(k)$ as its first order Taylor expansion in the integral above). TL;DR- In general, how a wave propagates through a medium is a very complex function that both depends on the medium and the shape of the wave. However, for some simple cases, the phase velocity and group velocity can point us in the right direction and save a lot of unnecessary work.
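The distinction between $v_p = \omega/k$ and $v_g = \partial\omega/\partial k$ is easy to check numerically for a concrete dispersion relation. Here is a sketch using deep-water gravity waves, $\omega(k)=\sqrt{gk}$ (my choice of example, not from the answer above), for which $v_g = v_p/2$ analytically:

```python
# Phase vs. group velocity for the deep-water dispersion omega(k) = sqrt(g k).
import numpy as np

grav = 9.81
k = np.linspace(0.01, 10.0, 1000)     # wavenumbers
omega = np.sqrt(grav * k)             # dispersion relation omega(k)
v_phase = omega / k                   # phase velocity, omega/k
v_group = np.gradient(omega, k)       # group velocity, d(omega)/dk, numerically

# away from the k -> 0 endpoint the numerical derivative matches v_p / 2
print(np.allclose(v_group[100:-2], v_phase[100:-2] / 2, rtol=1e-3))  # True
```

Because $\omega$ is not proportional to $k$ here, both velocities vary with $k$, which is exactly the dispersive situation the answer describes: a packet built from such components spreads as it travels.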
According to Wikipedia, the enthalpy of formation of water is $-285.8~\mathrm{kJ/mol}$ while the enthalpy of formation of steam is $-241.818~\mathrm{kJ/mol}$, implying the following: $$\ce{H2O(l) -> H2O(g)}\qquad (\Delta H=44.0~\mathrm{kJ/mol)}$$ Let us derive the enthalpy of the above reaction using the specific heat capacity of water and the specific heat of vaporization of water instead. $1~\mathrm{mol}$ of water weighs $0.018~\mathrm{kg}$. To raise $1~\mathrm{mol}$ of water from $25~\mathrm{^\circ C}$ to its boiling temperature requires $(0.018~\mathrm{kg}) \times (4200~\mathrm{J~kg^{-1}~K^{-1}}) \times (75~^\circ\mathrm{C}) = 5.67~\mathrm{kJ}$. To turn that amount of water to steam requires $(0.018~\mathrm{kg}) \times (2258~\mathrm{kJ~kg^{-1}}) = 40.644~\mathrm{kJ}$. Adding these two terms gives $46.314~\mathrm{kJ}$, implying the following: $$\ce{H2O(l) -> H2O(g)}\qquad (\Delta H=46.314~\mathrm{kJ/mol)}$$ Is the discrepancy between the two thermochemical equations simply due to measurement inaccuracies?
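The back-of-the-envelope arithmetic in the question can be reproduced directly (values as given in the question):

```python
# Heat 1 mol of water from 25 C to 100 C, then vaporize it.
m = 0.018        # kg of water in one mole
c = 4200.0       # J/(kg K), specific heat of liquid water
Lv = 2258e3      # J/kg, specific heat of vaporization
dT = 75.0        # K, from 25 C to 100 C

heating = m * c * dT       # 5670 J  = 5.67 kJ
vaporizing = m * Lv        # 40644 J = 40.644 kJ
total_kJ = (heating + vaporizing) / 1000.0
print(total_kJ)            # ≈ 46.314 kJ/mol
```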
I need some guidance with a quadratic equation. Suppose $x^2+20x-4000=0$. Here is what I have done so far. Using the quadratic formula $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ where, from the above equation, $a=1$, $b=20$, $c=-4000$, we find $$\begin{align*}x&=\frac{-20+\sqrt{(20)^2-(4)(1)(-4000)}}{2\times 1}\\ &=\frac{-20+\sqrt{16400}}{1}\\ &=108.0624847 \end{align*}$$ and $$\begin{align*} x&=\frac{-20-\sqrt{20^2-(4)(1)(-4000)}}{2\times1}\\ &=\frac{-20-\sqrt{16400}}{1} \\ &=-148.0624847 \end{align*}$$ Neither of these values for $x$ proves correct when applied to $x^2+20x-4000=0$. I know something is wrong but I can’t figure out what. Any guidance would be appreciated.
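The check described in the question (plugging the candidate values back into the equation) can be sketched numerically; both residuals come out far from zero, confirming that neither candidate is a root:

```python
# Substitute the two candidate values back into x^2 + 20x - 4000.
def f(x):
    return x**2 + 20*x - 4000

r1, r2 = 108.0624847, -148.0624847
print(f(r1), f(r2))   # both values are thousands away from 0
```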
Usually when we solve field equations, we start with a stress energy tensor and then solve for the Einstein tensor and then eventually the metric. What if we specify a desired geometry first? That is, write down a metric and then solve for the resulting stress energy tensor? You can certainly do this, and indeed it is regularly done. For example Alcubierre designed his FTL drive by starting with the metric he wanted and calculating the required stress-energy tensor. It is a straightforward calculation - it is somewhat tedious to do by hand but Mathematica would do the calculation in a few seconds. The problem is that the resulting stress-energy tensor will almost always contain contributions from exotic matter, as indeed the Alcubierre stress-energy tensor does, and that means it won't be physically meaningful. The chances of solving the Einstein equation by guessing geometries and ending up with a physically meaningful stress-energy tensor are vanishingly small. In most situations the distinction “matter first, geometry second” or “geometry first, matter second” is not that clear cut. Often assumptions are made that constrain both the geometry and stress energy tensor. Take for example the Schwarzschild metric. We derive it by writing down the most general metric compatible with restrictions imposed by physical description: “isolated static body with spherical symmetry”: $$ ds^2=-A(r)dt^2 + B(r)dr^2 + r^2(d\theta^2 + \sin^2 \theta\, d\phi^2). $$ Only then do we substitute this metric into the vacuum Einstein equations (with zero stress energy tensor) and obtain a couple of ordinary differential equations for the functions $A(r)$ and $B(r)$. So, we solve equations for a given matter content, but these equations are in a simple form because we specified large parts of the geometry first. Another class of examples is what could be called “science fiction geometries”: time machines, warp drives, traversable wormholes that challenge our intuition on what is allowed in the universe. 
Such “solutions” often start from a geometry written down with the desired properties, but the Einstein field equations are still considered in order to constrain what form of “exotic matter” is needed to obtain such geometries. Parameters of the geometry are often varied in order to minimize the “unnaturalness” of the resulting stress energy tensor. A few examples: the Alcubierre warp drive and its variations allow faster-than-light travel with the help of negative mass. Traversable wormholes would allow travel (or communication) between distant regions of the Universe (or between different universes). See this paper for an example of obtaining conditions on the stress-energy for such a spacetime. Yet another group of examples which have priority of the geometry over matter comes from astrophysics: observations often give us information about spacetime which could then be used to deduce the matter content. That is essentially how the $\Lambda$CDM model appears: the matter content, most notably the dark energy, is deduced from spacetime structure. You can certainly go from the metric to the energy-momentum tensor, but then you’re not “solving” anything. There are no differential equations to be solved if you’re doing that. It’s just a straightforward, although often tedious, computation (of the Einstein curvature tensor, which is proportional to the energy-momentum tensor) that requires nothing more than differentiation and algebra. It’s not a particularly useful thing to do. Trying lots of metrics and seeing what density and flow of energy and momentum they correspond to doesn’t really give you insight. It’s generally the energy-momentum tensor that is simple, and the metric that’s complicated, so you need to start with the former and solve for the latter. This is sometimes jokingly called Synge's method. 
Here's an excerpt from Ingemar Bengtsson's A Second Relativity Course describing it (see Chapter 5): We would now like to see a solution describing a physical system that approaches (in some sense) the Schwarzschild solution as it evolves. This can be obtained by means of a method invented by the Irish relativist Synge. Synge’s method is as follows. To solve $$ G_{ab} = 8 \pi T_{ab}, $$ rewrite as $$ T_{ab} = \frac{1}{8 \pi} G_{ab}, $$ choose any metric tensor $g_{ab}$, compute its Einstein tensor $G_{ab}$, and read off the stress-energy tensor $T_{ab}$ from Eq. (5.2). The result is a solution of Eq. (5.1). To avoid any misunderstanding, Synge meant this as a joke (and he did not predict dark matter). A stress-energy tensor computed in this way is not likely to obey any of the positivity conditions that are necessary for it to qualify as physical. Very occasionally the method works though. (Bengtsson then proceeds to describe the Vaidya solutions, which are found by basically writing down a metric that looks vaguely like a time-dependent Schwarzschild solution and then interpreting it.) It's possible that Synge describes his "method" in his 1960 book—the textbook I'm drawing from cites it in the passage above—but I don't have a copy handy.
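The "geometry first" computation the answers describe (metric in, Einstein tensor out) can be sketched symbolically. The helper names and conventions below are my own; as a sanity check the sketch uses the Schwarzschild metric, whose Einstein tensor must vanish because it is a vacuum solution:

```python
# Compute the Einstein tensor of a given metric with sympy.
import sympy as sp

t, r, th, ph, rs = sp.symbols('t r theta phi r_s', positive=True)
x = [t, r, th, ph]
n = 4
f = 1 - rs / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # Schwarzschild metric
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{bd} - d_d g_{bc})
def christoffel(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(n))

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(n)]
          for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}, contracted to the Ricci tensor R_{bd} = R^a_{bad}
def riemann(a, b, c, d):
    return (sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
            + sum(Gamma[a][c][e] * Gamma[e][b][d] - Gamma[a][d][e] * Gamma[e][b][c]
                  for e in range(n)))

ricci = sp.Matrix(n, n, lambda b, d: sp.simplify(sum(riemann(a, b, a, d) for a in range(n))))
rscal = sp.simplify(sum(ginv[b, d] * ricci[b, d] for b in range(n) for d in range(n)))
einstein = sp.simplify(ricci - sp.Rational(1, 2) * rscal * g)   # equals 8*pi*T_ab
print(einstein == sp.zeros(n, n))   # expect True for a vacuum solution
```

Swapping in an arbitrary metric for `g` gives exactly Synge's joke: the printout is the stress-energy tensor (up to $8\pi$) that the chosen geometry demands, physical or not.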
SIS (Short Integer Solution) Problem: Given $m$ uniformly random vectors $a \in Z_q^n$, grouped as the columns of a matrix $A \in Z_q^{n\times m}$, find a nonzero integer vector $z \in Z^m$ with $||z|| \leq \beta \lt q$, such that $Az = 0 \mod q$. Concerning the hardness of the problem, there is a theorem that states that: for any $m = poly(n)$, $\beta \gt 0$, solving $SIS$ is at least as hard as solving other approximation problems like $GapSVP_\gamma$ (decisional approximate Shortest Vector Problem) and $SIVP_\gamma$ (Shortest Independent Vectors Problem) on arbitrary $n$-dimensional lattices, for some $\gamma = \beta\cdot poly(n)$. My question is: what are the maximum values of $\beta$ and $m$ relative to $n$ for which the problem stays hard to solve? For example in the $GPV$ signatures they consider $m = 2n\log q$, and $\beta = 6n\log q$. But can we consider too $m = 4n\log q$? $8n\log q \dots$? $n^{100} \log q$? $\dots$ Same thing goes for $\beta$. What is the limit for these parameters beyond which the problem starts becoming easy?
A Chowdhury Articles written in Pramana – Journal of Physics

Volume 61 Issue 6 December 2003 pp 1121-1128

Absolute measurement for He-α resonance ($1s^2\ {}^1S_0 - 1s2p\ {}^1P_1$, at 40.2 Å) line emission from a laser-produced carbon plasma has been studied as a function of laser intensity. The optimum laser intensity is found to be $\approx 1.3\times 10^{12}$ W/cm$^2$ for the maximum emission of $3.2\times 10^{13}$ photons sr$^{-1}$ pulse$^{-1}$. Since this line lies in the water window spectral region, it has potential application in x-ray microscopic imaging of biological sample in wet condition. Theoretical calculation using the corona model for the emission of this line is also carried out with appropriate ionization and radiative recombination rate coefficients.

Volume 64 Issue 1 January 2005 pp 141-146

The effect of dielectronic recombination in determining charge-state distribution and radiative emission from a laser-produced carbon plasma has been investigated in the collisional radiative ionization equilibrium. It is observed that the relative abundances of different ions in the plasma, and soft X-ray emission intensity, get significantly altered when dielectronic recombination is included. Theoretical estimates of the relative population of CVI to CV ions and the ratio of line intensity emitted from them for two representative formulations of dielectronic recombination are presented.

Volume 68 Issue 1 January 2007 pp 43-49 Research Articles

Parametric dependence of the intensity of the 182 Å Balmer-𝛼 line $(C^{5+}; n = 3 \rightarrow 2)$, relevant to xuv soft X-ray lasing schemes, from laser-produced carbon plasma is studied in circular spot focusing geometry using a flat field grating spectrograph. The maximum spectral intensity for this line in space integrated mode occurred at a laser intensity of $1.2 \times 10^{13}$ W cm$^{-2}$. 
At this laser intensity, the space resolved measurements show that the spectral intensity of this line peaks at $\sim 1.5$ mm from the target surface, indicating the maximum population of C$^{5+}$ ions $(n = 3)$ at this distance. From a comparison of the spatial intensity variation of this line with that of the C$^{5+}$ Ly-𝛼 $(n = 2 \rightarrow 1)$ line, it is inferred that the $n = 3$ state of C$^{5+}$ ions is predominantly populated through three-body recombination pumping of C$^{6+}$ ions of the expanding plasma, consistent with quantitative estimates on recombination rates of different processes.
Consider each of the following encryption schemes and state whether the scheme is perfectly secret or not. Justify your answer by giving a detailed proof if your answer is Yes, and a counterexample if your answer is No. Consider an encryption scheme whose plaintext space is $\mathcal{M}=\{m\in\{0,1\}^\ell \mathrel{|} \text{the last bit of $m$ is $0$}\}$ and whose key generation algorithm chooses a uniform key from the key space $\mathcal{K}=\{0,1\}^{\ell-1}$. Suppose $\mathit{Enc}_k(m)=m \oplus (k\parallel 0)$ and $\mathit{Dec}_k(c)=c\oplus (k\parallel 0)$. $\newcommand{\given}{\mathrel{|}}$Recall the definition of perfect secrecy: An encryption scheme $(\mathit{Gen}, \mathit{Enc}, \mathit{Dec})$ with message space $\mathcal{M}$ is perfectly secret if for every probability distribution over $\mathcal{M}$, every message $m\in \mathcal{M}$, and every ciphertext $c\in \mathcal{C}$ for which $\Pr[C=c]>0$: $$\Pr[M=m\given C=c]=\Pr[M=m].$$ We first compute $\Pr[C=c\given M=m']$ for arbitrary $c\in \mathcal{C}$ and $m'\in \mathcal{M}$. \begin{equation*} \begin{aligned} \Pr[C=c\given M=m'] & =\Pr[\mathit{Enc}_K(m')=c]=\Pr[m' \oplus (K\parallel 0)=c] \\ & =\Pr[(K\parallel 0) = c\oplus m']=2^{1-\ell}\quad (1) \end{aligned} \end{equation*} where the final equality holds because the key $K$ is a uniform $(\ell-1)$-bit string and the last bit of $c\oplus m'$ is $0$ (both $c$ and $m'$ end in $0$). Fix any distribution over $\mathcal{M}$. For any $c\in \mathcal{C}$, we have \begin{equation*} \begin{aligned} \Pr[C=c] & = \sum_{m'\in\mathcal{M}} \Pr[C=c\given M=m'] \cdot \Pr[M=m'] \\ & = 2^{1-\ell} \cdot \sum_{m'\in \mathcal{M}} \Pr[M=m']=2^{1-\ell}\cdot 1=2^{1-\ell}\quad (2) \end{aligned} \end{equation*} where the sum is over $m'\in \mathcal{M}$ with $\Pr[M=m']\neq 0$. 
Bayes' Theorem gives: \begin{equation*} \begin{aligned} \Pr[M=m\given C=c] & = \dfrac{\Pr[C=c\given M=m]\cdot \Pr[M=m]}{\Pr[C=c]} \\ & = \dfrac{2^{1-\ell} \cdot \Pr[M=m]}{2^{1-\ell}} = \Pr[M=m] \end{aligned} \end{equation*} Hence we conclude that this encryption scheme is perfectly secret. MY QUESTION: I tried to follow the set up for the proof of the One-Time Pad being perfectly secure. However, I don't really understand the logic behind the proof (assuming what I did was correct). Can someone clear up why this technique is correct?
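The structure of the proof can be checked by brute force on a tiny instance (here $\ell = 4$; the names are mine). The sketch enumerates the scheme exactly with rational arithmetic and verifies $\Pr[M=m \mid C=c]=\Pr[M=m]$ for an arbitrary non-uniform message distribution, which is exactly what the definition demands:

```python
# Exhaustive check of perfect secrecy for Enc_k(m) = m XOR (k || 0).
from fractions import Fraction

ell = 4
msgs = [m for m in range(2**ell) if m % 2 == 0]      # last bit of m is 0
keys = list(range(2**(ell - 1)))                     # uniform (ell-1)-bit keys

# an arbitrary non-uniform distribution over the messages
w = list(range(1, len(msgs) + 1))
pm = {m: Fraction(wi, sum(w)) for m, wi in zip(msgs, w)}

def enc(k, m):
    return m ^ (k << 1)                              # m XOR (k || 0)

pc = {}          # Pr[C = c]
joint = {}       # Pr[M = m, C = c]
for m in msgs:
    for k in keys:
        c = enc(k, m)
        p = pm[m] * Fraction(1, len(keys))
        pc[c] = pc.get(c, Fraction(0)) + p
        joint[(m, c)] = joint.get((m, c), Fraction(0)) + p

perfect = all(joint.get((m, c), Fraction(0)) / pc[c] == pm[m]
              for m in msgs for c in pc)
print(perfect)  # True
```

This mirrors the proof: for every ciphertext, each message is reachable under exactly one key, so conditioning on the ciphertext does not change the message distribution.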
There are two parts to your question and I'd like to answer them separately.

Curve Construction

On a daily basis, you can observe prices on a large variety of instruments, whose prices are driven by news and trading flows. Based on market prices of these instruments, there are a number of ways to create discount curves/forward curves. At a very high level (...

Chapter 1: Goldilocks is ousted by the bears

Once upon a time, the banks used a fixing called LIBOR as a measure of the risk-free interest rate. Then the big hairy crisis came along and ate all our assumptions, leaving just the bones of the fixing (upon which everything else still fixes) and the mantle of risk-free rate proxy was passed on to a family of ...

It is incorrect to use 1m Euribor or O/N Euribor in a 6m Euribor forward curve. You should only use instruments based on 6M Euribor, such as 1x7 FRA, 6x12 FRA or swaps v 6m Euribor, as you have done in your second example. The actual 6m Euribor fixing itself can be thought of as a 0x6 FRA out of spot. Before the financial crisis, basis between different ...

Let's step back and look at the reason for making a DV01 calculation first before answering the question. The reason for making a DV01 calculation is to quantify which market movements have an impact on the valuation of the trade. Since the 'flat' forecast curve won't be affected by market movements, the answer is (using pre-2008 methodology): the floating ...

I don't think they are implying that future interest rates are predictable. They may be speaking of implied forward rates as predictors of future rates or, generally, of the yield curve as an expectation of the future path of short-term interest rates. If $P(0,T_1)=1/(1+r_1)$ and $P(0,T_2)=1/(1+r_2)$ are the prices today of two "risk-free" zero coupon bonds ... 
Which currency are you looking at? Say that your 1y swap would have yearly fixed payments vs 3M floating payments. Your 1.5y swap would probably have: a fixed payment 6m after the effective date and another fixed payment 18m after the effective date; regular quarterly floating payments. Your curve was built with 1y and 2y swaps, nothing in the middle? Then yes, ...

You have $\beta_1=\frac{1}{(1+r)^n}\frac{1}{(1+r)^\theta}$ and $\beta_2=\frac{1}{(1+r)^n}\frac{1}{(1+\theta r)}$. Both are equal when $\theta=1$. If you consider simple interest then go for $\beta_2$. If you would like compound interest within a fraction of a year then pick $\beta_1$. However, because $\theta$ is between 0 and 1, the values of the $\beta$'s won't ...

The short answer is that there's no consensus. A popular method is to shock each input instrument by 1 bp (i.e., change the futures rate by 1bp, the swap rates by 1bp, OIS rates by 1bp, etc.), rebuild the curve, and then reprice the instrument of interest to obtain its curve sensitivity. This of course is not quite a "parallel" shift of any curve (e.g., a ...

By no arbitrage, market participants need to agree on the values of the discount factor, even if they are using different conventions (day count, compounding period) to convert the discount factor into a rate. For example, consider two discount factors computed using continuous compounding, where one is computed using the 30/360 day count (year fraction $t_{...

If you are asking how CME collateral is discounted, then you have two considerations. What does the CME give you on your USD cash? That's simple, it's OIS. You don't get the interest immediately, but instead I think once a month. I'm not sure if they compound it - but I would imagine they do as it references OIS. What's your funding situation? For ... 
Generally speaking there are more inputs that are required to precisely specify the multicurve structure, and they are potentially more important. For example consider constructing a EUR interest rate curveset for 3 years, in the indexes EONIA, 3M EURIBOR, 6M EURIBOR. The information you have available is: some outright EONIA quotes in generic tenors; ...

It's difficult to see your screenshot. But I think you should just follow some real examples online instead of having people find out what's wrong on your side. This is an Excel example, go play with it: webuser.bus.umich.edu/Organizations/FinanceClub/resources/BootstrappingMath.xls

I don't recommend linear interpolation of DFs, and the swap rates you are applying this to are either against 12M libor, which is illiquid, or you are not accounting for Quarterly or Semi-Annual floating sides. And what I'm going to suggest uses a single curve framework which is long outdated. But that being said and given the nature of what's been asked ...

Suppose you wanted to value a 5Y EUR IRS with a USD cash collateralised curve; this is the broad process: Get the 5Y EUR 3M / OIS basis, say this is 10bps: this establishes the discounting basis in the local (EUR) currency. Now get the 5Y EUR/USD cross-currency basis, say this is EUR 3M-IBOR - 40bps: this establishes your link to dollars. Now get the 5Y ...

Update (2018-10-09): This solution is more correct. It's a class that solves for the DM using the class ForwardSpreadedTermStructure.

public class DMFinder : ISolver1d
{
    private readonly List<Cashflow> leg_;
    private readonly double dm_;
    private readonly DayCounter dayCounter_;
    private readonly Compounding compounding_;
    private ...

Your valuation date is $t=$ Thu 10-Nov-11. The swaps start on the spot date, which is $t + 2$ business days = Mon 14-Nov-11. The usual approach is to extrapolate between $t$ and the first curve pillar, in a manner consistent with the interpolation method that you are using for representing your discount curve. 
For instance if you use linear interpolation of ...

Agree with Helin. For short term risk management a trader would usually be looking at delta to the forecast curve (i.e. swaps curve and government curve for a swaps/options trader), although he/she would also have delta risk to the OIS curve and also to curves of other currencies now that multi-currency collateralisation is quite common. Those other deltas ...

You'll need to bootstrap a zero curve from your market data. This process is iterative in the sense that the implied zero rates for your short-term LIBOR rates are calculated before using those rates to bootstrap the zero rates implied by your FRAs. You will need to bootstrap for each time-point defined by your instruments. A good reference for you would ...

You should use whatever currency in which the debt is denominated. Specifically, since it is the EUR currency and interest rate risk associated with the debt, some sort of EUR curve should be used. Theoretically, if you are looking for the present value in USD, although the debt is denominated in EUR, you could convert future payments at the forward ...

Before the financial crisis, we used to assume that LIBOR is a risk-free rate and built swap curves in pretty much the same way your professor taught. Nowadays, OIS discounting is the norm (actually depends on the exact collateralization mechanism, but let's not go there...). Simply put, you need to have a 3-month LIBOR curve to project 3-month LIBOR ...

If you're bootstrapping and there are bonds maturing on the same date, you should use only one. A good rule is to discard the older issue and keep the more recently issued securities. If you're building a spline, then it really doesn't matter since you're building a best fit curve that best approximates the prices of all bonds. Assuming the quotes you ...

The formula seems to be correct. 
Negative interest rates are not impossible these days: http://www.bloombergview.com/quicktake/negative-interest-rates. Have you checked the algorithm with values that produce positive rates? And in what area do the negative ones lie? In the case of negative interest rates the discount factors should be greater than one, of ...

Once upon a time, there was the One Curve. It was made of various instruments (Depos, Fixings, Futures, Swaps) and represented the One True Discount Rate for any given term. With that curve, and an appropriate interpolation method, it made sense to talk about expensive days, the curve up to 3m, etc. But that world is long gone. When you create a 3m curve ...

It is not normal; for swaptions, your prices should be perfect or you open yourself up to arbitrage. Is Bloomberg calculating the swaptions with dual-curve stripping/bootstrapping? SWDF DFLT has a setting for that. If you are assuming in your model that your discount curve is the same as your forward rates curve, but Bloomberg is doing proper post-2006 OIS ...

Varies from market to market and from company to company... The methodology differs even for the US Treasury market (the largest and most liquid govt bond market). Generally speaking, the benchmark bonds (2y, 3y, 5y, 7y, 10y, and 30y on-the-runs) are traded very heavily and readily available. Their prices are driven by supply-demand. Non ...

How are the future interest rates determined? Two ways. 1) They are observed in the market, i.e. they are the best estimate of the market participants. One way is to use Bloomberg. 2) You can create your own discount curve and from that calculate the forward rates. Discount and forward curves for non-collateralized swaps must be consistent, otherwise ...
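Several of the answers above refer to bootstrapping discount factors from swap quotes. A minimal sketch of the (long outdated) single-curve version, under simplifying assumptions of my own (annual-pay par swaps, year fractions of exactly 1): each par rate $s_n$ pins down one new discount factor via $s_n \sum_{i\le n} df_i = 1 - df_n$.

```python
# Single-curve bootstrap from annual-pay par swap rates (illustrative only).
par = {1: 0.020, 2: 0.025, 3: 0.028}   # maturity (years) -> par swap rate

dfs = {}
annuity = 0.0                          # running sum of known discount factors
for n in sorted(par):
    s = par[n]
    dfs[n] = (1.0 - s * annuity) / (1.0 + s)
    annuity += dfs[n]

# sanity check: every input swap reprices to par on the bootstrapped curve
for n, s in par.items():
    fixed_leg = s * sum(dfs[i] for i in range(1, n + 1))
    float_leg = 1.0 - dfs[n]
    assert abs(fixed_leg - float_leg) < 1e-12
print(dfs[1])   # 1/1.02 ≈ 0.98039
```

The repricing check at the end is the defining property of a bootstrap: each input instrument is valued exactly on the curve built from it. A modern multicurve setup separates discounting (OIS) from forecasting (IBOR tenor curves), as the answers above stress.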
In Physics 7A, we tied together the idea of potential energy and force. We learned that the magnitude of the force was given by the derivative (slope) of the graph of \(PE\) vs. \(r\). Just as forces exist between two objects, the potential energy is always an energy between two objects.

Gravitational Potential

What we would like to do in this unit is to talk about energy (or something like it), but only for one object. We got around this same problem with forces by introducing a new concept – fields – which told us the force on a 1 kg mass (for example) a distance \(r\) away. What to do with potential energy is now obvious; we should invent a new concept like fields that tells us what the potential energy would be for a 1 kg object a distance \(r\) from the source. The name for this new concept is potential and it is represented by \(U\). The name is an unfortunate choice because while it is closely related to the potential energy, they are not the same thing. For gravity, the relationship between the two is \[PE_{\text{grav between obj 1 and obj 2}} = M_1U_2\] This equation reminds us that it does not matter which object is considered the “source”, although in cases where one object is much bigger than the other it is conventional to treat the larger object as the source. For a point or spherical mass, the equation for the potential is relatively straight-forward: \[U_{grav} = -\dfrac{GM}{r} + U_0\] where \(r\) is the distance from the center of the mass creating the field. Here \(U_0\) is some arbitrary value. In Physics 7A we learned that absolute potential energy could not be measured; it was the changes in potential energy that could be observed. Likewise, absolute potential cannot be measured either; only changes in potential are observable. If we add the same constant \(U_0\) to the potential at all locations, there is no experiment that can tell us what the value of \(U_0\) is. 
Furthermore, if we're only interested in the differences of potential energy at two different locations, then the value of \(U_0\) doesn't matter at all. Notice that far away from the mass (as \(r \rightarrow \infty\)) we have \(U_{grav} \approx U_0\). We will find it convenient to adopt the convention that the gravitational potential goes to zero a long way from the source. This corresponds to choosing \(U_0 = 0\), in which case the potential for our point or spherical mass is \[U_{grav} = - \dfrac{GM}{r}\] Notice that the potential is negative, and because mass is always positive this tells us that the gravitational potential energy is negative as well. Let us compare the gravitational potential energy (with zero at infinity) with the Lennard-Jones potential energy you looked at in 7A. There the \(PE\) went to zero as \(r\) became large (again, by convention), and because the potential energy was negative the total energy could be negative as well. If the total energy of a system was negative (the kinetic energy cannot be negative) this indicated the system was gravitationally bound.

Exercise

Starting with the equations presented above, can you derive the formula for the potential energy between two masses? Can you go from there to calculate the force between two objects? (See the diagrams in Relationships Between Concepts for help).

Exercise

Is the total mechanical energy of the Earth (i.e. \(KE + PE\)) positive, negative or zero? How can you tell? For this question, take \(PE = 0\) to be a very long way out of the solar system. (Hint: Remember that the Earth is orbiting the sun)

Equipotentials

An equipotential (i.e. “equal potential”) is a continuous curve along which every point is at the same potential. As a consequence, it takes no work to move along an equipotential; no forces pull or push in the direction of the equipotential. From this we can conclude that the force has no component along the direction of an equipotential. 
The most familiar example of equipotentials is height above the ground, as shown in the picture below. We know that the mass only gains or loses gravitational potential energy if the height changes, but moving it horizontally (i.e. along an equipotential) does not change the gravitational potential energy. Another example of equipotentials is the example of a topographical map from earlier. The contours show locations of constant height, and close to the Earth’s surface we have \[U = \dfrac{PE_{grav}}{m} = gh\] so lines of constant height \(h\) are also lines of constant \(U\). For the electric and the gravitational field, the force is always in the direction (or against the direction, for negative charges in an electric field) of the field lines. An equipotential cannot run with or against the field, as that would mean an object could gain or lose potential energy in the field while staying on the equipotential. This means all equipotentials are at 90° to the field lines, and any given equipotential only intersects a given field line once. If an equipotential intersected a field line twice, that would mean it was possible to move with (or against) the field and at the same time not change the potential energy of an object, which is impossible. We can use the fact that equipotentials and field lines are perpendicular to reconstruct one from the other. Let us take the Earth again, as it has been our example for all concepts in this chapter. We know that the equipotentials in this case (shown as dashed lines below) are all spheres because \(U\) only depends on \(r\); a sphere is a surface where each point has the same \(r\) value. 
Even if we did not know this, we would be able to reconstruct the equipotentials by drawing dashed lines perpendicular to the field lines as shown below: While every sphere we could draw this way is an equipotential, we choose to only draw selected equipotentials; we choose to draw equipotentials that are equally separated in potential, but not equally separated in space (\(r\) value). For example, the equipotentials shown in the above figure may be \(−6 \times 10^6 \text{ J/kg}\) for the one closest to the Earth, \(−5 \times 10^6 \text{ J/kg}\) for the middle equipotential and \(−4 \times 10^6 \text{ J/kg}\) for the outermost equipotential. We see that even though the equipotentials drawn change by \(1 \times 10^6 \text{ J/kg}\), they are not spaced evenly. As the equipotentials get further apart, we have to travel further with or against the field to get the same change in potential because the field gets weaker as \(r\) gets bigger. Finally, notice that if we only had the equipotentials (as on the diagram above on the right) we could completely reconstruct the field lines. Force and Equipotentials As we can see from the example of our topographical map, and our example above with Earth, if the equipotentials are close together, the field is stronger. In fact we can make this relationship precise: \[|\mathbf{g}| = \left| \dfrac{\text{d}U}{\text{d}r} \right| \approx \left| \dfrac{\Delta U}{\Delta r} \right|\] where \(\Delta U\) is the change in potential between two very close equipotentials, and \(\Delta r\) is the shortest distance between them. The direction of the field, \(\mathbf{g}\) in this case, points from high potential to low potential. Going back to the topographical map, contour lines with small distances \(\Delta r\) between them indicate steeper hills. Objects placed here would experience higher acceleration down the hill than objects placed elsewhere.
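The unequal spacing is easy to check numerically. A minimal sketch (assuming \(GM_{Earth} \approx 3.986 \times 10^{14}\ \mathrm{m^3/s^2}\) and the example potential values above; the variable names are illustrative):

```python
# Radii of the three example equipotentials around the Earth.
# Inverting U = -GM/r gives r = -GM/U: equal steps in U do NOT
# give equal steps in r.
GM = 3.986e14                        # GM of the Earth, m^3/s^2 (assumed value)

potentials = [-6e6, -5e6, -4e6]      # J/kg, equally spaced by 1e6 J/kg
radii = [-GM / U for U in potentials]

# Gaps between successive equipotentials grow as we move outward:
gaps = [r_out - r_in for r_in, r_out in zip(radii, radii[1:])]
assert gaps[1] > gaps[0]

# And |g| ~ |dU/dr| roughly matches the exact GM/r^2 between the inner shells:
g_est = 1e6 / gaps[0]
g_exact = GM / radii[0] ** 2
assert abs(g_est - g_exact) / g_exact < 0.3   # crude finite-difference estimate
```

The first assertion is exactly the statement in the text: for equal steps of \(1 \times 10^6 \text{ J/kg}\), the outer gap is larger than the inner one.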
The contour map is not an explanation of why the acceleration of an object would be greater (for that you should go back to force diagrams of a ball on a hill), but it is a convenient way of mapping the acceleration that a ball would feel. This is analogous to using gravitational equipotentials to display gravitational potential; they are a convenient description, but they do not explain why the field is the way it is. (While the hill serves as a good analogy, it is important to note that we are looking at the combined effect of gravity and the ground when discussing the acceleration of a ball. The gravitational field \(\mathbf{g}\) does not change significantly on a hill!) Example 2 Two equipotentials close to the surface of the Earth have a potential difference of 1 J/kg. How far apart are they? Solution We begin by taking \(\mathbf{g}_{Earth} = 10 \text{ m/s}^2\). We are interested in making steps of \(\Delta U = 1 \text{ J/kg}\) between neighboring equipotentials. This tells us that the equipotentials are separated by \[\Delta r = \dfrac{|\Delta U|}{|\mathbf{g}_{Earth}|} = \dfrac{1 \text{ J/kg}}{10 \text{ m/s}^2} = 0.1 \text{ m}\] When we say the separation of the equipotentials is 10 cm, we mean given one equipotential, the one with the next lowest potential value is 10 cm closer to the ground. The fact that the distance traveled between equipotentials can get longer (for example, 50 cm as shown in the diagram) is completely irrelevant. The direction of the field is perpendicular to the equipotentials, going from a high equipotential to a low equipotential. In this case, the equipotentials are closest vertically and the potential decreases in height, leading to the (already known) conclusion that gravity points down. Example 3 The 1 J/kg equipotentials at the surface of Pluto are separated by 1.6 meters. Is the gravitational field on the surface of Pluto stronger or weaker than the gravitational field at the surface of the Earth?
Solution The equipotentials are spaced further apart (larger \(\Delta r\)) for the same \(\Delta U\). Therefore the gravitational field at the surface of Pluto is weaker than the gravitational field at the surface of the Earth. Notice that this does not explain why Pluto has a smaller gravitational field than Earth. To figure that out we would look at Pluto’s mass and size compared to Earth. But if someone has already calculated the field or the equipotentials for us, we can still use that information to answer useful questions. Potentials Do Not Always Exist As we learned in Physics 7A, the work done moving an object around is not a state function. This meant that the amount of work it took to move an object from one location to another could depend on more than the initial and final points; it could also depend on how you went from the initial to final point! If the amount of potential energy an electric charge loses while traveling through a field depends on the path it takes, then it would seem that the change in potential would depend on the path taken as well. But this makes no sense: the potential is defined at a particular point without a reference to a path. To calculate the change in potential we simply take the difference of the potential at the two end points! Therefore, if the change in potential energy of an object depends on the path taken, then the potential does not exist! (There are other things that can happen that can prevent a potential from existing as well). Let us show an example where it is impossible to construct a potential by showing it is impossible to construct an equipotential. Consider the vector map shown below: This is not completely artificial; as we learned in 7B the water velocity in a real pipe is like this. The friction on the sides reduces the velocity to zero at the edges, and the velocity is highest in the center.
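Both worked examples reduce to the single relation \(|\mathbf{g}| \approx |\Delta U / \Delta r|\); a minimal sketch using the numbers from Examples 2 and 3:

```python
def field_strength(delta_U, delta_r):
    """Magnitude of the field between two close equipotentials: |g| ~ |dU/dr|."""
    return abs(delta_U / delta_r)

g_earth = field_strength(1.0, 0.1)   # Example 2: 1 J/kg steps, 0.1 m apart
g_pluto = field_strength(1.0, 1.6)   # Example 3: 1 J/kg steps, 1.6 m apart

assert g_earth == 10.0               # m/s^2, as assumed in Example 2
assert abs(g_pluto - 0.625) < 1e-12  # m/s^2
assert g_pluto < g_earth             # wider spacing means a weaker field
```

The comparison in the last line is the whole content of Example 3: the spacing alone tells us Pluto's surface field is weaker, without explaining why.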
Our first attempt at constructing equipotentials will use the rule that the equipotentials are always perpendicular to the field lines. Because all the field lines are horizontal, our equipotentials will be vertical, as shown below. But there is a problem with this; as the field gets weaker (toward the edges) the equipotentials should be getting further apart. But the equipotentials cannot stay perpendicular to the field and get further apart: the first condition requires them to be vertical, the second requires them to bend. In this simple example, there are no equipotentials! This tells us that the potential does not exist either, and with a little bit of work you could show that the work required to move a charge through an electric field like this would depend on the path taken. The fact that we have two requirements that are not both automatically satisfied tells us that equipotentials only exist in very special circumstances. The cases that we are concerned with where potentials and equipotentials don’t exist are: (1) a changing electric field (this is actually essential to induction), because a changing electric field creates closed field-line loops, as pointed out later; and (2) the magnetic field, which never has a potential, as the magnetic field cannot do work. We will learn that the magnetic force is always perpendicular to the direction a charge travels, so \(W = |\mathbf{F}| \Delta x\) must be zero.
For example how come $\zeta(2)=\sum_{n=1}^{\infty}n^{-2}=\frac{\pi^2}{6}$? It seems counter-intuitive that you can add numbers in $\mathbb{Q}$ and get an irrational number. But for example $$\pi=3+0.1+0.04+0.001+0.0005+0.00009+0.000002+\cdots$$ and that surely does not seem strange to you... You can't add an infinite number of rational numbers. What you can do, though, is find a limit of a sequence of partial sums. So, $\pi^2/6$ is the limit to infinity of the sequence $1, 1 + 1/4, 1 + 1/4 + 1/9, 1 + 1/4 + 1/9 + 1/16, \ldots $. Writing it so that it looks like a sum is really just a shorthand. In other words, $\sum^\infty_{i=1} \cdots$ is actually kind of an abbreviation for $\lim_{n\to\infty} \sum^n_{i=1} \cdots$. Others have demonstrated some examples that make clear why this can happen, but I wanted to point out the key mathematical concept here is "completeness" of the metric space. A metric space is any set with a "distance" defined between any two elements (in the case of $\mathbb{Q}$, we would say $d(x,y) = |x-y|$). A sequence $x_i$ is "Cauchy" if late elements stop moving around very much, a necessary condition for a sequence to have a finite limit. To put it formally, $\{x_i\}$ is Cauchy if for every $\epsilon>0$ there is a sufficiently large $N$ so that for every $m,n>N$ we have $d(x_n,x_m)<\epsilon$. A metric space is complete if all Cauchy sequences have a limit in the space. The canonical complete metric space is $\mathbb R$, which is in fact the completion of $\mathbb{Q}$, i.e. the smallest complete set containing $\mathbb Q$.
We think of an infinite sum as the limit of a sequence of partial sums: $$\sum_{n=1}^\infty x_n = \lim_{N\to\infty}\left( \sum_{n=1}^N x_n \right)$$ As others have pointed out with a number of good counter-examples (my favorite of which is the decimal representation of an irrational number), $\mathbb{Q}$ is not complete, therefore an infinite sum of elements of $\mathbb Q$, for which partial sums are necessarily elements of $\mathbb Q$, can converge to a value not in $\mathbb Q$. It is counter-intuitive only if you are adding a "finite" number of rational numbers. Otherwise, as @Mariano implied, any irrational number consists of an infinite number of digits, and thus can be represented as a sum of rational numbers. Besides that: given a sequence of positive rational numbers such that their sum converges and such that $a_n > \sum_{k = n + 1}^\infty a_k$, choosing a $\pm$ sign for each term of the sequence gives a new convergent series, each choice of signs converging to a different number. By a countable-uncountable argument you get uncountably many examples of that kind of series :). For example look at $$e = \sum_{k=0}^{\infty}{\frac{1}{k!}} $$ This has to do with the rate at which the sum/series converges to its limit and the Thue-Siegel-Roth theorem, which allows you to use the rate of convergence to decide if the limit is rational or not. Maybe Emile (the OP) meant to ask something of this sort (please let me know, Emile): why do some (convergent, of course) infinite sums of rationals turn out rational and others irrational?
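The partial-sum view is easy to check directly with exact rational arithmetic; a small sketch (illustrative bounds, not a proof of irrationality):

```python
from fractions import Fraction
from math import e, pi

# Partial sums of sum 1/n^2 are exact rationals, yet they approach pi^2/6.
s = Fraction(0)
for n in range(1, 201):
    s += Fraction(1, n * n)
assert s.denominator > 1                      # every partial sum is rational
assert abs(float(s) - pi ** 2 / 6) < 1e-2     # ...but the limit is irrational

# Same phenomenon for e = sum 1/k!: rational partial sums, irrational limit.
t, fact = Fraction(0), 1
for k in range(20):
    if k > 0:
        fact *= k
    t += Fraction(1, fact)
assert abs(float(t) - e) < 1e-15              # 20 terms already agree to ~1e-16
```

Every object the loop ever produces lives in $\mathbb{Q}$; only the limit escapes it, which is precisely the completeness point above.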
Let $\{X_n\}_{n=0,1,\ldots}$ be a DTMC with state space $S = \mathbb{Z}$ and one-step transition matrix given by: $P_{i,i-1} = \frac{1}{2i}, P_{i,i+1} = \frac{1}{2(i + 2)}, P_{i,i} = 1- P_{i,i-1} - P_{i,i+1}$ for all $i \ge 1$ and $P_{i,i+1} = \frac{1}{2|i|}, P_{i,i-1} = \frac{1}{2(|i| + 2)}, P_{i,i} = 1- P_{i,i-1} - P_{i,i+1}$ for all $i \le -1$ and $P_{0,1} = P_{0,-1} = \frac{1}{4}, P_{0,0} = \frac{1}{2}$ Is this chain positive recurrent, null recurrent or transient? My trial: It is easy to observe that this chain is irreducible, so we just need to classify the state $0$, and the intuition is: as the chain moves far away from the origin (say it is in state $i$), the probability that it stays in state $i$ gets higher and higher as $|i|$ increases. As $|i| \to \infty$, it seems to become harder and harder for the chain to leave state $i$ given that it starts there, so I guess this chain is positive recurrent, but how do I make the proof rigorous? Thank you for your help! Edit: Thanks to @Math1000's help, I proved that this chain cannot be positive recurrent, but how can I show this chain is not transient?
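A simulation cannot settle recurrence rigorously, but a quick sketch of the dynamics supports the intuition; the helper below is illustrative and just implements the transition probabilities stated in the question:

```python
import random

def step(i):
    """One transition of the chain from state i, per the matrix above."""
    u = random.random()
    if i >= 1:
        down, up = 1 / (2 * i), 1 / (2 * (i + 2))          # P_{i,i-1}, P_{i,i+1}
    elif i <= -1:
        down, up = 1 / (2 * (abs(i) + 2)), 1 / (2 * abs(i))
    else:
        down, up = 0.25, 0.25                              # state 0
    if u < down:
        return i - 1
    if u < down + up:
        return i + 1
    return i                                               # lazy: stay put

random.seed(0)
i, visits_to_zero = 0, 0
for _ in range(200_000):
    i = step(i)
    visits_to_zero += (i == 0)

# The chain keeps coming back to 0 in this (non-rigorous) experiment.
assert visits_to_zero > 0
```

Note the drift: for $i \ge 1$ we have $P_{i,i-1} = \frac{1}{2i} > \frac{1}{2(i+2)} = P_{i,i+1}$, so the jump chain is biased back toward the origin, which is what the experiment reflects.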
Let $f(t,x,y)$ be the flow given by the system $$\dot{x}=y\qquad\dot{y}=x-x^2$$ and $O(x,y)$ the orbit starting at initial condition $(x,y)$. Let $P$ be the set of initial conditions $(x,y)$ such that $O(x,y)$ is periodic. Let $A_+$ be the set of initial conditions $(x,y)$ such that the limit of $f(t,x,y)$ as $t\rightarrow \infty$ exists (i.e. along $O_+(x,y)$). Let $A_-$ be the set of initial conditions $(x,y)$ such that the limit as $t\rightarrow -\infty$ exists (along $O_-(x,y)$). Let $A$ be the set of initial conditions $(x,y)$ such that both limits, as $t\rightarrow \infty$ and as $t\rightarrow -\infty$, exist. How to find $P$, $A_+$, $A_-$ and $A$? What I thought: The orbit $O(x,y)$ is given by $\{f(t,x,y):t\in\mathbb{R}\}$, where $O_+$ means that we restrict to $t\geq 0$ and $O_-$ to $t\leq0$. I know the Hamiltonian and the Jacobian of the system, but I just cannot see what my next step should be. Could someone point me in the right direction? Here is the phase portrait:
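For reference, the system is Hamiltonian with $H(x,y)=\frac{y^2}{2}-\frac{x^2}{2}+\frac{x^3}{3}$ (check: $\dot x = \partial H/\partial y$, $\dot y = -\partial H/\partial x$), so every orbit lies on a level set of $H$; that is the key to reading off $P$, $A_\pm$ and $A$ from the phase portrait. A quick numerical sketch (a standard RK4 integrator with illustrative step sizes) confirming the conservation:

```python
# Conserved energy of the system x' = y, y' = x - x^2.
def H(x, y):
    return y * y / 2 - x * x / 2 + x ** 3 / 3

def rhs(x, y):
    return y, x - x * x

def rk4_step(x, y, dt):
    """One classical 4th-order Runge-Kutta step."""
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + dt / 2 * k1x, y + dt / 2 * k1y)
    k3x, k3y = rhs(x + dt / 2 * k2x, y + dt / 2 * k2y)
    k4x, k4y = rhs(x + dt * k3x, y + dt * k3y)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

x, y = 0.5, 0.0            # an initial condition with H < 0 = H(0, 0)
h0 = H(x, y)
for _ in range(10_000):    # integrate to t = 100
    x, y = rk4_step(x, y, 0.01)

# H stays (numerically) constant along the orbit.
assert abs(H(x, y) - h0) < 1e-4
```

Since orbits are confined to level sets of $H$, the closed level curves give the periodic orbits and the level set through the saddle at the origin separates the qualitatively different behaviors.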
Conchoid

MSC 53A04

Latest revision as of 00:10, 13 December 2015

of a curve

The planar curve obtained by increasing or decreasing the position vector of each point of a given planar curve by a segment of constant length $l$. If the equation of the given curve is $\rho=f(\phi)$ in polar coordinates, then the equation of its conchoid has the form: $\rho=f(\phi)\pm l$. Examples: the conchoid of a straight line is called the Nicomedes conchoid; the conchoid of a circle is called the Pascal limaçon.

References

[a1] J.D. Lawrence, "A catalog of special plane curves", Dover, reprint (1972) Zbl 0257.50002

How to Cite This Entry: Conchoid. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Conchoid&oldid=18606
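The defining equation $\rho=f(\phi)\pm l$ is easy to evaluate directly. A minimal sketch for the conchoid of a straight line (the Nicomedes conchoid), assuming the line lies at distance $a$ from the pole so that $f(\phi) = a/\cos\phi$:

```python
from math import cos, sin, pi

def conchoid_of_line(a, l, phi):
    """Point (x, y) of the conchoid rho = a/cos(phi) + l of the line x = a."""
    rho = a / cos(phi) + l
    return rho * cos(phi), rho * sin(phi)

# On the axis (phi = 0) the base line is at distance a = 1 from the pole;
# the conchoid point sits l = 0.5 further out.
x, y = conchoid_of_line(a=1.0, l=0.5, phi=0.0)
assert abs(x - 1.5) < 1e-12 and abs(y) < 1e-12

# Off-axis, the conchoid point is still exactly l beyond the line along the ray:
phi = pi / 6
rho_line = 1.0 / cos(phi)
x_line, y_line = rho_line * cos(phi), rho_line * sin(phi)
x_c, y_c = conchoid_of_line(1.0, 0.5, phi)
dist = ((x_c - x_line) ** 2 + (y_c - y_line) ** 2) ** 0.5
assert abs(dist - 0.5) < 1e-9
```

The second check is just the definition restated: along every ray from the pole, the conchoid point is a fixed distance $l$ beyond the point of the base curve.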
What is the intuition behind short exact sequences of groups; in particular, what is the intuition behind group extensions? I'm sorry that the definitions below are a bit haphazard but they're how I learnt about them, chronologically. In Johnson's "Presentation$\color{red}{s}$ of Groups," page 100, there is the following . . . Definition 1: A diagram in a category $\mathfrak{C}$, which consists of objects $\{A_n\mid n\in\Bbb Z\}$ and morphisms $$\partial_n: A_n\to A_{n+1}, n\in \Bbb Z,\tag{6}$$ is called a sequence in $\mathfrak{C}$. Such a sequence is called exact if $$\operatorname{Im}\partial_n=\ker \partial_{n+1},\,\text{ for all }n\in \Bbb Z$$ [. . .] A short exact sequence in the category $\mathfrak{C}_{\Bbb R}$ of right $\Bbb R$-modules is an exact sequence of the form $(6)$ with all but three consecutive terms equal to zero. [. . .] Also, ibid., page 101, is this: It is fairly obvious that a sequence $$0\longrightarrow A\stackrel{\theta}{\longrightarrow}B\stackrel{\phi}{\longrightarrow}C\longrightarrow 0$$ is a short exact sequence if and only if the following conditions hold: $\theta$ is one-to-one, $\phi$ is onto, $\theta\phi=0$, $\ker \phi\le\operatorname{Im}\theta$. I'm reading Baumslag's "Topics in Combinatorial Group Theory". Section III.2 on semidirect products starts with Let $$1\longrightarrow A\stackrel{\alpha}{\longrightarrow}E\stackrel{\beta}{\longrightarrow}Q\longrightarrow 1$$ be a short exact sequence of groups. We term $E$ an extension of $A$ by $Q$. Thoughts: I'm aware that semidirect products can be seen as short exact sequences but this is not something I understand yet. My view of semidirect products is as if they are defined by a particular presentation and my go-to examples are the dihedral groups. Please help :)
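Since the dihedral groups are the go-to examples here: $D_3 \cong S_3$ sits in the short exact sequence $1\to C_3\to S_3\to C_2\to 1$, and exactness at the middle term can even be checked mechanically. An illustrative sketch (my own modelling, with $\alpha$ the inclusion of the rotation subgroup and $\beta$ the sign map):

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations of (0, 1, 2): (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def sign(p):
    """Parity of a permutation: +1 if even, -1 if odd."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

S3 = list(permutations(range(3)))          # the extension E = S3 (= D3)

r = (1, 2, 0)                              # a 3-cycle generating C3
image_alpha = {(0, 1, 2), r, compose(r, r)}    # alpha(C3): the rotations
kernel_beta = {p for p in S3 if sign(p) == 1}  # beta = sign, ker = even perms

assert image_alpha == kernel_beta          # exactness: Im(alpha) = ker(beta)
assert len(S3) == len(image_alpha) * 2     # |E| = |A| * |Q|
```

The two assertions are exactly the "short exact" conditions at the middle term: what flows in from $A$ is precisely what dies going out to $Q$, so $Q \cong E/\alpha(A)$.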
Advance warning: This is a post about a standard foundational construction in mathematics – constructing the integers from the natural numbers. It’s partly a prelude to another post, and I plan to do the details in slightly more generality than is normally done, but if you’re interested in this subject it’s probably nothing new and if it’s something new you’re probably not interested in this subject. Also a lot of this is going to be spelled out in way more detail than you’re likely to care about. Second warning: I’m pretty sure my lecturer when he did this proof for me back more than a decade ago in my first year of my maths degree added the caveat “And now you should forget about this proof and never do it again”. So I’m going against the health advisory of a trained mathematician. Some maths contained herein may be hazardous to your health. This post is about the following theorem: Let G be a set with operations + and * such that they obey all the axioms of a ring except for the existence of additive inverses, but have the additional property that + cancels in the sense that if x + z = y + z then x = y. There is a ring R and an isomorphic embedding \(f : G \to R\). Further, there is a unique (up to isomorphism) minimal such R. First, let’s prove uniqueness, as it will motivate the construction. Lemma: Suppose R is minimal. Then every element of R can be written as \(f(x) – f(y)\) for \(x, y \in G\). Proof: We need only show that the set of such elements forms a ring, then the result will follow by minimality. It suffices to show that it’s closed under the ring operations, but this is just simple algebra: \[-(f(x) – f(y)) = f(y) – f(x)\] \[(f(x) – f(y)) + (f(u) – f(v)) = f(x + u) – f(y + v)\] \[(f(x) – f(y)) * (f(u) – f(v)) = f(x * u + y * v) – f(x * v + y * u)\] So this set is a subring of R containing f(G), so it must be R due to minimality. QED As an aside, note that it is not necessarily the case that every element of R is f(x) or -f(x) for some \(x \in G\).
Consider for example \(G = \{ x \in \mathbb{N} : x > 1 \}\). Now suppose we have another minimal isomorphic embedding \(f’ : G \to R’\). We want to construct an isomorphism \(h : R \to R’\) such that \(h \cdot f = f’\). This will prove uniqueness. We construct h as follows: Let \(r = f(x) – f(y)\). Define \(h(r) = f'(x) – f'(y)\). First we must show this is well defined. Specifically we will show that if \(f(x) – f(y) = f(u) – f(v)\) then \(f'(x) – f'(y) = f'(u) – f'(v)\). This turns out to just be basic manipulation of the fact that \(f\) and \(f’\) are both isomorphic embeddings: \[ \begin{aligned} f(x) – f(y) &= f(u) – f(v) \\ f(x) + f(v) &= f(u) + f(y) \\ f(x + v) &= f(u + y) \\ x + v &= u + y \\ f'(x + v) &= f'(u + y) \\ f'(x) + f'(v) &= f'(u) + f'(y) \\ f'(x) – f'(y) &= f'(u) – f'(v) \\ \end{aligned} \] So as a result, \(h\) is well defined. Additionally, you can construct \(h’ : R’ \to R\) in exactly the same manner, and it’s clear by definition that \(h\) and \(h’\) are inverses, so \(h\) is a bijection. If we pick \(b \in G\) then \[ \begin{aligned} h(f(x)) &= h(f(x + b) – f(b)) \\ &= f'(x + b) – f'(b) \\ &= f'(x) \\ \end{aligned} \] So we’ve proven that \(h(f(x)) = f'(x)\) and thus \(h \cdot f = f’\). We still need to show that \(h\) is an isomorphism, but this follows simply from our formulae for \(+\) and \(*\) on things of the form \(f(x) – f(y)\): Each operation you can convert into operations in the original set, pass to \(f’\) and then convert back. QED This uniqueness proof nicely informs the construction too. Every element in our target ring is going to be the difference of two elements in our set. Therefore we construct the target ring as the set of differences. But of course two differences might result in the same element, so we define an equivalence relation for when this should happen and quotient out by that. So we will now construct our ring: Let \(R = G^2 / \sim\), where \((x,y) \sim (u, v)\) if \(x + v = y + u\) (i.e. 
if \(x – y = u – v\)). First we must show this is an equivalence relation. Reflexivity is obvious, symmetry follows from commutativity of addition. We only need to show transitivity. We’ll need some notation: If \(a = (x, y)\), write \(x = a^+\) and \(y = a^-\). (This will give us slightly fewer variables to keep track of). Suppose \(a \sim b\) and \(b \sim c\). Then \[\begin{aligned} a^+ + b^- &= a^- + b^+\\ b^+ + c^- &= b^- + c^+\\ a^+ + b^- + b^+ + c^- &= a^- + b^+ + b^- + c^+\\ a^+ + c^- + (b^- + b^+)&= a^- + c^+ + (b^- + b^+)\\ a^+ + c^- &= a^- + c^+ \\ a &\sim c \\ \end{aligned}\] (note we had to use the cancellation property for + here) Fix \(b \in G\) and define \(f : G \to R\) as \(f(x) = [(x + b, b)] \). (We’re not guaranteed a 0 element in G, which is why we can’t just map it to \([(x, 0)]\)). We will now prove that \(R\) is a ring and \(f\) an isomorphic embedding into it. First note that f does not depend on the choice of b. Proof: \((x + b, b) \sim (x + b’, b’)\) because \(x + b + b’ = x + b’ + b\). Now note that f is injective: This is because of the cancellation property we required on addition: If \((x + b, b) \sim (y + b, b)\) then \(x + 2b = y + 2b\) and so by cancellation \(x = y\). In order to show that it’s an isomorphic embedding we first need to construct a ring structure on R. We’ll use our formulae from above. Define: \[(x, y) + (u, v) = (x + u, y + v)\] \[(x,y) * (u, v) = (x * u + y * v, x * v + y * u)\] We first need to show that these are compatible with the equivalence relation \(\sim\). Now suppose \(a,b,c,d \in G^2\) with \(a \sim c\) and \(b \sim d\). We first want to show that \(a + b \sim c + d\). i.e. that \((a + b)^+ + (c + d)^- = (a + b)^- + (c + d)^+\).
\[\begin{aligned} (a + b)^+ + (c + d)^- &= a^+ + b^+ + c^- + d^-\\ &= (a^+ + c^-) + (b^+ + d^-)\\ &= (a^- + c^+) + (b^- + d^+)\\ &= (c^+ + d^+) + (a^- + b^-) \\ &= (c + d)^+ + (a + b)^- \\ &= (a + b)^- + (c + d)^+ \\ \end{aligned}\] So \(+\) is compatible with the equivalence relation, and thus can be inherited by \(R\). We now have to do the same with \(*\). To simplify the calculation we’ll show that \(a * b \sim a * d\). The proof that \(a * d \sim c * d\) will be identical, and the whole result will follow from transitivity. So \[\begin{aligned} (a * b)^+ + (a * d)^- &= a^+ * b^+ + a^- * b^- + a^+ * d^- + a^- * d^+ \\ &= a^+ * ( b^+ + d^-) + a^- * (b^- + d^+) \\ &= a^+ * ( b^- + d^+) + a^- * (b^+ + d^-) \\ &= a^+ * b^- + a^- * b^+ + a^+ * d^+ + a^- * d^- \\ &= (a * b)^- + (a * d)^+ \\ a * b &\sim a * d \\ \end{aligned}\] So the operations are well defined on the equivalence classes. Now all we have to do is show that they satisfy the ring operations. Ideally without losing the will to live. It’s obvious by construction that + is commutative and associative (because it is on \(G\)). The equivalence class \(0 = [(x, x)]\) is a unit for + (it should be evident that every choice of x produces an equivalent element because \(x + y = x + y\)). Proof: Let \(a \in G^2\) and \(b = (x, x)\). \[\begin{aligned} (a + b)^+ + a^- &= a^+ + b^+ + a^- \\ &= a^+ + b^- + a^- \\ &= a^+ + (a + b)^- \\ a + b &\sim a \\ \end{aligned}\] Negations exist because \(a + (a^-, a^+) = (a^+ + a^-, a^+ + a^-) \sim 0\). So now we just have to prove that * is associative and distributes over +. 
Associative: \[\begin{aligned} ((a * b) * c)^+ &= (a * b)^+ * c^+ + (a * b)^- * c^- \\ &= a^+ * b^+ * c^+ + a^- * b^- * c^+ + a^- * b^+ * c^- + a^+ * b^- * c^-\\ (a * (b * c))^+ &= a^+ * (b * c)^+ + a^- * (b * c)^- \\ &= a^+ * b^+ * c^+ + a^+ * b^- * c^- + a^- * b^+ * c^- + a^- * b^- * c^+ \\ &= ((a * b) * c)^+ \\ \end{aligned}\] At this point I’m prepared to accept on faith that the negative half works out the same way. Are you? Good. Distributive (just going to show left distributive. Right should follow similarly): \[\begin{aligned} (a * (b + c))^+ &= a^+ * (b + c)^+ + a^- * (b + c)^- \\ &= a^+ * b^+ + a^+ * c^+ + a^- * b^- + a^- * c^- \\ &= (a^+ * b^+ + a^- * b^-) + (a^+ * c^+ + a^- * c^-) \\ &= (a * b)^+ + (a * c)^+ \\ (a * (b + c))^- &= a^+ * (b + c)^- + a^- * (b + c)^+ \\ &= a^+ * b^- + a^+ * c^- + a^- * b^+ + a^- * c^+ \\ &= (a * b)^- + (a * c)^- \\ \end{aligned}\] Which means we’re done. Phew. That was a lot of work. Unfortunately, now on to the next lot of work! Several things more to prove: Theorem: If G has a multiplicative unit, which we’ll call 1, then \(f(1)\) is a multiplicative unit for \(R\). Proof: \[\begin{aligned} (x, y) * (1 + b, b) &= (x + x * b + y * b, x * b + y + y * b) \\ &= (x + (x * b + y * b), y + (x * b + y * b))\\ &\sim (x, y) \\ \end{aligned}\] (using the fact that \((x + c, y + c) \sim (x, y)\)). Theorem: If \(*\) is commutative on \(G\) then it is commutative on \(R\). Proof: Follows almost directly from how \(*\) on \(R\) is defined in terms of \(*\) on \(G\). Ok. Now we’re done. We apply this construction to \(\mathbb{N}\), and we get \(\mathbb{Z}\): the minimal commutative ring with a 1 that \(\mathbb{N}\) embeds isomorphically into. Time to relax.
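As a sanity check, the whole construction can be mirrored in a short illustrative program (the class and helper names are mine, not part of the proof); each equivalence class is represented by any pair \((x, y)\) standing for \(x - y\), and equality implements the relation \(\sim\):

```python
class Diff:
    """An element of R = G^2 / ~, stored as an (un-normalised) pair
    (plus, minus) of naturals standing for the difference plus - minus."""

    def __init__(self, plus, minus):
        self.plus, self.minus = plus, minus

    def __eq__(self, other):
        # (x, y) ~ (u, v)  iff  x + v = y + u  (no subtraction needed)
        return self.plus + other.minus == self.minus + other.plus

    def __add__(self, other):
        return Diff(self.plus + other.plus, self.minus + other.minus)

    def __neg__(self):
        return Diff(self.minus, self.plus)

    def __mul__(self, other):
        return Diff(self.plus * other.plus + self.minus * other.minus,
                    self.plus * other.minus + self.minus * other.plus)

def f(x, b=1):
    """The embedding f(x) = [(x + b, b)] of G into R."""
    return Diff(x + b, b)

# f preserves + and *, and additive inverses now exist:
assert f(2) + f(3) == f(5)
assert f(2) * f(3) == f(6)
assert f(7) + (-f(7)) == Diff(4, 4)   # any pair (x, x) is the zero class
```

Note that `__eq__`, `__add__` and `__mul__` are word-for-word the formulae from the proofs above, including the fact that nothing ever subtracts: only the cancellation-friendly additive form of \(\sim\) is used.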
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ $n_{eq}$/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text

2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453

2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.)
; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron-doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperature can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576

2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363

2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.)
The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity (Super-LHC and Very-LHC, respectively), as well as the requirements for detectors under possible radiation-environment scenarios, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, degrades the signal/noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE

2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976

2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst.
Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365

2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors' radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
I am going over a proof in the Fundamental Theorem of Galois Theory and I need a little clarification. I hope you can help. Let $L/K$ be a finite extension with Galois group $G$ and let $M$ be an intermediate field. Denote by $M^*$ the group of all $M$-automorphisms of $L$. Then a part of the Fundamental Theorem of Galois Theory states: If an intermediate field $M$ is a normal extension of $K$, then the Galois group of $M/K$ is isomorphic to the quotient group $G/M^*$. Now the proof is rather simple: define the map $\phi: G \to G'$ by $\phi(\tau)=\tau \mid_M$, where $\tau \in G$ and $G'$ is the Galois group of $M/K$. Then $\phi$ is a surjective group homomorphism. The decisive point in the proof is the claim that the kernel of $\phi$ is $M^*$, i.e., the kernel of $\phi$ is the group of all $M$-automorphisms of $L$. Why is this so? It seems to me that if $\tau \in M^*$, then $\phi(\tau)=\tau \mid_M$, and $\tau \mid_M$ is just the identity on $M$ (since it is an $M$-automorphism)? If I can show the kernel to be $M^*$, then $G' \cong G/\ker(\phi) = G/M^*$ and that would be the end of it.
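For what it is worth, a small worked example (my own choice of fields, not part of the original statement) makes the kernel claim concrete. Take $K=\mathbb{Q}$, $L=\mathbb{Q}(\sqrt{2},\sqrt{3})$ and $M=\mathbb{Q}(\sqrt{2})$:

```latex
G = \operatorname{Gal}(L/K) = \{\operatorname{id},\ \sigma,\ \tau,\ \sigma\tau\},
\qquad \sigma(\sqrt{2}) = -\sqrt{2},\quad \tau(\sqrt{3}) = -\sqrt{3}.
% M^* consists of the automorphisms of L fixing M = \mathbb{Q}(\sqrt{2}) pointwise:
M^* = \{\operatorname{id},\ \tau\}.
% Restriction to M kills exactly M^*:
\ker\phi = \{g \in G : g\mid_M = \operatorname{id}_M\} = M^*,
\qquad G/M^* \cong \{\operatorname{id}_M,\ \sigma\mid_M\} = \operatorname{Gal}(M/K).
```

So an element of $G$ lies in $\ker\phi$ precisely when it restricts to the identity on $M$, which is exactly the definition of an $M$-automorphism — the same intuition stated in the question.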
Periodic or oscillatory motion is common throughout the universe, from the smallest to the largest distance and time scales. This kind of motion is related to the forces acting between objects by Newton’s 2nd law, just as all motions are. To have oscillatory motion, there must be a restoring force that acts in a direction to cause an object to return to its equilibrium position. A particularly common type of oscillatory motion results when the magnitude of the restoring force is directly proportional to the displacement of the object from equilibrium: \[ F_{restoring} = -kx \label{Eq1}\] This is the force law characteristic of a spring that is stretched or compressed from its equilibrium position. The oscillatory motion that results from this force law is known as simple harmonic motion (SHM). Even when the force law is not as simple as Equation \(\ref{Eq1}\) for arbitrary values of \(x\), it turns out that for an object that oscillates about an equilibrium position, this linear law provides an accurate description for small oscillations. Thus, we can make a very strong statement: essentially every system that vibrates does so in SHM for small amplitudes of vibration. Because it is so common, it is worth spending some effort understanding SHM and the different ways to represent it. Another reason for focusing on SHM is that periodic wave motion is the interconnected vibrations of many, many oscillators, each vibrating in SHM. Simple Harmonic Motion Our approach is to use the tools we have at our disposal, namely, Newton’s 2nd law, to analyze the motion of several different physical systems that exhibit oscillatory motion. We will look for common features of the motion and its description. Then, we will generalize the description and representation.
In this process we will develop explicit mathematical expressions to represent the motion, and we will see how properties of the motion, such as the period of oscillation, are related to the physical parameters of the particular phenomena. In this process we will revisit the energies involved in oscillating systems and gain a deeper understanding of the energy relationships. First System: Mass on a Spring We consider a mass hanging on a spring. There are two forces acting on the mass: the pull upward of the spring and the gravity force of the Earth pulling down. We saw previously that if we take x to be the distance from the equilibrium position of the mass as it hangs motionless on the spring, then the net force has the form \[\sum F = -kx\] Now we apply Newton’s 2nd law: net force equals mass times acceleration, \[-kx = ma\] or \[-kx= m \dfrac{d^2x}{dt^2}\] This is called a differential equation, because it involves derivatives of x. A standard way to write this equation that will be useful as we compare different systems is \[ a = \dfrac{d^2}{dt^2}x(t) = -\dfrac{k}{m}x(t) \] Let’s note several things about this equation. Its solution will be a mathematical expression that gives the position x as a function of the time \(t\), and it will involve the constant k/m (k and m are both positive constants). Also, the acceleration, \[a= \dfrac{d^2}{dt^2} x(t)\] is not constant. Rather, the acceleration is proportional to the displacement from equilibrium, but with the opposite sign. What kind of function, when differentiated twice, gives back the same function, but with a negative constant coefficient? Perhaps you remember from your calculus course which function has this property. If you do not, that is OK. What’s important is to understand the properties of the solution, not how to get the solution. There are two functions that have the property we desire. One is the sine function and the other is the cosine function.
The second derivative of \(A \sin (bt)\) with respect to \(t\) (when A and b are constants) is \(-b^2A\sin(bt).\) That is, \[ \dfrac{d^2}{dt^2}A\sin bt = -Ab^2 \sin bt \label{Eq5} \] And similarly for the cosine function. Comparing Equation \(\ref{Eq5}\) to the equation for the mass on a spring, \[ \dfrac{d^2}{dt^2}x(t) = -\dfrac{k}{m}x(t) \] we notice that they are the same if \(b^2\) equals \(k/m\). If we make that substitution, we get \[x(t)= A \sin \left( \sqrt{\dfrac{k}{m}}\, t \right) \] Try differentiating this twice with respect to t and see if you do not get the function \(x(t)\) back multiplied by the negative constant \(-k/m\). These two functions (the sine and the cosine) are solutions of the differential equation we obtained by applying Newton’s 2nd law to the mass hanging on the spring. These functions repeat every time the angle \(bt\) increases by \(2\pi \). Thus, the time to complete one oscillation is that value of \(t\) that satisfies the relation \(bt = 2\pi\). This time is called the period and is denoted by the letter \(T\). It is equal to \(2\pi /b \). \[T=\dfrac{2\pi}{b} = 2\pi\sqrt{\dfrac{m}{k}} \] Note that we know the period if we know the values of the factors that appear in Newton’s 2nd law (mass and spring constant). We can now write the differential equation in terms of \(T\): \[ \dfrac{d^2}{dt^2}x(t) = -\left(\dfrac{2\pi}{T}\right)^2 x(t)\] and the possible solutions also in terms of \(T\): \[ x(t) = A \sin\dfrac{2\pi t}{T}\] or \[x(t)= A \cos\dfrac{2\pi t}{T}\] Before pursuing the analysis of the spring mass system further, we will look at another system. Then we will generalize our results and discuss them in much more detail. Second System: Simple Pendulum We consider a mass hanging on a lightweight string. The mass swings back and forth when pulled aside and released. How do we apply Newton’s 2nd law? We first identify the objects and all the forces acting on the objects.
Then, the net force acting on any particular object is equal to the product of the mass and acceleration of that object. In the case of our pendulum, the object of interest is the bob. (In our model, the mass of the string is negligible.) Two forces act on the bob - the tension in the string directed along the string, and the gravitational pull of the earth straight down on the bob. The vector sum of these two forces is the net force or unbalanced force. The motion is constrained to be along the arc of a circle with radius equal to the length of the string, l. The tangential component of the net force, that is, the force tangent to the path the bob takes, is the component that causes the bob to speed up or slow down along this path. (The component of the net force along the string causes the bob to move in a circle, and is dependent on the instantaneous speed. We do not need to be concerned with this radial force now.) To proceed, we draw a force diagram (Figure 8.5.1), showing the forces acting on the bob. Applying Newton’s 2nd law along the tangential direction gives \[ mg\sin \theta = -ma_{tangential} \] The minus sign tells us that \(a_{tangential}\) is opposite to the direction of increasing \(\theta\). It is useful to express \(a_{tangential}\) in terms of \(\theta\). Since \(a_{tangential}\) is the second derivative of the distance moved along the arc, and since the distance along the arc is simply the product \(l\theta\), we have \(a_{tangential} = l\, d^2\theta/dt^2\). Then, \[mg\sin \theta = -ml\dfrac{d^2\theta}{dt^2}\] and cancelling out the mass, \[g\sin\theta = -l\dfrac{d^2\theta}{dt^2}\] This looks almost like our equation of motion for a mass on a spring. The difference is that we have a \(\sin\theta\) instead of \(\theta\) on the left hand side. Perhaps for small oscillations, that is, small values of \(\theta\), we can replace \(\sin\theta\) with \(\theta\).
If we make this approximation (substituting \(\theta\) for \(\sin\theta\) and then grouping the constants together on the left hand side) we get: \[-\dfrac{g}{l}\theta=\dfrac{d^2\theta}{dt^2} \] If we put this in standard form, we can easily compare it to the equation we got for the mass oscillating on a spring. \[ \text{simple pendulum:}\quad \dfrac{d^2\theta}{dt^2} = -\dfrac{g}{l}\theta(t) \] \[ \text{mass on spring:}\quad a=\dfrac{d^2x}{dt^2}=-\dfrac{k}{m}x(t) \] Note the similarity of these two equations. Except for the name of the variable, \(\theta\) or \(x\), which is arbitrary, they have the identical form. We saw before that, in terms of the period of a complete oscillation, we could write the equation for \(x(t)\) as: \[\dfrac{d^2}{dt^2}x(t)=-\left(\dfrac{2\pi}{T}\right)^2x(t),\] where \[ \left(\dfrac{2\pi}{T}\right)^2=\dfrac{k}{m} \quad\Rightarrow\quad T=2\pi\sqrt{\dfrac{m}{k}} \] Now by comparing the pendulum equation to the mass and spring equation, we see that the relation giving the period for a pendulum must be: \[ \left(\dfrac{2\pi}{T}\right)^2=\dfrac{g}{l} \quad\Rightarrow\quad T=2\pi\sqrt{\dfrac{l}{g}} \] Also, since the equation for the mass on a spring and the equation for the pendulum are in fact the same equation with different constants, they must have the same solution. So the mathematical function that worked for the mass-spring system must work for the simple pendulum, too. The distinguishing feature that makes these equations similar is that the acceleration is proportional to the displacement, but with the opposite sign. This is the unique feature that leads to simple harmonic motion (SHM). Before going any further with the analysis of SHM, it is useful to investigate its general properties. This is what we will now do. Contributors Authors of Phys7B (UC Davis Physics Department)
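The period formulas above can be checked numerically. The following is a minimal sketch (the function names and the Euler-Cromer integrator are my own choices, not part of the text): it computes \(T = 2\pi\sqrt{m/k}\) and \(T = 2\pi\sqrt{l/g}\), and separately measures the period by integrating \(a = -(k/m)x\) directly.

```python
import math

def period_spring(m, k):
    # T = 2*pi*sqrt(m/k) for a mass on a spring
    return 2 * math.pi * math.sqrt(m / k)

def period_pendulum(l, g=9.81):
    # T = 2*pi*sqrt(l/g) for a simple pendulum (small-angle approximation)
    return 2 * math.pi * math.sqrt(l / g)

def simulated_period(omega_sq, x0=0.1, dt=1e-5):
    """Integrate a = -omega_sq * x with the Euler-Cromer method and
    measure the time between two successive upward zero crossings."""
    x, v, t = x0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2 and t < 100.0:
        prev_x = x
        v += -omega_sq * x * dt   # update velocity from acceleration
        x += v * dt               # then position from the new velocity
        t += dt
        if prev_x < 0.0 <= x:     # upward zero crossing
            crossings.append(t)
    return crossings[1] - crossings[0]

m, k = 0.5, 8.0
T_analytic = period_spring(m, k)     # pi/2, about 1.571 s
T_numeric = simulated_period(k / m)  # should agree closely with T_analytic
```

Note that the simulated period depends only on \(\omega^2 = k/m\), not on the amplitude \(x_0\) — the hallmark of SHM discussed above.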
The comment is completely wrong, and that is why questions should not be answered in comments: a wrong answer in a comment cannot be downvoted. The covariant form of Maxwell's equations: $$\partial_{\alpha}F^{\alpha\beta} = \mu_{0} J^{\beta}$$ $$\partial_{\alpha}F_{\beta\gamma} + \partial_{\beta}F_{\gamma \alpha} + \partial_{\gamma} F_{\alpha \beta}=0$$ is indeed Lorentz invariant; in particular, you wrote the equations in a way where you know how everything transforms. But Maxwell tells you how the fields evolve, given the charge and current. It doesn't tell you the motion of charges. For that you'd need something completely different, like the Lorentz force law and some relativistic version of Newton's laws. Griffiths is saying that if you write down Maxwell in any frame, then you can use that (to find the field and then, e.g., use the Lorentz force law and $\vec F=d\vec p/dt$) to find the kinematics of objects. And these predictions for different frames will be the kinematics the two frames would describe for the same events. But Maxwell in vector form didn't tell us how the electric and magnetic fields transform between frames. We know the equations should be the same, but we don't know how the fields, the solutions, should change. So you should take it backwards and say that the fields should transform in a way that gives us the same dynamics. And that happens because we define the fields in terms of forces. So we need the Lorentz force law, for example, in order to know what the fields should be in the different frames in the vector form. So we have to assert how they transform to make it so that they give the same dynamics for the motion of matter when we combine them with the Lorentz force and some laws of motion.
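For completeness, here are the Lorentz force law and the standard field transformations for a boost with speed $v$ along $x$ (standard textbook results, added here as a reference rather than quoted from the answer above):

```latex
\vec F = q\left(\vec E + \vec v \times \vec B\right)
\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

E'_x = E_x, \qquad
E'_y = \gamma\left(E_y - v B_z\right), \qquad
E'_z = \gamma\left(E_z + v B_y\right)

B'_x = B_x, \qquad
B'_y = \gamma\left(B_y + \frac{v}{c^2} E_z\right), \qquad
B'_z = \gamma\left(B_z - \frac{v}{c^2} E_y\right)
```

These are exactly the transformation rules one is forced to adopt so that the combination of Maxwell's equations and the Lorentz force law predicts the same events in every frame.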
Hello /sci/, can anyone formulate a mathematical/logical proof as to why liking girls is gay? >>10996470Transitive property.You like girls.Girls like dicks.Therefore, You like dicks.Faggot. >>10996482Write it more formally, I would also like a mathematical proof pls >greater percentage of blacks than whites studying science(STEM) in higher educationAnd this is in England where there is no affirmative action (is confirmed illegal by the Gov). >>10996460Are you literally retarded? Can't you read? >>10996460Poor people go into stem independently of gender, race, etc. This is well know. >>10996460There's also a higher percentage of gender non-binary students than either male or female, how bout dat? >I'm brainlet I want to be good at mathbut as you know that low iq can't be good at math And there is no way to improve iq which is fixed by geneThese facts make me sad and lethargy >>10996421Kys I'm at the point where I have zero tolerance for self-pity. Who gives a fuck about IQ when you haven't even fucking tried yetJust pick up a good textbook and study faggotWatch khan academy or the billion other math resources online if you need help. Or just ask us. What's the point in whining on the internet, how does that help towards your goal? >>10996440...... >>10996421dumb frogposter I'm taking a class in Computational Data Analysis and the first topic is linear regression. I've forgotten all of it since i learned it back in second year. Any textbook recommendations? I got by with half-remembered stuff from first year of my undergrad and the lecture notes and then the stuff on general linear models in Bishop's Pattern Recognition and Machine Learning that I was using for a different unit. Most of it is pretty straightforward and most of the reading I did was for interest instead of learning.While we're on book recommendations: Suggested books for likelihood methods?
Ideally something that doesn't shy away from the information theory angle. I've been trying to figure out how it is tribes as small as 20-30 people have been surviving for thousands of years without having serious medical issues.https://slappedham.com/10-most-isolated-and-dangerous-tribes-in-the-world/2/I've figured it out now. In nature only the strong can survive, those who inherit weak genes tend to die off. Usually when they are still very young. Because of this only those with the strong genes survive.This would explain why Tribes have so many children and how such small groups are able to survive thousands of years in isolation. whenever I come back to this type of problems I stop thinking. I can't find any examples on this exact type of sums, it makes me feel retarded as fuck >>10996405Hmm... What about WHAT THEY FUCKING ARE, WHICH IS INFINITE GEOMETRIC SERIES???holy fuckThat one is 3/(1 - 1/8) = 24/7. >>10996405why is this written retardedly and not just[eqn]\sum_{n=1}^{\infty} \frac{3}{8^n} [/eqn] >>10996405This is not a problem, it's just an infinite sum. Scientifically speaking, why can't we run an inflation-only country? By this I mean have 0 taxes, and print more money to fund government activity. Printing more money is equivalent to taxing a flat rate, and if you want tax brackets you can only print more money to tax higher, and then give cash deposits to those in lower brackets.This is literally equivalent in all ways. So why do we make monkeys write tax returns? flat tax rates are completely fucking retarded >>10996446>Fucking moronYou missed my sarcasm. >>10996448Okay then how about this: We genocide all government creditors, obliterate all gay social programs, and now the ungodly amount of money we already print is enough to run the US government 10 times over. No taxes needed.
Flat tax rates are only politically stable when average income is the same as median income. >>10996391who's the cutie to the right? You need to create an algorithm with any real computable number A from [0;1] in input and the rules for the coin game with A chance of winning in output. You can use only 1 coin. One of 2 players should win in a finite period of time. Hello everyone, I am a 4th year med tech student, and I was wondering if there is any worthwhile advice and experience that I need to know after graduation. does one need to pay tuition fees for doing phd studies in biology in the usa? It is assumed your assistantship package will cover tuition fees. You, however, might still have to cover the other University's bullshit fees per semester (i.e. transportation, library, technology, administration)Between $500-1k per semester, in my experience >>10996431what if im an EU citizen? Can I work in the USA as field/research/teaching assistant? Who or what do you think programmed us?By Us I mean Earth lifeforms An instinct is the ability of a lifeform to perform a behavior the first time it is exposed to the proper stimulus. For example, a dog will drool the first time and every time it is exposed to food.We all share the instinct to reproduce for whatever reason Is the DNA a code of some sort? >>10996318Yes, DNA basically acts like a chemical nondeterministic Turing machine. Programmed by evolution of course.When certain chemicals are present at locations on the chromosome, certain genes are more likely to be expressed. Their expression of course is in the form of a protein, and those proteins in turn cause further expression. Thus we get recursion and hence computation. The evolutionary process programmed us >>10996318This >>10996337, it's a code of a sort but not in the way we think of one normally as something you write into a computer or that which is stored in a language.
Both of those codes have evident functional intent behind them, they exist to carry out their functions because those functions provide a utility to us their creators. DNA on the other hand is a "code" or "program" which has arisen as the inevitable result of the natural properties of elements, then compounds, then molecules, then amino-acid chains and complex proteins interacting with one another in a water-rich, energy rich environment. >>10996318are a spider's body and its web really two separate things?Genes code for proteins and the web is made from proteins.It's a pretty perfect example of an extended phenotype. You're a bunch of pseudo-scientistsChange my mind You're giving us too much credit dawg. No Bourbaki I'm not only talking about climate change, or other controversial science subject. Each paper in Science and Nature, the two top scientific journals are laden with 30-40 citations. In PNAS (Proceedings National Academy of Science) widely considered to be the 3rd highest impact factor, there is no citation limit and the average citation for one manuscript is by the hundreds.Everything is like IPCC says xxx. The UN says xxx. Bob et al. 2018 says xxx. Every scientific discussion on disagreement boils down to this peer reviewed study say this, and this peer reviewed study say that. Every science communicator says "trust the scientists" because you cannot do the science yourself. Modern science is so complex these days that amateurs cannot do experiments in their garage (like Galileo built a telescope in his attic) and overturn a whole discipline.Do you think this trend is hindering the progress of science? >>10996263Not all references to authority are appeals to authority. 
Appeal to authority specifically means that you trust authority solely on basis of stature without fairly assessing the validity of their claims.For the most part (I won't say this is always the case), scientists trust the IPCC, not because they're the IPCC, but because the results they've published have excellent methodology and analysis. >>10996263The peer review system has benefits and downsides, the benefit is that often a scientist's peers (other scientists working in the same field) have access to similar equipment to them which allows them to replicate experiments and thus test the viability of a hypothesis. It establishes a mechanism by which one of the most important parts of a viable theory can be obtained, that part being replication, if someone else can take your methodology and replicate your results it's much more probable that you're onto something. On the other hand peer review is not immune to creeping corruption or nepotism of a kind, peer reviewers are not obligated to review everything that comes across their desk, they can and regularly do reject material out of hand. As a result controversial or contradicting material can be blacklisted even if it has some scientific merit, all based on the preexisting biases of the human being entrusted with deciding which material to publish and which to discard. Just to use the climate controversy as an example, there is a significant body of climatologists (I think around 200) who have resorted to self publishing studies which they believe provide data contradicting the viability of the anthropogenic climate change conjecture, they self publish because all of their work in that particular subject has been rejected by peer reviewers out of hand in spite of these climatologists having sizable bodies of more mainstream work and being fully accredited in their field. I'm not going to make a value claim on their work, but it seems to me like a clear display of baseless bias. 
If AGW is a strong and sound proposition, then challenges to its validity shouldn't need to be deliberately pushed out of easy public sight; it ought to be able to stand with its critics on even ground. >>10996302climate denialists trust youtube crackpots and alex jones and /pol/ infographics more. uneducated people just do that kind of stuff >>10996263>appeal to authorityThat is a logical fallacy, not an argument.Also, if you are doing experiments then YOU ARE THE SCIENTIST. "Amateur" and "professional" isn't a distinction that needs to be made. >>10996263>"appeal to authority"stop using words you don't understandAppeal to authority is: Aristotle said this, so this is true.I challenge you to find a scientific paper that uses something like this as an argument. Like: Prof. XY said this in his paper so this is true.If the scientific community "trusts" Prof. XY then it's because his findings have been replicated by other scientists. If we live in a simulation and physics are the laws of the simulation, breaking the laws - as in discovering vulnerabilities/flaws in the system - would equate to breaking the laws of physics, which in turn would allow us to exploit the system or accidentally crash it; time travel might be iteration over older records, parallel universes might be access to parallel processes, and etc. >>10996240how do you know you're actually breaking the laws of the simulation? Wouldn't we just consider breaking said laws to be new physics? >>10996240I live in the Carmelscape >>10996240Cool so you made a metaphorWhat now? There have been new good video footage of ufos at different locations in the world for example thishttps://www.youtube.com/watch?v=3OzTIGEAnr4and in iran they tried to shoot down some of themhttps://www.youtube.com/watch?v=u3yzNIYqTL0 >>10996232That red circle is clearly added in post. >plastic bag floats in the air during a storm>ufos, man >>10996234What needs to be explained? Nothing unidentifiable is visible in the second vid.
The lights in the sky are AA rounds. >>10996232> and in iranYes, it certainly was Iran. You can tell by the word "ISRAEL" on the lower right corner.Zoomers wouldn't have any idea what this footage actually is, but us OF's (old farts) know it quite well. This was the opening night of Gulf War I. CNN was a brand new network at the time, and the idea that a war could be broadcast live on TV was completely unheard of. The closest thing we had before this was newspaper drawings of fleet positions to the northeast of the Falklands when the UK and Argentina had a go at it.At the very beginning, Iraq fired off what few long range cruise missiles they had left over from the Iran-Iraq war at Israel. Of course, we were all positively glued to our TV. I know this because I was over Sheri's (a waitress I worked with) house at the time, and she tried to seduce me in a pink lacy bra, stockings and garters. I was having none of it, though. The fucking war was starting. Fuck sex. >>10996321Vietnam was broadcast on the evening news you doofus. so if i were to go to a phd programme in biology in the usa i would not need to pay tuition fees? Decay >>10996112f orbitals (l=3) are filled >>10996112Bottom row:>Nuclear bombs>Nuclear energy>Nuclear waste>Smoke detectors>Radiation detectorsTop row:>Magnets>Catalysts>Little bits put in glass and metal For me, it's the d-block people who write your 2s like this, why are you retarded it is well known that anyone who writes their 2s like that practices a VERY unhealthy exercise >>10996048for me it's i write in my own language so looky-loos can't copy me. >>10996048>two superiorWhat did it mean by this? I made a thread yesterday about meme learning and I didn't get the chance to thank the anons who replied before it was archived. THANKS /SCI/!!! Based and wholesomepilled!
The diagonal formula in mathematics is used to calculate the diagonals of a polygon, including rectangles, squares, and similar shapes. When two non-adjacent vertices of a polygon are joined by a single line segment, that segment is called a diagonal of the polygon. In other words, a diagonal is formed by joining any two vertices of a polygon other than by an edge; such a sloping segment is what we call a diagonal. Here, we will discuss the diagonal calculation for the rectangle and the square. A square is a regular quadrilateral whose four sides are equal and meet at angles of 90 degrees. This is the reason the diagonals of a square are also equal. In the case of a rectangle, the opposite sides are parallel and congruent. The diagonals of a rectangle bisect each other and are congruent too. Below are given the diagonal formulas for the square and the rectangle. \[\LARGE Diagonal\;of\;a\;Square=a\sqrt{2}\] Where, a is the length of the side of the square \[\LARGE Diagonal\;of\;a\;Rectangle=\sqrt{l^{2}+b^{2}}\] Where, l is the length of the rectangle. b is the breadth of the rectangle. (In the parallelogram formulas below, p and q denote the diagonals.) \[\LARGE Diagonal\;of\;a\;Cube=\sqrt{3}x\] Where, x is the length of the side of the cube. Polygon formula to find area: \[\large Area\;of\;a\;regular\;polygon=\frac{1}{2}n\; \sin\left(\frac{360^{\circ}}{n}\right)s^{2}\] Polygon formula to find interior angles: \[\large Sum\;of\;interior\;angles\;of\;a\;polygon=\left(n-2\right)180^{\circ}\] Polygon formula to find the number of triangles: \[\large Number\;of\;triangles\;in\;a\;polygon=\left(n-2\right)\] Where, n is the number of sides and s is the length from the center to a corner.
Formula of parallelogram diagonals in terms of sides and cosine of \(\beta\) (cosine theorem): \[\LARGE p=d_{1}=\sqrt{a^{2}+b^{2}- 2ab\;\cos \beta}\] \[\LARGE q=d_{2}=\sqrt{a^{2}+b^{2}+ 2ab\;\cos \beta}\] Formula of parallelogram diagonals in terms of sides and cosine of \(\alpha\) (cosine theorem): \[\LARGE p=d_{1}=\sqrt{a^{2}+b^{2}+2ab\;\cos \alpha }\] \[\LARGE q=d_{2}=\sqrt{a^{2}+b^{2}-2ab\;\cos\alpha }\] Formula of parallelogram diagonal in terms of two sides and the other diagonal: \[\LARGE p=d_{1}=\sqrt{2a^{2}+2b^{2}-d_{2}^{2}}\] \[\LARGE q=d_{2}=\sqrt{2a^{2}+2b^{2}-d_{1}^{2}}\] To find the total number of diagonals of a polygon with n sides, you can use the following formula: \[\LARGE Number\;of\;diagonals=\frac{n\left(n-3\right)}{2}\] This formula is applicable to all shapes that satisfy the properties of a polygon. No formula in mathematics appears out of nowhere; there is logic behind it, and the same is true for the formula given above. Just memorize it and start solving tough problems in minutes.
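The formulas above are easy to put into code. A minimal sketch in Python (the function names are my own):

```python
import math

def square_diagonal(a):
    # d = a * sqrt(2)
    return a * math.sqrt(2)

def rectangle_diagonal(l, b):
    # d = sqrt(l^2 + b^2)
    return math.sqrt(l**2 + b**2)

def cube_diagonal(x):
    # space diagonal d = sqrt(3) * x
    return math.sqrt(3) * x

def num_diagonals(n):
    # a polygon with n sides has n*(n-3)/2 diagonals
    return n * (n - 3) // 2

def parallelogram_diagonals(a, b, beta_deg):
    # cosine-rule forms: diagonals p, q from sides a, b and angle beta
    c = math.cos(math.radians(beta_deg))
    p = math.sqrt(a * a + b * b - 2 * a * b * c)
    q = math.sqrt(a * a + b * b + 2 * a * b * c)
    return p, q
```

For a rectangle (a parallelogram with a 90-degree angle), both parallelogram diagonals collapse to \(\sqrt{l^2+b^2}\), which is a quick consistency check on the formulas.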
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Saturation vapor pressure \(e_s\) is calculated from a given temperature \(T\) (in \(K\)) by using the Clausius-Clapeyron relation. \[\begin{equation} e_s(T) = e_s(T_0)\times \exp \left(\frac{L}{R_w}\left(\frac{1}{T_0} - \frac{1}{T}\right)\right) \tag{1} \end{equation}\] where \(e_s(T_0) = 6.11\ hPa\) is the saturation vapor pressure at a reference temperature \(T_0 = 273.15\ K\), \(L = 2.5 \times 10^6\ J/kg\) is the latent heat of evaporation for water, and \(R_w = \frac{1000R}{M_w} = 461.52\ J/(kg\ K)\) is the specific gas constant for water vapor (where \(R = 8.3144621\ J/(mol\ K)\) is the molar gas constant and \(M_w = 18.01528\ g/mol\) is the molar mass of water vapor). For more details, refer to Shaman and Kohn (2009). An alternative way to calculate saturation vapor pressure \(e_s\) is the equation proposed by Murray (1967). \[\begin{equation} e_s = 6.1078\exp{\left[\frac{a(T - 273.16)}{T - b}\right]} \end{equation}\] where \(\begin{cases} a = 21.8745584 \\ b = 7.66 \end{cases}\) over ice; \(\begin{cases} a = 17.2693882 \\ b = 35.86 \end{cases}\) over water. The resulting \(e_s\) is in hectopascal (\(hPa\)) or millibar (\(mb\)). Relative humidity \(\psi\) is defined as the ratio of the partial water vapor pressure \(e\) to the saturation vapor pressure \(e_s\) at a given temperature \(T\), and is usually expressed in \(\%\) as follows \[\begin{equation} \psi = \frac{e}{e_s}\times 100 \tag{2} \end{equation}\] Therefore, when given the saturation vapor pressure \(e_s\) and relative humidity \(\psi\) (in \(\%\)), the partial water vapor pressure \(e\) can also be easily calculated from equation (2): \[ e = \frac{\psi}{100}\, e_s \] The resulting \(e\) is in the same units as \(e_s\). Absolute humidity \(\rho_w\) is the total amount of water vapor \(m_w\) present in a given volume of air \(V\). The definition of absolute humidity can be described as follows \[ \rho_w = \frac{m_w}{V} \] Water vapor can be regarded as an ideal gas at normal atmospheric temperature and pressure.
Its equation of state is \[\begin{equation} e = \rho_w R_w T \tag{3} \end{equation}\] Absolute humidity \(\rho_w\) is derived by solving equation (3). \[ \rho_w = \frac{e}{R_w T} \] The resulting \(\rho_w\) is in \(kg/m^3\). Mixing ratio \(\omega\) is the ratio of water vapor mass \(m_w\) to dry air mass \(m_d\), expressed in equation form as follows \[ \omega = \frac{m_w}{m_d} \] The resulting \(\omega\) is in \(kg/kg\). Specific humidity \(q\) is the ratio of water vapor mass \(m_w\) to the total (i.e., including dry) air mass \(m\) (namely, \(m = m_w + m_d\)). The definition is described as \[ q = \frac{m_w}{m} = \frac{m_w}{m_w + m_d} = \frac{\omega}{\omega + 1} \] Specific humidity can also be expressed in the following way. \[ \begin{equation} q = \frac{\frac{M_w}{M_d}e}{p - (1 - \frac{M_w}{M_d})e} \tag{4} \end{equation} \] where \(M_d = 28.9634\ g/mol\) is the molar mass of dry air; \(p\) represents atmospheric pressure, and the standard atmospheric pressure is equal to \(101,325\ Pa\). For the details of the derivation, refer to Wikipedia. Substitute \(\frac{M_w}{M_d} \approx 0.622\) into equation (4) and simplify the formula. \[ q \approx \frac{0.622e}{p - 0.378e} \tag{5} \] The resulting \(q\) is in \(kg/kg\). Hence, by solving equation (5) we can obtain the equation for calculating the partial water vapor pressure \(e\) given the specific humidity \(q\) and atmospheric pressure \(p\). \[ e \approx \frac{qp}{0.622 + 0.378q} \tag{6} \] Substituting equations (1) and (6) into equation (2), we can get the equation for converting specific humidity \(q\) into relative humidity \(\psi\) at a given temperature \(T\) and under atmospheric pressure \(p\). Murray, F. W. 1967. “On the Computation of Saturation Vapor Pressure.” J. Appl. Meteor. 6 (1). American Meteorological Society: 203–4. https://doi.org/10.1175/1520-0450(1967)006%3C0203:OTCOSV%3E2.0.CO;2. Shaman, J., and M. Kohn. 2009.
“Absolute Humidity Modulates Influenza Survival, Transmission, and Seasonality.” PNAS 106 (9). Natl Acad Sciences: 3243–8. https://doi.org/10.1073/pnas.0806852106.
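The conversion chain described above (equation (1) for \(e_s\), equation (6) for \(e\), and equation (2) for \(\psi\)) can be sketched in Python. This is a minimal illustration using the constants quoted in the text; the function names are my own, and everything works in SI units (\(Pa\), \(K\), \(kg/kg\)).

```python
import math

# Constants quoted in the text (SI units)
E_S0 = 611.0   # saturation vapor pressure at T0: 6.11 hPa = 611 Pa
T0 = 273.15    # reference temperature, K
L = 2.5e6      # latent heat of evaporation for water, J/kg
R_W = 461.52   # specific gas constant for water vapor, J/(kg K)

def saturation_vapor_pressure(T):
    """Equation (1): Clausius-Clapeyron relation; T in K, result in Pa."""
    return E_S0 * math.exp((L / R_W) * (1.0 / T0 - 1.0 / T))

def vapor_pressure_from_specific_humidity(q, p):
    """Equation (6): partial vapor pressure (Pa) from specific humidity
    q (kg/kg) and atmospheric pressure p (Pa)."""
    return q * p / (0.622 + 0.378 * q)

def relative_humidity(q, T, p=101325.0):
    """Equation (2): relative humidity in %, from q (kg/kg), T (K), p (Pa)."""
    return 100.0 * vapor_pressure_from_specific_humidity(q, p) / saturation_vapor_pressure(T)

def absolute_humidity(e, T):
    """Equation (3) solved for rho_w: kg/m^3, from e (Pa) and T (K)."""
    return e / (R_W * T)
```

For example, `relative_humidity(0.005, 293.15)` gives roughly 34 %, a plausible mid-range value at 20 °C.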
Mapping/Examples/root x + root y = 1 $R_5 = \set {\tuple {x, y} \in \R \times \R: \sqrt x + \sqrt y = 1}$ Then $R_5$ is not a mapping. Proof $R_5$ fails to be a mapping because, for example, $\sqrt x$ does not exist (as a real number) for $x < 0$. Thus $R_5$ is undefined for $x < 0$, and so $R_5$ fails to be left-total. We have: \[\begin{aligned} \sqrt x + \sqrt y &= 1 \\ \leadsto \quad \paren {\sqrt x + \sqrt y}^2 &= 1 \\ \leadsto \quad x + y + 2 \sqrt {x y} &= 1 \\ \leadsto \quad \paren {x + y - 1}^2 &= 4 x y \\ \leadsto \quad x^2 + y^2 - 2 x y - 2 x - 2 y + 1 &= 0 \end{aligned}\] $\blacksquare$
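As a numerical sanity check of the chain of implications above (my own addition, not part of the ProofWiki page), points with $\sqrt x + \sqrt y = 1$ do satisfy the final quartic:

```python
import math

def on_curve(x):
    """Given x in [0, 1], return (x, y) with sqrt(x) + sqrt(y) = 1."""
    y = (1.0 - math.sqrt(x)) ** 2
    return x, y

def quartic(x, y):
    """The derived relation x^2 + y^2 - 2xy - 2x - 2y + 1."""
    return x * x + y * y - 2 * x * y - 2 * x - 2 * y + 1

for t in [0.0, 0.1, 0.25, 0.5, 0.9, 1.0]:
    x, y = on_curve(t)
    assert abs(quartic(x, y)) < 1e-12
```

Note the implications only go one way: the quartic has solutions not on the original curve (e.g. $x = 4$, $y = 1$, which satisfies $\sqrt x - \sqrt y = 1$ instead), which is why the steps are written with $\leadsto$ rather than equivalence.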
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
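The multiplication rule quoted above is easy to spot-check numerically. Here is a quick sketch over $\Bbb{Q}(\sqrt{2})$ (the choice $\delta = 2$ and the function names are mine), representing $a + b\sqrt{\delta}$ as a pair of `Fraction`s:

```python
from fractions import Fraction

DELTA = Fraction(2)  # delta = 2, i.e. elements a + b*sqrt(2)

def mul(alpha, beta):
    """(a + b sqrt(delta)) * (c + d sqrt(delta)), per the rule in the chat:
    (ac + bd*delta) + (bc + ad) sqrt(delta)."""
    a, b = alpha
    c, d = beta
    return (a * c + b * d * DELTA, b * c + a * d)

# associativity spot-check on a few triples of elements
triples = [
    ((Fraction(1), Fraction(2)), (Fraction(3), Fraction(-1)), (Fraction(-2), Fraction(5))),
    ((Fraction(0), Fraction(1)), (Fraction(0), Fraction(1)), (Fraction(7), Fraction(9))),
]
for alpha, beta, gamma in triples:
    assert mul(mul(alpha, beta), gamma) == mul(alpha, mul(beta, gamma))
```

This is of course only a spot check on a few triples, not the general associativity argument being discussed.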
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or deriving CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I've wondered whether to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I've failed Put it another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
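For reference, the 0/1 variant of the knapsack problem quoted earlier admits the standard dynamic program sketched below (my own illustration; the quoted definition allows repeated copies of each item, for which the inner loop would run upward instead of downward).

```python
def knapsack_01(weights, values, capacity):
    """Maximum total value of a subset of items with total weight <= capacity."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

For example, `knapsack_01([2, 3, 4], [3, 4, 5], 5)` returns `7`, picking the items of weight 2 and 3.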
I'm trying to understand how the Doi-Peliti (DP) action is constructed, and specifically how they compute expectation values. To this end, I've been using the book by Täuber as a reference (Critical Dynamics: A Field Theory Approach to Equilibrium and Non-Equilibrium Scaling Behaviour). The point I seem to be missing is during the discretized $\rightarrow$ path integral procedure of the expectation value of an observable. For a single chemical species with lattice occupation numbers $\{n_i\}$ this is defined as $$ \langle A(t)\rangle =\sum_{\{n_i\}}A(\{n_i\})P(\{n_i\}, t) $$ where $P(\{n_i\},t)$ denotes the probability of observing a configuration $\{n_i\}$ and follows a master-type equation. In the DP formalism, the chemical reactant is assigned a site-specific bosonic ladder algebra $a_i$, $a_i^{\dagger}$ and one finds that the expectation value above may be expressed in this language by \begin{equation} \label{eq:expect} \langle A(t)\rangle=\langle\mathcal{P}|A\left(\{a_i^{\dagger}a_i\}\right)|\Phi(t)\rangle \end{equation} Here the projection operator $\langle\mathcal{P}|=\langle 0|\prod_ie^{a_i}$, and the state vector $|\Phi(t)\rangle=\sum_{\{n_i\}}P(\{n_i\},t)|\{n_i\}\rangle$ satisfies the imaginary-time Schrödinger equation $$ \partial_t|\Phi(t)\rangle=-H(\{a_i^{\dagger}\},\{a_i\})|\Phi(t)\rangle $$ ($H(\{a_i^{\dagger}\},\{a_i\})$ is meant to indicate that $H$ is normal-ordered). By shifting the operator $\prod_ie^{a_i}$ in the above expression for $\langle A(t)\rangle$ over to the right, one obtains $$ \langle A(t)\rangle=\langle0|\tilde{A}\left(\{a_i^{\dagger}\rightarrow1\},\{a_i\}\right)e^{-H(\{a_i^{\dagger}\rightarrow 1+a_i^{\dagger}\},\{a_i\})t}|\tilde{\Phi}(0)\rangle $$ in which $\tilde{A}(\{1\},\{a_i\})$ is obtained from $A$ by normal ordering and replacing $a_i^{\dagger}$ by $1$ (e.g.
$a_i^{\dagger}a_ia_j^{\dagger}a_j\rightarrow a_i\delta_{ij}+a_ia_j$), and $$ |\tilde{\Phi}(0)\rangle=\prod_ie^{a_i}|\Phi(0)\rangle $$ Here comes the part I seem to fail to understand. If we denote by $$ U(t_2,t_1)=e^{-H(\{a_i^{\dagger}\rightarrow 1+a_i^{\dagger}\},\{a_i\})(t_2-t_1)} $$ then clearly $U(t_2,t_1)=U(t_2,t')U(t',t_1)$. We may thus split the time-evolution operator $U$ in $\langle A(t)\rangle$ up into many pieces and insert the completeness relation $$ 1=\int\prod_i\frac{d\phi_i^*d\phi_i}{2\pi i}e^{-\sum_i\phi_i^*\phi_i}|\phi\rangle\langle\phi| $$ ($i$ in the denominator is the imaginary unit and $|\phi\rangle$ is a coherent state) in between each time-step to obtain $$ \langle A(t)\rangle=\int\left(\prod_{i,k}\frac{d\phi_i^*(t_k)d\phi_i(t_k)}{2\pi i}\right)\langle0|\tilde{A}\left(\{1\},\{a_i\}\right)|\phi(t_f)\rangle\left(\prod_j\langle\phi(t_j)|U(t_j,t_{j-1})|\phi(t_{j-1})\rangle\right)\times\langle\phi(t_0)|\tilde{\Phi}(0)\rangle $$ The matrix elements $$ \langle\phi(t_j)|U(t_j,t_{j-1})|\phi(t_{j-1})\rangle $$ are easily calculated. However, to me it seems that $$ \tilde{A}\left(\{1\},\{a_i\}\right)|\phi(t_f)\rangle=\tilde{A}\left(\{1\},\{\phi_i(t_f)\}\right)|\phi(t_f)\rangle $$ since $a_i|\phi\rangle=\phi_i|\phi\rangle$. In particular, I don't find it obvious how the above tends to the path integral $$ \langle A(t)\rangle=\int\prod_i\mathcal{D}[\phi_i^*,\phi_i]\tilde{A}(\{1\},\{\phi_i(t)\})e^{-\mathcal{A}[\phi_i^*,\phi_i]} $$ (for some action $\mathcal{A}$ I leave unspecified), as it seems as though the observable $\tilde{A}$ should only be evaluated at the final point $\phi(t_f)$. Sorry for the very long message. Any help would be greatly appreciated! This post imported from StackExchange Physics at 2017-08-11 12:47 (UTC), posted by SE-user john
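One way to build confidence in the coherent-state eigenvalue property $a_i|\phi\rangle=\phi_i|\phi\rangle$ used in the question is a quick numerical check in a truncated single-mode Fock space (a sketch; the truncation $N$ and the value of $\phi$ are arbitrary choices of mine):

```python
import math

N = 40                 # Fock-space truncation (valid when |phi|^2 << N)
phi = 0.7 + 0.3j       # coherent-state eigenvalue (arbitrary choice)

# coherent-state coefficients c_n proportional to phi^n / sqrt(n!), built iteratively
c = [1.0 + 0.0j]
for k in range(1, N):
    c.append(c[-1] * phi / math.sqrt(k))
norm = math.sqrt(sum(abs(z) ** 2 for z in c))
c = [z / norm for z in c]

# apply the annihilation operator: (a c)_n = sqrt(n+1) c_{n+1}
ac = [math.sqrt(k + 1) * c[k + 1] for k in range(N - 1)] + [0.0j]

# a|phi> = phi|phi> holds up to truncation error in the last component
residual = math.sqrt(sum(abs(ac[k] - phi * c[k]) ** 2 for k in range(N)))
assert residual < 1e-10
```

The residual is dominated by the dropped component $\phi\, c_{N-1}$, which is negligible when $|\phi|^2 \ll N$.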
I was reminded of a quote earlier: “The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn’s lemma?” — Jerry Bona I find I am slightly annoyed by this quote, because the equivalence between these three is so direct. I know that the joke is that they’re equivalent, but the equivalence is direct enough that, if you look at it the right way, building intuition about one should help you build intuition about the others. Moreover, I think the way Zorn’s lemma is taught has a problem-solution ordering issue. Zorn’s lemma, at its core, takes a common style of proof and isolates the boring mechanical parts into a self-contained tool that abstracts them away. This is great, but the problem is that it so effectively supplants the method it’s trying to abstract that nobody teaches that method any more, so it feels like it comes out of nowhere. That method is transfinite induction, or induction over infinite well ordered sets. To explain how to use this, we need the following results: Well ordered induction principle Let \(X\) be some well ordered set, and let \(P\) be some property of elements of \(X\) such that for all \(x \in X\), if \(P(y)\) holds for all \(y < x\) then \(P(x)\) holds. Then \(P(x)\) holds for all \(x \in X\). Proof: Suppose not, and let \(x\) be the smallest element for which it does not hold. Then it holds for all \(y < x\) by the fact that \(x\) is the smallest. Thus it holds for \(x\) by our assumption on \(P\). This is a contradiction. QED We can use this much like we would use induction on the natural numbers. In particular it allows us to define functions recursively: if we can define a function \(f(x)\) in terms of its values for \(y < x\), this allows us to define a function on the whole of \(X\) (consider the property \(P(x)\) = “\(f\) is uniquely defined for all \(y < x\)”). Hartogs’ Lemma Let \(X\) be some set. There is a well-ordered set \(W\) such that there is no injective function \(f: W \to X\).
Note: This doesn’t require the axiom of choice, but without the axiom of choice we can’t then conclude that there is some injective function \(f: X \to W\), because we could use such an injective function to well-order \(X\). This then implies the well-ordering theorem, which implies the axiom of choice. Proof: I’m only going to sketch this. The core idea is that you define \(W\) as the set of all well-orders of subsets of \(X\), up to order isomorphism. If there were an injective function from \(W\) to \(X\) then we could construct an element \(x \in W\) such that the initial segment \(\{y : y < x\}\) is isomorphic to \(W\). This gives us a function \(f : W \to W\) such that \(f(x) < x\). Then \(x > f(x) > f^2(x) > \ldots\) gives us an infinite decreasing sequence in a well ordered set, which is impossible. If you didn’t understand that sketch, don’t worry about it. It won’t be on the test. Just trust the theorem. Putting this all together Let \(V\) be a vector space and let \(L\) be the set of all linearly independent subsets of \(V\). Let \(W\) be some well ordered set with no injective functions into \(L\). Define a function \(f: W \to \mathcal{P}(V)\) recursively as follows: If \(A = \bigcup\limits_{y < x} f(y) \) spans the whole vector space, let \(f(x) = A\). Else, let \(f(x) = A \cup \{ v \} \) where \(v \in V\) is picked using the axiom of choice to be any element outside the span of \(A\). The intuition here is that we keep adding in new elements which are linearly independent of the existing elements, and we’re going to keep going until we can’t go any further, at which point we must span the whole space. Claim 1: If \(y < x\) then \(f(y) \subseteq f(x)\). Proof: This is by construction. We defined \(f(x)\) in terms of a union that contains \(f(y)\). Claim 2: \(f(x)\) is always linearly independent. Proof: Induction! Suppose it’s true for all \(y < x\), but that \(f(x)\) is linearly dependent. Then we can find linearly dependent \(v_1, \ldots, v_n \in f(x)\).
A nontrivial dependence involving the newly chosen element \(v\) is impossible, since \(v\) was picked outside the span of the rest of \(f(x)\); so there are \(y_1, \ldots, y_n\) with \(v_i \in f(y_i)\). But because there are only finitely many \(y_i\) we can find \(y = \max y_i\) such that \(v_1, \ldots, v_n \in f(y)\) (because of the previous claim). So \(f(y)\) is linearly dependent, contradicting the inductive assumption. QED So actually \(f: W \to L\). But this means that \(f\) cannot be injective (by choice of \(W\)). So at some point we must have \(y < x\) with \(f(x) = f(y)\). But this can only happen if at some point our choice of \(A\) spanned the whole space. Thus \(f(x)\) must span the whole space, and thus is a linearly independent spanning set, i.e. a basis. QED And thus Zorn It may not be obvious what about vector spaces mattered in the above and what was generalizable, but after you’ve done a few of those it starts to become clearer. Rather than making you sit through that, let me highlight what I think were the salient details: We have some partially ordered set – in this case linearly independent subsets of a vector space, ordered by inclusion. We take our well ordered set and construct a function into that partially ordered set which is strictly increasing until it hits a maximal element. Because the function cannot be injective, it cannot be strictly increasing forever, so there must be a maximal element. In the case of a vector space, a maximal linearly independent set must span the whole space (because otherwise you could add in another element), so that maximal element is what we were looking for. So all we need to know now is when we can construct such a function. What was the property of linearly independent sets that let us do this? The property was that given our choice of \(f(y)\) for \(y < x\) we were always able to choose \(f(x)\) to be greater than or equal to every \(f(y)\). i.e. we choose \(f(x)\) to be an upper bound. If every subset of our target partially ordered set had an upper bound, then we could always construct this choice.
This is however too strong a condition: e.g. it’s not the case that any two linearly independent subsets have a common upper bound. The sets \(\{v\}\) and \(\{-v\}\) do not for any non-zero \(v\). However, all we really need is that the sort of sets we get during our transfinite induction have upper bounds. And the important feature of these sets is that they come from an increasing sequence. In particular, any two elements of them are comparable. They form a chain. This leads us to the notion required for Zorn’s lemma: We are interested in partially ordered sets such that every chain has an upper bound. This allows us to construct our function, and transfinite induction then gives us a maximal element. So: Zorn’s lemma Let \(T\) be a partially ordered set such that every chain has an upper bound.Then there is some maximal element \(t \in T\). i.e. there is no element \(s \in T\) such that \(t < s\). Proof: Our proof will be very similar to our proof for vector space having a basis. The only major difference is that we’ll have to work a little harder at the recursive definition of our increasing function because where for the basis form we could construct the function and then show its output was always in \(L\), here we have to simultaneously construct the function and show that it’s well defined. In order to do this we’ll introduce a special value \(\infty \not\in T\). If things go wrong and we fail to construct a maximal element we’ll return \(\infty\) instead. This simplifies things to showing that we never return \(\infty\). First, lets be more explicit about our use of the axiom of choice. Let \(f\) be some choice function. We’re going to define an ‘upper bound’ function \(u(A)\) as follows: Let \(Q = \{x: y < x \forall y \in A\}\). Let \(R = \{x: y \leq x \forall y \in A\}\). If \(Q\) is non-empty, let \(u(A) = f(Q)\), i.e. any element which is strictly greater than all of the elements of \(A\). Else if \(R\) is non-empty, let \(u(A) = f(R)\) – i.e. 
any element which is \(\geq\) every element of \(A\). Else, there are no upper bounds, so return \(\infty\). Let \(W\) be some well ordered set. Define \(f : W \to T \cup \{\infty\}\) recursively as \(f(x) = \infty\) if \(f(y) = \infty\) for any \(y < x\), else \(f(x) = u(\{f(y): y < x\})\) Claim: This function never returns \(\infty\). Proof by induction: Suppose this is true for all \(y < x\). Then the set \(\{f(y): y < x\}\) must form a chain: if \(w < v < x\) we picked \(f(v) \geq f(w)\) by our definition of \(u\). Thus, by our assumption on the partially ordered set \(T\), this set has an upper bound, so \(R\) in our definition of \(u\) is non-empty. This means that \(f(x)\) is chosen to be an element of \(R\) and thus is not \(\infty\). So we’ve constructed an increasing \(f : W \to T\). By choosing \(W\) appropriately, this cannot be an injection. So find \(y < x\) with \(f(x) = f(y)\). This can only happen if there is no element \(s\) such that \(s > f(y)\). i.e. \(f(y)\) is a maximal element. QED Although the details differed, hopefully this should look structurally pretty similar to the more concrete form with the vector space basis. Why is this useful? The main reason it’s useful is that the property of chains having upper bounds comes up a lot. In particular it comes up with things that are “essentially finite” in nature. Most applications of Zorn seem to boil down to the following more specific lemma: Teichmüller–Tukey lemma Let \(X\) be some set and let \(L \subset \mathcal{P}(X)\) be some family of sets with the property that \(A \in L\) if and only if \(B \in L\) for every finite \(B \subseteq A\). Then there is some \(A \in L\) such that for all \(x \in X \setminus A\), \(A \cup \{x\} \not\in L\). Note: In particular, as we saw in our original proof, the linearly independent subsets of a vector space satisfy these conditions. Most essentially “algebraic” conditions tend to satisfy it. Proof: We consider \(L\) partially ordered by subset inclusion.
We’ll show that the union of any chain of sets is in \(L\). Suppose \(C\) were some chain of sets in \(L\) such that \(\bigcup C \not \in L\). Then we can find a finite set \(\{x_1, \ldots, x_n\} \subseteq \bigcup C\) not in \(L\). But then we can find \(U_i\) such that \(x_i \in U_i \in C\). Because \(C\) is a chain we can thus find a maximum \(U\). But then \(\{x_1, \ldots, x_n\}\) is a finite subset of \(U\) which is not in \(L\), contradicting the assumption that \(U \in L\). QED In parting Hopefully even if you didn’t follow all the details this demystified Zorn’s lemma a bit. In general, there is a rich theory of well ordered sets and it seems to often be skipped or deferred until after Zorn’s lemma has already been taught. I can understand why – if you fill in all the details it feels like a pretty complex piece of machinery to be introducing when all you’re going to do is use it to prove Zorn’s lemma and then forget about it. There’s a lot more to the theory than that though. Some of it is pretty interesting, and some of it I just think is useful in demystifying where a lot of this sort of thing comes from.
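As a footnote (my own illustration, not the author's): in a finite setting the transfinite recursion above collapses to an ordinary loop. The sketch below greedily builds a maximal linearly independent set (a basis) of \(GF(2)^3\), with vectors encoded as bitmasks, mirroring the "keep adding elements outside the span" step:

```python
def reduce(basis, v):
    """Reduce bitmask v against a basis kept in decreasing order with
    distinct leading bits; returns 0 iff v is in the GF(2)-span."""
    for b in basis:
        v = min(v, v ^ b)
    return v

def greedy_basis(universe):
    """Keep adding vectors outside the current span until none remain."""
    basis = []
    for v in universe:
        r = reduce(basis, v)
        if r:  # v lies outside span(basis): add (its reduced form)
            basis.append(r)
            basis.sort(reverse=True)
    return basis

basis = greedy_basis(range(8))  # all of GF(2)^3 as bitmasks 0..7
assert len(basis) == 3          # a maximal independent set is a basis
```

Here `reduce(basis, v) == 0` is the span-membership test, so the loop is literally "if \(v\) is outside the span, add it", and the loop terminating plays the role of the recursion reaching a maximal element.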
I tried a couple of times to compute $\int_{-\pi}^{\pi}\sin(nx)e^{-inx}\,dx$. According to W.A. it should be $-\pi i$, and I'm losing my mind trying to understand where I went wrong. Throughout, I use that $e^{-in \pi}=\cos(n \pi)-i \sin(n\pi)=(-1)^{n}$, and that $[\cos(nx)e^{-inx}]_{-\pi}^{\pi}=0$. Here is what I did: $\int_{-\pi}^{\pi}\sin(nx)e^{-inx}\,dx=[\frac{(-\cos(nx))}{n}e^{-inx}]_{-\pi}^{\pi}-\int_{-\pi}^{\pi}\frac{ine^{-inx}\cos(nx)}{n}\,dx=-\frac{1}{n}[\cos(nx)e^{-inx}]_{-\pi}^{\pi}-\int_{-\pi}^{\pi}ie^{-inx}\cos(nx)\,dx=0-i[\frac{e^{-inx}}{-in}]_{-\pi}^{\pi}\cos(nx)=0.$ I'd really love to understand what is wrong with that. Thank you very much.
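A quick numerical check (my own addition, taking the integrand to be $\sin(nx)e^{-inx}$, which is what the posted computation actually manipulates) confirms the value $-\pi i$ rather than $0$; the suspicious final step above, where $\cos(nx)$ is pulled outside the integral, is the natural place to look for the error.

```python
import cmath, math

def integral(n, steps=20000):
    """Trapezoidal approximation of the integral of sin(nx) e^{-inx}
    over [-pi, pi]; very accurate here since the integrand is periodic."""
    a, b = -math.pi, math.pi
    h = (b - a) / steps
    total = 0.0 + 0.0j
    for k in range(steps + 1):
        x = a + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.sin(n * x) * cmath.exp(-1j * n * x)
    return total * h

for n in (1, 2, 3):
    assert abs(integral(n) - (-1j * math.pi)) < 1e-6
```

Analytically, writing $\sin(nx) = \frac{e^{inx}-e^{-inx}}{2i}$ gives an integrand of $\frac{1 - e^{-2inx}}{2i}$, whose integral over a full period is $\frac{2\pi}{2i} = -i\pi$, matching the numerics.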
C.~D.~A.~Evans and J. D. Hamkins, “Transfinite game values in infinite chess,” Integers, vol. 14, p. Paper No.~G2, 36, 2014. @ARTICLE{EvansHamkins2014:TransfiniteGameValuesInInfiniteChess, AUTHOR = {C.~D.~A.~Evans and Joel David Hamkins}, TITLE = {Transfinite game values in infinite chess}, JOURNAL = {Integers}, FJOURNAL = {Integers Electronic Journal of Combinatorial Number Theory}, YEAR = {2014}, volume = {14}, number = {}, pages = {Paper No.~G2, 36}, month = {}, note = {}, eprint = {1302.4377}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://jdh.hamkins.org/game-values-in-infinite-chess}, ISSN = {1553-1732}, MRCLASS = {03Exx (91A46)}, MRNUMBER = {3225916}, abstract = {}, keywords = {}, source = {}, } In this article, C. D. A. Evans and I investigate the transfinite game values arising in infinite chess, providing both upper and lower bounds on the supremum of these values—the omega one of chess—denoted by $\omega_1^{\mathfrak{Ch}}$ in the context of finite positions and by $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ in the context of all positions, including those with infinitely many pieces. For lower bounds, we present specific positions with transfinite game values of $\omega$, $\omega^2$, $\omega^2\cdot k$ and $\omega^3$. By embedding trees into chess, we show that there is a computable infinite chess position that is a win for white if the players are required to play according to a deterministic computable strategy, but which is a draw without that restriction. Finally, we prove that every countable ordinal arises as the game value of a position in infinite three-dimensional chess, and consequently the omega one of infinite three-dimensional chess is as large as it can be, namely, true $\omega_1$. The article is 38 pages, with 18 figures detailing many interesting positions of infinite chess. My co-author Cory Evans holds the chess title of U.S. National Master. Let’s display here a few of the interesting positions. 
First, a simple new position with value $\omega$. The main line of play here calls for black to move his center rook up to arbitrary height, and then white slowly rolls the king into the rook for checkmate. For example, 1…Re10 2.Rf5+ Ke6 3.Qd5+ Ke7 4.Rf7+ Ke8 5.Qd7+ Ke9 6.Rf9#. By playing the rook higher on the first move, black can force this main line of play to have any desired finite length. We have further variations with more black rooks and a white king. Next, consider an infinite position with value $\omega^2$. The central black rook, currently attacked by a pawn, may be moved up by black arbitrarily high, where it will be captured by a white pawn, which opens a hole in the pawn column. White may systematically advance pawns below this hole in order eventually to free up the pieces at the bottom that release the mating material. But with each white pawn advance, black embarks on an arbitrarily long round of harassing checks on the white king. Here is a similar position with value $\omega^2$, which we call, “releasing the hordes”, since white aims ultimately to open the portcullis and release the queens into the mating chamber at right. The black rook ascends to arbitrary height, and white aims to advance pawns, but black embarks on arbitrarily long harassing check campaigns to delay each white pawn advance. Next, by iterating this idea, we produce a position with value $\omega^2\cdot 4$. We have in effect a series of four such rook towers, where each one must be completed before the next is activated, using the “lock and key” concept explained in the paper. We can arrange the towers so that black may in effect choose how many rook towers come into play, and thus he can play to a position with value $\omega^2\cdot k$ for any desired $k$, making the position overall have value $\omega^3$.
Another interesting thing we noticed is that there is a computable position in infinite chess, such that in the category of computable play, it is a win for white—white has a computable strategy defeating any computable strategy of black—but in the category of arbitrary play, both players have a drawing strategy. Thus, our judgment of whether a position is a win or a draw depends on whether we insist that players play according to a deterministic computable procedure or not. The basic idea for this is to have a computable tree with no computable infinite branch. When black plays computably, he will inevitably be trapped in a dead-end. In the paper, we conjecture that the omega one of chess is as large as it can possibly be, namely, the Church-Kleene ordinal $\omega_1^{CK}$ in the context of finite positions, and true $\omega_1$ in the context of all positions. Our idea for proving this conjecture, unfortunately, does not quite fit into two-dimensional chess geometry, but we were able to make the idea work in infinite **three-dimensional** chess. In the last section of the article, we prove: Theorem. Every countable ordinal arises as the game value of an infinite position of infinite three-dimensional chess. Thus, the omega one of infinite three dimensional chess is as large as it could possibly be, true $\omega_1$. Here is a part of the position. Imagine the layers stacked atop each other, with $\alpha$ at the bottom and further layers below and above. The black king had entered at $\alpha$e4, was checked from below and has just moved to $\beta$e5. Pushing a pawn with check, white continues with 1.$\alpha$e4+ K$\gamma$e6 2.$\beta$e5+ K$\delta$e7 3.$\gamma$e6+ K$\epsilon$e8 4.$\delta$e7+, forcing black to climb the stairs (the pawn advance 1.$\alpha$e4+ was protected by a corresponding pawn below, since black had just been checked at $\alpha$e4). 
The overall argument works in higher dimensional chess, as well as three-dimensional chess that has only finite extent in the third dimension $\mathbb{Z}\times\mathbb{Z}\times k$, for $k$ above 25 or so.
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review) @ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, } Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open. The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers. Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic? By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic.
The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models. We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$. Question. Which topological spaces support a topological model of arithmetic? In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic. Let me state the main theorem and briefly sketch the proof. Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$. Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output. 
\begin{equation*}\small\begin{array}{rcr} \cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*} This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representation ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$. Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$.
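The school-child’s observation is just the statement that arithmetic commutes with reduction modulo $2^k$ (or $10^k$ in decimal): the final $k$ digits of $x+y$ and $x\cdot y$ are determined by the final $k$ digits of $x$ and $y$. Here is a small Python sketch of this fact, purely illustrative and not code from the paper:

```python
# The final k binary digits of n are its residue mod 2^k, so continuity of
# addition and multiplication in the final-digits topology amounts to:
# (x + y) mod 2^k and (x * y) mod 2^k depend only on x mod 2^k and y mod 2^k.

def last_digits(n, k):
    """Residue encoding the final k binary digits of n."""
    return n % (1 << k)

def locally_determined(x, y, k):
    m = 1 << k
    sum_ok = last_digits(x + y, k) == (last_digits(x, k) + last_digits(y, k)) % m
    prod_ok = last_digits(x * y, k) == (last_digits(x, k) * last_digits(y, k)) % m
    return sum_ok and prod_ok

# The decimal worked example from the text behaves the same way in base 10:
assert str(1261 + 153).endswith("414")
assert str(1261 * 153).endswith("933")
```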
The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired. But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired. Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). 
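The $2$-adic distance described above is easy to compute directly; the following is an illustrative sketch (the function name is mine):

```python
def two_adic_distance(x, y):
    """d(x, y) = 2^(-k), where k is the largest power of 2 dividing x - y."""
    if x == y:
        return 0.0
    d = abs(x - y)
    k = 0
    while d % 2 == 0:
        d //= 2
        k += 1
    return 2.0 ** -k

# 6 (...110) and 14 (...1110) share their final three binary digits,
# so they are 2-adically close: their difference 8 is divisible by 2^3.
assert two_adic_distance(6, 14) == 2 ** -3
assert two_adic_distance(3, 4) == 1.0   # final digits already disagree
```

One can also check that this distance satisfies the ultrametric inequality $d(x,z)\leq\max(d(x,y),d(y,z))$, which is stronger than the triangle inequality.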
Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order. The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$. We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$.
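For the curious, the final-digits order can be implemented straight from this definition; an illustrative sketch, not code from the paper:

```python
from functools import cmp_to_key

def final_digits(n):
    """Binary digits of n read from the right (no leading zeros; 0 is the
    empty sequence)."""
    return [int(b) for b in reversed(bin(n)[2:])] if n > 0 else []

def fd_cmp(n, m):
    """The final-digits order: -1 if n comes before m, 1 if after, 0 if equal."""
    a, b = final_digits(n), final_digits(m)
    for x, y in zip(a, b):
        if x != y:
            return -1 if x < y else 1   # 0 before 1 at the first disagreement
    if len(a) == len(b):
        return 0
    longer, sign = (a, -1) if len(a) > len(b) else (b, 1)
    # the longer number is lower if its next digit is 0, higher if it is 1
    return sign if longer[min(len(a), len(b))] == 0 else -sign

# evens on the left, odds on the right, 0 directly in the middle:
assert sorted(range(8), key=cmp_to_key(fd_cmp)) == [4, 2, 6, 0, 5, 1, 3, 7]
```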
Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$ The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$. Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable. The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. 
Even the successor function $x\mapsto x+1$ is not continuous with respect to this order. Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order. Go to the article to read more.
The Parallax View

In the previous post of this series unveiling the relationship between UFO sightings and population, we crossed the threshold of normality underpinning linear models to construct a generalised linear model based on the more theoretically satisfying Poisson distribution. On inspection, however, this model revealed itself to be less well suited to the data than we had, in our tragic ignorance, hoped. While it appeared, on visual inspection, to capture some features of the data, the posterior predictive density plot demonstrated that it still fell short of addressing the subtleties of the original. In this post, we will seek to overcome this sad lack in two ways: firstly, we will subject our models to pitiless mathematical scrutiny to assess their ability to describe the data; secondly, with our eyes irrevocably opened to these techniques, we will construct an ever more complex armillary with which to approach the unknowable truth.

Critical Omissions of Information

Our previous post showed how the fit of the Poisson model to the data differs from that of the simple Gaussian linear model. When presented with a grim array of potential models, however, it is crucial to have reliable and quantitative mechanisms to select amongst them. The eldritch procedure most suited to this purpose, model selection, draws in our framework on information criteria that express the relative effectiveness of models at creating sad mockeries of the original data. The original and most well-known such criterion is the Akaike Information Criterion, which has, in turn, spawned a multitude of successors applicable in different situations and with different properties. Here, we will make use of Leave-One-Out Cross-Validation (LOO-CV) 1 as the most applicable to the style of model and set of techniques applied here.
It is important to reiterate that these approaches do not speak to an absolute underlying truth; information criteria allow us to choose between models, assessing which has most closely assimilated the madness and chaos of the data. For LOO-CV, this results in an expected log predictive density (elpd) for each model. The model with the highest elpd is the least-warped mirror of reality amongst those we subject to scrutiny. There are many fragile subtleties to model selection, of which we will mention only two here. Firstly, in general, the greater the number of predictors or variables incorporated into a model, the more closely it will be able to mimic the original data. This is problematic, in that a model can become overfit to the original data and thus be unable to represent previously unseen data accurately — it learns to mimic the form of the observed data at the expense of uncovering its underlying reality. The LOO-CV technique avoids this trap by, in effect, withholding data from the model to assess its ability to make accurate inferences on previously unseen data. The second consideration in model selection is that the information criteria scores of models, such as the elpd in LOO-CV, are subject to standard error in their assessment; the score itself is not a perfect metric of model performance, but a cunning approximation. As such, we will only consider one model to have outperformed its competitors if the difference in their relative elpd is several times greater than this standard error. With this understanding in hand, we can now ruthlessly quantify the effectiveness of the Gaussian linear model against the Poisson generalised linear model.

Gaussian vs. the Poisson

The original model presented before our subsequent descent into horror was a simple linear Gaussian, produced through use of ggplot2’s geom_smooth function.
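The selection rule just described, preferring one model over another only when the elpd difference is several times its standard error, can be written down explicitly. This is an illustrative Python sketch, not the loo package itself, and the threshold of four standard errors is an assumption of mine rather than a rule fixed by the post:

```python
def prefer_second_model(elpd_diff, se, k=4.0):
    """loo-style comparison: a positive elpd_diff favours the second model,
    a negative one the first. Return None when the difference is not clearly
    larger than its standard error (the threshold k is a judgment call)."""
    if abs(elpd_diff) < k * se:
        return None          # no clear winner
    return elpd_diff > 0     # True: second model wins; False: first model wins
```

Applied to the comparisons reported in this post, this rule reproduces each of the conclusions drawn from the compare output.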
To compare this meaningfully against the Poisson model of the previous post, we must now recreate this model using the now hideously familiar tools of Bayesian modelling with Stan. With both models straining in their different directions towards the light, we apply LOO-CV to assess their effectiveness at predicting the data.

> compare( loo_normal, loo_poisson )
elpd_diff        se
  -8576.1     712.5

The information criterion shows that the complexity of the Poisson model does not, in fact, produce a more effective model than the false serenity of the Gaussian 2. The negative elpd_diff of the compare function supports the first of the two models, and the magnitude, over twelve times greater than the standard error, leaves little doubt that the difference is significant. We must, it seems, look further. With these techniques for selecting between models in hand, then, we can move on to constructing ever more complex attempts to dispel the darkness.

Trials without End

The Poisson distribution, whilst appropriate for many forms of count data, suffers from fundamental limits to its understanding. The single parameter of the Poisson, \(\lambda\), enforces that the mean and variance of the data are equal. When such comforting falsehoods wither in the pale light of reality, we must move beyond the gentle chains in which the Poisson binds us. The next horrific evolution, then, is the negative binomial distribution, which similarly speaks to count data, but presents a dispersion parameter (\(\phi\)) that allows the variance to exceed the mean 3.
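The constraint just described can be seen in a short simulation: the Poisson pins its variance to its mean, while the negative binomial's dispersion parameter lets the variance exceed it. This is a numpy sketch with arbitrary values, mapping the mean/dispersion pair \((\mu, \phi)\) onto numpy's \((n, p)\) parameterisation:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, phi = 10.0, 2.0   # arbitrary values, for illustration only

poisson = rng.poisson(mu, size=200_000)

# numpy parameterises the negative binomial as (n, p); for mean mu and
# dispersion phi, take n = phi and p = phi / (phi + mu), which gives
# Var = mu + mu**2 / phi.
negbin = rng.negative_binomial(phi, phi / (phi + mu), size=200_000)

print(f"Poisson var/mean: {poisson.var() / poisson.mean():.2f}")  # close to 1
print(f"NegBin  var/mean: {negbin.var() / negbin.mean():.2f}")    # close to 1 + mu/phi = 6
```

The variance-to-mean ratio of the negative binomial is \(1 + \mu/\phi\), which the simulation recovers; as \(\phi \to \infty\) the distribution collapses back to the Poisson.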
With our arcane theoretical library suitably expanded, we can now excise the still-beating Poisson heart of our earlier generalised linear model and transplant in the more complex machinery of the negative binomial: $$\begin{eqnarray} y &\sim& \mathbf{NegBinomial}(\mu, \phi)\\ \log(\mu) &=& \alpha + \beta x\\ \alpha &\sim& \mathcal{N}(0, 1)\\ \beta &\sim& \mathcal{N}(0, 1)\\ \phi &\sim& \mathbf{HalfCauchy}(2) \end{eqnarray}$$ As with the Poisson, our negative binomial generalised linear model employs a log link function to transform the linear predictor. The Stan code for this model is given below. With this model fit, we can compare its whispered falsehoods against both the original linear Gaussian model and the Poisson GLM:

> compare( loo_poisson, loo_negbinom )
elpd_diff        se
   8880.8     721.9

With the first comparison, it is clear that the sinuous flexibility offered by the dispersion parameter, \(\phi\), of the negative binomial allows that model to mould itself much more effectively to the data than the Poisson. The elpd_diff score is positive, indicating that the second of the two compared models is favoured; the difference is over twelve times the standard error, giving us confidence that the negative binomial model is meaningfully more effective than the Poisson. Whilst superior to the Poisson, does this adaptive capacity allow the negative binomial model to render the naïve Gaussian linear model obsolete?

> compare( loo_normal, loo_negbinom )
elpd_diff        se
    304.7      30.9

The negative binomial model subsumes the Gaussian with little effort. The elpd_diff is almost ten times the standard error in favour of the negative binomial GLM, giving us confidence in choosing it. From here on, we will rely on the negative binomial as the core of our schemes.

Overlapping Realities

The improvements we have seen with the negative binomial model allow us to discard the Gaussian and Poisson models with confidence.
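To make the generative model concrete, here is a sketch of simulating data from a negative binomial GLM with a log link. This is illustrative Python rather than the post's Stan code; the parameter values are invented, and the \((\mu, \phi)\) pair is mapped onto numpy's \((n, p)\) parameterisation as before:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, beta, phi = 0.5, 0.8, 2.0     # assumed values, purely for illustration
x = rng.normal(size=5_000)

mu = np.exp(alpha + beta * x)        # log link: log(mu) = alpha + beta * x
y = rng.negative_binomial(phi, phi / (phi + mu))

# With beta > 0, larger x should produce larger counts on average.
assert y[x > 0].mean() > y[x < 0].mean()
```

Fitting runs in the opposite direction, of course: Stan explores the posterior over \(\alpha\), \(\beta\) and \(\phi\) given the observed \(x\) and \(y\).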
It is not, however, sufficient to fill the gaping void induced by our belief that the sightings of abnormal aerial phenomena in differing US states vary differently with their human population. To address this question, we must ascertain whether allowing our models to unpick the individual influence of states will improve their predictive ability. This, in turn, will lead us into the gnostic insanity of hierarchical models, in which we group predictors in our models to account for their shadowy underlying structures.

Limpid Pools

The first step on this path is to allow part of the linear function underpinning our model, specifically the intercept value, \(\alpha\), to vary between different US states. In a simple linear model, this causes the line of best fit for each state to meet the y-axis at a different point, whilst maintaining a constant slope for all states. In such a model, the result is a set of parallel lines of fit, rather than a single global truth. This varying intercept can describe a range of possible phenomena for which the rate of change remains constant, but the baseline value varies. In such hierarchical models we employ a concept known as partial pooling to extract as much forbidden knowledge from the reluctant data as possible. A set of entirely separate models, such as the per-state set of linear regressions presented in the first post of this series, employs a no pooling approach: the data of each state is treated separately, with an entirely different model fit to each. This certainly respects the uniqueness of each state, but cannot benefit from insights drawn from the broader range of data we have available, which we may reasonably assume to have some relevance. By contrast, the global Gaussian, Poisson, and negative binomial models presented so far represent complete pooling, in which the entire set of data is considered a formless, protean amalgam without meaningful structure.
This mindless, groping approach causes the unique features of each state to be lost amongst the anarchy and chaos. A partial pooling approach instead builds a global mean intercept value across the dataset, but allows the intercept value for each individual state to deviate according to a governing probability distribution. This both accounts for the individuality of each group of observations, in our case the state, and draws on the accumulated wisdom of the whole. We now construct a partially-pooled varying intercept model, in which the parameters and observations for each US state in our dataset are individually indexed: $$\begin{eqnarray} y &\sim& \mathbf{NegBinomial}(\mu, \phi)\\ \log(\mu) &=& \alpha_i + \beta x\\ \alpha_i &\sim& \mathcal{N}(\mu_\alpha, \sigma_\alpha)\\ \beta &\sim& \mathcal{N}(0, 1)\\ \phi &\sim& \mathbf{HalfCauchy}(2) \end{eqnarray}$$ Note that the intercept parameter, \(\alpha\), in the second line is now indexed by the state, represented here by the subscript \(i\). The slope parameter, \(\beta\), remains constant across all states. This model can be rendered in Stan code as follows: Once the model has twisted itself into the most appropriate form for our data, we can now compare it against our previous completely-pooled model:

> compare( loo_negbinom, loo_negbinom_var_intercept )
elpd_diff        se
    363.2      28.8

Our transcendent journey from the statistical primordial ooze continues: the varying intercept model is favoured over the completely-pooled model by a significant margin.

Sacred Geometry

Now that our minds have apprehended a startling glimpse of the implications of the varying intercept model, it is natural to consider taking a further terrible step and allowing both the slope and the intercept to vary 4.
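The effect of partial pooling can be illustrated outside Stan with the classic precision-weighted shrinkage estimate for a Gaussian hierarchy. This is not our negative binomial posterior, merely the intuition for how a state's estimate is pulled between its own data and the global mean; all values are invented:

```python
def partial_pool(state_mean, n, sigma2, global_mean, tau2):
    """Standard normal-hierarchy shrinkage: a precision-weighted compromise
    between a state's own mean and the global mean. More observations
    (large n) or more between-state spread (large tau2) pulls the estimate
    toward the state's own data."""
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)
    return w * state_mean + (1 - w) * global_mean

# A data-poor state is shrunk hard toward the global mean ...
sparse = partial_pool(state_mean=50.0, n=2, sigma2=100.0, global_mean=10.0, tau2=1.0)
# ... while a data-rich state largely keeps its own estimate.
rich = partial_pool(state_mean=50.0, n=500, sigma2=100.0, global_mean=10.0, tau2=1.0)
assert sparse < rich
```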
With both the intercept and slope of the underlying linear predictor varying, an additional complexity raises its head: can we safely assume that these parameters, the intercept and slope, vary independently of each other, or may there be arcane correlations between them? Do states with a higher intercept also experience a higher slope in general, or is the opposite the case? Without prior knowledge to the contrary, we must allow our model to determine these possible correlations, or we are needlessly throwing away potential information in our model. For a varying slope and intercept model, therefore, we must now include a correlation matrix, \(\Omega\), between the parameters of the linear predictor for each state in our model. This correlation matrix, as with all parameters in a Bayesian framework, must be expressed with a prior distribution from which the model can begin its evaluation of the data. $$\begin{eqnarray} y &\sim& \mathbf{NegBinomial}(\mu, \phi)\\ \log(\mu) &=& \alpha_i + \beta_i x\\ \begin{bmatrix} \alpha_i\\ \beta_i \end{bmatrix} &\sim& \mathcal{N}( \begin{bmatrix} \mu_\alpha\\ \mu_\beta \end{bmatrix}, \Omega )\\ \Omega &\sim& \mathbf{LKJCorr}(2)\\ \phi &\sim& \mathbf{HalfCauchy}(2) \end{eqnarray}$$ This model has grown and gained a somewhat twisted complexity compared with the serene austerity of our earliest linear model. Despite this, each further step in the descent has followed its own perverse logic, and the progression should be clear. The corresponding Stan code follows: The ultimate test of our faith, then, is whether the added complexity of the partially-pooled varying slope, varying intercept model is justified. Once again, we turn to the ruthless judgement of the LOO-CV:

> compare( loo_negbinom_var_intercept, loo_negbinom_var_intercept_slope )
elpd_diff        se
     13.3       2.4

In this final step we can see that our labours in the arcane have been rewarded. The final model is once again a significant improvement over its simpler relatives.
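One detail elided in the display above: a correlation matrix alone does not fix the scale of the state-level variation, so in practice \(\Omega\) is combined with per-parameter scales \(\sigma\) to form the covariance \(\Sigma = \mathrm{diag}(\sigma)\,\Omega\,\mathrm{diag}(\sigma)\) from which the \((\alpha_i, \beta_i)\) pairs are drawn. A numpy sketch with invented values:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([2.0, 0.5])      # assumed mu_alpha, mu_beta
sigma = np.array([1.0, 0.3])   # assumed per-parameter scales
omega = np.array([[1.0, 0.6],
                  [0.6, 1.0]]) # assumed intercept/slope correlation

# Sigma = diag(sigma) @ Omega @ diag(sigma)
cov = np.diag(sigma) @ omega @ np.diag(sigma)
draws = rng.multivariate_normal(mu, cov, size=50_000)

# The intended correlation between intercepts and slopes is recovered.
corr = np.corrcoef(draws[:, 0], draws[:, 1])[0, 1]
assert abs(corr - 0.6) < 0.02
```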
Whilst the potential for deeper and more perfect models never ends, we will settle for now on this.

Mortal Consequences

With our final model built, we can now begin to examine its mortifying implications. We will leave the majority of the subjective analysis for the next, and final, post in this series. For now, however, we can reinforce our quantitative analysis with visual assessment of the posterior predictive distribution output of our final model. In comparison with earlier attempts, the varying intercept and slope model visibly captures the overall shape of the distribution with terrifying ease. As our wary confidence mounts in the mindless automaton we have fashioned, we can now examine its predictive ability on our original data. The purpose of our endeavours is to show whether or not the frequency of extraterrestrial visitations is merely a sad reflection of the number of unsuspecting humans living in each state. After seemingly endless cryptic calculations, our statistical machinery implies that there are deeper mysteries here: allowing the relationship between sightings and the underlying linear predictors to vary by state more perfectly predicts the data. There are clearly other, hidden, factors in play. More than that, however, our final model allows us to quantify these differences. We can now retrieve from the very bowels of our inferential process the per-state distribution of parameters for both the slope and intercept of the linear predictor. It is important to note that, while we are still referring to the \(\alpha\) and \(\beta\) parameters as the intercept and slope, their interpretation is more complex in a generalised linear model with a \(\log\) link function than in the simple linear model. For now, however, this diagram is sufficient to show that the horror visited on innocent lives by our interstellar visitors is not purely arbitrary, but depends at least in part on geographical location.
With this malign inferential process finally complete, we will turn, in the next post, to a trembling interpretation of the model and its dark implications for our collective future.

Model Fitting and Comparison Code Listing

Footnotes
C. D. A. Evans and J. D. Hamkins, “Transfinite game values in infinite chess,” Integers, vol. 14, Paper No. G2, 36 pp., 2014. @ARTICLE{EvansHamkins2014:TransfiniteGameValuesInInfiniteChess, AUTHOR = {C.~D.~A.~Evans and Joel David Hamkins}, TITLE = {Transfinite game values in infinite chess}, JOURNAL = {Integers}, FJOURNAL = {Integers Electronic Journal of Combinatorial Number Theory}, YEAR = {2014}, volume = {14}, number = {}, pages = {Paper No.~G2, 36}, month = {}, note = {}, eprint = {1302.4377}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://jdh.hamkins.org/game-values-in-infinite-chess}, ISSN = {1553-1732}, MRCLASS = {03Exx (91A46)}, MRNUMBER = {3225916}, abstract = {}, keywords = {}, source = {}, } In this article, C. D. A. Evans and I investigate the transfinite game values arising in infinite chess, providing both upper and lower bounds on the supremum of these values—the omega one of chess—denoted by $\omega_1^{\mathfrak{Ch}}$ in the context of finite positions and by $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ in the context of all positions, including those with infinitely many pieces. For lower bounds, we present specific positions with transfinite game values of $\omega$, $\omega^2$, $\omega^2\cdot k$ and $\omega^3$. By embedding trees into chess, we show that there is a computable infinite chess position that is a win for white if the players are required to play according to a deterministic computable strategy, but which is a draw without that restriction. Finally, we prove that every countable ordinal arises as the game value of a position in infinite three-dimensional chess, and consequently the omega one of infinite three-dimensional chess is as large as it can be, namely, true $\omega_1$. The article is 38 pages, with 18 figures detailing many interesting positions of infinite chess. My co-author Cory Evans holds the chess title of U.S. National Master. Let’s display here a few of the interesting positions.
First, a simple new position with value $\omega$. The main line of play here calls for black to move his center rook up to arbitrary height, and then white slowly rolls the king into the rook for checkmate. For example, 1…Re10 2.Rf5+ Ke6 3.Qd5+ Ke7 4.Rf7+ Ke8 5.Qd7+ Ke9 6.Rf9#. By playing the rook higher on the first move, black can force this main line of play to have any desired finite length. We have further variations with more black rooks and a white king. Next, consider an infinite position with value $\omega^2$. The central black rook, currently attacked by a pawn, may be moved up by black arbitrarily high, where it will be captured by a white pawn, which opens a hole in the pawn column. White may systematically advance pawns below this hole in order eventually to free up the pieces at the bottom that release the mating material. But with each white pawn advance, black embarks on an arbitrarily long round of harassing checks on the white king. Here is a similar position with value $\omega^2$, which we call “releasing the hordes”, since white aims ultimately to open the portcullis and release the queens into the mating chamber at right. The black rook ascends to arbitrary height, and white aims to advance pawns, but black embarks on arbitrarily long harassing check campaigns to delay each white pawn advance. Next, by iterating this idea, we produce a position with value $\omega^2\cdot 4$. We have in effect a series of four such rook towers, where each one must be completed before the next is activated, using the “lock and key” concept explained in the paper. We can arrange the towers so that black may in effect choose how many rook towers come into play, and thus he can play to a position with value $\omega^2\cdot k$ for any desired $k$, making the position overall have value $\omega^3$.
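The notion of game value at work in these examples can be illustrated on finite game trees with a simple convention: count the moves remaining until mate, with white minimising and black maximising. This is only a finite caricature of the paper's ordinal game values, but it shows how a position in which black's first move selects among forced lines of unbounded finite length acquires value $\omega$:

```python
def game_value(tree, white_to_move=True):
    """Finite game value of a tree position: a leaf [] is mate (white has
    won); otherwise 1 + the best child value, with white minimising the
    distance to mate and black maximising it."""
    if not tree:
        return 0
    vals = [game_value(t, not white_to_move) for t in tree]
    return 1 + (min(vals) if white_to_move else max(vals))

def line(n):
    """A forced line of n further moves ending in mate."""
    t = []
    for _ in range(n):
        t = [t]
    return t

# Black to move, choosing among forced lines: each choice has some finite
# value n, so a position offering lines of every length has supremum omega.
assert [game_value(line(n), white_to_move=False) for n in (0, 3, 7)] == [0, 3, 7]
```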
Another interesting thing we noticed is that there is a computable position in infinite chess, such that in the category of computable play, it is a win for white—white has a computable strategy defeating any computable strategy of black—but in the category of arbitrary play, both players have a drawing strategy. Thus, our judgment of whether a position is a win or a draw depends on whether we insist that players play according to a deterministic computable procedure or not. The basic idea for this is to have a computable tree with no computable infinite branch. When black plays computably, he will inevitably be trapped in a dead-end. In the paper, we conjecture that the omega one of chess is as large as it can possibly be, namely, the Church-Kleene ordinal $\omega_1^{CK}$ in the context of finite positions, and true $\omega_1$ in the context of all positions. Our idea for proving this conjecture, unfortunately, does not quite fit into two-dimensional chess geometry, but we were able to make the idea work in infinite **three-dimensional** chess. In the last section of the article, we prove: Theorem. Every countable ordinal arises as the game value of an infinite position of infinite three-dimensional chess. Thus, the omega one of infinite three dimensional chess is as large as it could possibly be, true $\omega_1$. Here is a part of the position. Imagine the layers stacked atop each other, with $\alpha$ at the bottom and further layers below and above. The black king had entered at $\alpha$e4, was checked from below and has just moved to $\beta$e5. Pushing a pawn with check, white continues with 1.$\alpha$e4+ K$\gamma$e6 2.$\beta$e5+ K$\delta$e7 3.$\gamma$e6+ K$\epsilon$e8 4.$\delta$e7+, forcing black to climb the stairs (the pawn advance 1.$\alpha$e4+ was protected by a corresponding pawn below, since black had just been checked at $\alpha$e4). 
The overall argument works in higher dimensional chess, as well as three-dimensional chess that has only finite extent in the third dimension $\mathbb{Z}\times\mathbb{Z}\times k$, for $k$ above 25 or so.
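The recursive definition of game value can be experimented with concretely on finite game trees, where every value is a natural number. The sketch below is an illustrative toy, not code from the paper: it follows the convention that a position white has already won has value 0, a white-to-move position has value one more than the minimum over white's moves, and a black-to-move position takes the supremum (here, a maximum) over black's options. The tree encoding is invented for illustration, and genuinely transfinite values such as $\omega$ require infinitely branching trees.

```python
def value(node):
    # A node is ("win",) -- white has already won -- or a pair
    # (player, children) with player "W" or "B" to move.
    if node == ("win",):
        return 0
    player, children = node
    vals = [value(c) for c in children]
    if player == "W":
        return 1 + min(vals)   # white moves toward the quickest win
    return max(vals)           # black to move: supremum over black's options

# Black chooses among lines in which white then needs 1, 2 or 3 moves
# to win, so the value records how long black can delay the mate.
chain = lambda n: ("win",) if n == 0 else ("W", [chain(n - 1)])
pos = ("B", [chain(1), chain(2), chain(3)])
print(value(pos))  # 3: black's best option delays mate longest
```

In a tree where black may choose a chain of any finite length, the same supremum is $\omega$, which is exactly the situation in the first position above.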
NCERT Solutions for Class 12 Physics Chapter 10 Wave Optics is essential study material for students who are seriously preparing for the Class 12 board examination and graduation entrance examinations. The Wave Optics Class 12 NCERT Solutions PDF provides answers to the questions in the textbook, previous years' question papers, and sample papers. It comprises MCQs, exemplar problems, worksheets and exercises that help you understand the topic clearly and score good marks in Class 12 as well as in entrance examinations. Class 12 Physics NCERT Solutions for Chapter 10 Wave Optics The derivation of the laws of refraction and reflection using Huygens' principle is often asked in exams. Brewster's law and the expression for the angular width of the central maximum of the diffraction pattern are also frequently asked. Besides the derivations and the theory, numerical problems from various topics appear in the exams. All the topics in the chapter are covered in the NCERT Solutions provided here. Download the free PDF provided here and, if necessary, take a printout to keep it handy while preparing for exams.
Topics covered in Chapter 10 Wave Optics

Section Number  Topic
10.1    Introduction
10.2    Huygens Principle
10.3    Refraction and Reflection of Plane Waves Using Huygens Principle
10.3.1  Refraction of a Plane Wave
10.3.2  Refraction at a Rarer Medium
10.3.3  Reflection of a Plane Wave by a Plane Surface
10.3.4  The Doppler Effect
10.4    Coherent and Incoherent Addition of Waves
10.5    Interference of Light Waves and Young's Experiment
10.6    Diffraction
10.6.1  The Single Slit
10.6.2  Seeing the Single Slit Diffraction Pattern
10.6.3  Resolving Power of Optical Instruments
10.6.4  The Validity of Ray Optics
10.7    Polarisation
10.7.1  Polarisation by Scattering
10.7.2  Polarisation by Reflection

Class 12 Physics NCERT Solutions Wave Optics Important Questions

Question 1: Monochromatic light having a wavelength of 589 nm from air is incident on a water surface. Find the frequency, wavelength and speed of (i) reflected and (ii) refracted light. [1.33 is the refractive index of water.]

Answer: Wavelength of the incident monochromatic light, \(\lambda\) = 589 nm = \(589\times10^{-9}\) m; speed of light in air, c = \(3\times10^{8}\) m s\(^{-1}\); refractive index of water, \(\mu\) = 1.33.

(i) The ray is reflected back into the same medium through which the incident ray passed. Therefore the wavelength, speed and frequency of the reflected ray are the same as those of the incident ray. The frequency of light follows from the relation \(\nu=\frac{c}{\lambda}=\frac{3\times10^{8}}{589\times10^{-9}}\) = \(5.09\times10^{14}\) Hz. Hence, the speed, frequency and wavelength of the reflected light are \(3\times10^{8}\) m s\(^{-1}\), \(5.09\times10^{14}\) Hz and 589 nm respectively.

(ii) The frequency of travelling light does not depend on the properties of the medium. Therefore, the frequency of the refracted ray in water equals the frequency of the incident or reflected light in air.
Refracted frequency, \(\nu\) = \(5.09\times10^{14}\) Hz. The speed of light in water is related to the refractive index of water as \(v=\frac{c}{\mu}=\frac{3\times10^{8}}{1.33}\) = \(2.26\times10^{8}\) m s\(^{-1}\). The wavelength of light in water follows from \(\lambda=\frac{v}{\nu}=\frac{2.26\times10^{8}}{5.09\times10^{14}}\) = \(444.01\times10^{-9}\) m = 444.01 nm. Hence the speed, frequency and wavelength of the refracted light are \(2.26\times10^{8}\) m s\(^{-1}\), \(5.09\times10^{14}\) Hz and 444.01 nm respectively.

Question 2: Find the shape of the wavefront in each of the following cases: (i) Light diverging from a point source. (ii) Light emerging out of a convex lens when a point source is placed at its focus. (iii) The portion of the wavefront of light from a distant star intercepted by the Earth.

Answer: (i) The wavefront is spherical in the case of light diverging from a point source. (ii) The wavefront is a plane in the case of light emerging out of a convex lens when a point source is placed at its focus. (iii) The wavefront is a plane when a portion of the wavefront of light from a distant star is intercepted by the Earth.

Question 3: (i) The refractive index of glass is 1.5. What is the speed of light in glass? (Speed of light in vacuum is \(3.0\times10^{8}\) m s\(^{-1}\).) (ii) Is the speed of light in glass independent of the colour of light? If not, which of the two colours, red and violet, travels slower in a glass prism?

Answer: (i) Refractive index of glass, \(\mu\) = 1.5; speed of light, c = \(3\times10^{8}\) m s\(^{-1}\). The speed of light in glass is given by the relation \(v=\frac{c}{\mu}=\frac{3\times10^{8}}{1.5}=2\times 10^{8}\) m/s. Hence, the speed of light in glass is \(2\times10^{8}\) m s\(^{-1}\). (ii) The speed of light in glass is not independent of the colour of light. The refractive index of the violet component of white light is greater than the refractive index of the red component.
Hence, the speed of violet light is less than the speed of red light in glass; violet light travels slower than red light in a glass prism.

Question 4: In Young's double-slit experiment, the separation between the slits is 0.28 mm and the screen is placed 1.4 m away. The distance between the central bright fringe and the fourth bright fringe is 1.2 cm. Determine the wavelength of light used in the experiment.

Answer: Distance between the slits and the screen, D = 1.4 m; distance between the slits, d = 0.28 mm = \(0.28\times10^{-3}\) m; distance between the central fringe and the fourth (n = 4) bright fringe, u = 1.2 cm = \(1.2\times10^{-2}\) m. For constructive interference, the distance between the two fringes is \(u=n\lambda\frac{D}{d}\), where n = 4 is the order of the fringe. Hence the wavelength of light used is \(\lambda=\frac{u\,d}{n\,D}=\frac{1.2\times10^{-2}\times 0.28\times 10^{-3}}{4\times 1.4}=6\times10^{-7}\) m = 600 nm. Therefore the wavelength of the light is 600 nm.

Question 5: In Young's double-slit experiment using monochromatic light of wavelength \(\lambda\), the intensity of light at a point on the screen where the path difference is \(\lambda\) is K units. What is the intensity of light at a point where the path difference is \(\frac{\lambda}{3}\)?

Answer: Let \(I_{1}\) and \(I_{2}\) be the intensities of the two light waves. Their resultant intensity is \(I'=I_{1}+I_{2}+2\sqrt{I_{1}\;I_{2}}\cos\phi\), where \(\phi\) is the phase difference between the two waves. For monochromatic light waves, \(I_{1}=I_{2}\), so \(I'=2I_{1}+2I_{1}\cos\phi\). The phase difference is \(\phi=\frac{2\pi}{\lambda}\times\text{path difference}\). Since the path difference is \(\lambda\), the phase difference is \(\phi=2\pi\), and I' = K (given). Therefore \(K=4I_{1}\), i.e. \(I_{1}=\frac{K}{4}\) ... (i)
When the path difference is \(\frac{\lambda}{3}\), the phase difference is \(\phi=\frac{2\pi}{3}\). Hence, the resultant intensity is \(I'=I_{1}+I_{1}+2\sqrt{I_{1}\;I_{1}}\cos\frac{2\pi}{3}=2I_{1}+2I_{1}\left(-\frac{1}{2}\right)=I_{1}\). Since \(I_{1}=\frac{K}{4}\), the intensity of light at a point where the path difference is \(\frac{\lambda}{3}\) is \(\frac{K}{4}\) units.

Question 6: A beam of light used to obtain interference fringes in Young's double-slit experiment contains two wavelengths, 650 nm and 520 nm. (a) Find the distance of the third bright fringe on the screen from the central maximum for the wavelength 650 nm. (b) What is the least distance from the central maximum where the bright fringes due to both wavelengths coincide?

Answer: Wavelength of the first light beam, \(\lambda_{1}\) = 650 nm; wavelength of the second light beam, \(\lambda_{2}\) = 520 nm. Let D be the distance of the screen from the slits and d the distance between the two slits.

(a) The distance of the \(n^{th}\) bright fringe on the screen from the central maximum is \(x=n\lambda_{1}\frac{D}{d}\). For the third bright fringe, n = 3, so \(x=3\times650\frac{D}{d}=1950\frac{D}{d}\) nm.

(b) Let the \(n^{th}\) bright fringe due to wavelength \(\lambda_{2}\) coincide with the \((n-1)^{th}\) bright fringe due to wavelength \(\lambda_{1}\). Equating the conditions for bright fringes, \(n\lambda_{2}=(n-1)\lambda_{1}\), i.e. 520n = 650n - 650, so 130n = 650 and n = 5. Hence, the least distance from the central maximum is \(x=n\lambda_{2}\frac{D}{d}=5\times520\frac{D}{d}=2600\frac{D}{d}\) nm. Note: the values of d and D are not given in the question.

Question 7: In a double-slit experiment, the angular width of a fringe is found to be 0.2° on a screen placed 1 m away. The wavelength of light used is 600 nm. What will be the angular width of the fringe if the entire experimental apparatus is immersed in water?
Take the refractive index of water to be \(\frac{4}{3}\).

Answer: Distance of the screen from the slits, D = 1 m; wavelength of light used, \(\lambda_{1}\) = 600 nm; angular width of the fringe in air, \(\theta_{1}\) = 0.2°; angular width of the fringe in water, \(\theta_{2}\); refractive index of water, \(\mu=\frac{4}{3}\). The refractive index is related to the angular widths as \(\mu=\frac{\theta_{1}}{\theta_{2}}\), so \(\theta_{2}=\frac{3}{4}\theta_{1}=\frac{3}{4}\times0.2°=0.15°\). Therefore, the angular width of the fringe in water reduces to 0.15°.

Question 8: What is the Brewster angle for air-to-glass transition? (Refractive index of glass = 1.5.)

Answer: Refractive index of glass, \(\mu=1.5\). Let the Brewster angle be \(\theta\). The Brewster angle is related to the refractive index as \(\tan\theta=\mu\), so \(\theta=\tan^{-1}(1.5)\) = 56.31°. Therefore, the Brewster angle for air-to-glass transition is 56.31°.

Question 9: Light of wavelength 5000 Å falls on a plane reflecting surface. What are the wavelength and frequency of the reflected light? For what angle of incidence is the reflected ray normal to the incident ray?

Answer: Wavelength of incident light, \(\lambda\) = 5000 Å = \(5000\times10^{-10}\) m; speed of light, c = \(3\times10^{8}\) m s\(^{-1}\). The frequency of the incident light is \(\nu=\frac{c}{\lambda}=\frac{3\times 10^{8}}{5000\times 10^{-10}}\) = \(6\times10^{14}\) Hz. The wavelength and frequency of the reflected light are the same as those of the incident light. Hence, the wavelength of the reflected light is 5000 Å and its frequency is \(6\times10^{14}\) Hz. When the reflected ray is normal to the incident ray, the sum of the angle of incidence \(\angle i\) and the angle of reflection \(\angle r\) is 90°. According to the law of reflection, the angle of incidence is always equal to the angle of reflection. Hence, we can write the sum as \(\angle i+\angle r\) = 90°, i.e.
\(\angle i+\angle i\) = 90°. Hence, \(\angle i=\frac{90°}{2}\) = 45°. Therefore, the angle of incidence for the given condition is 45°.

Question 10: Estimate the distance for which ray optics is a good approximation for an aperture of 4 mm and wavelength 400 nm.

Answer: The Fresnel distance \(Z_{F}\) is the distance up to which ray optics is a good approximation. It is given by the relation \(Z_{F}=\frac{a^{2}}{\lambda}\), where the aperture width is a = 4 mm = \(4\times10^{-3}\) m and the wavelength of light is \(\lambda\) = 400 nm = \(400\times10^{-9}\) m. Hence \(Z_{F}=\frac{(4\times10^{-3})^{2}}{400\times10^{-9}}\) = 40 m. Therefore, the distance for which ray optics is a good approximation is 40 m.
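As a quick cross-check of the arithmetic in Questions 1, 8 and 10 above, here is a short standalone sketch (note that the textbook's 444.01 nm for the refracted wavelength comes from rounding intermediate values; computing \(\lambda/\mu\) = 589 nm/1.33 directly gives about 442.9 nm):

```python
import math

# Question 1: 589 nm light entering water (mu = 1.33)
c = 3e8                      # speed of light in air, m/s
nu = c / 589e-9              # frequency (unchanged on refraction), Hz
v_water = c / 1.33           # speed of light in water, m/s
lam_water = v_water / nu     # wavelength in water, m (equals 589e-9 / 1.33)

# Question 8: Brewster angle, tan(theta) = mu
brewster = math.degrees(math.atan(1.5))

# Question 10: Fresnel distance Z_F = a^2 / lambda
z_f = (4e-3) ** 2 / 400e-9

print(nu, lam_water, brewster, z_f)
```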
I would like to prove that $\sum_{n=1}^\infty \frac{x^2}{(1+x^2)^n}$ converges for $x\in[-1,1]$, but not uniformly. Pointwise convergence is easy. Define $S_n(x)=\sum_{k=1}^n \frac{x^2}{(1+x^2)^k}$. For every $x\in[-1,1]\setminus\{0\}$, $\sum_{n=1}^\infty\frac{x^2}{(1+x^2)^n}$ is a geometric series with ratio $\frac{1}{1+x^2}<1$, so it converges; summing it gives $x^2\cdot\frac{1/(1+x^2)}{1-1/(1+x^2)}=1$. And $S_n(0)=0$ for all $n\in\mathbb{N}^*$. Uniform convergence is proving much trickier. I've been trying to show that for some $x$ that is dependent on $n$, $S_n(x)$ is constant, which seems to be a commonly used trick to show something doesn't uniformly converge. I plotted part of the sum, and it looks like around $x=.05$, for sufficiently large $n$, $S_n(x)$ jumps from 0 to 1. Though I have no idea where I can pull this number out of the series.
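A quick numerical experiment supports the non-uniformity. Since the pointwise limit is 1 for $x\neq 0$ and 0 at $x=0$, uniform convergence would force $\sup_x |S_n(x)-1|\to 0$ on $x\neq 0$; but along the $n$-dependent choice $x=1/\sqrt{n}$ (a standard choice for geometric tails like this, using the closed form $S_n(x)=1-(1+x^2)^{-n}$), the partial sums stay near $1-1/e$. A standalone sketch:

```python
import math

def S(n, x):
    # Partial sum S_n(x) = sum_{k=1}^n x^2 / (1+x^2)^k
    return sum(x * x / (1 + x * x) ** k for k in range(1, n + 1))

n = 1000
x = 1 / math.sqrt(n)
gap = 1 - S(n, x)          # distance from the pointwise limit f(x) = 1
print(gap)                 # stays near 1/e, so sup|S_n - f| does not go to 0
```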
Introduction Built at the Jet Propulsion Laboratory by an Investigation Definition Team (IDT) headed by John Trauger, WFPC2 was the replacement for the first Wide Field and Planetary Camera (WF/PC-1) and includes built-in corrections for the spherical aberration of the HST Optical Telescope Assembly (OTA). The WFPC2 was installed in HST during the First Servicing Mission in December 1993 and removed during Servicing Mission 4 in 2009. Early IDT report of the WFPC2 on-orbit performance: Trauger et al. (1994, ApJ, 435, L3). A more detailed assessment of its capabilities: Holtzman et al. (1995, PASP, 107, page 156 and page 1065). The WFPC2 was used to obtain high-resolution images of astronomical objects over a relatively wide field of view and a broad range of wavelengths (1150 to 11,000 Å). WFPC2 data can be found on the MAST Archive. ISRs Recent WFPC2 ISRs: 2010-04: The Dependence of WFPC2 Charge Transfer Efficiency on Background Illumination; 2010-01: WFPC2 Standard Star CTE. Optical Configuration While it was in operation, the WFPC2 field of view was located at the center of the HST focal plane. The central portion of the f/24 beam coming from the OTA would be intercepted by a steerable pick-off mirror attached to the WFPC2 and diverted through an open port entry into the instrument. The beam would then pass through a shutter and interposable filters. An assembly of 12 filter wheels contained a total of 48 spectral elements and polarizers. The light would then fall onto a shallow-angle, four-faceted pyramid, located at the aberrated OTA focus. Each face of the pyramid was a concave spherical surface, dividing the OTA image of the sky into four parts. After leaving the pyramid, each quarter of the full field of view would then be relayed by an optically flat mirror to a Cassegrain relay that would form a second field image on a charge-coupled device (CCD) of 800 x 800 pixels.
Each of these four detectors was housed in a cell sealed by a MgF2 window, which is figured to serve as a field flattener. The aberrated HST wavefront was corrected by introducing an equal but opposite error in each of the four Cassegrain relays. An image of the HST primary mirror would then be formed on the secondary mirrors in the Cassegrain relays. The spherical aberration from the telescope's primary mirror would be corrected on these secondary mirrors, which were extremely aspheric; the resulting point spread function was quite close to that originally expected for WF/PC-1.

Field of View The U2,U3 axes were defined by the "nominal" Optical Telescope Assembly (OTA) axis, which was near the center of the WFPC2 FOV. The readout direction was marked with an arrow near the start of the first row in each CCD; note that it rotated 90 degrees between successive chips. The x,y arrows mark the coordinate axes for any POS TARG commands that may have been specified in the proposal. POS TARG, an optional special requirement in HST observing proposals, places the target at an offset (in arcsec) from the specified aperture.

Camera Configurations

Camera     Pixels      Field of View   Scale               f/ratio
PC (PC1)   800 x 800   36" x 36"       0.0455" per pixel   28.3
WF2, 3, 4  800 x 800   80" x 80"       0.0996" per pixel   12.9

A Note about HST File Formats Data from WFPC2 are made available to observers as files in Multi-Extension FITS (MEF) format, which is directly readable by most PyRAF/IRAF/STSDAS tasks. All WFPC2 data are now available in either waivered FITS or MEF formats. The user may specify either format when retrieving the data from the HDA. WFPC2 data, in either Generic Edited Information Set (GEIS) or MEF formats, can be fully processed with STSDAS tasks. The figure below provides a physical representation of the typical data format.
Resources

Charge Traps There are about 30 pixels in WFPC2 that are "charge traps", which do not transfer charge efficiently during readout, producing artifacts that are often quite noticeable. Typically, charge is delayed into successive pixels, producing a streak above the defective pixel. In the worst cases, the entire column above the pixel can be rendered useless. On blank sky, these traps will tend to produce a dark streak. However, when a bright object or cosmic ray is read through them, a bright streak will be produced. Here, we show streaks (a) in the background sky, and (b) in stellar images produced by charge traps in the WFPC2. Individual traps have been cataloged and their identifying numbers are shown.

Warm Pixels and Annealing Decontaminations (anneals), during which the instrument is warmed up to about 22 °C for a period of six hours, were performed about once per month. These procedures are required in order to remove the UV-blocking contaminants which gradually build up on the CCD windows (thereby restoring the UV throughput) as well as to fix warm pixels. Examples of warm pixels are presented in the figure below.

Calibration accuracies:

Procedure         Estimated Accuracy        Notes
Bias subtraction  0.1 DN rms                Unless bias jump is present
Dark subtraction  0.1 DN/hr rms             Error larger for warm pixels; absolute error uncertain because of dark glow
Flat fielding     <1% rms (large scale)     Visible, near UV
                  0.3% rms (small scale)    Visible, near UV
                                            ~10% for F160BW; however, significant noise reduction achieved with use of correction flats

Relative Photometry:

Procedure                             Estimated Accuracy                               Notes
Residuals in CTE correction           <3% for the majority (~90%) of cases; up to
                                      10% for extreme cases (e.g., very low
                                      backgrounds)
Long vs. short anomaly (uncorrected)  <5%                                              Magnitude errors <1% for well-exposed stars but may be larger for fainter stars. Some studies have failed to confirm the effect.
(see Chapter 5 of the IHB for more details)

Procedure                 Estimated Accuracy                          Notes
Aperture correction       4% rms focus dependence (1-pixel aperture)  Can (should) be determined from the data
                          <1% focus dependence (>5-pixel aperture)    Can (should) be determined from the data
                          1-2% field dependence (1-pixel aperture)    Can (should) be determined from the data
Contamination correction  3% rms max (28 days after decon)            F160BW
                          1% rms max (28 days after decon)            Filters bluer than F555W
Background determination  0.1 DN/pixel (background >10 DN/pixel)      May be difficult to exceed, regardless of image S/N
Pixel centering           <1%

Absolute Photometry:

Procedure    Estimated Accuracy
Sensitivity  <2% rms for standard photometric filters
             2% rms for broad and intermediate filters in the visible
             <5% rms for narrow-band filters in the visible
             2-8% rms for UV filters

Astrometry:

Procedure  Estimated Accuracy                                     Notes
Relative   0.005" rms (after geometric and 34th-row corrections)  Same chip
           0.1" (estimated)                                       Across chips
Absolute   1" rms (estimated)

Photometric Systems Used for WFPC2 Data The WFPC2 flight system is defined so that stars of color zero in the Johnson-Cousins UBVRI system have color zero between any pair of WFPC2 filters and have the same magnitude in V and F555W. This system was established by Holtzman et al. (1995b). The zeropoints in the WFPC2 synthetic system, as defined in Holtzman et al. (1995b), are determined so that the magnitude of Vega, when observed through the appropriate WFPC2 filter, is identical to the magnitude Vega has in the closest equivalent filter in the Johnson-Cousins system.

\(m_{AB} = -48.60-2.5\log f_\nu \)

\(m_{ST} = -21.10-2.5\log f_\lambda\)

Photometric Corrections A number of corrections must be made to WFPC2 data to obtain the best possible photometry. Some of these, such as the corrections for UV throughput variability, are time dependent, and others, such as the correction for the geometric distortion of WFPC2 optics, are position dependent.
Finally, some general corrections, such as the aperture correction, are needed as part of the analysis process. Here we provide examples of factors affecting photometric corrections: the cool down on April 23, 1994; PSF variations; the 34th row defect; gain variation; pixel centering; and possible variation in Methane Quad filter transmission.

Polarimetry WFPC2 has a polarizer filter which can be used for wide-field polarimetric imaging from about 200 through 700 nm. This filter is a quad, meaning that it consists of four panes, each with the polarization angle oriented in a different direction, in steps of 45°. The panes are aligned with the edges of the pyramid, thus each pane corresponds to a chip. However, because the filters are at some distance from the focal plane, there is significant vignetting and cross-talk at the edges of each chip. The area free from vignetting and cross-talk is about 60" square in each WF chip, and 15" square in the PC. It is also possible to use the polarizer in a partially rotated position. Accurate calibration of WFPC2 polarimetric data is rather complex, due to the design of both the polarizer filter and the instrument itself. WFPC2 has an aluminized pick-off mirror with a 47° angle of incidence, which rotates the polarization angle of the incoming light, as well as introducing a spurious polarization of up to 5%. Thus, both the HST roll angle and the polarization angle must be taken into account. In addition, the polarizer coating on the filter has significant transmission of the perpendicular component, with a strong wavelength dependence.

Astrometry Astrometry with WFPC2 means primarily relative astrometry. The high angular resolution and sensitivity of WFPC2 make it possible, in principle, to measure precise positions of faint features with respect to other reference points in the WFPC2 field of view.
On the other hand, the absolute astrometry that can be obtained from WFPC2 images is limited by the positions of the guide stars, usually known to about 0.5" rms in each coordinate, and by the transformation between the FGS and the WFPC2, which introduces errors of the order of 0.1". Because WFPC2 consists of four physically separate detectors, it is necessary to define a coordinate system that includes all four detectors. For convenience, sky coordinates (right ascension and declination) are often used; in this case, they must be computed and carried to a precision of a few mas, in order to maintain the precision with which the relative positions and scales of the WFPC2 detectors are known. It is important to remember that the coordinates themselves are not known with this accuracy: the absolute accuracy of the positions obtained from WFPC2 images is typically 0.5" rms in each coordinate and is limited primarily by the accuracy of the guide star positions.
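The AB and ST zeropoint definitions quoted above translate directly into code. A minimal sketch, with the conventional units erg s^-1 cm^-2 Hz^-1 for f_nu and erg s^-1 cm^-2 Å^-1 for f_lambda; the 3631 Jy check value is the standard AB flux zero point, not anything specific to WFPC2:

```python
import math

def m_ab(f_nu):
    # AB magnitude from flux density f_nu in erg s^-1 cm^-2 Hz^-1
    return -48.60 - 2.5 * math.log10(f_nu)

def m_st(f_lam):
    # ST magnitude from flux density f_lam in erg s^-1 cm^-2 A^-1
    return -21.10 - 2.5 * math.log10(f_lam)

# A source with f_nu = 3631 Jy = 3.631e-20 erg s^-1 cm^-2 Hz^-1
# sits at (very nearly) m_AB = 0 by construction of the zero point.
print(m_ab(3.631e-20))
```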
So this is how I approached this question. The above equations could be simplified to: $$a = \frac{4(b+c)}{b+c+4}\tag{1}$$ $$b = \frac{10(a+c)}{a+c+10}\tag{2}$$ $$c=\frac{56(a+b)}{a+b+56}\tag{3}$$ From the above, we can deduce that $4 > a$ since $\frac{(b+c)}{b+c+4} < 1$; similarly $10 > b$, $56 > c$, so $a + b + c < 70$. Let $$(a + b + c)k = 70\tag4$$ Now let $$\alpha(b+c) = b+c+4\tag{1'}$$ $$\beta(a+c) = a+c+10\tag{2'}$$ $$\gamma( a+b ) = a+b+56\tag{3'}$$ Now adding the above 3 equations we get: $$2(a+b+c) + 70 = a(\gamma + \beta) + c(\beta + \alpha) + b(\alpha + \gamma) \rightarrow (2 + k)(a+b+c) = a(\gamma + \beta) + c(\beta + \alpha) + b(\alpha + \gamma)$$ Now from the above we see that the coefficients of $a,b,c$ must be equal on both sides, so $$(2 + k) = (\alpha + \beta) = (\beta + \gamma) = (\alpha + \gamma)$$ which implies $\beta = \gamma = \alpha = 1+ \frac{k}{2} = \frac{2 + k}{2}$. Now from $(1)$ and $(1')$ we get $a = \frac{4}{\alpha} = \frac{8}{2+k}$; similarly from $(2),(2')$ and $(3),(3')$ we find $b = \frac{20}{2+k}, c = \frac{112}{2+k}$. Thus from the above we get $a+b+c = \frac{140}{2+k}$, and from $(4)$ we get $\frac{140}{2+k} = \frac{70}{k}$, from which we can derive $k = 2$. Thus we could derive $a = 2, b = 5, c = 28$, but the problem now is that these $a, b, c$ values don't satisfy equations $(1),(2),(3)$ above. Well, so where do I err? And did I take the right approach? Do post the solution about how you solved for $x$.
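One quick way to localize the mistake is to check numerically whether the derived triple satisfies the simplified equations (1)-(3). This sketch is only a consistency check on the values stated in the question, not a solution of the system:

```python
def residuals(a, b, c):
    # Right-hand side minus left-hand side of equations (1)-(3)
    r1 = 4 * (b + c) / (b + c + 4) - a
    r2 = 10 * (a + c) / (a + c + 10) - b
    r3 = 56 * (a + b) / (a + b + 56) - c
    return r1, r2, r3

# The candidate derived in the question; nonzero residuals mean the
# candidate does not actually satisfy the system.
print(residuals(2, 5, 28))
```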
The gamma function is defined as $$\Gamma(x)=\int_0^\infty t^{x-1}e^{-t}dt$$ for $x>0$. Through integration by parts, it can be shown that for $x>0$, $$\Gamma(x)=\frac{1}{x}\Gamma(x+1).$$ Now, my textbook says we can use this definition to define $\Gamma(x)$ for non-integer negative values. I don't understand why. The latter definition was derived by assuming $x>0$. So shouldn't the whole definition not be valid for any $x$ value less than zero? P.S. I have read other mathematical sources and most of them explain things in mathematical terms that are beyond my level. It would be appreciated if things could be kept in relatively simple terms.
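The textbook's idea can be made concrete in a few lines: for a negative non-integer $x$, apply $\Gamma(x)=\frac{1}{x}\Gamma(x+1)$ repeatedly until the argument is positive, where the integral definition applies (here accessed through `math.gamma`). A minimal sketch, assuming we simply raise an error at the poles $x = 0, -1, -2, \ldots$:

```python
import math

def gamma_ext(x):
    # Extend Gamma to negative non-integers via Gamma(x) = Gamma(x+1) / x,
    # recursing until the argument is positive.
    if x > 0:
        return math.gamma(x)       # integral definition is valid here
    if x == int(x):
        raise ValueError("Gamma has poles at non-positive integers")
    return gamma_ext(x + 1) / x

# Gamma(-1/2) = Gamma(1/2) / (-1/2) = -2*sqrt(pi) ~ -3.5449
print(gamma_ext(-0.5))
```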
I am trying to find the material stiffness matrix for linear elasticity for a finite element code, $$\mathbf{\sigma} = \lambda \hspace{1pt} \operatorname{tr}{\left(\mathbf{\epsilon}\right)}\mathbf{I}+ 2\mu\mathbf{\epsilon} \,,$$ where $\mathbf{\sigma},\mathbf{\epsilon}$ are the Cauchy stress and the corresponding conjugate strain respectively, both being second-order tensors, $\mathbf{I}$ is the second-order identity tensor, and $\operatorname{tr}\left(\mathbf{\epsilon} \right)$ is the trace of the tensor. I want to find $$\frac{\partial \mathbf{\sigma}}{\partial\mathbf{\epsilon}}$$ Considering stress and strain as $6 {\times} 1$ column vectors, I started like this: $$ \begin{align} \mathbb{C_{ij}}& =\frac{\partial {\sigma_{i}}}{\partial{\epsilon_{j}}} \\[2.5px] & = \frac{\partial}{\partial \epsilon_{j}} \left(\lambda\epsilon_{i} + 2\mu\epsilon_{i} \right) \\[2.5px] & =\lambda\frac{\partial}{\partial \epsilon_{j}}\epsilon_{i} + 2\mu\frac{\partial}{\partial \epsilon_{j}}\epsilon_{i} \\[2.5px] & =\left( \lambda + 2 \mu \right)\delta_{ij} \end{align} $$ However, $\mathbb{C_{ij}}$ is given differently, as a $6\times6$ matrix, in here. (Note: the above equations are in incremental form in reality, but I just avoided that notation.) Can someone comment on what is wrong in my derivation and which material stiffness matrix is right to use? Edit: After the answer from Chemomechanics, I understood the problem, and rewrite the equation in two cases. Case 1: $i=1,2,3$ $$ \lambda\frac{\partial}{\partial \epsilon_{j}}(\epsilon_1 + \epsilon_2 + \epsilon_3) + 2\mu\frac{\partial}{\partial \epsilon_{j}}\epsilon_{i} =\lambda\frac{\partial}{\partial \epsilon_{j}}(\epsilon_1 + \epsilon_2 + \epsilon_3) + 2\mu\delta_{ij}$$ Can someone comment on how the first part simplifies? The final version is given in the answer; I am trying to reach there.
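For reference, the standard isotropic $6\times6$ matrix of the kind shown in the linked source can be assembled directly. A minimal sketch, assuming Voigt notation with engineering shear strains, i.e. strain ordered as $[\epsilon_{11}, \epsilon_{22}, \epsilon_{33}, \gamma_{12}, \gamma_{23}, \gamma_{31}]$ with $\gamma = 2\epsilon$; the ordering must match whatever convention your element code uses:

```python
def stiffness_voigt(lam, mu):
    # 6x6 isotropic stiffness matrix in Voigt notation.
    C = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        for j in range(3):
            C[i][j] = lam        # lambda couples all normal strains
        C[i][i] += 2 * mu        # 2*mu added on the normal diagonal
        C[i + 3][i + 3] = mu     # shear terms: mu, because gamma = 2*epsilon
    return C

C = stiffness_voigt(lam=1.0, mu=1.0)
for row in C:
    print(row)
```

Note how the $\lambda$ block fills only the upper-left $3\times3$ normal-strain part, which is exactly the contribution of $\lambda\,\partial(\epsilon_1+\epsilon_2+\epsilon_3)/\partial\epsilon_j$ for $i,j=1,2,3$ and zero otherwise.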
Definition:Contour Integral/Complex Definition Let $C$ be a contour in $\C$, that is, a finite sequence $C_1, \ldots, C_n$ of directed smooth curves, where each $C_i$ is parameterized by a smooth path $\gamma_i: \left[{a_i \,.\,.\, b_i}\right] \to \C$. Let $f: \operatorname{Im} \left({C}\right) \to \C$ be a continuous complex function, where $\operatorname{Im} \left({C}\right)$ denotes the image of $C$. The contour integral of $f$ along $C$ is defined by: $\displaystyle \int_C f \left({z}\right) \rd z = \sum_{i \mathop = 1}^n \int_{a_i}^{b_i} f \left({\gamma_i \left({t}\right) }\right) \gamma_i' \left({t}\right) \rd t$ Let $C$ be a closed contour in $\C$. Then the symbol $\displaystyle \oint$ is used for the contour integral on $C$. The definition remains the same: $\displaystyle \oint_C f \left({z}\right) \rd z := \sum_{i \mathop = 1}^n \int_{a_i}^{b_i} f \left({\gamma_i \left({t}\right) }\right) \gamma_i' \left({t}\right) \rd t$ Also known as A contour integral is called a line integral or a curve integral in many texts.
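To see the definition in action numerically, here is a small sketch approximating $\int_{a}^{b} f \left({\gamma \left({t}\right)}\right) \gamma' \left({t}\right) \rd t$ for a single smooth curve by the midpoint rule; the parameterization and step count are illustrative, and a contour made of several curves would simply sum such integrals:

```python
import cmath

def contour_integral(f, gamma, dgamma, a, b, n=20000):
    # Approximate int_a^b f(gamma(t)) * gamma'(t) dt by the midpoint rule.
    h = (b - a) / n
    total = 0j
    for k in range(n):
        t = a + (k + 0.5) * h
        total += f(gamma(t)) * dgamma(t)
    return total * h

# Closed contour: the unit circle gamma(t) = e^{it}, t in [0, 2*pi].
# For f(z) = 1/z the exact value is 2*pi*i.
val = contour_integral(lambda z: 1 / z,
                       lambda t: cmath.exp(1j * t),
                       lambda t: 1j * cmath.exp(1j * t),
                       0.0, 2 * cmath.pi)
print(val)
```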
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2019, Volume 122, Issue 23, pp. 231802 - 231802 We report a measurement of the mass difference between neutral charm-meson eigenstates using a novel approach that enhances sensitivity to this parameter. We... PHYSICS, MULTIDISCIPLINARY | Sensitivity enhancement | Eigenvectors | Parameter sensitivity | Large Hadron Collider | Particle collisions | Charm (particle physics) | Physics - High Energy Physics - Experiment Journal Article PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2019, Volume 122, Issue 23 Journal Article Physical review letters, ISSN 0031-9007, 05/2019, Volume 122, Issue 21, pp. 211803 - 211803 Journal Article The European Physical Journal C, ISSN 1434-6044, 12/2018, Volume 78, Issue 12, pp. 1 - 12 A search is presented for a Higgs-like boson with mass in the range 45 to 195 $\,{{\mathrm {GeV/}}c^2}$ decaying into a muon and a tau lepton. The...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS, PARTICLES & FIELDS | Confidence intervals | Luminosity | Large Hadron Collider | Leptons | Decay | Bosons | Physics - High Energy Physics - Experiment | Regular - Experimental Physics Journal Article Nature Structural and Molecular Biology, ISSN 1545-9993, 2012, Volume 19, Issue 11, pp. 1139 - 1146 Both epigenetic and splicing regulation contribute to tumor progression, but the potential links between these two levels of gene-expression regulation in...
PROTEIN | HISTONE VARIANT MACROH2A | BIOCHEMISTRY & MOLECULAR BIOLOGY | P72 | TRANSCRIPTIONAL COACTIVATOR | CELL BIOLOGY | BREAST-CANCER | COLON-CANCER | LUNG-CANCER | P68 | BIOPHYSICS | CANCER DEVELOPMENT | EXPRESSION | Alternative Splicing - genetics | Humans | DNA Primers - genetics | Reverse Transcriptase Polymerase Chain Reaction | Blotting, Western | Epigenesis, Genetic - physiology | Neoplasm Invasiveness - physiopathology | Animals | Histones - genetics | Chromatin Immunoprecipitation | Cell Line, Tumor | ROC Curve | Mice | DEAD-box RNA Helicases - metabolism | Neoplasm Invasiveness - genetics | Gene Expression Regulation, Neoplastic - physiology | Superoxide Dismutase - metabolism | Physiological aspects | Genetic aspects | Research | RNA | Helicases | Tumors | Epigenetics | Gene expression | Ribonucleic acid--RNA | Molecular biology | Cell adhesion & migration | Index Medicus Journal Article Journal of High Energy Physics, ISSN 1126-6708, 5/2018, Volume 2018, Issue 5, pp.
1 - 17 The CP asymmetry in B − → D s − D 0 and B − → D − D 0 decays is measured using LHCb data corresponding to an integrated luminosity of 3.0 fb−1, collected in pp... B physics | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Asymmetry | Luminosity | Physics - High Energy Physics - Experiment Journal Article PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 01/2019, Volume 122, Issue 1, pp. 011802 - 011802 Journal Article JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 04/2019, Issue 4 The doubly Cabibbo-suppressed decay Xi(+)(c) -> p phi with phi -> K+ K- is observed for the first time, with a statistical significance of more than fifteen... Flavor physics | Branching fraction | Charm physics | Hadron-Hadron scattering (experiments) | TOOL | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment Journal Article 07/2018 Phys. Rev. Lett. 121, 092003 (2018) We report a measurement of the lifetime of the $\Omega_c^0$ baryon using proton-proton collision data at center-of-mass... Physics - High Energy Physics - Experiment Journal Article Nucleic Acids Research, ISSN 0305-1048, 8/2009, Volume 37, Issue 14, pp.
4672 - 4683 Polypyrimidine tract-binding protein (PTB) is a splicing regulator that also plays a positive role in pre-mRNA 3′ end processing when bound upstream of the... SPECIFICITY FACTOR | UPSTREAM SEQUENCE ELEMENT | DOWNSTREAM ELEMENTS | BIOCHEMISTRY & MOLECULAR BIOLOGY | MAMMALIAN POLYADENYLATION SIGNALS | POLY(A) POLYMERASE | MECHANISMS | TRACT BINDING-PROTEIN | HNRNP-H | CLEAVAGE | EFFICIENCY | Humans | Regulatory Sequences, Ribonucleic Acid | Polypyrimidine Tract-Binding Protein - metabolism | RNA, Messenger - metabolism | beta-Globins - genetics | 3' Untranslated Regions - chemistry | Heterogeneous-Nuclear Ribonucleoprotein Group F-H - metabolism | Poly A - metabolism | Base Sequence | Polyadenylation | Conserved Sequence | RNA Precursors - metabolism | RNA 3' End Processing | Index Medicus | RNA SPECIFICITY FACTOR | UPSTREAM SEQUENCE ELEMENT | DOWNSTREAM ELEMENTS | BIOCHEMISTRY & MOLECULAR BIOLOGY | MAMMALIAN POLYADENYLATION SIGNALS | POLY(A) POLYMERASE | MECHANISMS | TRACT BINDING-PROTEIN | HNRNP-H | CLEAVAGE | EFFICIENCY | Humans | Regulatory Sequences, Ribonucleic Acid | Polypyrimidine Tract-Binding Protein - metabolism | RNA, Messenger - metabolism | beta-Globins - genetics | 3' Untranslated Regions - chemistry | Heterogeneous-Nuclear Ribonucleoprotein Group F-H - metabolism | Poly A - metabolism | Base Sequence | Polyadenylation | Conserved Sequence | RNA Precursors - metabolism | RNA 3' End Processing | Index Medicus | RNA Journal Article 31. Correction to Genome-wide analysis of host mRNA translation during hepatitis C virus infection [J Virol., 87, 12, (2013) 6668-6677, DOI:10.1128/JVI.00538-13] Journal of Virology, ISSN 0022-538X, 06/2016, Volume 90, Issue 12, p. 5846 Journal Article 32. 
Measurement of Angular and CP Asymmetries in D-0 -> pi(+) pi(-) mu(+) mu(-) and D-0 -> K+ K- mu(+) mu(-) Decays PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 08/2018, Volume 121, Issue 9 Journal Article 10/2018 The production of $\Upsilon(nS)$ mesons ($n=1,2,3$) in $p$Pb and Pb$p$ collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{NN}}=8.16$ TeV is... Journal Article 06/2018 JHEP 11(2018)048 A measurement of the time-integrated $CP$ asymmetry in $D^0\rightarrow K^0_S K^0_S$ decays is reported. The data correspond to an integrated... Physics - High Energy Physics - Experiment Physics - High Energy Physics - Experiment Journal Article 07/2018 Phys. Lett. B 787 (2018) 124-133 A search for $C\!P$ violation in $\Lambda^0_b \to p K^-$ and $\Lambda^0_b \to p \pi^-$ decays is presented using a sample of... Physics - High Energy Physics - Experiment Physics - High Energy Physics - Experiment Journal Article
The problem asks us to calculate the force on one part due to the other. But the electric field I'm using to do this calculation is the field E of the complete solid non-conducting sphere. So why can we use this E for the northern hemisphere, to calculate the force it experiences from the southern hemisphere? This is a very good question. One way of finding the total electrostatic force on the N hemisphere is to calculate the vector force $F_{ij}=k q_i q_j / r_{ij}^2$ on every charge $q_i$ in the N hemisphere due to every charge $q_j$ in the S hemisphere, then take the vector sum of all the forces $F_{ij}$: $$F_N=\sum_{i \in N} \sum_{j \in S} F_{ij}$$ This double sum (or integral) is very difficult to calculate, for two reasons: the general expression for the force $F_{ij}$ between each pair of charges is a complicated function in the Cartesian co-ordinate system, and even more so in spherical co-ordinates; and each of the 2 charges $q_i, q_j$ in every pair has 3 co-ordinates, so in total the sum ranges over 6 co-ordinates. An alternative method of solution (used in Jorge Daniel's answer) is to sum the total electrical force $F_i$ on each charge $q_i$ in the N hemisphere due to all of the charges in both the S and N hemispheres: $$F_N=\sum_{i \in N} F_i=\sum_{i \in N} \sum_{j \in N,S} F_{ij}$$ Whereas $F_{ij}$ is a very complicated expression, the total force $F_i$ on each charge is very much simpler, because it is known to be radial and depends only on the radial distance of the charge $q_i$ from the centre of the sphere. As you rightly point out, this means that within the expression for $F_i$ we will include the force $F_{ij}$ on charge $q_i$ due to other charges $q_j$ which are also in the N hemisphere. 
However, if charge $q_j$ is also within the N hemisphere ($j \in N$), then when we take the sum over all charges $q_i$ in the N hemisphere ($i \in N$), this sum will include not only $F_{ij}$ but also the equal and opposite force $F_{ji}$ acting on $q_j$ due to charge $q_i$. These 2 contributions to the total force on all particles in the N hemisphere will cancel out because $F_{ij}=-F_{ji}$. On the other hand, if charge $q_j$ is in the S hemisphere ($j \in S$) then the force $F_{ji}$ will not be included in the first summation over $i \in N$. See Find the net force the southern hemisphere of a uniformly charged sphere exerts. To make the above explanation clearer, suppose we have a system of only 4 charges $q_i$ with $i=1 \to 4$. Between every pair of charges there is an electrical force $F_{ij}$ meaning the force that charge $j$ exerts on charge $i$. Suppose that charges $i=1 , 2$ constitute one object (which we call "the N hemisphere") and charges $i=3, 4$ constitute a second object ("the S hemisphere"). Then the total force on the N hemisphere due to the S hemisphere is $$F_N=F_{13}+F_{14}+F_{23}+F_{24}$$ The total forces on charges $q_1, q_2$ due to all other charges in both hemispheres (ie summing over $j \in N, S$) are $$F_1=F_{12}+F_{13}+F_{14}$$ $$F_2=F_{21}+F_{23}+F_{24}$$ If we add the forces on charges $q_1$ and $q_2$ (summing over $i \in N$) we get $$F_N'=F_1+F_2=(F_{12}+F_{21})+F_{13}+F_{14}+F_{23}+F_{24}=F_N$$ because $F_{12}=-F_{21}$. Thus both methods give the same answer.
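The four-charge bookkeeping above is easy to check numerically. Below is a minimal Python sketch (hypothetical point charges at made-up positions, with $k=1$) verifying that summing the total force on each N charge due to all other charges gives the same vector as the direct N-from-S double sum, because the internal N-N pairs cancel by Newton's third law:

```python
def coulomb_force(q1, p1, q2, p2, k=1.0):
    """Force vector on charge 1 (at p1) due to charge 2 (at p2)."""
    d = [a - b for a, b in zip(p1, p2)]          # points from 2 toward 1
    r2 = sum(c * c for c in d)
    r = r2 ** 0.5
    mag = k * q1 * q2 / r2
    return tuple(mag * c / r for c in d)

def vec_sum(forces):
    return tuple(sum(c) for c in zip(*forces))

# Hypothetical point charges (positions and values invented for the demo).
N = [(1.0, (0.3, 0.1, 0.9)), (1.0, (-0.2, 0.4, 0.8))]
S = [(1.0, (0.1, -0.3, -0.9)), (1.0, (-0.5, 0.2, -0.7))]

# Method 1: direct double sum, i in N and j in S.
F_direct = vec_sum([coulomb_force(qi, pi, qj, pj)
                    for qi, pi in N for qj, pj in S])

# Method 2: total force on each N charge from ALL other charges;
# the internal N-N pairs cancel exactly, F_ij = -F_ji.
everyone = N + S
F_total = vec_sum([coulomb_force(qi, pi, qj, pj)
                   for qi, pi in N for qj, pj in everyone if pj != pi])

assert all(abs(a - b) < 1e-12 for a, b in zip(F_direct, F_total))
```

The same cancellation holds for any number of charges, which is why the continuum version of method 2 is legitimate.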
Newspace parameters

Level: \( N \) = \( 8048 = 2^{4} \cdot 503 \)
Weight: \( k \) = \( 2 \)
Character orbit: \([\chi]\) = 8048.a (trivial)

Newform invariants

Self dual: Yes
Analytic conductor: \(64.2636035467\)
Analytic rank: \(0\)
Dimension: \(29\)
Fricke sign: \(-1\)
Sato-Tate group: $\mathrm{SU}(2)$

The dimension is sufficiently large that we do not compute an algebraic \(q\)-expansion, but we have computed the trace expansion. For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below (the first 20 of the 29 embeddings).

Label  \( a_{2} \)  \( a_{3} \)  \( a_{4} \)  \( a_{5} \)  \( a_{6} \)  \( a_{7} \)  \( a_{8} \)  \( a_{9} \)  \( a_{10} \)
1.1  0  −3.11496  0  0.544032  0  4.06617  0  6.70298  0
1.2  0  −2.62135  0  −1.76626  0  −2.03701  0  3.87147  0
1.3  0  −2.38925  0  −0.218073  0  2.23687  0  2.70851  0
1.4  0  −2.22567  0  −3.22615  0  −0.00440441  0  1.95359  0
1.5  0  −2.04161  0  −4.02753  0  −1.43352  0  1.16816  0
1.6  0  −1.71451  0  −1.13020  0  1.00303  0  −0.0604433  0
1.7  0  −1.62590  0  3.78842  0  2.93814  0  −0.356460  0
1.8  0  −1.30647  0  0.939477  0  −2.17911  0  −1.29314  0
1.9  0  −1.29900  0  2.09702  0  3.13973  0  −1.31260  0
1.10  0  −1.28849  0  −1.09309  0  −0.799248  0  −1.33979  0
1.11  0  −0.936010  0  1.25557  0  −0.290471  0  −2.12388  0
1.12  0  −0.454325  0  0.622756  0  4.14589  0  −2.79359  0
1.13  0  0.137144  0  1.57958  0  −4.44075  0  −2.98119  0
1.14  0  0.278789  0  −0.533193  0  −1.93823  0  −2.92228  0
1.15  0  0.502218  0  −3.38571  0  −3.39706  0  −2.74778  0
1.16  0  0.582422  0  −3.18976  0  3.00819  0  −2.66078  0
1.17  0  0.782310  0  2.74195  0  3.13445  0  −2.38799  0
1.18  0  0.795174  0  3.83818  0  −0.297725  0  −2.36770  0
1.19  0  1.10456  0  −2.70098  0  −3.17910  0  −1.77994  0
1.20  0  1.15550  0  −3.26045  0  1.92982  0  −1.66482  0

This newform does not have CM; other inner twists have not been computed.
\( p \)  Sign
\( 2 \)  \( 1 \)
\( 503 \)  \( -1 \)

This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(8048))\):

\(T_{3}^{29} - \cdots\)
\(T_{5}^{29} + \cdots\)
\(T_{7}^{29} - \cdots\)
\(T_{13}^{29} - \cdots\)
A new approach (not found anywhere else) is given in the book by Uspensky and Heaslet, Elementary Number Theory, as detailed below, for finding the non-negative solutions of a linear Diophantine equation (LDE) $ax + by = c$ with $a, b, c \in \mathbb{Z}_{>0}$, so that the slope of the line is negative (the signs of $a, b$ are the same). I am stuck and want (at least) help with the highlighted portion below (my doubts about it are listed at the end) to enable me to start understanding the text: Dividing $x, y$, respectively, by $b, a$, we have $x = bm + r$, $y = an + s$ with $0 \le r \lt b$, $0 \le s \lt a$; substituting these expressions yields: $ab(m+n) + ar + bs = c.\text{ }{-(i)}$ By division we can represent $c$ as: $c = abq + R, \; 0 \le R \lt ab,\text{ }{-(ii)}$ whence, together with (i), it follows that: $ar + bs - R = ab(q - m - n).$ This shows that $ab \mid (ar + bs - R), \text{ but } ar + bs \lt 2ab;$ consequently $ar + bs - R \lt 2ab,$ and on the other hand $ar + bs - R \gt -ab,$ and so the integer $\frac{ar+bs-R}{ab}$ lies in the range $(-1,2)$, leaving $0$ and $1$ as its possible values; i.e. either $$ar + bs = R \quad \text{or} \quad ar + bs = R + ab \text{ }{-(iii)}$$ and correspondingly $$m + n = q \quad \text{or} \quad m + n = q - 1.$$ If $(a,b)=1$, then exactly one of the equations in (iii) has a solution in non-negative integers $r \lt b$ and $s \lt a.$ To prove this, let $r_0, s_0$ denote some particular solution of the first equation in (iii), i.e., $ar + bs = R$. Then all solutions of this equation are given by: $$r = r_0 - bt, \qquad s = s_0 + at,$$ and among them there is only one with $0 \le r \lt b.$ The corresponding value of $s$ is $\lt a$, since $bs \le R \lt ab.$ Moreover $s \gt -a,$ since $bs \ge -ar \gt -ab.$ Now if it happens that $s \ge 0$, then the equation $ar + bs = R$ has a solution of the required kind, and the solution is unique. 
But then it is impossible to satisfy the other equation in the same manner. For $r+b,\, s$ will satisfy this equation, and all other solutions of it are given by $$r + b - bt, \qquad s + at,$$ and the only way to make $0 \le r+b-bt \lt b$ is to take $t = 1$, but then $s + at \ge a$. If, on the contrary, $s \lt 0$, then the numbers $0 \le r \lt b$, $0 \le s+a \lt a$ satisfy the second equation. Thus there are only two cases to consider: (1) $ar + bs = R$ is solvable in non-negative integers; (2) it is not solvable in this manner. In case (1), $m+n = q$ has exactly $q+1$ solutions in non-negative integers: $$\begin{align} m = 0, 1, 2, \ldots, q\\ n = q, q-1, q-2, \ldots, 0\\ \end{align}$$ and correspondingly there are $q+1$ solutions of the equation $ax + by = c$ in non-negative integers. In case (2), $m + n = q-1$ has exactly $q$ solutions in non-negative integers: $$\begin{align} m = 0, 1, 2, \ldots, q-1\\ n = q-1, q-2, q-3, \ldots, 0\\ \end{align}$$ to which correspond again $q$ solutions of the proposed equation in non-negative integers. To summarise the above discussion: $ax + by = c$, with $a, b, c \gt 0$ and $(a,b) = 1$, has $q+1$ or $q$ solutions in non-negative integers, according as the equation $ar + bs = R$ does or does not have a solution in non-negative integers $r \lt b$, $s \lt a$. Note: $c = ab \cdot q + R$. My doubts about the highlighted portion: (i) Why not $(q-m-n) \mid (ar + bs - R)$? (ii) Why is $ar + bs \lt 2ab$?
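The book's conclusion (the number of non-negative solutions is $q+1$ or $q$, according as $ar+bs=R$ is solvable with $0 \le r \lt b$, $0 \le s \lt a$) can be checked by brute force. A small Python sketch, using a few illustrative coprime pairs $(a,b)$:

```python
from math import gcd

def count_nonneg_solutions(a, b, c):
    """Count (x, y) with x, y >= 0 and a*x + b*y == c, by brute force."""
    return sum(1 for x in range(c // a + 1) if (c - a * x) % b == 0)

def predicted_count(a, b, c):
    """Uspensky-Heaslet prediction: q+1 if a*r + b*s = R is solvable with
    0 <= r < b, 0 <= s < a, else q, where c = a*b*q + R."""
    q, R = divmod(c, a * b)
    solvable = any((R - a * r) % b == 0 and (R - a * r) // b < a
                   for r in range(b) if R - a * r >= 0)
    return q + 1 if solvable else q

for a, b in [(3, 5), (4, 7), (5, 9)]:
    assert gcd(a, b) == 1
    for c in range(1, 200):
        assert count_nonneg_solutions(a, b, c) == predicted_count(a, b, c)
```

For instance, $3x+5y=8$ has the single solution $(1,1)$, matching $q=0$, $R=8$ with $ar+bs=8$ solvable ($r=1$, $s=1$); while $3x+5y=7$ has none, matching the unsolvable case with $q=0$.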
Some Useful Forces and Energies to Know About Force of Gravity An object near the surface of the Earth is attracted to the Earth with a force commonly referred to as the force of gravity. This force is proportional to the object’s mass and fairly constant everywhere on the surface of the Earth. The constant of proportionality is often referred to as “g,” but properly it should have a subscript “E” to indicate that it is the Earth that is interacting with the object, and not, e.g., the moon. If the downward force of gravity is balanced by an upward force acting on the object, then the object’s velocity remains constant (balanced forces). If the thing exerting the upward force on the object is a scale or balance, then it reads the “weight” of the object, which, in this case of balanced forces, is equal to the gravitational force acting down on the object. The term weight is often taken to mean the gravitational force. This is OK when forces are balanced and the object’s motion does not change. If the object’s velocity is changing, the weight (what a scale reads) can be very different from the gravitational force. (The force of gravity acting on the astronauts in the space shuttle is only slightly less than the force of gravity acting on them when they are standing on Earth, yet they are “weightless” in the orbiting shuttle! We study the interesting state of weightlessness in Part 2 of this course.) Keeping in mind the reservation mentioned above, we can write: \[ F_{gravity} = \text{weight} = mg \] where m is the mass of the object and g = 9.8 N/kg near the surface of the Earth. With mass expressed in kilograms and g in units of N/kg, the weight will be in newtons, N, the SI unit of force. Note that “g” with the value 9.8 N/kg refers only to the value of g near the surface of the Earth. The value of g on the moon or on Mars will be different. 
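As a quick numerical illustration of \( F = mg \), here is a sketch (the 70 kg mass is arbitrary, and the lunar value g ≈ 1.62 N/kg is an assumed figure, not from the text):

```python
def weight(mass_kg, g=9.8):
    """Gravitational force in newtons: F = m * g (g in N/kg)."""
    return mass_kg * g

m = 70.0                    # an arbitrary 70 kg object
w_earth = weight(m)         # uses g = 9.8 N/kg near Earth's surface
w_moon = weight(m, g=1.62)  # assumed lunar value, roughly 1.62 N/kg

assert abs(w_earth - 686.0) < 1e-6   # 70 kg * 9.8 N/kg = 686 N
assert w_moon < w_earth              # same mass, different g, smaller force
```

The same mass gives a different gravitational force on the moon, which is exactly the point of subscripting g by the attracting body.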
Note: This gravity equation is an extremely accurate approximation near the surface of the Earth, but it will not hold if the object changes elevation by a sizeable fraction of the Earth's radius. You will learn the full story later, with Newton's theory of gravity. Example: A Stone and a Freight Train The acceleration of an object can be computed by adding up all the forces on the object and dividing by its mass. \[ a = \frac{F}{m} \] A stone and a freight train cart are dropped at the same time from the top of the Empire State Building. Which hits the ground first? The stone has a mass of 4 kg, while the freight train cart has a mass of 4500 kg. Solution The object with the larger acceleration will hit the ground first. Ignoring air resistance, the only force acting on the stone and the train cart is gravity. \( F_{rock} = m_{rock}g \) and \( F_{train} = m_{train}g \) To compute the acceleration, divide the force by the mass. \( a_{rock} = \frac{ F_{rock}}{ m_{rock} } \) and \( a_{train} = \frac{ F_{train}}{ m_{train} } \) But the force is mg to begin with! So dividing by m leaves only g: \[ a_{rock} = a_{train} = g = 9.8 \, m/s^{2} \] So the two hit the ground at the same time! If this is true, the units N/kg should be equivalent to m/s\(^2\). 1 N = 1 kg·m/s\(^2\), so indeed the units work out correctly. Gravitational Potential Energy A gravitational potential energy system exists for each pair of objects interacting by the gravitational force. If we are talking about an object and the Earth, the energy of this system changes as some other object (perhaps you) does work on the object by raising it to a higher elevation. We call this a change in the gravitational potential energy, \( \Delta GPE \), of the Earth-object system. This change is simply equal to the work done by the other object and is \[ \Delta GPE = mg\Delta y \] where ∆y is the change in elevation of the object. 
Notice that ∆GPE does not depend on the distance moved parallel to the surface of the Earth, but only on the change in vertical distance (the gravitational force points toward the center of the Earth). The work done on the Earth-object system by something else moving the object further from the center of the Earth is positive, because the force and the change in distance are in the same direction. Also note that we are imagining that the object is lifted up at a constant speed, so that there is no change in its motion. Thus, all of the work goes into changing the gravitational potential energy of the system. We call this a “potential” energy because the energy depends only on the relative positions of the object and the Earth. It does not depend on the route taken to get to these positions or on the speeds the object and the Earth might have. Note that our expression for the gravitational potential energy gives only changes. Where we put the origin of the coordinate system used to measure “y” does not matter, since we are always subtracting two elevations. If we get sloppy and say an object has a gravitational potential energy of so many joules, we mean it has this amount relative to where we picked the origin of our coordinate system, which is completely arbitrary. We will come back to this later, but it is worth noting here that there is a connection between the force and the change in the potential energy. A Note on the "Zero Point" for Potential Energies Energy of Motion–Kinetic Energy Now suppose we do work on an object and move it in a horizontal direction, increasing its speed. Because we have not changed its elevation, we have not changed its gravitational potential energy. But certainly it now has more energy. We know this both from experience and from our definition of work. So where did the energy go? It is in the object’s motion. We call this kinetic energy, or KE (sometimes K in equations). 
The KE of an object in motion at speed \(v\) with mass \(m\) is given by: \[KE = \dfrac{1}{2}mv^{2}\] In addition, if we do work on an object of mass \(m\) and change its velocity from \(v_{i}\) to \(v_{f}\), then the change in KE is \[ W = \Delta KE = KE_{f} - KE_{i} = \dfrac{1}{2}mv_{f}^{2} - \dfrac{1}{2}mv_{i}^{2} \] Note: it is the difference of the squares of the speeds that matters in changes in KE. Prototypical Example of Mechanical Energy Conservation A roller coaster with a mass of 8,000 kg on a frictionless track reaches an initial height of 100 m, before plummeting to just 10 m above the ground. How much potential energy was lost by the coaster? Where did this energy go? Solution To discuss potential energy, let's use the ground as a reference, so that an object at ground level has zero potential energy (we're allowed to do this because only changes in energy are relevant; the zero point can be shifted arbitrarily). Then the roller coaster starts off with \( GPE_{i} = mgy_{i} = (8000\ \text{kg})(9.8\ \text{N/kg})(100\ \text{m}) = 7{,}840{,}000\ \text{J} = 7.84\ \text{MJ} \) And at the bottom has a final potential energy \( GPE_{f} = mgy_{f} = (8000\ \text{kg})(9.8\ \text{N/kg})(10\ \text{m}) = 784{,}000\ \text{J} = 0.784\ \text{MJ} \) (1 MJ is a megajoule, one million joules.) This leaves a change of \( \Delta GPE = 0.784\ \text{MJ} - 7.84\ \text{MJ} = -7.056\ \text{MJ} \) Making Sense of This Answer (Negative Energy?) This number does not indicate direction in the way that vectors do, since energy is a scalar. Rather, it simply indicates that the amount of energy in the roller-coaster system has decreased by 7.056 MJ. When working with energy, the question always then becomes "where did the energy go?". In this case, the answer is that the energy went into the cart's kinetic energy only, since the friction was said to be negligible. Extra: How Fast is the Cart At the Bottom? It turns out energy conservation can tell us how fast the cart moves at any given altitude. 
Energy conservation tells us that, for a closed system, \[ \Delta KE + \Delta GPE = 0 \] which means that the cart gains as much kinetic energy as it loses potential energy. This means \[ \Delta KE = - \Delta GPE = - (-7.056\ \text{MJ}) = 7.056\ \text{MJ} \] Because we know the cart's mass, the velocity can be computed from the kinetic energy equation: \[ KE = \dfrac{1}{2}mv^{2} \] \[ v = \sqrt{ \dfrac{2\, KE}{m} } \] \[ v = \sqrt{ \dfrac{2(7{,}056{,}000\ \text{J})}{8000\ \text{kg}} } = 42\ \text{m/s} \] (Checking to see if this makes sense: 42 m/s is about 94 mph. This is a little fast for a roller coaster at that height, but then again we neglected friction!) The next energy system we take up is so important for future work that we give it its own model: the Intro Spring-Mass Oscillator Model.
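The whole roller-coaster calculation can be reproduced in a few lines. This Python sketch follows the numbers used above:

```python
g = 9.8            # N/kg
m = 8000.0         # kg, mass of the coaster
y_top, y_bottom = 100.0, 10.0   # elevations in m, ground as reference

# Change in gravitational potential energy: GPE_f - GPE_i = m*g*(y_f - y_i)
delta_gpe = m * g * y_bottom - m * g * y_top   # about -7.056e6 J

# Closed system: dKE + dGPE = 0, so the KE gained equals the GPE lost.
delta_ke = -delta_gpe

# Starting from rest, KE = (1/2) m v^2 gives v = sqrt(2 KE / m).
v = (2 * delta_ke / m) ** 0.5

assert abs(delta_gpe - (-7.056e6)) < 1.0
assert abs(v - 42.0) < 1e-6
```

Changing `y_bottom` gives the speed at any other altitude on the frictionless track, which is the "any given altitude" claim made above.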
In my view, hierarchical modeling in a Bayesian setting mainly refers to the building of a complex prior structure. Consider a parameter of interest $\theta_{0}$ and your observations $(x_i)$. Now suppose, for example, that you add a supplemental layer to your model, $p(\theta_0|\theta_1)$, through a hyperprior $p(\theta_1)$ on $\theta_1$; then $p(\theta_0)$ writes:$$p(\theta_0)=\int_{\mathbb R} p(\theta_0|\theta_1)p(\theta_1)d\theta_1,$$and so on for $\theta_2, \ldots$. The same goes for the observation model: suppose that your parameter of interest $\theta_0$ is not directly related to the observations but to another parameter $\theta_1$ that is itself related to the observations:$$p((x_i)|\theta_0)=\int_{\mathbb R} p((x_i)|\theta_1)p(\theta_1|\theta_0) d\theta_1.$$ To sum up, in principle you can always (to the best of my knowledge) marginalize the hierarchical structure to get something of the form $p(\theta_0|x) \propto p(x|\theta_0) \cdot p(\theta_0)$, i.e. the simplest Bayes formulation. However, most of the time the integrals are intractable and we need to work with all the latent variables of the prior structure. So, IMHO, a hierarchical Bayesian model is only a decomposed Bayesian model (assuming that we call a Bayesian model something of the simplest form $p(\theta_0|x) \propto p(x|\theta_0) \cdot p(\theta_0)$). Finally, to answer your last question, "Is it enough for any statistical model which uses Bayes theorem to be categorized under Bayesian analysis/statistics?", I would say no. A model can be qualified as a Bayesian model if it relies on the Bayesian interpretation of probability https://en.wikipedia.org/wiki/Bayesian_probability and in particular if the posterior $p(\theta|x_i)$ makes any sense. Bayes theorem can be used in other contexts. See the related answer to the question Can frequentists use Bayes theorem?.
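The marginalization identity $p(\theta_0)=\int p(\theta_0|\theta_1)p(\theta_1)d\theta_1$ can be checked numerically in the one case where it is tractable in closed form: a Normal-Normal hierarchy, whose marginal is $N(0, \sigma_0^2+\sigma_1^2)$. A pure-Python sketch with illustrative variances, using a midpoint Riemann sum:

```python
import math

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Illustrative variances of p(theta0 | theta1) and of the hyperprior p(theta1).
s0_sq, s1_sq = 1.0, 2.0

def marginal(theta0, lo=-30.0, hi=30.0, n=20000):
    """p(theta0) = integral of p(theta0|theta1) p(theta1) d theta1 (midpoint rule)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        t1 = lo + (i + 0.5) * h
        total += normal_pdf(theta0, t1, s0_sq) * normal_pdf(t1, 0.0, s1_sq)
    return total * h

# Closed form: the marginal is N(0, s0_sq + s1_sq), i.e. variances add.
for t in (-1.0, 0.0, 2.5):
    assert abs(marginal(t) - normal_pdf(t, 0.0, s0_sq + s1_sq)) < 1e-6
```

This is exactly the sense in which the hierarchy can be "collapsed" back to the simplest Bayes formulation; for non-conjugate layers the same integral has no closed form, which is why one works with the latent variables instead.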
I have some questions on the difference between conditional MLE (CMLE) and unconditional MLE (UMLE) in practice. In what follows I will only talk about the unconditional and conditional mean and leave out the variance in order to shorten the question. Example 1: We have a linear model: $y_{i}=x_{i}^{\prime}\beta+\varepsilon_{i}$, $\varepsilon_{i}\sim N\left(0,\:\sigma^{2}\right)$ $y$ has conditional mean: $E\left[y\mid x\right]=x_{i}^{\prime}\beta$, and unconditional mean: $E\left[y\right]=\mu$ Am I right to assume that unconditional Maximum Likelihood estimation proceeds like this? The density function for observation $y$ is: $f\left(y_{i}\right)=\left(\frac{1}{\sqrt{2\pi\sigma^{2}}}\right)\exp\left(-\frac{1}{2}\frac{\left(y_{i}-\mu\right)^{2}}{\sigma^{2}}\right)$ The log-likelihood function is:$\log L\left(\mu,\:\sigma^{2}\right)=-\frac{N}{2}\log\left(2\pi\sigma^{2}\right)-\frac{1}{2}\sum_{i=1}^{N}\frac{\left(y_{i}-\mu\right)^{2}}{\sigma^{2}}$ And the UMLE estimator is: $\hat{\mu}_{ML}=\frac{1}{N}\sum_{i=1}^{N}y_{i}$ Am I right to assume that conditional Maximum Likelihood estimation proceeds like this? $f\left(y_{i}\mid x_{i}\right)=\left(\frac{1}{\sqrt{2\pi\sigma^{2}}}\right)\exp\left(-\frac{1}{2}\frac{\left(y_{i}-x_{i}^{\prime}\beta\right)^{2}}{\sigma^{2}}\right)$ the log-likelihood function is: $\log L\left(\beta,\:\sigma^{2}\right)=-\frac{N}{2}\log\left(2\pi\sigma^{2}\right)-\frac{1}{2}\sum_{i=1}^{N}\frac{\left(y_{i}-x_{i}^{\prime}\beta\right)^{2}}{\sigma^{2}}$ The CMLE estimator is: $\hat{\beta}_{ML}=\frac{\sum_{i=1}^{N}x_{i}y_{i}}{\sum_{i=1}^{N}x_{i}^{2}}$ Example 2 We have a Poisson model: $y$ has conditional mean $E\left[y_{i}\mid x_{i}\right]=\lambda_{i}=\exp\left(x_{i}^{\prime}\beta\right)$ and unconditional mean: $E\left[y\right]=\lambda$. Am I right to assume that unconditional Maximum Likelihood estimation proceeds like this? 
The density function for $y$ is: $f\left(y_{i}\right)=\frac{e^{-\lambda}\lambda^{y_{i}}}{y_{i}!}$ The log-likelihood function is: $\log L\left(\lambda\right)=\sum_{i=1}^{N}\log\left(\frac{e^{-\lambda}\lambda^{y_{i}}}{y_{i}!}\right)=\sum_{i=1}^{N}\left(-\lambda+y_{i}\log\left(\lambda\right)-\log\left(y_{i}!\right)\right)$ And the UMLE estimator is: $\hat{\lambda}_{ML}=\frac{1}{N}\sum_{i=1}^{N}y_{i}$ Am I right to assume that conditional Maximum Likelihood estimation proceeds like this? $f\left(y_{i}\mid x_{i}\right)=\frac{e^{-\lambda_{i}}\lambda_{i}^{y_{i}}}{y_{i}!}=\frac{\exp\left(-\exp\left(x_{i}^{\prime}\beta\right)\right)\cdot\left[\exp\left(x_{i}^{\prime}\beta\right)\right]^{y_{i}}}{y_{i}!}$ The log-likelihood function is: $\log L\left(\beta\right)=\sum_{i=1}^{N}\log\left(\frac{\exp\left(-\exp\left(x_{i}^{\prime}\beta\right)\right)\cdot\left[\exp\left(x_{i}^{\prime}\beta\right)\right]^{y_{i}}}{y_{i}!}\right)=\sum_{i=1}^{N}\left(-\exp\left(x_{i}^{\prime}\beta\right)+y_{i}x_{i}^{\prime}\beta-\log\left(y_{i}!\right)\right)$ And the CMLE estimator is the solution to: $\sum_{i=1}^{N}\left(y_{i}-\exp\left(x_{i}^{\prime}\beta\right)\right)x_{i}^{\prime}=0$ Example 3 We have an AR(1) model: $y_{t}=\mu+\Theta y_{t-1}+\varepsilon_{t}$, $\varepsilon_{t}\sim N\left(0,\:\sigma^{2}\right)$ $y$ has conditional mean: $E\left[y_{t}\mid y_{t-1}\right]=\mu+\Theta y_{t-1}$, and unconditional mean: $E\left[y_{t}\right]=\frac{\mu}{1-\Theta}$ $y$ has conditional variance: $Var\left[y_{t}\mid y_{t-1}\right]=\sigma^{2}$ and unconditional variance: $Var\left[y_{t}\right]=\frac{\sigma^{2}}{1-\Theta^{2}}$ Am I right to assume that unconditional Maximum Likelihood estimation proceeds like this? 
The unconditional density function for $y$ is: $f\left(y_{t}\right)=\left(\frac{1}{\sqrt{2\pi\frac{\sigma^{2}}{1-\Theta^{2}}}}\right)\exp\left(-\frac{1}{2}\frac{\left(y_{t}-\frac{\mu}{1-\Theta}\right)^{2}}{\frac{\sigma^{2}}{1-\Theta^{2}}}\right)$ and then I state the log-likelihood function and derive the estimators as above. Am I right to assume that conditional Maximum Likelihood estimation proceeds like this? The conditional density function for $y$ is: $f\left(y_{t}\mid y_{t-1}\right)=\left(\frac{1}{\sqrt{2\pi\sigma^{2}}}\right)\exp\left(-\frac{1}{2}\frac{\left(y_{t}-\mu-\Theta y_{t-1}\right)^{2}}{\sigma^{2}}\right)$ and then I proceed by stating the log-likelihood function and deriving the estimators as above. My question is whether what I have done above is correct. I know that the CMLE is right since I condition on x. The question is whether the UMLE part is correct. I am quite sure that Example 3 is correct, but I am not so sure that Example 1 and Example 2 are. Any help would be greatly appreciated.
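For Example 1, the two estimators can be computed side by side on a toy dataset (values invented purely for illustration): the UMLE of $\mu$ is the sample mean of $y$, and the CMLE of $\beta$ (scalar regressor, no intercept) is $\sum x_i y_i / \sum x_i^2$:

```python
# Toy data, invented purely for illustration.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]
n = len(y)

# UMLE of the unconditional mean E[y] = mu: the sample mean of y.
mu_hat = sum(y) / n                           # = 5.0

# CMLE of beta in E[y|x] = x*beta (scalar x, no intercept):
# beta_hat = sum(x_i * y_i) / sum(x_i^2), i.e. the OLS estimator.
sxy = sum(xi * yi for xi, yi in zip(x, y))    # = 59.7
sxx = sum(xi * xi for xi in x)                # = 30.0
beta_hat = sxy / sxx                           # = 1.99

assert abs(mu_hat - 5.0) < 1e-9
assert abs(beta_hat - 1.99) < 1e-9
```

The contrast is the whole point of the question: the unconditional likelihood ignores $x$ entirely and only recovers the mean of $y$, while the conditional likelihood recovers the regression coefficient.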
Suppose we have a secret $\sigma$. The secret comes from a universe in which the elements are not necessarily distributed uniformly. We split $\sigma$ into $n$ shares $[\sigma_1,...,\sigma_n]$ (using Shamir secret sharing), so the order of the shares matters. We know that, given all the shares in the right order, one can recover the secret. We permute all the shares in a matrix (see below), filling the empty indices with some dummy (or random) values $d_{i,j}$: \begin{matrix} d_{11} & \sigma_{n} & \sigma_{2} & \dots & d_{1,m} \\ d_{21} & d_{22} & \sigma_{i} & \dots & d_{2,m} \\ \dots \\ d_{k,1} & d_{k,2} & \sigma_{3} & \dots & \sigma_{1} \end{matrix} Question: Given the matrix, can the adversary recover the secret with a high (or non-negligible) probability? I emphasize that $\sigma$ may have a much higher probability than the other elements of the universe, and the adversary knows that probability. Please note that the values $k$ (the number of rows) and $m$ (the number of columns) are independent of the number of shares $n$, and we can increase them if needed. ==================================== Edit (newly added): Suppose we have two permuted matrices, one containing shares of the secret value $\sigma$ and dummy values, and the other containing shares of $\gamma$ and random values. We give the two permuted matrices and a one-to-one mapping of the elements to the adversary. The mapping tells the adversary that the value in position $i,j$ in one matrix corresponds to the value in position $k,l$ in the other matrix. Question: Would the adversary learn the secret values $\sigma$ and $\gamma$ with non-negligible probability?
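For context, here is a minimal sketch of Shamir sharing over a small prime field (the prime, threshold and coefficients are illustrative; real implementations use random coefficients and a much larger field). Each share is a pair (index, value), and that index is exactly the ordering information the permuted matrix hides:

```python
P = 2 ** 13 - 1   # 8191, a small prime field modulus (illustrative only)

def make_shares(secret, k, n, coeffs):
    """Shamir (k, n): evaluate a degree-(k-1) polynomial at x = 1..n.
    `coeffs` stands in for the k-1 random coefficients (fixed here for demo)."""
    poly = [secret] + list(coeffs)
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime.
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

secret = 1234
shares = make_shares(secret, k=3, n=5, coeffs=[17, 942])
assert reconstruct(shares[:3]) == secret   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == secret
```

Stripping the indices before scattering the values into the matrix is what forces the adversary to guess both which entries are shares and which evaluation point each one belongs to.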
The definition you quote is a formal definition of strings which is particularly conducive to induction. There are many other ways to define strings, for example as sequences of letters, or more exactly as mappings $f\colon \{1,\ldots,n\} \to \Sigma$ for some $n \in \mathbb{N}$. There are several different ways of understanding your definition, which is stated rather informally. The usual interpretation is as a least fixed point: $\Sigma^*$ is the least set which contains $\lambda$, and for each $w \in \Sigma^*$ and $\sigma \in \Sigma$, also $w\sigma \in \Sigma^*$; here you should think of concatenation as some formal operation, which is perhaps more appropriate to write as (say) $(w.\sigma)$. What is a least fixed point? In this case, it is the intersection of all sets satisfying the two conditions stated above. One can prove that in general, such sets can be constructed just in the way that you describe: start with the empty set, and repeatedly apply a closure operation $C$, which in this case is$$ S \to S \cup \{\lambda\} \cup \{w\sigma : w \in S, \sigma \in \Sigma\}. $$Form a sequence $S_0 = \emptyset$, $S_1 = C(S_0)$, $S_2 = C(S_1)$, and so on, and take ${\cal S} = \bigcup_{n=0}^\infty S_n$. It is not too hard to show that ${\cal S} = C({\cal S})$, and moreover ${\cal S}$ is the least fixed point. Another way of thinking of these rules is as a proof system for demonstrating that a certain expression is in $\Sigma^*$. We have an axiom $\begin{array}{c}\\\hline \lambda\end{array}$ and an inference rule $\begin{array}{c} w \\\hline w\sigma \end{array}$ valid for all $\sigma \in \Sigma$. A third way is through induction: a property $P$ holds for all of $\Sigma^*$ if $P(\lambda)$ holds and, for all $w \in \Sigma^*$ and $\sigma \in \Sigma$, $P(w) \longrightarrow P(w\sigma)$. This is how we usually use this definition. Given this definition, it is possible to define operations on strings and to prove theorems on strings. 
For example, length is defined by $|\lambda| = 0$ and $|w\sigma| = |w| + 1$, and concatenation by $x::\lambda = x$ and $x::(w\sigma) = (x::w)\sigma$; it is straightforward to prove by induction (on $y$) that $|x::y| = |x| + |y|$. This sort of definition is common in the more formal parts of computer science. Other parts will prefer the informal definition "$\Sigma^*$ is the set of all words over $\Sigma$", or more explicitly, "$\Sigma^*$ is the set of all finite sequences of elements of $\Sigma$".
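The recursive definitions of length and concatenation transcribe almost verbatim into code. A Python sketch modeling $\lambda$ as `None` and $w\sigma$ as the pair $(w, \sigma)$:

```python
# lambda (the empty string) is modeled as None; "w sigma" as the pair (w, sigma).
EMPTY = None

def snoc(w, sigma):
    """The formal operation (w.sigma): append sigma to the string w."""
    return (w, sigma)

def length(w):
    """|lambda| = 0 and |w sigma| = |w| + 1."""
    return 0 if w is EMPTY else length(w[0]) + 1

def concat(x, w):
    """x :: lambda = x and x :: (w sigma) = (x :: w) sigma."""
    return x if w is EMPTY else snoc(concat(x, w[0]), w[1])

# The strings "ab" and "ba" over Sigma = {a, b}:
ab = snoc(snoc(EMPTY, "a"), "b")
ba = snoc(snoc(EMPTY, "b"), "a")

assert length(ab) == 2
assert length(concat(ab, ba)) == length(ab) + length(ba)   # |x::y| = |x| + |y|
assert concat(ab, EMPTY) == ab                             # x :: lambda = x
```

Note that both functions recurse on the second rule's structure, which is precisely the induction principle described above: each call peels off the outermost $w\sigma$ until $\lambda$ is reached.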
aInstitute of Chemistry, University of Campinas (UNICAMP), PO Box 6154, 13084-971 Campinas, SP, Brazil
bInstitute of Chemistry, University of Campinas (UNICAMP), PO Box 6154, 13084-971 Campinas, SP, Brazil
cBrazilian Agricultural Research Corporation (Embrapa Soils), 22460-000, Rio de Janeiro, RJ, Brazil
dInstitute of Chemistry, University of Campinas (UNICAMP), PO Box 6154, 13084-971 Campinas, SP, Brazil. E-mail: rjpoppi@unicamp.br

Introduction

United Nations (UN) projections estimate that the world's population will be around 9.6 billion by 2050. Current projections indicate that feeding such a huge population would require a dramatic (~70 %) increase in overall food production by 2050. To achieve this goal, the agricultural productivity of developing countries such as Brazil would need to increase significantly in order to provide more productive, sustainable and inclusive food systems to fight poverty and hunger in this massive population. One of the most important factors required to accomplish this task is an understanding of soil fertility, so that it can be managed effectively. To this end, millions of soil analyses are performed every year around the world to increase crop yields. In Brazil, approximately 4 million soil fertility analyses are performed per year, and soil organic matter (SOM) is one of the main factors that support land management. However, the two main conventional methodologies to determine SOM (Walkley–Black and dry-combustion) are time-consuming and expensive, and hence are not suitable for use on a large scale. Also, the Walkley–Black method is damaging to the environment, generating residues that require treatment, and is therefore not suitable for sustainable agricultural practices. 
As an alternative to the traditional methods, visible-near infrared (vis-NIR) spectroscopy can provide fast, low-cost and accurate results for SOM analyses in an environmentally friendly way.[1] The methodology is also non-destructive and does not require additional sample preparation. A comparison between the two methodologies is illustrated in Figure 1. However, vis-NIR spectra are composed of wide and superimposed bands, and thus the application of this type of spectroscopy to SOM determination requires the development of multivariate regression models capable of correlating these bands with the SOM reference values. In addition, soil matrices are very heterogeneous and complex, and require a tremendous number of samples to create robust vis-NIR calibration models. Because of these difficulties, machine learning methods with high generalisation power have been employed in the development of the models. Among the suitable machine learning methods, we highlight the support vector machine (SVM).

Support vector machine

The support vector machine is a kernel-based machine learning method proposed by Vladimir N. Vapnik,[2] which uses implicit mapping of the input matrix (the vis-NIR spectra) into a high-dimensional feature space defined by a specific kernel function, in this case the radial basis function (RBF):

$$K\left( {{x_i},{x_j}} \right) = \exp\left( { - \gamma {{\left\| {{x_i} - {x_j}} \right\|}^2}} \right),\quad \gamma > 0 \qquad (1)$$

In the feature space, a linear hyperplane is built with the maximal margin between the support vectors of each class, and this hyperplane is set up to solve the initial separation problem. The SVM can also be extended to regression problems by adding and subtracting a positive number $k$ to each reference value $y_i$, creating a positive class ($y_i + k$) and a negative class ($y_i - k$). In this situation, the optimal separating hyperplane passes through the original values of $y$, because the best separation lies exactly halfway between the two shifted classes.
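For illustration, the RBF kernel in Equation 1 can be computed directly; the following is a minimal numpy sketch (the function name and toy vectors are my own):

```python
import numpy as np

def rbf_kernel(xi, xj, gamma):
    """K(xi, xj) = exp(-gamma * ||xi - xj||^2), with gamma > 0 (Equation 1)."""
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

# Basic sanity checks: K(x, x) = 1, symmetry, and values in (0, 1].
x = np.array([0.2, 0.5, 0.1])   # e.g. three preprocessed absorbance values
y = np.array([0.3, 0.4, 0.2])
assert rbf_kernel(x, x, gamma=1.0) == 1.0
assert np.isclose(rbf_kernel(x, y, 1.0), rbf_kernel(y, x, 1.0))
```

In practice this kernel is evaluated for every pair of calibration spectra to build the kernel matrix on which the SVM operates; $\gamma$ controls how quickly the similarity decays with spectral distance.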
As in linear regression models, the predicted value $y_i$ can be estimated using a linear regression function:

$$y = w \cdot K(x) + b \qquad (2)$$

where $w$ and $b$ are the slope and offset of the regression line. The optimal $w$ and $b$ are obtained from Equations 3 and 4. Minimise:

$${1 \over 2}\left\| w \right\|^2 + C\,\mathop \sum \limits_{i = 1}^n \left( {{\xi _i} + \xi _i^*} \right) \qquad (3)$$

Subject to:

$$\left\{ {\matrix{ {{y_i} - w \cdot K\left( {{x_i}} \right) - b \le \varepsilon + {\xi _i}} \cr {w \cdot K\left( {{x_i}} \right) + b - {y_i} \le \varepsilon + \xi _i^*} \cr {{\xi _i},\xi _i^{\rm{*}} \ge 0} \cr } } \right\} \qquad (4)$$

where $\varepsilon$ is the sensitivity parameter, which represents the tolerated error, and $C$ is the cost parameter, which controls the influence of each individual support vector. The slack variables $\xi_i$ and $\xi_i^*$ are introduced to account for samples that do not lie in the $\varepsilon$-insensitive zone.[3] During this process the combination of two parameters must be optimised: the cost parameter ($C$) already described and the RBF kernel parameter ($\gamma$). $\gamma$ is the regularisation parameter of the RBF function, which controls the width of this function. To reduce the time required to find this optimal combination, Bayesian optimisation can be used. The Bayesian optimisation algorithm attempts to minimise the root mean square error of cross-validation (RMSECV) over a specified domain for each parameter, in this case $[10^{-3}, 10^{3}]$ for both $C$ and $\gamma$. The algorithm selects the combination of $C$ and $\gamma$ that provides the greatest potential improvement of the RMSECV.[4] SVM modelling and Bayesian optimisation were implemented in Matlab R2016b with the Statistics and Machine Learning Toolbox 11.0.[4]

Materials and methods

In order to obtain a spectral library that represents the major producing regions of Brazil, 42,471 soil samples from several regions of Brazil were collected. The SOM reference analyses were based on the Walkley–Black method.
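To make Equations 3 and 4 concrete, here is a small illustrative numpy sketch that evaluates the objective (written with the usual squared norm $\tfrac{1}{2}\|w\|^2$) and the slack variables implied by the constraints. The helper names and toy numbers are mine, not part of the paper's Matlab implementation:

```python
import numpy as np

def svr_objective(w, slacks_pos, slacks_neg, C):
    """Objective (3): 0.5 * ||w||^2 + C * sum(xi_i + xi_i*)."""
    w = np.asarray(w, dtype=float)
    return 0.5 * float(np.dot(w, w)) + C * float(np.sum(slacks_pos) + np.sum(slacks_neg))

def slack_variables(y_true, y_pred, eps):
    """Slacks implied by (4): how far each residual falls outside the
    eps-insensitive tube, split into upper (xi) and lower (xi*) parts."""
    resid = np.asarray(y_true, float) - np.asarray(y_pred, float)
    xi = np.maximum(resid - eps, 0.0)        # y_i - f(x_i) > eps
    xi_star = np.maximum(-resid - eps, 0.0)  # f(x_i) - y_i > eps
    return xi, xi_star

# Samples inside the tube contribute no slack (and hence no loss).
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 2.9])
xi, xi_star = slack_variables(y_true, y_pred, eps=0.1)
assert xi[0] == 0.0 and xi_star[0] == 0.0   # |1.0 - 1.05| <= 0.1
```

Only the second toy sample lies outside the $\varepsilon$-tube, so only it is penalised in (3); this is exactly the mechanism by which $C$ trades flatness of $w$ against tolerance of large residuals.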
These analyses were performed in collaboration with the IBRA Laboratory, Brazil, which holds a certification of proficiency from the Brazilian Agricultural Research Corporation (Embrapa Soils) and is accredited to ISO/IEC 17025:2005. Before the vis-NIR spectra were acquired, the samples were oven dried at 40 °C for 48 hours, a rubber mallet was used to break up the soil clusters and the granule size was controlled with a sieve (Ø < 2 mm). The spectra were obtained using a vis-NIR spectrometer customised for this determination, called SpecSoil-Scan (Speclab Holding S.A., Campinas, SP, Brazil). This instrument can analyse 40 soil samples per batch over a spectral range of 432–2448 nm, with a spectral resolution of 3.3 nm. A principal component analysis (PCA) model was applied to the spectral data set to find outliers. Samples with high values of Hotelling $T^2$ and of the spectral residuals ($Q$-statistic), at a significance level of 5 %, were considered outliers. The Hotelling $T^2$ is related to leverage, which measures the distance of a sample from the centre of the data, and the $Q$ residuals represent the unmodelled part of the vis-NIR spectra.[5] Representative samples were selected for development and validation of the models, resulting in 28,314 samples for the calibration set and 14,157 for the validation set.

Results and discussion

The original vis-NIR spectra of all soil samples are shown in Figure 2a, where the mean spectrum is represented by the black line. The NIR spectra contain useful information related to the SOM, due to absorptions involving the C–C, C=C, C–H, C–N, N–H and O–H chemical bonds. In the visible region, information on the SOM can be obtained from absorption bands due to chromophores and the darkness of the soil.[6] To reduce baseline variation and spectral noise, the vis-NIR spectra were preprocessed by Savitzky–Golay smoothing and first derivative, with a window size of 11 points.
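The outlier screening described above (Hotelling $T^2$ and $Q$ residuals from a PCA model) can be sketched with an SVD-based PCA in numpy; the function name, toy data and absence of formal significance thresholds are my own simplifications:

```python
import numpy as np

def pca_t2_q(X, n_components):
    """Hotelling T^2 (leverage in the model space) and Q residuals
    (unmodelled spectral variation) from a PCA of mean-centred data X."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                           # loadings
    T = Xc @ V                                        # scores
    var = (s[:n_components] ** 2) / (X.shape[0] - 1)  # PC variances
    t2 = np.sum(T ** 2 / var, axis=1)                 # Hotelling T^2 per sample
    E = Xc - T @ V.T                                  # residual matrix
    q = np.sum(E ** 2, axis=1)                        # Q statistic per sample
    return t2, q

# Toy stand-in for a spectral matrix (50 samples x 6 "wavelengths").
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
X[0] += 10.0                                          # an artificial outlier
t2, q = pca_t2_q(X, n_components=2)
assert t2.shape == (50,) and q.shape == (50,)
```

In the paper, samples whose $T^2$ or $Q$ exceeded the 5 % significance limits were discarded; here the statistics are merely computed, and choosing the cut-offs is left to the appropriate reference distributions.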
The preprocessed spectra[7] are shown in Figure 2b, where the major variations in the absorption bands at 400–600, 1100, 1400, 1800–2000 and 2200–2400 nm, common to most soil samples, are highlighted.[6] The three main absorption bands are in the regions of 500–650 nm, 1400 nm and 1900 nm. The absorptions at 500–650 nm can be associated with minerals that contain iron, and the bands at 1400 nm and 1900 nm can be associated with the O–H group. The absorption band at 1100–1150 nm can be associated with aromatics and C–H stretches, and the bands at 2200–2500 nm are mainly due to vibrations involving metal–OH.[6] The SVM model was built using the calibration samples, and the optimal combination of $C$ and $\gamma$ values was chosen as described above. To avoid overfitting in the regression model, the validation set was treated as a set of unknown samples, and these samples had no influence on the choice of the $C$ and $\gamma$ parameters of the SVM model. Scatter plots of the reference versus predicted values from the SVM model are shown in Figure 3. Due to the high number of samples, a colour bar showing the recurrence of the predicted values for each reference value was added to this plot. The SOM reference contents in both sets were distributed across the evaluated range. The $R^2_{\mathrm{cal}}$, $R^2_{\mathrm{val}}$, RMSEC and RMSEP values were close, indicating concordance between the calibration and validation sets. In other words, the SVM regression model adequately modelled the huge diversity of soils in the spectral library without overfitting. Analysing the recurrence of the predicted values in Figure 3, it is possible to conclude that most of the samples were predicted with SOM values close to the reference ones. Only a few samples (dark blue) had predicted values far from the reference values. This can also be observed in Figure 4, which shows the histograms of the prediction errors in the calibration and validation sets.
The histograms show that most of the samples were predicted with residuals of up to 2 × RMSE in both sets, while few samples were predicted with higher residuals.

Conclusions

The support vector machine algorithm was successful in dealing with an extensive and complex soil spectral library to determine SOM content. Brazil's soils are very diverse and heterogeneous with regard to chemical composition and soil organic matter content. The robustness of the proposed methodology involving vis-NIR spectra and machine learning has created high expectations for the possibility of mitigating/eliminating the use of heavy-metal reagents in soil fertility analysis. The methodology also has the potential to replace the traditional method in the future. Knowledge of soil fertility, supported by a green analytical methodology, could pave the way for increasing sustainable agricultural productivity.

Acknowledgements

The authors thank Instituto Nacional de Ciência e Tecnologia de Bioanalítica (INCTBio), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Brazil, 465389/2014-7 and 303994/2017-7), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES, Brazil, Finance Code 001) and Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP, Brazil, 2014/508673) for financial support. We also thank Speclab Holding S.A. for providing the samples and the vis-NIR equipment SpecSoil-Scan®, and Embrapa (Project number MP5 14.05.01.001.01.00.00).

References

F.B. de Santana, A.M. de Souza and R.J. Poppi, “Green methodology for soil organic matter analysis using a national near infrared spectral library in tandem with learning machine”, Sci. Total Environ. 658, 895–900 (2019). https://doi.org/10.1016/j.scitotenv.2018.12.263
C. Cortes and V. Vapnik, “Support-vector networks”, Mach. Learn. 20, 273–297 (1995). https://doi.org/10.1023/A:1022627411411
P.R. Filgueiras, J.C.L. Alves, C.M.S. Sad, E.V.R. Castro, J.C.M. Dias and R.J.
Poppi, “Evaluation of trends in residuals of multivariate calibration models by permutation test”, Chemometr. Intell. Lab. Syst. 133, 33–41 (2014). https://doi.org/10.1016/j.chemolab.2014.02.002
Mathworks, Statistics and Machine Learning Toolbox User’s Guide R2017a, MatLab, pp. 1–9214 (2017).
R. Bro and A.K. Smilde, “Principal component analysis”, Anal. Methods 6, 2812–2831 (2014). https://doi.org/10.1039/c3ay41907j
B. Stenberg, R.A. Viscarra Rossel, A.M. Mouazen and J. Wetterlind, “Visible and near infrared spectroscopy in soil science”, Adv. Agron. 107, 163–215 (2010). https://doi.org/10.1016/S0065-2113(10)07005-7
A. Savitzky and M.J.E. Golay, “Smoothing and differentiation of data by simplified least squares procedures”, Anal. Chem. 36, 1627–1639 (1964). https://doi.org/10.1021/ac60214a047
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea. I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.) @dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later... oops lol typo bohm bohr btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc But I have seen that the convexity is associated to minimizers/maximizers of the functional, whereas the sign second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals... @dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). 
en.wikipedia.org/wiki/CHSH_inequality @dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as... @vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally." @dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing > The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment. ↑ suspect entire general LHV theory of QM lurks in these loophole(s)! 
there has been very little attn focused in this area... :o how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O @vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local? @dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated... if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around @vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best @dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view... Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. 
In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo… And to make things even more confusing: Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion It seems my mind is getting more and more comfortable with dialetheia now @vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago. @Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII. If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl... @Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. 
What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them. @AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily. @bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref. @PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification. @Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there. ← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P How can I move a chat back to comments?In complying to the automated admonition to move comments to chat, I discovered that MathJax is was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments. hmmm... 
actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference. One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is it's length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. an spring with just one mass @vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore @Secret when you say that, it reminds me of the no cloning thm, which have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense.
I was reading the construction of the localization of a category in the book "Methods of Homological Algebra" by Gelfand and Manin. Let me remind you of the definition of the localization of a category: let $B$ be an arbitrary category and $S$ a set of morphisms in $B$; the localization is a category $B[S^{-1}]$ together with a functor $Q:B\rightarrow B[S^{-1}]$ such that $Q(s)$ is an isomorphism for every $s\in S$, and if another functor $F:B\rightarrow D$ has this property then there exists a unique functor $G:B[S^{-1}]\rightarrow D$ such that $F=G\circ Q$. Before asking my question, let me recall the construction of the localization. Let $B$ be an arbitrary category and $S$ an arbitrary set of morphisms in $B$; we want to construct $B[S^{-1}]$. Set $\mathrm{Ob}\;B[S^{-1}]=\mathrm{Ob}\;B$, and define $Q$ to be the identity on objects. To construct the morphisms of $B[S^{-1}]$, we proceed in several steps: a) introduce variables $x_s$, one for every morphism $s\in S$; b) construct an oriented graph $\Gamma$ as follows: $\mathrm{vert}\;\Gamma = \mathrm{Ob}\;B$, and the edges of $\Gamma$ are $\{\text{morphisms in } B\}\cup\{x_s : s\in S\}$; if $f$ is a morphism $X\rightarrow Y$ then it corresponds to an edge $X\rightarrow Y$, and if $s: X\rightarrow Y$ is in $S$ then $x_s$ is an edge $Y\rightarrow X$. A path in this graph is what you expect it to be. Now let's define an equivalence relation among paths with the same beginning and the same end. We say that two paths are equivalent if they can be joined by a chain of these two types of operations: 1) two consecutive arrows can be replaced with their composition; 2) the path $s x_s$ is equivalent to $\mathrm{id}$, and the same for $x_s s$. A morphism is then an equivalence class of paths with a common beginning and a common end. If you are interested, you can easily continue by defining $Q$ and proving that this category has the desired universal property. My question is this: are we sure that this is a category?
Because the class of morphisms can be a proper class, it's not obvious to me that the collection of morphisms between two fixed objects is a set. I was told that there is a way to fix this (or at least to bypass the problem); do you know a way to fix this issue? (I know that, for example, if the set $S$ has good properties, i.e. it is a localizing system of morphisms, then there is a nicer construction, but I would like to know a general construction that works even if $S$ is not localizing.)
Decomposition of Autoregressive Models Autoregression Background This post will be a formal introduction to some of the theory of autoregressive models. Specifically, we'll tackle how to decompose an AR(p) model into a bunch of AR(1) models, and we'll discuss the interpretability of these models along the way. This post will develop the subject using what seems to be an atypical approach, but one that I find to be very elegant. The traditional way Let $x_t$ be an AR(p) process. So $x_t = \sum\limits_{i=1}^pa_ix_{t-i} + w_t$. We can express $x_t$ thusly: $$\left(1 - \sum\limits_{i=1}^p a_i L^i\right) x_t = w_t$$ Where $L$ is the lag operator. We define the AR polynomial $\Phi$ as $\Phi(L) := \left(1 - \sum\limits_{i=1}^pa_iL^i\right)$. Then as long as we're considering this polynomial to be over an algebraically closed field (like $\mathbb{C}$), we can factor this polynomial. So $$\Phi(L) = \prod\limits_{i=1}^p \left(1 - \frac{L}{\varphi_i}\right)$$ Where $\varphi_i$ are the roots of $\Phi$. This polynomial will be the star of our show today. In fact, if we want to understand $\Phi$ (the AR process we're considering), we need only understand each AR(1) model $x_t = \varphi_i^{-1}x_{t-1} + w_t$, one per root. We'll actually prove this later. We'll be investigating several of its properties but first (just for fun) let's investigate the conditions under which it's invertible. We can reduce the problem of determining whether or not the AR polynomial is invertible to the problem of inverting each factor. After all, the polynomial is invertible if and only if each factor is. So let's restrict our considerations to just a single factor for the time being. When will $x-\lambda$ be invertible? Now, we recall the Taylor Series Expansion formula from our trusty calculus class: $$f(x) = \sum\limits_{k=0}^\infty \frac{f^{(k)}(a)}{k!}(x-a)^k$$ In general, we tend to use $a=0$ as a good arbitrary choice (the Maclaurin Series), resulting in: $$f(x) = \sum\limits_{k=0}^\infty \frac{f^{(k)}(0)}{k!}x^k$$ Well, if we consider the Maclaurin expansion of $\frac{1}{x-\lambda}$, we find $$\frac{1}{x-\lambda} = -\sum\limits_{k=0}^\infty \frac{x^k}{\lambda^{k+1}}$$ whose coefficient sequence is square summable if and only if $|\lambda| > 1$.
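This root condition is easy to check numerically. A minimal numpy sketch (helper names are mine), using the convention $\Phi(z) = 1 - a_1 z - \dots - a_p z^p$:

```python
import numpy as np

def ar_roots(a):
    """Roots of Phi(z) = 1 - a_1 z - ... - a_p z^p for AR coefficients a."""
    # np.roots wants coefficients ordered from the highest degree down.
    coeffs = np.concatenate(([-ai for ai in a[::-1]], [1.0]))
    return np.roots(coeffs)

def is_causal(a):
    """Phi is invertible (the AR process is causal) iff all roots of Phi
    lie strictly outside the unit circle."""
    return bool(np.all(np.abs(ar_roots(a)) > 1.0))

# AR(1) with a_1 = 0.5: the root of 1 - 0.5 z is z = 2, outside the unit circle.
assert is_causal([0.5])
# AR(1) with a_1 = 1.2: the root is z = 1/1.2, inside the unit circle.
assert not is_causal([1.2])
```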
So we see that if the AR roots are all outside the unit circle, then the AR polynomial will be invertible (in such a case, the AR process is called causal). The new(?) approach Definition Let $X_t\in \mathbb{R}^n, A_i\in\mathbb{R}^{n\times n}, B\in\mathbb{R}^{n\times k_0}$, and $p\in\mathbb{N}_{>0}$. Also let $W_t$ be $k_0$ dimensional white noise. Then, $X_t$ is called a VAR(p) process if $$X_t = \sum\limits_{i=1}^p A_i X_{t-i} + B W_t$$ Basically a VAR(p) model is an analogue of AR(p) models where the scalars are matrices and the variable itself is a vector. The noise in such a model need not be independent per coordinate, nor restricted to only one dimension. In fact, we can have any number of driving noise processes that get mixed together linearly into $X_t$. Hence cometh $B$ – the linear transformation from the driving noise space into the $X_t$ space. Furthermore, let's say that we don't observe $X_t$ directly, but rather we observe some $Y_t$ that's a linear function of $X_t$ (plus some noise). Formally, we observe $Y_t$ s.t.: $$Y_t = C X_t + D U_t$$ Where $Y_t \in\mathbb{R}^m, C\in\mathbb{R}^{m\times n}, D\in \mathbb{R}^{m\times k_1}$. And again, $U_t$ is $k_1$ dimensional white noise. Here $D$ serves the same purpose as $B$ did above, but for the observations themselves. This is called a "State Space" model. Examplificate How does this relate to our star (the AR polynomial $\Phi$)? Let's try to express an AR(p) process as a VAR(1) process… Let $x_t$ be an AR(p) process. That is to say, $x_t = \sum\limits_{i=1}^p a_ix_{t-i} + w_t$ where $w_t$ is some white noise process. Let $W_t$ be a 1 dimensional white-noise process such that $W_t = w_t$. We define $$X_t = \begin{pmatrix} x_t \\ x_{t-1} \\ \vdots \\ x_{t-p+1} \end{pmatrix}$$ Then, we can see that the first coordinate of $X_t$ is always $x_t$. Our notation for this will be $X_t[0] = x_t$. Furthermore, let $$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_{p-1} & a_p \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}$$ We can see that for $0 < i < p$, we have $(AX_{t-1})[i] = x_{t-i}$, and $(AX_{t-1})[0] = \sum\limits_{i=1}^p a_i x_{t-i} = x_t - w_t$. As it stands, $AX_{t-1}$ is almost $X_t$, just without the added noise in the first coordinate.
So let's add noise to only the first coordinate. To this end, let $$B = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$ and $$X_t = A X_{t-1} + B W_t$$ Now we want $Y_t = x_t$. So we set $$C = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}$$ That way, $Y_t = CX_t = x_t$ and all is right with the world. So yay! We've expressed this AR(p) process as a $p$ dimensional VAR(1) process! So… Why? Why do? Well, one natural question to ask at this point is, what are the eigenvalues of $A$? The characteristic polynomial of $A$ is $$\det(\lambda I - A) = \lambda^p - \sum\limits_{i=1}^p a_i \lambda^{p-i}$$ so the roots of this polynomial are the eigenvalues of $A$. Let's see how this relates to $\Phi$… $$\lambda^p \, \Phi\!\left(\tfrac{1}{\lambda}\right) = \lambda^p\left(1 - \sum\limits_{i=1}^p a_i \lambda^{-i}\right) = \lambda^p - \sum\limits_{i=1}^p a_i \lambda^{p-i}$$ So the characteristic polynomial is really just $\Phi(L)$ revisited! Furthermore, check this sweetness out: if $\lambda_i$ is an eigenvalue of $A$, then $\Phi(1/\lambda_i) = 0$, i.e. $1/\lambda_i$ is a root of $\Phi$. So we can see that the eigenvalues are actually intimately related to the roots (in fact they're inverses of each other)! With that in mind, let's see what happens when we try to diagonalize this matrix $A$. Firstly, let's assume that the matrix $A$ is diagonalizable. I.e., let $$A = H^{-1} \bar{A} H$$ where $H$ is a coordinate transformation (an invertible map) and $\bar A$ is diagonal. Then $$H X_t = \bar{A} H X_{t-1} + H B W_t$$ So if we let $\tilde X_t = HX_t$, we get $$\tilde{X}_t = \bar{A} \tilde{X}_{t-1} + \left(HB\right) W_t$$ So $\tilde X_t$ is a bunch of AR(1) processes (because the matrix $\bar A$ is diagonal), and $$Y_t = C X_t = C H^{-1} \tilde{X}_t$$ Which is to say that $Y_t$ is a linear combination of AR(1) processes! So we started out with a general AR(p) process, and found that expressing it as a VAR(1) model allows us to see that this is just a linear combination of AR(1) processes. I think that's pretty cool. But let's investigate this a bit further… What more?! As per our previous section, we have that $Y_t$, which was really just $x_t$ – our original AR(p) variable – is a linear combination of AR(1) models. More specifically, if we let $F := CH^{-1}$, then $$x_t = \sum\limits_{i} F_i \, \tilde{X}_t[i]$$ So indeed, $x_t$ is a linear combination of $p$ many AR(1) processes. In fact, since $\tilde X_t = \bar A \tilde X_{t-1} + \left(HB\right)W_t$, we know the AR(1) models explicitly: $$\tilde{X}_t[i] = \lambda_i \tilde{X}_{t-1}[i] + (HB)[i]\, W_t$$ Where $\lambda_i$ is the $i$th eigenvalue of $A$ (hence the $(i,i)$th element of $\bar A$).
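The claimed inverse relationship between the eigenvalues of $A$ and the roots of $\Phi$ can be checked numerically; a small numpy sketch, assuming the companion-matrix layout above (names are mine):

```python
import numpy as np

def companion(a):
    """Companion matrix A of the AR(p) recursion x_t = sum_i a_i x_{t-i}."""
    p = len(a)
    A = np.zeros((p, p))
    A[0, :] = a                 # first row carries the AR coefficients
    A[1:, :-1] = np.eye(p - 1)  # subdiagonal shifts x_{t-i} down one slot
    return A

a = [0.5, 0.2]                          # an AR(2) example
eigvals = np.linalg.eigvals(companion(a))
# Roots of Phi(z) = 1 - 0.5 z - 0.2 z^2 (highest degree first for np.roots):
roots = np.roots([-0.2, -0.5, 1.0])
# The eigenvalue magnitudes match the inverses of the root magnitudes.
assert np.allclose(sorted(np.abs(eigvals)), sorted(1.0 / np.abs(roots)))
```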
So we see that the AR(1) processes each get their coefficients from the roots of the original AR polynomial $\Phi$. Pretty cool, right? Summary So we can express any AR(p) model as a linear combination of AR(1) processes (albeit with correlated noise terms), where each AR(1) process is determined by a root of the AR polynomial (or, equivalently, an eigenvalue of the companion matrix). We've essentially reduced the problem of studying AR(p) models to studying their eigenvalues (or AR roots).
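The whole decomposition can be sanity-checked by simulation. The following numpy sketch (variable names are mine) simulates an AR(2) process, builds its companion-matrix VAR(1) form, diagonalizes it, and confirms both that each eigencoordinate follows its AR(1) recursion and that $x_t$ is recovered as a linear combination of them:

```python
import numpy as np

rng = np.random.default_rng(42)
a = np.array([0.5, 0.2])                 # causal AR(2) coefficients
p, n = len(a), 500

# Simulate x_t = a_1 x_{t-1} + a_2 x_{t-2} + w_t.
w = rng.normal(size=n)
x = np.zeros(n)
for t in range(p, n):
    x[t] = a @ x[t - p:t][::-1] + w[t]

# VAR(1) form: X_t = A X_{t-1} + B w_t with companion A and B = e_1.
A = np.zeros((p, p)); A[0, :] = a; A[1:, :-1] = np.eye(p - 1)
B = np.zeros(p); B[0] = 1.0
lam, V = np.linalg.eig(A)                # A = V diag(lam) V^{-1}
H = np.linalg.inv(V)                     # so A = H^{-1} diag(lam) H

# Stack X_t = (x_t, x_{t-1}, ..., x_{t-p+1}) for t = p-1, ..., n-1.
X = np.stack([x[t - p + 1:t + 1][::-1] for t in range(p - 1, n)])
Xt = H @ X.T                             # tilde X, one column per time step
HB = H @ B

# Each eigencoordinate satisfies its own AR(1) recursion with coefficient lam_i.
resid = Xt[:, 1:] - (lam[:, None] * Xt[:, :-1] + np.outer(HB, w[p:]))
assert np.allclose(resid, 0.0, atol=1e-8)

# And x_t is recovered via F = C H^{-1} (here C = e_1^T, so F is row 0 of V).
F = V[0, :]
assert np.allclose((F @ Xt).real, x[p - 1:], atol=1e-8)
```

The residual check verifies the diagonalized recursion $\tilde{X}_t = \bar{A}\tilde{X}_{t-1} + (HB)W_t$ exactly (up to floating-point error), and the final check verifies $x_t = \sum_i F_i \tilde{X}_t[i]$.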
ISSN: 1930-8337, eISSN: 1930-8345. Inverse Problems & Imaging, May 2011, Volume 5, Issue 2. A special issue: ALCOMA'10. Abstract: We consider a reaction-diffusion equation for the front motion $u$ in which the reaction term is given by $c(x)g(u)$. We formulate a suitable inverse problem for the unknowns $u$ and $c$, where $u$ satisfies homogeneous Neumann boundary conditions and the additional condition is of integral type on the time interval $[0,T]$. Uniqueness of the solution is proved in the case of a linear $g$. Assuming $g$ nonlinear, we show uniqueness for large $T$. Abstract: In this paper we prove a stable determination of the coefficients of the time-harmonic Maxwell equations from local boundary data. The argument (due to Isakov) requires some restrictions on the domain. Abstract: Based on the Fenchel pre-dual of the total variation model, a nonlinear multigrid algorithm for image denoising is proposed. Due to the structure of the differential operator involved in the Euler-Lagrange equations of the dual model, a line Gauss-Seidel semismooth Newton step is utilized as the smoother, which provides rather good smoothing rates. The paper ends with a report on numerical results and a comparison with a very recent nonlinear multigrid solver based on Chambolle's iteration [6]. Abstract: Coded aperture imaging is a cheap imaging process encountered in many fields of research, such as optics, medical imaging and astronomy, and it has led to several good results for two-dimensional reconstruction methods. However, the three-dimensional reconstruction problem remains severely ill-posed and has not yet furnished satisfactory outcomes. In the present study, we illustrate the poorness of the data available for performing a good inversion in the 3D case. In the context of far-field imaging, an inversion formula is derived when the detector screen can be widely translated.
This reformulates the 3D inversion problem of coded aperture imaging in terms of the classical Radon transform. In the sequel, we examine this reconstruction formula more closely, and claim that it is equivalent to solving the limited-angle Radon transform problem with very restricted data. We thus deduce that the performance of any numerical reconstruction will remain limited, essentially because of the physical nature of the coding process, except when very strong a priori knowledge is given for the 3D source. Abstract: For the two-dimensional inverse electrical impedance problem in the case of piecewise constant conductivities, with the currents injected at adjacent point electrodes and the resulting voltages measured between the remaining electrodes, in [3] the authors proposed a nonlinear integral equation approach that extends a method suggested by Kress and Rundell [10] for the case of perfectly conducting inclusions. As the main motivation for using a point electrode method, we emphasized numerical difficulties arising in a corresponding approach by Eckel and Kress [4, 5] for the complete electrode model. Therefore, the purpose of the current paper is to illustrate that the inverse scheme based on point electrodes can be successfully employed when synthetic data from the complete electrode model are used. Abstract: In this paper we study passive sensor imaging with ambient noise sources by suitably migrating cross correlations of the recorded signals. We propose and study different imaging functionals. A new functional is introduced that is an inverse Radon transform applied to a special function of the cross correlation matrix. We analyze the properties of the new imaging functional in the high-frequency regime, which shows that it produces sharper images than the usual Kirchhoff migration functional. Numerical simulations confirm the theoretical predictions.
Abstract: In this paper we extend the idea of adaptive discretization by using refinement and coarsening indicators from papers by Chavent, Bissell, Benameur and Jaffré (cf., e.g., [5], [9]) to a general setting. This allows us to make use of the relation between adaptive discretization and sparse parameterization in order to construct an algorithm for finding sparse solutions of inverse problems. We provide some first steps in the analysis of the proposed method and apply it to an inverse problem in systems biology, namely the reconstruction of gene networks in an ordinary differential equation (ODE) model. Here, due to the fact that not all genes interact with each other, reconstruction of a sparse connectivity matrix is a key issue. Abstract: We propose a data clustering model reduced from a variational approach. This new clustering model, a regularized k-means, is an extension of the classical k-means model. It uses the sum-of-squares error for assessing fidelity, and the number of data in each cluster is used as a regularizer. The model automatically gives a reasonable number of clusters by a choice of a parameter. We explore various properties of this classification model and present different numerical results. This model is motivated by an application to scale segmentation. A typical Mumford-Shah-based image segmentation is driven by the intensity of objects in a given image, and we consider image segmentation using additional scale information in this paper. Using the scale of objects, one can classify objects in a given image beyond what is possible using only the intensity value. The scale of an object is not a local value, therefore the procedure for scale segmentation needs to be separated into two steps: multiphase segmentation and scale clustering. The first step requires a reliable multiphase segmentation, where we applied an unsupervised model, and we apply the regularized k-means for fast automatic data clustering in the second step.
Various numerical results are presented to validate the model. Abstract: Let $\mathcal B$ be a viscoelastic body with a (smooth) bounded open reference set $\Omega$ in $\mathbb R^3$, with the equation of motion being described by the Lamé coefficients $\lambda_0$ and $\mu_0$ and the related viscoelastic coefficients $\lambda_1$ and $\mu_1$. The latter are assumed to be factorized with the same temporal part, i.e. $\lambda_1(t,x)=k(t)p(x)$ and $\mu_1(t,x)=k(t)q(x)$. Furthermore, it is assumed that the spatial parts $p$ and $q$ of $\lambda_1$ and $\mu_1$ are unknown and the three additional measurements $\sum_{j=1}^3\sigma_{i,j}^0(t,x)\,n_j(x) = g_i(t,x)$, $i=1,2,3$, are available on $(0,T)\times \partial \Omega$ for some (sufficiently large) subset $\Gamma\subset \partial \Omega$. The fundamental task of this paper is to show the uniqueness of the pair $(p,q)$ as well as its continuous dependence on the boundary conditions, the initial data being kept fixed and the initial velocity being suitably related to the initial displacement. Abstract: The inverse fluid-solid interaction problem considered here is to determine the shape of an elastic body from pressure measurements made in the near field. In particular we assume that the elastic body is probed by pressure waves due to point sources, and the resulting scattered field and the normal derivative of the scattered field are available for every source and receiver combination on the source and measurement curves. We provide an analysis of the Reciprocity Gap (RG) method in this case, as well as the Linear Sampling Method (LSM). A novelty of our analysis is that we exhibit a connection between the RG method and a non-standard LSM using sources and receivers on different curves. We provide numerical tests of the algorithms using both synthetic and real data.
Abstract: The aim of electrical impedance tomography (EIT) is to reconstruct the conductivity values inside a conductive object from electric measurements performed at the boundary of the object. EIT has applications in medical imaging, nondestructive testing, geological remote sensing and subsurface monitoring. Recovering the conductivity and its normal derivative at the boundary is a preliminary step in many EIT algorithms; Nakamura and Tanuma introduced formulae for recovering them approximately from localized voltage-to-current measurements in [Recent Development in Theories & Numerics, International Conference on Inverse Problems 2003]. The present study extends that work both theoretically and computationally. As a theoretical contribution, reconstruction formulas are proved in a more general setting. On the computational side, numerical implementation of the reconstruction formulae is presented in three-dimensional cylindrical geometry. These experiments, based on simulated noisy EIT data, suggest that the conductivity at the boundary can be recovered with reasonable accuracy using practically realizable measurements. Further, the normal derivative of the conductivity can also be recovered in a similar fashion if measurements from a homogeneous conductor (dummy load) are available for use in a calibration step. Abstract: This article proposes a new framework to regularize imaging linear inverse problems using an adaptive non-local energy. A non-local graph is optimized to match the structures of the image to recover. This allows a better reconstruction of geometric edges and textures present in natural images. A fast algorithm computes iteratively both the solution of the regularization process and the non-local graph adapted to this solution. The graph adaptation is efficient to solve inverse problems with randomized measurements such as inpainting random pixels or compressive sensing recovery. 
Our non-local regularization gives state-of-the-art results for this class of inverse problems. On more challenging problems such as image super-resolution, our method gives results comparable to sparse regularization in a translation invariant wavelet frame.
I'm working through a proof on the equivalence between the vector component formula and the sin formula for the cross product of two vectors, $a$ and $b$. One point in the proof involves finding the area of the parallelogram of $a$ and $b$, angle between them $\theta$, when it is projected onto the $xy$ plane. Call the area of our original parallelogram, $Q$, and the area of the projected one, $P$. After determining that the angle which the cross product is rotated away from the z-axis ($\alpha$) is the same angle the plane of the parallelogram is rotated away from the xy plane, the presenter states without justification that the area of the projected parallelogram ($P$) is equal to: $Q \cos(\alpha)$. This is holding me up since I cannot justify this myself, and in my attempt to do so I come up with a different result for $P$: by identifying the side lengths for $P$ as the adjacent sides of right triangles with hypotenuse lengths $a$ and $b$, the expression I would get to describe the area $P$ would be:$$a \cos(\alpha) \cdot b \cos(\alpha) \cdot \sin(\theta) = Q \cos^2(\alpha)$$For reference the proof is described here (timestamped to point of interest): https://youtu.be/cXKDJ7_rmyM?t=4603 He seems to make an error by writing what should be cosine down as sine (unless I am grossly mistaken), but looking past that I cannot justify or discover any geometric method to find $P$ as anything other than what I derived above. Is he justified in saying the area should be $Q \cos(\alpha)$, or am I correct with the above formulation?
This is another example of an information puzzle. You are trying to find the most numbers a pair of spies can send to each other given that there are 26 stones in the river. Furthermore, the stones are all identical, and the only way the spies can communicate is by throwing a certain number of stones into the river at the same time. You are ultimately trying to devise an algorithm that can produce the most possible outcomes for this procedure of throwing 26 stones, which will then map to the greatest number of results. The answer lies in the number of ways there are to divide 26 stones into groups. Although the stones themselves are identical, the order that they're thrown in isn't. So the groups that the stones are thrown in form an ordered partition, like such: o o o o|o o o|o o|o o o o o|o|o o|o o o|o o o o|o o Each o represents a stone, and each | represents a divider between groups of stones that were thrown. In the example above, there were 4, 3, 2, 5, 1, 2, 3, 4, 2 stones thrown. Now, notice that any two consecutive stones can have a divider between them or not. This is a total of 25 dividers that can either be present or not present, for a total of $2^{25} = 33~554~432$ outcomes. So our first upper bound on the numbers the spies could exchange is $\lfloor2^{12.5}\rfloor = 5792$. Complicating this is the fact that each spy has to have some control over the stones they throw. If spy 1 throws all 26 stones (which is the case with no dividers), this leaves no choice in the matter for spy 2. So, we decide instead to give each spy his own set of 13 stones, which again can either have dividers between them or not (which is a total of 12 positions where dividers can occur). In this case, the upper bound is $2^{12} = 4096$. Complicating this yet again is that each spy has to throw either the same number of groups of stones, or the first spy throws one more group of stones than the second spy.
So, for his own 13 stones, each spy needs to decide on a way to divide them into, say, 6 or 7 groups. If each spy decides beforehand to divide the stones into exactly 7 groups to throw, this gives a total of $\binom{12}{6} = 924$ choices in where to put the dividers, which is our first solution that actually works. From here, we have to work up. Note that this first naïve solution doesn't take advantage of the fact that the first spy can throw one more group of stones than the second, or that either spy can throw fewer than 13 stones. So there's some information that we've ended up discarding. The hockey stick theorem states that any number $\displaystyle \binom{n}{k}$ is equal to $\displaystyle \sum_{m=0}^{k} \binom{n-1-m}{k-m}$ (the theorem gets its name from the way those numbers form a hockey stick on Pascal's Triangle). So supposing we instead arrange that spy 2 can throw anywhere from 7 to 13 stones in 7 groups depending on how many spy 1 throws, he can still get a total of $\binom{13}{6} = 1716$ combinations. The algorithm then becomes: Both spies determine which arrangement of stones to throw beforehand, with the restriction that they each throw exactly 7 groups of no more than 13 stones. They come to the river and alternate throwing groups of stones that correspond to their number. Spy 1 then throws all the remaining stones into the river, and they depart. This allows them to exchange two numbers up to $1716$, which is the same number you got up to. Now, we note that in some of the above cases, we have some stones left over that Spy 1 has to throw away. Could we put these to better use? In $\binom{11}{5} = 462$ cases, a spy will have 1 stone left. In $\binom{10}{4} = 210$ cases, a spy will have 2 stones left. In $\binom{9}{3} = 84$ cases, a spy will have 3 stones left, etc.
In each of these cases, the spy can throw any number from $1$ to $n$ stones, but Spy 2 cannot throw any stones if Spy 1 doesn't throw at least $1$ first, so we consider the worst-case scenario where Spy 1 has thrown all his stones but Spy 2 still has $n$ to throw. Spy 1 throws a stone, leaving $n-1$ for Spy 2. Spy 2 then throws any number from $1$ to $n-1$ and Spy 1 throws the rest away. This algorithm will still work for any other number of stones left. This doesn't really do anything in the case where there are only 1 or 2 stones left (in each of these cases, Spy 2 either has no stones to throw or must throw exactly 1 stone, which doesn't give any information). However, for 3 or more stones, we get the following improvements: For $s = 3$, we can express $2$ cases per arrangement, for an improvement of $\binom{9}{3} \times (2-1) = 84$ more cases. For $s = 4$, we can express $3$ cases per arrangement, for an improvement of $\binom{8}{2} \times (3-1) = 56$ more cases. For $s = 5$, we can express $4$ cases per arrangement, for an improvement of $\binom{7}{1} \times (4-1) = 21$ more cases. For $s = 6$, we can express $5$ cases per arrangement, for an improvement of $\binom{6}{0} \times (5-1) = 4$ more cases. All together, we get $84 + 56 + 21 + 4 = 165$ extra cases from the extra stones, bringing the total up to $1881$. I can't see any elegant improvements to make on this algorithm past that, though. If you were to make a computer program to traverse all the possibilities, I suppose you could get to $2286$, but it would probably require a whole different approach, potentially involving the same sequence of throws from one spy representing different numbers depending on how the other spy threw his stones.
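The binomial counts used above can be verified mechanically; a quick sanity check using only the Python standard library (Python 3.8+ for `math.comb`):

```python
from math import comb

# Counts quoted in the argument above.
assert comb(12, 6) == 924    # both spies commit to exactly 7 groups each
assert comb(13, 6) == 1716   # improvement via the hockey stick identity

# Hockey stick identity with n = 13, k = 6: C(13,6) = sum_m C(12-m, 6-m)
assert comb(13, 6) == sum(comb(12 - m, 6 - m) for m in range(7))

# Leftover-stone case counts.
assert comb(11, 5) == 462    # 1 stone left
assert comb(10, 4) == 210    # 2 stones left
assert comb(9, 3) == 84      # 3 stones left

print("all counts check out")
```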
In Peskin's QFT book, page 294, he formally addresses the quantization of the EM field, whose propagator is $$\frac{-ig_{\mu\nu}}{k^2+i\epsilon}$$ Now that we have the functional integral quantization method at our command, let us apply it to the derivation of this expression. Consider the functional integral $$\int DAe^{iS[A]},$$ Actually I expected to see how he would derive the generating functional $\int DAe^{i(S[A]+J^\mu A_\mu)}$ from the canonical quantization before proceeding to introduce the Faddeev-Popov trick. So is there a way to do this derivation, i.e., from the operator formalism using the Hamiltonian, or must we presume the validity of this path integral as a starting point, which is probably what Peskin did (I guess)? This post imported from StackExchange Physics at 2014-05-04 11:29 (UCT), posted by SE-user LYg
When we get a circuit such as the following: How do we define the cut-off frequency? Is it still $$f_c = \frac1{2\pi R_1C_1}$$ since \$f_c\$ is defined for \$X_{c1} = R_1\$? Or is it defined so that \$V_{out}\$ (the one after OA3) is \$0.707V_2\$? The cutoff frequency is defined as the -3dB point, where 0dB is defined as the amplitude of the signal in the passband. So it's still \$\frac{1}{2πR_1 C_1}\$. It's defined to be the half-power point. Since power is proportional to \$V^2\$ (and \$I^2\$ for that matter), one half power is when \$V_\text{OUT}=\frac{V_\text{IN}}{\sqrt{2}}\approx 0.7071\cdot V_\text{IN}\$. There are other definitions. Different filter types may set the bar elsewhere (Chebyshev, for example.) My own way of looking at it is that the critical point is when the \$2^\text{nd}\$ derivative of phase with respect to frequency goes through zero. But that's my arbitrary choice and it incorporates the effects of nearby poles and zeros. So just ignore me on that point. How do we define the cutoff frequency in an active Op-Amp filter? For a simple RC filter the so-called cut-off frequency is when the impedance of the capacitor equals the resistance of the resistor i.e.: - \$\dfrac{1}{2\pi f C} = R\$ Re-arranging we get \$f = \dfrac{1}{2\pi C R}\$ In your circuit you do have op-amps but, they are only providing "gain" and this does not alter the relationship between cut-off frequency, C and R except in the case when the cut-off frequency is so high that the op-amps can no longer provide that gain.
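The -3 dB claim is easy to verify numerically. A minimal sketch with hypothetical component values (not the ones in the circuit above), evaluating the first-order low-pass transfer function at the formula's cutoff:

```python
import math

# Hypothetical first-order RC low-pass values.
R, C = 10e3, 10e-9                 # 10 kOhm, 10 nF
fc = 1 / (2 * math.pi * R * C)     # formula cutoff frequency

# Magnitude of H(f) = 1 / (1 + j*2*pi*f*R*C)
def gain(f):
    return abs(1 / (1 + 1j * 2 * math.pi * f * R * C))

# At f = fc the magnitude is 1/sqrt(2) ~ 0.707: the -3 dB half-power point.
# An ideal op-amp gain stage only scales this curve; it does not move fc.
print(round(gain(fc), 4))  # 0.7071
```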
I have the following functions: a) $\displaystyle f:\mathbb{C}\backslash\{0\}\rightarrow\mathbb{C},\ f(z)=\frac{1}{e^{\frac{1}{z}}-1}$ b) $\displaystyle f:\mathbb{C}\backslash\{0,2\}\rightarrow\mathbb{C},\ f(z)=\frac{\sin z ^2}{z^2(z-2)}$ c) $\displaystyle f:\mathbb{C}\backslash\{0\}\rightarrow\mathbb{C},\ f(z)=\cos\left(\frac{1}{z}\right)$ d) $\displaystyle f:\mathbb{C}\backslash\{0\}\rightarrow\mathbb{C},\ f(z)=\frac{1}{1-\cos\left(\frac{1}{z}\right)}$ e) $\displaystyle f:\mathbb{C}\backslash\{0\}\rightarrow\mathbb{C},\ f(z)=\frac{1}{\sin\left(\frac{1}{z}\right)}$ What would be the quickest approach to determine if $f$ has a removable singularity, a pole or an essential singularity? What would be the thinking $behind$ the approach? Edit: What I know/ What I have tried: I know that if we have an open set $\Omega \subseteq \mathbb{C}$, then we call a point $a$, at which $f$ is not analytic but around which $f$ is otherwise analytic, an isolated singularity ($f \in H(\Omega \backslash \{a\})$). The functions in (a)-(e) are not defined at some points. So I suspect that these are the first candidates for singularities. For instance in (a), it would be 0. In (b), it would be 0 and 2. Question: Could there be any other points where these functions are not analytic? Let's call our isolated singularity $a$. Furthermore I know that we have 3 types of singularities: 1) removable This would be the case when $f$ is bounded on the disk $D(a,r)$ for some $r>0$. 2) pole There are $c_1, ... , c_m \in \mathbb{C},\ m\in\mathbb{N}$ with $c_m \neq 0$, so that: $$f(z)-\sum\limits_{k=1}^m c_k\cdot\frac{1}{(z-a)^k},\ z \in \Omega \backslash \{a\}$$ has a removable singularity in $a$; then we call $a$ a pole. We also know that in this case: $|f(z)|\rightarrow \infty$ when $z\rightarrow a$. 3) essential If, for every disk $D(a,r) \subseteq \Omega$, the image $f(D(a,r)\backslash\{a\})$ is dense in $\mathbb{C}$, then we call $a$ an essential singularity.
The books that I have been using (Zill - Complex Analysis and Murray Spiegel - Complex Analysis) both expand the function as a Laurent series and then check the singularities. But how do I do this, if I use the definitions above? It doesn't seem to me to be so straightforward... What I would like is to learn a method which allows me to do the following: I look at the function and then I try approach X to determine if it has a removable singularity. If not, I continue with approach Y to see if we have a pole and, if not, Z, to see if we have an essential singularity. An algorithmic set of steps, so to speak, to check such functions as presented in (a) to (e). Edit 2: This is not homework and I would start a bounty if I could, because I need to understand how this works by tomorrow. Unfortunately I can start a bounty only tomorrow... Edit 3: Is this so easy? Because using the definitions, I am getting nowhere in determining the types of singularities...
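One mechanical first step (not a full classification) can be automated with a CAS: if the limit at the suspect point exists and is finite, the singularity there is removable; if $(z-a)f(z)$ has a finite nonzero limit, $a$ is a simple pole. A sketch for example (b), reading "$\sin z^2$" as $\sin(z^2)$ (an assumption, since the original is ambiguous):

```python
from sympy import symbols, sin, limit

z = symbols('z')
f = sin(z**2) / (z**2 * (z - 2))

# At z = 0: sin(z^2)/z^2 -> 1, so f -> 1/(0 - 2) = -1/2, finite => removable.
L0 = limit(f, z, 0)
print(L0)  # -1/2

# At z = 2: (z - 2)*f = sin(z^2)/z^2 is continuous there, with value
# sin(4)/4, which is finite and nonzero => a simple pole at z = 2.
L2 = limit((z - 2) * f, z, 2)
print(L2)  # sin(4)/4
```

For essential singularities (as in (a), (c), (d), (e)) no such limit exists, which is why the books fall back on the Laurent expansion.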
I am looking at past paper questions and I'm a little stuck on this one. I have the following system of ODEs: $\dot{x}=(\epsilon x+2y)(x+1)$ $\dot{y}=(-x+\epsilon y)(x+1)$ where $\epsilon$ is a parameter. a) Show that $L(x,y)=ax^2+by^2$ is a Lyapunov function for the equilibrium at the origin of the system of the ODEs above if $\epsilon \leq 0$ and for suitable $a,b >0$. Give an example of suitable $a,b$. b) What does the Lyapunov function tell us about the stability of the origin for $\epsilon <0$ and for $ \epsilon =0$? Okay, so my attempt: $L(0,0)=a(0)^2+b(0)^2=0$ $L(x,y)>0 $ when $a,b>0$ $\frac{dL}{dt}=\frac{dL}{dx} \frac{dx}{dt} + \frac{dL}{dy} \frac{dy}{dt}$ $=2xa(\epsilon x^2 +2yx +\epsilon x +2y) +2by(-x^2 + \epsilon yx -x + \epsilon y )$ But I'm not sure where to go from there, I feel like I'm overthinking it. If anyone could help to find a solution to study I'd be really appreciative!
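One way to sanity-check candidate coefficients is to let a CAS do the algebra. The choice $a=1$, $b=2$ below is a hypothetical example, not taken from the question; with it, the cross terms in $\dot L$ cancel:

```python
from sympy import symbols, simplify

x, y, e = symbols('x y epsilon')

# Right-hand sides of the system.
xdot = (e*x + 2*y) * (x + 1)
ydot = (-x + e*y) * (x + 1)

# L = a*x**2 + b*y**2 with the hypothetical choice a = 1, b = 2.
a, b = 1, 2
Ldot = 2*a*x*xdot + 2*b*y*ydot

# The cross terms cancel, leaving 2*epsilon*(x + 1)*(x**2 + 2*y**2),
# which is <= 0 near the origin (where x + 1 > 0) whenever epsilon <= 0.
print(simplify(Ldot - 2*e*(x + 1)*(x**2 + 2*y**2)))  # 0
```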
Do you have any reason to believe it is convex? In the space of nonlinear problems, convexity is the exception, not the rule. Convexity is something to be proven, not assumed. Consider the scalar case; that is, $m=n=1$. Then the problem is$$\min_{y,w\geq 0}(x-yw)^2=\min_{y,w\geq 0}x^2-2xyw+y^2w^2$$ The gradient and Hessian of $\phi_x(y,w)=x^2-2xyw+y^2w^2$ are$$\nabla\phi_x(y,w)=\begin{bmatrix} 2yw^2 - 2xw \\ 2y^2w - 2xy \end{bmatrix}$$$$\nabla^2\phi_x(y,w)=\begin{bmatrix} 2w^2 & 4yw - 2x \\ 4yw - 2x & 2y^2 \end{bmatrix}$$The Hessian is not positive semidefinite for all $x,y,w\geq 0$. For example, $$\nabla^2\phi_1(2,1)=\begin{bmatrix} 2 & 6 \\ 6 & 8 \end{bmatrix}, \quad\lambda_{\min}(\nabla^2\phi_1(2,1))=-1.7082$$
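The numerical example is easy to reproduce; a quick check of the Hessian at $x=1$, $(y,w)=(2,1)$:

```python
import numpy as np

# Hessian of phi_x(y, w) = x^2 - 2*x*y*w + y^2*w^2 at x=1, y=2, w=1.
x, y, w = 1.0, 2.0, 1.0
H = np.array([[2 * w**2,          4 * y * w - 2 * x],
              [4 * y * w - 2 * x, 2 * y**2         ]])

# Symmetric matrix: eigvalsh gives the eigenvalues; the minimum is negative,
# so the Hessian is indefinite and phi is not convex there.
lam_min = np.linalg.eigvalsh(H).min()
print(round(lam_min, 4))  # -1.7082
```

(Exactly, $\lambda_{\min} = 5 - 3\sqrt{5}$.)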
I want to minimize $a_1 x_1+a_2 x_2+c x_1 \log\dfrac{x_1}{x_1+x_2}+c x_2 \log\dfrac{x_2}{x_1+x_2}$ for $x_{1,2}\ge 0$ (all the scalars $a_{1,2}<0$ and $c>0$ are real). After having trouble solving for the critical points of the Lagrangian, I thought I should check its second order derivatives, only to figure out that the Hessian matrix has a determinant that vanishes. How am I supposed to perform the optimization when I can't find critical points and even if I could, I cannot verify their status (max, min or saddle point)? First Derivatives The first order derivatives of the function are: $a_j+c\log\dfrac{x_j}{x_1+x_2}$, for $j=1,2$. I've noticed that, if either of the $x$'s becomes $0$ (achieves its lower feasible bound) then the derivative for that $x$ goes to minus infinity. This seems to imply to me that the origin is not a candidate. Second Derivatives The Hessian of the objective function is given by $\left(\begin{array}{cc} \text{c} \left(\frac{1}{x_1}-\frac{1}{x_1+x_2}\right) & -\frac{\text{c}}{x_1+x_2} \\ -\frac{\text{c}}{x_1+x_2} & \frac{\text{c} x_1}{x_2(x_1+x_2)} \\\end{array}\right)$ and has a determinant equal to zero. The question How should I conceptualize this problem? Is there something I'm missing? Where can I find info on how to tackle functions like this in optimization problems?
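The vanishing determinant is not an accident: the objective is homogeneous of degree 1 ($f(tx_1, tx_2) = t\,f(x_1, x_2)$), so it is linear along every ray from the origin and the Hessian must be degenerate in the radial direction. A symbolic check of the determinant:

```python
from sympy import symbols, log, diff, simplify, Matrix

x1, x2 = symbols('x1 x2', positive=True)
a1, a2, c = symbols('a1 a2 c')

f = a1*x1 + a2*x2 + c*x1*log(x1/(x1 + x2)) + c*x2*log(x2/(x1 + x2))

# Hessian with respect to (x1, x2).
H = Matrix([[diff(f, u, v) for v in (x1, x2)] for u in (x1, x2)])

# Singular everywhere on the positive orthant, as claimed.
print(simplify(H.det()))  # 0
```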
I greet you this day, First: read the notes. Second: view the videos. Third: solve the questions/solved examples. Fourth: check your solutions with my thoroughly-explained solutions. Fifth: check your answers with the calculators as applicable. Comments, ideas, areas of improvement, questions, and constructive criticisms are welcome. You may contact me. If you are my student, please do not contact me here. Contact me via the school's system. Thank you for visiting!!! Samuel Dominic Chukwuemeka (Samdom For Peace) B.Eng., A.A.T, M.Ed., M.S Students will: (1.) List the toolbox functions. (2.) Describe the concept of the transformation of functions. (3.) Describe the transformations done to a parent function to give the child function. (4.) Calculate the transformed coordinate of a parent function on the child function. (5.) Discuss some applications of the transformations of functions. Biology: Parents give birth to children just as in Mathematics: Parent functions "give birth" to child functions. Well, actually, parent functions are transformed to give child functions. In other words, the transformation of parent functions leads to child functions. Teacher: What do you usually do to graph any function? Student: It depends on the function. We can graph Linear Functions using a Table of Values. We can also graph Linear Functions using the Intercepts - the $x-intercept$ and the $y-intercept$ Teacher: You answered well. What about Quadratic Functions? Student: We can graph Quadratic Functions using a Table of Values. We can also graph Quadratic Functions using the Vertex and the Intercepts (the $x-intercept$ and the $y-intercept$) Teacher: Very good answer. What about Cubic Functions? Absolute Value Functions? Student: We can graph the functions using the Table of Values. Teacher: Very good! Why are we learning this topic? Rather than using the Table of Values to graph each child function, we can graph only the parent function using the Table of Values.
Then, we just transform the parent function to give the child function. Biology: The husband and wife do several positions before the "man scores a goal/goals into the woman". ☺☺☺ In other words, the husband and wife do several transformations for the wife to be pregnant and give birth to child/children. Mathematics: There are several transformations done to the parent function in order to give birth to the child function. The parent function can move up and down - Vertical Shift The parent function can move left and right - Horizontal Shift The parent function can turn over vertically - Vertical Reflection - Reflection across the $x-axis$ The parent function can turn over horizontally - Horizontal Reflection - Reflection across the $y-axis$ The parent function can be stretched vertically - Vertical Stretch The parent function can be stretched horizontally - Horizontal Stretch The parent function can be compressed vertically - Vertical Compression The parent function can be compressed horizontally - Horizontal Compression So, we can just transform the parent functions to give the child functions. We can use a Table of Values to graph the parent function. Then, we can use any of those transformations on the parent function to give us the child function. We can also use a combination of transformations (Transformation Combo) to give child functions. Food and Nutrition: (Burger King, McDonalds, Jacks, Wendy's): A combo of cheeseburger, fries, and drink, just as in Mathematics: a combination (combo) of transformations to give child functions. A combination of transformations is when we use more than one transformation to get the child function. Teacher: What happens when we have several operations in an arithmetic or algebraic expression? Student: We use the Order of Operations Teacher: In that sense, what happens when we have a child function that was obtained from several (more than one) transformations?
Student: I guess we should use the Order of Transformations Teacher: That is correct!!! Student: So, what is the order of transformations when you have more than one transformation? Teacher: We shall get to that. However, just know this: the horizontal transformations have preeminence over the vertical transformations. Student: Why is that? Teacher: What do you think? Horizontal is inside Vertical is outside Do you start your journey from "inside" and work your way "outside" OR do you start from "outside" and work your way "inside"? Student: You begin from "inside" to "outside". Teacher: Correct! This reminds me of a popular African Proverb Student: What is it? Teacher: It states that Charity begins at home Do you want another reason? Student: Sure... Teacher: When the husband and wife are alone at night in the room, which position is preeminent - horizontal or vertical? Student: I do not know Teacher: That's okay. Just know that when you have any child function that is formed as a result of a combination of transformations, the horizontal transformations should be done before the vertical transformations. For now, we shall focus on these parent functions. The parent functions are: (1.) Identity Function or Linear Function: $y = x$ (2.) Quadratic Function or Squaring Function: $y = x^2$ (3.) Cubic Function or Cubing Function: $y = x^3$ (4.) Positive Square Root Function: $y = \sqrt{x}$ (5.) Cube Root Function: $y = \sqrt[3]{x}$ (6.) Absolute Value Function: $y = |x|$ (7.) Reciprocal Function: $y = \dfrac{1}{x}$ Later, we shall discuss these parent functions: (9.) Exponential Function: $y = a^x$ and $y = e^x$ (10.) Logarithmic Function: $y = \log_a{x}$ and $y = \log_e{x}$ (11.) Trigonometric Function: $y = \sin x$ and $y = \cos x$ HOSH - Horizontal Shift HORE - Horizontal Reflection HOST - Horizontal Stretch HOCO - Horizontal Compression VECO - Vertical Compression VEST - Vertical Stretch VERE - Vertical Reflection VESH - Vertical Shift (1.)
ACT A point at $(-5, 7)$ in the standard $(x, y)$ coordinate plane is translated right $7$ coordinate units and down $5$ coordinate units. What are the coordinates of the point after the translation? (2.) ACT In the standard $(x, y)$ coordinate plane, $A'$ is the image resulting from the reflection of the point $A(2, -3)$ across the $y-axis$. What are the coordinates of $A'$? (3.) ACT In the standard $(x, y)$ coordinate plane, the coordinates of the $y-intercept$ of the graph of the function $y = f(x)$ are $(0, -2)$. What are the coordinates of the $y-intercept$ of the graph of the function $y = f(x) - 3$?
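The "horizontal before vertical" rule above can be sketched in code. For a child function $g(x) = a\,f(b(x-h)) + k$, a point $(p, q)$ on the parent graph (with $q = f(p)$) maps to $(p/b + h,\; aq + k)$ on the child graph. The parent function and numbers below are hypothetical illustrations, not taken from the ACT items:

```python
# Map a parent-graph point to the child graph of g(x) = a*f(b*(x - h)) + k.
# Horizontal transformations act on the input (inside the function),
# vertical ones on the output (outside), hence the order of operations.
def transform_point(p, q, a=1.0, b=1.0, h=0.0, k=0.0):
    return (p / b + h, a * q + k)

# Hypothetical example: parent y = x^2 with the point (2, 4);
# child y = -2*(x - 3)^2 + 1, i.e. a = -2, b = 1, h = 3, k = 1.
print(transform_point(2, 4, a=-2, b=1, h=3, k=1))  # (5.0, -7.0)
```

Checking: $g(5) = -2(5-3)^2 + 1 = -7$, which matches the mapped point.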
You're looking for the equation of Gibbs free energy: $$\Delta G^\circ =\Delta H^\circ - T\Delta S^\circ$$ Per Wikipedia's Gibbs free energy page: The equation can be also seen from the perspective of the system taken together with its surroundings (the rest of the universe). First assume that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. This is reflected in a negative ΔG, and the reaction is called exergonic. So effectively you're trying to deduce the $T$ that produces $\Delta G = 0$, which is the "tipping point" (the reaction will then occur for $\Delta G < 0$). So you solve for $T$ in: $0 = \Delta H - T \Delta S$ Thereafter, as $T \rightarrow \infty$ you see that $\Delta G$ keeps decreasing (provided $\Delta S > 0$), making the reaction more favorable. Thus $\Delta S_{system} + \Delta S_{surroundings} \ge 0$ is a fancy prerequisite for reaction occurrence, which we commonly associate with $\Delta G \le 0$.
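Solving $0 = \Delta H - T\Delta S$ for the tipping-point temperature is a one-liner; the values below are hypothetical illustrations (roughly the magnitude of an endothermic decomposition), not taken from the answer above:

```python
def gibbs(dH, dS, T):
    """Delta G in J/mol, for dH in J/mol, dS in J/(mol*K), T in K."""
    return dH - T * dS

dH = 178_000.0   # hypothetical endothermic reaction, J/mol
dS = 161.0       # hypothetical entropy change, J/(mol*K)

T_tip = dH / dS                        # temperature where Delta G = 0
print(round(T_tip, 1))                 # 1105.6
print(gibbs(dH, dS, T_tip + 100) < 0)  # True: favorable above the tipping point
```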
The statement$$\exists x\exists y\forall z\;\bigl((x = z) \lor (y = z)\bigr)$$asserts that there exist elements $x,y$ in the universe such that, for each $z$ in the universe, either $z=x,\;\,$or $z=y,\;\,$or both (i.e., $z$ is equal to one of $x,y$). Based on that understanding, the statement is true for a given universe if and only if the universe is nonempty and has at most two elements. If the universe has exactly one element, $a$ say, let $x=y=a$. If the universe has exactly two elements, $a,b$ say, let $x=a, y = b$. In both of the above cases, the statement is true, since any $z$ would have to be equal to one of $x,y$ (there's no way $z$ can avoid that). On the other hand, if the universe has at least $3$ elements, $a,b,c,\;$say, there's no way to choose $x,y$ so that each $z$ is equal to one of $x,y$. If such elements $x,y$ were to exist, the set $\{x,y\}$ would have at most two elements, hence the statement would be false for at least one of the test cases $z=a, z=b, z=c$. Key point: The order of the quantifiers matters. For example, if you change the statement to$$\forall z\exists x\exists y\;\bigl((x = z) \lor (y = z)\bigr)$$the new statement asserts that for each $z\;$in the universe, there exist elements $x,y\;$in the universe such that $x=z,\;$or$\;y=z,\;\,$or both. But the new statement is true in any universe since, for any choice of $z$, we can simply choose $x=z,\;$and then for $y$, we can choose any element of the universe (e.g., $y=z$).
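The case analysis above can be confirmed by brute force over small finite universes:

```python
from itertools import product

def holds(universe):
    """Truth of the statement (exists x)(exists y)(forall z)((x = z) or (y = z))."""
    return any(all(z == x or z == y for z in universe)
               for x, y in product(universe, repeat=2))

# True exactly for nonempty universes with at most two elements.
print(holds({1}))        # True
print(holds({1, 2}))     # True
print(holds({1, 2, 3}))  # False
print(holds(set()))      # False (the existential quantifiers fail on an empty universe)
```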
Evaluate $$\lim_{x \to -\infty} \left(\frac{\sqrt{1+x^2}-x}{x} \right)$$ I tried by taking $x^2$ out of the root by taking it common, i.e.: $$\lim_{x \to -\infty} \left(\frac{x\sqrt{\frac{1}{x^2}+1}-x}{x} \right)$$ and then cancelling the $x$ in the numerator and denominator: $$\lim_{x \to -\infty} \left(\frac{\sqrt{\frac{1}{x^2}+1}-1}{1} \right)$$ then substituting $x= -\infty$ in the equation, we get $$\lim_{x \to -\infty} \left(\frac{\sqrt{0+1}-1}{1} \right)$$ which equals $0$. But it is not the correct answer. What have I done wrong?
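As a numerical sanity check (not part of the original question), the limit appears to be $-2$ rather than $0$; the culprit is the step $\sqrt{x^2}=x$, which fails for $x<0$ where $\sqrt{x^2}=|x|=-x$:

```python
import math

def f(x):
    # the expression whose limit is being taken
    return (math.sqrt(1 + x * x) - x) / x

# As x -> -infinity the value approaches -2, not 0,
# because sqrt(x^2) = |x| = -x when x < 0.
for x in (-1e3, -1e6, -1e9):
    print(x, f(x))
assert abs(f(-1e9) + 2) < 1e-9
```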
One possible explanation could be based on the fact that for low loss, high $Q$ circuits, the sensitivity of the impedance to the deviation of the frequency from the resonant frequency is greater than for high loss, low $Q$ circuits. As a result, relative changes in the voltages and currents in low loss circuits are greater as well and, correspondingly, the peaks of the resonant curves associated with low loss circuits are sharper. Or, we can say that the peaks associated with the resonant curves of high loss circuits are broader. Let's take two RLC resonant circuits, high $Q$ and low $Q$, with identical reactive components, $L$ and $C$, but different resistive components: low, $r$, for the high $Q$ circuit and high, $R$, for the low $Q$ circuit. At the resonant frequency, the impedances of both resonant circuits, high $Q$ and low $Q$, are purely resistive. When the frequency deviates from the resonant frequency, the same reactive component, capacitive or inductive (depending on the direction of the frequency change), is added to the impedances of both circuits, but, due to the lower resistance in the high $Q$ circuit, this reactive component becomes more dominant in the high $Q$ circuit than in the low $Q$ circuit, leading to a more dramatic change in the impedance of the high $Q$ circuit (in both magnitude and phase). Let's say that both circuits are driven by the same AC voltage source $V$. At the resonant frequency, $f_0$, the impedances of the two circuits will be defined by their resistances, $Z_1=r$ and $Z_2=R$. Correspondingly, the currents in the two circuits will be $I_1=V/r$ and $I_2=V/R$. If the frequency increases by $\Delta f$, the impedance of the inductors will exceed the impedance of the capacitors in both circuits, say, by $X$. The magnitudes of the new impedances for the high $Q$ and low $Q$ circuits will become $Z_1'=\sqrt {r^2+X^2}$ and $Z_2'=\sqrt {R^2+X^2}$, respectively.
A relative increase in the magnitude of the non-resonant impedance over the resonant impedance will be greater for the high $Q$ circuit than for the low $Q$ circuit:$$\frac {|Z_1'|}{|Z_1|} > \frac {|Z_2'|}{|Z_2|} \, ,$$since$$\sqrt {1+ \left(\frac {X} {r} \right)^2} > \sqrt {1+ \left(\frac {X} {R} \right)^2} \, .$$ Correspondingly, the relative decrease of the non-resonant current from the resonant current will be greater for the high $Q$ circuit, leading to a sharper resonant curve. The same effect can be demonstrated for parallel RLC circuits, except that in that case a larger resistance gives a larger $Q$.
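A quick numeric sketch of this inequality (the component values are arbitrary illustrations, not taken from any particular circuit):

```python
import math

r, R = 1.0, 100.0   # series resistances: low r -> high Q, high R -> low Q
X = 10.0            # net reactance |X_L - X_C| off resonance (illustrative)

# Relative growth of the impedance magnitude away from resonance
growth_high_Q = math.sqrt(1 + (X / r) ** 2)   # large: impedance jumps ~10x
growth_low_Q = math.sqrt(1 + (X / R) ** 2)    # small: impedance barely moves

assert growth_high_Q > growth_low_Q
print(growth_high_Q, growth_low_Q)
```

The high $Q$ circuit's impedance magnitude grows by a factor of about ten off resonance, while the low $Q$ circuit's barely changes, which is exactly the sharp-versus-broad peak behaviour described above.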
My physics textbook says when a rock is lifted gravity does negative work and increases the gravitational potential energy. The problem with reading this statement in isolation is that it is ambiguous. The first thing which is not clear is the system which is being considered. Is it the rock alone or the rock & the Earth together? The implication from the statement is that the system is the rock & the Earth, as the rock by itself cannot have gravitational potential energy whereas the rock & the Earth can. This is an important distinction because for the rock system the gravitational attraction on the rock due to the Earth is an external force, whereas for the rock & Earth system the gravitational attraction on the rock due to the Earth is an internal force with its Newton's third law pair being the gravitational attraction on the Earth due to the rock. To simplify matters consider what happens to a rock which is moving upwards with some kinetic energy $K_{\rm start}$ and then some time later has a kinetic energy $K_{\rm finish}$. If the system is the rock alone then there is only one external force acting on the rock, which is a downward force, the gravitational attraction on the rock due to the Earth $W$. If the rock has moved up a distance $h$ then the work done on the rock by the gravitational force is $-W\,h$, with the minus sign being there because the displacement of the rock upwards is in the opposite direction to the downward external force on the rock. So this is your "negative work done (on the rock system) by gravity". Now using the work-energy theorem gives $-Wh = K_{\rm finish}- K_{\rm start}= \Delta K$, noting that the right-hand side of this equation will be negative.
Any other external force, e.g. your hand applying a force on the rock, will contribute to the left-hand side (work done by an external force) of the equation, so if you apply an upward force equal in magnitude to the weight of the rock then the net work done will be zero and there will be no change in the kinetic energy of the rock. If the system is the rock & the Earth then there are no external forces acting on the system, but there will be the two equal magnitude and opposite direction gravitational forces acting on the rock and the Earth. Often an assumption is made that the mass of the Earth $m_{\rm Earth}$ is much, much greater than that of the rock $m_{\rm rock}$; in this case I want to make that assumption later in the analysis, but I do want to assume that if the initial upward velocity of the rock was $v_{\rm rock}$ then the initial "downward" velocity of the Earth was $\dfrac{m_{\rm rock}}{m_{\rm Earth}}v_{\rm rock}$, ie the initial momentum of the whole system was zero. If the rock starts from the Earth's surface and the radius of the Earth is $r_{\rm Earth}$ then the initial gravitational potential energy of the system is $-\dfrac{Gm_{\rm Earth}m_{\rm rock}}{r_{\rm Earth}}$ and the final potential energy is $-\dfrac{Gm_{\rm Earth}m_{\rm rock}}{r_{\rm Earth}+h}$. So the change in the potential energy of the system is $-\dfrac{Gm_{\rm Earth}m_{\rm rock}}{r_{\rm Earth}+h} -\left (-\dfrac{Gm_{\rm Earth}m_{\rm rock}}{r_{\rm Earth}}\right )= \dfrac {Gm_{\rm Earth} m_{\rm rock}}{r_{\rm Earth}}\dfrac{h}{r_{\rm Earth}+h}$ Now we can make the approximation $r_{\rm Earth} \gg h$ to approximate the change in gravitational potential energy as $\dfrac {Gm_{\rm Earth} m_{\rm rock}}{r^2_{\rm Earth}}h = Wh$ where $W$ is the weight of the rock. So in the end there is a decrease in the total kinetic energy of the two components of the system and a corresponding increase in the gravitational potential energy of the system, $Wh$.
Although the internal forces acting on the Earth and the rock are of equal magnitude, because the mass of the Earth is so much greater than that of the rock the Earth undergoes a much smaller displacement than the rock, so the work done by the internal force on the Earth is negligible and the work done by the internal force on the rock will be $-Wh$, the same as before. This illustrates that the change in potential energy is equal to minus the work done by the conservative internal forces.
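The $r_{\rm Earth} \gg h$ approximation above can be checked numerically; the sketch below uses standard values for $G$, $m_{\rm Earth}$ and $r_{\rm Earth}$ and an illustrative lift height of $100$ m:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # mass of the Earth, kg
m = 1.0             # mass of the rock, kg
R = 6.371e6         # radius of the Earth, m
h = 100.0           # height the rock is lifted, m (illustrative)

# Exact change in gravitational potential energy of the rock-Earth system
dU_exact = -G * M * m / (R + h) - (-G * M * m / R)

# Flat-Earth approximation: W*h with W = G*M*m/R^2
W = G * M * m / R ** 2
dU_approx = W * h

rel_err = abs(dU_exact - dU_approx) / dU_exact
print(dU_exact, dU_approx, rel_err)   # relative error ~ h/R ~ 1.6e-5
assert rel_err < 2e-5
```

For a 100 m lift the simple $Wh$ formula is off by only about one part in $10^5$, which is why the approximation is made without comment in most textbooks.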
What is Einstein Field Equation? The Einstein Field Equation (EFE) is also known as Einstein’s equation. It is a set of ten equations extracted from Albert Einstein’s General Theory of Relativity. The EFE describes the basic interaction of gravitation. The equations were first published in 1915 by Albert Einstein as a tensor equation. Following is the Einstein Field Equation: \(G_{\mu \nu } + g_{\mu \nu }\Lambda = \frac{8 \pi G}{c^{4}}T_{\mu \nu }\) Where: \(G_{\mu\nu}\) is the Einstein tensor, given as \(R_{\mu\nu}-\frac{1}{2}R\,g_{\mu\nu}\); \(R_{\mu\nu}\) is the Ricci curvature tensor; \(R\) is the scalar curvature; \(g_{\mu\nu}\) is the metric tensor; \(\Lambda\) is the cosmological constant; \(G\) is Newton’s gravitational constant; \(c\) is the speed of light; \(T_{\mu\nu}\) is the stress-energy tensor. Einstein Field Equations Derivation Following is the derivation of the Einstein Field Equations. Einstein wanted to show that a measure of curvature = source of gravity. The source of gravity is the stress-energy tensor, which in the Newtonian limit reduces as: \(T^{\alpha \beta }=\begin{bmatrix} \rho & 0 &0 & 0\\ 0&P &0 &0 \\ 0 &0 &P &0 \\ 0&0 &0 &P \end{bmatrix}\rightarrow \begin{bmatrix} \rho &0 &0 &0 \\ 0& 0 &0 &0 \\ 0 & 0& 0& 0\\ 0& 0 &0 &0 \end{bmatrix}\) In the above matrix, \(P\) tends to zero because, for Newton’s gravity, the mass density is the source of gravity. The equation of motion (the geodesic equation) is: \(\frac{du^{i}}{d\tau }+\Gamma _{\nu\alpha }^{i}u^{\nu}u^{\alpha }=0\) For slow motion only the time components of \(u\) survive, giving \(\frac{du^{i}}{d\tau }+\Gamma _{00}^{i}=0\) With \(\Gamma _{00}^{i}=-\frac{1}{2}\frac{\partial g_{00}}{\partial x^{i}}\) this becomes \(\frac{du^{i}}{d\tau }-\frac{1}{2}\frac{\partial g_{00}}{\partial x^{i}}=0\) Comparing with Newton’s equation of motion \(\frac{du^{i}}{d\tau }+\frac{\partial \phi }{\partial x^{i}}=0\) gives \(g_{00}=-(1+2\phi )\) But we know that \(\bigtriangledown ^{2}\phi =4\pi G\rho\) Therefore, matching the two descriptions yields an equation of the form \(R^{\mu \nu }=-8\pi GT^{\mu \nu }\) where \(8\pi G\) is the constant of proportionality. What is Einstein Tensor? The Einstein tensor is also known as the trace-reversed Ricci tensor.
In the Einstein Field Equation, it is used for describing spacetime curvature in a way that is consistent with the conservation of energy and momentum. It is defined as: \(G_{\mu\nu} = R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\) Where: \(R_{\mu\nu}\) is the Ricci tensor; \(g_{\mu\nu}\) is the metric tensor; \(R\) is the scalar curvature. What is the stress-energy tensor? The stress-energy tensor \(T^{\alpha\beta}\) is a symmetric tensor which is used for describing the density and flux of energy and momentum. Its symmetry is expressed as: \(T^{\alpha\beta} = T^{\beta\alpha}\)
Please help me to find the sum of $\sum\limits_{n=1}^\infty \frac{\left(\frac{3 - \sqrt 5}{2}\right)^n}{n^3}$ Is there any special technique to solve this one? I don't know anything about the polylogarithm function myself, so if I were forced to solve the problem I would use an elementary technique. Here, through a series of differentiations and multiplications by $x$, I can deduce a differential equation for a function with the given series expansion: Assume $f(x) = \sum_{n=1}^{\infty} {x^n \over n^3}$. Then we want $f\left({3-\sqrt{5} \over 2} \right)$. This series is dominated by the geometric series, so it must converge absolutely at the given point (which has modulus less than 1). Now, term-by-term we have $$ x{d\over dx} \left[x {d\over dx} \left[ x{d\over dx} \left[{x^n \over n^3} \right] \right] \right] = x {d\over dx} \left[ x{d\over dx} \left[{x^n \over n^2} \right] \right] = \dots = x^n, $$ so $$ \sum_{n=1}^{\infty} x{d\over dx} \left[x {d\over dx} \left[ x{d\over dx} \left[{x^n \over n^3} \right] \right] \right] = \sum_{n=1}^{\infty} x^n = {x\over 1-x}.$$ Moving the sum through the derivatives, we find $x(x(xf'(x))')' = {x\over 1-x}$. Solving this differential equation gives an expression for $f$ which can be used to evaluate the sum.
Practically speaking, considering the terms in $$S_N=\sum_{n=1}^N \frac{a^n}{n^3}$$ with $a=\frac{1}{2} \left(3-\sqrt{5}\right)$, you can notice that $a$ is rather small ($\approx 0.382$), which means that the numerator $a^n$ decreases quite fast while the denominator $n^3$ increases quite fast; as a result, each term is significantly smaller than the previous one. Let us compute the partial sums to $6$ significant figures: $$S_1=0.381966$$ $$S_2=0.400203$$ $$S_3=0.402267$$ $$S_4=0.402600$$ $$S_5=0.402665$$ $$S_6=0.402679$$ $$S_7=0.402683$$ $$S_8=0.402684$$ Now, the question is: how many terms $k$ have to be added in order to reach an accuracy such that $$\frac{a^k}{k^3} \leq \epsilon$$ The answer is given by the solution of $a^k = k^3 \epsilon$, which can be expressed in terms of the Lambert function (another special function): $$k=-\frac{3 }{\log (a)}W\left(\frac{1}{3 \sqrt[3]{-\frac{\epsilon }{\log ^3(a)}}}\right)$$ which can seem very complex. However, very good approximations exist for the Lambert function, such as $$W(x)\approx L_1-L_2+\frac{L_2}{L_1}$$ where $L_1=\log(x)$ and $L_2=\log(L_1)$. Applied to the case $a=\frac{1}{2} \left(3-\sqrt{5}\right)$ and $\epsilon=10^{-6}$, this gives for the argument of the Lambert function $x=32.0808$, from which $W(x)=2.58319$ and then $k=8.05213$, which is what we saw earlier. If you want to change the tolerance to $\epsilon=10^{-p}$, a quick and dirty fit shows that the number of terms to be added is approximately $$k=0.0148695 p^2+1.67776 p-2.80166$$ According to Maple, $$ \operatorname{polylog}\left(3, \dfrac{3-\sqrt{5}}{2}\right) = \dfrac{4}{5} \zeta(3) + \dfrac{\pi^2}{15} \ln \left(\dfrac{3-\sqrt{5}}{2}\right) - \dfrac{1}{12} \ln^3\left(\dfrac{3-\sqrt{5}}{2}\right) $$ I don't know where it gets this rather remarkable identity.
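The closed form reported by Maple agrees with the partial sums computed above; here is a quick check (Apéry's constant $\zeta(3)$ is hard-coded rather than computed):

```python
import math

a = (3 - math.sqrt(5)) / 2
zeta3 = 1.2020569031595943        # Apery's constant, zeta(3)

# Partial sum of Li_3(a) = sum_{n>=1} a^n / n^3; the tail beyond n=80 is negligible
partial = sum(a ** n / n ** 3 for n in range(1, 80))

# Maple's closed form
closed = (4 / 5) * zeta3 + (math.pi ** 2 / 15) * math.log(a) \
         - (1 / 12) * math.log(a) ** 3

print(partial, closed)            # both ~ 0.402684, matching S_8 above
assert abs(partial - closed) < 1e-12
```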
I am working on a class project; the passage I quoted here is from the book Complex Numbers & Geometry by Hahn, p. 64. For any four complex numbers $a$, $b$, $c$, $d$, the following identity is easy to verify: $$(a-b)(c-d)+(a-d)(b-c) = (a-c)(b-d).$$ By the triangle inequality, we obtain $$|a-b||c-d|+|a-d||b-c| \ge |a-c||b-d|.$$ Now let us investigate when the inequality becomes an equality. In the case of the triangle inequality, $$|z_1 + z_2| \le |z_1| + |z_2|,$$ equality holds iff $z_1/z_2$ is a positive real number (provided $z_1\cdot z_2 \neq 0$). Thus we are looking for a condition to ensure that $\frac{(a-b)(c-d)}{(a-d)(b-c)}$ is a positive real number. My question is not so much about complex numbers but about how you go from $(z_1/z_2)>0$ to saying that $(a-b)(c-d)/(a-d)(b-c)$ has to be also $>0$? As elsewhere, the author tends to skip lots of detail, and I think he also skips detail here. Thank you very much for your time.
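Not an answer, but both the identity and the resulting modulus inequality are easy to sanity-check numerically (the sample points below are arbitrary):

```python
# Check the algebraic identity and the modulus (Ptolemy-type) inequality
# for a few arbitrary sample points in the complex plane.
samples = [
    (0, 1, 1 + 1j, 1j),       # vertices of a square, in cyclic order
    (0, 1, 2, 1j),            # a non-concyclic ordering
    (2 + 1j, -1, 3j, 5 - 2j),
]

for a, b, c, d in samples:
    lhs = (a - b) * (c - d) + (a - d) * (b - c)
    rhs = (a - c) * (b - d)
    assert abs(lhs - rhs) < 1e-12                       # the identity
    assert abs(a - b) * abs(c - d) + abs(a - d) * abs(b - c) \
           >= abs(a - c) * abs(b - d) - 1e-12            # triangle inequality
print("identity and inequality hold for all samples")
```

For the square (first sample) the inequality is an equality, consistent with the book's point that equality characterizes concyclic points taken in order.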
For the standard 4th order Runge-Kutta, where the system is assumed to be smooth (so that the RHS has no discontinuous points): $\mathbf{y'} = \mathbf{F}(t,\mathbf{y})$ $\mathbf{y}(t_0) = \mathbf{y_0}$ $$\mathbf{y_{n+1}} = \mathbf{y_n} + \frac{h}{6}(\mathbf{k_1} + 2\mathbf{k_2} + 2\mathbf{k_3} + \mathbf{k_4})$$ where $$\mathbf{k_1 = F(t_n,y_n)}$$ $$\mathbf{k_2 = F(t_n + h/2, y_n + hk_1/2)}$$ $$\mathbf{k_3 = F(t_n + h/2, y_n + hk_2/2)}$$ $$\mathbf{k_4 = F(t_n + h, y_n + hk_3)}$$ Each step has an error of $O(|h|^5)$ and the final result has a total error of $O(h^4)$. You can confirm this in Wikipedia. Now assume that we do not have $y'(t_0) = F(t_0,y_0)$ defined (so that the system may not be continuous/differentiable at the initial point) and we modify the initial point to be $\hat{y}(t_0 + \delta) = y_0 + \epsilon$. This is done by a series approximation expanded at $t_0$ and evaluated at $t_0 + \delta$. So $$\hat{y}(t_0 + \delta) := a_0 + a_1(t_0 + \delta)+a_2(t_0 + \delta)^2 + \dots + O((t_0+\delta)^n)$$ where the $a_i$ are coefficients of the series determined by $y(t_0) = y_0$ (if $t_0 = 0$, then $a_0 = y_0$ for instance, and the $a_i$ are determined recursively for $i = 1,\dots,n$. For simplicity, we will assume $t_0 = 0$, but I will still treat it as a variable for the rest of this discussion). It is known that the error of the polynomial approximation has order $O(|t|^{n+1})$ if the polynomial expansion is $\hat{y} = \sum_i^n a_i t^i$. Now this is going to sound like a simple question, but given the error in $\hat{y}$, what is the associated error in the Runge-Kutta? Keep in mind the errors are in different variables: the Runge-Kutta error is in terms of its step size $h$ and the polynomial error is in terms of its variable $t$. How do I take a Big Oh of that?
I only know how to combine them if they are the same variable, by the maximum property. Expansion Question (EDIT): The process assumes both sides of $\mathbf{y'} = \mathbf{F}(t,\mathbf{y})$ take a polynomial form $y = \sum a_{ij} t^i$ and there will be recursions to solve for the corresponding coefficients. I am using $a_{ij}$ so we do not forget that the system $\mathbf{y'} = \mathbf{F}(t,\mathbf{y})$ could contain $n > 2$ equations; most numerical analysis books assume $2$ or $3$ equations. Also, this recursion process has no calculus in it. This is similar to a power series approximation. If this isn't the right place to post, can someone move it? Thanks
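For reference, here is a minimal sketch of the classical RK4 step exactly as written above; the test problem $y'=y$, $y(0)=1$ is my own illustration, chosen so that the $O(h^4)$ global error can be seen by halving $h$:

```python
import math

def rk4(F, t0, y0, h, steps):
    """Classical 4th-order Runge-Kutta for scalar y' = F(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = F(t, y)
        k2 = F(t + h / 2, y + h * k1 / 2)
        k3 = F(t + h / 2, y + h * k2 / 2)
        k4 = F(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Smooth test problem y' = y, y(0) = 1; exact solution is e^t, so y(1) = e.
F = lambda t, y: y
err1 = abs(rk4(F, 0.0, 1.0, 0.1, 10) - math.e)
err2 = abs(rk4(F, 0.0, 1.0, 0.05, 20) - math.e)

ratio = err1 / err2
print(err1, err2, ratio)   # ratio ~ 16, consistent with O(h^4) global error
assert 12 < ratio < 20
```

Halving $h$ cuts the global error by roughly $2^4 = 16$, which is the $O(h^4)$ behaviour quoted above for a smooth RHS; the whole point of the question is what happens to this rate when the smoothness assumption at $t_0$ fails.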
Sum of Euler Phi Function over Divisors Theorem Let $n \in \Z_{>0}$ be a strictly positive integer. Then $\displaystyle \sum_{d \mathop \divides n} \map \phi d = n$ where: $\displaystyle \sum_{d \mathop \divides n}$ denotes the sum over all of the divisors of $n$, and $\map \phi d$ is the Euler $\phi$ function, the number of positive integers not exceeding $d$ that are prime to $d$. Proof Let us define: $S_d = \set {m \in \Z: 1 \le m \le n, \gcd \set {m, n} = d}$. That is, $S_d$ is the set of all numbers less than or equal to $n$ whose GCD with $n$ is $d$. Now from Integers Divided by GCD are Coprime we have: $\gcd \set {m, n} = d \iff \dfrac m d, \dfrac n d \in \Z: \dfrac m d \perp \dfrac n d$ That is, by definition of the Euler phi function: $\card {S_d} = \map \phi {\dfrac n d}$ From the definition of the $S_d$, it follows that for all $1 \le m \le n$: $\exists d \divides n: m \in S_d$ Therefore: $\displaystyle \set {1, \ldots, n} = \bigcup_{d \mathop \divides n} S_d$ Moreover, it follows from the definition of the $S_d$ that they are pairwise disjoint. Now from the Corollary to Cardinality of Set Union, it follows that: $\displaystyle n = \sum_{d \mathop \divides n} \card {S_d} = \sum_{d \mathop \divides n} \map \phi {\dfrac n d}$ Finally, since $\dfrac n d$ runs over all the divisors of $n$ exactly when $d$ does: $\displaystyle \sum_{d \mathop \divides n} \map \phi {\dfrac n d} = \sum_{d \mathop \divides n} \map \phi d$ and hence the result. $\blacksquare$
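The theorem is easy to verify computationally; a brute-force sketch (with a gcd-based $\phi$) for small $n$:

```python
from math import gcd

def phi(d):
    """Euler phi: count of 1 <= m <= d with gcd(m, d) = 1."""
    return sum(1 for m in range(1, d + 1) if gcd(m, d) == 1)

def divisor_phi_sum(n):
    # sum of phi(d) over all divisors d of n
    return sum(phi(d) for d in range(1, n + 1) if n % d == 0)

for n in range(1, 201):
    assert divisor_phi_sum(n) == n
print("sum over d|n of phi(d) = n verified for n = 1..200")
```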
Improvement in Food Resources Chapter Overview Introduction Improvement in crop yields Crop variety improvement Crop production management Crop protection management Fish production (Pisciculture) Bee-Keeping (Apiculture) Chapter at a Glance and Glossary We know that all living organisms need food to get energy and nutrients like protein, carbohydrates, fats, vitamins and minerals. All these nutrients are required for the maintenance of our body, development, growth, proper health and sustenance. These nutrients are provided by both plants and animals, which means we are directly or indirectly dependent on agriculture and animal husbandry. Hence the improvement of agriculture as well as of animals has always been inevitable since time immemorial. But even then, it is natural to think over some burning questions like: Can the current levels of production be sufficient for us? Why is it necessary to improve the plants and animals? How can we meet the current demands of production? The reasons for all these questions lie in the following facts: Population Explosion: Our country is the second largest in population in the world with about 1.2 billion people. The problem is aggravated day by day by the continuous rise in population. At this rate it is expected that the Indian population may reach around 1.3 billion by the end of 2020. For supplying food to the ever increasing population of our country, it is necessary that we increase the production of agricultural and animal products, because it is estimated that in future we will need more than a quarter of a billion tonnes of grains every year to feed our people. The increase in food yield can be achieved either by farming on more land or by improving production efficiency through modern scientific practices.
The first mode, increasing the farming land, is not easily possible in our thickly populated country. Hence, the second option is the best one available to us. Farming Revolution: So far, by applying modern scientific methods, we have met our demand for food to some extent: the green revolution increased food grain production, and the white revolution led to better availability and more efficient use of milk. Other revolutions like the blue revolution (enhanced fish production) and the silver revolution (increased poultry production) have also helped to cope with the increasing demand for food. Tissues Chapter Overview Introduction Division of Labour Plant and Animal Tissues are different Plant tissues Meristematic tissue Permanent tissue Animal tissue Epithelial tissue Muscular tissue Connective tissues Nervous tissue You have studied in the previous chapter that all living organisms are made up of cells. They are either unicellular (e.g. diatoms, bacteria, yeast, protozoans etc.) or multicellular (e.g. frog, earthworm, dog, man, mango tree, money plant, peepal etc.). Most cells are specialized to carry out different functions, and each specialised function is taken up by a different group of cells, since these cells carry out only a particular function. For instance, in human beings, muscle cells combine together to perform contraction and relaxation to cause movements, nerve cells coordinate to carry messages, blood cells and plasma transport oxygen, carbon dioxide, food, hormones and waste materials, and so on. Similarly in plants, cells combine to perform specific functions such as transportation of food and water from one part to the other, synthesis of food material, storage of reserve foods, etc. Thus a kind of division of labour exists among the cells of multicellular organisms to perform specific functions.
Division of Labour The body of multicellular organisms is made up of organ systems, organ systems are made up of organs, organs are composed of tissues and tissues are composed of cells. Most of these cells are specialised to carry out only a few functions efficiently. These functions are taken up by different groups of cells. Thus, we can say that there is a division of labour in multicellular organisms. "Division of labour refers to the distribution of different functions among different parts of the body of an organism which get specialized for a particular function." Cell division and cell differentiation lead to the development of specific organs, consisting of specific groups of cells to perform specific functions in the body. Moreover, the organs are also made up of different groups of cells on the basis of their functions. A particular function inside an organ is performed by a group of specialised cells which lie at a definite site in the body. The cluster of cells specially positioned and designed to perform a particular function efficiently is known as a tissue. Tissue: A group of similar or dissimilar cells that perform a common function and have a common origin is known as a tissue. The Fundamental Unit of Life Chapter Overview Introduction What are living organisms made up of? Discovery of cell Cell theory Structure of cell Plasma Membrane or Cell Membrane Osmotic Solutions Endocytosis and exocytosis Cell wall Nucleus Cytoplasm Cytosol Cell Organelles Endoplasmic Reticulum (ER) Golgi Apparatus Lysosomes Plastids Vacuoles Ribosomes All the living organisms which we see in our surroundings are essentially complex structures made up of numerous coordinated compartments usually known as cells. The cell is the fundamental structural and physiological unit of living organisms. Unicellular organisms consist of just one cell while multicellular organisms consist of several cells, which are specialised to perform distinct functions.
A unicellular organism can perform all the metabolic activities which a multicellular organism can. The cell contains all the structures and molecular constituents needed for life. What are living Organisms Made up of? We can compare a cell with a brick. Just as a building is made up of bricks, the body of a plant or an animal is made up of cells, i.e. all living organisms show cellular organisation. Some organisms such as Amoeba, Paramecium, Euglena, bacteria etc. are made up of only a single cell, hence are called unicellular or acellular. There are a large number of other organisms which are made up of millions of cells and are known as multicellular. All cells, whether they exist as unicellular organisms or as part of multicellular organisms, demonstrate certain similar basic functions such as nutrition, respiration, excretion etc. which are essential for their survival. Discovery of Cell The history of the cell began with the invention of the microscope by the Dutch scientist Anton van Leeuwenhoek (1632-1723), who observed the living cells of bacteria, Euglena, sperms, eggs and blood corpuscles of invertebrates in 1683. Robert Hooke (1635-1703), an English scientist, invented a primitive microscope by using lenses for achieving greater magnification. In such a microscope, the object to be seen was placed on a stage below and light coming from an oil flame was thrown on it by a convex mirror. [Fig. 3.1: Primitive microscope of Robert Hooke; Fig. 5-2: Cells as seen by Robert Hooke] While studying a slice of cork, Robert Hooke observed a honeycomb-like pattern under his microscope in 1665. He coined the term cell (cellulae), a Latin word which means "a little room". He published his findings in Micrographia in London in 1665. Robert Brown (1773-1858), a Scottish botanist, discovered a little sphere-like structure in the cells of the orchid root in 1831. Later he named it the nucleus.
The gel-like substance present in all living cells was termed protoplasm by Hugo von Mohl (1838) and Johannes ... Structure of the Atom Chapter Overview Introduction Thomson’s Model of Atom Rutherford’s Model of Atom Bohr’s Model of Atom Discovery of Neutron Atomic Number and Mass Number Electronic configuration (Bohr-Bury Scheme) Concept of Valency Isotopes Isobars On the basis of experimental observations, different models have been proposed for the structure of an atom. Thomson’s Model of Atom According to Thomson's model, an atom can be considered as a large sphere of uniform positive charge with a number of small negatively charged electrons scattered throughout it. This model was called the plum pudding model: the electrons, carrying the negative charge, represent the plums embedded in a pudding of positive charge. The model is also similar to a watermelon, in which the pulp represents the positive charge and the seeds denote the electrons. [Fig. 2.1: Thomson's plum-pudding model] Rutherford’s Model of Atom (Alpha Particle Scattering Experiment) Rutherford in 1911 performed an experiment which led to the downfall of Thomson's model. According to Rutherford's model: An atom contains a dense and positively charged region located at its centre, called the nucleus. All the positive charge of an atom and most of its mass is contained in the nucleus. The rest of the atom must be empty space which contains the much smaller and negatively charged electrons. The total negative charge on the electrons is equal to the total positive charge on the nucleus, so that the atom as a whole is electrically neutral. On the basis of the proposed model, the experimental observations of the scattering experiment can be explained as follows: The α-particles passing through the atom in the region of the electrons would pass straight through without any deflection. Only those particles that come into close vicinity of the positively charged nucleus get deviated from their path.
Very few $\alpha$-particles, those that collide with the nucleus, would face a rebound. Bohr’s Model of Atom In 1913 Niels Bohr, a student of Rutherford, proposed a model to account for the shortcomings of Rutherford's model. Bohr's model can be understood in terms of two postulates proposed by him. The postulates are: Postulate 1. The electrons move in definite circular paths of fixed energy around a central nucleus. [Fig. 4.1: Illustration showing the different orbits, or energy levels of fixed energy, in an atom according to Bohr's model] Postulate 2. The electron ... Atoms and Molecules Chapter Overview Introduction Laws of Chemical Combination Dalton’s Atomic Theory Atom Modern Day Symbols of Atoms of Different Elements Atomic Mass What is a molecule Molecular formulae Ion Ionic compounds Writing chemical formula of compounds Relation between molecular formula and empirical formula Valency Formula of Ionic Compounds Molecular Mass Percentage Composition of a Compound Mole Concept Molar Mass Gram-atomic Mass Around 500 B.C. an Indian philosopher, Maharishi Kanad, said in his Darshan that if we go on dividing matter, we shall get smaller and smaller particles. A stage would come beyond which further division will not be possible. He named these particles Parmanu. This concept was further elaborated by another Indian philosopher, Pakudha Katyayana, who said that these particles normally exist in a combined form which gives us various forms of matter. Around the same era, the ancient Greek philosophers Democritus (460-370 B.C.) and Leucippus suggested that if we go on dividing matter, a stage will come when further division of particles will not be possible. Democritus called these individual particles 'atoms' (which means indivisible). These ideas were based on philosophical considerations. In this chapter, we shall study atoms and molecules and related aspects, like atomic and molecular masses, the mole concept and molar masses.
We shall also learn how to write the chemical formula of a compound. Laws of Chemical Combination There are two main laws of chemical combination: (i) Law of conservation of mass, (ii) Law of definite or constant proportions. Lavoisier gave the Law of Conservation of Mass as: In every chemical reaction, the total mass of all the reactants is equal to the total mass of all the products. \[\Rightarrow \] Total mass of the substances before the reaction = Total mass of the substances after the reaction. For example, in the reaction of hydrogen \[({{H}_{2}})\] and chlorine \[(C{{l}_{2}})\] represented by \[{{H}_{2}}\] + \[C{{l}_{2}}\] \[\to \] \[2HCl\], 2 g of \[{{H}_{2}}\] reacts with 71 g of \[C{{l}_{2}}\] to give 73 g of HCl. Is Matter Around Us Pure Chapter Overview Introduction Elements and Compounds Mixture Solution Saturated solution Supersaturated solution Unsaturated solution Solubility Colloidal solution Brownian motion Tyndall effect Electrophoresis Separation of mixtures Chromatography Types of Chromatography Distillation Fractional distillation Crystallization Water purification in water works Separation based on magnetic properties Physical and chemical change Matter around us is of two types: (i) pure substances, (ii) mixtures. In this chapter we will study mixtures and pure substances. Initially, we will discuss elements and compounds. Elements and Compounds A chemical element is a pure substance and it consists of one type of atom, distinguished by its atomic number. Examples of some elements are: helium, carbon, iron, gold, silver, copper, aluminium, hydrogen, oxygen, nitrogen, sulphur, chlorine, iodine, uranium and plutonium. Our body is also composed of elements, but the composition of elements in the human body is very different from that of the Earth's crust, as can be seen from Table 2.1 given below: Table 2.1 Elements in Earth’s Crust and Human Body
When we integrate certain integrals, such as $$\int \frac{x^2}{\sqrt{16-x^2}} dx$$ we can make a substitution like $x = 4 \sin \theta$. Then we can simplify the above integral to the following: $$8 \theta - 8 \sin \theta \cos \theta + C$$ I then learned we can use a right-angled triangle to find alternate expressions for $\frac{x}{4} = \sin \theta$, such as $\frac{\sqrt{16-x^2}}{4} = \cos \theta$, and substitute for theta to find the answer $8 \arcsin \frac{x}{4} - \frac{x}{2} \sqrt{16-x^2} + C$ But clearly when I graph the two functions $$y=\arcsin \left(\frac{x}{4}\right)$$ and $$y=\arccos \left(\frac{\sqrt{16-x^2}}{4}\right)$$ they are only equal for $x \ge 0$ according to https://www.desmos.com/calculator What's going on here? Why does this work? Why can we make this equivalent triangle substitution when the functions clearly aren't equal to each other on $x < 0$?
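The discrepancy seen in Desmos can be reproduced numerically, and the computation also hints at where it comes from: $\arccos$ returns values in $[0,\pi/2]$ for arguments in $[0,1]$, so $\arccos\left(\frac{\sqrt{16-x^2}}{4}\right)$ equals $|\arcsin(x/4)|$, which matches $\arcsin(x/4)$ only for $x \ge 0$ (a sketch):

```python
import math

for x in (-3.0, -1.0, 0.0, 1.0, 3.0):
    s = math.asin(x / 4)
    c = math.acos(math.sqrt(16 - x * x) / 4)
    # acos of a value in [0, 1] lies in [0, pi/2], so c = |s|, not s
    assert abs(c - abs(s)) < 1e-12
    if x < 0:
        assert abs(c - s) > 1e-9   # they genuinely differ for x < 0
print("arccos(sqrt(16-x^2)/4) equals |arcsin(x/4)|, not arcsin(x/4)")
```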
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session with 100% certainty if it crashes). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account, but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? Do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory was developed based on limits. In modern times, using some quite deep ideas from logic, a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that $\lim_{x\to a} f(x) = f(a)$, but then to say that the gradient of the tangent curve is some value is like saying that when $x=a$, then $f(x) = f(a)$. The whole point of the limit, I thought, was to say, instead, that we don't know what $f(a)$ is, but we can say that it approaches some value. I have a problem with showing that the limit of the following expression $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}\left(1-\frac{x^2}{6}+\frac{x^4}{120}\right)^n dx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ equals $1$ as $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0," I guess what I'm saying is that $(f(x+h)-f(x))/h$ is not continuous since it's not defined at $h=0$. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, how can you ask whether or not it's continuous? ... etc.
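As an aside, the limit asked about a few messages up can be sanity-checked numerically. A minimal sketch I put together (pure-Python composite Simpson's rule; the choice of $n$ and the step count are mine):

```python
import math

def f(x):
    # the integrand's base: the first three Taylor terms of sin(x)/x
    return 1 - x**2/6 + x**4/120

def integral(n, steps=20000):
    # composite Simpson's rule for the integral of f(x)**n over [0, sqrt(6)]
    a, b = 0.0, math.sqrt(6)
    h = (b - a) / steps
    s = f(a)**n + f(b)**n
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i*h)**n
    return s * h / 3

n = 500
lead = math.sqrt(3*math.pi/(2*n))                      # the sqrt(3*pi/(2n)) term
ratio = (lead - integral(n)) / ((3/20) * (1/n) * lead)  # the quotient in question
print(ratio)  # drifts toward 1 as n grows
```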
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it, all it's going to do is confuse people. In fact there was a big controversy about it, since using it in the obvious ways suggested by the notation leads to wrong results. @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than that the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user who you think can say something in particular, feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what I should do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h, so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to prove $\forall \varepsilon > 0\ \exists \delta\ \forall x:\ 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon$ by picking some correct $L$ (somehow). Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2\pi} \frac{d}{dn} e^{inx}\, dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
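On that last question, one standard resolution (my sketch, not from the chat): for integer $n \neq 0$ the map $n \mapsto \int_0^{2\pi} e^{inx}\,dx$ is identically $0$, but differentiating in $n$ only makes sense when $n$ is a real parameter, and for real $n$ the integral is $(e^{2\pi i n}-1)/(in)$, not identically $0$. For real $n$, differentiating under the integral sign causes no conflict, as a numerical check suggests:

```python
import cmath, math

def F(n):
    # closed form of the integral of e^{inx} over [0, 2*pi] for real n != 0
    return (cmath.exp(2j*math.pi*n) - 1) / (1j*n)

def G(n, steps=20000):
    # composite Simpson's rule for the integral of d/dn e^{inx} = i*x*e^{inx}
    h = 2*math.pi/steps
    s = 0j
    for i in range(steps + 1):
        x = i*h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        s += w * 1j*x*cmath.exp(1j*n*x)
    return s*h/3

n0, eps = 1.5, 1e-5
dF = (F(n0 + eps) - F(n0 - eps)) / (2*eps)  # central-difference derivative in the real variable n
print(abs(dF - G(n0)))  # tiny: the two orders of operations agree for real n
```

The apparent paradox comes from restricting to integer $n$, where the integral vanishes along the integers even though its derivative in the real variable does not.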
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyway, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument: an eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$, I guess in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since $a$ and $b$ are real in $a + bi$. But you could define it that way and call it a "standard form", like $ax + by = c$ for linear equations :-) @Riker "a + bi where a and b are integers": complex numbers $a + bi$ where $a$ and $b$ are integers are called Gaussian integers.
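For the velocity question at the top, a sketch of the standard approach (the code and test point are mine): factor $v$, find its roots, and check the sign between them.

```python
import math

def v(t):
    # velocity from differentiating x(t) = t^3 - 6t^2 + 9t + 11
    return 3*t**2 - 12*t + 9

# roots of v(t) = 0 via the quadratic formula; v(t) factors as 3(t - 1)(t - 3)
a, b, c = 3, -12, 9
d = math.sqrt(b*b - 4*a*c)
r1, r2 = (-b - d)/(2*a), (-b + d)/(2*a)
print(r1, r2)   # 1.0 3.0
print(v(2.0))   # -3.0: v < 0 between the roots, so the particle moves left on (1, 3)
```

Since a quadratic with positive leading coefficient is negative exactly between its roots, the particle moves left precisely for $1 < t < 3$.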
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there a theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism. @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to it is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If $f^{-1}(y)$ is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again. $O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$. For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
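On the $||v|| = ||(v)_S||$ question above: that identity holds when $S$ is orthonormal, and a tiny counterexample (the toy vectors are my choice) shows it can fail for a general basis:

```python
import math

def norm(u):
    # Euclidean norm of a coordinate tuple
    return math.sqrt(sum(t*t for t in u))

# v = (0, 1) written in the non-orthonormal basis S = {(1, 0), (1, 1)}:
# v = -1*(1, 0) + 1*(1, 1), so the coordinate vector is (v)_S = (-1, 1).
v = (0.0, 1.0)
coords = (-1.0, 1.0)
print(norm(v))       # 1.0
print(norm(coords))  # sqrt(2): the Euclidean norm of the coordinates differs

# With the orthonormal basis {(1, 0), (0, 1)}, the coordinates are v itself,
# so the two norms trivially agree; orthonormality is what makes T(v) = (v)_S
# preserve the inner product.
```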