# Unable to compile STM32F4 code sample with Eclipse

I'm trying to compile a simple STM32F4 sample using Eclipse with the GNU ARM Eclipse plugin from http://gnuarmeclipse.livius.net/blog/. I'm getting the following errors: and the structure is as follows: and the source code from main.c

```c
#include "main.h"

/**
 * Main application entry point.
 */
int main() {
    init();
    do {
        loop();
    } while (1);
}

/**
 * Application initialization.
 */
void init() {
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOD, ENABLE);

    GPIO_InitTypeDef gpio;
    GPIO_StructInit(&gpio);
    gpio.GPIO_Mode = GPIO_Mode_OUT;
    gpio.GPIO_Pin = LEDS;       /* LEDS and LED[] are defined in main.h */
    GPIO_Init(GPIOD, &gpio);

    GPIO_SetBits(GPIOD, LEDS);
}

/**
 * Application loop.
 */
void loop() {
    static uint32_t counter = 0;
    ++counter;
    GPIO_ResetBits(GPIOD, LEDS);
    GPIO_SetBits(GPIOD, LED[counter % 4]);
    delay(250);
}

/**
 * Delay by given ms
 *
 * @param ms
 */
void delay(uint32_t ms) {
    ms *= 3360;
    while (ms--) {
        __NOP();
    }
}
```

Any ideas what these errors mean?

- You haven't shown us the actual linker errors; try to find a line of the form `some/path/file.c:362: undefined reference to 'some_function'`, then recursively grep the supplied files for that function name and add the corresponding source file to your project. – Chris Stratton Feb 20 '13 at 20:42
- I have no experience with Eclipse and STM32, but I do have some experience with GCC. When I see these errors my hunch is that the verbosity level can be increased somehow. I mean 'Error 1'? I've never seen a compiler just output 'Error 1'. – jippie Feb 20 '13 at 21:18
- I'll try to get a bit more verbosity; that would in fact help quite a bit. Just not too sure how to get it out of the compiler. – josef.van.niekerk Feb 20 '13 at 21:24
- HTron's answer is the correct one. You need that assembly file, and you can obtain it from the firmware package that ST provides. Make sure to choose it from a folder of an IDE that uses gcc (that will make sure no issues arise). Glad to see you're still working to get the toolchain running. – Gustavo Litovsky Feb 20 '13 at 23:13
- @jippie: I've set up gcc as well and I got those same errors. They actually mean that make returned some other error, but in reality all is OK: an output file is generated properly (if everything else is OK) and you can ignore those errors. Haven't found (or looked much for) a way to remove them, which would be nice. – Gustavo Litovsky Feb 20 '13 at 23:42
Patterns of ultracold atoms in an optical lattice can be engineered on a microscale by selectively removing atoms from individual sites. Apart from their intrinsic interest as Bose-Einstein condensates, ultracold atomic gases are ripe for applications in quantum information processing and as a medium for simulating phenomena in condensed matter physics. Gathering a suitable collection of cold atoms is only half the battle, however; the atoms have to be arranged in a specific pattern and researchers must be able to manipulate individual atoms at will. Peter Würtz, Tim Langen, Tatjana Gericke, Andreas Koglbauer, and Herwig Ott at Johannes Gutenberg University in Mainz and the University of Kaiserslautern, Germany, report in Physical Review Letters their success at addressing single sites in a two-dimensional lattice of rubidium atoms. The research team created the ultracold atomic lattice by cooling the rubidium in an optical trap, then loading the cold atoms into a two-dimensional array of potential wells formed by criss-crossing laser beams. The interference pattern of the 2D beams creates a regular pattern of atom bunches, much like eggs in a carton, but with a $600$-$\text{nm}$ lattice spacing. To view the lattice, the researchers sweep the atomic egg-carton with the $100$-$\text{nm}$-diameter beam from a scanning electron microscope, which ionizes the atoms; the resulting ions are detected and imaged by a charge detector array. By letting the beam linger on a particular lattice site, they can knock atoms out of the lattice, and by scanning over many lattice sites they can create any desired 2D pattern. Setting up patterns by removing atoms from specific sites allows the team to watch in situ tunneling processes and microengineer novel atomic interactions.
– David Voss
# Chapter 8 - Personal Finance - 8.1 Percent, Sales Tax and Discounts - Exercise Set 8.1 - Page 495: 45

12%

#### Work Step by Step

A is P percent of B: A = PB

$0.3 = P~(2.5)$

$P = \frac{0.3}{2.5}$

$P = 0.12$

To express a decimal as a percent, we can move the decimal point two places to the right.

$P = 0.12 = 12\%$

12% of 2.5 is 0.3
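As a quick sanity check, the same computation can be done in Python (a minimal sketch, not part of the textbook solution):

```python
# Solve A = P * B for P, with A = 0.3 and B = 2.5 from the exercise.
A, B = 0.3, 2.5
P = A / B                # 0.12 as a decimal
percent = P * 100        # move the decimal point two places to the right
print(f"{percent:.0f}%")
```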
### 394. Decode String

Given an encoded string, return its decoded string. The encoding rule is: `k[encoded_string]`, where the `encoded_string` inside the square brackets is repeated exactly `k` times. Note that `k` is guaranteed to be a positive integer. You may assume that the input string is always valid: no extra white spaces, square brackets are well-formed, etc. Furthermore, you may assume that the original data does not contain any digits and that digits are only for the repeat numbers, `k`. For example, there won't be input like `3a` or `2[4]`.

Examples:

- s = "3[a]2[bc]", return "aaabcbc".
- s = "3[a2[c]]", return "accaccacc".
- s = "2[abc]3[cd]ef", return "abcabccdcdcdef".
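A common way to solve this problem is with a stack that saves the enclosing string and repeat count at each `[`. Here is a hedged Python sketch (the function name `decode_string` is my own choice, not from the problem statement):

```python
def decode_string(s):
    """Iteratively decode k[encoded_string] patterns using a stack."""
    stack = []       # holds (prefix_built_so_far, repeat_count) per '['
    current = ""     # string being built at the current nesting level
    count = 0        # repeat count currently being parsed
    for ch in s:
        if ch.isdigit():
            count = count * 10 + int(ch)   # supports multi-digit k
        elif ch == "[":
            stack.append((current, count))
            current, count = "", 0
        elif ch == "]":
            prefix, k = stack.pop()
            current = prefix + current * k
        else:
            current += ch
    return current

print(decode_string("3[a2[c]]"))   # -> accaccacc
```

Each character is handled once, so the runtime is linear in the length of the decoded output.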
# Question 98443

Sep 19, 2016

Here's what I got.

#### Explanation:

Nickel(II) chloride, $\text{NiCl}_2$, will react with ammonium sulfide, $(\text{NH}_4)_2\text{S}$, to form nickel sulfide, $\text{NiS}$, and ammonium chloride, $\text{NH}_4\text{Cl}$. The idea here is that both reactants are soluble ionic compounds, as predicted by the solubility rules. Nickel sulfide is an insoluble solid that precipitates out of solution, while ammonium chloride is a soluble ionic compound that will exist as ions in the solution, i.e. the reaction produces aqueous ammonium chloride. You will thus have

$\text{NiCl}_{2(aq)} + (\text{NH}_4)_2\text{S}_{(aq)} \rightarrow \text{NiS}_{(s)} \downarrow + 2\text{NH}_4\text{Cl}_{(aq)}$

To get the net ionic equation, start by writing out the complete ionic equation

$\text{Ni}^{2+}_{(aq)} + 2\text{Cl}^{-}_{(aq)} + 2\text{NH}^{+}_{4(aq)} + \text{S}^{2-}_{(aq)} \rightarrow \text{NiS}_{(s)} \downarrow + 2\text{NH}^{+}_{4(aq)} + 2\text{Cl}^{-}_{(aq)}$

Eliminate the spectator ions, i.e. the ions that are present on both sides of the equation ($2\text{Cl}^{-}_{(aq)}$ and $2\text{NH}^{+}_{4(aq)}$), to get

$\text{Ni}^{2+}_{(aq)} + \text{S}^{2-}_{(aq)} \rightarrow \text{NiS}_{(s)} \downarrow$
# Helpful hints for Problem Set 9

A student asked me whether it's okay to solve Problem Set 9 by using Excel. While I generally encourage students to use Excel for the purpose of validating their work (especially for computationally challenging problem sets such as the present one), I also expect students to demonstrate understanding and knowledge of the logical framework upon which any given problem is based. In other words, I expect you to show and explain your work on this problem set just as you would have to show and explain your work if this were an exam question. By all means, create your own spreadsheet model of Problem Set 9 to validate your answers for this problem set. But start out by devising your own computation strategy using a piece of paper, pen or pencil, and calculator. Since you know that the value of risky debt is equal to the value of safe debt minus the value of the limited liability put option, one approach to solving this problem set would be to start out by calculating the value of a riskless bond and the value of the limited liability put option. The value of a riskless bond is $V(B) = B{e^{ - rT}}$, where B corresponds to the promised payment to creditors. The value of the option to default (V(put)) can be calculated by applying the BSM put equation (see the second bullet point on page 8 of http://fin4335.garven.com/spring2018/lecture16.pdf); this requires 1) calculating ${d_1}$ and ${d_2}$, 2) using the Standard Normal Distribution Function ("z") Table to find $1-N({d_1})$ and $1- N({d_2})$, and 3) inputting these probabilities into the BSM put equation, where the exercise price corresponds to the promised payment to creditors and the value of the underlying asset corresponds to the value of the firm's assets (keep in mind that the (risk neutral) probability of default corresponds to $1- N({d_2})$ for reasons explained during yesterday's class meeting).
Once you obtain the value of the safe bond (V(B)) and the value of the option to default (V(put)) for each firm, the fair value of each firm's debt is simply the difference between these two values; i.e., V(D) = V(B) – V(put). Upon finding V(D) for firm 1 and firm 2, you can obtain these bonds' yields to maturity (YTM) by solving for YTM in the following equation: $V(D) = B{e^{ - YTM(T)}}$; the credit risk premium is equal to the difference between the yield to maturity and the riskless rate of interest. Since the value of equity corresponds to a call option written on the firm's assets with exercise price equal to the promised payment to creditors, you could also solve this problem by first calculating the value of each firm's equity (V(E)) using the BSM call equation (see the second bullet point on page 7 of http://fin4335.garven.com/spring2018/lecture16.pdf and substitute the value of assets (V(F)) in place of S and the promised payment of \$B in place of K in that equation). Once you know V(E) for each firm, the value of risky debt (V(D)) is equal to the difference between the value of assets V(F) and the value of equity V(E). Then the YTM and credit risk premium follow in the manner described in the previous paragraph.
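Both approaches can be sketched in a few lines of Python. All numeric inputs below (V_F, B, r, sigma, T) are made-up placeholders, not the actual Problem Set 9 values, and N() is the standard normal CDF built from math.erf rather than a z-table:

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical inputs; the actual problem-set numbers are not given here.
V_F   = 100.0   # value of the firm's assets (plays the role of S)
B     = 80.0    # promised payment to creditors (plays the role of K)
r     = 0.05    # riskless rate, continuous compounding
sigma = 0.30    # volatility of the firm's assets
T     = 1.0     # time to maturity in years

d1 = (log(V_F / B) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)

# First approach: riskless bond minus the limited liability put.
V_safe = B * exp(-r * T)                    # value of a riskless bond
V_put  = V_safe * N(-d2) - V_F * N(-d1)     # BSM put = option to default
V_D    = V_safe - V_put                     # fair value of risky debt
YTM    = -log(V_D / B) / T                  # from V(D) = B e^{-YTM*T}
credit_spread = YTM - r
default_prob  = N(-d2)                      # risk-neutral default probability

# Second approach: equity as a BSM call on the assets; V(D) = V(F) - V(E).
V_E = V_F * N(d1) - V_safe * N(d2)
```

By put-call parity the two routes give the same V(D), which is a useful cross-check on the hand computation.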
# Limit x tends to 0, quotient of arc sin x by x is equal to 1

## Formula

$\large \displaystyle \lim_{x \,\to\, 0} \dfrac{\sin^{-1} x}{x} \,=\, 1$

### Proof

$x$ is a literal that represents the ratio of the length of the opposite side to the hypotenuse of a right angled triangle for an angle. The inverse of the sine function for this value is written as $\sin^{-1} x$. The value of the ratio of $\sin^{-1} x$ to $x$ as the value of $x$ approaches zero is the required result.

$\displaystyle \lim_{\displaystyle x \to 0} \dfrac{\sin^{-1} x}{x}$

#### Step 1: Transform the function into trigonometric form

Take $y = \sin^{-1} x$, then $x = \sin y$. Now, transform the entire function from $x$ into terms of $y$. $x \,\to\, 0 \implies y \,\to\, \sin^{-1} 0$ $\therefore \,\,\,\,\,\, y \,\to\, 0$ If $x$ tends to $0$, then $y$ tends to $\sin^{-1} 0$. Therefore, $y$ also tends to zero when the value of $x$ approaches zero.

$= \,\,$ $\displaystyle \lim_{\displaystyle y \to 0} \dfrac{y}{\sin y}$

#### Step 2: Evaluate it

The function is similar to the limit x tends to 0, sin x by x rule. On the basis of this formula, the value of the function can be evaluated mathematically.

$= \,\,$ $\displaystyle \lim_{\displaystyle y \to 0} \dfrac{1}{\dfrac{\sin y}{y}}$ $= \,\,$ $\dfrac{1}{\displaystyle \lim_{\displaystyle y \to 0} \dfrac{\sin y}{y}}$ $= \,\,$ $\dfrac{1}{1}$

$\therefore \,\,\,\,\,\, \displaystyle \lim_{\displaystyle x \,\to\, 0} \dfrac{\sin^{-1} x}{x} \,=\, 1$
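The limit can also be checked numerically, which is not a proof but a useful sanity check on the result above:

```python
from math import asin

# As x shrinks, asin(x)/x should approach 1.
for x in (0.1, 0.01, 0.001):
    print(x, asin(x) / x)

ratio = asin(1e-8) / 1e-8   # essentially 1 to machine precision
```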
# Resonant Frequencies in Open-Closed Pipe

APPH12-DJTUVR

Consider a pipe (open on one end, closed on the other) of length $1.5$ $m$. A sound wave ($v$ $=$ $340$ $m/s$) is directed down the tube. What are the first three resonant frequencies produced as a result of standing waves in the pipe?

A $113.3$ $Hz$, $226.7$ $Hz$, $340$ $Hz$
B $113.3$ $Hz$, $340$ $Hz$, $566.7$ $Hz$
C $56.7$ $Hz$, $170$ $Hz$, $283.3$ $Hz$
D $56.7$ $Hz$, $113.3$ $Hz$, $170$ $Hz$
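For an open-closed pipe only the odd harmonics resonate, with $f_n = nv/(4L)$ for $n = 1, 3, 5, \ldots$. A small Python sketch of that standard computation:

```python
v, L = 340.0, 1.5   # speed of sound (m/s), pipe length (m)

# Open-closed pipe: only odd harmonics, f_n = n * v / (4 * L)
freqs = [n * v / (4 * L) for n in (1, 3, 5)]
print([round(f, 1) for f in freqs])   # -> [56.7, 170.0, 283.3]
```

These values match answer choice C.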
## Partitioning a matrix into block diagonal form, part 4 In my previous two posts I described a method for constructing a partitioning vector $v$ for an $n$ by $n$ square matrix $A$ (part 2 of this series) and showed that the vector thus constructed partitions $A$ into block diagonal form (part 3). In this post I begin the task of showing that the vector $v$ is maximal and unique: $A$ cannot also be partitioned as a block diagonal matrix with $s$ partitions, where $s > r$, and if we partition $A$ into block diagonal form with $r$ partitions using a vector $w$ then we have $w = v$. In the previous posts I showed that we can construct a vector $v$ that will partition $A$ into block diagonal form with $r$ partitions (where $r$ is one less than the length of $v$), such that $A = \begin{bmatrix} A_{1}&0&\cdots&0 \\ 0&A_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&A_{r} \end{bmatrix}$ Now suppose that we have a vector $w$ that also partitions A into block diagonal form, using $s$ partitions: $A = \begin{bmatrix} A'_{1}&0&\cdots&0 \\ 0&A'_{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\cdots&A'_{s} \end{bmatrix}$ My approach in this post is to try to find some sort of connection between $w$ and $v$. In particular, I claim that for all $\alpha$ where $1 \le \alpha \le s+1$ we have $w_{\alpha} = v_{\beta}$ for some $\beta$ where $1 \le \beta \le r+1$. In other words, any partition boundary specified by $w$ is also a partition boundary specified by $v$. My proof is by induction. Since $w$ and $v$ are both partition vectors we already have $w_1 = v_1 = 0$ by definition. The claim therefore holds true for $\alpha = 1$ (with $\beta = 1$ in this case). Suppose that for some $\alpha$ we have $w_{\alpha} = v_{\beta}$ for some $\beta$ (with $1 \le \beta \le r+1$), and consider $w_{\alpha+1}$. We want to show that $w_{\alpha+1} = v_{\gamma}$ for some $\gamma$ where $1 \le \gamma \le r+1$. 
We can select $\gamma > \beta$ such that $1 \le \gamma \le r+1$ and $w_{\alpha} \le v_{\gamma-1} < w_{\alpha+1} \le v_{\gamma}$. This is done as follows: We have $w_{\alpha+1} \le n = v_{r+1}$ so we know that $w_{\alpha+1} \le v_k$ for at least one $k$ (i.e., $k = r+1$). We select $\gamma$ to be the smallest $k$ less than or equal to $r+1$ for which $w_{\alpha+1} \le v_k$. By the definition of $\gamma$ and the fact that $v_{\gamma-1} < v_{\gamma}$ we then conclude that $w_{\alpha+1} > v_{\gamma-1}$. (If we had $w_{\alpha+1} \le v_{\gamma-1}$ then $\gamma$ would not be the smallest $k$ less than or equal to $r+1$ for which $w_{\alpha+1} \le v_k$.) We also have $\gamma > \beta$, which means that $\gamma - 1 \ge \beta$. We then have $v_{\gamma-1} \ge v_\beta = w_{\alpha}$. Finally, $\gamma - 1 \ge \beta$ and $\beta \ge 1$ implies that $\gamma \ge 2$ and thus $\gamma \ge 1$. The result is that $\gamma$ has the properties noted above. From the above we see that $w_{\alpha+1} \le v_{\gamma}$. I now claim that $w_{\alpha+1} = v_{\gamma}$. The proof is by contradiction. Suppose instead that $w_{\alpha+1} < v_{\gamma}$. The submatrix $A'_{\alpha+1}$ includes entries from rows $w_\alpha + 1$ through $w_{\alpha+1}$ of $A$ and columns $w_\alpha + 1$ through $w_{\alpha+1}$ of $A$. Since $w$ partitions $A$ into block diagonal form we know that any submatrices to the right of and below $A'_{\alpha+1}$ are zero (if such submatrices exist). The requirement that all submatrices to the right of $A'_{\alpha}$ be zero then means that we must have $a_{ij} = 0$ if $w_{\alpha} + 1 \le i \le w_{\alpha+1}$ and $j > w_{\alpha+1}$, and the requirement that all submatrices below $A'_{\alpha}$ be zero means that we must also have $a_{ij} = 0$ if $i > w_{\alpha+1}$ and $w_{\alpha} + 1 \le j \le w_{\alpha+1}$. 
From the way we selected $\gamma$ we have $w_{\alpha} \le v_{\gamma-1} < w_{\alpha+1}$, which means that we also have $a_{ij} = 0$ if $v_{\gamma-1} + 1 \le i \le w_{\alpha+1}$ and $j > w_{\alpha+1}$, and $a_{ij} = 0$ if $i > w_{\alpha+1}$ and $v_{\gamma-1} + 1 \le j \le w_{\alpha+1}$. But recall from the definition of $v$ that $v_{\gamma}$ was chosen to be the smallest $k$ such that $a_{ij} = 0$ if either $v_{\gamma-1} + 1 \le i \le k$ and $j > k$, or if $i > k$ and $v_{\gamma-1} + 1 \le j \le k$; alternatively $v_{\gamma}$ was set to $n$ if no such $k$ existed. In the former case we must have $v_{\gamma} \le w_{\alpha+1}$, which contradicts our assumption that $v_{\gamma} > w_{\alpha+1}$. In the latter case our assumption that $v_{\gamma} > w_{\alpha+1}$ implies that there is indeed a $k$ fulfilling the criteria above, namely $k = w_{\alpha+1}$, which contradicts our assumption about how $v$ was defined. We therefore conclude that given $w_{\alpha} = v_{\beta}$ for some $\beta$ (where $1 \le \beta \le r+1$), there exists $\gamma$ such that $w_{\alpha+1} = v_{\gamma}$ (where $1 \le \gamma \le r+1$). Combined with the fact that this is true for $\alpha = 1$, we conclude that for any $\alpha$ (where $1 \le \alpha \le s+1$) there exists $\beta$ such that $1 \le \beta \le r+1$ and $w_{\alpha} = v_{\beta}$, so that any partition boundary specified by $w$ is also a partition boundary specified by $v$. In part 5 of this series I use this result to complete the task of showing that the vector $v$ is maximal and unique. This entry was posted in linear algebra.
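For concreteness, here is a hedged Python/NumPy sketch of the greedy construction of $v$ as described in this post (each new entry is the smallest $k$ making the blocks to the right of and below the current diagonal block zero); the function name and loop structure are my own framing, not code from the series:

```python
import numpy as np

def partition_vector(A):
    """Construct the partition vector v: v starts at 0, and each new entry
    is the smallest k > v[-1] such that A[t:k, k:n] and A[k:n, t:k] are
    both zero, where t = v[-1].  k = n always works, so the loop terminates."""
    A = np.asarray(A)
    n = A.shape[0]
    v = [0]
    while v[-1] < n:
        t = v[-1]
        for k in range(t + 1, n + 1):
            if (A[t:k, k:n] == 0).all() and (A[k:n, t:k] == 0).all():
                v.append(k)
                break
    return v

A = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3]]
print(partition_vector(A))   # -> [0, 2, 3, 4]: three diagonal blocks
```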
# Math Help - finding an extremal

1. ## finding an extremal

If , show that the extremal is I've done: Then using Euler's equation we get: So and then I'm stuck from here, I can't see how to turn this into trigonometric functions? I tried letting which didn't work... Can anyone please help me? Thanks in advance!

2. Solve the differential equation $1 - y'' = y$, which is the same as $y'' + y = 1$. That's a linear equation with constant coefficients. Its characteristic equation is $r^2 + 1 = 0$, and I recommend trying y = A, a constant, for the "specific solution". This problem clearly expects you to be able to solve that differential equation.
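Assuming the Euler equation does reduce to $y'' + y = 1$ as the reply suggests, a worked sketch of where the trigonometric functions come from:

```latex
% Homogeneous part: y'' + y = 0 has characteristic equation r^2 + 1 = 0,
% so r = \pm i and y_h = A\cos x + B\sin x.
% Particular part: the constant trial y_p = C gives 0 + C = 1, so y_p = 1.
y(x) = 1 + A\cos x + B\sin x
% The boundary conditions of the original problem then determine A and B.
```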
Question

# Alternate coordinate systems

All bases considered in these exercises are assumed to be ordered bases. In Exercise, compute the coordinate vector of v with respect to the given basis S for V. V is $$R^2, S = \left\{ \begin{bmatrix}1 \\ 0 \end{bmatrix},\begin{bmatrix} 0 \\1 \end{bmatrix} \right\}, v = \begin{bmatrix} 3 \\-2 \end{bmatrix}$$

2021-02-15

We are given the following ordered basis S for the vector space $$\displaystyle{V}={R}^{2}$$ as well as the following vector v in V: $$S = \left\{ \begin{bmatrix}1 \\ 0 \end{bmatrix},\begin{bmatrix} 0 \\1 \end{bmatrix} \right\}; v = \begin{bmatrix} 3 \\-2 \end{bmatrix}$$ We wish to compute the coordinate vector $$\displaystyle{\left[{v}\right]}_{{S}}$$ of v with respect to the basis S. We have $$[v]_S = \begin{bmatrix}a \\ b \end{bmatrix}$$ where $$v = \begin{bmatrix}3 \\ -2 \end{bmatrix}= a \begin{bmatrix}1 \\ 0 \end{bmatrix} + b \begin{bmatrix}0 \\ 1 \end{bmatrix}= \begin{bmatrix}a \\ b \end{bmatrix}$$ Equating coefficients yields the following solution: $$\displaystyle{a}={3},$$ $$\displaystyle{b}=-{2}.$$ Therefore, the coordinate vector $$\displaystyle{\left[{v}\right]}_{{S}}$$ of v with respect to the basis S is $$[v]_S = \begin{bmatrix}3 \\ -2 \end{bmatrix}$$
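The same coordinates can be found numerically by solving the linear system whose columns are the basis vectors; a minimal NumPy sketch (my own illustration, not part of the original solution):

```python
import numpy as np

# Basis vectors of S as columns; solve S_matrix @ [a, b] = v for [a, b].
S_matrix = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
v = np.array([3.0, -2.0])
coords = np.linalg.solve(S_matrix, v)
print(coords)   # coordinates are [3, -2]; standard basis leaves v unchanged
```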
## 12.16 Filtrations A nice reference for this material is [Section 1, HodgeII]. (Note that our conventions regarding abelian categories are different.) Definition 12.16.1. Let $\mathcal{A}$ be an abelian category. 1. A decreasing filtration $F$ on an object $A$ is a family $(F^ nA)_{n \in \mathbf{Z}}$ of subobjects of $A$ such that $A \supset \ldots \supset F^ nA \supset F^{n + 1}A \supset \ldots \supset 0$ 2. A filtered object of $\mathcal{A}$ is a pair $(A, F)$ consisting of an object $A$ of $\mathcal{A}$ and a decreasing filtration $F$ on $A$. 3. A morphism $(A, F) \to (B, F)$ of filtered objects is given by a morphism $\varphi : A \to B$ of $\mathcal{A}$ such that $\varphi (F^ iA) \subset F^ iB$ for all $i \in \mathbf{Z}$. 4. The category of filtered objects is denoted $\text{Fil}(\mathcal{A})$. 5. Given a filtered object $(A, F)$ and a subobject $X \subset A$ the induced filtration on $X$ is the filtration with $F^ nX = X \cap F^ nA$. 6. Given a filtered object $(A, F)$ and a surjection $\pi : A \to Y$ the quotient filtration is the filtration with $F^ nY = \pi (F^ nA)$. 7. A filtration $F$ on an object $A$ is said to be finite if there exist $n, m$ such that $F^ nA = A$ and $F^ mA = 0$. 8. Given a filtered object $(A, F)$ we say $\bigcap F^ iA$ exists if there exists a biggest subobject of $A$ contained in all $F^ iA$. We say $\bigcup F^ iA$ exists if there exists a smallest subobject of $A$ containing all $F^ iA$. 9. The filtration on a filtered object $(A, F)$ is said to be separated if $\bigcap F^ iA = 0$ and exhaustive if $\bigcup F^ iA = A$. By abuse of notation we say that a morphism $f : (A, F) \to (B, F)$ of filtered objects is injective if $f : A \to B$ is injective in the abelian category $\mathcal{A}$. Similarly we say $f$ is surjective if $f : A \to B$ is surjective in the category $\mathcal{A}$. Being injective (resp. surjective) is equivalent to being a monomorphism (resp. epimorphism) in $\text{Fil}(\mathcal{A})$. 
By Lemma 12.16.2 this is also equivalent to having zero kernel (resp. cokernel). Lemma 12.16.2. Let $\mathcal{A}$ be an abelian category. The category of filtered objects $\text{Fil}(\mathcal{A})$ has the following properties: 1. It is an additive category. 2. It has a zero object. 3. It has kernels and cokernels, images and coimages. 4. In general it is not an abelian category. Proof. It is clear that $\text{Fil}(\mathcal{A})$ is additive with direct sum given by $(A, F) \oplus (B, F) = (A \oplus B, F)$ where $F^ p(A \oplus B) = F^ pA \oplus F^ pB$. The kernel of a morphism $f : (A, F) \to (B, F)$ of filtered objects is the injection $\mathop{\mathrm{Ker}}(f) \subset A$ where $\mathop{\mathrm{Ker}}(f)$ is endowed with the induced filtration. The cokernel of a morphism $f : A \to B$ of filtered objects is the surjection $B \to \mathop{\mathrm{Coker}}(f)$ where $\mathop{\mathrm{Coker}}(f)$ is endowed with the quotient filtration. Since all kernels and cokernels exist, so do all coimages and images. See Example 12.3.13 for the last statement. $\square$ Definition 12.16.3. Let $\mathcal{A}$ be an abelian category. A morphism $f : A \to B$ of filtered objects of $\mathcal{A}$ is said to be strict if $f(F^ iA) = f(A) \cap F^ iB$ for all $i \in \mathbf{Z}$. This is also equivalent to requiring that $f^{-1}(F^ iB) = F^ iA + \mathop{\mathrm{Ker}}(f)$ for all $i \in \mathbf{Z}$. We characterize strict morphisms as follows. Lemma 12.16.4. Let $\mathcal{A}$ be an abelian category. Let $f : A \to B$ be a morphism of filtered objects of $\mathcal{A}$. The following are equivalent 1. $f$ is strict, 2. the morphism $\mathop{\mathrm{Coim}}(f) \to \mathop{\mathrm{Im}}(f)$ of Lemma 12.3.12 is an isomorphism. Proof. Note that $\mathop{\mathrm{Coim}}(f) \to \mathop{\mathrm{Im}}(f)$ is an isomorphism of objects of $\mathcal{A}$, and that part (2) signifies that it is an isomorphism of filtered objects. 
By the description of kernels and cokernels in the proof of Lemma 12.16.2 we see that the filtration on $\mathop{\mathrm{Coim}}(f)$ is the quotient filtration coming from $A \to \mathop{\mathrm{Coim}}(f)$. Similarly, the filtration on $\mathop{\mathrm{Im}}(f)$ is the induced filtration coming from the injection $\mathop{\mathrm{Im}}(f) \to B$. The definition of strict is exactly that the quotient filtration is the induced filtration. $\square$ Lemma 12.16.5. Let $\mathcal{A}$ be an abelian category. Let $f : A \to B$ be a strict monomorphism of filtered objects. Let $g : A \to C$ be a morphism of filtered objects. Then $f \oplus g : A \to B \oplus C$ is a strict monomorphism. Proof. Clear from the definitions. $\square$ Lemma 12.16.6. Let $\mathcal{A}$ be an abelian category. Let $f : B \to A$ be a strict epimorphism of filtered objects. Let $g : C \to A$ be a morphism of filtered objects. Then $f \oplus g : B \oplus C \to A$ is a strict epimorphism. Proof. Clear from the definitions. $\square$ Lemma 12.16.7. Let $\mathcal{A}$ be an abelian category. Let $(A, F)$, $(B, F)$ be filtered objects. Let $u : A \to B$ be a morphism of filtered objects. If $u$ is injective then $u$ is strict if and only if the filtration on $A$ is the induced filtration. If $u$ is surjective then $u$ is strict if and only if the filtration on $B$ is the quotient filtration. Proof. This is immediate from the definition. $\square$ Lemma 12.16.8. Let $\mathcal{A}$ be an abelian category. Let $f : A \to B$, $g : B \to C$ be strict morphisms of filtered objects. 1. In general the composition $g \circ f$ is not strict. 2. If $g$ is injective, then $g \circ f$ is strict. 3. If $f$ is surjective, then $g \circ f$ is strict. Proof. Let $B$ be a vector space over a field $k$ with basis $e_1, e_2$, with the filtration $F^ nB = B$ for $n < 0$, with $F^0B = ke_1$, and $F^ nB = 0$ for $n > 0$. 
Now take $A = k(e_1 + e_2)$ and $C = B/ke_2$ with filtrations induced by $B$, i.e., such that $A \to B$ and $B \to C$ are strict (Lemma 12.16.7). Then $F^ n(A) = A$ for $n < 0$ and $F^ n(A) = 0$ for $n \geq 0$. Also $F^ n(C) = C$ for $n \leq 0$ and $F^ n(C) = 0$ for $n > 0$. So the (nonzero) composition $A \to C$ is not strict. Assume $g$ is injective. Then \begin{align*} g(f(F^ pA)) & = g(f(A) \cap F^ pB) \\ & = g(f(A)) \cap g(F^ p(B)) \\ & = (g \circ f)(A) \cap (g(B) \cap F^ pC) \\ & = (g \circ f)(A) \cap F^ pC. \end{align*} The first equality as $f$ is strict, the second because $g$ is injective, the third because $g$ is strict, and the fourth because $(g \circ f)(A) \subset g(B)$. Assume $f$ is surjective. Then \begin{align*} (g \circ f)^{-1}(F^ iC) & = f^{-1}(F^ iB + \mathop{\mathrm{Ker}}(g)) \\ & = f^{-1}(F^ iB) + f^{-1}(\mathop{\mathrm{Ker}}(g)) \\ & = F^ iA + \mathop{\mathrm{Ker}}(f) + \mathop{\mathrm{Ker}}(g \circ f) \\ & = F^ iA + \mathop{\mathrm{Ker}}(g \circ f) \end{align*} The first equality because $g$ is strict, the second because $f$ is surjective, the third because $f$ is strict, and the last because $\mathop{\mathrm{Ker}}(f) \subset \mathop{\mathrm{Ker}}(g \circ f)$. $\square$ The following lemma says that subobjects of a filtered object have a well defined filtration independent of a choice of writing the object as a cokernel. Lemma 12.16.9. Let $\mathcal{A}$ be an abelian category. Let $(A, F)$ be a filtered object of $\mathcal{A}$. Let $X \subset Y \subset A$ be subobjects of $A$. On the object $Y/X = \mathop{\mathrm{Ker}}(A/X \to A/Y)$ the quotient filtration coming from the induced filtration on $Y$ and the induced filtration coming from the quotient filtration on $A/X$ agree. Any of the morphisms $X \to Y$, $X \to A$, $Y \to A$, $Y \to A/X$, $Y \to Y/X$, $Y/X \to A/X$ are strict (with induced/quotient filtrations). Proof. 
The quotient filtration on $Y/X$ is given by $F^ p(Y/X) = F^ pY/(X \cap F^ pY) = F^ pY/F^ pX$ because $F^ pY = Y \cap F^ pA$ and $F^ pX = X \cap F^ pA$. The induced filtration from the injection $Y/X \to A/X$ is given by \begin{align*} F^ p(Y/X) & = Y/X \cap F^ p(A/X) \\ & = Y/X \cap (F^ pA + X)/X \\ & = (Y \cap F^ pA)/(X \cap F^ pA) \\ & = F^ pY/F^ pX. \end{align*} Hence the first statement of the lemma. The proof of the other cases is similar. $\square$ Lemma 12.16.10. Let $\mathcal{A}$ be an abelian category. Let $A, B, C \in \text{Fil}(\mathcal{A})$. Let $f : A \to B$ and $g : A \to C$ be morphisms. Then there exists a pushout $\xymatrix{ A \ar[r]_ f \ar[d]_ g & B \ar[d]^{g'} \\ C \ar[r]^{f'} & C \amalg _ A B }$ in $\text{Fil}(\mathcal{A})$. If $f$ is strict, so is $f'$. Proof. Set $C \amalg _ A B$ equal to $\mathop{\mathrm{Coker}}((1, -1) : A \to C \oplus B)$ in $\text{Fil}(\mathcal{A})$. This cokernel exists, by Lemma 12.16.2. It is a pushout, see Example 12.5.6. Note that $F^ p(C \amalg _ A B)$ is the image of $F^ pC \oplus F^ pB$. Hence $(f')^{-1}(F^ p(C \amalg _ A B)) = g(f^{-1}(F^ pB)) + F^ pC$ Whence the last statement. $\square$ Lemma 12.16.11. Let $\mathcal{A}$ be an abelian category. Let $A, B, C \in \text{Fil}(\mathcal{A})$. Let $f : B \to A$ and $g : C \to A$ be morphisms. Then there exists a fibre product $\xymatrix{ B \times _ A C \ar[r]_{g'} \ar[d]_{f'} & B \ar[d]^ f \\ C \ar[r]^ g & A }$ in $\text{Fil}(\mathcal{A})$. If $f$ is strict, so is $f'$. Proof. This lemma is dual to Lemma 12.16.10. $\square$ Let $\mathcal{A}$ be an abelian category. Let $(A, F)$ be a filtered object of $\mathcal{A}$. We denote $\text{gr}^ p_ F(A) = \text{gr}^ p(A)$ the object $F^ pA/F^{p + 1}A$ of $\mathcal{A}$. This defines an additive functor $\text{gr}^ p : \text{Fil}(\mathcal{A}) \longrightarrow \mathcal{A}, \quad (A, F) \longmapsto \text{gr}^ p(A).$ Recall that we have defined the category $\text{Gr}(\mathcal{A})$ of graded objects of $\mathcal{A}$ in Section 12.15. 
For $(A, F)$ in $\text{Fil}(\mathcal{A})$ we may set $\text{gr}(A) = \text{the graded object of }\mathcal{A}\text{ whose } p\text{th graded piece is }\text{gr}^ p(A)$ and if $\mathcal{A}$ has countable direct sums, then we simply have $\text{gr}(A) = \bigoplus \text{gr}^ p(A)$. This defines a functor $\text{gr} : \text{Fil}(\mathcal{A}) \longrightarrow \text{Gr}(\mathcal{A}), \quad (A, F) \longmapsto \text{gr}(A).$ Lemma 12.16.12. Let $\mathcal{A}$ be an abelian category. 1. Let $A$ be a filtered object and $X \subset A$. Then for each $p$ the sequence $0 \to \text{gr}^ p(X) \to \text{gr}^ p(A) \to \text{gr}^ p(A/X) \to 0$ is exact (with induced filtration on $X$ and quotient filtration on $A/X$). 2. Let $f : A \to B$ be a morphism of filtered objects of $\mathcal{A}$. Then for each $p$ the sequences $0 \to \text{gr}^ p(\mathop{\mathrm{Ker}}(f)) \to \text{gr}^ p(A) \to \text{gr}^ p(\mathop{\mathrm{Coim}}(f)) \to 0$ and $0 \to \text{gr}^ p(\mathop{\mathrm{Im}}(f)) \to \text{gr}^ p(B) \to \text{gr}^ p(\mathop{\mathrm{Coker}}(f)) \to 0$ are exact. Proof. We have $F^{p + 1}X = X \cap F^{p + 1}A$, hence the map $\text{gr}^ p(X) \to \text{gr}^ p(A)$ is injective. Dually the map $\text{gr}^ p(A) \to \text{gr}^ p(A/X)$ is surjective. The kernel of $F^ pA/F^{p + 1}A \to A/(X + F^{p + 1}A)$ is clearly $(F^{p + 1}A + X \cap F^ pA)/F^{p + 1}A = F^ pX/F^{p + 1}X$ hence exactness in the middle. The two short exact sequences of (2) are special cases of the short exact sequence of (1). $\square$ Lemma 12.16.13. Let $\mathcal{A}$ be an abelian category. Let $f : A \to B$ be a morphism of finite filtered objects of $\mathcal{A}$. The following are equivalent 1. $f$ is strict, 2. the morphism $\mathop{\mathrm{Coim}}(f) \to \mathop{\mathrm{Im}}(f)$ is an isomorphism, 3. $\text{gr}(\mathop{\mathrm{Coim}}(f)) \to \text{gr}(\mathop{\mathrm{Im}}(f))$ is an isomorphism, 4. the sequence $\text{gr}(\mathop{\mathrm{Ker}}(f)) \to \text{gr}(A) \to \text{gr}(B)$ is exact, 5. 
the sequence $\text{gr}(A) \to \text{gr}(B) \to \text{gr}(\mathop{\mathrm{Coker}}(f))$ is exact, and 6. the sequence $0 \to \text{gr}(\mathop{\mathrm{Ker}}(f)) \to \text{gr}(A) \to \text{gr}(B) \to \text{gr}(\mathop{\mathrm{Coker}}(f)) \to 0$ is exact. Proof. The equivalence of (1) and (2) is Lemma 12.16.4. By Lemma 12.16.12 we see that (4), (5), (6) imply (3) and that (3) implies (4), (5), (6). Hence it suffices to show that (3) implies (2). Thus we have to show that if $f : A \to B$ is an injective and surjective map of finite filtered objects which induces an isomorphism $\text{gr}(A) \to \text{gr}(B)$, then $f$ induces an isomorphism of filtered objects. In other words, we have to show that $f(F^ pA) = F^ pB$ for all $p$. As the filtrations are finite we may prove this by descending induction on $p$. Suppose that $f(F^{p + 1}A) = F^{p + 1}B$. Then the commutative diagram $\xymatrix{ 0 \ar[r] & F^{p + 1}A \ar[r] \ar[d]^ f & F^ pA \ar[r] \ar[d]^ f & \text{gr}^ p(A) \ar[r] \ar[d]^{\text{gr}^ p(f)} & 0 \\ 0 \ar[r] & F^{p + 1}B \ar[r] & F^ pB \ar[r] & \text{gr}^ p(B) \ar[r] & 0 }$ and the five lemma imply that $f(F^ pA) = F^ pB$. $\square$ Lemma 12.16.14. Let $\mathcal{A}$ be an abelian category. Let $A \to B \to C$ be a complex of filtered objects of $\mathcal{A}$. Assume $\alpha : A \to B$ and $\beta : B \to C$ are strict morphisms of filtered objects. Then $\text{gr}(\mathop{\mathrm{Ker}}(\beta )/\mathop{\mathrm{Im}}(\alpha )) = \mathop{\mathrm{Ker}}(\text{gr}(\beta ))/\mathop{\mathrm{Im}}(\text{gr}(\alpha ))$. Proof. This follows formally from Lemma 12.16.12 and the fact that $\mathop{\mathrm{Coim}}(\alpha ) \cong \mathop{\mathrm{Im}}(\alpha )$ and $\mathop{\mathrm{Coim}}(\beta ) \cong \mathop{\mathrm{Im}}(\beta )$ by Lemma 12.16.4. $\square$ Lemma 12.16.15. Let $\mathcal{A}$ be an abelian category. Let $A \to B \to C$ be a complex of filtered objects of $\mathcal{A}$. 
Assume $A, B, C$ have finite filtrations and that $\text{gr}(A) \to \text{gr}(B) \to \text{gr}(C)$ is exact. Then 1. for each $p \in \mathbf{Z}$ the sequence $\text{gr}^ p(A) \to \text{gr}^ p(B) \to \text{gr}^ p(C)$ is exact, 2. for each $p \in \mathbf{Z}$ the sequence $F^ p(A) \to F^ p(B) \to F^ p(C)$ is exact, 3. for each $p \in \mathbf{Z}$ the sequence $A/F^ p(A) \to B/F^ p(B) \to C/F^ p(C)$ is exact, 4. the maps $A \to B$ and $B \to C$ are strict, and 5. $A \to B \to C$ is exact (as a sequence in $\mathcal{A}$). Proof. Part (1) is immediate from the definitions. We will prove (3) by induction on the length of the filtrations. If each of $A$, $B$, $C$ has only one nonzero graded part, then (3) holds as $\text{gr}(A) = A$, etc. Let $n$ be the largest integer such that at least one of $F^ nA, F^ nB, F^ nC$ is nonzero. Set $A' = A/F^ nA$, $B' = B/F^ nB$, $C' = C/F^ nC$ with induced filtrations. Note that $\text{gr}(A) = F^ nA \oplus \text{gr}(A')$ and similarly for $B$ and $C$. The induction hypothesis applies to $A' \to B' \to C'$, which implies that $A/F^ p(A) \to B/F^ p(B) \to C/F^ p(C)$ is exact for $p \geq n$. To conclude the same for $p = n + 1$, i.e., to prove that $A \to B \to C$ is exact we use the commutative diagram $\xymatrix{ 0 \ar[r] & F^ nA \ar[r] \ar[d] & A \ar[r] \ar[d] & A' \ar[r] \ar[d] & 0 \\ 0 \ar[r] & F^ nB \ar[r] \ar[d] & B \ar[r] \ar[d] & B' \ar[r] \ar[d] & 0 \\ 0 \ar[r] & F^ nC \ar[r] & C \ar[r] & C' \ar[r] & 0 }$ whose rows are short exact sequences of objects of $\mathcal{A}$. The proof of (2) is dual. Of course (5) follows from (2). To prove (4) denote $f : A \to B$ and $g : B \to C$ the given morphisms. We know that $f(F^ p(A)) = \mathop{\mathrm{Ker}}(F^ p(B) \to F^ p(C))$ by (2) and $f(A) = \mathop{\mathrm{Ker}}(g)$ by (5). Hence $f(F^ p(A)) = \mathop{\mathrm{Ker}}(F^ p(B) \to F^ p(C)) = \mathop{\mathrm{Ker}}(g) \cap F^ p(B) = f(A) \cap F^ p(B)$ which proves that $f$ is strict. The proof that $g$ is strict is dual to this. 
$\square$ Comment #520 by Keenan Kidwell on Definition 0121 makes reference to the intersection and union of a (countable) family of subobjects of an object in an abelian category, but as far as I can tell intersections and unions of subobjects are not defined anywhere. Perhaps this is intentional, but I thought I should mention it. Comment #521 by on Yes, I agree that is bad. Hope somebody will fix this (hint, hint). I guess we do discuss subobjects in the section on abelian categories, but we don't point out that you can take their intersection (of two of them I mean). To intersect infinitely many of them you probably would define that as a limit? In fact, it is always a cofiltered limit since you can throw in the collection of finite intersections? Similarly with sums (and unions in the case of an increasing sequence). Anyway, I think our omission, in this particular case, is not as bad as all that, because the statement $\bigcap F^i(A) = 0$ just means that any morphism $T \to A$ which factors through all the subobjects $F^i(A)$ is zero, in other words, you don't have to know what the intersection is and in particular you don't need to know a priori that it exists.
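To make the notion of strictness in Lemma 12.16.13 concrete, here is a small illustrative example (added here, not part of the original text), in the category of abelian groups:

```latex
% Let M be a nonzero abelian group, A = B = M, f = \mathrm{id}, with filtrations
F^0A = M,\ F^1A = 0
\qquad\text{and}\qquad
F^0B = F^1B = M,\ F^2B = 0.
% Then f is injective and surjective, but not strict:
f(F^1A) = 0 \neq M = f(A) \cap F^1B.
% On graded pieces f vanishes: \mathrm{gr}^0(A) = M maps to
% \mathrm{gr}^0(B) = F^0B/F^1B = 0, and \mathrm{gr}^1(A) = 0 maps to
% \mathrm{gr}^1(B) = M, so \mathrm{gr}(f) = 0 is not an isomorphism.
```

This illustrates why strictness matters in the lemma: an isomorphism of underlying objects need not induce an isomorphism on associated graded objects, and it is an isomorphism of filtered objects only when it is strict.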
• #### Structural speciation in chemical reactivity profiling of binary-ternary systems of Ni(II) with iminodialcohol and aromatic chelators The importance of structural speciation in the control of chemical reactivity in Ni(II) binary-ternary systems, involving (O,O,N)-containing substrates (1,1’-iminodi-2-propanol), and aromatic chelators (2,2’-bipyridine, 1,10-phenanthroline), prompted the systematic synthesis of new crystalline materials characterized by elemental analysis, FT-IR, UV-Visible, Luminescence, TGA, magnetic susceptibility, and X-ray crystallography. The structures contain mononuclear octahedral assemblies, the lattice architecture of which exemplifies reaction conditions under which conformational variants and solvent-associated lattice-imposed complexes are assembled. Transformations between complex species denote their association with reactivity pathways, suggesting alternate synthetic methodologies for their isolation. Theoretical work (Hirshfeld, Electrostatic Potential, DFT) signifies the impact of crystal structure on energy profiles of the generated species. The resulting physicochemical profiles of all compounds portray a well-configured interwoven network of pathways, projecting strong connection between structural speciation and Ni(II) reactivity patterns in organic-solvent media. The collective results provide well-defined parameterized profiles, poised to influence the synthesis of new Ni(II)-iminodialcohol materials with specified structural-magneto-optical properties. • #### Aging and Cholesterol Metabolism The role cholesterol metabolism plays in health span is clear, and monitoring the parameters of cholesterol metabolism is key to aging successfully.
The aim of this chapter is firstly to provide a brief overview of the mechanisms which regulate cholesterol in the body, secondly to discuss how aging affects cholesterol metabolism, and thirdly to unveil how systems biology is leading to an improved understanding of the intersection between aging and the dysregulation of cholesterol metabolism. • #### A Novel Averaging Principle Provides Insights in the Impact of Intratumoral Heterogeneity on Tumor Progression Typically stochastic differential equations (SDEs) involve an additive or multiplicative noise term. Here, we are interested in stochastic differential equations for which the white noise is nonlinearly integrated into the corresponding evolution term, typically termed random ordinary differential equations (RODEs). The classical averaging methods fail to treat such RODEs. Therefore, we introduce a novel averaging method suitable for a specific class of RODEs. To exemplify the importance of our method, we apply it to an important biomedical problem: the assessment of the impact of intratumoral heterogeneity on tumor dynamics. Precisely, we model gliomas according to a well-known Go or Grow (GoG) model, and tumor heterogeneity is modeled as a stochastic process. It has been shown that the corresponding deterministic GoG model exhibits an emerging Allee effect (bistability). In contrast, we analytically and computationally show that the introduction of white noise, as a model of intratumoral heterogeneity, leads to monostable tumor growth. This monostability behavior is also derived even when spatial cell diffusion is taken into account. • #### A Comprehensive Review of the Composition, Nutritional Value, and Functional Properties of Camel Milk Fat Recently, camel milk (CM) has been considered as a health-promoting icon due to its medicinal and nutritional benefits.
CM fat globule membrane has numerous health-promoting properties, such as anti-adhesion and anti-bacterial properties, which are suitable for people who are allergic to cow’s milk. CM contains milk fat globules with a small size, which accounts for their rapid digestion. Moreover, it also comprises lower amounts of cholesterol and saturated fatty acids concurrent with higher levels of essential fatty acids than cow milk, with an improved lipid profile manifested by reducing cholesterol levels in the blood. In addition, it is rich in phospholipids, especially plasmalogens and sphingomyelin, suggesting that CM fat may meet the daily nutritional requirements of adults and infants. Thus, CM and its dairy products have become more attractive for consumers. In view of this, we performed a comprehensive review of CM fat’s composition and nutritional properties. The overall goal is to increase knowledge related to CM fat characteristics and modify its unfavorable perception. Future studies are expected to be directed toward a better understanding of CM fat, which appears to be promising in the design and formulation of new products with significant health-promoting benefits. • #### Virtual reality training in cardiopulmonary resuscitation in schools UK average survival from Out of Hospital Cardiac Arrest (OHCA) is around 8.6%, which is significantly lower than other high performing countries with survival rates of over 20%. A cardiac arrest victim is 2–4 times more likely to survive OHCA with bystander CPR provision. Mandatory teaching of CPR to children in school is acknowledged to be the most effective way to reach the entire population and improve the bystander CPR rate, and is endorsed by the World Health Organization (WHO) “Kids Save Lives” statement. Despite this, Wales is yet to follow other home nations by including CPR training as a mandatory element within the school’s curriculum.
In this paper, we explore the role of teaching CPR to schoolchildren and report on the development by computer scientists at the University of Chester and the Welsh Ambulance Services NHS Trust (WAST) of VCPR, a virtual environment to help teach the procedure. VCPR was developed in three stages: identifying requirements and specifications; development of a prototype; and management—development of software, further funding and exploring opportunities for commercialisation. We describe the opportunities in Wales to skill up the whole population over time in CPR and present our virtual reality (VR) technology, which is emerging as a powerful tool for teaching CPR in schools. • #### New Self-dual Codes from 2 x 2 block circulant matrices, Group Rings and Neighbours of Neighbours In this paper, we construct new self-dual codes from a construction that involves a unique combination; $2 \times 2$ block circulant matrices, group rings and a reverse circulant matrix. There are certain conditions, specified in this paper, where this new construction yields self-dual codes. The theory is supported by the construction of self-dual codes over the rings $\mathbb{F}_2$, $\mathbb{F}_2+u\mathbb{F}_2$ and $\mathbb{F}_4+u\mathbb{F}_4$. Using extensions and neighbours of codes, we construct $32$ new self-dual codes of length $68$. We construct 48 new best known singly-even self-dual codes of length 96. • #### Layer Dynamics for the one dimensional $\varepsilon$-dependent Cahn-Hilliard / Allen-Cahn Equation We study the dynamics of the one-dimensional ε-dependent Cahn-Hilliard / Allen-Cahn equation within a neighborhood of an equilibrium of N transition layers, that in general does not conserve mass. Two different settings are considered which differ in that, for the second, we impose a mass-conservation constraint in place of one of the zero-mass flux boundary conditions at x = 1.
Motivated by the study of Carr and Pego on the layered metastable patterns of the Allen-Cahn equation in [10], and by that of Bates and Xun in [5] for the Cahn-Hilliard equation, we implement an N-dimensional, and a mass-conservative N−1-dimensional manifold respectively; therein, a metastable state with N transition layers is approximated. We then determine, for both cases, the essential dynamics of the layers (ODE systems with the equations of motion), expressed in terms of local coordinates relative to the manifold used. In particular, we estimate the spectrum of the linearized Cahn-Hilliard / Allen-Cahn operator, and specify wide families of ε-dependent weights δ(ε), µ(ε), acting at each part of the operator, for which the dynamics are stable and remain exponentially small in ε. Our analysis clarifies the role of mass conservation in the classification of the general mixed problem into two main categories where the solution has a profile close to Allen-Cahn, or, when the mass is conserved, close to the Cahn-Hilliard solution. • #### New Extremal Binary Self-dual Codes from block circulant matrices and block quadratic residue circulant matrices In this paper, we construct self-dual codes from a construction that involves both block circulant matrices and block quadratic residue circulant matrices. We provide conditions under which this construction can yield self-dual codes. We construct self-dual codes of various lengths over F2 and F2 + uF2. Using extensions, neighbours and sequences of neighbours, we construct many new self-dual codes. In particular, we construct one new self-dual code of length 66 and 51 new self-dual codes of length 68. • #### Spatial discretization for stochastic semilinear subdiffusion driven by integrated multiplicative space-time white noise Spatial discretization of the stochastic semilinear subdiffusion driven by integrated multiplicative space-time white noise is considered.
The spatial discretization scheme discussed in Gy\"ongy \cite{gyo_space} and Anton et al. \cite{antcohque} for stochastic quasi-linear parabolic partial differential equations driven by multiplicative space-time noise is extended to the stochastic subdiffusion. The nonlinear terms $f$ and $\sigma$ satisfy the global Lipschitz conditions and the linear growth conditions. The space derivative and the integrated multiplicative space-time white noise are discretized by using finite difference methods. Based on the approximations of the Green functions which are expressed with the Mittag-Leffler functions, the optimal spatial convergence rates of the proposed numerical method are proved uniformly in space under the suitable smoothness assumptions of the initial values. • #### Oscillatory and stability of a mixed type difference equation with variable coefficients The goal of this paper is to study the oscillation and stability of the mixed type difference equation with variable coefficients $\Delta x(n)=\sum_{i=1}^{\ell}p_{i}(n)x(\tau_{i}(n))+\sum_{j=1}^{m}q_{j}(n)x(\sigma_{j}(n)),\quad n\ge n_{0},$ where $\tau_{i}(n)$ is the delay term and $\sigma_{j}(n)$ is the advance term and they are positive real sequences for $i=1,\cdots,\ell$ and $j=1,\cdots,m$, respectively, and $p_{i}(n)$ and $q_{j}(n)$ are real functions. This paper generalises some known results and the examples illustrate the results. • #### Characterization of microwave and terahertz dielectric properties of single crystal La2Ti2O7 along one single direction New generation wireless communication systems require characterisations of dielectric permittivity and loss tangent at microwave and terahertz bands. La2Ti2O7 is a candidate material for microwave application. However, all the reported microwave dielectric data are average value from different directions of a single crystal, which could not reflect its anisotropic nature due to the layered crystal structure.
Its dielectric properties at the microwave and terahertz bands in a single crystallographic direction have rarely been reported. In this work, a single crystal ferroelectric La2Ti2O7 was prepared by floating zone method and its dielectric properties were characterized from 1 kHz to 1 THz along one single direction. The decrease in dielectric permittivity with increasing frequency is related to dielectric relaxation from radio frequency to microwave then to terahertz band. The capability of characterizing anisotropic dielectric properties of a single crystal in this work opens the feasibility for its microwave and terahertz applications. • #### Group Codes, Composite Group Codes and Constructions of Self-Dual Codes The main research presented in this thesis is around constructing binary self-dual codes using group rings together with some well-known code construction methods and the study of group codes and composite group codes over different alphabets. Both these families of codes are generated by the elements that come from group rings. A search for binary self-dual codes with new weight enumerators is an ongoing research area in algebraic coding theory. For this reason, we present a generator matrix in which we employ the idea of a bisymmetric matrix with its entries being the block matrices that come from group rings and give the necessary conditions for this generator matrix to produce a self-dual code over a finite commutative Frobenius ring. Together with our generator matrix and some well-known code construction methods, we find many binary self-dual codes with parameters [68, 34, 12] that have weight enumerators that were not known in the literature before. There is an extensive literature on the study of different families of codes over different alphabets and specifically finite fields and finite commutative rings.
The study of codes over rings opens up a new direction for constructing new binary self-dual codes with a rich automorphism group via the algebraic structure of the rings through the Gray maps associated with them. In this thesis, we introduce a new family of rings, study its algebraic structure and show that each member of this family is a commutative Frobenius ring. Moreover, we study group codes over this new family of rings and show that one can obtain codes with a rich automorphism group via the associated Gray map. We extend a well established isomorphism between group rings and the subring of the n x n matrices and show its applications to algebraic coding theory. Our extension enables one to construct many complex n x n matrices over the ring R that are fully defined by the elements appearing in the first row. This property allows one to build generator matrices with these complex matrices so that the search field is practical in terms of the computational times. We show how these complex matrices are constructed using group rings, study their properties and present many interesting examples of complex matrices over the ring R. Using our extended isomorphism, we define a new family of codes which we call the composite group codes or for simplicity, composite G-codes. We show that these new codes are ideals in the group ring RG and prove that the dual of a composite G-code is also a composite G-code. Moreover, we study generator matrices of the form [In | Ω(v)]; where In is the n x n identity matrix and Ω(v) is the composite matrix that comes from the extended isomorphism mentioned earlier. In particular, we show when such generator matrices produce self-dual codes over finite commutative Frobenius rings. Additionally, together with some generator matrices of the type [In | Ω(v)] and the well-known extension and neighbour methods, we find many new binary self-dual codes with parameters [68, 34, 12].
Lastly in this work, we study composite G-codes over formal power series rings and finite chain rings. We extend many known results on projections and lifts of codes over these alphabets. We also extend some known results on γ-adic codes over the infinite ring R∞. • #### Error estimates of a continuous Galerkin time stepping method for subdiffusion problem A continuous Galerkin time stepping method is introduced and analyzed for subdiffusion problem in an abstract setting. The approximate solution will be sought as a continuous piecewise linear function in time $t$ and the test space is based on the discontinuous piecewise constant functions. We prove that the proposed time stepping method has the convergence order $O(\tau^{1+ \alpha}), \, \alpha \in (0, 1)$ for general sectorial elliptic operators for nonsmooth data by using the Laplace transform method, where $\tau$ is the time step size. This convergence order is higher than the convergence orders of the popular convolution quadrature methods (e.g., Lubich's convolution methods) and L-type methods (e.g., L1 method), which have only $O(\tau)$ convergence for the nonsmooth data. Numerical examples are given to verify the robustness of the time discretization schemes with respect to data regularity. • #### Multi-metric Evaluation of the Effectiveness of Remote Learning in Mechanical and Industrial Engineering During the COVID-19 Pandemic: Indicators and Guidance for Future Preparedness This data set contains data collected from 5 universities in 5 countries about the effectiveness of e-learning during the COVID-19 pandemic, specifically tailored to mechanical and industrial engineering students. A survey was administered in May 2020 at these universities simultaneously, using Google Forms. The survey had 41 questions, including 24 questions on a 5-point Likert scale.
The survey questions gathered data on their program of study, year of study, university of enrolment and mode of accessing their online learning content. The Likert scale questions on the survey gathered data on the effectiveness of digital delivery tools, student preferences for remote learning and the success of the digital delivery tools during the pandemic. All students enrolled in modules taught by the authors of this study were encouraged to fill in the survey. Additionally, remaining students in the departments associated with the authors were also encouraged to fill in the form through emails sent on mailing lists. The survey was also advertised on external websites such as SurveyCircle and Facebook. Crucial insights have been obtained after analysing this data set that link the student demographic profile (gender, program of study, year of study, university) to their preferences for remote learning and effectiveness of digital delivery tools. This data set can be used for further comparative studies and was useful to get a snapshot of student preferences and e-learning effectiveness during the COVID-19 pandemic, which required the use of e-learning tools on a wider scale than previously and using new modes such as video conferencing that were set up within a short timeframe of a few days or weeks. • #### Non-Exhaust Vehicle Emissions of Particulate Matter and VOC from Road Traffic: A Review As exhaust emissions of particles and volatile organic compounds (VOC) from road vehicles have progressively come under greater control, non-exhaust emissions have become an increasing proportion of the total emissions, and in many countries now exceed exhaust emissions. Non-exhaust particle emissions arise from abrasion of the brakes and tyres and wear of the road surface, as well as from resuspension of road dusts. The national emissions, particle size distributions and chemical composition of each of these sources are reviewed.
Most estimates of airborne concentrations derive from the use of chemical tracers of specific emissions; the tracers and airborne concentrations estimated from their use are considered. Particle size distributions have been measured both in the laboratory and in field studies, and generally show particles to be in both the coarse (PM2.5-10) and fine (PM2.5) fractions, with a larger proportion in the former. The introduction of battery electric vehicles is concluded to have only a small effect on overall road traffic particle emissions. Approaches to numerical modelling of non-exhaust particles in the atmosphere are reviewed. Abatement measures include engineering controls, especially for brake wear, improved materials (e.g. for tyre wear) and road surface cleaning and dust suppressants for resuspension. Emissions from solvents in screen wash and de-icers now dominate VOC emissions from traffic in the UK, and exhibit a very different composition to exhaust VOC emissions. Likely future trends in non-exhaust particle emissions are described. • #### Talos: a prototype Intrusion Detection and Prevention system for profiling ransomware behaviour In this paper, we profile the behaviour and functionality of multiple recent variants of WannaCry and CrySiS/Dharma, through static and dynamic malware analysis. We then analyse and detail the commonly occurring behavioural features of ransomware. These features are utilised to develop a prototype Intrusion Detection and Prevention System (IDPS) named Talos, which comprises several detection mechanisms/components. Benchmarking is later performed to test and validate the performance of the proposed Talos IDPS system and the results are discussed in detail. It is established that the Talos system can successfully detect all ransomware variants tested, in an average of 1.7 seconds and instigate remedial action in a timely manner following first detection.
The paper concludes with a summary of our main findings and discussion of potential future works which may be carried out to allow the effective detection and prevention of ransomware on systems and networks. • #### New binary self-dual codes of lengths 56, 58, 64, 80 and 92 from a modification of the four circulant construction. In this work, we give a new technique for constructing self-dual codes over commutative Frobenius rings using $\lambda$-circulant matrices. The new construction was derived as a modification of the well-known four circulant construction of self-dual codes. Applying this technique together with the building-up construction, we construct singly-even binary self-dual codes of lengths 56, 58, 64, 80 and 92 that were not known in the literature before. Singly-even self-dual codes of length 80 with $\beta \in \{2,4,5,6,8\}$ in their weight enumerators are constructed for the first time in the literature. • #### Composite Matrices from Group Rings, Composite G-Codes and Constructions of Self-Dual Codes In this work, we define composite matrices which are derived from group rings. We extend the idea of G-codes to composite G-codes. We show that these codes are ideals in a group ring, where the ring is a finite commutative Frobenius ring and G is an arbitrary finite group. We prove that the dual of a composite G-code is also a composite G-code. We also define quasi-composite G-codes. Additionally, we study generator matrices, which consist of the identity matrices and the composite matrices. Together with the generator matrices, the well known extension method, the neighbour method and its generalization, we find extremal binary self-dual codes of length 68 with new weight enumerators for the rare parameters $\gamma = 7, 8$ and $9$. In particular, we find 49 new such codes. Moreover, we show that the codes we find are inaccessible from other constructions.
• #### Millimeter-Wave Free-Space Dielectric Characterization Millimeter wave technologies have widespread applications, for which dielectric permittivity is a fundamental parameter. The non-resonant free-space measurement techniques for dielectric permittivity using vector network analysis in the millimeter wave range are reviewed. An introductory look at the applications, significance, and properties of dielectric permittivity in the millimeter wave range is addressed first. The principal aspects of free-space millimeter wave measurement methods are then discussed, by assessing a variety of systems, theoretical models, extraction algorithms and calibration methods. In addition to conventional solid dielectric materials, the measurement of artificial metamaterials, liquid, and gaseous-phased samples are separately investigated. Free-space material extraction methods are then compared with resonance and transmission line methods, and their future perspective is presented in the concluding part. • #### G-Codes, self-dual G-Codes and reversible G-Codes over the Ring Bj,k In this work, we study a new family of rings, $B_{j,k}$, whose base field is the finite field $\mathbb{F}_{p^r}$. We study the structure of this family of rings and show that each member of the family is a commutative Frobenius ring. We define a Gray map for the new family of rings, study G-codes, self-dual G-codes, and reversible G-codes over this family. In particular, we show that the projection of a G-code over $B_{j,k}$ to a code over $B_{l,m}$ is also a G-code and the image under the Gray map of a self-dual G-code is also a self-dual G-code when the characteristic of the base field is 2. Moreover, we show that the image of a reversible G-code under the Gray map is also a reversible G2j+k-code. The Gray images of these codes are shown to have a rich automorphism group which arises from the algebraic structure of the rings and the groups.
Finally, we show that quasi-G codes, which are the images of G-codes under the Gray map, are also Gs-codes for some s.
Each Linux process has its own dedicated address space, dynamically translated into physical memory address space by the MMU (and the kernel) [1]. To each individual process, the view is as if it alone has full access to the system’s physical memory [2]. More importantly, the address space of even a single process can be much larger than physical memory. However, the memory usage of each process is bounded by Linux resource limits (configured via setrlimit()). The address space is divided into several parts, including text, data, heap and stack. Stack and heap are the two parts we’ll talk about. When a process is created, the system sets up its heap and stack segments. While the stack is memory reserved as scratch space for thread execution, the heap is memory set aside for dynamic allocation. Each thread gets a stack, while there’s typically only one heap for the application. The OS allocates the stack for each system-level thread when the thread is created, and the stack size is fixed at creation. Typically, the OS is called by the language runtime to allocate the heap for the application. The heap size is set on application startup, but it can grow as allocators request more memory from the OS. [3] At runtime, dynamic memory management operates on the heap. The standard C functions malloc() and free() are used to allocate and deallocate memory blocks. malloc() is a library call, not a system call; it deals with paged virtual memory rather than physical memory (which is handled by the kernel). Because the system creates the heap and stack segments when the process is created, malloc() already has some memory to work with and does not have to call the OS for every memory request from the program.
If more memory is needed (either because of malloc() or because the stack grows), then a system call (brk()/sbrk()/mmap()) is made to obtain a contiguous (with respect to virtual memory addresses) chunk of memory, which malloc() then slices into smaller chunks and hands out to the application. [3,4] Different implementations of malloc() attempt to satisfy any given request from the memory which has already been allocated to the process, and these allocators are categorized mainly by how they keep track of the free blocks that they can use to parcel out memory to applications. The main categories are first-fit, best-fit and worst-fit. [5] ## Memory allocation inside the kernel Actual physical memory is managed by the Linux kernel. The kernel treats physical pages as the basic unit of memory management. Although the processor’s smallest addressable unit is usually a word (or even a byte), the MMU typically deals in pages, and it manages page tables with page-sized granularity. [2] The space in between the heap and the stack is unallocated space, therefore the top of the heap is often referred to as the “break point” because this is where the memory space is split. As more dynamic memory is needed, the application must ask the OS to move the break point up to allocate more memory for the process, thus increasing the heap size. This is usually achieved by brk()/sbrk(). On a call to brk(), the Linux kernel performs a few checks and then allocates the new memory for the process. First, the kernel aligns the old and the new break point to be on page boundaries. A page (usually 4 KB) is the smallest unit of memory that the OS will give to any process. Then the kernel checks the resource limits and verifies that it is safe to allocate the required memory, and finally it calls the do_brk() function to increase the memory for the process.
Each process has a map/page-table that takes a certain range of addresses in the process’s address space and maps it to physical memory. When the process calls brk(), the OS increases the map size for the heap, thus giving the process more memory to use. When the map size is increased, the new pages that are mapped into the address space do not actually exist yet, i.e., no physical pages are allocated for the new mapping. By increasing the map size, the OS only provides a mapping between the new addresses in the process space and the memory that those pages will occupy when they are used. ## Summary of malloc() workflow malloc() calls an internal helper, chunk_alloc(), to find a block to return to the user. This helper is the function that looks through the bins for a match to return. If it cannot find a suitable match, it calls malloc_extend_top(), another helper function, which actually calls sbrk(). Once the memory is retrieved from the OS, chunk_alloc() splits off the piece that is required for the current request. It then adds the remainder to the appropriate bin for future use and returns the new block to malloc(), which then returns that block to the user. As it turns out, most of the work that malloc() does deals with deciding how to manage memory that has already been allocated by the OS.
# Tools: phpMetrics

I'm researching some tools to visualize PHP code which I think can be helpful to understand the complexity of a project. In this article I explore a tool called phpMetrics, a static analysis tool for PHP. It can collect information about the PHP files in your project. The tool can be installed via composer as a standalone tool:

composer global require 'phpmetrics/phpmetrics'
phpmetrics --report-html=myreport.html /path/of/your/sources

When the tool is installed you can run it from the composer global directory. It starts by analysing all the PHP files, collecting metrics and generating everything that is needed.

vagrant@seoeffect:/var/www/html$ /home/vagrant/.composer/vendor/bin/phpmetrics --report-html=myreport.html /var/www/seoeffect
PHPMetrics by Jean-François Lépine
795/1853 [============>---------------] 42%

A lot of numerical metrics are collected, like the cyclomatic complexity, the number of lines, the number of classes etc.; for an overview of all metrics the documentation should be consulted. These metrics can be quite useful for a global overview of how well your code is doing and where the possible problems in your code are. All these metrics are drawn into graphs that give a quick overview of the state of a project. When the analysis is done, one HTML file is generated with an overview of the metrics. For this project there are a lot of red circles, so there is a lot of work to do. On
# api module

## Scalability study - API

This API facilitates the use of the gemseo.problems.scalable.data_driven.study package implementing classes to benchmark MDO formulations based on scalable disciplines. The ScalabilityStudy class implements the concept of a scalability study:

1. By instantiating a ScalabilityStudy, the user defines the MDO problem in terms of design parameters, objective function and constraints.
2. For each discipline, the user adds a dataset stored in a Dataset and selects a type of ScalableModel to build the ScalableDiscipline associated with this discipline.
3. The user adds different optimization strategies, defined in terms of both optimization algorithms and MDO formulation.
4. The user adds different scaling strategies, in terms of sizes of design parameters, coupling variables and equality and inequality constraints. The user can also define a scaling strategy according to particular parameters rather than groups of parameters.
5. Lastly, the user executes the ScalabilityStudy and the results are written in several files and stored in directories in a hierarchical way, where names depend on the MDO formulation, the scaling strategy and the replication when necessary. Different kinds of files are stored: optimization graphs, dependency matrix plots and, of course, scalability results by means of a dedicated class: ScalabilityResult.

gemseo.problems.scalable.data_driven.api.create_scalability_study(objective, design_variables, directory='study', prefix='', eq_constraints=None, ineq_constraints=None, maximize_objective=False, fill_factor=0.7, active_probability=0.1, feasibility_level=0.8, start_at_equilibrium=True, early_stopping=True, coupling_variables=None)[source]

This method creates a ScalabilityStudy. It requires two mandatory arguments:

• the 'objective' name,
• the list of 'design_variables' names.
Concerning output files, we can specify:

• the directory, which is 'study' by default,
• the prefix of output file names (default: no prefix).

Regarding optimization parametrization, we can specify:

• the list of equality constraint names (eq_constraints),
• the list of inequality constraint names (ineq_constraints),
• the choice of maximizing the objective function (maximize_objective).

By default, the objective function is minimized and the MDO problem is unconstrained. Last but not least, with regard to the scalability methodology, we can overwrite:

• the default fill factor of the input-output dependency matrix (fill_factor),
• the probability to set the inequality constraints as active at the initial step of the optimization (active_probability),
• the offset of satisfaction for inequality constraints (feasibility_level),
• the use of a preliminary MDA to start at equilibrium (start_at_equilibrium),
• the post-processing of the optimization database to get results earlier than the final step (early_stopping).

Parameters:

• objective (str) – name of the objective.
• design_variables (list(str)) – names of the design variables.
• directory (str) – working directory of the study. Default: 'study'.
• prefix (str) – prefix for the output filenames. Default: ''.
• eq_constraints (list(str)) – names of the equality constraints. Default: None.
• ineq_constraints (list(str)) – names of the inequality constraints. Default: None.
• maximize_objective (bool) – whether to maximize the objective. Default: False.
• fill_factor (float) – default fill factor of the input-output dependency matrix. Default: 0.7.
• active_probability (float) – probability to set the inequality constraints as active at the initial step of the optimization. Default: 0.1.
• feasibility_level (float) – offset of satisfaction for inequality constraints. Default: 0.8.
• start_at_equilibrium (bool) – start at equilibrium using a preliminary MDA. Default: True.
• early_stopping (bool) – post-process the optimization database to get results earlier than the final step. Default: True.

gemseo.problems.scalable.data_driven.api.plot_scalability_results(study_directory)[source]

This method plots the set of ScalabilityResult objects generated by a ScalabilityStudy and located in the directory created by this study.

Parameters: study_directory (str) – directory of the scalability study.
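A usage sketch of this API could look like the following. The argument names come from the signature above, but the objective and variable names ("obj", "x_1", "g_1", …) are invented for illustration, and the call is guarded since it requires gemseo to be installed.

```python
# Hypothetical call to create_scalability_study(); the option names match
# the documented signature, the string values are made up for the example.
study_options = dict(
    objective="obj",
    design_variables=["x_1", "x_2"],
    directory="study",          # default working directory
    ineq_constraints=["g_1"],
    maximize_objective=False,   # minimize by default
    fill_factor=0.7,            # default dependency-matrix fill factor
)

try:
    from gemseo.problems.scalable.data_driven.api import (
        create_scalability_study,
    )
    study = create_scalability_study(**study_options)
except ImportError:
    study = None  # gemseo not installed; the options above are the point
```

After executing the study, `plot_scalability_results("study")` would then read the results back from the working directory.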
# Mathematical expression in axis label

In R, I use expression(theta[l]) so that the label of my plot axis is the same as $\theta_l$ from LaTeX. For aesthetic reasons, I'd rather display $\theta_\ell$. Can you help me?

EDIT Before, I did

plot(1:10, 1:10, xlab=expression(theta[l]))

and I exported the resulting picture to pdf. Then, using

\begin{figure}[htbp]
\centerline{\includegraphics[scale=.6]{test.pdf}}
\end{figure}

my picture was inserted in LaTeX. Following the comments, here is what I now do:

require(tikzDevice)
tikz("test.tex", standAlone=TRUE, width=5, height=5)
plot(1:10, 1:10, xlab="$\\theta_\\ell$")
dev.off()
tools::texi2pdf('test.tex')
system(paste(getOption('pdfviewer'),'test.pdf'))

However, when I insert the resulting plot in LaTeX, the quality of the figure is not as good as before. Is there something more that I can do?

- @agstudy: Windows. Linux is also fine for me. –  Marco Mar 18 '13 at 15:32
@Sven Hohenstein: Thanks for your picture :-) –  Marco Mar 18 '13 at 15:47
Is this what you're looking for? –  plannapus Mar 18 '13 at 15:51
@plannapus: mmmmm, interesting! I will investigate that package :-) –  Marco Mar 18 '13 at 16:11
This works with cairo_pdf: plot(1:10, main = "\u2113"). edit: I just realized that you want theta_ell, that probably won't help there... –  Hemmo Mar 20 '13 at 18:33

There is really no reason to use a possibly unmaintained package like 'tikzDevice' for such a simple problem. Part of the problem with the 'tikz' device is that it doesn't seem to correctly accept the 'xpinch' and 'ypinch' arguments that specify your plot's resolution. There is a larger question of adding LaTeX notation to plots, but for this localized problem, the question is one of specifying the font to make the base 'plotmath' package display cursive letters for you.
You can change the font for your x-axis label by separating it out from the plot command and choosing a custom font from within the 'title' function, with something like this:

plot(1:10, 1:10, xlab="")
windowsFonts(script=windowsFont("Script MT Bold"))
title(xlab=expression(theta[l]), family="script")

What we've done is to specify a null label for the x-axis in the plot command to first make space. Then, we load a system font into the available font families (I like Script MT Bold for expressions). Finally, we can use the 'title' function to plot the x-axis label and specify the family for any text in that label. By doing this, we preserve all of the original functionality of the plotting device, so you should no longer have a drop in resolution when converting to PDF.

Now if anyone has a good solution to the LaTeX notation problem outside of the 'tikzDevice' package, I would love to hear about it. The only way I know to do this well is to flip the model and use the 'tikz' LaTeX package to draw the whole graphic manually from within the LaTeX file, or to use the 'pixmap' R package to draw an image of my expression on top of the plot. Neither feels like a perfect approach.

- Thanks, I will study your answer this week end :-) –  Marco Mar 22 '13 at 21:25
Where do we find the "windowsFonts"? –  Marco Mar 26 '13 at 5:31
These are the fonts installed on your system. You just have to use the name of the font as it is registered on your computer. On a Windows computer, you can use the Character Map application (system tool). –  Dinre Mar 26 '13 at 11:46
windowsFonts is a function that can display the currently available font mappings for the interactive graphics device, and it can be used to assign new mappings. It's obviously only available on ... Windows. There are corresponding functions on Mac (quartzFonts) and *nix (X11Fonts). –  BondedDust Mar 27 '13 at 17:58

I think tikzDevice is the way to go here. You can install from R-forge.
install.packages("tikzDevice", repos="http://R-Forge.R-project.org")

The tikz/pgf philosophy is to create a plot that can be typeset. You will probably want these to be consistent with your document, e.g. with the same packages, fonts, font size etc. You can set these things within a call to tikz by setting options such as the document declaration

options(tikzDocumentDeclaration = "\\documentclass[10pt]{article}")

and the packages via tikzLatexPackages (or similar). You can also control the font size. All these things are detailed in ?tikzDevice.

You could also use knitr to create your plots within a literate programming document (.Rnw). In this case, using a tikz device, a pdf is created (as external = TRUE), using the same document declaration and header / packages as the whole document.

\documentclass[12pt,a4paper]{article}
\begin{document}
<<setup, include = FALSE>>=
opts_chunk$set(dev = 'tikz', external = TRUE)
@
<<myplot, fig.width=5, fig.height = 5, echo=FALSE>>=
plot(1:10, 1:10, xlab="$\\theta_\\ell$")
@
\end{document}

- If you are installing on a Mac (or Windows) you will probably need to install the proper package of tools and use: install.packages("tikzDevice", repos="http://R-Forge.R-project.org") –  BondedDust Mar 27 '13 at 18:00

This is a somewhat dirty solution, but it makes it:

plot(1,1, xlab=expression(theta))
title(xlab=" \u2113",line=3.2,cex.lab=.7)

First plot with the theta symbol. Then add the \ell symbol with a smaller font size, manually setting the position.

- @carelus: Thanks. Does it still appear as \ell when the image is exported in pdf format? –  Marco Mar 23 '13 at 7:59
just checked and apparently it doesn't :-( –  Julián Urbano Mar 23 '13 at 19:21
me neither. Thanks anyway –  Marco Mar 23 '13 at 19:26
The behavior will depend on the font mappings of the pdf-viewer. On a Mac using the Preview.app as viewer it does appear with the same cursive ell.
Your viewer needs to have a proper Symbol font to get the same behavior, and I believe the Windows viewers are often not set up properly for this. –  BondedDust Mar 27 '13 at 18:04
@Dwin: Thanks for the precision. Do you have a recommendation regarding the pdf-viewer on Windows? –  Marco Mar 29 '13 at 6:37

I found a workaround here. They do explain a lengthy process to get the encoding to work with the standard pdf device. Otherwise, the CairoPDF device can be used by installing the Cairo package. Then something like xlab="\u2113" will show up in the pdf using @Julián Urbano's solution. I had no luck using the character within an expression. -
# How do I create a non-rectangular border surrounding a set of nodes in TikZ?

I have the following MWE. I would like to draw a border without including $y_3$ (or s3 in the .tex file) -- so it would be a rectangle with its upper left corner changed so that it bypasses s3. Is there a way to create such a border, basically a rectangular polygon, surrounding a set of nodes? (EDIT: I was trying to do it using the last command in the tikz file.)

\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{fit,chains}
\begin{document}
\begin{tikzpicture}[node distance=2cm]
\node(s1) {$y_1$};
\node(s2) [right of = s1] {$y_2$};
\node(s3) [right of = s2]{$y_{3}$};
\node(s4) [right of = s3]{$y_4$};
\node(s5) [right of = s4]{$y_5$};
\draw [->] (s1) -- (s2) ;
\draw [->] (s2) -- (s3) ;
\draw [->] (s3) -- (s4) ;
\draw [->] (s4) -- (s5) ;
\node(x1) [below of = s1]{$x_1$};
\node(x2) [right of = x1] {$X_2$};
\node(x3) [right of = x2] {$X_3$};
\node(x4) [right of = x3] {$X_4$};
\node(x5) [right of = x4] {$X_5$};
\draw [->] (s1) -- (x1) ;
\draw [->] (s2) -- (x2) ;
\draw [->] (s3) -- (x3) ;
\draw [->] (s4) -- (x4) ;
\draw [->] (s5) -- (x5) ;
\node[rectangle,draw=red, fit=(x3) (x4) (x5) (s4) (s5),inner sep=3mm,line width=1mm](rect2) {};
\end{tikzpicture}
\end{document}

If instead of a node you accept a non-automatically-drawn line ...
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{fit,chains, calc}
\begin{document}
\begin{tikzpicture}[node distance=2cm]
\node(s1) {$y_1$};
\node(s2) [right of = s1] {$y_2$};
\node(s3) [right of = s2]{$y_{3}$};
\node(s4) [right of = s3]{$y_4$};
\node(s5) [right of = s4]{$y_5$};
\draw [->] (s1) -- (s2) ;
\draw [->] (s2) -- (s3) ;
\draw [->] (s3) -- (s4) ;
\draw [->] (s4) -- (s5) ;
\node(x1) [below of = s1]{$x_1$};
\node(x2) [right of = x1] {$X_2$};
\node(x3) [right of = x2] {$X_3$};
\node(x4) [right of = x3] {$X_4$};
\node(x5) [right of = x4] {$X_5$};
\draw [->] (s1) -- (x1) ;
\draw [->] (s2) -- (x2) ;
\draw [->] (s3) -- (x3) ;
\draw [->] (s4) -- (x4) ;
\draw [->] (s5) -- (x5) ;
\node[rectangle,draw=red, fit=(x3) (x4) (x5) (s4) (s5),inner sep=3mm,line width=1mm](rect2) {};
\draw[blue, line width=1mm] (x3.south west)-|(s5.north east) --($(s3.north)!0.5!(s4.north)$)|-($(s3.west)!0.5!(x3.west)$)--cycle;
\end{tikzpicture}
\end{document}
# 4.2: Laws of Set Theory

## Tables of Laws

The following basic set laws can be derived using either the Basic Definition or the Set-Membership approach and can be illustrated by Venn diagrams.

Table $$\PageIndex{1}$$: Basic Laws of Set Theory

Commutative Laws: ($$1$$) $$A \cup B = B \cup A$$   ($$1^{\prime}$$) $$A \cap B = B\cap A$$
Associative Laws: ($$2$$) $$A\cup (B\cup C)=(A\cup B)\cup C$$   ($$2^{\prime}$$) $$A\cap (B\cap C)=(A\cap B)\cap C$$
Distributive Laws: ($$3$$) $$A\cap (B\cup C)=(A\cap B)\cup (A\cap C)$$   ($$3^{\prime}$$) $$A\cup (B\cap C)=(A\cup B)\cap (A\cup C)$$
Identity Laws: ($$4$$) $$A\cup\emptyset =\emptyset\cup A=A$$   ($$4^{\prime}$$) $$A\cap U=U\cap A=A$$
Complement Laws: ($$5$$) $$A\cup A^{c}=U$$   ($$5^{\prime}$$) $$A\cap A^c=\emptyset$$
Idempotent Laws: ($$6$$) $$A\cup A=A$$   ($$6^{\prime}$$) $$A\cap A=A$$
Null Laws: ($$7$$) $$A\cup U=U$$   ($$7^{\prime}$$) $$A\cap\emptyset=\emptyset$$
Absorption Laws: ($$8$$) $$A\cup (A\cap B)=A$$   ($$8^{\prime}$$) $$A\cap (A\cup B)=A$$
DeMorgan's Laws: ($$9$$) $$(A\cup B)^c=A^c\cap B^c$$   ($$9^{\prime}$$) $$(A\cap B)^c=A^c\cup B^c$$
Involution Law: ($$10$$) $$(A^c)^c=A$$

It is quite clear that most of these laws resemble, or in fact are analogous to, laws in basic algebra and the algebra of propositions.

## Proof Using Previously Proven Theorems

Once a few basic laws or theorems have been established, we frequently use them to prove additional theorems. This method of proof is usually more efficient than proof by Definition. To illustrate, let us prove the following Corollary to the Distributive Law. The term "corollary" is used for theorems that can be proven with relative ease from previously proven theorems.

Corollary $$\PageIndex{1}$$: A Corollary to the Distributive Law of Sets

Let A and B be sets.
Then $$(A\cap B) \cup (A\cap B^c) = A\text{.}$$ Proof \begin{equation*} \begin{split} (A\cap B) \cup (A\cap B^c) & = A \cap (B \cup B^c)\\ & \quad \textrm{Why?}\\ & = A \cap U\\ &\quad \textrm{Why?}\\ & = A\\ &\quad \textrm{Why?} \end{split}\text{.} \end{equation*} ## Proof Using the Indirect Method/Contradiction The procedure one most frequently uses to prove a theorem in mathematics is the Direct Method, as illustrated in Theorem 4.1.1 and Theorem 4.1.2. Occasionally there are situations where this method is not applicable. Consider the following: Theorem $$\PageIndex{1}$$: An Indirect Proof in Set Theory Let $$A, B, C$$ be sets. If $$A\subseteq B$$ and $$B\cap C = \emptyset\text{,}$$ then $$A\cap C = \emptyset\text{.}$$ Proof Commentary: The usual and first approach would be to assume $$A\subseteq B$$ and $$B\cap C = \emptyset$$ is true and to attempt to prove $$A\cap C = \emptyset$$ is true. To do this you would need to show that nothing is contained in the set $$A \cap C\text{.}$$ Think about how you would show that something doesn't exist. It is very difficult to do directly. The Indirect Method is much easier: If we assume the conclusion is false and we obtain a contradiction --- then the theorem must be true. This approach is on sound logical footing since it is exactly the same method of indirect proof that we discussed in Subsection 3.5.3. Assume $$A\subseteq B$$ and $$B\cap C = \emptyset\text{,}$$ and $$A\cap C \neq \emptyset\text{.}$$ To prove that this cannot occur, let $$x\in A \cap C\text{.}$$ \begin{equation*} \begin{split} x \in A \cap C & \Rightarrow x \in A \textrm{ and } x \in C\\ & \Rightarrow x \in B \textrm{ and } x \in C\\ & \Rightarrow x \in B \cap C \end{split}\text{.} \end{equation*} But this contradicts the second premise. Hence, the theorem is proven. ## Exercises In the exercises that follow it is most important that you outline the logical procedures or methods you use. Exercise $$\PageIndex{1}$$ 1. 
Prove the associative law for intersection (Law $$2^{\prime}$$) with a Venn diagram.
2. Prove DeMorgan's Law (Law 9) with a membership table.
3. Prove the Idempotent Law (Law 6) using basic definitions.

Answers:

1.

2.

\begin{equation*} \begin{array}{ccccccc} A & B &A^c & B^c & A\cup B & (A\cup B)^c &A^c\cap B^c \\ \hline 0 & 0 &1 & 1 & 0 & 1 & 1 \\ 0 & 1 &1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 0 \\ \end{array} \end{equation*}

The last two columns are the same, so the two sets must be equal.

3.

\begin{equation*} \begin{split} x\in A\cup A & \Rightarrow (x\in A) \lor (x\in A)\quad\textrm{by the definition of } \cup\\ &\Rightarrow x\in A \quad\textrm{ by the idempotent law of logic} \end{split} \end{equation*}

Therefore, $$A\cup A\subseteq A\text{.}$$

\begin{equation*} \begin{split} x\in A &\Rightarrow (x\in A) \lor (x\in A) \quad \textrm{ by disjunctive addition}\\ & \Rightarrow x\in A\cup A\\ \end{split} \end{equation*}

Therefore, $$A \subseteq A\cup A$$ and so we have $$A\cup A=A\text{.}$$

Exercise $$\PageIndex{2}$$

1. Prove the Absorption Law (Law $$8^{\prime}$$) with a Venn diagram.
2. Prove the Identity Law (Law 4) with a membership table.
3. Prove the Involution Law (Law 10) using basic definitions.

Exercise $$\PageIndex{3}$$

Prove the following using the set theory laws, as well as any other theorems proved so far.

1. $$\displaystyle A \cup (B - A) = A \cup B$$
2. $$\displaystyle A - B = B^c - A ^c$$
3. $$\displaystyle A\subseteq B, A\cap C \neq \emptyset \Rightarrow B\cap C \neq \emptyset$$
4. $$\displaystyle A\cap (B - C) = (A\cap B) - (A\cap C)$$
5. $$\displaystyle A - (B \cup C) = (A - B)\cap (A - C)$$

For all parts of this exercise, a reason should be supplied for each step. We have supplied reasons only for part a and left them out of the other parts to give you further practice.

1.
\begin{equation*} \begin{split} A \cup (B-A)&=A\cup (B \cap A^c) \textrm{ by Exercise 4.1.1 of Section 4.1}\\ & =(A\cup B)\cap (A\cup A^c) \textrm{ by the distributive law}\\ &=(A\cup B)\cap U \textrm{ by the complement law}\\ &=(A\cup B) \textrm{ by the identity law } \square \end{split}\text{.} \end{equation*}

2.

\begin{equation*} \begin{split} A - B & = A \cap B ^c\\ & =B^c\cap A\\ &=B^c\cap (A^c)^c\\ &=B^c-A^c\\ \end{split}\text{.} \end{equation*}

3. Select any element, $$x \in A\cap C\text{.}$$ One such element exists since $$A\cap C$$ is not empty.

\begin{equation*} \begin{split} x\in A\cap C\ &\Rightarrow x\in A \land x\in C \\ & \Rightarrow x\in B \land x\in C \\ & \Rightarrow x\in B\cap C \\ & \Rightarrow B\cap C \neq \emptyset \quad \square \\ \end{split}\text{.} \end{equation*}

4.

\begin{equation*} \begin{split} A\cap (B-C) &=A\cap (B\cap C^c) \\ & = (A\cap B\cap A^c)\cup (A\cap B\cap C^c) \\ & =(A\cap B)\cap (A^c\cup C^c) \\ & =(A\cap B)\cap (A\cap C)^c \\ & =(A\cap B) - (A\cap C) \quad \square\\ \end{split}\text{.} \end{equation*}

5.

\begin{equation*} \begin{split} A-(B\cup C)& = A\cap (B\cup C)^c\\ & =A\cap (B^c\cap C^c)\\ & =(A\cap B^c)\cap (A\cap C^c)\\ & =(A-B)\cap (A-C) \quad \square\\ \end{split}\text{.} \end{equation*}

Exercise $$\PageIndex{4}$$

Use previously proven theorems to prove the following.

1. $$\displaystyle A \cap (B\cap C)^c= (A\cap B^c)\cup (A\cap C^{c })$$
2. $$\displaystyle A \cap (B\cap (A\cap B)^c)= \emptyset$$
3. $$\displaystyle (A\cap B) \cup B^c = A \cup B^c$$
4. $$A \cup (B - C) = (A \cup B) - (C - A)\text{.}$$

Exercise $$\PageIndex{5}$$: Hierarchy of Set Operations

The rules that determine the order of evaluation in a set expression that involves more than one operation are similar to the rules for logic. In the absence of parentheses, complementations are done first, intersections second, and unions third. Parentheses are used to override this order.
If the same operation appears two or more consecutive times, evaluate from left to right. In what order are the following expressions performed?

1. $$A \cup B^c\cap C\text{.}$$
2. $$A\cap B \cup C\cap B\text{.}$$
3. $$\displaystyle A \cup B \cup C^c$$

1. $$\displaystyle A\cup ((B^c)\cap C)$$
2. $$\displaystyle (A\cap B)\cup (C\cap B)$$
3. $$\displaystyle (A\cup B)\cup (C^c)$$

Exercise $$\PageIndex{6}$$

There are several ways to format the proofs in this chapter. One that should be familiar to you from Chapter 3 is illustrated with the following alternate proof of part (a) in Theorem 4.1.1:

Table $$\PageIndex{2}$$: An alternate format for the proof of Theorem 4.1.1

(1) $$x \in A \cap (B \cup C)$$  Premise
(2) $$(x \in A) \land (x \in B \cup C)$$  (1), definition of intersection
(3) $$(x \in A) \land ((x \in B) \lor (x \in C))$$  (2), definition of union
(4) $$((x \in A)\land (x\in B))\lor ((x \in A)\land (x\in C))$$  (3), distribute $$\land$$ over $$\lor$$
(5) $$(x \in A\cap B) \lor (x \in A \cap C)$$  (4), definition of intersection
(6) $$x \in (A \cap B) \cup (A \cap C)$$  (5), definition of union $$\blacksquare$$

Prove part (b) of Theorem 4.1.2 and Theorem 4.2.1 using this format.

4.2: Laws of Set Theory is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Al Doerr & Ken Levasseur.
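A membership-table argument like the one given above for DeMorgan's Law can also be checked mechanically by enumerating all subsets of a small finite universe. The brute-force sketch below is only an illustration for finite sets, not a replacement for the proofs.

```python
# Brute-force check of DeMorgan's Law (A ∪ B)^c = A^c ∩ B^c over every
# pair of subsets A, B of a small universe U. Each element's membership
# pattern corresponds to one row of the membership table.
from itertools import chain, combinations

U = frozenset(range(4))
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(U, r) for r in range(len(U) + 1))]

ok = all((U - (A | B)) == (U - A) & (U - B)
         for A in subsets for B in subsets)
```

Since set identities are checked element by element, verifying all membership patterns over a 4-element universe exercises every row the membership table could contain.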
1. ## percent equation

Hi, everyone! I am so happy to find this friendly and supportive forum, glad to be part of it. I just started taking classes in college this fall, and math is one of the subjects I have to work hard at. I am doing OK so far but there are problems I am finding hard to solve. One of them is a word problem using percentages. I had spent almost all day today trying to solve it and finally I am giving up and asking for help. I will appreciate the help.

A hospital needs to dilute a 60% boric acid solution to a 10% solution. If it needs 40 liters of the 10% solution, how much of the 60% solution and how much water should it use?

2. Hello, mirdita59! Welcome aboard!

A hospital needs to dilute a 60% boric acid solution to a 10% solution. If it needs 40 liters of the 10% solution, how much of the 60% solution and how much water should it use?

Let $\displaystyle x$ = amount of 60% solution to be used.
Then $\displaystyle 40-x$ = amount of water to be used.

Consider the amount of boric acid in each stage.

We used $\displaystyle x$ liters of the 60% solution.
. . This contains: $\displaystyle 0.6x$ liters of boric acid.

We add $\displaystyle 40-x$ liters of pure water.
. . This contains: $\displaystyle 0$ liters of boric acid.

The mixture contains: .$\displaystyle 0.6x + 0 \:=\:0.6x$ liters of boric acid.

But we are told that the final 40 liters should be 10% boric acid.
. . Hence, it will contain: .$\displaystyle (40)(0.10) \:=\:4$ liters of boric acid.

There is our equation: . $\displaystyle 0.6x \:=\:4\quad\Rightarrow\quad x \,=\,\frac{20}{3}$

Therefore, we should use $\displaystyle 6\frac{2}{3}$ liters of the 60% solution and $\displaystyle 33\frac{1}{3}$ liters of water.

3. A hospital needs to dilute a 60% boric acid solution to a 10% solution. If it needs 40 liters of the 10% solution, how much of the 60% solution and how much water should it use?

Water is 0% acid. So the final volume of liquid is 40 liters.
Let x = volume of the 60% acid solution to be mixed, in liters.
So (40-x) liters of water is to be added.

x(60%) + (40-x)(0%) = 40(10%)

Divide both sides by 1%:

x(60) + 0 = 40(10)
x = 400/60 = 6.667 liters, or (6 and 2/3) liters

Therefore, use 6 and 2/3 liters of the 60% boric acid solution and (40 - 6.667) = 33.333 liters, or 33 and 1/3 liters, of water.
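The acid-balance arithmetic above is easy to verify with exact fractions; the sketch below just re-derives x = 20/3 from the equation 0.6x = 40(0.10).

```python
# Acid balance for the dilution: x liters at 60% plus (40 - x) liters of
# water must contain the same boric acid as 40 liters at 10%.
from fractions import Fraction

total = 40
acid_needed = total * Fraction(10, 100)      # 4 liters of boric acid
x = acid_needed / Fraction(60, 100)          # liters of 60% solution
water = total - x                            # liters of pure water
```

Working in `Fraction` keeps the answer as 20/3 and 100/3 exactly, rather than the rounded 6.667 and 33.333.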
## Basic time series analysis

Hello, I am having trouble understanding the nomenclature associated with a stochastic process, especially when associated with a time series.

Let $\{X_t\}$ (or simply $X_t$) be a stochastic process. Then, we can describe the process by finding the joint pdf of observations $X_{t_1},\dots,X_{t_k}$ taken at times $t_1,\dots,t_k$, for any value of $k$. So, my stochastic process looks like $(X_{t_1},\dots,X_{t_k})$? I guess I am not understanding the relationship between $t$ and the subscript $k$, as in, the difference between seeing $X_t$ and $X_{t_k}$. My thought is, we have a stochastic process $\{X_t\}$, which is a collection of observations taken at times $t_1,\dots,t_k$:

x--------------------x--------------------x----------------------x

where the x denotes observations (or realizations) $X_{t_1}$, $X_{t_2}$, $X_{t_3}$ and $X_{t_4}$ respectively through some time $t$. So, for instance, if I am measuring human traffic in airports and I collect my sample data at each one of these times and come up with some kind of distribution for each sample $(X_{t_{1}},...,X_{t_k})$, then I try to find a joint distribution to describe this particular process, which I let run for a given amount of time. Is this correct?

So, a process such as a random walk $X_t = X_{t-1} + Z_t$, where $Z_{t}$ is white noise, is made of two processes, namely $\{X_t\}$ and $\{Z_t\}$, correct? Do I view $X_t$ as a random variable with a joint distribution made up of random variables observed at times $t_1,\dots,t_k$ for a duration, so that $\{X_t\}$ is a process?

As you can tell, my thoughts are pretty scattered. I would really appreciate some help understanding the basic structure of stochastic processes relating to time series analysis. Thank you
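One concrete way to see the two processes involved in a random walk is to simulate one. Below, Z is generated as Gaussian white noise and X accumulates it via X_t = X_{t-1} + Z_t; the seed and length are arbitrary choices for the sketch.

```python
# Simulate a random walk: Z_t is white noise, X_t = X_{t-1} + Z_t.
# Each run of this script is one realization (one "path") of the process.
import random

random.seed(42)
T = 200
Z = [random.gauss(0.0, 1.0) for _ in range(T)]  # white noise Z_t

X = [Z[0]]                       # the walk starts at its first shock
for t in range(1, T):
    X.append(X[t - 1] + Z[t])
```

Here X[t] is just the running sum of the noise up to time t, which makes the distinction visible: {Z_t} is the driving process and {X_t} is the process built from it.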
# Spatial–temporal variations in riverine carbon strongly influenced by local hydrological events in an alpine catchment Headwater streams drain >70 % of global land areas but are poorly monitored compared with large rivers. The small size and low water buffering capacity of headwater streams may result in a high sensitivity to local hydrological alterations and different carbon transport patterns from large rivers. Furthermore, alpine headwater streams on the “Asian water tower”, i.e., Qinghai–Tibetan Plateau, are heavily affected by thawing of frozen soils in spring as well as monsoonal precipitation in summer, which may present contrasting spatial–temporal variations in carbon transport compared to tropical and temperate streams and strongly influence the export of carbon locked in seasonally frozen soils. To illustrate the unique hydro-biogeochemistry of riverine carbon in Qinghai–Tibetan headwater streams, here we carry out a benchmark investigation on the riverine carbon transport in the Shaliu River (a small alpine river integrating headwater streams) based on annual flux monitoring, sampling at a high spatial resolution in two different seasons and hydrological event monitoring. We show that riverine carbon fluxes in the Shaliu River were dominated by dissolved inorganic carbon, peaking in the summer due to high discharge brought by the monsoon. Combining seasonal sampling along the river and monitoring of soil–river carbon transfer during spring thaw, we also show that both dissolved and particulate forms of riverine carbon increased downstream in the pre-monsoon season due to increasing contribution of organic matter derived from thawed soils along the river. By comparison, riverine carbon fluctuated in the summer, likely associated with sporadic inputs of organic matter supplied by local precipitation events during the monsoon season.
Furthermore, using lignin phenol analysis for both riverine organic matter and soils in the basin, we show that the higher acid-to-aldehyde (Ad/Al) ratios of riverine lignin in the monsoon season reflect a larger contribution of topsoil, likely via increased surface runoff, compared with the pre-monsoon season, when soil leachate lignin Ad/Al ratios were closer to those in the subsoil than topsoil solutions. Overall, these findings highlight the unique patterns and strong links of carbon transport in alpine headwater catchments with local hydrological events. Given the projected climate warming on the Qinghai–Tibetan Plateau, thawing of frozen soils and alterations of precipitation regimes may significantly influence the alpine headwater carbon transport, with critical effects on the biogeochemical cycles of the downstream rivers. The alpine headwater catchments may also be utilized as sentinels for climate-induced changes in the hydrological pathways and/or biogeochemistry of the small basin.

### Citation

Wang, Xin / Liu, Ting / Wang, Liang / et al: Spatial–temporal variations in riverine carbon strongly influenced by local hydrological events in an alpine catchment. 2021. Copernicus Publications.
# During a 40-mile trip, Marla traveled at an average speed of

Re: During a 40-mile trip, Marla traveled at an average speed of [#permalink] 28 Mar 2017, 10:49

Let's write down what the question asks and then work it out: we get (0.25y+40)/(1.25x) : 40/x, or (0.25y+40)/(1.25*40). So we don't need x, but we do need y.

1) NS
2) S

Re: During a 40-mile trip, Marla traveled at an average speed of [#permalink] 16 Apr 2017, 16:08

Bunuel wrote:

During a 40-mile trip, Marla traveled at an average speed of x miles per hour for the first y miles of the trip and at an average speed of 1.25x mph for the last 40-y miles of the trip. The time that Marla took to travel the 40 miles was what percent of the time it would have taken her if she had traveled at an average speed of x miles per hour for the entire trip?

$$t_1=\frac{y}{x}+\frac{40-y}{1.25x}=\frac{0.25y+40}{1.25x}$$;

$$t_2=\frac{40}{x}$$;

Q: $$\frac{t_1}{t_2}=\frac{0.25y+40}{1.25x}*\frac{x}{40}=\frac{0.25y+40}{1.25}*\frac{1}{40}$$.
So we see that the value of this fraction does not depend on $$x$$, only on $$y$$.

(1) x = 48. Not sufficient.
(2) y = 20. Sufficient.

Thanks, Bunuel! This explanation is perfectly clear. However, when I got this question, I started by adding the rates, so the actual time became 40/(9/4 · x) and the time, had x been used all along, would be 40/x. I ended up only needing x to solve the problem, which is obviously wrong. When is it okay to add the rates? Thank you!

Senior Manager
Joined: 26 Dec 2015
Re: During a 40-mile trip, Marla traveled at an average speed of [#permalink]

### Show Tags

06 Jul 2017, 19:40
Help here would be greatly appreciated. I'm having a hard time setting up this equation algebraically. This is what I understand:
- the entire trip consists of 2 parts (below)
1) traveling a distance of y at x mph
2) traveling a distance of 40 − y at 1.25x mph
*Note: y must be < 40
- D = RT, BUT you want to know T, so T = $$\frac{D}{R}$$

JeffTargetTestPrep, can you please explain why you did the below?
y/x + (40 – y)/(1.25x) is what percent of 40/x ?
[y/x + (40 – y)/(1.25x)]/(40/x) * 100 = ?

The way I translated the first statement is: y/x + (40 – y)/(1.25x) = $$\frac{X}{100}$$ * ($$\frac{40}{X}$$)
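Bunuel's algebra above can be sanity-checked numerically. The snippet below is not from the thread; it is just a quick check with exact fractions (the function name is mine) that the time ratio depends only on y, never on x:

```python
from fractions import Fraction

def trip_time_ratio(x, y):
    """Ratio of Marla's actual trip time to the time at a constant x mph, exactly."""
    t1 = Fraction(y) / x + Fraction(40 - y) / (Fraction(5, 4) * x)  # y miles at x, 40-y at 1.25x
    t2 = Fraction(40) / x                                           # whole trip at x
    return t1 / t2

print(trip_time_ratio(48, 20))  # prints 9/10 -- the x from statement (1) plays no role
print(trip_time_ratio(7, 20))   # prints 9/10 -- any other x gives the same ratio
```

Running it with y = 20 and two very different values of x gives the same 9/10 (i.e. 90%), which is exactly why statement (2) alone is sufficient.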
## Tuesday, 20 November 2018

### On This Day in Math - November 20

The history of astronomy is a history of receding horizons.
— Edwin Powell Hubble

The 324th day of the year; 324 is the largest possible product of positive integers with a sum of 16. (Students: can you find the integers? Then try to find the similar maximum product for a sum of 17.)

324 is also the sum of four consecutive primes, 324 = 73 + 79 + 83 + 89

If you have a square array of 324 dots (that's 18x18) you can carefully paint them each in one of four colors so that no four corners of a rectangle (with sides horizontal and vertical) are the same color. You can also do that for any smaller square, but not for any larger. Here is a 17x17 to ponder.

EVENTS

1629 In a letter to Marin Mersenne, Descartes … went on to postulate another kind of language in which ideas would be represented so clearly that errors of judgment would be 'almost impossible'. To realize such a language, all of our thoughts would first have to be given a proper order 'like the natural order of the numbers'; and this presupposes the 'true philosophy', by which the analysis and ordering of thoughts would be carried out. Although Descartes pursues the plan no further, he is optimistic that 'such a language is possible and that the knowledge on which it depends can be discovered'. *Donald Rutherford

1711 Robert Simson submitted to a simple test of his mathematical knowledge and was duly admitted as professor of mathematics at the University of Glasgow. His most influential work was a definitive edition of Euclid's Elements in 1749. *VFR The pedal line of a triangle is sometimes called the "Simson line" after him, although it does not actually appear in any work of Simson.

1843 Sylvester departs the US for England and describes his life as "Pretty much a blank." After resigning from the University of Virginia after only four months, J. J. Sylvester lived with a brother in New York City while trying to find work in the US.
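An aside on the day-number fact above: the claim that 324 is the largest product of positive integers summing to 16 is easy to verify by brute force. This sketch is not from the blog, just a quick check (it also answers the "sum of 17" challenge):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def max_product(n):
    """Largest product obtainable from positive integers that sum to n."""
    best = n  # the trivial one-part "partition" {n} itself
    for first in range(1, n):
        best = max(best, first * max_product(n - first))
    return best

print(max_product(16))  # prints 324  (= 3*3*3*3*2*2)
print(max_product(17))  # prints 486  (= 3*3*3*3*3*2)
```

The optimal partitions use only 2s and 3s, with as many 3s as possible, which is why 16 = 3+3+3+3+2+2 gives 3⁴·2² = 324.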
Finally giving up, he returned to England with no job or prospects for one. *James Joseph Sylvester: Life and Work in Letters edited by Karen Hunger Parshall

1980 Steve Ptacek in Solar Challenger piloted its first solar-powered flight. The aircraft was designed and built by AeroVironment, Inc. (founded in 1971 by ultra-light airplane innovator, Dr. Paul MacCready). An earlier, 71-ft wingspan, solar-powered design, the Gossamer Penguin, after test flights, flew about 1.95 miles at a public demonstration on 7 Aug 1980. Solar Challenger built upon this experience to be a piloted, solar-powered aircraft strong enough to handle both long and high flights when encountering normal turbulence. With only a 46.5-ft wingspan, it had a huge horizontal stabilizer and had enough wing area for 16,128 solar cells. After design modifications, Ptacek flew across the English Channel on 7 July 1981. *VFR

2008 Conficker, also known as Downup, Downadup and Kido, is a computer worm targeting the Microsoft Windows operating system that was first detected on this day in November 2008. It uses flaws in Windows software and dictionary attacks on administrator passwords to propagate while forming a botnet, and has been unusually difficult to counter because of its combined use of many advanced malware techniques. Conficker infected millions of computers, including government, business and home computers, in over 200 countries, making it the largest known computer worm infection since the 2003 Welchia. *Wik

BIRTHS

1602 Otto von Guericke (20 Nov 1602; 11 May 1686) German physicist who investigated the properties of a vacuum and invented (1654) the first piston air pump to produce a vacuum. While mayor of Magdeburg, in 1663, he demonstrated that two 51 cm diameter copper hemispheres with air pumped out of their interior would be so strongly held together by the force of air pressure that teams of horses harnessed to each hemisphere were not able to pull the hemispheres apart.
He studied the role of air in combustion and respiration. With his invention of the first electrostatic machine - a rotating ball of sulphur electrified by friction against his hand - he produced sizeable sparks and showed that like charges repel each other. *TIS

1792 Nikolai Ivanovich Lobachevsky (November 20, 1792 – February 12, 1856 (O.S.)) was a Russian mathematician and geometer, renowned primarily for his pioneering works on hyperbolic geometry, otherwise known as Lobachevskian geometry. William Kingdon Clifford called Lobachevsky the "Copernicus of Geometry" due to the revolutionary character of his work. Russia did not convert to the Gregorian Calendar until after the communist revolution in 1918. The new style dates were December 1, 1792 – February 24, 1856. *Wik And if you've never heard Tom Lehrer's fantastic musical creation about Lobachevsky, it is well worth seeking out. He admits the topic has no relation to the man, but the name just fit so well.

1873 William W(eber) Coblentz (20 Nov 1873; 15 Sep 1962) an American physicist and astronomer whose work lay primarily in infrared spectroscopy. In 1905 he founded the radiometry section of the National Bureau of Standards, which he headed for 40 years. Coblentz measured the infrared radiation from stars, planets, and nebulae and was the first to determine accurately the constants of blackbody radiation, thus confirming Planck's law. *TIS

1889 Edwin Powell Hubble (20 Nov 1889; 28 Sep 1953) American astronomer, born in Marshfield, Mo., who is considered the founder of extragalactic astronomy and who provided the first evidence of the expansion of the universe. In 1923-5 he identified Cepheid variables in "spiral nebulae" M31 and M33 and proved conclusively that they are outside the Galaxy.
His investigation of these objects, which he called extragalactic nebulae and which astronomers today call galaxies, led to his now-standard classification system of elliptical, spiral, and irregular galaxies, and to proof that they are distributed uniformly out to great distances. Hubble measured distances to galaxies and their redshifts, and in 1929 he published the velocity-distance relation which is the basis of modern cosmology. *TIS The late Bill Buegsen was a resident who was proud of the achievements of Marshfield's native son, so he designed a one-fourth replica of the original Hubble Space Telescope. The Hubble Telescope replica was dedicated on July 4, 1994 and is located on Clay Street, on the west side of the Webster County Courthouse in Marshfield, Mo. It took approximately three months to build, is approximately twelve feet long, ten feet wide and weighs twelve hundred pounds. There is also an Elementary school named for Hubble. The city is on the famous Route 66 just 30 minutes east of Springfield, Mo. *Marshfield Tourist Office web site 1893 André Bloch (20 Nov 1893 in Besançon, France - 11 Oct 1948 in Paris, France) attended the École Polytechnique in 1913 then was drafted in 1914. An accident at the front made him unfit for military service. On 17 Nov 1917, at a family meal, he murdered one of his brothers, his uncle and his aunt. He was confined to a psychiatric hospital (Saint-Maurice Hospital) where he worked on a large range of topics, function theory, geometry, number theory, algebraic equations and kinematics. Bloch wrote many important papers, corresponding with Hadamard, Mittag-Leffler, Pólya and Henri Cartan (Élie Cartan's son). He was a model patient who refused to go out saying Mathematics is enough for me. Bloch explained the murders to his doctor saying It's a matter of mathematical logic. There had been mental illness in my family. He saw it as his eugenic duty! The Académie awarded him the Becquerel Prize just before his death. 
*SAU

1917 Leonard Jimmie Savage (20 November 1917 – 1 November 1971) was an American mathematician and statistician. Nobel Prize-winning economist Milton Friedman said Savage was "one of the few people I have met whom I would unhesitatingly call a genius." His most noted work was the 1954 book Foundations of Statistics, in which he put forward a theory of subjective and personal probability and statistics which forms one of the strands underlying Bayesian statistics and has applications to game theory. During World War II, Savage served as chief "statistical" assistant to John von Neumann, the mathematician credited with building the first electronic computer. One of Savage's indirect contributions was his discovery of the work of Louis Bachelier on stochastic models for asset prices and the mathematical theory of option pricing. Savage brought the work of Bachelier to the attention of Paul Samuelson. It was from Samuelson's subsequent writing that "random walk" (and subsequently Brownian motion) became fundamental to mathematical finance. In 1951 he introduced the minimax regret criterion used in decision theory. The Hewitt–Savage zero–one law in probability is also named for him, jointly with Edwin Hewitt. *Wik

1924 Benoit Mandelbrot (20 Nov 1924 in Warsaw, Poland - 14 Oct 2010 in Cambridge, Massachusetts, USA) was largely responsible for the present interest in Fractal Geometry. He showed how Fractals can occur in many different places in both Mathematics and elsewhere in Nature. *SAU

1955 Ray Ozzie, who designed the Lotus Notes office management software for Lotus Development Corporation, is born in Chicago, IL. Ozzie graduated from the University of Illinois at Urbana-Champaign (UIUC) in 1979. During this time Ray worked at the Computer-based Education Research Lab (CERL) on the PLATO operating system. He was impressed with PLATO's real-time communications and has often publicly credited his CERL experience as the inspiration for Lotus Notes.
In 1984 Mitch Kapor, founder of Lotus Development Corporation, supported the idea to develop a PLATO-like product for the PC by funding Iris Associates, Inc. In August 1986 Lotus Notes was complete, becoming the first example of groupware and a commercial success. In 1997 Ozzie left Iris Associates to start a new venture, Rythmix Corp. *CHM

1963 Sir William Timothy Gowers, FRS (20 November 1963) is a British mathematician. He is a Royal Society Research Professor at the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, where he also holds the Rouse Ball chair, and is a Fellow of Trinity College, Cambridge. In 1998 he received the Fields Medal for research connecting the fields of functional analysis and combinatorics. *Wik

DEATHS

1713 Thomas Tompion (baptised 25 Jul 1639, 20 Nov 1713) Most famous English clockmaker of his time, especially known for watchmaking improvements. He worked closely with experimental physicist Robert Hooke, and in 1675, following Hooke's design, Tompion made one of the first English watches regulated by a balance spring. In 1695, with Edward Barlow and William Houghton, he patented the cylinder escapement (a controlling device) that allowed use of a horizontal wheel, enabling Tompion to make the first of the flat and more compact watches. *TIS

1764 Christian Goldbach (18 Mar 1690, 20 Nov 1764) Russian mathematician whose contributions to number theory include Goldbach's conjecture, formulated in a letter to Leonhard Euler dated 7 Jul 1742. Stated in modern terms it proposes that: "Every even natural number greater than 2 is equal to the sum of two prime numbers." It has been checked by computer for vast numbers - up to at least 4 × 10^14 - but still remains unproved. Goldbach made another conjecture that every odd number is the sum of three primes, on which Vinogradov made progress in 1937. (It has been checked by computer for vast numbers, but remains unproved.)
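Goldbach's conjecture, described above, is easy to spot-check computationally (which is of course no proof). The sketch below is my own quick check, not part of the original entry: it sieves primes and confirms that every even number from 4 up to a limit is the sum of two primes.

```python
def goldbach_counterexample(limit):
    """Return the first even n in [4, limit] that is NOT a sum of two primes, else None."""
    sieve = bytearray([1]) * (limit + 1)   # sieve of Eratosthenes
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    primes = [i for i in range(limit + 1) if sieve[i]]
    prime_set = set(primes)
    for n in range(4, limit + 1, 2):
        if not any(n - p in prime_set for p in primes if p <= n // 2):
            return n  # would be a counterexample -- none is known
    return None

print(goldbach_counterexample(100000))  # prints None: no counterexample below 100000
```

In practice the inner search terminates almost immediately for every n, since small primes like 3, 5, or 7 usually work as the first summand.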
Goldbach also studied infinite sums, the theory of curves and the theory of equations. *TIS 1856 Farkas Bolyai (9 Feb 1775, 20 Nov 1856) Hungarian mathematician, poet, and dramatist who spent a lifetime trying to prove Euclid's (fifth) postulate that parallel lines do not meet. While studying at the University of Göttingen, he met as a fellow student, the noted German mathematician Carl F. Gauss, with whom he corresponded as a life-long friend. Bolyai taught mathematics, physics and chemistry at Marosvásárhely all his life. He discouraged his son, János Bolyai, from studying the parallel axiom as he had, writing in a letter to him: "For God's sake, please give it up. Fear it no less than the sensual passion, because it, too, may take up all your time and deprive you of your health, peace of mind and happiness in life." *TIS 1882 Henry Draper (7 Mar 1837, 20 Nov 1882) American physician and amateur astronomer who made the first photograph of the spectrum of a star (Vega), in 1872. He was also the first to photograph a nebula, the Orion Nebula, in 1880. For his photography of the transit of Venus in 1874, Congress ordered a gold medal struck in his honour. His father, John William Draper, in 1840 had made the first photograph of the Moon.*TIS 1934 Willem de Sitter (6 May 1872, 20 Nov 1934) Dutch mathematician, astronomer, and cosmologist who developed theoretical models of the universe based on Albert Einstein's general theory of relativity. He worked extensively on the motions of the satellites of Jupiter, determining their masses and orbits from decades of observations. He redetermined the fundamental constants of astronomy and determined the variation of the rotation of the earth. He also performed statistical studies of the distribution and motions of stars, but today he is best known for his contributions to cosmology. His 1917 solution to Albert Einstein's field equations showed that a near-empty universe would expand. 
Later, he and Einstein found an expanding universe solution without space curvature. *TIS

1960 Hidehiko Yamabe (山辺 英彦 Yamabe Hidehiko, August 22, 1923 in Ashiya, Hyōgo, Japan – November 20, 1960 in Evanston, Illinois) was a Japanese mathematician. His most notable work includes the final solution of Hilbert's fifth problem. After graduating from the University of Tokyo in 1947, Yamabe became an assistant at Osaka University. From 1952 until 1954 he was an assistant at Princeton University, receiving his Ph.D. from Osaka University while at Princeton. He left Princeton in 1954 to become assistant professor at the University of Minnesota. Except for one year as a professor at Osaka University, he stayed in Minnesota until 1960. Yamabe died suddenly of a stroke in November 1960, just months after accepting a full professorship at Northwestern University. *Wik

1986 Arne Carl-August Beurling (February 3, 1905 – November 20, 1986) was a Swedish mathematician and professor of mathematics at Uppsala University (1937–1954) and later at the Institute for Advanced Study in Princeton, New Jersey. Beurling worked extensively in harmonic analysis, complex analysis and potential theory. The "Beurling factorization" helped mathematical scientists to understand the Wold decomposition, and inspired further work on the invariant subspaces of linear operators and operator algebras. In the summer of 1940 he single-handedly deciphered and reverse-engineered an early version of the Siemens and Halske T52, also known as the Geheimfernschreiber (secret teletypewriter), used by Nazi Germany in World War II for sending ciphered messages. The T52 was one of the so-called "Fish cyphers" that, using transposition, created nearly one quintillion (893 622 318 929 520 960) different variations. It took Beurling two weeks to solve the problem using pen and paper.
Using Beurling's work, a device was created that enabled Sweden to decipher German teleprinter traffic passing through Sweden from Norway on a cable. In this way, Swedish authorities knew about Operation Barbarossa before it occurred. Because Sweden did not want to reveal how this knowledge was attained, its warning was not treated as credible by the Soviets. *Wik

Credits
*CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

## Monday, 19 November 2018

### On This Day in Math - November 19

A visitor to Niels Bohr's country cottage, noticing a horseshoe hanging on the wall, teased the eminent scientist about this ancient superstition. 'Can it be true that you, of all people, believe it will bring you luck?' 'Of course not,' replied Bohr, 'but I understand it brings you luck whether you believe it or not.'

The 323rd day of the year; if you drew every possible path from (0,0) to (8,0) that never dropped below the x-axis using only unit vectorial moves with slopes of 1, 0, or -1, there are 323 possible paths. (Alternatively, this is the number of different ways of drawing non-intersecting chords on a circle with eight points; this is deceptive because it counts each way of drawing a single chord, and drawing no chords at all. Students might want to count how many ways this can be done using four chords.) These are called Motzkin numbers, after Theodore Motzkin.

323 is the sum of nine consecutive primes: 323 = 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 *Derek Orr

323 is a palindrome and also the smallest composite number n that divides the (n+1)st Fibonacci number.
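Both of the 323 facts above can be confirmed with a few lines of code. This sketch is my own check, not part of the blog: it builds the Motzkin numbers from their standard recurrence and tests the Fibonacci divisibility claim.

```python
def motzkin(n):
    """Motzkin numbers: M(0)=M(1)=1 and M(i+1) = M(i) + sum_{k=0}^{i-1} M(k)*M(i-1-k)."""
    m = [1, 1]
    for i in range(1, n):
        m.append(m[i] + sum(m[k] * m[i - 1 - k] for k in range(i)))
    return m[n]

def fib(n):
    """Fibonacci numbers with F(0)=0, F(1)=1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(motzkin(8))      # prints 323: the chord/path count for eight points
print(fib(324) % 323)  # prints 0: 323 divides the 324th Fibonacci number
```

The divisibility works because 323 = 17 · 19 and both factors already divide F(18) = 2584, which in turn divides F(324) since 18 divides 324.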
*What's Special about this Number

When eight labeled points are selected around a circle, there are 323 ways of drawing any number of nonintersecting chords joining them; such numbers are called Motzkin numbers.

EVENTS

1857 Arthur Cayley opens a letter to J. J. Sylvester with, "I have just obtained a theorem that appears to be very remarkable." The theorem would be the centerpiece of his Memoire on the Theory of Matrices. The theorem showed that a matrix was the solution of its own characteristic equation. *A. J. Crilly, Arthur Cayley: Mathematician Laureate of the Victorian Age

1982 Science has an article describing Friedman's version of Kruskal's theorem. The important thing is that this is a mathematical (rather than metamathematical) statement independent of arithmetic. *VFR

2010 Experts confirmed that the remains of the 16th-century Danish astronomer Tycho Brahe, his wife and another eight people, including five children, were buried in Prague's Church of Our Lady before Tyn. The tin coffin, tied up with a red ribbon, was deposited in the tomb in the overcrowded church that afternoon, preceded by a church service celebrated by Prague Archbishop Dominik Duka and including prayers in Czech and Danish. *Wik The grave had previously been opened in 1901 when, on the three hundredth anniversary of his death, the bodies of Tycho Brahe and his wife Kirstine were exhumed in Prague. They had been embalmed and were in remarkably good condition, but the astronomer's artificial nose was missing, apparently filched by someone after his death. It had been made for him in gold and silver when his original nose was sliced off in a duel he fought in his youth at Rostock University after a quarrel over some obscure mathematical point.

BIRTHS

1894 Heinz Hopf (19 Nov 1894 in Gräbschen (near Breslau), Germany (now Wrocław, Poland) - 3 June 1971 in Zollikon, Switzerland) whose work was in algebraic topology. He studied vector fields and extended Lefschetz's fixed point formula.
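The 1857 Cayley item above announces what is now called the Cayley–Hamilton theorem: every square matrix satisfies its own characteristic equation. For a 2x2 matrix the characteristic polynomial is λ² − tr(A)λ + det(A), so A² − tr(A)A + det(A)I must be the zero matrix. A small integer check of that special case (my own sketch, not from the blog):

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def cayley_hamilton_2x2(A):
    """Return A^2 - tr(A)*A + det(A)*I, which the theorem says is the zero matrix."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    A2 = mat_mul(A, A)
    I = [[1, 0], [0, 1]]
    return [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)] for i in range(2)]

print(cayley_hamilton_2x2([[2, 1], [5, 3]]))  # prints [[0, 0], [0, 0]]
```

Using exact integer arithmetic avoids the floating-point noise a numerical library would introduce here.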
He also studied homotopy classes and defined what is now known as the 'Hopf invariant'.*SAU 1900 Mikhail Lavrentev (19 Nov 1900 in Kazan, Russia - 15 Oct 1980 in Moscow, Russia) remembered for an outstanding book on conformal mappings and he made many important contributions to that topic.*SAU 1901 Nina Karlovna Bari (November 19, 1901, Moscow – July 15, 1961, Moscow) was a Soviet mathematician known for her work on trigonometric series. She was killed by a train in the Moscow Metro, and her colleagues speculated that she committed suicide, prompted by the death of her mentor Nikolai Luzin ten years earlier, a man who may have been her lover.*Wik 1907 Adrien Albert (19 November 1907, Sydney - 29 December 1989, Canberra) was a leading authority in the development of medicinal chemistry in Australia. Albert also authored many important books on chemistry, including one on selective toxicity. He was awarded BSc with first class honours and the University Medal in 1932 at the University of Sydney. He gained a PhD in 1937 and a DSc in 1947 from the University of London. His appointments included Lecturer at the University of Sydney (1938-1947), advisor to the Medical Directorate of the Australian Army (1942-1947), research at the Wellcome Research Institute in London (1947-1948) and in 1948 the Foundation Chair of Medical Chemistry in the John Curtin School of Medical Research at the Australian National University in Canberra where he established the Department of Medical Chemistry. He was a Fellow of the Australian Academy of Science. He was the author of Selective Toxicity: The Physico-Chemical Basis of Therapy, first published by Chapman and Hall in 1951. 
The Adrien Albert Laboratory of Medicinal Chemistry at the University of Sydney was established in his honour in 1989. His bequest funds the Adrien Albert Lectureship, awarded every two years by the Royal Society of Chemistry. *Wik

1918 Hendrik Christoffel van de Hulst (19 Nov 1918; 31 Jul 2000) Dutch astronomer who predicted theoretically (1944) that in interstellar space the amount of neutral atomic hydrogen, which in its hyperfine transition radiates and absorbs at a wavelength of 21 cm, might be expected to occur at such high column densities as to provide a spectral line sufficiently strong as to be measurable. Shortly after the end of the war several groups set about to test this prediction. The 21-cm line of atomic hydrogen was detected in 1951, first at Harvard University, followed within a few weeks by others. The discovery demonstrated that astronomical research, which at that time was limited to conventional light, could be complemented with observations at radio wavelengths, revealing a range of new physical processes. *TIS

DEATHS

1672 John Wilkins FRS (1 January 1614 – 19 November 1672) was an English clergyman, natural philosopher and author, as well as a founder of the Invisible College and one of the founders of the Royal Society, and Bishop of Chester from 1668 until his death. Along with his inventions (almost all of which were destroyed in the Great Fire) and assorted writings on philosophy, mathematics, and cryptography, John Wilkins distinguished himself by planning the first lunar expedition. (In the 17th century??? Yes… learn more here.) He wrote for the common reader the Discovery (1638) and the Discourse (1640), which showed how reason and experience supported Copernicus, Kepler and Galileo rather than Aristotelian or literal biblical doctrines. In 1641, he anonymously published a small but comprehensive treatise on cryptography.
In Mathematical Magick (1648) he described and illustrated the balance lever, wheel, pulley, wedge and screw in a part called "Archimedes or Mechanical Powers", and in a second part, "Daedalus or Mechanical Motions", such strange devices as flying machines, artificial spiders, a land yacht, and a submarine. *WIS

1998 Tetsuya Theodore Fujita (23 Oct 1920, 19 Nov 1998) was a Japanese-American meteorologist who increased the knowledge of severe storms. In 1953, he began research in the U.S. Shortly afterwards, he immigrated and established the Severe Local Storms Project. He was known as "Mr. Tornado" as a result of the Fujita scale (F-scale, Feb 1971), which he and his wife, Sumiko, developed for measuring tornadoes on the basis of their damage. Following the crash of Eastern flight 66 on 24 Jun 1975, he reviewed weather-related aircraft disasters and verified the downburst and the microburst (small downburst) phenomena, enabling airplane pilots to be trained on how to react to them. Late in his career, he turned to the study of storm tracks and El Nino. *TIS

*CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

## Sunday, 18 November 2018

### On This Day in Math - November 18

Predictions can be very difficult—
— Niels Bohr

The 322nd day of the year; 322 is the 12th Lucas Number. The Lucas Sequence is similar to the Fibonacci sequence, with L(1) = 1 and L(2) = 3, and each term is the sum of the two previous terms. L(n) is also the integer nearest to $\phi ^n$. This is the last day of the year that will be a Lucas Number.

322 is the smallest number whose square has 6 different digits (103684).
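The 322 facts above are all machine-checkable. The snippet below is my own quick verification, using the post's indexing L(1) = 1, L(2) = 3:

```python
def lucas(n):
    """Lucas numbers with the post's indexing: L(1) = 1, L(2) = 3."""
    a, b = 1, 3
    for _ in range(n - 1):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2  # the golden ratio

print(lucas(12))                           # prints 322
print(round(phi ** 12))                    # prints 322: L(n) is the integer nearest phi**n
print(322 ** 2, len(set(str(322 ** 2))))   # prints 103684 6: six different digits
```

The nearest-integer property holds because L(n) = φⁿ + ψⁿ with ψ = (1 − √5)/2, and |ψⁿ| shrinks below 1/2 quickly.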
*Derek Orr

322 is a sphenic number, which means it is the product of three distinct primes; equivalently, it is the volume of a rectangular box (parallelepiped) whose length, width, and height are distinct primes.

EVENTS

2349 B.C. Noah's flood began, according to the English mathematician William Whiston (1667–1752), who felt it was caused by a comet which passed over the equator causing extensive rains. *Claire L. Parkinson, Breakthroughs, p. 131 Whiston would follow Newton as the Lucasian Professor at Cambridge. *RMAT

1690 First use of catenary. According to E. H. Lockwood (1961) and the University of St. Andrews website, this term was first used (in Latin as catenaria) by Christiaan Huygens (1629-1695) in a letter to Leibniz dated November 18, 1690. *Earliest Known Uses of Some of the Words of Mathematics The English word catenary is usually attributed to Thomas Jefferson (but see below****), who wrote in a letter to Thomas Paine on the construction of an arch for a bridge: "I have lately received from Italy a treatise on the equilibrium of arches, by the Abbé Mascheroni. It appears to be a very scientifical work. I have not yet had time to engage in it; but I find that the conclusions of his demonstrations are, that every part of the catenary is in perfect equilibrium." (Dec. 23, 1788) *Jeff Miller (Paine had previously used the term "catenarian" in an earlier letter to Jefferson (Sept. 15, 1788).) ****Miller's web site now includes a single previous use of Catenary in English in 1725 in Lexicon Technicum: Or, An Universal English Dictionary of ARTS and SCIENCES: Fourth Edition Volume I*****

1752 Goldbach writes Euler with the conjecture that every odd number greater than 3 is the sum of a prime and twice a square (he allowed 0² = 0). Euler would reply that it was true for the first 1000 odd numbers, and then later confirm it for the first 2500. A hundred years later, German mathematician Moritz Stern found two counterexamples, 5777 and 5993. The story appears in Alfred S.
Posamentier's Magnificent Mistakes in Mathematics (which, gloriously, has a mistake of its own here, giving the date as 1852; but such a wonderful book can forgive a print error).

1812 Jean Victor Poncelet, a military engineer, was captured while Napoleon's army was retreating from Moscow. He profited from this enforced leisure (until his release in June 1814) by resuming his study of mathematics. While there he did important work on projective geometry. *VFR

1825 Birbeck writes to Gilmer regarding the application of Charles Bonnycastle, son of John Bonnycastle, for a professorship at U Va: "The name of Bonnycastle must be well known in America, and if known, must be highly valued; and the son I am persuaded will extend the fame of the parent."

1879 After the death of Maxwell, George Stokes writes to offer Lord Rayleigh the position of head of the Cavendish Laboratory in Cambridge. *Memoir and scientific correspondence of the late Sir George Gabriel Stokes, pg 227

1883 At noon on this day the telegraphic time signals sent out daily from the Naval Observatory at Washington, D.C., were changed to standard time, a system adopted on the initiative of the American Railway Association. Standard time was suggested for the U.S. in 1869 by Charles Ferdinand Dowd, a schoolmaster from Saratoga, N.Y., but was not adopted then. He suggested dividing the continent into four time zones, each one hour or fifteen degrees of longitude wide. Standard Railroad Time had four time zones: Eastern, Central, Western, and Pacific. Congress made these official in 1918. Some citizens grumbled about "railroad tyranny" and tampering with "God's time." See New York Times, 20 Nov. 1983. *VFR On April 1 of 1967, the Uniform Time Act divided the United States into eight time zones: Eastern, Central, Mountain, Pacific, Yukon, Alaska, Hawaii, and Bering. *FFF, pg 149

1915 On the third Tuesday of November, Einstein entered the Prussian Academy of Sciences, and erased a planet from the solar system.
A planet that had been witnessed dozens of times, so often that astronomers had calculated its orbit. In the first of two papers on his General Theory of Relativity, he showed that his calculations perfectly explained the strange behavior of Mercury. For years Newtonian physics had been unable to explain that behavior, and astronomers looked for (and found?) Vulcan, Mercury's close "companion". With Einstein's theory, his calculations proved Vulcan did not exist. *Thomas Levenson, "The Hunt for Vulcan"

1963 The first push-button telephone goes into service. @yovisto The first electronic push-button system with Touch-Tone dialing was offered by Bell Telephone to customers in Carnegie and Greensburg, Pennsylvania. Western Electric experimented as early as 1941 with methods of using mechanically activated reeds to produce two tones for each of the ten digits. But the technology proved unreliable, and it was not until long after the invention of the transistor that the technology matured. On 18 November 1963 the Bell System in the United States officially introduced dual-tone multi-frequency (DTMF) technology under its registered Touch-Tone® mark. Over the next few decades Touch-Tone service replaced traditional pulse dialing technology and it eventually became a world-wide standard for telecommunication signaling. The now standard layout of the keys on the Touch-Tone telephone was the result of research by the human-engineering department at Bell Laboratories in the 1950s under the leadership of South African-born psychologist John Elias Karlin (1918–2013), who was previously a leading proponent of the introduction of all-number dialing in the Bell System.
This research resulted in the design of the DTMF keypad that arranged the push-buttons into 12 positions in a 3-by-4 rectangular array, and placed the 1, 2, and 3 keys in the top row for most accurate dialing. The remaining digits occupied the lower rows in sequence from left to right, placing the 0 in the center of the fourth row and omitting the lower-left and lower-right positions. These two positions were later assigned to the asterisk and pound keys when the keypad was expanded to twelve buttons in 1969. *Wik

1991 IBM and Siemens AG Announce 64M DRAM Chip Prototype: IBM and Siemens AG announce they have developed a prototype 64-megabit DRAM chip. This development was in line with Moore's Law, which predicts a doubling of the number of transistors etched into silicon every 18 months. *CHM

BIRTHS

1839 August (Adolph Eduard Eberhard) Kundt (18 Nov 1839; 21 May 1894) was a German physicist who developed a method (1866) to determine the velocity of sound in gases and solids. He used a closed glass tube into which a dry powder (such as lycopodium) has been sprinkled. The source of sound in the original device was a metal rod clamped at its centre with a piston at one end, which is inserted into the tube. When the rod is stroked, sound waves generated by the piston enter the tube. If the position of the piston in the tube is adjusted so that the gas column is a whole number of half wavelengths long, the dust will be disturbed by the resulting stationary waves, forming a series of striations and enabling distances between nodes to be measured. *TIS

1844 Albert Wangerin (November 18, 1844 – October 25, 1933) worked on potential theory, spherical functions and differential geometry. *SAU He wrote an important two-volume treatise on potential theory and spherical functions: Theorie des Potentials und der Kugelfunktionen I was published in 1909 and Theorie des Potentials und der Kugelfunktionen II was published in 1921.
Wangerin functions are named for him. He was also known for his writing of textbooks, encyclopaedias and historical works. *Wik 1872 Giovanni Enrico Eugenio Vacca (18 November 1872 – 6 January 1953) was an Italian mathematician, Sinologist and historian of science. Vacca studied mathematics and graduated from the University of Genoa in 1897 under the guidance of G. B. Negri. He was a politically active student and was banished for that from Genoa in 1897. He moved to Turin and became an assistant to Giuseppe Peano. In 1899 he studied, at Hanover, unpublished manuscripts of Gottfried Wilhelm Leibniz, which he published in 1903. Around 1898 Vacca became interested in Chinese language and culture after attending a Chinese exhibition in Turin. He took private lessons in Chinese and continued to study it at the University of Florence. Vacca then traveled to China in 1907-8 and defended a PhD in Chinese studies in 1910. In 1911, he became a lecturer in Chinese literature at the University of Rome. In 1922, he moved to Florence and taught Chinese literature and language at university until 1947. The interests of Vacca were almost equally split between mathematics, Sinology and history of science, with 38, 47 and 45 papers in each field respectively. In 1910, Vacca developed a complex number iteration for pi. *Wik 1887 Gustav Theodor Fechner (19 Apr 1801, 18 Nov 1887) German physicist and philosopher who was a key figure in the founding of psychophysics, the science concerned with quantitative relations between sensations and the stimuli producing them. He formulated the rule known as Fechner's law, that, within limits, the intensity of a sensation increases as the logarithm of the stimulus. He also proposed a mathematical expression of the theory concerning the difference between two stimuli, advanced by E. H. Weber. (These are now known to be only approximately true.
However, as long as the stimulus is of moderate intensity, the laws will give us a good estimate.) Under the name "Dr. Mises" he also wrote humorous satire. In philosophy he was an animist, maintaining that life is manifest in all objects of the universe. *TIS 1897 Patrick M.S. Blackett (18 Nov 1897; 13 Jul 1974) English scientist who won the Nobel Prize for Physics in 1948 for his discoveries in the field of cosmic radiation, which he accomplished primarily with cloud-chamber photographs that revealed the way in which a stable atomic nucleus can be disintegrated by bombarding it with alpha particles (helium nuclei). Although such nuclear disintegration had been observed previously, his data explained this phenomenon for the first time and were useful in explaining disintegration by other means. *TIS 1900 George Bogdanovich Kistiakowsky (November 18, 1900 – December 7, 1982) was a Ukrainian-American physical chemistry professor at Harvard who participated in the Manhattan Project and later served as President Dwight D. Eisenhower's Science Advisor. Born in Kiev in the old Russian Empire, Kistiakowsky fled Russia during the Russian Civil War. He made his way to Germany, where he earned his PhD in physical chemistry under the supervision of Max Bodenstein at the University of Berlin. He emigrated to the United States in 1926, where he joined the faculty of Harvard University in 1930, and became a citizen in 1933. During World War II, he was the head of the National Defense Research Committee (NDRC) section responsible for the development of explosives, and the technical director of the Explosives Research Laboratory (ERL), where he oversaw the development of new explosives, including RDX and HMX. He was involved in research into the hydrodynamic theory of explosions, and the development of shaped charges. In October 1943, he was brought into the Manhattan Project as a consultant.
He was soon placed in charge of X Division, which was responsible for the development of the explosive lenses necessary for an implosion-type nuclear weapon. He watched as an implosion weapon was detonated in the Trinity test in July 1945. A few weeks later a Fat Man implosion weapon was dropped on Nagasaki. From 1962 to 1965, he chaired the National Academy of Sciences' Committee on Science, Engineering, and Public Policy (COSEPUP), and was its vice president from 1965 to 1973. In later years he was active in an antiwar organization, the Council for a Livable World. Kistiakowsky severed his connections with the government in protest against the US involvement in the war in Vietnam. In 1977, he assumed the chairmanship of the Council for a Livable World, campaigning against nuclear proliferation. He died of cancer in Cambridge, Massachusetts, on December 7, 1982. His body was cremated, and his ashes were scattered near his summer home on Cape Cod, Massachusetts. His papers are in the Harvard University archives.*Wik 1901 George Horace Gallup (November 18, 1901 – July 26, 1984) was an American pioneer of survey sampling techniques and inventor of the Gallup poll, a successful statistical method of survey sampling for measuring public opinion. Gallup was born in Jefferson, Iowa, the son of George Henry Gallup, a dairy farmer. His higher education took place at the University of Iowa. He served as a journalism professor at Drake and Northwestern for brief periods. In 1932 he moved to New York City to join the advertising agency of Young and Rubicam as director of research (later as vice president from 1937 to 1947). He was also professor of journalism at Columbia University, but he had to give up this position shortly after he formed his own polling company, the American Institute of Public Opinion (Gallup Poll), in 1935.
In 1936, his new organization achieved national recognition by correctly predicting, from the replies of only 50,000 respondents, that Franklin Roosevelt would defeat Alf Landon in the U.S. Presidential election. This was in direct contradiction to the widely respected Literary Digest magazine, whose poll based on over two million returned questionnaires predicted that Landon would be the winner. Not only did Gallup get the election right, he correctly predicted the results of the Literary Digest poll as well, using a random sample smaller than theirs but chosen to match it. Twelve years later, his organization had its moment of greatest ignominy, when it predicted that Thomas Dewey would defeat Harry S. Truman in the 1948 election, by five to fifteen percentage points. Gallup believed the error was mostly due to ending his polling three weeks before Election Day. Gallup died in 1984 of a heart attack at his summer home in Tschingel, a village in the Bernese Oberland of Switzerland. He was buried in Princeton Cemetery. *Wik 1912 Shigeo Sasaki 佐々木 (18 November 1912, Yamagata Prefecture, Japan – 14 August 1987, Tokyo) was a Japanese mathematician working on differential geometry who introduced Sasaki manifolds. *Wik 1916 Sir David Robert Bates, FRS (18 November 1916, Omagh, County Tyrone, Ireland – 5 January 1994) was an Irish mathematician and physicist. During the Second World War he worked at the Admiralty Mining Establishment where he developed methods of protecting ships from magnetically activated mines. His contributions to science include seminal works on atmospheric physics, molecular physics and the chemistry of interstellar clouds. He was knighted in 1978 for his services to science, was a Fellow of the Royal Society and vice-president of the Royal Irish Academy. In 1970 he won the Hughes Medal. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1974.
The Mathematics Building at Queens University Belfast is named after him. *Wik 1923 Alan B. Shepard, Jr. (18 Nov 1923; 21 Jul 1998) Alan Bartlett Shepard, Jr. was America's first man in space and one of only 12 humans who walked on the Moon. Named as one of the nation's original seven Mercury astronauts in 1959, Shepard became the first American into space on 5 May 1961, riding a Redstone rocket on a 15-minute suborbital flight that took him and his Freedom 7 Mercury capsule 115 miles in altitude and 302 miles downrange from Cape Canaveral, FL. (His flight came three weeks after the launch of Soviet cosmonaut Yuri Gagarin, who on 12 Apr 1961 became the first human space traveler on a one-orbit flight lasting 108 minutes.) Although the flight of Freedom 7 was brief, it was a major step for the U.S. in a race with the USSR. *TIS 1927 John Leslie Britton (November 18, 1927 – June 13, 1994) was an English mathematician from Yorkshire who worked in combinatorial group theory and was an expert on the word problem for groups. Britton was a member of the London Mathematical Society and was Secretary of Meetings and Membership with that organization from 1973-1976. Britton died in a climbing accident on the Isle of Skye. *Wik DEATHS 1919 Adolf Hurwitz (26 March 1859, Hildesheim, Lower Saxony, Germany - 18 Nov 1919, Zurich, Switzerland) Hurwitz studied the genus of the Riemann surface and worked on how class number relations could be derived from modular equations. Hurwitz did excellent work in algebraic number theory. For example he published a paper on a factorisation theory for integer quaternions in 1896 and applied it to the problem of representing an integer as the sum of four squares. A full proof of Hurwitz's ideas appears in a booklet published in the year of his death. This involves studying the ring of integer quaternions in which there are 24 units. He shows that one-sided ideals are principal and introduces prime and primary quaternions.
*SAU 1928 Alexander Ziwet (February 8, 1853 - November 18, 1928) born in Breslau. He became professor at the University of Michigan, an editor of the Bulletin of the AMS, and a collector of mathematics texts who enriched the Michigan library. *VFR His early education was obtained in a German gymnasium. He afterwards studied in the universities of Warsaw and Moscow, one year at each, and then entered the Polytechnic School at Karlsruhe, where he received the degree of Civil Engineer in 1880. He came immediately to the United States and received employment on the United States Lake Survey. Two years later he was transferred to the United States Coast and Geodetic Survey, computing division, where he remained five years. In 1888 he was appointed Instructor in Mathematics in the University of Michigan. From this position he was advanced to Acting Assistant Professor in 1890, to Assistant Professor in 1891, to Junior Professor in 1896, and to Professor of Mathematics in 1904. He was a member of the Council of the American Mathematical Society and an editor of the "Bulletin" of the society. In 1893-1894 he published an "Elementary Treatise on Theoretical Mechanics" in three parts, of which a revised edition appeared in 1904. He also translated from the Russian of I. Somoff "Theoretische Mechanik" (two volumes, 1878, 1879). *Burke A. Hinsdale and Isaac Newton Demmon, History of the University of Michigan (Ann Arbor: University of Michigan Press, 1906), pp. 320-321. 1933 Robert Forsyth Scott (28 July 1849 in Leith, near Edinburgh, Scotland - 18 Nov 1933 in Cambridge, England) studied at Cambridge and was elected to a fellowship. After a short time teaching he studied to be a barrister. He spent most of his career as Bursar and Master of St John's College Cambridge. He published a book on Determinants. *SAU 1949 Frank Baldwin Jewett (5 Sep 1879, 18 Nov 1949) Frank Baldwin Jewett was the U.S. electrical engineer who directed research as the first president of the Bell Telephone Laboratories, Inc., (1925-40).
Jewett believed that the best science and technology result from bringing together and nurturing the best minds. Under his tenure Bell Labs laid the foundation for a new scientific discipline, radio astronomy, and transformed movies by synchronizing sound to pictures. Bell Labs was the first to transmit television over a long distance in the U.S. and designed the first electrical digital computer. Bell Labs won its first Nobel Prize in physics for fundamental work demonstrating the wave nature of matter.*TIS 1959 Aleksandr Yakovlevich Khinchin (July 19, 1894, Kondrovo, Kaluga Oblast, Russia - November 18, 1959, Moscow, Russia) was a Russian mathematician who contributed to many fields including number theory and probability. Khinchin made significant contributions to the metric theory of Diophantine approximations and established an important result for simple real continued fractions, discovering a property of such numbers that leads to what is now known as Khinchin's constant. He also published several important works on statistical physics, where he used the methods of probability theory, and on information theory, queuing theory and mathematical analysis.*Wik *Niels Bohr Institute 1962 Niels Henrik David Bohr (7 Oct 1885, 18 Nov 1962) was a Danish physicist, born in Copenhagen, who was the first to apply the quantum theory, which restricts the energy of a system to certain discrete values, to the problem of atomic and molecular structure. For this work he received the Nobel Prize for Physics in 1922. He developed the so-called Bohr theory of the atom and the liquid-drop model of the nucleus. Bohr was of Jewish origin and when the Nazis occupied Denmark he escaped in 1943 to Sweden on a fishing boat. From there he was flown to England where he began to work on the project to make a nuclear fission bomb. After a few months he went with the British research team to Los Alamos in the USA where they continued work on the project.
*TIS (Bohr was an excellent athlete in his youth.) Niels and his mathematician brother Harald are buried in the same grave site at the Assistens cemetery in Copenhagen. I love the youthful picture of the two brothers shown at right. 1994 Nathan Jacob Fine (22 October 1916 in Philadelphia, USA - 18 Nov 1994 in Deerfield Beach, Florida, USA) He published on many different topics including number theory, logic, combinatorics, group theory, linear algebra, partitions and functional and classical analysis. He is perhaps best known for his book Basic hypergeometric series and applications, published in the Mathematical Surveys and Monographs Series of the American Mathematical Society. The material which he presented in the Earle Raymond Hedrick Lectures twenty years earlier forms the basis for the material in this text. *SAU

Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

## Saturday, 17 November 2018

### On This Day in Math - November 17

Central part of a large floor mosaic, from a Roman villa in Sentinum (now known as Sassoferrato, in Marche, Italy), ca. 200–250 C.E. Aion, the god of eternity, is standing inside a celestial sphere decorated with zodiac signs, in between a green tree and a bare tree (summer and winter, respectively). Sitting in front of him is the mother-earth goddess, Tellus (the Roman counterpart of Gaia) with her four children, who possibly represent the four seasons. *Wik

The unreasonable effectiveness of mathematics in science is a gift we neither understand nor deserve. ~Eugene Paul Wigner

The 321st day of the year. 321 is the number of partitions of 31 into at most 4 parts. (25+3+2+1 would be one such) 321 is a Central Delannoy number.
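The central Delannoy values are easy to check with the standard recurrence D(m, n) = D(m-1, n) + D(m, n-1) + D(m-1, n-1), with D(m, 0) = D(0, n) = 1. A minimal Python sketch (the helper name `delannoy` is mine):

```python
from functools import lru_cache

# Delannoy recurrence: the last step into (m, n) was east, north, or northeast,
# so D(m, n) = D(m-1, n) + D(m, n-1) + D(m-1, n-1).
# Along an axis there is exactly one path, so D(m, 0) = D(0, n) = 1.
@lru_cache(maxsize=None)
def delannoy(m, n):
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

# Central Delannoy numbers D(k, k):
print([delannoy(k, k) for k in range(6)])  # [1, 3, 13, 63, 321, 1683]
```

Note that D(4, 4) = 321, the fifth central Delannoy number.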
The Delannoy numbers count the lattice paths from (0, 0) to (b, a) in which only east (1, 0), north (0, 1), and northeast (1, 1) steps are allowed. Central Delannoy numbers count the paths to (a, a). [Delannoy numbers are named for Henri Auguste Delannoy (1833–1915), who was a friend and correspondent of Edouard Lucas, editor of Récréations Mathématiques.] EVENTS 1717 While making his rounds a gendarme found an infant on the steps of the Church of Saint Jean-le-Rond in Paris. The child was christened Jean-le-Rond. Later, for unknown reasons, he added the surname d'Alembert. Jean le Rond D'Alembert (1717-1783) was abandoned by his mother on the steps of Saint Jean le Rond, which was the baptistery of Notre-Dame. Foster parents were found and he was christened with the name of the saint. When he became famous, his mother attempted to reclaim him, but he rejected her. His father, we now know, was an artillery officer, Louis-Camus Destouches. His natural father did not want his paternity known, but paid for his education in secret. D'Alembert was a pioneer in the study of differential equations and their use in physics. He studied the equilibrium and motion of fluids. 1767 At the commencement, the College of Philadelphia bestowed on David Rittenhouse, then present, the honorary degree of Master of Arts, Dr.
William Smith, the Provost, addressing him as follows: Sir,--The trustees of this College (the faculty of professors cheerfully concurring), being ever desirous to distinguish real merit, especially in the natives of this province,--and well-assured of the extraordinary progress and improvement which you have made, by a felicity of natural genius, in mechanics, mathematics, astronomy, and other liberal arts and sciences, all which you have adorned by singular modesty and irreproachable morals,--have authorized and required me to admit you to the honorary degree of Master of Arts, in this seminary: I do therefore, by virtue of this authority, most cheerfully admit, &c. *U Penn Library web site In 1797, the first patent in the U.S. for a clock was issued to Eli Terry of East Windsor, Conn. for an equation clock. The patent was signed by President John Adams. The clock had two minute hands, one of which showed the mean or true time, while the other "together with the striking part and hour hand showed the apparent time, as divided by the sun according to the table of variation of the sun and clock for each day of the year." He began making clocks in 1793, in Plymouth, Conn. Terry introduced wooden geared clocks using the ideas of Eli Whitney's new armory practice to produce interchangeable gears (1802) and mass production of very inexpensive household clocks. Terry developed ways to produce wooden clock works by machine. *TIS 1834 Astronomer Royal Airy receives a suggestion from the Reverend T.J. Hussey to begin a mathematical search for the undiscovered planet that would prove to be Neptune. He mentions in his letter how he has heard of a possible planet beyond Uranus and looked for it using a reflector telescope, but to no avail. He presented the idea of using mathematics as a tool in the search but admitted to Airy that he would not be of much help in that regard. On November 23rd Airy writes back to the reverend and admits he too has been preoccupied with a possible planet.
He had observed that Uranus' orbit deviated the most in 1750 and 1834, when it would be at the same point. This was strong evidence for an object pulling on the planet, but Airy felt that until more observations were made no mathematical tools would be of help. *from http://theoriginal1701.hubpages.com/hub/The-Drama-of-Neptunes-Discovery 1921 Fisher reads his famous Royal Society paper, On the Mathematical Foundations of Theoretical Statistics. Statistical historian Stephen Stigler writes that, "in 1921 Fisher presented a major pathbreaking memoir to the Royal Society of London, a memoir that more than any other single work gave the framework and direction to twentieth century statistical theory." The paper begins with a list of definitions that, while almost unheard of before that paper, have become standard in even elementary statistics courses. Terms like estimation, likelihood, and optimum are defined there; not defined, but actually used in the descriptions of other terms, was Fisher's first public use of "parameter". *Stigler, Fisher in 1921 At right, the Fisher window from the Greatroom at Caius College, Cambridge 1930 Kurt Gödel's "On formally undecidable propositions of Principia Mathematica and related systems" was received for publication. It contained the amazing result that there are true but unprovable statements in arithmetic. In 1970, a U.S. patent was issued for the computer mouse - an "X-Y Position Indicator for a Display System" (No. 3541541). The inventor was Doug Engelbart. In the lab, he and his colleagues had called it a "mouse," after its tail-like cable. The first mouse was a simple hollowed-out wooden block, with a single push button on top. Engelbart had designed this as a tool to select text, move it around, and otherwise manipulate it. It was a key element of his larger project - the NLS (oN Line System), a computer he and some colleagues at the Stanford Research Institute had built.
The NLS also allowed two or more users to work on the same document from different workstations. *TIS 2011 French law allows the first taste of Beaujolais Nouveau on this date each year. In 1985 the law was changed to the third Thursday in November. Also permission was given to ship wine ahead of time to bonded warehouses outside of France. Thus in the US we can drink the Nouveau on the same day as the French. Get out now and buy several bottles for the holidays. *VFR 2012 The maximum of the Leonid activity in 2012 is expected during the night of the 17th November 2012. The Leonids are a prolific meteor shower. They tend to peak around November 17, but some meteors are spread through several days on either side, with the specific peak changing every year. *Cute-Calendar.com BIRTHS 1597 Henry Gellibrand was an English clergyman who worked on magnetic declination and who made mathematical contributions to navigation. *SAU He discovered that magnetic declination – the angle between the direction a compass needle points and true north – is not constant but changes over time. He announced this in 1635, relying on previous observations by others, which had not yet been correctly interpreted. He also devised a method for measuring longitude, based on eclipses. The mathematical tables of Henry Briggs, consisting of logarithms of trigonometric functions, were published by Gellibrand in 1633 as Trigonometria Britannica. He was Professor at Gresham College, succeeding Edmund Gunter in 1626. He was buried in St Peter Le Poer (London, demolished in 1907). *Wik 1790 August Möbius (17 Nov 1790; 26 Sep 1868) August Ferdinand Möbius was a German astronomer, mathematician and author. He is best known for his work in analytic geometry and in topology, and is especially remembered as one of the discoverers of the Möbius strip, which he discovered in 1858. A Möbius strip is a two-dimensional surface with only one side. It can be constructed in three dimensions as follows.
Take a rectangular strip of paper and join the two ends of the strip together so that it has a 180 degree twist. It is now possible to start at a point A on the surface and trace out a path that passes through the point which is apparently on the other side of the surface from A. Although his most famous work is in mathematics, Möbius did publish important work on astronomy.*TIS 1865 John Stanley Plaskett (17 Nov 1865; 17 Oct 1941) Canadian astronomer known for his expert design of instruments and his extensive spectroscopic observations. He designed an exceptionally efficient spectrograph for the 15-inch refractor and measured radial velocities and found orbits of spectroscopic binary stars. He designed and supervised construction of the 72-inch reflector built for the new Dominion Astrophysical Observatory in Victoria and was appointed its first director in 1917. There he extended the work on radial velocities and spectroscopic binaries and studied spectra of O and B-type stars. In the 1930s he published the first detailed analysis of the rotation of the Milky Way, demonstrating that the sun is two-thirds out from the center of our galaxy about which it revolves once in 220 million years. *TIS 1902 Eugene Paul Wigner (17 Nov 1902; 1 Jan 1995) Hungarian-born American physicist who was the joint winner of the 1963 Nobel Prize for Physics (with Maria Goeppert Mayer and Johannes Hans Jensen) for his insight into quantum mechanics, for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles. He made many contributions to nuclear physics and played a prominent role in the development of the atomic bomb and nuclear energy. *TIS I love this story about Wigner as a child, "When he was ten years old ... he was told that he had tuberculosis. 
The cure was to be found in sending him to a sanatorium in Breitenstein in Austria and he spent six weeks there before being told that the diagnosis had been wrong and that he had never had tuberculosis. However, one advantage of his six weeks was that he began to think about mathematical problems: "I had to lie on a deck chair for days on end, and I worked terribly hard on constructing a triangle if the three altitudes are given." *SAU 1917 Ruth Aaronson Bari (November 17, 1917 – August 25, 2005) was an American mathematician known for her work in graph theory and homomorphisms. The daughter of Polish-Jewish immigrants to the U.S., she was a professor at George Washington University beginning in 1966. She was the mother of environmental activist Judi Bari, science reporter Gina Kolata and art historian Martha Bari.*Wik DEATHS 1704 Valentin Heins (May 15, 1637, Hamburg - November 17, 1704) was a German arithmetician (reckoner). The son of a linen weaver, the source of his education is unknown. From 1651 Heins was licensed to provide instruction in commercial computing (accounting, bookkeeping, arithmetic, etc). In the years 1658 and 1659 Heins studied theology for several semesters at the universities of Jena and Leipzig, but then returned to Hamburg. There he married and in 1661 held a vicariate (a financial endowment) at the Cathedral; whether Heins performed any duties for it is not known. In 1670 he became writing and arithmetic master of the German Church School St. Michaelis. From 1663 to 1672 he was also accountant of the Guinean-African Company. He wrote several textbooks, which made him known beyond national boundaries. They were reprinted up to the beginning of the 19th Century. Particularly popular was his Tyrocinium mercatorio-arithmeticum, a commercial arithmetic and accounting book. In 1690 Heins founded, together with Henry Meissner, reckoning master of the parish school of St. Jacobi, the Kunstrechnungsliebende Societät, a society of lovers of the art of reckoning.
This later became the Mathematical Society of Hamburg, the world's oldest existing mathematical society. *Wik 1929 Herman Hollerith (29 Feb 1860, 17 Nov 1929) American inventor of a tabulating machine that was an important precursor of the electronic computer. For the 1890 U.S. census, he invented several punched-card machines to automate the sorting of data. The machine which read the cards used a pin going through a hole in the card to make an electrical connection with mercury placed beneath. The resulting electrical current activated a mechanical counter. It saved the United States 5 million dollars for the 1890 census by completing the analysis of the data in a fraction of the time it would have taken without it and with a smaller amount of manpower than would have been necessary otherwise. In 1896, he formed the Tabulating Machine Company, a precursor of IBM. *TIS 1953 Pierre Humbert (13 June 1891 in Paris, France - 17 Nov 1953 in Paris, France) graduated from the École Polytechnique in Paris and then moved to Edinburgh to do research under Whittaker. He spent most of his career at the University of Montpellier. He specialized in the history of seventeenth-century science, writing particularly on the French astronomers of that period. He also made contributions to mathematics; in particular he wrote on elliptic functions, Lamé functions, and Mathieu functions. His main mathematical work from the mid 1930s onwards was in developing the symbolic calculus. He also wrote on applications of the symbolic calculus to mathematical physics. *SAU 1954 Tadeusz Banachiewicz (13 February 1882, Warsaw, Congress Poland, Russian Empire – 17 November 1954, Kraków) was a Polish astronomer, mathematician and geodesist. He was educated at Warsaw University and his thesis was on "reduction constants of the Repsold heliometer". In 1905, after the closure of the University by the Russians, he moved to Göttingen and in 1906 to the Pulkowa Observatory.
He also worked at the Engel'gardt Observatory at Kazan University from 1910–1915. In 1919, after Poland regained her independence, Banachiewicz moved to Kraków, becoming a professor at the Jagiellonian University and the director of Kraków Observatory. He authored approximately 180 research papers and modified the method of determining parabolic orbits. In 1925, he invented a theory of "cracovians" — a special kind of matrix algebra — which brought him international recognition. This theory solved several astronomical, geodesic, mechanical and mathematical problems. In 1922 he became a member of PAU (Polska Akademia Umiejętności) and from 1932 to 1938 was the vice-president of the International Astronomical Union. He was also the first President of the Polish Astronomical Society, the vice-president of the Geodesic Committee of The Baltic States and, from 1952 to his death, a member of the Polish Academy of Sciences. He was also the founder of the journal Acta Astronomica. He was the recipient of Doctor Honoris Causa titles from the University of Warsaw, the University of Poznań and the University of Sofia in Bulgaria. Banachiewicz invented a chronocinematograph. The lunar crater Banachiewicz is named after him. He wrote over 230 scientific works. *Wik 1956 John Evershed (26 Feb 1864, 17 Nov 1956) English astronomer who discovered (1909) the Evershed effect - the horizontal motion of gases outward from the centres of sunspots. While photographing solar prominences and sunspot spectra, he noticed that many of the Fraunhofer lines in the sunspot spectra were shifted to the red. By showing that these were Doppler shifts, he proved the motion of the source gases. This discovery came to be known as the Evershed effect. He also gave his name to a spectroheliograph, the Evershed spectroscope.*TIS 1958 Yutaka Taniyama (November 12, 1927, Kisai near Tokyo – November 17, 1958, Tokyo) was a Japanese mathematician known for the Taniyama-Shimura conjecture.
Taniyama was best known for conjecturing, in modern language, automorphic properties of L-functions of elliptic curves over any number field. A partial and refined case of this conjecture for elliptic curves over rationals is called the Taniyama-Shimura conjecture or the modularity theorem, whose statement he subsequently refined in collaboration with Goro Shimura. The names Taniyama, Shimura and Weil have all been attached to this conjecture, but the idea is essentially due to Taniyama. In 1986 Ribet proved that if the Taniyama-Shimura conjecture held, then so would Fermat's last theorem, which inspired Andrew Wiles to work for a number of years in secrecy on it, and to prove enough of it to prove Fermat's Last Theorem. Due to the pioneering contribution of Wiles and the efforts of a number of mathematicians, the Taniyama-Shimura conjecture was finally proven in 1999. The original Taniyama conjecture for elliptic curves over arbitrary number fields remains open, and the method of Wiles and others cannot be extended to provide its proof.*Wik 1990 Robert Hofstadter (5 Feb 1915, 17 Nov 1990) American scientist who was a joint recipient of the Nobel Prize for Physics in 1961 for his investigations in which he measured the sizes of the neutron and proton in the nuclei of atoms. He revealed the hitherto unknown structure of these particles and helped create an identifying order for subatomic particles. He also correctly predicted the existence of the omega-meson and rho-meson. He also studied controlled nuclear fission. Hofstadter was one of the driving forces behind the creation of the Stanford Linear Accelerator.
He also made substantial contributions to gamma ray spectroscopy, leading to the use of radioactive tracers to locate tumors and other disorders.*TIS 2000 Louis-Eugène-Félix Néel (22 Nov 1904, 17 Nov 2000) French physicist, corecipient (with the Swedish astrophysicist Hannes Alfvén) of the Nobel Prize for Physics in 1970 for his pioneering studies of the magnetic properties of solids. His contributions to solid-state physics have found numerous useful applications, particularly in the development of improved computer memory units. About 1930 he suggested that a new form of magnetic behavior might exist - called antiferromagnetism. Above a certain temperature (the Néel temperature) this behaviour stops. Néel pointed out (1947) that materials could also exist showing ferrimagnetism. Néel has also given an explanation of the weak magnetism of certain rocks, making possible the study of the past history of the Earth's magnetic field.*TIS Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
{}
# propagateWeights -- propagate (Lie theoretic) weights along equivariant maps ## Synopsis • Optional inputs: • Forward => ..., default value false, propagate weights from domain to codomain • LeadingTermTest => ..., default value true, check the columns of the input matrix for repeated leading terms • MinimalityTest => ..., default value true, check that the input map is minimal ## Description Let $T$ be a torus which acts on a polynomial ring compatibly with the grading. Assume the variables in the ring are weight vectors for the action of $T$. Use this method on a $T$-equivariant map of graded free modules to obtain the weights of the domain from the weights of the codomain (or vice versa). The weights of the variables in the ring must be set a priori, using the method setWeights. This method is called by other methods in this package. The only reason for using this method directly is that it returns a complete list of weights for the domain (resp. codomain) of an equivariant map, instead of the highest weights decomposition. This method implements an algorithm introduced in Galetto - Propagating weights of tori along free resolutions.
{}
# How to determine proper significance level in forming hypothesis? I am conducting a Two-Way ANOVA test on a set of data, with replication; there are two variables, each with 3 different categories (thus, 9 different groups). Each group has 4 data points, bringing the total data set to 36 points. I conducted a Two-Way ANOVA test by hand (and confirmed the results with Minitab and Excel), and I am confident (pun not intended) with the accuracy of my $p$ values. However, I am unsure of what an appropriate significance level is. All the literature I've read on introductory statistics assumes a 0.05 significance level; should I also assume the same? Is such a level based on personal preference? What justifies selecting 0.05? • I'll leave it to someone more knowledgeable to expand this, but the answer to your final question is: tradition, unfortunately. – Matthew Drury Aug 18 '15 at 17:20
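Whatever significance level you settle on, it can help to see exactly what the $p$ values are computed from. The following is a hand-rolled sketch (not Minitab's or Excel's code) of the sums-of-squares bookkeeping behind a balanced fixed-effects two-way ANOVA with replication, sized like the question's 3 × 3 design with 4 replicates; it stops at the F statistics, since turning those into $p$ values needs an F-distribution CDF. All data shapes and numbers here are illustrative.

```python
# Balanced two-way ANOVA (fixed effects): data[i][j] is the list of replicate
# measurements for level i of factor A and level j of factor B.

def two_way_anova(data):
    a, b = len(data), len(data[0])
    n = len(data[0][0])                      # replicates per cell (balanced)
    grand = sum(y for row in data for cell in row for y in cell) / (a * b * n)
    mean_a = [sum(y for cell in row for y in cell) / (b * n) for row in data]
    mean_b = [sum(y for i in range(a) for y in data[i][j]) / (a * n)
              for j in range(b)]
    cell_mean = [[sum(data[i][j]) / n for j in range(b)] for i in range(a)]

    # Sums of squares for main effects, interaction, and error.
    ss_a = b * n * sum((m - grand) ** 2 for m in mean_a)
    ss_b = a * n * sum((m - grand) ** 2 for m in mean_b)
    ss_ab = n * sum((cell_mean[i][j] - mean_a[i] - mean_b[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_e = sum((y - cell_mean[i][j]) ** 2
               for i in range(a) for j in range(b) for y in data[i][j])

    df_a, df_b, df_ab, df_e = a - 1, b - 1, (a - 1) * (b - 1), a * b * (n - 1)
    mse = ss_e / df_e
    f_a, f_b, f_ab = (ss_a / df_a) / mse, (ss_b / df_b) / mse, (ss_ab / df_ab) / mse
    return ss_a, ss_b, ss_ab, ss_e, f_a, f_b, f_ab
```

Comparing these F statistics against the critical value for your chosen significance level (or equivalently comparing $p$ to it) is where the 0.05 convention actually enters the procedure.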
{}
Find the volume and surface area of the body of rotation formed by the rotation of the region between the circle and the square

Question: Find the volume and surface area of the body of rotation formed by rotating the region between the circle $x^2 + y^2 = R^2$ and the square $|x| + |y| = R$ around the line $x + y = 2R$. (12.20)

Similar Solved Questions

A tiny conducting sphere has 125 extra electrons on it (has a charge of $-125e$). A second charged tiny conducting sphere is set 8.5 mm from the first. You measure that the first sphere now has an attractive force on it of $2.6 \times 10^{-19}$ N due to the second sphere. A. Does the second sphere have extra ele...

If 14.4 moles of Cu and 17.4 moles of HNO3 are allowed to react, what is the maximum number of moles of NO that can be produced? 3 Cu + 8 HNO3 → 3 Cu(NO3)2 + 2 NO + 4 H2O...

Choose two different properties and describe how they vary across a period....

Problem 1: Geneticists examined the distribution of seed coat color in cultivated amaranth grains, Amaranthus caudatus. Crossing black-seeded and pale-seeded caudatus populations gave the following counts of black, brown, and pale seeds in the second generation. Seed coat color / seed count: black 545, brown …, pale …. According to genetics laws, dominant epistasis should lead to … of all such seeds being black, brown, and pale. We want to test this theory at the … significance level. (a) Find the value of the test sta...

6 (10 points) A particular type of surface wave in a fluid travels with phase velocity $v = b/\lambda$, where $b$ is a constant. Find the group velocity of a packet of these surface waves, in terms of the phase velocity....

Page 2 of 4. NAME. SECTION I: MULTIPLE CHOICE. Fill in the boxes on the left with the correct answer for each of the following questions. (4 pts. each) 1.) The following reaction: 2A(g) + 3B(s) → C(g) + D(…), ΔH > 0, will be spontaneous at: a.) Low T b.) High T c.) All T d.) No T / cannot determine 2.) The following reaction: A(s) + B(…) → 2C(…), ΔH < 0, will be spontaneous at: a.) Low T b.) High T c.) All T d.) No T / cannot determine 3.) The following reaction: 2A(…) → B(s) + C(…), ΔH < 0, will be spontaneous at: a.) Low T b.) High T ...

Question: The position of a hummingbird flying along a straight line is given by the function $s(t) = 9t^2 … 24t$, where $t$ is measured in seconds and $s$ is measured in yards. On what interval(s) is the hummingbird slowing down, where $t \ge 0$? (Enter your answer in interval notation. If entering more than one interval, write the intervals as a union.)...

Q1(a) Find the family of the matrices that is similar to the matrix [1+2+1]. Find eigenvalues and eigenvectors of the following matrix: Determine (i) the eigenspace of each eigenvalue and a basis of this eigenspace, (ii) an eigenbasis of the matrix. Is the matrix in part (b) defective?...

DSP: please show me how to do it, step by step if you can, so that I can learn. Problem 6) [13 marks] Consider the following LTI differential equation system, where x(t) represents an input and y(t) indicates the corresponding output: … (a) What is the transfer function of this system? (b) I...

Is the matrix A = … invertible? Solution: Taking the cofactor expansion along the third column, we get det A = … = 0. So, by the Fundamental Theorem of Invertible Matrices, A is not invertible....

Evaluate the cylindrical coordinate integral ∫∫∫ dz r dr dθ = … (Type an exact answer, using π as needed.)...
{}
# How to put therefore and implies symbols \documentclass{article} \usepackage{graphicx} \begin{document} \vspace{\baselineskip}\noindent \textbf{THEOREM :} If an operator has both Left Identity and Right Identity then it is \emph{UNIQUE}. \vspace{\baselineskip}\noindent \textbf{PROOF :} Let e_{l} is left identity therefore e_{l} * e_{r} this implies e_{r} \end{document} • Welcome to TeX.SX! Please make your code compilable, starting with \documentclass{...} and ending with \end{document}. That may seem tedious to you, but think of the extra work it represents for TeX.SX users willing to help you. Help them help you: remove that one hurdle between you and a solution to your problem. – Ronny Nov 6 '13 at 10:40 • It would be nice, if you could explain, what you tried to include the symbols, where they should appear, and whether you need that in more than this small example (because then maybe it's better to use algorithm2e or something like that). – Ronny Nov 6 '13 at 11:08 • i want to write symbol for "therefore" – user39495 Nov 6 '13 at 11:10 • If you are using \vspace font changes or \noindent in a document it is a sign that something is probably wrong. Ideally the markup should just be \begin{theorem} with the spacing and fonts specified elsewhere. – David Carlisle Nov 6 '13 at 11:12 • the three dots symbol is $\therefore$ (amssymb package) see texdoc symbols – David Carlisle Nov 6 '13 at 11:14 As stated in the comments, you get the symbols in mathmode simply by writing them down. Packages like amsmath and amssymb support you. \documentclass{article} \usepackage{amsmath} \usepackage{amssymb} \newtheorem{theorem}{THEOREM} \newtheorem{proof}{PROOF} \begin{document} \begin{theorem} If an operator has both Left Identity and Right Identity then it is \emph{UNIQUE}. \end{theorem} \begin{proof} Let $e_{l}$ is left identity $\therefore e_{l} * e_{r} \implies e_{r}$ \end{proof} \end{document} • With a very fierce “don't do it in a printed document”! 
– egreg Mar 21 '15 at 15:57 • I think you're missing an amsthm somewhere. – kahen Mar 21 '15 at 20:31 • @kahen Can you explain why? Did you run my example? – Johannes_B Mar 22 '15 at 9:49 A somewhat larger version of \therefore may be built as: \dot{.\hspace{.095in}.}\hspace{.5in} • That sort of trick may be OK for visual rendering, but it can be highly undesirable when trying to copy raw text from a generated PDF – JuanRocamonde Dec 29 '18 at 22:46 Maybe you could simply use lualatex and a font which actually has the character: %!TEX TS-program = lualatex \documentclass{article} \usepackage{fontspec} \newtheorem{theorem}{THEOREM} \newtheorem{proof}{PROOF} \begin{document} \begin{theorem} If an operator has both Left Identity and Right Identity then it is \emph{UNIQUE}. \end{theorem} \begin{proof} Let $e_{l}$ is left identity ∴ $e_{l} * e_{r}$ ⇒ $e_{r}$ \end{proof} \end{document} Initialisation Code: \def\therefore{\boldsymbol{\text{ } \leavevmode \lower0.4ex\hbox{$\cdot$} \kern-.5em\raise0.7ex\hbox{$\cdot$} \kern-0.55em\lower0.4ex\hbox{$\cdot$} \thinspace\text{ }}} And can then be called on using: \therefore Not entirely happy with any of the handmade therefores proposed above, I thought I would offer this one also, which in my opinion has a good balance of dot size (in between \bullet and \cdot) and spacing: \def\therefore{{\tiny$\bullet$}\kern-0.2ex\raisebox{1ex}{\tiny$\bullet$}\kern-0.2ex{\tiny$\bullet$}}
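If the amsthm package is loaded, its built-in proof environment can also be combined with the same symbols; a minimal sketch (theorem wording adapted from the question, completing the uniqueness argument):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}
If an operator has both a left identity and a right identity,
then the identity is unique.
\end{theorem}
\begin{proof}
Let $e_l$ be a left identity and $e_r$ a right identity.
$\therefore\ e_l * e_r = e_r$ and $e_l * e_r = e_l
\implies e_l = e_r$.
\end{proof}
\end{document}
```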
{}
# Tag Info Accepted ### What's the difference between Lagrangian relaxation and Lagrangian decomposition? They are not the same thing. Lagrangian decomposition is a special case of Lagrangian relaxation. (Note: I'm talking specifically about integer programming problems in this answer, though some of ... ### "Partial" Lagrangian Dual in LP This is called Lagrangian relaxation, no matter what subset of constraints you choose to dualize. Accepted ### "Partial" Lagrangian Dual in LP Based on the mentioned references, suppose the primal problem is: \begin{align} \begin{array}{cl} \underset{}{\text{minimize}} & c x \\ \text{subject to} & Ax = a \\ & Dx \leq e \\ & x ... Accepted ### Difficulties with Finding a Proper Penalty Value for the Progressive Hedging Algorithm This problem is addressed in some detail in Section 2.1 of the paper Progressive hedging innovations for a class of stochastic mixed-integer resource allocation problems by Watson and Woodruff (a non-... Accepted ### Augmented Lagrangian Function for Semidefinite Programming Problems My way of reading it is $\langle X, \mathcal{A}^*(y)+S-C\rangle = \langle X, \mathcal{A}^*(y)-C\rangle + \langle X,S \rangle$. The first term is your standard inner product between dual variable and ... ### Using LR-based method to solve mixed integer programming As its name indicates, a Lagrangian relaxation is a relaxation and therefore only provides a dual bound. If you are interested in getting a primal solution, you have several ways to exploit a ... Accepted ### Lagrangian Relaxation bound greater than optimal solution After the discussion here and a suggestion on my post at Gurobi's community I'll post an answer for the forum records. Concerning the presolve, I found out that in order to check if this is what is ... Accepted ### How to Speed Up the subgradient optimization procedure in a Lagrangian Relaxation Scheme In my experience, this sort of thing just requires a lot of trial and error. An SO variant that works well for one problem may not work as well for another, so you just have to implement and test a ... ### Lagrangian Relaxation: The Weak Lower Bound A problem with subgradient optimization is the phenomenon of “zigzagging” in the vicinity of the optimal solution. That is, it is often seen that the direction of search in one step is almost opposite ... ### Lagrangian Relaxation for Two-Stage Stochastic Program I'll comment on the Lagrangian relaxation question and leave the Benders question for someone else to comment on. (You might want to consider splitting your question into two, one for LR and one for ... ### Augmented Lagrangian Function for Semidefinite Programming Problems There's a good discussion of this in Convex Optimization by Stephen Boyd and Lieven Vandenberghe. See section 5.9. With an ordinary scalar inequality constraint $f_{i}(x) \leq 0$, you'll have a term ... ### The variable splitting scheme in the context of Lagrangian relaxation I am familiar with variable splitting (also known as Lagrangian decomposition) being applied to facility location problems. We used it in our paper: Snyder and Daskin, Stochastic $p$-robust location ... ### The variable splitting scheme in the context of Lagrangian relaxation The only occurrences of this decomposition that I am aware of are from: Jörnsten K, Näsberg M (1986) A new Lagrangian relaxation approach to the generalized assignment problem. European Journal of ... Accepted ### Related to Lagrangian dual Dualizing a constraint comes back to, first, the direction of the objective function, and, second, how the dualized constraint would be violated. In your case, the constraint is written as $LHS-... ### Lagrangian Relaxation for Two-Stage Stochastic Program Larry Snyder explained it very well. A few items to check/add for the Lagrangian relaxation part: make sure your lower bounds, i.e. the feasible solution (most probably depending on fixed first stage ...
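To make the subgradient machinery discussed in several of these answers concrete, here is a hypothetical toy example (all data invented for illustration): a 0-1 problem min c·x subject to a single complicating constraint a·x ≥ b. Dualizing that constraint gives a Lagrangian subproblem that separates over the variables and is solved by inspection, and the multiplier is updated by subgradient ascent with a diminishing step size.

```python
# Lagrangian relaxation of: min c.x  s.t.  a.x >= b,  x in {0,1}^n.
# Dualizing the complicating constraint with multiplier lam >= 0 gives
#   L(lam) = lam*b + sum_i min(0, c[i] - lam*a[i]),
# a valid lower bound for any lam >= 0 (weak duality), maximized here
# by subgradient ascent with a diminishing step size 1/(k+1).

def lagrangian_dual_bound(c, a, b, iters=200):
    lam, best = 0.0, float("-inf")
    for k in range(iters):
        # Relaxed subproblem by inspection: x_i = 1 iff its reduced cost
        # c_i - lam*a_i is negative.
        x = [1 if c[i] - lam * a[i] < 0 else 0 for i in range(len(c))]
        value = lam * b + sum(min(0.0, c[i] - lam * a[i]) for i in range(len(c)))
        best = max(best, value)
        g = b - sum(a[i] * x[i] for i in range(len(c)))  # subgradient at lam
        lam = max(0.0, lam + g / (k + 1))                # keep lam >= 0
    return best
```

For instance, with c = [4, 3, 2], a = [2, 3, 1], b = 4 the optimal 0-1 solution has cost 5, and the dual bound climbs toward 5; this also illustrates the zigzagging of the multiplier around the dual optimum mentioned above.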
{}
Could anyone please explain to me the difference between these two membership symbols: one with an R in the index, $\in_R$, and the other without, $\in$? I have found them in a scientific paper related to the cryptographic field. • Can you link to the document and tell us what page? Or take a screenshot from the document? – mikeazo Jan 16 '17 at 20:43 • I typically see $\in_R$ when referring to something being randomly drawn from a set. But seeing the paper would help answer your question. P.S., I updated to use TeX equations instead of a picture. – mikeazo Jan 16 '17 at 20:45 • FYI to the community: downvotes on questions like this (with no comment) are what make people like me feel "unqualified" to even read this site, and take months off at a time. I get that people insta-downvote anything that looks like a "pure mathematics" question... but really? – Mike Ounsworth Jan 16 '17 at 21:22 • @MikeOunsworth: I didn't downvote this, but without an actual reference to the context in which the OP saw this symbol (and no, "in a scientific paper related to cryptographic field" doesn't count), I don't think this question is "useful and clear" enough to deserve an upvote, either. While mikeazo's and Yehuda Lindell's guess is probably right, without knowing the context that symbol could really mean anything. – Ilmari Karonen Jan 17 '17 at 4:08 • @IlmariKaronen Thank you. Your comment nicely (and politely) explains what's missing from the question. If more people left comments like that along with their downvote, I think that would go a long way to making crypto.SE feel more inviting. – Mike Ounsworth Jan 17 '17 at 14:38 1. $x \in S$ is used to denote that $x$ is in the set $S$. 2. $x \in_R\ S$ is used to denote that $x$ is chosen randomly from the set $S$. 3. Choosing a value $x$ randomly from $S$ is also sometimes denoted $x\leftarrow S$ or even $x\leftarrow_R\ S$. 
Sometimes this is used for choosing according to a distribution and not a set, but you need to look at the preliminaries of the paper to determine their notation. • I believe I've seen $x \overset{\$}{\leftarrow} S$ used for this, too, and I wouldn't be at all surprised if somebody used $x \in_{\$} S$. As with standards, the nice thing about math symbols is that there are so many to choose from. ;) – Ilmari Karonen Jan 17 '17 at 3:57
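As a programming aside, "draw $x$ uniformly at random from the finite set $S$" is exactly what a CSPRNG-backed choice gives you in code. A hedged Python illustration using the standard-library secrets module (appropriate in a cryptographic context, unlike the random module):

```python
# x <-_R S : sample an element of the finite set S uniformly at random,
# using a cryptographically strong source (Python's secrets module).
import secrets

def sample_uniform(S):
    """Return x chosen uniformly at random from the finite set S."""
    # Sort first so the underlying sequence has a well-defined order;
    # assumes the elements of S are mutually comparable.
    return secrets.choice(sorted(S))
```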
{}
# Why is a five-membered ring favoured in chelates? Why aren't chelates which form 3- or 4-membered rings, or 6- or 7-membered rings, as stable as 5-membered ring chelates? For example, if we measure the stability of chelates with oxygen donors, say oxalate, malonate, succinate etc., oxalate chelated with a metal shows more stability. Why are malonate and succinate less stable as chelating ligands? • Chelation increases the stability, probably due to a larger positive entropy change – Heisenberg Mar 19 '15 at 7:03 $\ce{Be(II)}$-malonato complexes are more stable than the corresponding oxalato complexes, cf. Preparative, potentiometric and NMR studies of the interaction of beryllium(II) with oxalate and malonate.
{}
Physics, 2015. Abstract: We have set up a light scattering spectrometer to study the depolarization of light scattering in linear alkylbenzene. From the scattering spectra it can be unambiguously shown that the depolarized part of light scattering belongs to Rayleigh scattering. The additional depolarized Rayleigh scattering can make the effective transparency of linear alkylbenzene much better than expected. Therefore sufficient scintillation photons can transmit through the large liquid scintillator detector of JUNO. Our study is crucial to achieving the unprecedented energy resolution 3\%/$\sqrt{E\mathrm{(MeV)}}$ for the JUNO experiment to determine the neutrino mass hierarchy. The spectroscopic method can also be used to judge the attribution of the depolarization of other organic solvents used in neutrino experiments. Borexino Collaboration, Physics, 2004. Abstract: We report on the study of a new liquid scintillator target for neutrino interactions in the framework of the research and development program of the BOREXINO solar neutrino experiment. The scintillator consists of 1,2-dimethyl-4-(1-phenylethyl)-benzene (phenyl-o-xylylethane, PXE) as solvent and 1,4-diphenylbenzene (para-Terphenyl, p-Tp) as primary and 1,4-bis(2-methylstyryl)benzene (bis-MSB) as secondary solute. The density close to that of water and the high flash point make it an attractive option for large scintillation detectors in general. The study focused on optical properties, radioactive trace impurities and novel purification techniques of the scintillator. Attenuation lengths of the scintillator mixture of 12 m at 430 nm were achieved after purification with an alumina column. A radiocarbon isotopic ratio of C-14/C-12 = 9.1 * 10^{-18} has been measured in the scintillator. Initial trace impurities, e.g. 
U-238 at 3.2 * 10^{-14} g/g could be purified to levels below 10^{-17} g/g by silica gel solid column purification. Physics , 2014, Abstract: We examine the prospects for detecting supernova$\nu_e$in JUNO, RENO-50, LENA, or other approved or proposed large liquid scintillator detectors. The main detection channels for supernova$\nu_e$in a liquid scintillator are its elastic scattering with electrons and its charged-current interaction with the$^{12}$C nucleus. In existing scintillator detectors, the numbers of events from these interactions are too small to be very useful. However, at the 20-kton scale planned for the new detectors, these channels become powerful tools for probing the$\nu_e$emission. We find that the$\nu_e$spectrum can be well measured, to better than$\sim 40\%$precision for the total energy and better than$\sim 25\%$precision for the average energy. This is adequate to distinguish even close average energies, e.g., 11 MeV and 14 MeV, which will test the predictions of supernova models. In addition, it will help set constraints on neutrino mixing effects in supernovae by testing non-thermal spectra. Without such large liquid scintillator detectors (or Super-Kamiokande with added gadolinium, which has similar capabilities), supernova$\nu_e$will be measured poorly, holding back progress on understanding supernovae, neutrinos, and possible new physics. Physics , 2011, DOI: 10.1016/j.astropartphys.2012.02.011 Abstract: We propose the liquid-scintillator detector LENA (Low Energy Neutrino Astronomy) as a next-generation neutrino observatory on the scale of 50 kt. The outstanding successes of the Borexino and KamLAND experiments demonstrate the large potential of liquid-scintillator detectors in low-energy neutrino physics. LENA's physics objectives comprise the observation of astrophysical and terrestrial neutrino sources as well as the investigation of neutrino oscillations. 
In the GeV energy range, the search for proton decay and long-baseline neutrino oscillation experiments complement the low-energy program. Based on the considerable expertise present in European and international research groups, the technical design is sufficiently mature to allow for an early start of detector realization. F. L. Villante Physics , 2014, DOI: 10.1016/j.physletb.2015.01.043 Abstract: Neutrinos produced in the Sun by electron capture reactions on$^{13}{\rm N}$,$^{15}{\rm O}$and$^{17}{\rm F}$, to which we refer as ecCNO neutrinos, are not usually considered in solar neutrino analysis since the expected fluxes are extremely low. The experimental determination of this sub-dominant component of the solar neutrino flux is very difficult but could be rewarding since it provides a determination of the metallic content of the solar core and, moreover, probes the solar neutrino survival probability in the transition region at$E_\nu\sim 2.5\,{\rm MeV}\$. In this letter, we suggest that this difficult measure could be at reach for future gigantic ultra-pure liquid scintillator detectors, such as LENA.
{}
# When did the term and taught technique 'cross multiplication' enter into common use? The title says it all, I suppose. I'm interested to know when/where the term/technique cross multiply came into use. Sources would be nice. In case it's unfamiliar to anyone, or in case the usage of the term varies, I'm referring to the compound arithmetic operation where an equation of two fractions is multiplied on both sides by its two denominators. It's named because it can be illustrated with a big cross: $$\frac{a}{b} = \frac{c}{d}$$ $$ad = cb$$ • According to link it was early 1950's – Amy B Nov 20 '15 at 2:45 • Ah, the old fashioned dictionary.com – NiloCK Nov 20 '15 at 2:47 • A quick check of google scholar suggests: <Short, R. L. (1939). Methods in Arithmetic and Algebra. School Science and Mathematics, 39 (3), 239-250.> as a possible source. ("To prove this, just cross multiply numerators and denominators") One issue is that the term cross multiplication had a different meaning before its use in solving proportions... – Benjamin Dickman Nov 20 '15 at 3:03 • Aren't methods for solving proportions found even in the Rhind Papyrus? Or is this question merely asking when the name "cross multiply" was used for it? – Gerald Edgar Nov 20 '15 at 15:23 • @GeraldEdgar Certainly people have been solving these problems for a very long time. I'm interested in the particular term, and more specifically in its emergence as a pedagogical tool using the mnemonic of the cross. – NiloCK Nov 20 '15 at 19:06
{}
# How is mechanical strain developed in a metal bar at the molecular level? If I have a metal bar fixed to a support at one end while I apply a tensile force at the other end, the bar elongates while its cross-sectional area decreases. I want to know how strain develops at the molecular or atomic level such that the cross-sectional area of the bar decreases, and why the contraction is perpendicular to the direction of the applied force. It is in my view a very deep question, and a detailed answer depends on the specific material class. At a high level, some understanding could maybe be obtained by noting that generally bonds have to re-align themselves in the direction of the applied load. At a macroscopic level this manifests itself as a resistance to volume changes, the one which in engineering elasticity is modelled via the bulk modulus; resistance to shape changes is instead modelled via the shear modulus. As changes in volume cost energy, a stretched metal bar will attempt to reduce this penalty by decreasing its cross section. I take for granted that the reasons behind the stretching in the direction of load application are obvious. In this frame, it is also interesting to note which materials react to stretching by contracting (in the plane perpendicular to the applied load) to such an extent that the volume before and after the stretching is virtually the same. These are rubbers, loosely speaking, with Poisson ratio $$\nu \approx 0.5$$. Indeed, rubber elasticity relies mainly on macromolecular, long-chain re-alignment, and not on bond stretching. The page What is Poisson's Ratio could be interesting; the video of the stretching honeycomb is revealing. Metals have a crystalline structure. Let's take one type of lattice: FCC. That arrangement is the most compact possible, if we model the atoms as spheres. Only the Pauli exclusion principle prevents them from being closer than they are. 
The effect of a macroscopic tensile strain is to increase the interatomic spacing in the stress direction, distorting the lattice. The deformed lattice now has some space for a rearrangement, and the transverse contraction is the result of the neighboring atoms filling the blanks, so to speak, until reaching the limit of the exclusion principle. But, as the FCC lattice is the most compact possible arrangement, any other, such as the distorted one, will have a bigger volume. The Poisson coefficient is then smaller than $$0.5$$ (which would result in constant volume).
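The volume argument above can be made quantitative: for a small uniaxial strain ε with Poisson ratio ν, the bar stretches by (1 + ε) axially and contracts by (1 − νε) in each transverse direction, so to first order ΔV/V ≈ (1 − 2ν)ε. A small sketch with illustrative numbers only (ν = 0.5 for rubber, ν ≈ 0.3 as a typical metal value):

```python
# Relative volume change of a bar under a small uniaxial strain eps:
# axial stretch (1 + eps), transverse contraction (1 - nu*eps) in both
# perpendicular directions, so V/V0 = (1 + eps) * (1 - nu*eps)**2.

def volume_change(eps, nu):
    """Exact ratio V/V0 - 1 for uniaxial strain eps and Poisson ratio nu."""
    return (1 + eps) * (1 - nu * eps) ** 2 - 1

def volume_change_linear(eps, nu):
    """First-order approximation (1 - 2*nu) * eps."""
    return (1 - 2 * nu) * eps
```

At ν = 0.5 the first-order volume change vanishes (rubber is nearly volume-preserving), while at ν ≈ 0.3 a metal bar dilates under tension, consistent with the FCC argument above.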
{}
# Unsupervised classification of satellite image sequences derived from time series with SOFM in Python? I have the following data: up to 2 images per day (time series from 2015 - 2019, with gaps) for a specific region (AOI - Germany - Hesse) with 2 variables (soil moisture, precipitation). Out of this data I try to derive soil properties. In theory that's possible because there is a (non-linear) relation between the soil moisture contents of different types of soil at the same time. Because the spatial distribution of rainfall and other climate variables that impact soil moisture (besides soil properties) differs, I decided to generate sequences of drying periods (better comparable than whole time series) starting shortly after a rainfall event occurs. So I now have sequences for soil moisture like: consecutive_days = [0,1,2,3,4,5,6,7,8] soil_moisture = [0.8, np.nan, 0.7, np.nan, 0.6, 0.5, 0.4, np.nan, 0.2] #or with masking the nan values: consecutive_days = [0,2,4,5,6,8] soil_moisture = [0.8, 0.7, 0.6, 0.5, 0.4, 0.2] For every location I now have multiple sequences. Because I don't know anything about the number of different soil property classes or their spatial distribution, I need to use an unsupervised classification method. Here I get lost. I found this paper. They classify soil properties with a method called SOFM - Self-Organizing Feature Mapping. I can't find a Python module that implements the SOFM algorithm for sequential data like I have. My goal is to derive a map of possible soil property classes. My question is: what kind of classification method is out there to classify an unknown number of classes within sequences derived from time series, given that the sequences are not equal? By "not equal" I mean the third entry in a sequence can refer to the 3rd day of the dry period as well as the 6th day when there is a gap in measurements. Also, the sequence length can differ from 2 days to 25 days. 
Or is there a better Stack Exchange site to bring this problem to?
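Since the stumbling block is that the drying-period sequences have different lengths and gaps, one pragmatic route (not the SOFM of the cited paper, but compatible with it) is to compare sequences with an elastic distance such as dynamic time warping (DTW), then feed the pairwise distance matrix to any clustering method that accepts distances (k-medoids, hierarchical clustering, or a SOM trained on distance features). A minimal pure-Python DTW sketch, assuming NaN gaps have already been masked out as in the second listing above:

```python
# Dynamic time warping distance between two soil-moisture sequences of
# (possibly) different lengths. Classic O(n*m) dynamic program.

def dtw(s, t):
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch s
                                 D[i][j - 1],      # stretch t
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Plain DTW ignores the `consecutive_days` index, so a 3rd-day and a 6th-day reading can still be aligned; if the absolute day matters, the cost term can be extended to penalize day differences as well. This is a sketch of one option, not a claim that it outperforms a sequence-aware SOFM.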
{}
# Check homework integration in Mathematica In my homework I have to compute (by hand) $$\iint\limits_{x^2+y^2\leq 1}(x^2+y^2)\,\mathrm dx\,\,\mathrm dy.$$ My solution so far: Let $f(x,y)=x^2+y^2$ and $K=\{(x,y)\in\mathbb{R}^2:x^2+y^2\leq 1\}$. With the transformation to polar coordinates $\varphi:(r,\phi)\mapsto(r\cdot\cos\phi,r\cdot\sin\phi)=(x,y)$ and the determinant of the Jacobian matrix $|\det D_\varphi|=r$ we can rewrite the set as $K=\{(r\cdot\cos\phi,r\cdot\sin\phi):r\in[0,1],\phi\in[0,2\pi]\}$ and our integrand too as $f(x,y)=x^2+y^2=r^2$. $$\iint\limits_K f(x,y)\,\mathrm dx\,\mathrm dy=\int\limits_0^1\int\limits_0^{2\pi}r^2\cdot |\det D_\varphi|\,\mathrm d\phi\,\mathrm dr=2\pi\int\limits_0^1r^3\,\mathrm dr=\left.2\pi\cdot\frac{1}{4}r^4\right|_0^1=\frac{\pi}{2}.$$ I now want to check the result with a CAS. I am pretty new to Mathematica, so I just didn't find any clue on how to enter such an integral for computation. Any help out there? - ## migrated from math.stackexchange.com Jun 10 '12 at 11:03 This question came from our site for people studying math at any level and professionals in related fields. In Mathematica: Integrate[Integrate[x^2 + y^2, {x, -Sqrt[1 - y^2], Sqrt[1 - y^2]}], {y, -1, 1}] Or, shorter: Integrate[x^2 + y^2, {y, -1, 1}, {x, -Sqrt[1 - y^2], Sqrt[1 - y^2]}] The main trick is to calculate the bound on $x$ based on the current value of $y$, which is what you need to make the integration bounds explicit. Indeed, $x_{max}=\sqrt{1-y^2}$. This is something you can do in most integral-calculating math software. You can also define the region implicitly, see this. For this specific problem, that would give: Integrate[(x^2 + y^2) Boole[x^2 + y^2 <= 1], {y, -100, 100}, {x, -100, 100}] Where the "100" bounds are just to limit the computation. 
However, Mathematica is even smart enough to calculate: Integrate[(x^2 + y^2) Boole[x^2 + y^2 <= 1], {y, -Infinity, Infinity}, {x, -Infinity, Infinity}] - Rather than writing this particular problem as an iterated integral, I wonder if it could be written as a "pure" double integral. Not all double integrals can be simply written as iterated integrals. –  Ragib Zaman Jun 7 '12 at 12:27 The last version with Integrate[(x^2 + y^2) Boole[x^2 + y^2 <= 1], {y, -Infinity, Infinity}, {x, -Infinity, Infinity}] is awesome. I will stick to the Boole function. –  Christian Ivicevic Jun 7 '12 at 12:34 You're indeed supposed to be using infinite limits when using the Boole[] form of the integral; the assumption is that the integrand is zero outside the boundary described within the Boole[] function, so it all works out. –  J. M. Jun 10 '12 at 11:48 For checking a paper-and-pencil evaluation, definitely the method using Boole is the way to go: that way, if you incorrectly described the limits on x in terms of y (or vice versa) by hand, you wouldn't repeat the same error when doing the Mathematica evaluation. –  murray Jun 10 '12 at 20:26
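If Mathematica is not at hand, the same check can be done with SymPy in Python (an alternative CAS, not part of the original answer), directly mirroring the polar-coordinate derivation from the question:

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)

# Integrand r^2 times the Jacobian determinant r, over r in [0,1], phi in [0,2*pi].
result = sp.integrate(r**2 * r, (r, 0, 1), (phi, 0, 2 * sp.pi))
print(result)  # pi/2
```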
# HW2 discussion, ECE301 Spring 2011, Prof. Boutin

In question 2e, $x(t)= \sum_{k=-\infty}^\infty \frac{1}{1+(x-7k)^2} \$ is given; should it be like this? $x(t)= \sum_{k=-\infty}^\infty \frac{1}{1+(t-7k)^2} \$

Yes, it should be. The correction has been made. -pm

I was trying to find out what the peak value is for this question, but it turns out to be very hard to calculate the sum $\sum_{t=-\infty}^\infty \frac{1}{1+t^2} \$ and Wolfram said the answer is π * coth(π). Is there any easier way to do that? Yimin, Jan 20

You do not have to evaluate the sum. In particular, you do not need the peak value of that function. Try to guess the period directly by looking at the sum. If you have no idea how to do this, read this page first. -pm

Yeah, I'm just trying to figure out the infinite sum for fun. Thanks.

Oh excellent! I think this deserves a page on its own. Let's try to involve the math folks for help. -pm

And for question 4, are we still using the tempo? So my guess is to use step functions to cut out the rhythm we want? (Yes, that's the idea. -pm) Then put the whole line in one equation? That will become pretty messy, I guess. Yimin

Not too bad, if you think about it carefully. Each note can be written in a somewhat simple form. Then you just add all the notes together. -pm

For question 4, do we only need the first second of the song? Right, we don't have to compress the entire song into one second.

No. You need to pick a note lasting one second (your choice what frequency), say we call this note $y_0(t)$. Then write your tune z(t) as a function of $y_0(t)$ rescaled and shifted. Hint: you can write z(t) as a summation of $y_0(a_i t+b_i)$'s multiplied by some step functions $u(t-c_i)-u(t-d_i)$ (to make each note last the right amount of time). Does that help? -pm
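For the curious, both the closed form π·coth(π) and the period-7 claim are easy to sanity-check numerically (a quick Python sketch, not part of the homework):

```python
import math

# Partial sum of 1/(1+t^2) over integer t = -N..N versus the closed form
# pi*coth(pi) reported by Wolfram Alpha (the tail decays like 2/N).
N = 200000
partial = 1.0 + 2.0 * sum(1.0 / (1.0 + t * t) for t in range(1, N + 1))
closed = math.pi / math.tanh(math.pi)
print(partial, closed)

# The signal x(t) = sum_k 1/(1+(t-7k)^2) is periodic with period 7:
# shifting t by 7 just relabels the summation index k.
def x(t, K=2000):
    return sum(1.0 / (1.0 + (t - 7 * k) ** 2) for k in range(-K, K + 1))

print(abs(x(1.3 + 7) - x(1.3)))  # ~0 up to truncation error
```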
# Properties

Label: 20691o
Number of curves: $3$
Conductor: $20691$
CM: no
Rank: $1$
Graph

# Related objects

Show commands for: SageMath

sage: E = EllipticCurve("o1")
sage: E.isogeny_class()

## Elliptic curves in class 20691o

sage: E.isogeny_class().curves

| LMFDB label | Cremona label | Weierstrass coefficients | j-invariant | Discriminant | Torsion structure | Modular degree | Faltings height | Optimality |
|---|---|---|---|---|---|---|---|---|
| 20691.i3 | 20691o1 | $$[0, 0, 1, 726, -333]$$ | $$32768/19$$ | $$-24537891411$$ | $$[]$$ | $$10800$$ | $$0.68308$$ | $$\Gamma_0(N)$$-optimal |
| 20691.i2 | 20691o2 | $$[0, 0, 1, -10164, -419598]$$ | $$-89915392/6859$$ | $$-8858178799371$$ | $$[]$$ | $$32400$$ | $$1.2324$$ | |
| 20691.i1 | 20691o3 | $$[0, 0, 1, -837804, -295162893]$$ | $$-50357871050752/19$$ | $$-24537891411$$ | $$[]$$ | $$97200$$ | $$1.7817$$ | |

## Rank

sage: E.rank()

The elliptic curves in class 20691o have rank $$1$$.

## Complex multiplication

The elliptic curves in class 20691o do not have complex multiplication.

## Modular form 20691.2.a.o

sage: E.q_eigenform(10)

$$q - 2q^{4} - 3q^{5} + q^{7} + 4q^{13} + 4q^{16} - 3q^{17} - q^{19} + O(q^{20})$$

## Isogeny matrix

sage: E.isogeny_class().matrix()

The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the Cremona numbering.

$$\left(\begin{array}{rrr} 1 & 3 & 9 \\ 3 & 1 & 3 \\ 9 & 3 & 1 \end{array}\right)$$

## Isogeny graph

sage: E.isogeny_graph().plot(edge_labels=True)

The vertices are labelled with Cremona labels.
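As a sanity check independent of Sage, the discriminant and j-invariant listed for the first curve in the class follow from the standard b- and c-invariant formulas for a Weierstrass equation y² + a1·xy + a3·y = x³ + a2·x² + a4·x + a6 (plain Python, exact integer arithmetic):

```python
from fractions import Fraction

def disc_and_j(a1, a2, a3, a4, a6):
    """Standard Weierstrass invariants b2, b4, b6, b8, c4, the discriminant,
    and the j-invariant c4^3 / disc."""
    b2 = a1 * a1 + 4 * a2
    b4 = 2 * a4 + a1 * a3
    b6 = a3 * a3 + 4 * a6
    b8 = a1 * a1 * a6 + 4 * a2 * a6 - a1 * a3 * a4 + a2 * a3 * a3 - a4 * a4
    c4 = b2 * b2 - 24 * b4
    disc = -b2 * b2 * b8 - 8 * b4 ** 3 - 27 * b6 ** 2 + 9 * b2 * b4 * b6
    return disc, Fraction(c4 ** 3, disc)

# Curve 20691o1 with coefficients [0, 0, 1, 726, -333] from the table above.
disc, j = disc_and_j(0, 0, 1, 726, -333)
print(disc, j)  # -24537891411 and 32768/19, matching the table
```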
# Q: Is it a coincidence that a circle's circumference is the derivative of its area, as well as the volume of a sphere being the antiderivative of its surface area? What is the explanation for this?

Physicist: For those of you not hip to the calculus groove, here's what's going down: The derivative of Y with respect to X, written $\frac{dY}{dX}$, is just a description of how fast Y changes when X changes. It so happens that if $Y=X^N$, then $\frac{dY}{dX}=NX^{N-1}$. So, for example, if $Y=5X^3$, then $\frac{dY}{dX} = 15X^2$. The area of a circle is $\pi R^2$, and the circumference is $2\pi R$, which is the derivative. The volume of a sphere is $V = \frac{4}{3}\pi R^3$, and the surface area is $S = 4\pi R^2$, which is again the derivative. This, it turns out, is no coincidence!

If you describe volume, V, in terms of the radius, R, then increasing R will result in an increase in V that's proportional to the surface area. If the surface area is given by S(R), then you'll find that for a tiny change in the radius, dR, $dV = S(R)dR$, or $\frac{dV}{dR} = S(R)$.

You can think of a sphere as a series of very thin surfaces added together. This is another, equivalent, way of describing the situation. Each layer adds (surface area of layer)x(thickness of layer) to the volume. You can think of this like painting a spherical tank. The increase in volume, dV, is the amount of paint you use, and the amount of paint is just the surface area, S(R), times the thickness of the paint, dR. This same argument can be used to show that the volume is the integral of the surface area (just keep painting layer after layer).

It's a little easier to keep track of what's going on with circles. If you increase the radius of a circle by a tiny amount, dR, then the area increases by (2πR)(dR). The same "derivative thing" holds up for the circumference vs. the area of a circle. The change in area, dA, is dA = (2πR)dR. So, $\frac{dA}{dR}=2\pi R$.
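The whole observation can be confirmed symbolically in a couple of lines (a SymPy sketch added here for readers who want to experiment; it is not part of the original post):

```python
import sympy as sp

R = sp.symbols('R', positive=True)

area = sp.pi * R**2                        # circle area
volume = sp.Rational(4, 3) * sp.pi * R**3  # sphere volume

print(sp.diff(area, R))    # 2*pi*R, the circumference
print(sp.diff(volume, R))  # 4*pi*R**2, the surface area
```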
That is, the derivative of the area is just the circumference. This, by the way, is one of the arguments for using "τ" instead of "π". τ = 2π, so the area of a circle is $\pi R^2 = \frac{1}{2}\tau R^2$. This makes the "differential nature" of the circumference a little more obvious.

Answer gravy: By the way, this argument is only exact when the thickness of the new layer "goes to zero". Basically, the top of the new layer is a little longer/bigger than the bottom of the new layer. So if the area is $A=\pi R^2$, and the change in radius is $\Delta R$, then $\Delta A = \pi(R+\Delta R)^2 - \pi R^2 = \pi R^2+2\pi R\Delta R + \pi (\Delta R)^2-\pi R^2 = 2\pi R\Delta R + \pi (\Delta R)^2$

This extra little $\pi (\Delta R)^2$ is a result of the top and bottom of the new layer being very slightly different lengths. But when $\Delta R$ is small, $(\Delta R)^2$ is really small. For example, if $\Delta R$ is one thousandth, then $(\Delta R)^2$ is one millionth. The whole idea behind calculus is that when the scales get very small you can just ignore these "extra tiny" terms. In fact, this is the essential difference between dA and ΔA, and how the derivative is defined: $\frac{dA}{dR}=\lim_{\Delta R\to 0}\frac{\Delta A}{\Delta R} = \lim_{\Delta R\to 0}\frac{1}{\Delta R}\left(2\pi R\Delta R + \pi (\Delta R)^2\right)$ $= \lim_{\Delta R\to 0} 2\pi R + \pi\Delta R = 2\pi R$

This entry was posted in -- By the Physicist, Math. Bookmark the permalink.

### 14 Responses to Q: Is it a coincidence that a circle's circumference is the derivative of its area, as well as the volume of a sphere being the antiderivative of its surface area? What is the explanation for this?

1. Joe says: Cool. Wonderful answer to my question.

2. juges debnath says: It happens only in calculus. The only thing I would like to say is thanks to Newton for his ingenious discovery of calculus!

3. Patrick says: Very cool. I never thought about this before. But after this explanation it fits together so easily. Thank you

4.
asdfasd says: "Volume is the integral of surface area." To me, that makes so much more sense than "surface area is the derivative of volume." Either way, this answers something I've been wondering about for a while. Thanks, man.

5. K.N. says: It's easy to visualize in 1D, too. The sphere of radius r and center x is an interval (x-r,x+r), which has 'volume' 2r and 'surface area' 2: the two endpoints are the boundary. And you can get the former by integrating the latter in the obvious way. Hyperspheres are somewhat less easy, though.

6. Derpo says: Not to mention the derivative of the circumference is 2 pi, which is the number of radians equal to 360 degrees.

7. Tsi says: Thank you for this wonderful explanation. However, I don't know calculus. So can you explain some of these relationships between radius, circumference, area, circle, sphere surface area, and sphere volume in other terms, possibly algebraic or geometric? To me, pi is the factor that accounts for the curvature of a circle as opposed to the straight lines of a square. That is, instead of four times the side of a square, which gives you the perimeter of that square, you use pi times the side (side being equal to the diameter of the inscribed circle) to give you the circumference of the inscribed circle. ie: circumference = 2πr = πD perimeter = 4D Circle/Square = πD/4D = π/4 Should not this relationship, somehow, carry through to a sphere vs a cube??

8. Angel says: @Tsi: There is no algebraic explanation for it given that it's a circle, or namely, a curve. Say we have a cube C with an inscribed sphere P in it. We know V(P)=4pi*r^3/3 and V(C)=s^3, with s=2r=d, so that V(C)=(2r)^3=8r^3. If you want the ratio of the volume of P to the volume of C, then: V(P)/V(C)=(4pi*r^3/3)/(8r^3)=4pi/(3*8)=pi/(3*2)=pi/6 If you want to do the same for the area as you did with the circles and squares, then: S(C)=6s^2=6(2r)^2=24r^2; S(P)=4pi*r^2. You want the ratio, so: S(P)/S(C)=4pi*r^2/24r^2=pi/6, which equals the ratio in volumes as well.
Circumference and perimeter are not directly related to surface area here, nor to volumes, so that's why pi/4 cannot be brought over. However, because the surface areas are directly related to each other via integration/differentiation, we can still see a common ratio. The area of a circle differs from the area of a sphere, and so does the area of a square from that of a cube. If you were to obtain circumference from a sphere based on surface area, for instance, the formula would be (after much simplification) C=8pi*r, which is 4 times bigger than 2pi*r. Taking the perimeter of a cube based on its surface area would give us P=48r, which is 6 times bigger than the usual 8r. Although, you could make the case that because the circumference is now 4 times as big, and the perimeter six times as big, you could use proportions to relate the ratios for sphere:cube and circle:square by 4/6=2/3. Since you obtained pi/4, then the ratio of sphere:cube must be 2/3*pi/4=2pi/12=pi/6, which is what I obtained earlier. So, with the right calculations and proportions, you could relate the ratio of circumference:perimeter to the ratio of sphere's volume:cube's volume. Very interesting question.

9. Ruvian says: For everyone, to understand better "If you describe volume, V, in terms of the radius, R, then increasing R will result in an increase in V that's proportional to the surface area." Means that V/S=R/3, which means the volume and area have a linear relation, even though both are not linear. And it can be extended to the 1 and 2 dimensions, which are circumference and area, respectively. A/C=R/2, which, the same way, means area and perimeter have a linear relation, even though area is not linear and perimeter is linear. To the point: linear is just a concept and it's relative, like derivative; we need another variable to relate with. Knowing V and S have a linear relation => dV/dS=K. And the same way, A and C have a linear relation => dA/dC=K2.

10.
amine cheikh med says: Why isn't the area of the sphere also 2 times the antiderivative of the surface of a circle? The way I calculated it, I found it to be 4 times it.

11. Angel says: @amine cheikh med: What do you mean 'also' 2 times the anti-derivative? The relationship between a particular type of measurement of a sphere and the same type of measurement of a circle is always 4, never 2. The volume of a circle would be V=pi*r^3/3 since A=pi*r^2 and V = anti-derivative[A(r)*dr]. Of course, this always turns out to be zero, because the difference in the radius is zero since circles are only two dimensional; that is, the third dimension of a circle, when measured, is z = 0. Note, however, that the theoretical volume of a circle is 4 times smaller than the volume of a sphere, which is V=4pi*r^3/3. Perhaps the better question to ask is: why, in general, is the ratio of a sphere to a circle, both with equal radius, always equal to four? Circles with thickness become cylinders: V = pi*r^2*h. If we were to put a sphere on the inside of this cylinder with the same radius, then we would figure out that h=d=2r. Thus, V=2pi*r^3. Now you see that the ratio of the volume of a sphere to the volume of a cylinder is 2/3. The area of a cylinder is A=2pi*r^2 + 2pi*rh = C(r+h) = C(r + 2r) = 3rC. The area of a sphere is A=4pi*r^2 = (2pi*r)*2r = 2rC. The ratio of the area of a sphere to the area of a cylinder is curiously also 2/3. This happens due to the natural differentiation relationship between volume and area. If we have sphere P and cylinder D, then we can calculate their volumes, areas, and circumferences respectively. That means we can also calculate their ratios. V(P)/V(D)=K, where K is some constant. Thus V(P)=K*V(D). If we differentiate both sides, we obtain dV(P)=K*dV(D). The K stays because of the coefficient rule of derivatives. Differentiating each volume formula respectively will simplify into dV=S*dr, so S(P)*dr=K*S(D)*dr.
The drs will disappear since they are the same on both sides. This is why the relationship between measurements of both figures remains the same. Do not forget that circles are just cylinders with zero height.

12. Angel says: I want to add some information that may be interesting even if useless. I wrote a response above talking about relationships between measures of the same kind on different circular figures with equal radius. We established that circles are cylinders with h=0 and that dV = S*dr. We also know that the ratio of spheres to circles is 4, while the ratio of a sphere to a cylinder with h=d is 2/3. What I have noticed is that both figures, when circumference is calculated as a derivative of surface area, yield a circumference formula that multiplies the real circumference by some number. This number, we will call it "scale", represent it by K, and it is given as a relationship. This scale will depend on the height of the cylinder. Thus K is in reality a function of height that multiplies the real circumference to produce surface area. K(0) = 1, for circles, thus C=2pi*r just as it should be. For cylinders with h=r, K=4, since it has the same surface area formula as a sphere. For cylinders with h=2r, we know that S=6pi*r^2. Differentiating both sides will yield C=12pi*r. Dividing then both sides by 2pi*r yields K(2r)=6. For h=4r/3, when the volume of the cylinder is the same as that of the sphere, K(4r/3)=14/3. We need to try to find out why K changes with respect to height — which, at the same time, we will treat as a function of the radius. Keep in mind that the circle is a cylinder with h=0 and that the area of a cylinder is given by A=2pi(r^2+r*h). Differentiating both sides gives us dA/dr=2pi(2r+r*dh/dr+h)=2pi(r[2+dh/dr]+h)=2pi*r(2+dh/dr)+2pi*h=C(2+dh/dr)+2pi*h. We have now derived a formula for dA/dr, which is the apparent circumference, which includes K in it.
Now, note that h is a linear function of the radius — or at least in our examples, the function is linear. Thus h=xr, where x is the slope that will produce height from radius. With that in mind, we know that dh/dr=x. Now we can substitute in, and things become much more clear: dA/dr=C(2+x)+2pi*xr=C(2+x+x)=(2x+2)C. Dividing both sides by C now yields K: 2x+2, where x=dh(r)/dr. We know this simplifies to K(0)=2(0)+2=2. Though this differs from our originally established value, we need to keep in mind that cylinders have two circular faces. Hence, if h=0, we will have no lateral face, only two circular faces. The formula yields the circumference of both circular faces. One circular face would then be half of that, thus K(0)/2=1. This verifies part of our formula. For every other case, h>0, thus K(x)=2x+2, x=dh/dr. In order to formally express this, we convert K into a piecewise function: K(x)={0 => x=0, 2x+2 => x>0|x=dh(r)/dr}. Now, using the previous values for h, we can obtain x and verify that this equation is true in absolutely every case: h=r => x=1 => K(1)=2(1)+2=4, which we already defined as true. h=2r => x=2 => K(2)=2(2)+2=6, which we already defined as true. h=4r/3 => x=4/3 => K(4/3)=2(4/3)+2=14/3, which we already defined as true. Actually, we can correct the anomaly that occurs at h=0. Setting x=-1 solves it: this implies that h=-r; therefore, r=-h. A negative height places the second circular face of the cylinder in a non-existent plane, eliminating this second face and leaving us with one circle, for a factor in the circumference formula. At h=0, while both circular faces are the exact same circle, the formula still recognizes that they are two circles in the same plane due to zero height. This ensures that K(x)=2x+2, x=dh(r)/dr holds true for a circle.
We have found an equation that explains why tridimensional objects yield circumferences increased by some constant factor despite having their circumferences equal to that of a circle with equal radius. Although spheres are not cylinders, there is a direct relationship between the surface area and the volume of spheres and cylinders, which is why it also works for spheres and for cones (cones also have the same relationship with cylinders; that was discovered by Archimedes).

13. Bruce Dunn says: No one ever talks about the fact that the product of the circumference of a circle (one dimension) and the area of any ball of 'n' dimensions will give the area of the n+1 cover of a ball in n+2 space. Why would 2piR be a universal multiplier?
# How do you factor completely 25x^2 – 144?

$\left(5 x + 12\right) \left(5 x - 12\right)$

$25 {x}^{2} - 144$ is a difference of squares, ${a}^{2} - {b}^{2}$, where $a = 5 x$ and $b = 12$, and ${a}^{2} - {b}^{2} = \left(a - b\right) \left(a + b\right)$.
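As a quick cross-check of the factorization (using Python's SymPy; not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')

factored = sp.factor(25 * x**2 - 144)
print(factored)  # (5*x - 12)*(5*x + 12)

# Expanding the product recovers the original difference of squares.
assert sp.expand(factored) == 25 * x**2 - 144
```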
What is the remainder of Euclidean division of L=111…1 (2018 times) in base 7 by 9 [closed]

What is the remainder of the Euclidean division of L=11111...1 (2018 ones) in base 7 by 9?

closed as off-topic by Leucippus, José Carlos Santos, Shailesh, Gibbs, Lee David Chung Lin Jan 29 at 0:54

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question is missing context or other details: Please provide additional context, which ideally explains why the question is relevant to you and our community. Some forms of context include: background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc." – Leucippus, José Carlos Santos, Shailesh, Gibbs, Lee David Chung Lin

If this question can be reworded to fit the rules in the help center, please edit the question.

• @saulspatz in base $7$. $L =\frac{7^{2018} - 1}6$. – fleablood Jan 28 at 20:28
• @fleablood Oops, skipped right over that. – saulspatz Jan 28 at 20:43
• Unfortunately, not very useful in this case, as $6$ does not have an inverse mod $9$ – Jordan Green Jan 28 at 21:48

Note that $$\begin{split}L &= 1+7+7^2+7^3+7^4+7^5+ \dots + 7^{2017} \\ &\equiv (1+7+4)+(1+7+4)+ 7^6+\dots + 7^{2017} \pmod{9}. \end{split}$$ How many full repetitions of the pattern do we have? What is the equivalence class of each leftover term?

$$L = 111......111_7 = \sum_{i=0}^{2017} 7^i$$ By Euler's theorem, $$7^6 \equiv 1 \pmod 9,$$ but by direct observation we can do better: $$7^3\equiv(-2)^3 \equiv -8 \equiv 1 \pmod 9$$ $$1 + 7 + 7^2\equiv 1 -2 + 4 = 3 \pmod 9.$$ So $$L = \sum_{i=0}^{2017} 7^i\equiv \sum_{i=0}^{2017} 7^{i\mod 3} \equiv \sum_{i=0}^{3*672-1+2} 7^{i\mod 3}$$ $$\equiv \sum_{k=1}^{672} (7^0 + 7^1 + 7^2) + 7^0 + 7^1\equiv \sum_{k=1}^{672}3 + 8 \equiv 672(3) +8\equiv 8 \pmod 9.$$ The remainder is $$8$$.

• How is $3$ times something plus $8$ a multiple of $3$? Also I thought $2016/3$ was $672$.
– Oscar Lanzi Jan 28 at 21:35 • Good point! But $2016/6 = 336$ and $336 * 2= 772$ as everyone can do in their heads and not deign to use calculators knows.... I made arithmetic mistakes. A lot. I'll try to fix them. – fleablood Jan 28 at 22:29 • Likewise everyone knows $3+8 = 11 \equiv 3 \mod 9$ because $9 + 3 = 11$. Duh! I mean.... that's obvious, right? – fleablood Jan 28 at 22:35 • +1 for the sense of humor. Always good to have a proofreader. – Oscar Lanzi Jan 28 at 23:49
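Since 2018 base-7 ones is well within exact integer arithmetic, the answer is easy to confirm by brute force in Python (a check of the modular argument above, not part of the original thread):

```python
# L is the base-7 repunit with 2018 digits: L = sum of 7^i for i = 0..2017,
# equivalently (7^2018 - 1)/6. Python's big integers handle this exactly.
L = (7 ** 2018 - 1) // 6
print(L % 9)  # 8

# The modular argument: 7^3 = 343 = 38*9 + 1, so powers of 7 cycle with
# period 3 mod 9, and 2018 = 3*672 + 2 leftover terms (7^0 + 7^1 = 8).
assert L % 9 == (672 * (1 + 7 + 4) + 1 + 7) % 9 == 8
```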
# 10.18: Distribution of Differences in Sample Proportions (5 of 5) ### Learning Objectives • Estimate the probability of an event using a normal model of the sampling distribution. ### Why Do We Care about a Normal Model? Now we focus on the conditions for use of a normal model for the sampling distribution of differences in sample proportions. We use a normal model for inference because we want to make probability statements without running a simulation. If we are conducting a hypothesis test, we need a P-value. If we are estimating a parameter with a confidence interval, we want to state a level of confidence. These procedures require that conditions for normality are met. Note: If the normal model is not a good fit for the sampling distribution, we can still reason from the standard error to identify unusual values. We did this previously. For example, we said that it is unusual to see a difference of more than 4 cases of serious health problems in 100,000 if a vaccine does not affect how frequently these health problems occur. But without a normal model, we can’t say how unusual it is or state the probability of this difference occurring. ### When Is a Normal Model a Good Fit for the Sampling Distribution of Differences in Proportions? A normal model is a good fit for the sampling distribution of differences if a normal model is a good fit for both of the individual sampling distributions. More specifically, we use a normal model for the sampling distribution of differences in proportions if the following conditions are met. ${n}_{1}{p}_{1}≥10\text{ }{n}_{1}(1-{p}_{1})≥10\text{ }{n}_{2}{p}_{2}≥10\text{ }{n}_{2}(1-{p}_{2})≥10$ These conditions translate into the following statement: The number of expected successes and failures in both samples must be at least 10. (Recall here that success doesn’t mean good and failure doesn’t mean bad. A success is just what we are counting.) 
Here we complete the table to compare the individual sampling distributions for sample proportions to the sampling distribution of differences in sample proportions. ## More on Conditions for Use of a Normal Model All of the conditions must be met before we use a normal model. If one or more conditions is not met, do not use a normal model. Here we illustrate how the shape of the individual sampling distributions is inherited by the sampling distribution of differences. ### Learn By Doing Recall the AFL-CIO press release from a previous activity. “Fewer than half of Wal-Mart workers are insured under the company plan – just 46 percent. This rate is dramatically lower than the 66 percent of workers at large private firms who are insured under their companies’ plans, according to a new Commonwealth Fund study released today, which documents the growing trend among large employers to drop health insurance for their workers.” https://assessments.lumenlearning.co...sessments/3628 https://assessments.lumenlearning.co...sessments/3629 ### Learn By Doing https://assessments.lumenlearning.co...sessments/3926 ### Using the Normal Model in Inference When conditions allow the use of a normal model, we use the normal distribution to determine P-values when testing claims and to construct confidence intervals for a difference between two population proportions. We can standardize the difference between sample proportions using a z-score. We calculate a z-score as we have done before. $Z\text{}=\text{}\frac{\mathrm{statistic}-\mathrm{parameter}}{\mathrm{standard}\text{}\mathrm{error}}$ For a difference in sample proportions, the z-score formula is shown below. 
$Z\text{}=\text{}\frac{(\mathrm{difference}\text{}\mathrm{in}\text{}\mathrm{sample}\text{}\mathrm{proportions})-(\mathrm{difference}\text{}\mathrm{in}\text{}\mathrm{population}\text{}\mathrm{proportions})}{\mathrm{standard}\text{}\mathrm{error}}$ $Z\text{}=\text{}\frac{({\stackrel{ˆ}{p}}_{1}-{\stackrel{ˆ}{p}}_{2})\text{}-\text{}({p}_{1}-{p}_{2})}{\sqrt{\frac{{p}_{1}(1-{p}_{1})}{{n}_{1}}+\frac{{p}_{2}(1-{p}_{2})}{{n}_{2}}}}$

## Abecedarian Early Intervention Project

Recall the Abecedarian Early Intervention Project. For this example, we assume that 45% of infants with a treatment similar to the Abecedarian project will enroll in college compared to 20% in the control group. That is, we assume that a high-quality preschool experience will produce a 25% increase in college enrollment. We call this the treatment effect.

Let's suppose a daycare center replicates the Abecedarian project with 70 infants in the treatment group and 100 in the control group. After 21 years, the daycare center finds a 15% increase in college enrollment for the treatment group. This is still an impressive difference, but it is 10% less than the effect they had hoped to see. What can the daycare center conclude about the assumption that the Abecedarian treatment produces a 25% increase?

Previously, we answered this question using a simulation. This difference in sample proportions of 0.15 is less than 2 standard errors from the mean. This result is not surprising if the treatment effect is really 25%. We cannot conclude that the Abecedarian treatment produces less than a 25% treatment effect.

Now we ask a different question: What is the probability that a daycare center with these sample sizes sees less than a 15% treatment effect with the Abecedarian treatment? We use a normal model to estimate this probability. The simulation shows that a normal model is appropriate. We can verify it by checking the conditions. All expected counts of successes and failures are greater than 10.
For the treatment group: $\begin{array}{l}70(0.45)=31.5\\ 70(0.55)=38.5\end{array}$ For the control group: $\begin{array}{l}100(0.20)=20\\ 100(0.80)=80\end{array}$ In the simulated sampling distribution, we can see that the difference in sample proportions is between 1 and 2 standard errors below the mean. So the z-score is between −1 and −2. When we calculate the z-score, we get approximately −1.39. $Z=\frac{(\mathrm{difference}\text{}\mathrm{in}\text{}\mathrm{sample}\text{}\mathrm{proportions})\text{}-\text{}(\mathrm{difference}\text{}\mathrm{in}\text{}\mathrm{population}\text{}\mathrm{proportions})}{\mathrm{standard}\text{}\mathrm{error}}$ $\mathrm{standard}\text{}\mathrm{error}\text{}=\text{}\sqrt{\frac{0.45(0.55)}{70}+\frac{0.20(0.80)}{100}}\text{}\approx \text{}0.072$ $Z\text{}=\text{}\frac{0.15-0.25}{0.072}\text{}\approx \text{}-1.39$ We use a simulation of the standard normal curve to find the probability. We get about 0.0823. Conclusion: If there is a 25% treatment effect with the Abecedarian treatment, then about 8% of the time we will see a treatment effect of less than 15%. This probability is based on random samples of 70 in the treatment group and 100 in the control group. ### Learn By Doing https://assessments.lumenlearning.co...sessments/3965 ### Let’s Summarize In “Distributions of Differences in Sample Proportions,” we compared two population proportions by subtracting. When we select independent random samples from the two populations, the sampling distribution of the difference between two sample proportions has the following shape, center, and spread. Shape: A normal model is a good fit for the sampling distribution if the number of expected successes and failures in each sample are all at least 10. Written as formulas, the conditions are as follows. 
${n}_{1}{p}_{1}≥10\text{ }{n}_{1}(1-{p}_{1})≥10\text{ }{n}_{2}{p}_{2}≥10\text{ }{n}_{2}(1-{p}_{2})≥10$ Center: Regardless of shape, the mean of the distribution of sample differences is the difference between the population proportions, ${p}_{1}-{p}_{2}$ . This is always true if we look at the long-run behavior of the differences in sample proportions. As we know, larger samples have less variability. The formula for the standard error is related to the formula for standard errors of the individual sampling distributions that we studied in Linking Probability to Statistical Inference. $\sqrt{\frac{{p}_{1}(1-{p}_{1})}{{n}_{1}}+\frac{{p}_{2}(1-{p}_{2})}{{n}_{2}}}$ If a normal model is a good fit, we can calculate z-scores and find probabilities as we did in Modules 6, 7, and 8. The formula for the z-score is similar to the formulas for z-scores we learned previously. $\begin{array}{l}Z\text{}=\text{}\frac{\mathrm{statistic}-\mathrm{parameter}}{\mathrm{standard}\text{}\mathrm{error}}\\ Z\text{}=\text{}\frac{(\mathrm{difference}\text{}\mathrm{in}\text{}\mathrm{sample}\text{}\mathrm{proportions})-(\mathrm{difference}\text{}\mathrm{in}\text{}\mathrm{population}\text{}\mathrm{proportions})}{\mathrm{standard}\text{}\mathrm{error}}\\ Z\text{}=\text{}\frac{({\stackrel{ˆ}{p}}_{1}-{\stackrel{ˆ}{p}}_{2})\text{}-\text{}({p}_{1}-{p}_{2})}{\sqrt{\frac{{p}_{1}(1-{p}_{1})}{{n}_{1}}+\frac{{p}_{2}(1-{p}_{2})}{{n}_{2}}}}\end{array}$
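The Abecedarian calculation above can be reproduced in a few lines of Python (standard library only; the normal CDF here comes from math.erf rather than the simulation used in the text):

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p1, n1 = 0.45, 70    # treatment group: assumed proportion and sample size
p2, n2 = 0.20, 100   # control group

se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (0.15 - 0.25) / se
print(se, z, normal_cdf(z))
# se ≈ 0.0717; the text rounds se to 0.072, giving z ≈ -1.39 and a
# probability of roughly 0.08, consistent with the 0.0823 estimate above.
```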
## Notre Dame Journal of Formal Logic

### Shortest Axiomatizations of Implicational S4 and S5

#### Abstract

Shortest possible axiomatizations for the strict implicational fragments of the modal logics S4 and S5 are reported. Among these axiomatizations is included a shortest single axiom for implicational S4—which to our knowledge is the first reported single axiom for that system—and several new shortest single axioms for implicational S5. A variety of automated reasoning strategies were essential to our discoveries.

#### Article information

Source: Notre Dame J. Formal Logic, Volume 43, Number 3 (2002), 169-179.
Dates: First available in Project Euclid: 16 January 2004
Permanent link to this document: http://projecteuclid.org/euclid.ndjfl/1074290715
Digital Object Identifier: doi:10.1305/ndjfl/1074290715
Mathematical Reviews number (MathSciNet): MR2034744
Zentralblatt MATH identifier: 1045.03021

#### Citation

Ernst, Zachary; Fitelson, Branden; Harris, Kenneth; Wos, Larry. Shortest Axiomatizations of Implicational S4 and S5. Notre Dame J. Formal Logic 43 (2002), no. 3, 169--179. doi:10.1305/ndjfl/1074290715. http://projecteuclid.org/euclid.ndjfl/1074290715.

#### References

• [1] Anderson, A. R., and N. D. Belnap, "The pure calculus of entailment", The Journal of Symbolic Logic, vol. 27 (1962), pp. 19--52.
• [2] Anderson, A. R., and N. D. Belnap, Entailment. Volume I. The Logic of Relevance and Necessity, Princeton University Press, Princeton, 1975.
• [3] Curry, H. B., "Review of Hacking's 'What is strict implication?'", 1963.
• [4] Ernst, Z., B. Fitelson, K. Harris, and L. Wos, "A concise axiomatization of $\mathit{RM}_\rightarrow$", University of Łódź, Department of Logic, Bulletin of the Section of Logic, vol. 30 (2001), pp. 191--94.
• [5] Hacking, I., "What is strict implication?", The Journal of Symbolic Logic, vol. 28 (1963), pp. 51--71.
• [6] Kalman, J. A., "Condensed detachment as a rule of inference", Studia Logica, vol. 42 (1983), pp. 443--51 (1984).
• [7] Kripke, S., "The problem of entailment", The Journal of Symbolic Logic, vol. 24 (1959), p. 324. • [8] Lemmon, E. J., C. A. Meredith, D. Meredith, A. N. Prior, and I. Thomas, Calculi of Pure Strict Implication, Canterbury University College, Christchurch, 1957. • [9] McCune, W., " Otter" 3.0 Reference Manual and Guide, Technical Report ANL-94/6, Argonne National Laboratory, Argonne, IL, 1994. • [10] McCune, W., R. Veroff, and R. Padmanabhan, "Yet another single law for lattices". forthcoming in Algebra Universalis. • [11] Meredith, C. A., and A. N. Prior, "Investigations into implicational S5", Zeitschrift für mathematische Logik und Grundlagen der Mathematik, vol. 10 (1964), pp. 203--20. • [12] Meyer, R. K., and R. Z. Parks, "Independent axioms for the implicational fragment of Sobociń"ski's three-valued logic, Zeitschrift für mathematische Logik Grundlagen Mathematik, vol. 18 (1972), pp. 291--95. • [13] Parks, R. Z., "A note on R-Mingle and Soboci\' n"ski's three-valued logic, Notre Dame Journal of Formal Logic, vol. 13 (1972), pp. 227--28. • [14] Prior, A. N., Formal Logic, Clarendon Press, Oxford, 1962. • [15] Slaney, J., " MaGIC": Matrix Generator for Implication Connectives, Technical Report TR-ARP-11-95, Research School of Information Science and Engineer and Centre for Information Science Research, Australian National University, 1995. • [16] Ulrich, D., "Strict implication in a sequence of extensions of S4", Zeitschrift für mathematische Logik und Grundlagen der Mathematik, vol. 27 (1981), pp. 201--12. • [17] Veroff, R., "Using hints to increase the effectiveness of an automated reasoning program: Case studies", Journal of Automated Reasoning, vol. 16 (1996), pp. 223--39. • [18] Wos, L., "The resonance strategy", Computers & Mathematics with Applications. An International Journal, vol. 29 (1995), pp. 133--78. • [19] Wos, L., "Searching for circles of pure proofs", Journal of Automated Reasoning, vol. 15 (1995), pp. 279--315. 
• [20] Wos, L., " Otter" and the Moufang identity problem, Journal of Automated Reasoning, vol. 17 (1996), pp. 215--57. • [21] Wos, L., "The strategy of cramming", preprint ANL/MCS-P898-0801, Argonne National Laboratory, Argonne, 2001. • [22] Wos, L., and G. W. Pieper, A Fascinating Country in the World of Computing: Your Guide to Automated Reasoning, World Scientific Publishing Company, 2000. • [23] Wos, L., and G. W. Pieper, Automated Reasoning and the Discovery of Missing and Elegant Proofs, Rinton Press, forthcoming. • [24] Zhang, J., and H. Zhang, "SEM: A System for Enumerating Models", Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), (1995), pp. 298--303.
{}
# All Purpose Tripod Rankings

This set of rankings includes tripods considered to be all purpose. Their height is tall enough to get the camera to eye level for most people. They are typically more stable than the smaller travel tripods, but are also heavier and bulkier. One could travel and hike with these tripods, but not efficiently.

| Tripod | Price | Stiffness (Nm) | Height (in) | Weight (lbs) | Score |
|---|---|---|---|---|---|
| RRS TVC-23 | $875 | 1941.0 | 51.9 | 3.3 | 1788. |
| RRS TVC-24 | $960 | 1601.0 | 49.8 | 3.3 | 1400. |
| RRS TVC-24L | $985 | 1132.0 | 66.1 | 3.7 | 1271. |
| Leofoto LS-324C | $288 | 1095.0 | 51.3 | 3.0 | 1096. |
| Gitzo GT2542 Mountaineer | $945 | 1181.0 | 53.7 | 3.7 | 1025. |
| Manfrotto MT055CXPRO3 | $340 | 1289.0 | 55.1 | 4.2 | 1015. |
| Gitzo GT1532 Mountaineer | $700 | 953.0 | 51.7 | 2.9 | 1007. |
| Oben CT-2491 | $300 | 706.0 | 61.0 | 3.6 | 746.4 |
| Manfrotto MT055CXPRO4 | $380 | 960.0 | 55.2 | 4.4 | 725.4 |
| Manfrotto MT190CXPRO3 | $330 | 757.0 | 53.5 | 3.4 | 699.8 |
| Gitzo G1220 MK2 | $250 | 1021.0 | 47.7 | 4.9 | 580.4 |
| Manfrotto MT055XPRO3 | $230 | 878.0 | 54.7 | 5.5 | 530.1 |
| Amazon Basics 70-Inch | $65 | 779.0 | 53.9 | 5.1 | 496.6 |
| Manfrotto MT190X3 | $152 | 630.0 | 53.4 | 4.3 | 470.4 |
| Manfrotto MT190XPRO3 | $176 | 520.0 | 53.6 | 4.3 | 383.3 |

The reasoning behind the rankings and calculation of the score can be found here. The reported stiffness is the harmonic mean of the stiffness measured at full height in the pitch and yaw directions. The score is simply $stiffness \times height^{1.25}/weight$. These rankings are by no means definitive, as people place differing amounts of value on the different aspects of tripod design. This is a simple overview with a performance metric.
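The score formula can be wrapped in a small helper (a sketch of my own; the site's exact unit conventions for the published scores aren't stated here, so treat the output as a relative metric rather than a reproduction of the table):

```python
def tripod_score(stiffness_nm, height_in, weight_lbs):
    """Performance metric from the text: stiffness * height^1.25 / weight.
    Units are taken directly from the table columns; the published scores
    may apply unit conversions, so compare values only relative to each other."""
    return stiffness_nm * height_in ** 1.25 / weight_lbs

# Higher stiffness and height raise the score; higher weight lowers it.
stiff_light = tripod_score(1941.0, 51.9, 3.3)  # RRS TVC-23's specs
flexy_heavy = tripod_score(520.0, 53.6, 4.3)   # MT190XPRO3's specs
print(stiff_light > flexy_heavy)  # True
```

The 1.25 exponent on height rewards taller tripods, but sub-quadratically, so raw stiffness still dominates the ranking.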
{}
# Step-by-step Solution

## Step-by-step explanation

Problem to solve:

$\int_0^{\infty}\left(x^3e^{-2x}\right)dx$

Learn how to solve definite integrals problems step by step online. Integrate x^3e^(-2x) from 0 to \infty. Replace the integral's infinite limit by a finite value: $\lim_{c\to\infty }\:\int_{0}^{c} x^3e^{-2x}dx$. Use the integration by parts theorem to calculate the integral $\int x^3e^{-2x}dx$, using the following formula. First, identify u and calculate du. Now, identify dv and calculate v. After integrating by parts three times and taking the limit as $c\to\infty$, the exponential factor drives every boundary term to zero, so the integral converges to $\frac{3!}{2^{4}}=\frac{3}{8}$.

### Problem Analysis

$\int_0^{\infty}\left(x^3e^{-2x}\right)dx$

### Main topic: Definite integrals
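The closed-form value can be sanity-checked numerically (a quick sketch using scipy's quadrature; the tool choice is mine, not part of the original solution):

```python
import numpy as np
from scipy.integrate import quad

# Numerically integrate x^3 * e^(-2x) over [0, inf).
value, abs_err = quad(lambda x: x**3 * np.exp(-2 * x), 0, np.inf)

# Compare against the closed form: n! / a^(n+1) with n = 3, a = 2.
print(value)  # ≈ 0.375 = 3/8
```

This is an instance of the general identity $\int_0^\infty x^n e^{-ax}\,dx = n!/a^{n+1}$ for $a > 0$.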
{}
# Need help with a heat conduction problem with multiple BCs

Dear all,

I am having difficulty solving this problem (see the figure in the attached thumbnails). I have a rectangular shape with a length/height of L and a thickness/width of $$\delta$$. Heat conduction occurs within the rectangular region, and I would like to determine the temperature profile within the rectangle.

Based on the right-hand-side figure, I take a small cell of size dx times dy and apply an energy balance within the cell. I have attached all the equations that I can think of, along with the boundary conditions. I am having difficulty solving the differential equation.

Note: $$\lambda$$ is the thermal conductivity; $$\alpha$$ is the heat transfer coefficient, which is supposed to depend on the x-location and time. I am going to simulate the heat transfer coefficient as a periodic function.

#### Attachments

• schema1.jpg
• equation1.jpg
• BC.jpg
{}
# Teaching Recursion with the N Queens Problem Posted in Computer Science ## A Gentle Introduction to Recursion Recursion, particularly recursive backtracking, is far and away the most challenging topic I cover when I teach the CSE 143 (Java Programming II) course at South Seattle College. Teaching the concept of recursion, on its own, is challenging: the concept is a hard one to encounter in everyday life, making it unfamiliar, and that creates a lot of friction when students try to understand how to apply recursion. The key, as I tell students from day one of the recursion unit, is to always think in terms of the base case and the recursive case. The base case gives your brain a "trapdoor" to exit out of an otherwise brain-bending infinite conceptual loop. It helps recursion feel more manageable. But most importantly: it enables thinking about recursion in terms of its inputs and outputs. More specifically, to understand recursion requires (no, not recursion) thinking about two things: where you enter the function and when you stop calling the function. These are the two least complicated cases, and they also happen to be the two most important cases. College courses move at an artificially inflated pace, ill-suited for most community college students, and the material prescribed must be presented at the given pace mostly independent of any real difficulties the students face (there is only minimal room for adjustment, at most 2-3 lectures). This means that, before the students have had an opportunity to get comfortable with the concept of recursion, and really nail it down, they're introduced to yet another mind-bending topic: recursive backtracking algorithms. These bring a whole new set of complications to the table. Practice is crucial to students' understanding, and all too often, the only way to get students to practice (particularly with difficult subject matter like recursion) is to spend substantial amounts of time in class. 
My recursion lectures routinely throw my schedule off by nearly a week, because even the simplest recursion or backtracking exercise can eat up an hour or more.

## Recursive Backtracking

Backtracking is an approach for exploring problems that involve making choices from a set of possible choices. A classic example of backtracking is the 8 Queens problem, which asks: "How many ways are there of placing 8 queens on a chessboard, such that no queen attacks any other queen?"

The problem is deceptively simple; solving it requires some mental gymnastics. (By the way, most people who have actually heard of the problem are computer scientists who were exposed to it in the process of learning how to solve it, leading to the hipster effect - it's often dismissed by computer scientists as an "easy" problem. The curse of knowledge at work.)

The recursive backtracking algorithm requires thinking about the squares on which to place the 8 queens in question as the set of choices to be made. The naive approach ignores the constraints, and makes all 8 choices of where to place the 8 queens before ever checking if the queen placements are valid. Thus, we could start by placing all 8 queens in one single row on the top, or along one single column on the left. Using this approach, we have 64 possibilities (64 open squares) for the first queen, then 63 possibilities for the second queen, then 62 possibilities for the third queen, and so on. This gives a total number of possible combinations of:

$$\dfrac{64!}{(64-8)!} = 178,462,987,637,760$$

(By the way, for those of you following along at home, you can do this calculation with Python:)

    >>> import math
    >>> math.factorial(64) // math.factorial(64 - 8)
    178462987637760

Even for someone without a sense of big numbers, like someone in Congress, that's still a pretty big number. Too many for a human being to actually try in a single lifetime.
## Paring Down the Decision Tree

But we can do better - we can utilize the fact that the queen, in chess, attacks horizontally and vertically, by doing two things:

• Limit the placement of queens so that there is one queen per column;
• Limit the placement of queens so that there is one queen per row.

(Note that this is ignoring diagonal attacks; we'll get there in a minute.)

This limits the number of solutions as follows: the first queen placed on the board must go in the first column, and has 8 possible squares in which it can go. The second queen must go in the second column, and has 7 possible squares in which it can go - ignoring the square corresponding to the row that would be attacked by the first queen. The third queen goes into the third column, which has 6 open squares (ignoring the two rows attacked by the two queens already placed). That leads to far fewer solutions:

$$8! = 40,320$$

and for those following along at home in Python:

    >>> import math
    >>> math.factorial(8)
    40320

To visualize how this utilization of information helps reduce the problem space, I often make use of a decision tree, to get the students to think about recursive backtracking as a depth-first tree traversal. (By the way, this is a strategy whose usefulness extends beyond the 8 queens problem, or even recursive backtracking problems. For example, the problem of finding cycles in a directed graph can be re-cast in terms of trees.)

So far, we have used two of the three directions of attack for queens. This is also enough information to begin an implementation of an algorithm - a backtracking algorithm can use the fact that we place one queen per column, and one queen per row, to loop over each row, and steadily march through each column sequentially (or vice-versa).

## The Pseudocode

There is still a bit more to do to cut down on the problem space that needs to be explored, but before we do any of that, we should first decide on an approach and sketch out the pseudocode.
The structure of the explore method pseudocode thus looks like:

    explore(column):
        if last column:
            # base case
        else:
            # recursive case
            for each row:
                if this is a safe row:
                    place queen on this row
                    explore(column+1)
                    remove queen from this row

## The Actual Code

Over at git.charlesreid1.com/charlesreid1/n-queens I have several implementations of the N Queens problem:

## Row, Column, and Diagonal Attacks

We have already utilized knowledge that there will only be one queen per column, and one queen per row. But one last bit of information we can utilize is the fact that queens attack diagonally. This allows us to eliminate any squares that are along the diagonals of queens that have already been placed on the board.

How to eliminate the diagonals? It basically boils down to two approaches:

1. Use a Board class to abstract away details (and the Board class will implement "magic" like an isValid() method).
2. Hack the index - implement some index-based math to eliminate any rows that are on the diagonals of queens already on the board.

The first approach lets you abstract away the details, possibly even using a Board class written by a textbook, which is fine if you are working on a practical problem and need some elbow grease, but not so much if you are a computer science student learning the basic principles of software design. The second approach requires some deep thinking about how the locations of the N (or 8) queens are being represented in the program.

## Accounting for Diagonal Attacks

At some point, when you use the above pseudocode, you are going to want to know the answer to the following question: for a given column k, what rows are invalid because they are on diagonals of already-placed queens?

To answer this, think about where the diagonal indices of chess board squares are located, and how to find the diagonals on column X attacked by a queen placed in column Y.
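One compact way to do this index-based bookkeeping (a minimal sketch of my own, not one of the implementations from the linked repository) is to note that two squares (c, r) and (col, row) share a diagonal exactly when |r − row| = |c − col|. Folding that check into the pseudocode above gives a complete solver:

```python
def solve_n_queens(n):
    """Count N queens solutions by recursive backtracking.
    queens[c] holds the row of the queen placed in column c."""
    queens = []

    def is_safe(row):
        col = len(queens)  # the column we are about to fill
        for c, r in enumerate(queens):
            # Same row, or |row difference| == |column difference| (diagonal).
            if r == row or abs(r - row) == abs(c - col):
                return False
        return True

    def explore():
        if len(queens) == n:           # base case: every column has a queen
            return 1
        count = 0
        for row in range(n):           # recursive case: try each row
            if is_safe(row):
                queens.append(row)     # place queen on this row
                count += explore()     # explore(column + 1)
                queens.pop()           # remove queen (backtrack)
        return count

    return explore()

print(solve_n_queens(8))  # 92 solutions for the classic 8 queens board
```

Because `is_safe` rejects attacked rows before recursing, whole subtrees of the decision tree are pruned, which is exactly the speed-up the diagonal accounting buys.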
The following diagram shows a queen on row 3 of column 2, and the diagonal attack vectors of that queen. Each of the squares along those diagonal vectors can be ruled out as possible squares to place a queen. When selecting a square for the third queen, which goes in the third column, the second and fourth rows can both be ruled out due to the diagonals. (The third row, of course, can also be ruled out, due to the one-queen-per-row rule.) However, the effect of the already-placed queen propagates forward, and affects the choice of possible squares for each queen after it. If we jump ahead in the recursive algorithm, to say, queen number 6, being placed on column number 6 (highlighted in blue), the queen in column 2 (row 3) still affects the choice of squares for that column (as do all queens previously placed on the board). In the case pictured in the figure, the seventh row (as well as an off-the-board row) of column 6 can be ruled out as possible squares for the placement of the 6th queen. Accounting for these diagonal attacks can lead to substantial speed-ups: each queen that is placed can eliminate up to two additional squares per column, which means the overall decision tree for the N queens problem becomes a lot less dense, and faster to explore. ## Why the N Queens Problem? Invariably, some students will deal with this difficult problem by questioning the premise of the question - a reasonable thing to wonder. This leads to a broader, more important question: why do computer scientists focus so much on games? Games, like computers, are self-contained universes, they are abstract systems, they remove messy details and complications. They allow you to start, from scratch, by setting up a board, a few rules, a few pieces - things that are easy to implement in a computer. Mazes, crossword puzzles, card games, checkers, chess, are all systems with a finite, small number of elements that interact in finite, small numbers of ways. 
The beauty of games is that those small rule sets can result in immensely complex systems, so that there are more branches in the chess decision tree (the Shannon number, $$10^{120}$$) than there are protons in the universe (the Eddington number, $$10^{80}$$). That simplicity is important in computer science. Any real-world problem is going to have to be broken down, eventually, into pieces, into rules, into a finite representation, so that anything we try to model with a computer, any problem we attempt to solve computationally, no matter how complex, will always have a game-like representation. (Side note: much of the literature in systems operations research, which studies the application of mathematical optimization to determine the best way to manage resources, came out of work on war games - which were themselves game-ified, simplified representations of real, complex systems. Econometrics, or "computational economics," is another field where game theory has gained much traction and finds many practical applications.) Recursion, too, is a useful concept in and of itself, one that shows up in sorting and searching algorithms, computational procedures, and even in nature. But it isn't just knowing where to look - it's knowing what you're looking for in the first place.
{}
# Matrices, Part 2

In order to set up a system of equations using matrices, you need to understand how matrices multiply one another. Not all matrices can be multiplied together – they need to be compatible with one another. Not only that, unlike scalar (single number) arithmetic, matrix multiplication does not commute; that is, the order of the multiplication will generally produce different results, or one order may not even be possible.

So what do I mean by compatible?

$\begin{bmatrix}1&2\\3&4\end{bmatrix}\times\begin{bmatrix}5&6\\7&8\end{bmatrix} = \begin{bmatrix}(5\times 1)+(7\times 2) & (6\times 1)+(8\times 2)\\ (5\times 3)+(7\times 4) & (6\times 3)+(8\times 4)\end{bmatrix} = \begin{bmatrix}19&22\\43&50\end{bmatrix}$

To multiply these two 2 × 2 matrices, you take the first column of the second matrix and lay it over the top of the first matrix:

$\begin{bmatrix}5&7\end{bmatrix}$

$\begin{bmatrix}1&2\\3&4\end{bmatrix}$

Starting with the top row of the first matrix, multiply the numbers in the same position together and add the results: (5 × 1) + (7 × 2) = 19. This result is the first row, first column number in the new matrix. Repeat this using the second row of the first matrix: (5 × 3) + (7 × 4) = 43. This is the first element of the second row of the new matrix.

Now do the same with the second column of the second matrix:

$\begin{bmatrix}6&8\end{bmatrix}$

$\begin{bmatrix}1&2\\3&4\end{bmatrix}$

to get the second column of the new matrix. I will leave it as an exercise for you to confirm that if I reverse the order of the matrices, you will get a different result.
That is,

$\begin{bmatrix}1&2\\3&4\end{bmatrix}\times\begin{bmatrix}5&6\\7&8\end{bmatrix} \ne \begin{bmatrix}5&6\\7&8\end{bmatrix}\times\begin{bmatrix}1&2\\3&4\end{bmatrix}$

So this method works for any size matrices as long as they are compatible. From this example, you see that this works only if the second matrix has the same number of rows as the number of columns in the first matrix. This is easy to see if you put the dimensions together: (2 × 2) × (2 × 2). The inside numbers need to be the same if multiplication is to be possible (2 = 2). The outside numbers give the dimensions of the resulting matrix (2 × 2). So you can multiply a 3 × 2 matrix by a 2 × 4 matrix to get a 3 × 4 matrix, but you cannot reverse the order because the inside dimensions will not be equal.

It’s interesting that if you multiply a 1 × (anything) matrix by a (same anything) × 1 matrix, you will get a 1 × 1 matrix, which is just a number (a scalar). This multiplication works even if some or all of the elements of the matrices are variables. I will illustrate this in my next post.
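These rules are easy to experiment with in numpy (my example, not from the post): the `@` operator performs matrix multiplication and enforces the inside-dimensions rule automatically.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)  # [[19 22], [43 50]] -- matches the worked example
print(B @ A)  # [[23 34], [31 46]] -- a different result: order matters

# Compatibility: a (3 x 2) times a (2 x 4) gives a (3 x 4) result...
C = np.ones((3, 2)) @ np.ones((2, 4))
print(C.shape)  # (3, 4)

# ...but the reverse order raises a ValueError, because the inside
# dimensions (4 and 3) are not equal.
```

Reversing an incompatible product is not merely "a different answer" in numpy; it fails outright, which mirrors the statement that one order may not even be possible.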
{}
# Properties of Polynomials Examples 2

Recall from the Properties of Polynomials page that a function of the form $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ is called a polynomial, and if $a_n \neq 0$ then we say that the degree of the polynomial $p$ is $n$, written $\mathrm{deg} p = n$.

We said that a number $\lambda$ is a root of the polynomial $p$ if $p(\lambda) = 0$. We then saw that $\lambda \in \mathbb{F}$ is a root of the polynomial $p$ with $\mathrm{deg} p = n \geq 1$ if and only if there exists a polynomial $q$ with $\mathrm{deg} q = n - 1$ such that $p$ can be factored as:

(1)
\begin{align} \quad p(x) = (x - \lambda) q(x) \end{align}

From this, we saw that a polynomial $p$ with $\mathrm{deg} p = n$ can have at most $n$ distinct roots. Lastly, we noted that a polynomial that is the zero function has all of its coefficients $a_0 = a_1 = ... = a_n = 0$. We will now look at some more examples regarding polynomials.

## Example 1

Show that the subset $\{ 0 \} \cup \{ p(x) \in \wp ( \mathbb{F} ) : \mathrm{deg} p \: \mathrm{is \: odd} \}$ of $\wp ( \mathbb{F} )$ is not a subspace of $\wp (\mathbb{F})$.

Consider the polynomials $p(x) = x^3 + x^2$ and $q(x) = -x^3$. Both of these polynomials have odd degree and therefore are contained in the subset $\{ 0 \} \cup \{ p(x) \in \wp ( \mathbb{F} ) : \mathrm{deg} p \: \mathrm{is \: odd} \}$. Note that $p(x) + q(x) = x^3 + x^2 - x^3 = x^2$. Since $\mathrm{deg} (p + q) = 2$, we have that $(p(x) + q(x)) \not \in \{ 0 \} \cup \{ p(x) \in \wp ( \mathbb{F} ) : \mathrm{deg} p \: \mathrm{is \: odd} \}$. Therefore $\{ 0 \} \cup \{ p(x) \in \wp ( \mathbb{F} ) : \mathrm{deg} p \: \mathrm{is \: odd} \}$ is not closed under addition and is therefore not a subspace of $\wp (\mathbb{F})$.

## Example 2

Let $\mathrm{deg} p = m$. Prove that $p$ has $m$ distinct roots if and only if $p$ and $p'$ have no roots in common.
$\Rightarrow$ Suppose that $p$ has $m$ distinct roots, $\lambda_1, \lambda_2, ..., \lambda_m \in \mathbb{F}$. Then for some $c \in \mathbb{F}$ we can factor $p(x)$ as:

(2)
\begin{align} \quad p(x) = c(x - \lambda_1)(x - \lambda_2)...(x - \lambda_m) \end{align}

Take a root $\lambda_j$ for $j = 1, 2, ..., m$. In particular, we can factor $p(x)$ as:

(3)
\begin{align} \quad p(x) = (x - \lambda_j)q(x) \end{align}

where the polynomial $q(x) = c(x - \lambda_1)(x - \lambda_2)...(x - \lambda_{j-1})(x - \lambda_{j+1})...(x - \lambda_m)$. Now if we differentiate both sides of the equation above, we have that:

(4)
\begin{align} \quad p'(x) = (x - \lambda_j)q'(x) + q(x) \end{align}

Thus we have that:

(5)
\begin{align} \quad p'(\lambda_j) = (\lambda_j - \lambda_j)q'(\lambda_j) + q(\lambda_j) \\ \quad p'(\lambda_j) = q(\lambda_j) \neq 0 \end{align}

since the roots $\lambda_i$ are distinct. We can apply the same logic to the other roots of $p$, and we see that $\lambda_1, \lambda_2, ..., \lambda_m$ are not roots of $p'$, so $p$ and $p'$ have no roots in common.

$\Leftarrow$ Suppose that $p$ and $p'$ have no roots in common. The proof can be simplified by proving the logically equivalent contrapositive, which says that if $p$ does not have $m$ distinct roots, then $p$ and $p'$ have a root in common.

Suppose that $p$ does not have $m$ distinct roots. Then some root $\lambda \in \mathbb{F}$ must have multiplicity greater than $1$; say $\lambda$ has multiplicity $k$. Then for some polynomial $q(x)$ we can factor $p(x)$ as:

(6)
\begin{align} \quad p(x) = (x - \lambda)^k q(x) \end{align}

If we differentiate both sides of the equation above we have that:

(7)
\begin{align} \quad p'(x) = k(x - \lambda)^{k-1}q(x) + (x - \lambda)^kq'(x) \end{align}

Therefore we have that:

(8)
\begin{align} \quad p'(\lambda) = k(\lambda - \lambda)^{k-1}q(\lambda) + (\lambda - \lambda)^k q'(\lambda) \\ \quad p'(\lambda) = 0 \end{align}

Therefore $\lambda$ is a root of $p'$, and so $p$ and $p'$ have a root $\lambda$ in common.
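The equivalence in Example 2 can be spot-checked computationally: $p$ has only simple roots exactly when $p$ and $p'$ share no root, i.e. when $\gcd(p, p')$ is a constant. A quick sympy sketch (the example polynomials are my own):

```python
import sympy as sp

x = sp.symbols('x')

# Three distinct roots: gcd(p, p') is constant, so p and p' share no root.
p = sp.expand((x - 1) * (x - 2) * (x - 3))
print(sp.gcd(p, sp.diff(p, x)))  # 1

# Repeated root at x = 1: gcd(q, q') picks up the shared factor.
q = sp.expand((x - 1)**2 * (x - 2))
print(sp.gcd(q, sp.diff(q, x)))  # x - 1
```

The second case is exactly the contrapositive direction of the proof: the multiplicity-$k$ factor $(x - \lambda)^k$ leaves $(x - \lambda)^{k-1}$ inside $p'$, so it survives into the gcd.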
{}
# Farm Machinery Questions and Answers – Principles of Governor

This set of Farm Machinery Multiple Choice Questions & Answers (MCQs) focuses on “Principles of Governor”.

1. The average speed of an engine connected to a governor is 1500 rpm. If the fluctuation in speed is ±100 rpm, then the % governor regulation is ______
a) 13.33%
b) 14.56%
c) 19.01%
d) 21.23%

Explanation: Nmin = 1500 – 100 = 1400 rpm, Nmax = 1500 + 100 = 1600 rpm
Governor regulation = $$\frac{1600-1400}{1500}=\frac{200}{1500}$$ = 13.33%

2. Varying the governor setting in an IC engine
a) Does not vary the maximum power developed in the engine
b) Varies the maximum torque developed by the engine
c) Does not vary the governed range of speed in which engine runs
d) Remains constant

Explanation: A governor is a mechanical device which controls the speed of the engine by regulating the fuel supply under varying load conditions. When there is less load on the engine, the speed of rotation of the spindle increases and more centrifugal force acts on the flyballs, throwing them outwards; thus the sleeve moves upward and the fuel supply is decreased.

3. At an engine throttle position of 75%, the high idle speed of the engine is shifted by 200 rpm towards the maximum torque position. If the engine is maintaining a uniform speed of 2475 rpm at the given load, the governor regulation is _____
a) 9.76%
b) 7.77%
c) 12.67%
d) 1.89%

Explanation: Nmin = 2475 rpm; above the maximum torque position, Nmax = 2475 + 200 = 2675 rpm
Governor regulation = $$\frac{2(2675-2475)}{2675+2475}=\frac{400}{5150}$$ = 7.77%

4. The high idle speed of an engine is 2240 rpm. The peak torque of 180 Nm occurs at 1450 engine rpm. If the lugging ability is 28 Nm.
The engine power in kW at the governor's maximum position will be _________ (Governor regulation = 11.5%)
a) 42.5 kW
b) 51.49 kW
c) 23.12 kW
d) 31.3 kW

Explanation: Nmax = 2240 rpm. At 180 Nm, N = 1450 rpm. Lugging = 28 Nm, so at 180 – 28 = 152 Nm, let the speed be Nmin.
GR = 11.5% = 0.115
0.115 = $$\frac{2(2240-N_{min})}{2240+N_{min}}$$
Nmin = 1966.40 rpm
P = 2πNτ/60 = 2π × 1966.40 × 152 / 60 ≈ 31,300 W ≈ 31.3 kW

5. Which governor does not achieve ISOCHRONOUS behaviour?
a) Hartnell
b) Spring Controlled
c) Porter and Proell
d) Watt

Explanation: In Porter and Proell governors, we never achieve isochronous behaviour because we can’t neglect the friction between the sleeve and the spindle.

6. Which governor is known as a sensitive governor?
a) One which keeps the speed fluctuation as small as possible
b) One which keeps the speed fluctuation as large as possible
c) One which doubles the speed fluctuation
d) One which remains constant

Explanation: When considering engine performance in a sensitiveness analysis, the requirement is to keep the speed fluctuation as small as possible, and a governor which does this is called a sensitive governor.

7. A Proell governor has equal arms of length 300 mm. The upper and lower ends of the arms are pivoted on the axis of the governor. The extension arms of the lower links are each 80 mm long and parallel to the axis when the radii of rotation of the balls are 150 mm and 200 mm. The mass of each ball is 10 kg and the mass of the central load is 100 kg. Determine the range of speed of the governor.
a) 20 rpm
b) 30 rpm
c) 10 rpm
d) 25 rpm

Explanation: sin α = sin β = 150 / 300 = 0.5, so α = β = 30°, and MD = FG = 150 mm = 0.15 m
FM = FD cos β = 300 cos 30° = 260 mm = 0.26 m
IM = FM tan α = 0.26 tan 30° = 0.15 m
BM = BF + FM = 80 + 260 = 340 mm = 0.34 m
ID = IM + MD = 0.15 + 0.15 = 0.3 m
FC = m(ω1)² r1 = 10 $$\left[\frac{2πN_1}{60}\right]^2$$ × 0.15 ≈ 0.0165 (N1)²
N1 = 170 rpm
h = PG = $$\sqrt{(PF)^2 - (FG)^2}$$ = $$\sqrt{(300)^2 - (200)^2}$$ = 224 mm = 0.224 m
FM = GD = PG = 224 mm = 0.224 m
BM = BF + FM = 80 + 224 = 304 mm = 0.304 m
Range of speed = N2 – N1 = 10 rpm

8. Who theoretically analysed James Watt’s design from a mathematical energy balance perspective?
a) Isaac Newton
b) Willard Gibbs
c) Matthew Boulton
d) James Clark Maxwell

Explanation: Building on Watt’s design was American engineer Willard Gibbs, who in 1872 theoretically analysed Watt’s conical pendulum governor from a mathematical energy balance perspective. During his graduate school years at Yale University, Gibbs observed that the operation of the device in practice was beset with the disadvantages of sluggishness and a tendency to over-correct for the changes in speed it was supposed to control.

Sanfoundry Global Education & Learning Series – Farm Machinery.
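The regulation formula used throughout these problems can be wrapped in a small helper (a sketch of my own; the two forms correspond to dividing the speed range by the stated mean speed, as in Q1, or by the mid-speed, as in Q3):

```python
def governor_regulation(n_max, n_min, n_mean=None):
    """Percent governor regulation.
    If a mean speed is given, use (Nmax - Nmin) / Nmean (as in Q1);
    otherwise use the mid-speed form 2(Nmax - Nmin) / (Nmax + Nmin)."""
    if n_mean is not None:
        return 100 * (n_max - n_min) / n_mean
    return 100 * 2 * (n_max - n_min) / (n_max + n_min)

print(round(governor_regulation(1600, 1400, 1500), 2))  # 13.33, as in Q1
print(round(governor_regulation(2675, 2475), 2))        # 7.77, as in Q3
```

Both forms agree when the mean speed happens to equal (Nmax + Nmin)/2, which is why the worked answers above are consistent with each other.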
{}
# Measurement of $B_c(2S)^+$ and $B_c^*(2S)^+$ cross section ratios in proton-proton collisions at $\sqrt s$ = 13 TeV
{}
## Intermediate Algebra (6th Edition)

Published by Pearson

# Chapter 5 - Section 5.3 - Polynomials and Polynomial Functions - Exercise Set: 4

Answer: 3

#### Work Step by Step

The degree of a term is the sum of the exponents on the variables contained in the term. For the term $-z^{3}$, the exponent on z is 3. Therefore, the degree of this term is 3.
{}
# NIPS Proceedingsβ

## "How hard is my MDP?" The distribution-norm to the rescue

[PDF] [BibTeX] [Supplemental] [Reviews]

### Abstract

In Reinforcement Learning (RL), state-of-the-art algorithms require a large number of samples per state-action pair to estimate the transition kernel $p$. In many problems, a good approximation of $p$ is not needed. For instance, if from one state-action pair $(s,a)$, one can only transit to states with the same value, learning $p(\cdot|s,a)$ accurately is irrelevant (only its support matters). This paper aims at capturing such behavior by defining a novel hardness measure for Markov Decision Processes (MDPs) we call the {\em distribution-norm}. The distribution-norm w.r.t.~a measure $\nu$ is defined on zero $\nu$-mean functions $f$ by the standard variation of $f$ with respect to $\nu$. We first provide a concentration inequality for the dual of the distribution-norm. This allows us to replace the generic but loose $||\cdot||_1$ concentration inequalities used in most previous analysis of RL algorithms, to benefit from this new hardness measure. We then show that several common RL benchmarks have low hardness when measured using the new norm. The distribution-norm captures finer properties than the number of states or the diameter and can be used to assess the difficulty of MDPs.
{}
## Generating images with LaTeX equations

Sometimes it is useful to generate an image (like a PNG file) with a high-resolution mathematical equation. I know that there are online tools for doing that, but imagine you are like me and prefer getting things done “in-house”. I will assume you have docker available (if not, go to https://www.docker.com/ and install Docker Desktop).

In the first step we will need to get two images from Docker Hub (we do that in the terminal / command line):

    docker pull miktex/miktex
    docker pull dpokidov/imagemagick

Both commands will download some stuff and should be ready in a minute or so. Now we will need our LaTeX code that defines the equation:

    \documentclass{article}
    \usepackage{amsmath}
    \pagenumbering{gobble}
    \begin{document}
    \begin{equation*}
    e^{i\pi }+1=0
    \end{equation*}
    \end{document}

Now, assuming that the LaTeX code is saved in the test.tex file, we will need to execute 3 steps:

    docker run --rm -v miktex:/miktex/.miktex -v $PWD:/miktex/work miktex/miktex latex test.tex
    docker run --rm -v miktex:/miktex/.miktex -v $PWD:/miktex/work miktex/miktex dvipng -D 600 -bg Transparent test.dvi
    docker run --rm -v $PWD:/imgs dpokidov/imagemagick imgs/test1.png -trim imgs/test2.png

The first command uses the miktex/miktex Docker image to “compile” our LaTeX file to DVI format. The second command uses the same Docker image to convert the DVI into a PNG image. Our LaTeX document has one page, so one PNG will be created after conversion (test1.png). Unfortunately, we are not done yet, because our PNG image has strange margins. In the last step, we fix those margins by using the dpokidov/imagemagick Docker image. Our final result looks like this (test2.png):
# Test Vectors for XEdDSA

Anyone aware of test vectors (preferably with intermediate values) for the XEdDSA algorithm?

Open Whisper Systems' own curve25519-java repository has a few test vectors in the code in android/jni/ed25519/tests/. From xeddsa_fast_test(), you get this test vector in particular:
# How the?

1. Feb 3, 2009

### jeahomgrajan

1. The problem statement, all variables and given/known data

This is a simple math equation and I am a bit confused about how x^2 + y^2 = 25 makes a circle and how y = sqrt(25 - x^2) makes a semicircle.

2. Feb 3, 2009

### AssyriaQ

If you would solve $$x^{2} + y^{2} = 25$$ for y, you would not only get $$y=\sqrt{25-x^{2}}$$, but also something else. What?

3. Feb 3, 2009

y = 5 - x

4. Feb 3, 2009

### AssyriaQ

No, not quite. (Keep in mind that $$\sqrt{a^{2}-b^{2}}\neq a-b$$.) Think of something as simple as e.g. $$x^{2}=9$$. There are two different values of x that satisfy this. What values would those be?

5. Feb 3, 2009

±3?

6. Feb 3, 2009

### jgens

Well, since the circle is defined as the locus of all points in a plane which are all a fixed and equal distance from the center, it is fairly intuitively obvious that the implicit equation maps to a circle. A more general case of the formula you presented is x^2 + y^2 = r^2, where r is the radius of the circle. Perhaps a general way to derive and illustrate how that equation makes a circle is to examine the circular definition of the trigonometric functions. We know that sin(theta) = y/r and cos(theta) = x/r, where r is the radius and (x,y) is the coordinate of the radius' intersection with the circle. I'm also presuming you're familiar with the identity sin^2(theta) + cos^2(theta) = 1. Placing our unit circle definitions in place of the identity yields x^2 + y^2 = r^2, which may intuitively illustrate how it makes a circle. Probably a better way of thinking about it is this: given any ordered pair (x,y) satisfying x^2 + y^2 = r^2, (x,y) must always be some fixed distance r from the center; hence, the equation produces the locus of all points on a plane an equal distance from the center. In other words, x^2 + y^2 = r^2 maps to a circle.

7. Feb 3, 2009

### jeahomgrajan

Intense, but okay. What about the second equation which I mentioned?

8. Feb 3, 2009

### NoMoreExams

y = 5 - x is a line; how are you getting a semicircle?

9. Feb 3, 2009

### jgens

y = 5 - x will definitely not make a semicircle. Solve x^2 + y^2 = r^2 for y and it should be fairly clear why it produces a semicircle if x^2 + y^2 = r^2 creates a circle.

10. Feb 3, 2009

### AssyriaQ

Exactly. So if you now have $$y^{2}=25-x^{2}$$, what two values of y do you get?

11. Feb 4, 2009

### jeahomgrajan

y = $$\sqrt{25-x^{2}}$$ (this should be a semicircle, right?)

12. Feb 4, 2009

### Staff: Mentor

Yes, this is the upper half of the circle. The lower half is y = $-\sqrt{25 - x^2}$.

13. Feb 4, 2009

### NoMoreExams

What people were trying to tell you was that if $$a = b^2$$ then $$b = \pm \sqrt{a}$$, NOT just $$\sqrt{a}$$.

14. Feb 4, 2009

### jeahomgrajan

Alright, I understand. Thanks!
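The thread's conclusion (each square-root branch gives half the circle) is easy to check numerically. This is a stdlib-only sketch, not from the thread; the function name is mine.

```python
import math

def circle_halves(r=5.0, n=9):
    """Sample both branches y = +sqrt(r^2 - x^2) and y = -sqrt(r^2 - x^2)."""
    xs = [-r + 2 * r * i / (n - 1) for i in range(n)]
    upper = [(x, math.sqrt(r * r - x * x)) for x in xs]   # upper semicircle
    lower = [(x, -math.sqrt(r * r - x * x)) for x in xs]  # lower semicircle
    return upper, lower

# Every sampled point on either branch satisfies x^2 + y^2 = 25.
upper, lower = circle_halves()
for x, y in upper + lower:
    assert abs(x * x + y * y - 25) < 1e-9
```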
Combine your skills of working with exponents and manipulating expressions for radical glory.

#### Challenge Quizzes

1. Evaluate $$\sqrt{40^2 + 42^2}$$.
2. The value of $\sqrt{\frac{5}{72}} \left(\sqrt{\frac{9}{40}}-\sqrt{\frac{8}{45}}\right)$ can be expressed as $$\frac{1}{a}$$. What is $$a$$?
3. Evaluate $\sqrt{2} - \frac{1}{ \sqrt{2} - \dfrac{1}{ \sqrt{2} - \dfrac{1}{\sqrt{2}-1} } }.$
4. Evaluate $\left( 4\sqrt{3} +7 \right)^{2016} \left( 4\sqrt{3} -7 \right)^{2016}.$
5. What is the value of $\sqrt{ 24 } \times \sqrt{3} \times \sqrt{ 8 }?$
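All five radical expressions can be sanity-checked numerically with the standard library. The sketch below is my own, not part of the quiz page; it evaluates each expression in order without printing the answers.

```python
import math

def quiz_values():
    """Numerically evaluate the five radical expressions above, in order."""
    q1 = math.sqrt(40**2 + 42**2)
    q2 = math.sqrt(5 / 72) * (math.sqrt(9 / 40) - math.sqrt(8 / 45))
    x = math.sqrt(2) - 1              # innermost term of the nested fraction
    for _ in range(3):                # unwind the three outer levels
        x = math.sqrt(2) - 1 / x
    q3 = x
    # (4*sqrt(3)+7)(4*sqrt(3)-7) = 48 - 49 = -1 exactly, so pair the factors
    # and use integers instead of overflowing floats with 2016th powers.
    q4 = (48 - 49) ** 2016
    q5 = math.sqrt(24) * math.sqrt(3) * math.sqrt(8)
    return q1, q2, q3, q4, q5
```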
# Newton's 3rd Law: How can I break things?

If I punch a wooden board hard enough and it breaks in two, has the board still exerted a force of equal magnitude on my fist? When the board breaks in two due to my force, the halves have a component of acceleration in the direction of my striking fist... that implies the board did not exert an equal and opposite reaction, no?

-

The board did exert an equal and opposite force, but your mass is considerably greater than the mass of the board, so the friction between you and the ground keeps you from accelerating. If you punched the board while standing on perfectly slippery ice, or in a vacuum, you would accelerate too, but at a much smaller rate than the board due to the mass difference. If you could ignore all losses (friction, the board breaking, etc.), then all Newton's 3rd Law is really saying is that the center of mass of the system doesn't change when you punch the board. So the board accelerates in the direction of your punch and you move away from it at rates that keep the system's center of mass constant.

- Interesting, thanks. – photon Oct 6 '12 at 18:38
- Yes . . . very very interesting. – Velox Feb 27 '13 at 17:54

> If I punch a wooden board hard enough and it breaks in two, has the board still exerted a force of equal magnitude on my fist?

Yes.

> When the board breaks in two due to my force, the halves have a component of acceleration in the direction of my striking fist... that implies the board did not exert an equal and opposite reaction, no?

Incorrect; the board did exert an opposite reaction: where do you think your bruised knuckles will come from? The transfer of momentum from your body to the board requires an opposite force to decelerate your fist, or you could punch through arbitrarily thick material. That the opposite force will be equal in magnitude is a consequence of conservation of momentum applied to a 2-body interaction; modern physics no longer postulates the 3rd law.
- Thanks, I didn't know what pain was until now. – photon Oct 6 '12 at 18:40

> If I punch a wooden board hard enough and it breaks in two, has the board still exerted a force of equal magnitude on my fist?

Good question, but short answer: yes, exactly! Newton's third law (a contact force) applies to everyday life: jumping on the floor, walking on slippery ground, balloons, etc. So the board has also exerted an equal force on your fists.

How is this? (It is the statement $\vec{F_{AB}}=-\vec{F_{BA}}$.) Whenever you exert a force on an object, the object also exerts an equal normal force on you, so that both cancel out. Without it, your fist wouldn't stop accelerating and you'd probably pierce through the ground. One thing to note is that these action-reaction pairs always act on two different bodies, and the resulting motion depends on their masses. Here, the board does not have enough mass to withstand your force, and hence it breaks.

It can be seen from this example: as a ball falls to Earth, you could say that the ball is pulled by the Earth or, according to the 3rd law, that the Earth is pulled by the ball. Amazed, eh? Take the mass of the ball to be $10\,kg$; then the force exerted by the Earth on the ball is $F=mg=98\,N$. According to the 3rd law, this force is also exerted by the ball on the Earth. Hence, the acceleration of the Earth towards the ball is $a=F/m=\frac{98}{5.98\times 10^{24}}=16.38\times 10^{-24}\,ms^{-2}$, which is far too small to be measured. The forces are always equal in magnitude, but not necessarily the accelerations!

A simulation of Newton's cradle is also an example of the 3rd law...

> When the board breaks in two due to my force, the halves have a component of acceleration in the direction of my striking fist... that implies the board did not exert an equal and opposite reaction, no?

No. Newton's 3rd law is actually stated as:

• All forces result from interactions between pairs of objects, each object exerting a force on the other. The two resultant forces have the same strength but point in exactly opposite directions.

It seems that the breaking of the board has confused you. The board breaking is governed by a stress-strain mechanism, which is not part of the 3rd law itself; it depends on the type of material, the magnitude of the force exerted on it, etc. When you apply the force to the board, the work done by you is stored as potential energy in the board. When the board exerts the equal force on you, it also experiences the same force, and when this force exceeds the breaking stress, the board breaks! The potential energy is then dissipated in the form of heat.

My thought is that simultaneous action-reaction pairs are observed both on the board and on your fist, but the board can't withstand the action-reaction due to its low mass and small breaking stress!

- You may want to clarify that the "yes" applies to the first question asked, not the second. – Chris White Oct 6 '12 at 17:47
- Thanks for the calculation, it clarifies the concept. I would upvote if I could. – photon Oct 6 '12 at 18:28
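The ball-and-Earth arithmetic above fits in a few lines: the forces are equal in magnitude, so each body's acceleration is just F/m. A stdlib-only sketch (the function name is mine):

```python
def recoil_accelerations(force, m1, m2):
    """Equal and opposite forces of the given magnitude act on two bodies;
    return the resulting acceleration magnitudes a = F/m for each."""
    return force / m1, force / m2

# Ball (10 kg) and Earth (5.98e24 kg), as in the answer above.
F = 10 * 9.8                                   # weight of the ball, in N
a_ball, a_earth = recoil_accelerations(F, 10, 5.98e24)
# a_ball is 9.8 m/s^2; a_earth is about 1.6e-23 m/s^2, far too small to measure.
```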
# Intersection Cohomology of Coordinate Hyperplanes - MathOverflow

**Question** (Dinakar Muthiah, 2010-11-27):

I'm trying to learn how to compute stalks of IC sheaves, and I was wondering about the following example:

Fix $n$. Let $X \subset \mathbb{C}^n$ be the variety cut out by the equation $x_1 \cdots x_n = 0$, i.e. the coordinate hyperplanes. What are the stalks of $\mathrm{IC}(X)$ at the various points of $X$, in particular at the origin?

This seems like a natural toy example, but if the general answer is difficult, I'd be happy to know how to compute this for small $n$.

**Answer** (Mike Skirvin, 2010-11-27):

Let $Y$ denote the disjoint union of the coordinate hyperplanes in $\mathbb{C}^n$, and let $f:Y \to X$ denote the corresponding resolution of singularities.

1) Show that $f_{\ast}\mathbb{C}_Y[n-1] \simeq IC_X$ (consider, for example, the support conditions and the fact that both sheaves are isomorphic to $\mathbb{C}_U[n-1]$ when restricted to the nonsingular open $U \subset X$).

Edit (some details added): Letting $U$ denote the complement of the set where any two coordinate planes intersect, $f$ is an isomorphism when restricted to $U$. We therefore have that the restriction (i.e., pullback) of $f_{\ast}\mathbb{C}_Y[n-1]$ to $U$ coincides with $\mathbb{C}_U[n-1]$ (by proper base change if you like).

In order to conclude that $f_{\ast}\mathbb{C}_Y[n-1] \simeq IC_X$, we now just need to check the support and cosupport conditions which uniquely define the intersection cohomology sheaf (together with the fact that its restriction to $U$ is the (shifted) constant sheaf). These conditions are similar to, but more restrictive than, the support and cosupport conditions for perverse sheaves.

I recommend looking at page 21 of the wonderful article by de Cataldo and Migliorini, which can be found at http://arxiv.org/abs/0712.0349, for a statement of these support and cosupport conditions (and figure 1 on page 25 for a visual illustration of the definition).

Since the fibers of $f$ consist of a finite number of points, the cohomology of the fibers is non-zero only in degree zero. This shows that the first condition (the support condition) is satisfied.

For the second condition (the cosupport condition), you can either derive it from the support condition using Verdier duality and the properness of $f$, or you can simply note that an open ball in $\mathbb{C}^{n-1}$ has non-zero compactly supported cohomology only in degree $2n-2$.

2) Now it's straightforward to compute any of the stalks, since the fiber of $x \in X$ consists of anywhere between one point and $n$ points, depending on how many hyperplanes $x$ lives inside of.

Alternatively, it is also possible to do this by using only basic definitions. To compute the stalk at $x$, intersect a sufficiently small open ball around $x$ in $\mathbb{C}^n$ with $X$ and then calculate the intersection cohomology by considering intersection cochains (just like you would for singular cohomology, but now with a less restrictive notion of cochain).
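Putting steps 1) and 2) together, the stalk computation can be written out explicitly. The following is a sketch of the conclusion (my summary, not taken verbatim from the answer), assuming $x$ lies on exactly $k$ of the $n$ hyperplanes:

```latex
% f is finite and f^{-1}(x) consists of k points, so by proper base change
% the stalk of the pushforward is the cohomology of the fiber:
\mathrm{IC}(X)_x \;\simeq\; \big(f_{*}\mathbb{C}_Y[n-1]\big)_x
  \;\simeq\; H^{*}\!\big(f^{-1}(x);\mathbb{C}\big)[n-1]
  \;\simeq\; \mathbb{C}^{k} \ \text{in degree } -(n-1),
  \qquad 0 \ \text{in all other degrees.}
% In particular, at the origin (k = n) the stalk is \mathbb{C}^{n},
% concentrated in degree -(n-1).
```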
# Benchmark study of TRIPOLI-4 through experiment and MCNP codes

1 LCAE - Laboratoire Capteurs et Architectures Electroniques, DM2I - Département Métrologie Instrumentation & Information : DRT/LIST/DM2I

Abstract : Reliability of simulation results is essential in nuclear physics. Although MCNP5 and MCNPX are the most widely used 3D Monte Carlo radiation transport codes, alternative Monte Carlo simulation tools exist to simulate neutral and charged particles' interactions with matter. Therefore, benchmarks are required in order to validate these simulation codes. For instance, TRIPOLI-4.7, developed at the French Alternative Energies and Atomic Energy Commission for neutron and photon transport, now also provides the user with a full-feature electron-photon electromagnetic shower. Whereas the reliability of TRIPOLI-4.7 for neutron and photon transport has already been validated, the new development regarding electron-photon matter interaction needs additional validation benchmarks. We will thus demonstrate how accurately TRIPOLI-4's "deposited spectrum" tally can simulate gamma spectrometry problems, compared to MCNP's "F8" tally. The experimental setup is based on an HPGe detector measuring the decay spectrum of an $^{152}$Eu source. These results are then compared with those given by the MCNPX 2.6d and TRIPOLI-4 codes. This paper deals with both the experimental and simulation aspects. We will demonstrate that TRIPOLI-4 is a potential alternative to both MCNPX and MCNP5 for gamma-electron interaction simulation.

Document type : Conference papers

https://hal-cea.archives-ouvertes.fr/cea-02500115

Submitted on : Thursday, March 5, 2020 - 5:56:26 PM (contributor: Marie-France Robbe). Last modification on : Friday, June 25, 2021 - 10:00:02 AM.

### Files

article_MauganMichel_ANIMMA-v1... (files produced by the author(s))

### Citation

Maugan Michel, Romain Coulon, Stéphane Normand, Nicolas Huot, Odile Petit. Benchmark study of TRIPOLI-4 through experiment and MCNP codes. ANIMMA 2011 - Advancements in Nuclear Instrumentation Measurement Methods and their Applications (Centre d'étude de l'énergie nucléaire, Belgium; Commissariat à l'énergie atomique, France; Université de Provence; IEEE Nuclear and Plasma Sciences Society), Jun 2011, Gand, Belgium. ⟨10.1109/ANIMMA.2011.6172891⟩. ⟨cea-02500115⟩
# Tag Info

**40** All we need to create an interactive Google Map in the notebook is access to the individual tiles - and there is a relatively simple naming scheme for those tiles. I actually typed up a description of this naming scheme a few years ago and posted it here: http://facstaff.unca.edu/mcmcclur/GoogleMaps/Projections/GoogleCoords.html The examples on that page ...

**25** As much as I would like to see a solution to this problem written in Mathematica, this is very unlikely given the scope of the problem. I would like to share a way to solve this using JLink, in the hope that it may help someone. JLink, for those who don't know, is a package that comes with Mathematica. It allows you to execute Java code from within ...

**18** Animations as interactive visualizations: The simplest form of interactive graphics is an animation in which the play head can be moved by the user. That doesn't sound very interactive, but in terms of functionality the play head is nothing but a type of Slider. With this simple interpretation of interactivity, any movie format supported by Export would be ...

**17** It seems networkx uses the D3 library and the example is based on this. We can adapt that code to work with Mathematica and generate JSON output from Mathematica. Save the HTML from the linked page to index.html. Change miserables.json in the source code to graph.json. Generate JSON with Mathematica: g = RandomGraph[BarabasiAlbertGraphDistribution[100, ...

**16** Comment: This was originally answered on Oct 2, 2012 using V8. The performance can be dramatically improved using V9's URLFetchAsynchronous, as now shown below. Fortunately, we needn't download all the tiles at once. We can use Dynamic to set up a little pan-and-zoom explorer. The first load takes a bit and zooming out takes a bit. Panning and zooming ...

**15** You can always do Import["http://wsj.com","XMLObject"]. That has the side effect of producing some irregular XML whenever the underlying HTML doesn't quite map cleanly to XML, but it mostly produces an XMLObject[] expression tree that you can match over and extract data from, and I've never seen a web page for which it won't return something.

**13** I agree wholeheartedly with the comment of celtschk to the OP. Both journals have RSS feeds (with pointers at the bottom of their main pages) that are designed exactly for the purpose that you describe. I doubt that either journal wants you to "scrape" their content; scraping is specifically forbidden by the WSJ Terms of Use. I don't know how much easier ...

**10** I've got my own package that I've used for a few years to generate LaTeX from Mathematica. All the labs on my Mathematica course page were produced with this package. Here's a handout on probability theory for Calc II students that was produced by the package. Unfortunately, it's not at all polished and really not usable by anyone but me. I can present ...

**10** You do not really need a tool to deploy your CDF to HTML. It is very simple to do by hand. Here is what I do: open your text editor and create a file called index.htm <HTML> <BODY > This is my CDF <p> <script src="http://www.wolfram.com/cdf-player/plugin/v1.0/cdfplugin.js" type="text/javascript"></script><script ...

**10** This will download the titles of all articles that transclude the Persondata template, if that's what you're trying to do. Flatten@NestWhileList[ Import["http://en.wikipedia.org/w/api.php?action=query&list=embeddedin&eititle=Template:Persondata&format=json&eilimit=500" <> If[Length@# > 1, "&eicontinue=" <> ...

**10** This is an issue with XOWA. The HTTP Server was rewritten in v2.7.2 to handle POSTs and other features. However, it looks like it crashes on your request. I'll look at fixing this for v2.8.2. I'll comment again here when I have a resolution, but feel free to contact me directly for more info. Hope this helps! [Edit: This was fixed for v2.8.2. XOWA now ...]

**9** The deploy functionality was introduced in Mathematica 8.0.4. To my knowledge, it is not available in 8.0.1.0; see the changelog.

**8** The different HTML entities are stored in SystemConvertMLStringDataDump$HTMLEntities on version 9, and from here it's a simple StringReplace: StringReplace["<select></select>", SystemConvertMLStringDataDump$HTMLEntities] (* "&lt;select&gt;&lt;/select&gt;" *)

**6** Something like: ExportString[Cell[TextData["<select></select>"],"Text"],"HTML","FullDocument" -> False] produces: <p class="Text"> &lt;select&gt;&lt;/select&gt; </p> which might also be a good start.

**5** The short answer is no, there is no straightforward (built-in) way to convert Mathematica's dynamic objects to non-proprietary HTML+SVG/JS. To see why, consider how you might try to represent the following very simple example in HTML/SVG: Manipulate[With[{pts = {#, Sin[a*#]} & /@ (x /. Quiet[Solve[Sin[a*x] == b*AiryAi[-x] && 0 < x < 10, ...

**5** There's no easy way; it's a custom script that assembles the image out of individual slices, and it's written by someone who clearly didn't intend anyone to read it again (including himself). Reverse engineering: the script responsible is http://imgs.xkcd.com/clickdrag/1110.js, and the image to be displayed is assembled in line 86 ($image=...). Scanning the ...

**5** You can download all the original tiles using the following functions. 404s and file-not-founds are handled gracefully. I'm avoiding displaying to the FE so as to lower the chances of crashing. url[n1_Integer, d1_String, n2_Integer, d2_String] := "http://imgs.xkcd.com/clickdrag/" <> ToString@n1 <> d1 <> ToString@n2 <> d2 <> ...

**5** Using JLink and Apache Commons Email and Java Mail, it is not that hard to get MIME controlling working. I just modified some code I wrote some time ago (mostly for being able to send Email from within webMathematica) and added the ability to send HTML emails. It is a whole package with the jar files in a subfolder and a Notebook with an example, so I hope it ...

**5** I don't know how robust this is, but this function seems to do what you want: ImportString[ ExportString[Delete[ImportString[#, "Table"], {{2}, {-2}}], "Table"], "HTML" ] &

**5** Just import the source of the page instead of its rendered content: Import["http://nyt.com", "Source"]

**5** I figure it's good to avoid trying to parse the String manually when we can have Mathematica turn it into an XMLObject for us with ImportString[string, {"HTML","XMLObject"}], which lends itself to more reliable parsing. It's not really simpler, but should give fewer headaches down the line. Here is a quick demonstration; modifyXMLAttributes takes an XMLElement ...

**5** Your code works fine; the site is just very stringent on the data supplied. I used Chrome's Inspect Element to see the values of all the input elements (including the hidden fields, as you'd noticed), and I found that sometimes codigoColegio was left blank. It didn't work when I filled out codigoColegio to match nivel as you seemed to have done, but it did ...

**5** Updated: Using ToBoxes@Column[{Row[{"test", "1"}]}] we get TagBox[GridBox[{{TemplateBox[{"\"test\"", "\"1\""}, "RowDefault"]}}, <<omitted output>> That TemplateBox is strange because Row should have translated into RowBox. Let us force it: kubaExport[x_] := ExportString[ ToBoxes@x /. TemplateBox[a_, "RowDefault"] :> RowBox[a] // ...

**5** I don't know what causes the issue (which looks like a bug to me), but I generally find the best way to deal with exerting control over Import not doing quite the right thing with HTML is to use the "XMLObject" element and use ordinary Mathematica functions on it, like so: In[1]:= ImportString@ ExportString[ ...

**4** This stupid piece of code doesn't work very well: Export["test.xhtml", EvaluationNotebook[], "MathOutput" → "DisplayForm"] and crashes Mathematica 8 reliably too, but the files it creates contain selectable text - here selecting something in the browser...

**4** For the two strings in your first example, this seems to work: ImportString[string, "HTML"] For the baseurl as in the original post, Import[baseUrl, "Data"] gives something like data = Import[baseUrl, "Data"] data[[2, ;; 4]] {{"Item", "View Options"}, { 1., "1841-1869 (Province of Canada), number 195, 21 June 1845, page 15", "GIF | PDF"}, { 2., ...

**4** I can understand if Mathematica does not provide such functionality. It is running on top of an operating system, which delivers all the functionality to do these things, like socket I/O etc. I don't see the point of doing this inside of Mathematica. What you can do is this: a) unix platform: Run["/path/to/wget", "http://www.nytimes.com"]; This is just ...

**4** The answer in general is no, and I've also wished there was a simple way to do it. The only exception I know of is when you export a notebook with a ContourPlot or ListContourPlot to HTML, as in this question. The exported GIF image actually contains a reference to an image map, which is a very old-fashioned way of providing tooltip information in images ...

**4** The benefit of using a symbolic tree representation with inert heads as in Leonid's parser is that you can then decide how to represent the data. And that is indeed what you should do, instead of extracting the elements using Cases. Here's an example using your parsed output above: Block[{ulContainer, liContainer}, ulContainer[_, l__] := {l}; ...

**4** This thing could be done with the option "MathOutput"->"InputForm". However, I'm not sure whether this can entirely solve the question (it needs more tests). Export["test.html", nb = EvaluationNotebook[], "HTML", "ConversionRules" -> {"Input" -> {"<pre><code>", "</code></pre>"}}, CharacterEncoding -> "CP936", ...
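The tile naming scheme mentioned in the top answer (the slippy-map scheme shared by Google-style tile servers) boils down to a short formula. Here is a Python sketch of it, written by me rather than taken from any of the answers above:

```python
import math

def tile_coords(lat_deg, lon_deg, zoom):
    """Web-Mercator tile indices (x, y) of the tile containing a point,
    using the standard slippy-map scheme: 2^zoom tiles per axis."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y
```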
Introductory Algebra for College Students (7th Edition)

$-y-3$

Simplifying Rational Expressions:

1. Factor the numerator and the denominator completely.
2. Divide both the numerator and the denominator by any common factors.

---

Numerator: search for two factors of $-12$ whose sum is $-1$ ... these are $-4$ and $3$:

$y^{2}-y-12=(y-4)(y+3)$

Denominator $= -(y-4)$

Expression $= \displaystyle \frac{(y-4)(y+3)}{-(y-4)}$ ... divide the numerator and denominator by the common factor $(y-4)$:

Expression $= \displaystyle \frac{y+3}{-1} = -(y+3) = -y-3$
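The simplification can be double-checked numerically: the original expression and $-y-3$ must agree for every $y \neq 4$. A quick stdlib-only sketch of my own, not from the textbook:

```python
def original(y):
    return (y**2 - y - 12) / (4 - y)   # denominator is -(y - 4)

def simplified(y):
    return -y - 3

# Agreement at several sample points (avoiding the excluded value y = 4).
for y in (-2, 0, 1.5, 3, 10):
    assert abs(original(y) - simplified(y)) < 1e-12
```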
# Thread: Finding Eigenvectors to diagonalize

1. ## Finding Eigenvectors to diagonalize

Hello, I've been looking at a question which asks to diagonalize the matrix $A = \left(\begin{array}{ccc}1&t&0\\0&t&t\\0&0&0\end{array}\right)$.

I've found the eigenvalues to be 1, 0 and t. I know that if t = 1, the matrix is not diagonalizable, and if t = 0, then A is already diagonal, and thus is diagonalizable. Now, I have to look at what happens when t is not equal to 1 or 0. Looking at each eigenvalue, I'm having trouble finding the eigenvectors.

For $\lambda = 1$, they say, "Obviously $Ae_{1}=e_{1}$, therefore $e_{1}$ is an eigenvector." I don't follow that. It's not obvious to me.

For $\lambda = 0$, I know that this is the same as just finding Ax = 0; however, my eigenvector is $\left(\begin{array}{c}1/t\\-1\\1\end{array}\right)$, and the one they chose was $\left(\begin{array}{c}t\\-1\\1\end{array}\right)$. I can't see where I went wrong.

For $\lambda = t$, they have the eigenvector being $\left(\begin{array}{c}t\\t-1\\0\end{array}\right)$. I'm completely lost there. How are all these eigenvectors found?

2. Originally Posted by Silverflow

> For $\lambda = 1$, they say, "Obviously $Ae_{1}=e_{1}$, therefore $e_{1}$ is an eigenvector." I don't follow that. It's not obvious to me.

Then practice more! "Obviously", $Ae_1= \left(\begin{array}{ccc}1&t&0\\0&t&t\\0&0&0\end{array}\right)\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}= \begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}$.

The whole point of being an eigenvalue is that there exists a non-zero vector, v, such that $Av= \lambda v$. In this example, that would be $Av= \left(\begin{array}{ccc}1&t&0\\0&t&t\\0&0&0\end{array}\right)\begin{pmatrix}x \\ y \\ z\end{pmatrix}= \lambda\begin{pmatrix}x \\ y \\ z\end{pmatrix}$ or, after multiplying out the left side of the equation, $\begin{pmatrix}x+ ty \\ ty+ tz \\ 0\end{pmatrix}= \begin{pmatrix} \lambda x \\ \lambda y \\ \lambda z\end{pmatrix}$. And so there exist x, y, z, not all 0, satisfying $x+ ty= \lambda x$, $ty+ tz= \lambda y$, and $0= \lambda z$.

If $\lambda= 1$, those equations become x + ty = x, ty + tz = y, and 0 = z. From the last, z = 0, so the second equation becomes ty = y or (t-1)y = 0. Since $t\ne 1$, y = 0. The first equation then is x = x, which is satisfied for all x. Any eigenvector corresponding to eigenvalue 1 must be of the form $\begin{pmatrix}x \\ 0 \\ 0\end{pmatrix}= x\begin{pmatrix} 1 \\ 0 \\ 0\end{pmatrix}$, which is spanned by $\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}= e_1$.

> For $\lambda = 0$, I know that this is the same as just finding Ax = 0; however, my eigenvector is $\left(\begin{array}{c}1/t\\-1\\1\end{array}\right)$, and the one they chose was $\left(\begin{array}{c}t\\-1\\1\end{array}\right)$. I can't see where I went wrong.

Why do you think you "went wrong"? With $\lambda= 0$, the equations, $x+ ty= \lambda x$, $ty+ tz= \lambda y$, and $0= \lambda z$, become x + ty = 0, ty + tz = 0, and 0 = 0. From the first, ty = -x, so the second is -x + tz = 0 or x = tz. Then y = -x/t and z = x/t. An eigenvector corresponding to eigenvalue 0 is $\begin{pmatrix}x \\-x/t \\ x/t \end{pmatrix}= x\begin{pmatrix}1 \\ -1/t \\ 1/t \end{pmatrix}$. If you take x = 1, this is $\begin{pmatrix}1 \\ -1/t \\ 1/t\end{pmatrix}$. If you take x = t, this is $\begin{pmatrix} t\\ -1 \\ 1\end{pmatrix}$. Each is a multiple of the other and both are perfectly good eigenvectors. Any multiple of an eigenvector is an eigenvector. More generally, the set of all eigenvectors corresponding to the same eigenvalue is a subspace.

> For $\lambda = t$, they have the eigenvector being $\left(\begin{array}{c}t\\t-1\\0\end{array}\right)$. I'm completely lost there. How are all these eigenvectors found?

With $\lambda= t$, the equations, $x+ ty= \lambda x$, $ty+ tz= \lambda y$, and $0= \lambda z$, become $x+ ty= tx$, $ty+ tz= ty$, and $0= tz$. Both the second and third equations, since $t\ne 0$, give z = 0. The first equation is (t-1)x = ty or $y= \frac{t-1}{t}x$. Any eigenvector corresponding to eigenvalue t is of the form $\begin{pmatrix}x \\ \frac{t-1}{t}x \\ 0\end{pmatrix}= x\begin{pmatrix}1 \\ \frac{t-1}{t}\\ 0\end{pmatrix}$. Taking x = t gives $\begin{pmatrix} t \\ t-1 \\ 0\end{pmatrix}$.
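All three eigenpairs can be verified directly by checking $Av = \lambda v$, with no linear-algebra library needed. This sketch is mine, not from the thread:

```python
def matvec(A, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def check_eigenpairs(t):
    """Verify the three eigenpairs of A for a given t (with t != 0, 1)."""
    A = [[1, t, 0], [0, t, t], [0, 0, 0]]
    cases = [
        (1, [1, 0, 0]),        # lambda = 1, eigenvector e1
        (0, [t, -1, 1]),       # lambda = 0
        (t, [t, t - 1, 0]),    # lambda = t
    ]
    for lam, v in cases:
        Av = matvec(A, v)
        assert all(abs(a - lam * x) < 1e-12 for a, x in zip(Av, v))

check_eigenpairs(2.0)
check_eigenpairs(-3.0)
```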
It is currently 19 Jan 2018, 21:35 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # Events & Promotions ###### Events & Promotions in June Open Detailed Calendar # M10-21 Author Message TAGS: ### Hide Tags Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139529 [0], given: 12794 ### Show Tags 15 Sep 2014, 23:42 Expert's post 15 This post was BOOKMARKED 00:00 Difficulty: 65% (hard) Question Stats: 51% (01:18) correct 49% (01:18) wrong based on 105 sessions ### HideShow timer Statistics The function $$f$$ is defined by $$f(x) = - \frac{1}{x^2}$$ for all nonzero numbers $$x$$. If $$f(m) = - \frac{1}{16}$$ and $$f(mn) = f(\frac{1}{n})$$, what is the value of $$n^2$$? A. $$\frac{1}{16}$$ B. $$\frac{1}{4}$$ C. $$\frac{1}{2}$$ D. $$2$$ E. $$4$$ [Reveal] Spoiler: OA _________________ Kudos [?]: 139529 [0], given: 12794 Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139529 [1], given: 12794 ### Show Tags 15 Sep 2014, 23:42 1 KUDOS Expert's post 2 This post was BOOKMARKED Official Solution: The function $$f$$ is defined by $$f(x) = - \frac{1}{x^2}$$ for all nonzero numbers $$x$$. If $$f(m) = - \frac{1}{16}$$ and $$f(mn) = f(\frac{1}{n})$$, what is the value of $$n^2$$? A. $$\frac{1}{16}$$ B. $$\frac{1}{4}$$ C. $$\frac{1}{2}$$ D. $$2$$ E. $$4$$ Since $$f(x) = - \frac{1}{x^2}$$, then from $$f(m) = - \frac{1}{16}$$ we'll have that $$-\frac{1}{m^2}=-\frac{1}{16}$$, so $$m^2=16$$. The same way, from $$f(mn) = f(\frac{1}{n})$$ we'll have that $$-\frac{1}{(mn)^2}=-n^2$$, which simplifies to $$n^4=\frac{1}{m^2}$$. 
Since $$m^2=16$$, then $$n^4=\frac{1}{m^2}=\frac{1}{16}$$, which gives $$n^2=\frac{1}{4}$$. _________________ Kudos [?]: 139529 [1], given: 12794 Intern Joined: 02 Jan 2014 Posts: 41 Kudos [?]: 23 [0], given: 75 ### Show Tags 28 Oct 2015, 20:00 Hi Bunuel, Can we now simply solve it like this? Since f(x)=−1/x^2, then from f(m)=−1/16 we'll have that −1/n^4=−1/16, so n^2=4. As f(n^2) = -1/n^4. Could you please explain if this is correct? Kudos [?]: 23 [0], given: 75 Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139529 [0], given: 12794 ### Show Tags 28 Oct 2015, 23:59 Celerma wrote: Hi Bunuel, Can we now simply solve it like this? Since f(x)=−1/x^2, then from f(m)=−1/16 we'll have that −1/n^4=−1/16, so n^2=4. As f(n^2) = -1/n^4. Could you please explain if this is correct? There should be m instead of n. _________________ Kudos [?]: 139529 [0], given: 12794 Intern Joined: 30 Apr 2017 Posts: 4 Kudos [?]: 0 [0], given: 29 ### Show Tags 25 Sep 2017, 23:52 Could you please elaborate on the following: f(mn)=f(1/n) then we get -1/(mn)^2= - n^2. Don't understand who we got that result Kudos [?]: 0 [0], given: 29 Math Expert Joined: 02 Sep 2009 Posts: 43335 Kudos [?]: 139529 [0], given: 12794 ### Show Tags 25 Sep 2017, 23:56 d975490 wrote: Could you please elaborate on the following: f(mn)=f(1/n) then we get -1/(mn)^2= - n^2. Don't understand who we got that result $$f(x) = - \frac{1}{x^2}$$, hence $$f(mn) = -\frac{1}{(mn)^2}$$ and $$f(\frac{1}{n})=-\frac{1}{(\frac{1}{n})^2}=-n^2$$ _________________ Kudos [?]: 139529 [0], given: 12794 Intern Joined: 03 May 2014 Posts: 15 Kudos [?]: [0], given: 3 ### Show Tags 31 Oct 2017, 08:41 Bunuel wrote: The function $$f$$ is defined by $$f(x) = - \frac{1}{x^2}$$ for all nonzero numbers $$x$$. If $$f(m) = - \frac{1}{16}$$ and $$f(mn) = f(\frac{1}{n})$$, what is the value of $$n^2$$? A. $$\frac{1}{16}$$ B. $$\frac{1}{4}$$ C. $$\frac{1}{2}$$ D. $$2$$ E. $$4$$ What sub-topic/tag does this question fall under? 
I am looking to solve similar question types both from the GMAT Club and the OG.

Math Expert

### Show Tags

31 Oct 2017, 08:46

Functions and algebra.

13. Functions

For more check Ultimate GMAT Quantitative Megathread

Hope it helps.
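The official solution can be sanity-checked numerically. A minimal Python sketch — the concrete values m = 4 and n = 1/2 are just one consistent choice, since the problem only pins down m² and n²:

```python
from fractions import Fraction

def f(x):
    """f(x) = -1/x^2, defined for all nonzero x."""
    return -1 / (Fraction(x) ** 2)

# One consistent choice of values: m^2 = 16 and n^2 = 1/4.
m, n = 4, Fraction(1, 2)

print(f(m))                   # -1/16, matching f(m) = -1/16
print(f(m * n) == f(1 / n))   # True, matching f(mn) = f(1/n)
print(n ** 2)                 # 1/4, answer choice B
```

Using exact `Fraction` arithmetic avoids any floating-point comparison issues when checking the two sides of f(mn) = f(1/n).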
Plotting shapely polygons is something we will often do, either for its own sake when we need to create a map, or for visual debugging purposes when we are crafting an algorithm. To illustrate this with a non-trivial example, let's first create a polygon (which will have a hole in it) by computing the difference between two polygons, and plot the result.

## Using Geopandas GeoSeries.plot()

The simplest and most general way to plot a shapely polygon is to resort to Geopandas. More precisely, we will create a GeoSeries that holds our polygon. There are many things we can do with a GeoSeries object (see the official docs). In particular, the GeoSeries class has a plot() function which relies on matplotlib.

from shapely.geometry import Polygon
import geopandas as gpd
import matplotlib.pyplot as plt

poly1 = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
poly2 = Polygon([(0.25, 0.25), (0.5, 0.25), (0.5, 0.5), (0.25, 0.5)])
polydiff = poly1.difference(poly2)

myPoly = gpd.GeoSeries([polydiff])
myPoly.plot()
plt.show()

You can also plot the polygon's boundary with

myPoly.boundary.plot()
plt.show()

## Manually extracting exterior and interior boundaries

An alternative method, in case we don't want to rely on Geopandas, is to do this manually. We will need to extract both exterior and interior boundaries (if the polygon has holes in it). Each polygon object has one exterior ring, and zero or more interior rings. We can extract the coordinates of these rings and plot them as follows:

xe, ye = polydiff.exterior.xy
plt.plot(xe, ye)

for ring in polydiff.interiors:
    xi, yi = ring.xy
    plt.plot(xi, yi)

plt.show()
# Photon is electromagnetic field, right? by Barry_G Tags: electromagnetic, field, photon Mentor P: 17,322 Quote by Barry_G Maxwell's equations do not explain how can photon have electric and magnetic field and yet we can not bend a beam of light by external electric or magnetic fields. Yes, they do. Specifically, the linearity of Maxwell's equations shows this, as mentioned above. P: 68 Quote by elfmotat Your responses are quite painful to read. The "photon" in question consists of an electric and a magnetic field, which he denoted as $E_{\gamma}$ and $B_{\gamma}$. Maxwell's Equations aren't just for the narrow range of applications you list above. They completely describe how electric and magnetic fields behave! They do not describe photons (em waves). http://en.wikipedia.org/wiki/Maxwell...ectromagnetism : any phenomenon involving individual photons, such as... would be difficult or impossible to explain if Maxwell's equations were exactly true, as Maxwell's equations do not involve photons. Electromagnetic wave equation is only DERIVED from them, and it is not to describe any properties of photons, but only to get to the speed of light. It also has nothing to do whether photons are neutrally charger or not, it is about working out that equation in a setup of vacuum and charge-free SPACE. Mentor P: 17,322 Quote by Barry_G They do not describe photons (em waves). As I said back in post #3 none of your questions are actually about photons. They are about classical EM waves. Specifically, you want to know how you can have EM fields in the absence of a charge. Maxwells equations in vacuum (and their associated wave solutions) are the answer to that question. P: 260 Quote by Barry_G They do not describe photons (em waves). http://en.wikipedia.org/wiki/Maxwell...ectromagnetism : any phenomenon involving individual photons, such as... would be difficult or impossible to explain if Maxwell's equations were exactly true, as Maxwell's equations do not involve photons. 
That article is talking about the failure of classical electrodynamics and why quantum electrodynamics is needed. Maxwell's Equations completely describe EM waves in the classical sense. Of course they don't involve photons as such, because a photon is not a classical concept. Notice how I put quotes around the word "photons" because what we were talking about wasn't photons exactly, it was EM waves. You're the one who keeps using the two interchangeably. You're mixing classical and quantum ideas. Classical and quantum electrodynamics are two completely different frameworks. For the majority of this thread we've been discussing light in terms of classical EM waves. Now you're claiming I'm wrong because you suddenly decided to switch over to quantum. I'm not wrong, you're just inconsistent. Quote by Barry_G Electromagnetic wave equation is only DERIVED from them, and it is not to describe any properties of photons, but only to get to the speed of light. You just said that Maxwell's equations don't describe EM waves, and now you're saying EM waves are derived from Maxwell's equations. Obviously if EM waves are derived from Maxwell's equations then Maxwell's equations describe EM waves. Quote by Barry_G It also has nothing to do whether photons are neutrally charger or not, it is about working out that equation in a setup of vacuum and charge-free SPACE. You derive the equations for light in charge-free space, so obviously light is charge-free. Is this really so hard to understand? P: 68 Quote by DaleSpam As I said back in post #3 none of your questions are actually about photons. They are about classical EM waves. You mean "photon" is concept that belongs to QM and has nothing to do with EM fields? Ok, yes, so I shall call it EM waves instead of photons. Specifically, you want to know how you can have EM fields in the absence of a charge. Maxwells equations in vacuum (and their associated wave solutions) are the answer to that question. 
Maxwell equations are about electric currents in wires, nothing to do with any EM waves, only electromagnetic wave equation has anything to do with light, and it is not about having EM fields in the absence of any charge, it's simply about EM fields propagating thorough empty space, but it says nothing about how would those EM field be influenced or not if there were any external fields in that space they propagate through. Maxwell's equations. The key property of Maxwell's equations that lead to this is the linearity. The linearity of Maxwell's equations shows that EM follows the principle of superposition which in turn implies that an EM field won't be altered by passing through an external static electric or magnetic field. There is no any superposition if you have a single wave or a single electric/magnetic field. For superposition to neutralize that field you would need another field of opposite sign. P: 68 Quote by elfmotat That article is talking about the failure of classical electrodynamics and why quantum electrodynamics is needed. Maxwell's Equations completely describe EM waves in the classical sense. Maxwell's Equations do not describe any waves. Not even electromagnetic wave equation describes any waves, it's just gets you the speed of light. You just said that Maxwell's equations don't describe EM waves, and now you're saying EM waves are derived from Maxwell's equations. Obviously if EM waves are derived from Maxwell's equations then Maxwell's equations describe EM waves. Maxwell equations are about electric currents and wires, nothing to do with any waves. You derive the equations for light in charge-free space, so obviously light is charge-free. Is this really so hard to understand? No, it means SPACE is charge free. It means there are no any external fields that could influence those fields making up em wave. Is that so hard to understand? 
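The linearity point made repeatedly above can be written out explicitly (a standard textbook sketch, not taken from any single post). In vacuum, Maxwell's equations read

$$\nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

Every operator here (divergence, curl, time derivative) is linear. So if $(\mathbf{E}_1, \mathbf{B}_1)$ is a propagating wave solution and $(\mathbf{E}_2, \mathbf{B}_2)$ is a static external field solution, then $(\mathbf{E}_1 + \mathbf{E}_2,\ \mathbf{B}_1 + \mathbf{B}_2)$ is also a solution, and the wave part evolves exactly as it would in the absence of the external field. This is the precise sense in which, in classical linear electrodynamics, a light beam is not bent by a static electric or magnetic field.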
P: 150 It seems quite obvious now that Barry does not want to learn anything and is trying to push his own personal theories. Is there a way to request thread lock? P: 260 Quote by Barry_G Maxwell equations are about electric currents in wires, NO NO NO NO NO!!! Maxwell's Equations tell us how electric and magnetic fields behave in any situation, with any charge and current distribution! They're not just about "electric currents in wires!" Quote by Barry_G nothing to do with any EM waves, NO!!! As you've been told and shown numerous times already, EM waves are derived from and described by Maxwell's Equations! Quote by Barry_G only electromagnetic wave equation has anything to do with light, and it is not about having EM fields in the absence of any charge, it's simply about EM fields propagating thorough empty space, but it says nothing about how would those EM field be influenced or not if there were any external fields in that space they propagate through. K^2 already showed you with a detailed post what happens in the presence of external fields! P: 260 Quote by Dead Boss It seems quite obvious now that Barry does not want to learn anything and is trying to push his own personal theories. Is there a way to request thread lock? Agreed, he doesn't want to learn at all. He seems to enjoy arguing about things he doesn't understand just for the sake of it. Thanks PF Gold P: 12,182 Quote by Barry_G You mean "photon" is concept that belongs to QM and has nothing to do with EM fields? Ok, yes, so I shall call it EM waves instead of photons. Maxwell equations are about electric currents in wires, nothing to do with any EM waves, only electromagnetic wave equation has anything to do with light, and it is not about having EM fields in the absence of any charge, it's simply about EM fields propagating thorough empty space, but it says nothing about how would those EM field be influenced or not if there were any external fields in that space they propagate through. 
There is no any superposition if you have a single wave or a single electric/magnetic field. For superposition to neutralize that field you would need another field of opposite sign. I have just been wading through this thread and it strikes me that you are determined to approach the understanding of this topic entirely on your own ideosyncratic terms. You keep wanting to bend what you are told to fit your particular model. Of course you are free to believe anything you want to but, as this thread has demonstrated, you just won't get anywhere near the accepted understanding of EM waves if you don't follow the established approach. You need to ask yourself whether you really believe you are right and that all the replies you've been given are flawed. Could you not consider starting at the very beginning and work towards some real sense instead of jumping in half way through, getting many things the wrong way round and then demanding to be given answers that make sense to you. This is a difficult topic and needs some Rigour if you want an understanding of it. I really don't think that you can accept (or even recognise) correct answers when you see them. You need to learn the basic terms and definitions in full and not use your own interpretation of things. (The title of the thread is slightly bonkers, by the way) P: 68 Quote by Dead Boss It seems quite obvious now that Barry does not want to learn anything and is trying to push his own personal theories. Is there a way to request thread lock? http://en.wikipedia.org/wiki/Maxwell%27s_equations It's not about light or EM waves. It is about geometry of fields, about currents and wires. Do you understand? Thanks PF Gold P: 12,182 Quote by Barry_G http://en.wikipedia.org/wiki/Maxwell%27s_equations It's not about light or EM waves. It is about geometry of fields, about currents and wires. Do you understand? Ummm. Do YOU understand? P: 68 Quote by elfmotat NO NO NO NO NO!!! 
Maxwell's Equations tell us how electric and magnetic fields behave in any situation, with any charge and current distribution! They're not just about "electric currents in wires!" Educate yourself. If you want to work out electric and magnetic fields for any distribution you need to use equations for point charges. That is Coulomb's law, Biot-Savart law and Lorentz force equations, for point charges, and then you integrate. NO!!! As you've been told and shown numerous times already, EM waves are derived from and described by Maxwell's Equations! Not even electromagnetic wave equation describes any waves. K^2 already showed you with a detailed post what happens in the presence of external fields! Nonsense. Thanks P: 3,752 Quote by Barry_G Maxwell's Equations do not describe any waves. Not even electromagnetic wave equation describes any waves, it's just gets you the speed of light. Ahhh.... Electromagnetic waves propagating through empty and charge-free space are described by Maxwell's equations. In fact, electromagnetic waves were predicted by Maxwell's equation before they were ever discovered. The discovery that light was a specific example of the electromagnetic waves predicted by Maxwell's equations came later. P: 260 Quote by Barry_G Educate yourself. If you want to work out electric and magnetic fields for any distribution you need to use equations for point charges. That is Coulomb's law, Biot-Savart law and Lorentz force equations, for point charges, and then you integrate. Coulomb's Law is derived from Maxwell's Equations (Gauss' Law for Electricity) under the assumption of static conditions (i.e. when $\partial E / \partial t =0$ and $\partial B / \partial t =0$). The Biot-Savart Law is derived from Maxwell's Equations (Ampere's Law and Gauss' Law for Magnetism) also under the assumption of static conditions. Coulomb's Law and the Biot-Savart Law are only valid when the electric and magnetic fields are not changing in time. 
Maxwell's Equations are more fundamental, and will describe the electric and magnetic fields under any conditions. The Lorentz Force Law will then tell you how a test particle placed in these fields will behave. Quote by Barry_G Not even electromagnetic wave equation describes any waves. I'm not really sure what you mean by that. It's a wave equation, so obviously it describes a wave. Quote by Barry_G Nonsense. Just because you say something is nonsense doesn't make it so. He showed you, step by step, how the linearity of Maxwell's Equations prove that external fields superimpose on the EM wave so that the wave itself is unaffected by the external fields. Mentor P: 11,781 Quote by Dead Boss It seems quite obvious now that Barry does not want to learn anything and is trying to push his own personal theories. Is there a way to request thread lock? The normal method is to use the "Report" button on a problematic post. In this case, however, you can consider it done.
# Final Study Guide

The final is Thursday Dec 7th 9:30-11:20am in LINC 368. The final is cumulative; all content on the midterm study guide may also be tested on the final exam. The final exam from last year is provided on Canvas.

## Concepts

When does the two sample t-test have exactly a t-distribution under the null hypothesis?

Describe what happens if the equal variance two sample t-test is used in a situation where the populations do not have equal variance.

Describe what happens if Welch's two sample t-test is used in a situation where the populations do have equal variance.

Why do some people recommend you should always use Welch's two sample t-test?

Describe what happens if the paired t-test is used in a situation where the data isn't paired.

Describe what happens if the two sample t-test is used in a situation where the data is paired.

Identify the appropriate test based on a setting and comparison of interest.

Identify which margins in a $$2 \times 2$$ table are fixed by the study design.

Identify the kind of sampling being used (Multinomial, Binomial, Retrospective) in a binary data setting from the study design.

Identify which probabilities may be estimated in a $$2 \times 2$$ table based on the study design.

State the null hypothesis tested by the Wilcoxon Rank Sum test with the addition of the location-shift hypothesis.

State the null hypothesis tested by the Wilcoxon Rank Sum test without the addition of the location-shift hypothesis.

Describe how Simpson's paradox may complicate the analysis of multiple $$2 \times 2$$ tables.

What additional assumption is crucial for the Mantel-Haenszel test?

Describe why the F-test for variances isn't recommended.

Describe how to construct a bootstrap confidence interval for a parameter of interest.

Apply the delta method to find an approximate sampling distribution for a function of a statistic and use the result to construct a confidence interval.
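The bootstrap procedure mentioned above can be sketched in code. A minimal percentile-bootstrap sketch in Python (standard library only; the data, replicate count, and seed are illustrative choices, not from the course):

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, reps=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic on each resample, and take empirical quantiles."""
    rng = random.Random(seed)
    n = len(sample)
    boot = sorted(stat(rng.choices(sample, k=n)) for _ in range(reps))
    lo = boot[int(reps * alpha / 2)]
    hi = boot[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7, 2.2, 3.3]
lo, hi = bootstrap_ci(data)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The same function works for any statistic (e.g. `stat=statistics.median`): only the resampling scheme is reused, with no formula for a standard error required.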
## Procedures

You should know how to perform the following test procedures (this includes describing the appropriate setting for the test, the null and alternative hypotheses, the test statistic, the reference distribution, as well as being able to complete the calculations), and be able to write a statistical summary from the results:

• Chi-square goodness of fit
• Two sample Z-test/confidence interval/p-value
• Two sample equal variance t-test/confidence interval/p-value
• Two sample Welch's t-test/confidence interval/p-value
• Paired t-test/confidence interval/p-value
• Two proportion Z-test/confidence interval/p-value
• Chi-square for homogeneity of proportions/independence test/p-value
• Fisher's Exact test/p-value
• Log Odds Ratio test/p-value
• McNemar's paired binary data test/p-value
• Paired binary data t-test/p-value
• Mantel-Haenszel test/common odds ratio estimate
• Mood's median test/p-value
• Wilcoxon Rank Sum test statistic/p-value
• Levene's test/p-value
• Two sample Kolmogorov-Smirnov test/p-value
Question

# If ${}^n{P_7} = 42\left( {{}^n{P_5}} \right)$, then find $n$

Hint: use the general formula for permutations.

Given, ${}^n{P_7} = 42\left( {{}^n{P_5}} \right){\text{ }} \to {\text{(1)}}$

The number of ways of arranging $r$ items out of $n$ items is ${}^n{P_r} = \dfrac{{n!}}{{\left( {n - r} \right)!}}$

Solving the given equation (1) using the above formula, we get
$\dfrac{{n!}}{{\left( {n - 7} \right)!}} = 42\left[ {\dfrac{{n!}}{{\left( {n - 5} \right)!}}} \right] \\ \Rightarrow \dfrac{{n\left( {n - 1} \right)\left( {n - 2} \right)\left( {n - 3} \right)\left( {n - 4} \right)\left( {n - 5} \right)\left( {n - 6} \right)\left( {n - 7} \right)!}}{{\left( {n - 7} \right)!}} = 42\left[ {\dfrac{{n\left( {n - 1} \right)\left( {n - 2} \right)\left( {n - 3} \right)\left( {n - 4} \right)\left( {n - 5} \right)!}}{{\left( {n - 5} \right)!}}} \right] \\ \Rightarrow n\left( {n - 1} \right)\left( {n - 2} \right)\left( {n - 3} \right)\left( {n - 4} \right)\left( {n - 5} \right)\left( {n - 6} \right) = 42n\left( {n - 1} \right)\left( {n - 2} \right)\left( {n - 3} \right)\left( {n - 4} \right) \\ \Rightarrow \left( {n - 5} \right)\left( {n - 6} \right) = 42 \Rightarrow {n^2} - 11n + 30 = 42 \Rightarrow {n^2} - 11n - 12 = 0 \\ \Rightarrow {n^2} + n - 12n - 12 = 0 \Rightarrow n\left( {n + 1} \right) - 12\left( {n + 1} \right) = 0 \Rightarrow \left( {n + 1} \right)\left( {n - 12} \right) = 0$

Either $n = - 1$ or $n = 12$.

Since ${}^n{P_7}$ is only defined for integers $n \geqslant 7$, we reject $n = -1$. Therefore, the value of $n$ is 12.

Note: In these types of problems we have to check at the end that the values of $n$ we obtain are feasible. Any value that is negative, or smaller than the number of items being arranged, is not considered because it does not correspond to a valid permutation.
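The algebra can be double-checked with a short Python script; `math.perm(n, r)` computes $^n P_r$ directly (available in Python 3.8+):

```python
import math

# Verify n = 12 satisfies nP7 = 42 * nP5.
assert math.perm(12, 7) == 42 * math.perm(12, 5)

# Scan admissible values (nP7 needs n >= 7): n = 12 is the only solution.
solutions = [n for n in range(7, 200) if math.perm(n, 7) == 42 * math.perm(n, 5)]
print(solutions)  # [12]
```

The scan confirms that, among valid integers, the quadratic (n − 5)(n − 6) = 42 has the single admissible root n = 12.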
# Funny Definite Integral

1. Jul 5, 2010

### Char. Limit

So, I was playing on Wolfram Alpha, and I managed to come up with this:

http://www.wolframalpha.com/input/?i=integral_(+infinity+*+sqrt(-1)+)^pi+e^(ix)+dx&x=0&y=0

In Tex, I believe this is...

$$\int_{i\infty}^{\pi}e^{i x} dx = i$$

However, I have more than one problem with it, and I want to know if my problems are actually problems. First, the bounds. Can you multiply a transfinite number by i? Would the answer make any sense whatsoever? And can you integrate from an imaginary point to a real point? Actually, those bounds are the only problems I have... But they do look problematic. Can someone tell me if this integral is a real integral?

Last edited: Jul 5, 2010

2. Jul 5, 2010

### Dickfore

The function $f(z) = e^{i z}$ has an absolute value:

$$|f(z)| = |e^{i z}| = e^{\Re(i z)} = e^{-\Im{z}}$$

which tends to zero as $\Im{z} \rightarrow +\infty$. The lower bound on your integral is exactly like that. Also, the function is entire. Therefore, the integral

$$F(z) = \int_{\gamma}{f(z') \, dz'}$$

has the same value for all contours $\gamma$ starting from an infinitely high point in the upper half-plane and ending anywhere in the complex plane $z$. If $z = x + i y$, it is convenient to choose the contour as:

$$\begin{array}{l} \gamma_{1}: \ z = i t, \ \infty > t \ge y, \ dz = i \, dt \\ \gamma_{2}: \ z = t + i y, \ 0 \le t \le x, \ dz = dt \end{array}$$

and:

$$F(z) = \int_{\infty}^{y}{e^{i i t} \, i \, dt} + \int_{0}^{x}{e^{i (t + i y)} \, dt}$$

$$F(z) = -i \, \int_{y}^{\infty}{e^{-t} \, dt} + e^{-y} \, \int_{0}^{x}{e^{i t} \, dt}$$

$$F(z) = -i \, \left.(-e^{-t})\right|^{\infty}_{y} + e^{-y} \, \left.\frac{e^{i t}}{i}\right|^{x}_{0}$$

$$F(z) = -i \, e^{-y} + e^{-y} \, \frac{e^{i x} - 1}{i} = -i \, e^{-y} e^{i x} = -i \, e^{i z}$$

In particular, at $z = \pi$ this gives $F(\pi) = -i \, e^{i \pi} = i$, which matches the value Wolfram Alpha returned.
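The value $i$ claimed in the opening post can also be checked numerically by truncating the contour at a finite height $iT$: the integrand decays like $e^{-T}$ up the imaginary axis, so the tail is negligible. A standard-library sketch; the trapezoidal rule and the choice T = 40 are my own, not from the thread:

```python
import cmath

def trapezoid(f, a, b, n=100_000):
    """Composite trapezoidal rule along the straight segment from a to b
    in the complex plane."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return total * h

f = lambda z: cmath.exp(1j * z)

T = 40.0
leg1 = trapezoid(f, 1j * T, 0)    # down the imaginary axis to the origin
leg2 = trapezoid(f, 0, cmath.pi)  # along the real axis to pi
result = leg1 + leg2
print(result)  # approximately 1j
```

Analytically the two legs contribute $-i(1 - e^{-T}) \approx -i$ and $2i$ respectively, so the truncated sum converges to $i$ as $T$ grows.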
## INTRODUCTION

Facilitators of and barriers to tobacco product use and nicotine addiction are highly varied1. While the actions of larger tobacco industry actors may garner more attention2, it may be valuable to assess the community-oriented effects of smaller actors that have direct relationships with tobacco-using communities. One type of smaller actor is the tobacco storefront, which can be defined as a retailer that physically exchanges tobacco products for money with the end user3,4. These tobacco storefronts can be subdivided into specialty retailers where tobacco products are the primary business line (e.g. vaping and smoke shops) and general retailers (e.g. grocery stores, convenience stores, chain retail outlets) where tobacco products are not the primary business line. Importantly, specialty tobacco retailers may represent an important geographical and demographic data point of community-level exposure and demand for tobacco product use.

Tobacco-specific storefronts have existed for several hundred years5. Yet, the last decade has seen the rapid popularization of a distinctly different subtype of tobacco-oriented retailer: the vape-specific storefront6. Generally, vaping products, also known as 'electronic nicotine delivery systems' (ENDS), are devices that use electricity to vaporize liquid, usually containing nicotine, which is then inhaled. While originally marketed as a cessation tool, vaping products have gained popularity, particularly among young adults7. Key drivers of uptake for young adults may not be cessation, but rather nicotine addiction, social pressure, and even competitive vaping8. In 2010, US national sales of ENDS were $11.6 million, increasing to $751 million in 20169. Increasing use of ENDS among youth and young adults has mirrored growing sales of ENDS9.
In 2019, 23% of middle and high school students in the US were current tobacco users, with 20% using e-cigarettes; among high school students alone, 27.5% used e-cigarettes, the highest of any product10. Reflective of the increase in the popularity of ENDS has been the proliferation of community-based vaping retail shops11. Importantly, vape shops concentrate on relational rather than solely transactional interactions with customers, focusing on developing rapport with customers and fostering a sense of community11-14. Vape shops are also important sources of information for current and potential ENDS users14,15. These shops may promote the perception of ENDS as less harmful than conventional cigarettes16,17 and useful for cessation12,16,18, thereby impacting tobacco use attitudes and behaviors of customers. They also provide access to a wider array of products than non-specialty retailers11 and the opportunity to sample e-liquids15.

Past work on the effects of the tobacco retail environment has shown that high tobacco retailer density is associated with youth smoking19-21. Furthermore, research has shown that retailer density is negatively associated with smoking abstinence and pro-cessation attitudes in poorer areas22. Other studies examining the retail environment of vape shops suggest that they are more concentrated near college and university campuses23 and where tobacco retail density is high24,25. Other community characteristics, however, have been shown to differ. In New Jersey, vape shops are located where few racial minorities live25. This is in contrast to vape shops in Orange County, CA, which are more common in areas with larger proportions of Hispanics24. Less is known about the effects of retail density of vape shops on ENDS use, but these shops may play a significant role in shaping attitudes and norms around ENDS acceptability, initiation, and use26.
Specifically, current ENDS use patterns suggest that age may be an important factor when attempting to measure the impact of community vaping retail outlets on tobacco and ENDS use behavior. This includes examining the increasing use of ENDS among youth and young adults. Hence, in this article, we use geographical data on tobacco-specific storefronts and vape-specific storefronts, along with geographically linked data on age groups, to better explore the relationships between retailer presence and separate age group tobacco use in California.

## METHODS

### Data collection

Scripts in the programming language Python were then written to scrape the business directory service and crowd-sourced review forum Yelp in order to match retailer names and business addresses to Yelp registered business pages. This was done in order to further categorize our initial list of CA tobacco retailers into subcategories available in Yelp (e.g. tobacco shops, vape shops, grocery stores, convenience stores, etc.). Therefore, an interim dataset was assembled with each case being a licensed store having a Yelp page. This dataset contained variables which indicated whether the store was categorized in Yelp as a 'tobacco shop', 'vape shop', 'grocery store', etc. Our data collection process on Yelp allowed us to automatically match categories for 69% of retailers, with the remaining retailers manually categorized using existing Yelp categories. Yelp is a platform that has been used by other researchers to assess the discrepancies in vape-shop characteristics by demographic groups and also to assess differences in characteristics between vape shops that close or remain open after one year28,29. Data for the population of each age category and median age were obtained from the American Community Survey at the census tract level30.
The final dataset assembled was structured in a manner whereby each case was a census tract, and variables included information about the number of stores in a given category (e.g. tobacco shop, vape shop, etc.) as well as age information for each census tract.

### Statistical analysis

Our manual review of retailer Yelp pages (leveraging user-submitted images on Yelp and Google Maps) indicated that retailers categorized as 'Tobacco Shops' or 'Vape Shops' appeared to often be storefronts specialized in selling tobacco and/or vaping products, whereas other common categories of licensed tobacco retailers (e.g. 'Gas Stations', 'Grocery Stores', 'Convenience Stores') were non-specific vendors. Therefore, in this article, we refer to retailers categorized by Yelp as 'tobacco stores' as tobacco-specific, stores categorized by Yelp as 'vape stores' as vaping-specific, and all stores without these categorizations as 'non-specific'. Though 22131 licensed tobacco retailers were provided by CDFTA, 1557 census tracts contained 1800 tobacco or vape shops, with only 200 containing more than one tobacco or vape shop. Therefore, geospatial analyses proceeded with three dichotomous classifications of census tracts: 1) containing a tobacco-specific shop, 2) containing a vape-specific shop, and 3) containing either type of shop (Table 1).
##### Table 1

Number/proportion of shops that are vape-specific, tobacco-specific, both, and neither, based on Yelp categories, according to geography and highest ventile of each available age (years) category

| Characteristics | | Vape-specific shop | Tobacco-specific shop | Dual vape/tobacco shop | Non-specific tobacco retailer |
|---|---|---|---|---|---|
| Sample size | Shops | 848 | 820 | 419 | 20320 |
| | Tracts with shop | 777 | 772 | 403 | 10837 |
| | Tracts as percent of California | 3.4 | 3.3 | 1.7 | 46.7 |
| | Tracts with 2+ stores | 65 | 44 | 16 | 5104 |
| Tracts in top ventile* (per capita) | Age under 5 (>10.9%) | 4.1 | 4.1 | 3.5 | 5.8 |
| | Age 5–9 (>10.1%) | 2.2 | 4.4 | 2.7 | 5.8 |
| | Age 10–14 (>10.1%) | 3.0 | 2.7 | 3.5 | 4.8 |
| | Age 15–19 (>10.6%) | 3.3 | 3.2 | 2.7 | 4.9 |
| | Age 20–24 (>11.4%) | 8.8 | 8.2 | 6.7 | 5.5 |
| | Age 25–34 (>24.9%) | 8.9 | 10.4 | 8.2 | 5.9 |
| | Age 35–44 (>18.9%) | 3.5 | 7.0 | 4.0 | 4.6 |
| | Age 45–54 (>19.8%) | 2.1 | 2.3 | 1.7 | 3.2 |
| | Age 55–64 (>19.3%) | 2.1 | 2.1 | 2.2 | 3.9 |
| | Age 65–74 (>13.0%) | 2.1 | 3.9 | 2.2 | 4.3 |
| | Age 75–84 (>8.9%) | 4.0 | 5.4 | 3.2 | 4.6 |
| | Age ≥84 (>5.0%) | 6.6 | 5.4 | 5.7 | 5.0 |

* 'Tracts in the Top Ventile' indicates that characteristics displayed are for the group of 1158 census tracts with the highest density of the specified age range.

Age-related data and median age were available for 23194 census tracts in California. In addition, the populations for twelve age groups were available: age under 5 years, 5–9 years, 10–14 years, 15–19 years, 20–24 years, 25–34 years, 35–44 years, 45–54 years, 55–64 years, 65–74 years, 75–84 years, and over 84 years. Total population was used to standardize across census tracts. Census tracts were then separated into ventiles of 1158 tracts for each age group per capita, allowing each dichotomous tract characteristic to be expressed as a continuous quantity: the proportion of tracts in a ventile containing that characteristic. For each age group, figures were produced to display the relationship between ventiles of age group proportion (see horizontal axes in Figure 1) and proportion of shops in each ventile of census tracts (see vertical axes in Figure 1), separately for vape shops and tobacco shops.
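The ventile construction described above can be sketched in code. A standard-library Python sketch with simulated tract data (the field names and simulated shop rate are illustrative, not the study's actual data; 23160 simulated tracts divide evenly into 20 ventiles of 1158, the paper's ventile size):

```python
import random

random.seed(1)

# Simulate tracts: an age-group proportion and a shop-presence indicator.
tracts = [{"p_age_25_34": random.random(), "has_shop": random.random() < 0.05}
          for _ in range(23160)]

# Rank tracts by the age-group proportion and cut into 20 equal groups.
ranked = sorted(tracts, key=lambda t: t["p_age_25_34"])
k = len(ranked) // 20                                  # tracts per ventile
ventiles = [ranked[i * k:(i + 1) * k] for i in range(20)]

# Per ventile: proportion of tracts containing a shop (the quantity on the
# vertical axis of Figure 1).
shares = [sum(t["has_shop"] for t in v) / k for v in ventiles]
print(k, len(shares))  # 1158 20
```

Ranking then slicing (rather than binning on raw values) guarantees equal-sized groups regardless of how the age proportions are distributed.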
##### Figure 1

At the census tract level, the relationship between ventiles of twelve separate age groups compared to percentage of tracts containing a vape or tobacco shop. Each ventile contains 1158 tracts.

This study utilized a cross-sectional ecological design, whereby variables were reflective of retailer/population characteristics in 2019, and analysis was conducted at the census tract level. In total, this study analyzed associations between retailer density and age groups at the census tract level using seventeen variables: 1) having a retailer which was tobacco-specific, 2) having a retailer which was vape-specific, 3) having a retailer which was both tobacco-specific and vape-specific, 4) having a retailer which was either tobacco-specific or vape-specific, 5) having a retailer which was neither tobacco-specific nor vape-specific, 6) proportion under 5 years old, 7) proportion 5–9 years, 8) proportion 10–14 years, 9) proportion 15–19 years, 10) proportion 20–24 years, 11) proportion 25–34 years, 12) proportion 35–44 years, 13) proportion 45–54 years, 14) proportion 55–64 years, 15) proportion 65–74 years, 16) proportion 75–84 years, and 17) proportion over 84 years. Least squares regression was used to test for significant linear relationships. A relatively low α=0.001 was used in assessing statistical significance, as the high sample size reflected sufficiently high statistical power to detect very small effect sizes.

The high breadth of age representation among the dataset's diverse set of census tracts suggested that a parabolic relationship (i.e., with older and younger age groups being significantly different from those in-between) may emerge between age and the presence of a tobacco or vape shop. Therefore, polynomial regression was used to test the relationship between ventiles of median age and proportion of tracts with vape/tobacco shops.
Using the R package plotrix in R v3.6.0 (R Foundation for Statistical Computing: Vienna, Austria), graphing techniques were used to produce a dual-axis plot comparing the shape of the association between median age and presence of a vape or tobacco shop with the association between median age and non-specific tobacco vendors. Tobacco-specific and vape-specific shops were far less frequent than non-specific shops. Since the magnitudes of these two typologies were very different, this visualization enabled us to compare the shape of association for these two meaningful subsets of licensed tobacco retailers in California (Figure 2).

##### Figure 2 In black, the relationship between median age and shops which are categorized as either vape-specific or tobacco-specific, at the census tract level. In red, the relationship between median age and shops which are neither categorized as vape-specific nor tobacco-specific

ArcGIS v10.7 (Esri: Redlands, CA) was used to produce maps that visualize the areas wherein age-related disparities appear most prominent. Point-coordinates of vape-specific storefronts and tobacco-specific storefronts are represented as distinct symbols atop a choropleth gradient map of age 25–34 years per capita. Maps were produced for the four most populous cities in California along with their immediately surrounding areas, with all cities drawn on the same scale (Figure 3).
##### Figure 3 With vape-specific shops as green triangles and tobacco-specific shops as blue squares, shop locations atop a census tract basemap for California’s four most populous cities, with white-black choropleth shading according to concentration of the population aged 25–34 years

## RESULTS

Visual assessment of the relationships between age category ventiles and the presence of both vape shops and tobacco shops conveyed a very strong, clear positive relationship for density of the age 25–34 category and a somewhat weaker, but also clear, positive relationship for density of the age 20–24 category (Figure 1). Linear regression found statistically significant associations of age group ventiles with all three tract characteristics (i.e. containing a tobacco shop, containing a vape shop, and containing either) for density of four of the twelve age groups: 10–14 years, 20–24 years, 25–34 years, and 55–64 years. The characteristic of containing a vape shop also exhibited a significant linear association with age 5–9 years, and the characteristic of containing a tobacco shop additionally exhibited significant linear associations with age 15–19 years, age 35–44 years, and age 45–54 years. Consistent with results in Figure 1, statistical procedures indicated significant negative associations with density of those aged 10–14 years, 15–19 years, 45–54 years, and 55–64 years, whereas the association with containing either a tobacco or vape shop was positive for age groups between these two extremes. The strongest association was the positive relationship between ventiles of age 25–34 years and having either a tobacco or vape shop, ranging from 1.7% of tracts containing either a tobacco or vape shop at the lowest age ventile (under 5.5% population age 25–34 years) to 12.3% at the highest ventile (over 24.9% population age 25–34 years).
Polynomial regression found a significant inverse parabolic relationship between ventiles of median age and the proportion of tracts with tobacco or vape shops (p for linear term <0.001; p for quadratic term <0.001). Conversely, polynomial regression did not find a significant relationship between ventiles of median age and proportion of tracts with non-specific tobacco retailers (p for linear term = 0.375; p for quadratic term = 0.067). The shape of these distinct associations is visually overlaid in Figure 2.

Geospatial analyses exhibited clustering in three of the four most populous cities in California (Figure 3). In Los Angeles, several concentrations of vape/tobacco shops appeared to exist, with perhaps the most distinct around Koreatown. In San Diego, a cluster appeared to exist along a portion of El Cajon Blvd about 2–3 miles south of San Diego State University. In San Jose, clear clustering was not apparent, although a relative dearth of vape and tobacco shops appeared to exist in neighboring Milpitas and Sunnyvale. In San Francisco, one notable cluster emerged across the agglomeration of businesses spanning the Tenderloin and the Financial District.

## DISCUSSION

Our study, using validated addresses of licensed tobacco and vaping shop retailers, showed that areas with high numbers of specialized tobacco/vaping retailers had high numbers of young adults, a relationship not observed for other age groups. These findings may indicate that tobacco and vaping specialty retailers are focusing on this young adult age demographic (20–34 years) for product marketing, access, and community-based engagement. According to a 2014 report from the US Surgeon General, the average age of smoking initiation was 15.3 years31.
Therefore, this study may elucidate a timeline for increasing smoking intensity whereby occasional use in youth and adolescence leads to more sustained tobacco behavior in the young adult years, with vendors responding to this demand by operating their businesses in areas with a higher concentration of this age demographic. The pattern likely also reflects enforcement of the minimum age to purchase tobacco products in the state, which was increased from 18 to 21 years in 2016. Conversely, the relatively muted relationship between density of age 15–19 years and the presence of a tobacco or vape shop may relate to other factors, such as inaccessibility caused by teenagers living with their parents or college bans of tobacco and vape products. For example, a large number of college campuses have banned consumption of tobacco and vape products, including the 9 campuses of the University of California and the 23 campuses of the California State University32. The effect of these policies may be a minor contributor to the null association observed between density of the 15–19 age group and presence of a tobacco or vape shop, and perhaps may also help to explain the relatively muted relationship with density of the 20–24 age group when compared to the somewhat clearer relationship observed for the 25–34 age group. Further research should be conducted to determine whether an expansion of targeted tobacco control policies (e.g. restrictions on product marketing) for geographies with relatively dense concentrations of the age 25–34 group would benefit tobacco control efforts in California. Little difference was observed in the relationship of age variables with tobacco-specific shop presence versus vape-specific shop presence.
This may indicate that age-dependent differences stratified by type of tobacco product may be difficult to observe in ecologically focused studies that use geographical location as the primary predicting variable. It may also indicate that age demographics may be skewed by other tobacco use factors, such as polytobacco use or transition between combustibles and ENDS use in this population33,34. Future research should focus on identifying geographically bound age-related associations for tobacco use stratified by specific tobacco products, while also assessing other sociological variables (e.g. sex, socioeconomic status, race) that may influence initiation, use, and transition.

### Limitations

This study should be considered primarily as hypothesis generating, as its ecological methodology does not permit attribution of findings to individuals. Specifically, the significant positive relationship between the concentration of those aged 25–34 years and the proportion of census tracts with tobacco-specific or vape-specific shops does not necessarily indicate that individuals in the 25–34 age group are more likely to use the products most associated with these relatively specialized shops. Moreover, though these shops are known to carry a broader array of tobacco and/or vape products, young adults may be purchasing the same products from non-specific retailers that also stock them. In addition, as this study leveraged categories listed on Yelp, it relies on retail owners’ self-categorization under the ‘Vape Shop’ or ‘Tobacco Shop’ labels, which could be subject to inconsistencies. However, the potential effect of any misclassification was minimized by our manual review of user-submitted images for these shops. Finally, this study treats licensed tobacco retailers as epidemiologically meaningful with respect to potential consumer uptake of ENDS compared to other tobacco products.
However, more rigorous research needs to be conducted to compare whether users most commonly obtain specific types of vape or tobacco products from these retailers, online stores, or other outlets. ## CONCLUSIONS This study adds hypothesis-generating information about disparities in spatial risk for tobacco/vaping use between different age groups. Use of vaping products is unevenly distributed among demographic groups, and this disparity may be perpetuated by an uneven distribution in storefronts that specialize in the sale of these products. Results of this study show that tobacco and vaping storefronts were much more commonly found in areas with higher concentrations of those aged 20–34 years, but this association was not found for younger or older age categories. These findings emphasize the need for anti-tobacco efforts to target groups that are at especially heightened risk for e-cigarette uptake and use, and these findings underscore the particular importance of generational differences in tobacco-related behavior.
# linear algebra – Finding equation of line through specified point and perpendicular to the following vector. I have the following question. Suppose we have the point $$P=(1,4,-3)$$ and the vector $$v=\langle -1,4,0 \rangle.$$ I want to find the equation of the line that passes through the point $$P$$ that is also perpendicular to $$v$$. So of course letting $$Q=(x,y,z)$$ be a point on this line tells us that $$\langle x-1,y-4,z+3\rangle \cdot \langle -1,4,0 \rangle = 0$$. This tells us that $$-x+4y=15$$. My question is how do we retrieve the parametric, vector, and symmetric form of the line from this equation? Thanks again. Krull.
# Why isn't the monetary value of the first transaction used in the gamma-gamma spend model? I've been doing some customer value calculations using the Lifetimes library in python. This library employs the approach found in Fader, Hardie, and Lee's paper on using iso-value curves for customer base analysis. This approach is also implemented in the BTYD (Buy 'Til You Die) package for R. In this approach, a gamma-gamma spend model is used to estimate the average order value for each customer (gamma distribution) and the average of these average values across all customers (gamma distribution). As a customer makes more purchases, their expected spend moves closer to their observed average and further away from the average of all customers' averages. The derivation is discussed in Hardie's paper, and the intuition is discussed on page 20 of Fader, Hardie, and Lee. In Fader, Hardie, and Lee (page 20), it is asserted that... If no repeat purchases are observed... our best guess is that the individual's average transaction value in the future is simply the mean of the overall transaction value distribution This means that the first purchase that an individual makes is not enough to learn anything about what their average spend might be, and it is totally ignored. In the formulas presented, a single-purchase customer's values are not used because their repeat purchase frequency is 0. It is unclear why the model could not be adjusted to use first purchases, thus including observations from the overwhelming number of single-purchase customers in most data sets. The paper does not intuitively explain why this information is not used. If the overall average is \$50 and a first-time customer spends \$5, it seems reasonable to assume that their average value may be lower than that of most customers, right?
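To make the quoted assertion concrete: in the gamma-gamma model, the conditional expectation of a customer's mean transaction value is commonly written as a weighted average of the population mean and the customer's observed mean, with a weight that grows with the number of repeat transactions x. The sketch below (parameter values hypothetical; verify the exact formula against Fader and Hardie's gamma-gamma note before relying on it) shows why x = 0 collapses to the population mean:

```python
def expected_avg_value(m_x, x, p, q, gamma):
    """Gamma-gamma conditional expectation of a customer's mean transaction value.

    m_x: observed average repeat-transaction value; x: number of repeat transactions.
    p, q, gamma: population-level gamma-gamma parameters (hypothetical values in tests).
    The result is a weighted average of the population mean and the observed mean.
    """
    population_mean = (gamma * p) / (q - 1)
    # Weight on the individual's observed mean grows with the number of repeats.
    w = (p * x) / (p * x + q - 1)
    return (1 - w) * population_mean + w * m_x
```

At x = 0 the weight w is exactly zero, so the observed value m_x never enters; that is the formal counterpart of "our best guess is simply the mean of the overall distribution." Using first purchases would mean changing the model's likelihood, not just plugging x = 1 into this formula.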
# hpfem/esco2012-boa

\title{New Quadratic Solid-Shell Elements and their Evaluation on Popular Benchmark Problems} \tocauthor{F. Abed-Meraim} \author{} \institute{} \maketitle \begin{center} {\large \underline{Farid Abed-Meraim}}\\ LEM3 UMR CNRS 7239, Arts et Metiers ParisTech, 4 rue Augustin Fresnel, 57078 Metz\\ {\tt farid.abed-meraim@ensam.eu} \\ \vspace{4mm}{\large Vuong-Dieu Trinh}\\ LEM3 UMR CNRS 7239, Arts et Metiers ParisTech, 4 rue Augustin Fresnel, 57078 Metz\\ {\tt vuong-dieu.trinh@metz.ensam.fr} \\ \vspace{4mm}{\large Alain Combescure}\\ LaMCoS UMR CNRS 5259, INSA-Lyon, 18, 20 rue des Sciences, 69621 Villeurbanne\\ {\tt alain.combescur@insa.lyon.fr} \end{center} \section*{Abstract} In recent years, considerable effort has been devoted to the development of 3D finite elements able to model thin structures (Cho et al., 1998; Sze and Yao, 2000; Abed-Meraim and Combescure, 2002; Vu-Quoc and Tan, 2003; Chen and Wu, 2004). To this end, coupling solid and shell formulations proved to be an interesting strategy, providing continuum finite element models that can be efficiently used for structural applications. In the present work, two solid-shell elements are formulated (a 20-node and a 15-node element) based on a purely three-dimensional approach. The advantages of these elements are shown through the analysis of various structural problems. Note that their main advantage is to allow complex structural shapes to be simulated without classical problems of connecting zones meshed with different element types. These solid-shell elements have a special direction called the ``thickness'', along which a set of integration points are located. Reduced integration is also used to prevent some locking phenomena and to increase computational efficiency.
Focus will be placed here on linear benchmark problems, where it is shown that these solid-shell elements perform much better than their counterparts, conventional solid elements. \bibliographystyle{plain} \begin{thebibliography}{10} \bibitem{Cho-Park-Lee} {\sc C. Cho and H.C. Park and S.W. Lee}. {Stability analysis using a geometrically nonlinear assumed strain solid shell element model}. Finite Elem. Analysis Des. 29 (1998) 121-135. \bibitem{Sze-Yao} {\sc K.Y. Sze and L.Q. Yao}. {A hybrid stress ANS solid-shell element and its generalization for smart structure modeling. Part I-solid-shell element formulation}. Int. J. Num. Meth. Eng. 48 (2000) 545-564. \bibitem{Abed-Meraim-Combescure} {\sc F. Abed-Meraim and A. Combescure}. {SHB8PS - a new adaptive, assumed-strain continuum mechanics shell element for impact analysis}. Comp. and Struct. 80 (2002) 791-803. \bibitem{Vu-Quoc-Tan} {\sc L. Vu-Quoc and X.G. Tan }. {Optimal solid shells for non-linear analyses of multilayer composites. I. Statics}. Comp. Meth. Applied Mech. Eng. 192 (2003) 975-1016. \bibitem{Chen-Wu} {\sc Y.I. Chen and G.Y. Wu}. {A mixed 8-node hexahedral element based on the Hu-Washizu principle and the field extrapolation technique}. Structural Engineering and Mechanics 17 (2004) 113-140. \end{thebibliography}
# Required Rpm For Ball Mill ### Page 1 Ball Milling Theory - freeshell Once you know the ideal speed of rotation for your mill jars, you will need to design your mill around this critical parameter. With most ball mill designs, you have two areas of speed reduction to tweak: from the motor drive shaft to the drive pulley and from the roller bar to the milling jar. ### Electromechanical Dynamic Behaviour and Start-Up ... Therefore, the ball mill model is typically verified with a reduced proportion of prototypes. The drive motor of the large ball mill is usually required to start with a reduced-voltage method . Therefore, the large ball mill usually uses air clutch to assist start-up. During the start-up process, the motor is started up to the rated speed. ### Ball Mills - Mineral Processing & Metallurgy If N = 15 rpm nc obtained = 75 %. Ball Mill Lining. The mill lining can be made of rubber or different types of steel (manganese or Ni-hard) with liner types according to the customers requirements. For special applications we can also supply porcelain, basalt and other linings. Fig. 3. Rubber lining, grate mill. Ball Mill Charge volume ### 211 questions with answers in BALL MILLING | Science topic Temperature, ball-powder ratio, ball size and energy of the ball milling system can and do play a role in altering the amount of stearic acid required to prevent cold welding. ### how to calculate rpm of mill in cement plant Ball Mill Critical Speed. A Ball Mill Critical Speed , This video should the working principle of a ball mill and the ball action at , We have all the laboratory and plant equipment you. ... Talk with the Experts at Paul O Abbe about your process requirements and Ball Mill Loading, Wet Milling, Size Reduction and Mill Speed - Critical Speed needs.
### Planetary Ball Mill PM 100 - RETSCH - highest fineness Planetary Ball Mills are used wherever the highest degree of fineness is required.In addition to well-proven mixing and size reduction processes, these mills also meet all technical requirements for colloidal grinding and provide the energy input necessary for mechanical alloying.The extremely high centrifugal forces of a planetary ball mill result in very high pulverization energy and ... ### Practical 1 : Ball Milling | TF Lab 1 Practical 1: Title: Ball Milling Objective: To grind the coarse salt to a smaller size by using a ball mill and to obtain the particle size distribution of the initial and the sieved final mixture. Introduction: 'Ball milling is a method used to break down the … ### End Mill Speed and Feed Calculator - Martin Chick & Associates End Mill Speed & Feed Calculator. I am creating a new calculator based on your feedback. Please fill out the form below with feeds and speeds that work for you and I … ### Ball Mill - RETSCH - powerful grinding and homogenization RETSCH is the world leading manufacturer of laboratory ball mills and offers the perfect product for each application. The High Energy Ball Mill E max and MM 500 were developed for grinding with the highest energy input. The innovative design of both, the mills and the grinding jars, allows for continuous grinding down to the nano range in the shortest amount of time - with only minor warming ... ### Ball Milling | Material Milling, Jet Milling | AVEKA Ball milling is a size reduction technique that uses media in a rotating cylindrical chamber to mill materials to a fine powder. As the chamber rotates, the media is lifted up on the rising side and then cascades down from near the top of the chamber. With this motion, the particles in between the media and chamber walls are reduced in size by ... 
### Ball Mill Critical Speed - Mineral Processing & Metallurgy A Ball Mill Critical Speed (actually ball, rod, AG or SAG) is the speed at which the centrifugal forces equal gravitational forces at the mill shell’s inside surface and no balls will fall from its position onto the shell. The imagery below helps explain what goes on inside a mill as speed varies. Use our online formula. The mill speed is typically defined as the percent of the Theoretical ... ### ball mill rpm - Newbie Questions - Forum ball mill rpm - posted in Newbie Questions: ok so I know this is a question I should be able to figure out on my own I thought I know of some web sites that had the equation but I must have been wrong any way the jar is 7.75 inchs wide and 10.5 inchs long and I`m using 50 cal round musket balls for media (it will be half full of media) so what rpm should I run it at Thanks so much ### High energy ball milling process for nanomaterial synthesis The impact energy of the milling balls in the normal direction attains a value of up to 40 times higher than that due to gravitational acceleration. Hence, the planetary ball mill can be used for high-speed milling. Schematic view of motion of the ball and powder mixture. ### TECHNICAL NOTES 8 GRINDING R. P. King 8.1.3 Power drawn by ball, semi-autogenous and autogenous mills A simplified picture of the mill load is shown in Figure 8.3 Ad this can be used to establish the essential features of a model for mill power. The torque required to turn the mill is given by Torque T Mcgdc T f (8.9) Where Mc is the total mass of the charge in the mill and Tf is ... ### RPMs on a ball mill and a star rolling machine Okay, I'm "considering" building a combo machine for ball milling and star rolling. The thing I'm confused about is the RPM of the drums. I always thought a ball mill should be fairly slow because rock tumblers that I've seen are fairly slow - maybe 60 revs per minute. I guess I'm wrong. 
I've seen recommendations of much higher speeds. Of ### Ball Mill Design/Power Calculation The ball mill motor power requirement calculated above as 1400 HP is the power that must be applied at the mill drive in order to grind the tonnage of feed from one size distribution. The following shows how the size or select the matching mill required … ### Ball Mill - an overview | ScienceDirect Topics The terms high-speed vibration milling (HSVM), high-speed ball milling (HSBM), and planetary ball mill (PBM) are often used. The commercial apparatus are PBMs Fritsch P-5 and Fritsch Pulverisettes 6 and 7 classic line, the Retsch shaker (or mixer) mills ZM1, MM200, MM400, AS200, the Spex 8000, 6750 freezer/mill SPEX CertiPrep, and the SWH-0.4 ... ### Mill Speed - Critical Speed - Paul O. Abbe Mill Speed - Critical Speed. Mill Speed . No matter how large or small a mill, ball mill, ceramic lined mill, pebble mill, jar mill or laboratory jar rolling mill, its rotational speed is important to proper and efficient mill operation. Too low a speed and little energy is imparted on the product. ### Ball Nose Milling Strategy Guide - In The Loupe Ball Nose Milling Without a Tilt Angle. Ball nose end mills are ideal for machining 3-dimensional contour shapes typically found in the mold and die industry, the manufacturing of turbine blades, and fulfilling general part radius requirements.To properly employ a ball nose end mill (with no tilt angle) and gain the optimal tool life and part finish, follow the 2-step process below (see Figure 1). ### Ball Mill Parameter Selection – Power, Rotate Speed, Steel ... 1 Calculation of ball mill capacity. The production capacity of the ball mill is determined by the amount of material required to be ground, and it must have a certain margin when designing and selecting. 
There are many factors affecting the production capacity of the ball mill, in addition to the nature of the material (grain size, hardness, density, temperature and humidity), the degree of ... ### TECHNICAL SPECIFICATIONS The existing Ball mill system is envisaged to be used in combination with Roller Press and VSK separator in semi finish mode. The rejects from VSK separator will go to ... Motor required KW : 500 RPM : 250-750 Duct connection Separator to cyclone/s Cyclone/s to fan Fan to separator Duct to filter Exhaust duct Silencer 541 FV2 ### Mill Speed - an overview | ScienceDirect Topics Dipak K. Sarkar, in Thermal Power Plant, 2015 4.6.1 Low-speed mill. Mills operating below 75 rpm are known as low-speed mills.Low-speed units include ball or tube or drum mills, which normally rotate at about 15–25 rpm.Other types of mills, e.g., ball-and-race and roll-and-race mills, that generally fall into the medium-speed category may also be included in this category provided their ... ### Ball mill - Wikipedia A ball mill is a type of grinder used to grind or blend materials for use in mineral dressing processes, paints, pyrotechnics, ceramics, and selective laser sintering.It works on the principle of impact and attrition: size reduction is done by impact as the balls drop from near the top of the shell. A ball mill consists of a hollow cylindrical shell rotating about its axis. T ### Speeds And Feeds For Milling With End Mills Milling Speeds and Feeds Charts The most important aspect of milling with carbide end mills is to run the tool at the proper rpm and feed rate. We have broken these recommendations down into material categories so you can make better decisions with how to productively run your end mills. ### Ball Mill - Fireworks Cookbook Ball Mill. $624.99 –$ 1,599.49 Each. Used for grinding materials to an extra fine powder. Essential part of making fast black power and other fast burning comps. 
Base is double-walled 1/8″ aluminum extrusion with 3 web walls (pictured) 13″ wide and 1 1/8″ tall. Lead media is a hardened ball … ### Size reduction with Planetary Ball Mills - McCrone Planetary Ball Mills RETSCH’s innovative Planetary Ball Mills meet and exceed all requirements for fast and reproducible grinding down to the nano range. They are used for the most demanding tasks, from routine sample processing to colloidal grinding and advanced materials development. 2 Planetary Ball MillS ### required rpm for ball mill | worldcrushers ideal rpm for attrition ball mill. … required to turn a ball mill is approximated by: P = 0.285 d (1.073- j ) m n where d is the internal diameter in metres, …. ### Choose right speed to improve ball mill grinding ... In order for the ball mill to perform normal grinding operations, its working speed must be less than the critical speed. Ball mills generally work in a “dropped” state. There are many working speeds to achieve this working state, but there must be the most favorable working speed. 1. If the rotating speed is low, the ball in the ball mill ... ### Variables in Ball Mill Operation | Paul O. Abbe® The first problem will ball mills is that we cannot see what is occurring in the mill. The other problem is that many of the independent variables are non-linear or have paradoxical effects on the end results. In ball milling of dry solids the main independent variables are mill diameter, mill speed, media size, solids loading and residence time. ### Micro Ball Mill GT300-Beijing Grinder instrument equipment ... Micro Ball Mill GT300. The Micro ball mill GT300 is designed for modern laboratory applications. It can process small amount and large batch sample, for example: plants, animal tissue and small quantity samples in dry ,wet or cryogenic condition. It can mix and homogenize powders and suspensions in only a few seconds. ### Which of the following gives the work required for size ... Dk Bose : 7 months ago. 
200 mesh is very small and when the particle diameter is very less Rittinger's law (n=2) is useful. Related Questions on Mechanical Operations. Which of the following gives the work required for size reduction of coal to -200 mesh in a ball mill most accurately ? A. Rittinger's law. B. Kick's law. C. Bond's law. ### Effect of Ball Mill Parameters’ Variation on the Particles ... It was discovered that the milling speed and diameter of the ball have a great influence on the MA leaching process with an optimum speed of 600 $${rpm}$$ and diameter of 10 $$\mat{mm}$$ (r 0 = 22.0 $${\mu m}$$) yielding values $${r}_{{retained}}^{600{rpm}}=0.76{ \mu m}$$ and $${r}_{{reacted}}^{600{rpm}}=21.24{ \mu m}$$ when compared with ... ### AMIT 135: Lesson 7 Ball Mills & Circuits – Mining Mill ... The motor power draw required to turn a mill from rest to the operating speed includes the energy required for the initial starting torque and mechanical arrangements to rotate the mill. It is generally accepted that practical mill power (PM ) is a function of mill capacity and diameter, i.e.,P M = Mill Constant * (Mill Diameter ) n ### STEPPING FORWARD: USING VARIABLE SPEED DRIVES FOR ... The use of VSDs makes it very easy to set the mill speed by just modifying this value on a touchscreen. But this is not only an option for new installations. In installations where SAG and Ball mills have been working for several years at fixed speed, but the speed regulation has been observed to be necessary VSDs can also be integrated very ... ### TECHNICAL AND COMMERCIAL BENEFITS OF GEARLESS MILL … and 13.0 rpm was installed in the minerals industry. Since then, numerous GMDs for ball mills and SAG mills have been installed in the minerals industry. 
The main advantages are that the GMD is adjustable in speed, and thus can fulfill the customer requirements with respect to flexibility and adjustability of the process, and that it does ### Ball Mill Design/Power Calculation Ball Mill Power Calculation Example A wet grinding ball mill in closed circuit is to be fed 100 TPH of a material with a work index of 15 and a size … ### Suggested RPM for end mill - Practical Machinist Using a 1.25 ball mill, I milled some troughs for the parts to lie in, then drilled and tapped 3/8-16 threaded holes on either side of the trough. I made a strap 3/4" thick to use as a clamp and counterbored the clamp for nylon pads, hoping that these pads … ### Calculate and Select Ball Mill Ball Size for Optimum Grinding In Grinding, selecting (calculate) the correct or optimum ball size that allows for the best and optimum/ideal or target grind size to be achieved by your ball mill is an important thing for a Mineral Processing Engineer AKA Metallurgist to do. Often, the ball used in ball mills is oversize “just in case”. Well, this safety factor can cost you much in recovery and/or mill liner wear and tear. ### How to calculate planetry ball mill parmeters? Bulk ultra fine grained (UFG) Zn was produced by in situ consolidation of Zn elemental powder with ball milling at room temperature and annealed for 1 hour at 200°C after pure Zn milled. 
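The "critical speed" these excerpts keep referring to follows from equating centrifugal and gravitational force at the shell (the condition described in the Mineral Processing & Metallurgy excerpt above). A small calculator in metric units, using that first-principles derivation, reduces to the widely quoted rule of thumb Nc ≈ 42.3/√D:

```python
import math

def critical_speed_rpm(mill_diameter_m, ball_diameter_m=0.0):
    """Speed at which centrifugal force at the shell equals gravity, in rev/min.

    Derived from omega^2 * r = g, with r the effective radius (mill radius
    minus ball radius). For small media this reduces to Nc ~= 42.3 / sqrt(D)
    with D in metres.
    """
    r = (mill_diameter_m - ball_diameter_m) / 2.0
    omega = math.sqrt(9.81 / r)            # angular speed in rad/s
    return omega * 60.0 / (2.0 * math.pi)  # convert to rev/min

def operating_speed_rpm(mill_diameter_m, fraction=0.70):
    """Typical working speed as a fraction (commonly 65-75%) of critical speed."""
    return fraction * critical_speed_rpm(mill_diameter_m)
```

For a 2 m diameter mill this gives a critical speed of roughly 30 rpm, so a typical operating speed of about 20-22 rpm, consistent with the "slower than you might think" advice in the forum excerpts.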
# How to convert Sars-CoV-2 ORF1ab codon positions to ORF1b positions? Some websites mention amino acid mutations as ORF1ab, most however use ORF1a/b instead. So how can I translate from one to the other? What do I need to subtract from say ORF1ab:4588I to get the corresponding N in ORF1b:<N>I? ORF1ab:4588I corresponds to ORF1b:187I so one needs to subtract 4401 to go from 1ab to 1b and add 4401 to go the other way.
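The arithmetic in the answer can be wrapped in a tiny helper (the constant 4401 is the offset stated above, i.e. the number of ORF1a codons preceding ORF1b in the ORF1ab polyprotein):

```python
ORF1A_OFFSET_AA = 4401  # ORF1a codons preceding ORF1b in the ORF1ab polyprotein

def orf1ab_to_orf1b(pos_1ab):
    """Convert an ORF1ab amino-acid position to the corresponding ORF1b position."""
    if pos_1ab <= ORF1A_OFFSET_AA:
        raise ValueError("position lies within ORF1a, not ORF1b")
    return pos_1ab - ORF1A_OFFSET_AA

def orf1b_to_orf1ab(pos_1b):
    """Convert an ORF1b amino-acid position back to ORF1ab numbering."""
    return pos_1b + ORF1A_OFFSET_AA
```

So ORF1ab:4588I maps to ORF1b:187I, and positions at or below 4401 should instead be reported in ORF1a numbering unchanged.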
# Fixing Prometheus WebUI behind Traefik

Helm, Kubernetes, Prometheus, and Traefik

If you’re using Traefik as a reverse proxy for the Prometheus dashboard using Host rules, you may notice that you can’t reach the Prometheus UI. Thanks to this commit, an extra rule, Host:/, is created in addition to the one(s) you requested. You need to delete that rule. I did so by editing the ingress object manifest for the Prometheus server and deleting the path key with a value of /.

## References

• https://github.com/helm/charts/tree/master/stable/prometheus
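For orientation, here is a sketch of what the edited ingress might look like. The resource name, host, service name, and API version are all hypothetical and depend on your chart version; the point is only which `path` entry to delete:

```yaml
# Illustrative ingress manifest (names and API version are hypothetical).
# Open it with something like: kubectl edit ingress prometheus-server
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-server
spec:
  rules:
    - host: prometheus.example.com
      http:
        paths:
          - backend:
              serviceName: prometheus-server
              servicePort: 80
          # A second entry with "path: /" is the extra catch-all rule
          # added by the chart; delete that entry and keep the one above.
```

After saving, Traefik should regenerate its routes with only the Host-based rule.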
To evaluate the outcomes of deep anterior lamellar keratoplasty (DALK) in treating keratoconus in relation to cone base diameter (CBD). A retrospective study. Sixty-one eyes of 49 keratoconus patients who underwent DALK between 2009 and 2018 were enrolled. Preoperative and postoperative uncorrected visual acuity (UCVA), best-corrected visual acuity (BCVA), spherical equivalent, and astigmatism were measured. Scheimpflug tomography (Pentacam) was used to measure the cone base area (CBA) and CBD using MATLAB software. The mean age of the patients was 20.8 ± 6.1 years old, and the mean follow-up time was 27.3 ± 15.2 months. Mean UCVA improved from 1.23 ± 0.48 to 0.57 ± 0.27 (LogMAR, 95% CI [0.52, 0.80]; P < 0.001), whereas mean BCVA improved from 0.98 ± 0.55 to 0.18 ± 0.13 (95% CI [0.66, 0.94]; P < 0.001). The mean spherical equivalent decreased by 4.53 ± 5.65 D (95% CI [- 6.25, - 2.82]; P < 0.001), with little change in astigmatism (95% CI [- 1.39, 0.64]; P = 0.457). The postoperative BCVA in the patients with CBD < 5.07 mm and corneal curvature ≥ 55D was significantly better than those whose CBD ≥ 5.07 mm (0.14 ± 0.09 vs 0.25 ± 0.15, P = 0.001). The follow-up time was negatively correlated with the BCVA (P = 0.004). In this study, outcomes of DALK in keratoconus were related to CBD and corneal curvature. Patients with large CBD (≥ 5.07 mm) where the corneal curvature ≥ 55D are more likely to have poor visual outcomes after DALK. © 2022. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
# csvy: Design-based estimation of domain means with monotonicity constraints

In csurvey: Constrained Regression for Survey Data

csvy R Documentation

## Design-based estimation of domain means with monotonicity constraints.

### Description

The csvy function performs design-based domain mean estimation with monotonicity and block-monotone shape constraints.

### Usage

csvy(formula, design, subset = NULL, family = stats::gaussian(), nD = NULL, level = 0.95, n.mix = 100L, test = TRUE, ...)

## S3 method for class 'csvy'
barplot(height, beside = TRUE, ...)

## S3 method for class 'csvy'
coef(object, ...)

## S3 method for class 'csvy'
confint(object, parm, level = 0.95, type = c("link", "response"), ...)

## S3 method for class 'csvy'
deff(object, ...)

## S3 method for class 'csvy'
ftable(x, ...)

## S3 method for class 'csvy'
SE(object, ...)

## S3 method for class 'csvy'
summary(object, ...)

## S3 method for class 'csvy'
svycontrast(stat, contrasts, ...)

## S3 method for class 'csvy'
vcov(object, ...)

## S3 method for class 'csvy'
predict(object, newdata, type = c("link", "response"), se.fit = FALSE, level = 0.95, ...)

## S3 method for class 'csvy'
fitted(object, ...)

### Arguments

formula A formula object which gives a symbolic description of the model to be fitted. It has the form "response ~ predictor". The response is a vector of length n. A predictor can be a non-parametrically modelled variable with a monotonicity or block ordering restriction, or a combination of both. For a non-parametrically modelled predictor, the user indicates the relationship between the domain mean and a predictor x in the following way. Assume that μ is the vector of domain means and x is a predictor:

incr(x): μ is increasing in x.

decr(x): μ is decreasing in x.

block.Ord(x): μ has a block ordering in x.

conc(x): μ is concave in x.

conv(x): μ is convex in x.

incr.conc(x): μ is increasing and concave in x.

incr.conv(x): μ is increasing and convex in x.
decr.conc(x): μ is decreasing and concave in x. decr.conv(x): μ is decreasing and convex in x. design A survey design, which must be specified by the svydesign routine in the survey package. subset Expression to select a subpopulation. nD Total number of domains. family A parameter indicating the error distribution and link function to be used in the model. It can be a character string naming a family function or the result of a call to a family function. This is borrowed from the glm routine in the stats package. There are four families: Gaussian, binomial, poisson, and Gamma. level Confidence level of the approximate confidence surfaces. The default is 0.95. n.mix The number of simulations used to get the approximate confidence intervals or surfaces. If n.mix = 0, no simulation will be done and the face of the final projection will be used to compute the covariance matrix of the constrained estimate. The default is n.mix = 100L. test A logical scalar. If test == TRUE, then the p-value for the test H_0:θ is in V versus H_1:θ is in C is returned. C is the constraint cone of the form \{β: Aβ ≥ 0\}, and V is the null space of A. The default is test = TRUE. ... This term includes two other arguments: deff and multicore. deff = TRUE will request a design effect from svymean. multicore = TRUE will use multicore package to distribute subsets over multiple processors. The coef function returns estimated systematic component of a csvy object. The confint function returns the confidence interval of a csvy object. If type = "response", then the interval is for the mean; if type = "link", then the interval is for the systematic component. parm An argument in the generic confint function in the stats package. For now, this argument is not in use. The following arguments are used in the predict function. object A csvy object. newdata A data frame in which to look for variables with which to predict. If omitted, the fitted values are used. 
type If the response is Gaussian, type = "response" and type = "link" give the predicted mean; if the response is binomial, poisson or Gamma, type = "response" gives the predicted mean, and type = "link" gives the predicted systematic component.

se.fit Logical switch indicating if confidence intervals are required.

The following arguments are used in the barplot function. See barplot.svystat for more details.

height Analysis result.

beside Grouped, rather than stacked, bars.

The following arguments are used in the ftable function. See ftable.svystat for more details.

x A csvy object.

The following arguments are used in the svycontrast function. See svycontrast for more details.

stat A csvy object.

contrasts A vector or list of vectors of coefficients, or a call or list of calls.

### Details

In a one-dimensional situation, we assume that the \bar{y}_{U_t} are non-decreasing over T domains. If this monotonicity is not used in estimation, the population domain means can be estimated by the Horvitz-Thompson estimator or the Hajek estimator. To use the monotonicity information, the csvy function starts from the Hajek estimates \bar{y}_{S_t} = (∑_{k\in S_t}y_k/π_k)/N_t, and the isotonic estimator (\hat{θ}_1,…,\hat{θ}_T)^T minimizes the weighted sum of squared deviations from the sample domain means over the set of ordered vectors; that is, \bold{\hat{θ}} is the minimizer of (\tilde{\bold{y}}_{S} - \bold{θ})^T \bold{W}_S (\tilde{\bold{y}}_{S} - \bold{θ}) subject to \bold{Aθ} ≥ \bold{0}, where \bold{W}_S is the diagonal matrix with elements \hat{N}_1/\hat{N},…,\hat{N}_T/\hat{N}, \hat{N} = ∑_{t=1}^T \hat{N}_t, and \bold{A} is an m\times T constraint matrix imposing the monotonicity constraint.

Domains can also be formed from multiple covariates. In that case, a grid will be used to represent the domains.
For example, if there are two predictors x_1 and x_2, and x_1 has values on D_1 domains: 1,…,D_1, x_2 has values on D_2 domains: 1,…,D_2, then the domains formed by x_1 and x_2 will be a D_1\times D_2 by 2 grid. To get 100(1-α)\% approximate confidence intervals or surfaces for the domain means, we apply the method in Meyer, M. C. (2018). \hat{p}_J is the estimated probability that the projection of y_s onto \cal C lands on \cal F_J, and the \hat{p}_J values are obtained by simulating many normal random vectors with estimated domain means and covariance matrix I, where I is a M \times M matrix, and recording the resulting sets J. The user needs to provide a survey design, which is specified by the svydesign function in the survey package, and also a data frame containing the response, predictor(s), domain variable, sampling weights, etc. So far, only stratified sampling design with simple random sampling without replacement (STSI) is considered in the examples in this package. Note that when there might be any empty domain, the user must specify the total number of domains in the nD argument. For binomial and Poisson families use family=quasibinomial() and family=quasipoisson() to avoid a warning about non-integer numbers of successes. The ‘quasi’ versions of the family objects give the same point estimates and standard errors and do not give the warning. ### Value The output is a list of values used for estimation, inference and visualization. Main output include: survey.design The survey design used in the model. etahat Estimated shape-constrained domain systematic component. etahatu Estimated unconstrained domain systematic component. muhat Estimated shape-constrained domain means. muhatu Estimated unconstrained domain means. lwr Approximate lower confidence band or surface for the shape-constrained domain mean estimate. upp Approximate upper confidence band or surface for the shape-constrained domain mean estimate. 
lwru Approximate lower confidence band or surface for the unconstrained domain mean estimate. uppu Approximate upper confidence band or surface for the unconstrained domain mean estimate. amat The k \times M constraint matrix imposing shape constraints in each dimension, where M is the total number of domains. grid A M \times p grid, where p is the total number of predictors or dimensions. nd A vector of sample sizes in all domains. Ds A vector of the number of domains in each dimension. acov Constrained mixture covariance estimate of domain means. cov.un Unconstrained covariance estimate of domain means. CIC The cone information criterion proposed in Meyer(2013a). It uses the "null expected degrees of freedom" as a measure of the complexity of the model. See Meyer(2013a) for further details of cic. CIC.un The cone information criterion for the unconstrained estimator. zeros_ps Index of empty domain(s). nd Sample size of each domain. pval p-value of the one-sided test. family The family parameter defined in the formula. df.residual The observed degree of freedom for the residuals of a csvy fit. df.null The degree of freedom for the null model of a csvy fit. domain Index of each domain in the data set contained in the survey.design object. null.deviance The deviance for the null model of a csvy fit. deviance The residual deviance of a csvy fit. ans.unc_cp A data frame including the grid which is the combination of domains in each predictor, the domain mean estimate, and the constrained standard error. Xiyue Liao ### References Xu, X. and Meyer, M. C. (2021) One-sided testing of population domain means in surveys. Oliva, C., Meyer, M. C., and Opsomer, J.D. (2020) Estimation and inference of domain means subject to qualitative constraints. Survey Methodology Meyer, M. C. (2018) A Framework for Estimation and Inference in Generalized Additive Models with Shape and Order Restrictions. Statistical Science 33(4) 595–614. Wu, J., Opsomer, J.D., and Meyer, M. C. 
(2016) Survey estimation of domain means that respect natural orderings. Canadian Journal of Statistics 44(4) 431–444.

Meyer, M. C. (2013a) Semi-parametric additive constrained regression. Journal of Nonparametric Statistics 25(3), 715.

Lumley, T. (2004) Analysis of complex survey samples. Journal of Statistical Software 9(1) 1–19.

### See Also

plotpersp, to create a 3D plot for a csvy object with at least two predictors.

incr, to specify an increasing order constraint in a csvy formula.

decr, to specify a decreasing order constraint in a csvy formula.

conc, to specify a concave order constraint in a csvy formula.

conv, to specify a convex order constraint in a csvy formula.

incr.conc, to specify an increasing-concave order constraint in a csvy formula.

decr.conv, to specify a decreasing-convex order constraint in a csvy formula.

decr.conc, to specify a decreasing-concave order constraint in a csvy formula.

incr.conv, to specify an increasing-convex order constraint in a csvy formula.

block.Ord, to specify a block ordering constraint in a csvy formula.

svyby, to compute survey statistics on subsets of a survey defined by factors.

svymean, to compute means for data from complex surveys.

svyglm, to fit a generalised linear model to data from a complex survey design, with inverse-probability weighting and design-based standard errors.
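In one dimension, the weighted least-squares projection onto ordered vectors described in the Details section is the classic pool-adjacent-violators algorithm (PAVA). The sketch below is an illustrative Python implementation of that projection only; it is not the csurvey package's code (which is R) and ignores all of the survey-design machinery.

```python
def pava_increasing(y, w):
    """Weighted isotonic fit: minimize sum_t w_t * (y_t - theta_t)^2
    subject to theta_1 <= ... <= theta_T (pool-adjacent-violators)."""
    # each block holds: weighted mean, total weight, number of pooled points
    blocks = []
    for yt, wt in zip(y, w):
        blocks.append([yt, wt, 1])
        # pool adjacent blocks while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, n2 = blocks.pop()
            v1, w1, n1 = blocks.pop()
            blocks.append([(w1 * v1 + w2 * v2) / (w1 + w2), w1 + w2, n1 + n2])
    # expand pooled blocks back to one fitted value per domain
    fit = []
    for v, _, n in blocks:
        fit.extend([v] * n)
    return fit

# Hajek-style domain means 3, 1, 2, 4 with equal weights:
print(pava_increasing([3.0, 1.0, 2.0, 4.0], [1.0, 1.0, 1.0, 1.0]))
# → [2.0, 2.0, 2.0, 4.0]
```

The weights play the role of the \hat{N}_t/\hat{N} entries of \bold{W}_S: a heavier domain pulls the pooled mean of its block toward its own value.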
### Examples # Example 1: monotonic in one dimension data(api) mcat <- apipop$meals for(i in 1:10){mcat[trunc(apipop$meals/10) + 1 == i] <- i} mcat[mcat == 100] <- 10 mcat <- factor(mcat) M <- 10 # total number of domains nsp<-c(200, 200, 200) ## sample sizes per stratum es<-sample(apipop$snum[apipop$stype=='E'&!is.na(apipop$avg.ed)&!is.na(apipop$api00)],nsp[1]) ms<-sample(apipop$snum[apipop$stype=='M'&!is.na(apipop$avg.ed)&!is.na(apipop$api00)],nsp[2]) hs<-sample(apipop$snum[apipop$stype=='H'&!is.na(apipop$avg.ed)&!is.na(apipop$api00)],nsp[3]) sid<-c(es, ms, hs) pw <- 1:6194*0 + 4421 / nsp[1] pw[apipop$stype == 'M'] <- 1018 / nsp[2] pw[apipop$stype == 'H'] <- 755 / nsp[3] fpc <- 1:6194*0 + 4421 fpc[apipop$stype == 'M'] <- 1018 fpc[apipop$stype == 'H'] <- 755 strsamp <- cbind(apipop, mcat, pw, fpc)[sid, ] dstrat <- svydesign(ids = ~snum, strata = ~stype, fpc = ~fpc, data = strsamp, weight = ~pw) rds <- as.svrepdesign(dstrat, type = "JKn") ansc1 <- csvy(api00 ~ decr(mcat), design = rds, family = gaussian(), nD = M) # summary(ansc1) # Example 2: unconstrained in x1 and increasing in x2 and x3 D1 <- 5 D2 <- 5 D3 <- 6 Ds <- c(D1, D2, D3) M <- cumprod(Ds)[3] # total number of domains x1vec <- 1:D1 x2vec <- 1:D2 x3vec <- 1:D3 grid <- expand.grid(x1vec, x2vec, x3vec) N <- M*100*4 Ns <- rep(N/M, M) mu.f <- function(x) { mus <- x[1]^(0.25) + 4*exp(0.5 + 2*x[2]) / (1 + exp(0.5 + 2*x[2])) + sqrt(1/4 + x[3]) mus <- as.numeric(mus$Var1) return (mus) } mus <- mu.f(grid) H <- 4 nh <- c(180, 360, 360, 540) n <- sum(nh) Nh <- rep(N/H, H) #generate population y <- NULL z <- NULL for(i in 1:M){ Ni <- Ns[i] mui <- mus[i] ei <- rnorm(Ni, 0, sd = 1) yi <- mui + ei y <- c(y, yi) zi <- i/M + rnorm(Ni, mean = 0, sd = 1) z <- c(z, zi) } x1 <- rep(grid[,1], times = Ns) x2 <- rep(grid[,2], times = Ns) x3 <- rep(grid[,3], times = Ns) domain <- rep(1:M, times = Ns) cts <- quantile(z, probs = seq(0, 1, length = 5)) strata <- 1:N*0 strata[z >= cts[1] & z < cts[2]] <- 1 strata[z >= cts[2] & z < 
cts[3]] <- 2 strata[z >= cts[3] & z < cts[4]] <- 3 strata[z >= cts[4] & z <= cts[5]] <- 4 freq <- rep(N/(length(cts) - 1), n) w0 <- Nh/nh w <- 1:N*0 w[strata == 1] <- w0[1] w[strata == 2] <- w0[2] w[strata == 3] <- w0[3] w[strata == 4] <- w0[4] pop <- data.frame(y = y, x1 = x1, x2 = x2, x3 = x3, domain = domain, strata = strata, w = w) ssid <- stratsample(pop$strata, c("1" = nh[1], "2" = nh[2], "3" = nh[3], "4" = nh[4])) sample.stsi <- pop[ssid, ,drop = FALSE] ds <- svydesign(id = ~1, strata = ~strata, fpc = ~freq, weights = ~w, data = sample.stsi) #domain means are increasing w.r.t x1, x2 and block monotonic in x3 ord <- c(1, 1, 2, 2, 3, 3) ans <- csvy(y ~ incr(x1)*incr(x2)*block.Ord(x3, order=ord), design = ds, family = gaussian(), nD = M, test = FALSE, n.mix = 0) #3D plot of estimated domain means: x1 and x2 plotpersp(ans) #3D plot of estimated domain means: x3 and x2 plotpersp(ans, x3, x2) #3D plot of estimated domain means: x3 and x2 for each domain of x1 plotpersp(ans, x3, x2, categ = "x1") #3D plot of estimated domain means: x3 and x2 for each domain of x1 plotpersp(ans, x3, x2, categ = "x1", NCOL = 3) csurvey documentation built on Aug. 28, 2022, 9:05 a.m.
## Probability of winning a best-of-7 series

April 22, 2019 By

The NBA playoffs are in full swing! A total of 16 teams are competing in a playoff-format competition, with the winner of each best-of-7 series moving on to the next round. In each matchup, two teams play 7 basketball games … Continue reading →

## Comparing Point-and-Click Front Ends for R

April 22, 2019 By

Now that I've completed seven detailed reviews of Graphical User Interfaces (GUIs) for R, let's try to compare them. It's easy enough to count their features and plot them,...

## Le Monde puzzle [#1094]

April 22, 2019 By

A rather blah number Le Monde mathematical puzzle: Find all integer multiples of 11111 with exactly one occurrence of each decimal digit.. Which I solved by brute force, by...

## Using R/exams for Written Exams in Finance Classes

April 22, 2019 By

Experiences with using R/exams for written exams in finance classes with a moderate number of students at Texas A&M International University (TAMIU). ...

## Practical Data Science with R Book Update (April 2019)

April 22, 2019 By

I thought I would give a personal update on our book: Practical Data Science with R 2nd edition; Zumel, Mount; Manning 2019. The second edition should be fully available...

April 22, 2019 By

Today I am happy to announce a new free course: Help Your Team Learn R! Over the last few years I’ve helped a number of data teams train their...

## India has 100k records on iNaturalist

April 21, 2019 By

Biodiversity citizen scientists use iNaturalist to post their observations with photographs. The observations are then curated there by crowd-sourcing the identifications and other trait related aspects too. The data...

## Reproducible Environments

April 21, 2019 By

Great data science work should be reproducible. The ability to repeat experiments is part of the foundation for all science, and reproducible work is also critical for business applications. Team collaboration,...
## survivalists [a Riddler’s riddle]

April 21, 2019 By

A neat question from The Riddler on a multi-probability survival rate: Nine processes are running in a loop with fixed survivals rates .99,….,.91. What is the probability that the...

## Binning with Weights

April 21, 2019 By

After working on the MOB package, I received requests from multiple users if I can write a binning function that takes the weighting scheme into consideration. It is a...

## Familiarisation with the Australian Election Study by @ellis2013nz

April 21, 2019 By

The Australian Election Study is an impressive long term research project that has collected the attitudes and behaviours of a sample of individual voters after each Australian federal election...

## FizzBuzz in R and Python

April 21, 2019 By

In this post, we will solve a simple problem (called "FizzBuzz") that is asked by some employers in data scientist job interviews. The question seeks to ascertain the applicant's...

## Process Mining (Part 2/3): More on bupaR package

April 20, 2019 By

Recap In the last post, the discipline of event log and process mining were defined. The bupaR package was introduced as a technique to do process mining in R. Objectives for...

## Before you take my DataCamp course please consider this info

April 20, 2019 By

Today, I am finally getting around to writing this very sad blog post: Before you take my DataCamp course please consider the following information about the sexual harassment scandal...

## Batch Deployment of WoE Transformations

April 20, 2019 By

After wrapping up the function batch_woe() today with the purpose to allow users to apply WoE transformations to many independent variables simultaneously, I have completed the development of major...

## Styling DataTables

April 19, 2019 By

Most of the shiny apps have tables as the primary component. Now lets say you want to prettify your app and style the tables. All you need understand how...
## Quick Example of Latent Profile Analysis in R April 19, 2019 By Latent Profile Analysis (LPA) tries to identify clusters of individuals (i.e., latent profiles) based on responses to a series of continuous variables (i.e., indicators). LPA assumes that there are... ## Control Charts Another Package April 19, 2019 By I got an email from Alex Zanidean, who runs the xmrr package “You might enjoy my package xmrr for similar charts – but mine recalculate the bounds automatically” and if... ## Happy EasteR! Let’s find some eggs… April 19, 2019 By It's Easter Time! Let's find some eggs... Hi there! Yes, it's the most Easterful time of the year again. For some of us a sacret time, for others mainly an... ## ODSC East 2019 Talks to Expand and Apply R Skills R programmers are not necessary data scientists, but rather software engineers. We have an entirely new multitrack focus area that helps engineers learn AI skills – AI for Engineers.... ## tint 0.1.2: Some cleanups April 19, 2019 By A new version 0.1.2 of the tint package is arriving at CRAN as I write this. It follows the recent 0.1.1 release which included two fabulous new vignettes... ## Animating the US Treasury yield curve rates by @ellis2013nz April 19, 2019 By My eye was caught by this tweet by Robin Wigglesworth of the Financial Times with an Alan Smith animation of the US Treasury yield curve from 2005 to 2009.... ## Generating the Ultimate List of 41 Data Science Podcasts by Crowdsourcing Google Results April 18, 2019 By Confession time: years ago, I was skeptical of podcasts. I was a music-only listener on commutes. Can you imagine? But around 2016, I gave in and finally took the... ## Using ecmwfr to measure global warming April 18, 2019 By For my research I needed to download gridded weather data from ERA-Interim, which is a big dataset generated by the ECMWF. Getting long term data through their website is... 
April 18, 2019 By Metadata are an essential part of a robust data science workflow ; they record the meaning of each variable : its units, quality, allowed range, how we collect it,... ## Base Rate Fallacy – or why No One is justified to believe that Jesus rose April 18, 2019 By In this post we are talking about one of the most unintuitive results of statistics: the so called false positive paradox which is an example of the so called... ## Applying gradient descent – primer / refresher April 18, 2019 By Every so often a problem arises where it’s appropriate to use gradient descent, and it’s fun (and / or easier) The post Applying gradient descent – primer / refresher... ## Common Uncommon Notations that Confuse New R Coders April 17, 2019 By Here are a few of the more commonly used notations found in R code and documentation that confuse coders of any skill level who are new to R. Be... ## A Comparative Review of the JASP Statistical Software April 17, 2019 By JASP is a free and open source statistics package that targets beginners looking to point-and-click their way through analyses. This article is one of a series of reviews which...
# Implementing deutsch_problem(seed=None) and deutsch() from qiskit text I'm attempting to work through the exercise at the end of https://learn.qiskit.org/course/ch-gates/phase-kickback#phase-8-0 where I am to build and implement Deutsch's algorithm. I understand that deutsch_problem(seed=None) will randomly determine which function is in the black box, but I am not sure how to implement it using Python coding in the def deutsch(function). My attempt is below and though I have not yet run this on qasm_simulator to see what result I get, I feel that since qc.draw() only gives me the circuit leading up to the black-box, something must be incorrect. # My attempt # My code within def deutsch(function): deutsch_problem(seed=None) #I THINK this randomly decides which of the 4 functions are in the black box #The next three lines are my attempt to turn the random function into a unitary for the circuit usim = Aer.get_backend('unitary_simulator') qobj = assemble(problem) unitary = usim.run(qobj).result().get_unitary() #The next section builds Deutsch's algorithm to measure the outcome qc=QuantumCircuit(2,1) qc.x(1) qc.h([0,1]) qc.compose(unitary) qc.h(0) qc.measure(0,0) This is how I do it, I tried to append gate directly with to_gate, remember to transpile or decompose the label gate before run simulator. if you want to get unitary of the gate, you can use Operator directly def deutsch(function): """Implements Deutsch's algorithm. Args: function (QuantumCircuit): Deutsch function to solve. Must be a 2-qubit circuit, and either balanced, or constant. Returns: bool: True if the circuit is balanced, otherwise False. 
"""
    qc = QuantumCircuit(2,1)
    qc.x(1)
    qc.h([0,1])
    qc.barrier()
    ################## Three ways to append the deutsch_problem()
    qc.append(problem.to_gate(label='unitary'), [0,1])
    #qc.append(Operator(problem), [0,1])
    #qc.unitary(Operator(problem), [0,1])
    ###################
    qc.barrier()
    qc.h(0)
    qc.measure(0,0)
    qc = qc.decompose('unitary')
    display(qc.draw())
    svsim = Aer.get_backend('aer_simulator')
    #qc = transpile(qc, svsim)
    final_state = svsim.run(qc).result().get_counts()  #.get_statevector()
    final_state = list(final_state.keys())[0]
    if final_state == '0':
        return 'constant'
    if final_state == '1':
        return 'balanced'

from qiskit import Aer, assemble
from qiskit.quantum_info.operators import Operator

problem = deutsch_problem()
#display(problem.draw())
deutsch(problem)

• Thank you so much! Much of python is still new and I had not yet seen append, to_gate, Operator(), ... so this has taught me more than I had hoped! Jun 20 at 13:08
• also don't forget the decompose and transpile step for the label gate (oracle), otherwise the simulation will pop up an error. @PGibbon – poig Jun 20 at 16:32
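Independent of Qiskit, the logic of the circuit above can be sanity-checked with plain numpy matrices. This is an illustrative sketch, not the textbook's code: it assumes the standard two-qubit oracle convention U|x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩ and fixes its own basis ordering (index 2x + y), so it sidesteps Qiskit's little-endian convention entirely.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def deutsch_classical_check(f):
    """Simulate Deutsch's algorithm for f: {0,1} -> {0,1} with plain matrices.
    Basis index is 2*x + y (x = query qubit, y = ancilla)."""
    # oracle U|x,y> = |x, y XOR f(x)>, built column by column
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = np.zeros(4)
    state[0b01] = 1                 # |x=0, y=1>: the X gate on the ancilla
    state = np.kron(H, H) @ state   # H on both qubits
    state = U @ state               # one oracle query (phase kickback onto x)
    state = np.kron(H, I2) @ state  # H on the query qubit
    p_one = state[0b10] ** 2 + state[0b11] ** 2  # P(measure x = 1)
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch_classical_check(lambda x: 0))  # → constant
print(deutsch_classical_check(lambda x: x))  # → balanced
```

Because the ancilla is in (|0⟩ − |1⟩)/√2, the oracle query leaves a (−1)^f(x) phase on the query qubit, and the final Hadamard converts that phase pattern into a deterministic 0 (constant) or 1 (balanced) measurement.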
We use YouTube Live to broadcast seminar talks live if the speaker agrees. • This event has passed. # Hong Liu (刘鸿), Asymptotic Structure for the Clique Density Theorem ## May 26 Tuesday @ 4:30 PM - 5:30 PM KST Room B232, IBS (기초과학연구원) ### Speaker Hong Liu Mathematics Institute, University of Warwick, UK http://homepages.warwick.ac.uk/staff/H.Liu.9/ The famous Erdős-Rademacher problem asks for the smallest number of r-cliques in a graph with the given number of vertices and edges. Despite decades of active attempts, the asymptotic value of this extremal function for all r was determined only recently, by Reiher [Annals of Mathematics, 184 (2016) 683-707]. Here we describe the asymptotic structure of all almost extremal graphs. This task for r=3 was previously accomplished by Pikhurko and Razborov [Combinatorics, Probability and Computing, 26 (2017) 138–160]. ## Details Date: May 26 Tuesday Time: 4:30 PM - 5:30 PM KST Event Category: Event Tags: Room B232 IBS (기초과학연구원) ## Organizer Sang-il Oum (엄상일) Website: https://dimag.ibs.re.kr/home/sangil/ 기초과학연구원 수리및계산과학연구단 이산수학그룹 대전 유성구 엑스포로 55 (우) 34126 IBS Discrete Mathematics Group (DIMAG) Institute for Basic Science (IBS) 55 Expo-ro Yuseong-gu Daejeon 34126 South Korea E-mail: dimag@ibs.re.kr
Question

In Linux, when using the set command (to set behaviors of the shell, like exiting on error), using set [...] option instead of set - option to disable the option

Answer

+ (e.g. set +h, to disable the caching of the locations of the binaries set in the PATH)

Source excerpt:

behavioral settings of the shell. Your current options can be displayed with echo $- . Various set commands are usually entered at the top of a script or given as command-line options to bash. Using set + option instead of set - option disables the option. Here are a few examples: set -e Exit immediately if any simple command gives an error. set -h Cache the location of commands in your PATH . The shell will become confused if binaries a
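The `set -option` / `set +option` toggle can be checked concretely. The sketch below drives bash from Python's subprocess module (it assumes a `bash` binary on PATH; the one-liner scripts are illustrative): with `set -e` on, the shell exits at the first failing command, and `set +e` turns the option back off.

```python
import subprocess

def run_bash(script):
    """Run a short bash script; return (exit_code, stdout)."""
    r = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
    return r.returncode, r.stdout

# With `set -e`, the shell exits on the first failing command,
# so the echo is never reached.
code_on, out_on = run_bash("set -e; false; echo reached")

# `set +e` disables the option again: the failure is ignored.
code_off, out_off = run_bash("set -e; set +e; false; echo reached")

print(code_on, repr(out_on))    # non-zero exit, empty output
print(code_off, repr(out_off))  # exit 0, 'reached\n'
```

The same `+`/`-` pairing applies to the other options, including `set +h` from the card above, which disables command-location hashing.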
## Duke Mathematical Journal ### Embeddability for 3-dimensional Cauchy–Riemann manifolds and CR Yamabe invariants #### Abstract Let $M^{3}$ be a closed Cauchy–Riemann (CR) 3-manifold. In this article, we derive a Bochner formula for the Kohn Laplacian in which the pseudo-Hermitian torsion does not play any role. By means of this formula we show that the nonzero eigenvalues of the Kohn Laplacian have a positive lower bound, provided that the CR Paneitz operator is nonnegative and the Webster curvature is positive. This means that $M^{3}$ is embeddable when the CR Yamabe constant is positive and the CR Paneitz operator is nonnegative. Our lower bound estimate is sharp. In addition, we show that the embedding is stable in the sense of Burns and Epstein. #### Article information Source Duke Math. J., Volume 161, Number 15 (2012), 2909-2921. Dates First available in Project Euclid: 29 November 2012 https://projecteuclid.org/euclid.dmj/1354198149 Digital Object Identifier doi:10.1215/00127094-1902154 Mathematical Reviews number (MathSciNet) MR2999315 Zentralblatt MATH identifier 1271.32040 Subjects Primary: 32V30: Embeddings of CR manifolds Secondary: 32V20: Analysis on CR manifolds #### Citation Chanillo, Sagun; Chiu, Hung-Lin; Yang, Paul. Embeddability for 3-dimensional Cauchy–Riemann manifolds and CR Yamabe invariants. Duke Math. J. 161 (2012), no. 15, 2909--2921. doi:10.1215/00127094-1902154. https://projecteuclid.org/euclid.dmj/1354198149 #### References • [1] A. Andreotti and Y.-T. Siu, Projective embedding of pseudoconcave spaces, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 24 (1970), 231–278. • [2] J. S. Bland, Contact geometry and CR structures on $S^{3}$, Acta Math. 172 (1994), 1–49. • [3] J. S. Bland and T. Duchamp, Moduli for pointed convex domains, Invent. Math. 104 (1991), 61–112. • [4] L. 
Boutet de Monvel, “Intégration des équations de Cauchy-Riemann induites formelles” in Séminaire Goulaouic-Lions-Schwartz 1974–1975, Centre de Mathématiques, École Polytechnique, Paris, 1975, Exp. No. 9, 14 pp. • [5] D. M. Burns and C. L. Epstein, Embeddability for three-dimensional CR-manifolds, J. Amer. Math. Soc. 3 (1990), 809–841. • [6] J. Cao and S.-C. Chang, Pseudo-Einstein and Q-flat metrics with eigenvalue estimates on CR-hypersurfaces, Indiana Univ. Math. J. 56 (2007), 2839–2857. • [7] S.-C. Chen and M.-C. Shaw, Partial Differential Equations in Several Complex Variables, AMS/IP Stud. Adv. Math. 19, International Press, Boston, 2001. • [8] j.-H. Cheng, A. Malchiodi, and P. Yang, A positive mass theorem in three-dimensional Cauchy-Riemann geometry, in preparation. • [9] H.-L. Chiu, The sharp lower bound for the first positive eigenvalue of the sublaplacian on a pseudohermitian 3-manifold, Ann. Global Anal. Geom. 30 (2006), 81–96. • [10] C. L. Epstein, A relative index on the space of embeddable CR-structures, I, Ann. of Math. (2) 147 (1998), 1–59. • [11] C. L. Epstein, A relative index on the space of embeddable CR-structures, II, Ann. of Math. (2) 147 (1998), 61–91. • [12] C. R. Graham and J. M. Lee, Smooth solutions of degenerate Laplacians on strictly pseudoconvex domains, Duke Math. J. 57 (1988), 697–720. • [13] K. Hirachi, “Scalar pseudo-Hermitian invariants and the Szegő kernel on three-dimensional CR manifolds” in Complex Geometry (Osaka, 1990), Lect. Notes Pure Appl. Math. 143, Dekker, New York, 1993, 67–76. • [14] J. J. Kohn, The range of the tangential Cauchy-Riemann operator, Duke. Math. J. 53 (1986), 525–545. • [15] J. M. Lee, Pseudo-Einstein structures on CR Manifolds, Amer. J. Math. 110 (1988), 157–178. • [16] L. Lempert, Holomorphic invariants, normal forms, and the moduli space of convex domains, Ann. of Math. (2) 128 (1988), 43–78. • [17] L. Lempert, On three-dimensional Cauchy-Riemann manifolds, J. Amer. Math. Soc. 5 (1992), 923–969. 
• [18] L. Lempert, Embeddings of three-dimensional Cauchy-Riemann manifolds, Math. Ann. 300 (1994), 1–15. • [19] H. Rossi, “Attaching analytic spaces to an analytic space along a pseudoconvex boundary” in Proceedings of the Conference on Complex Analysis (Minneapolis, 1964), Springer, Berlin, 1965, 242–256. • [20] N. Tanaka, A Differential Geometric Study on Strongly Pseudo-Convex Manifolds, Lectures in Mathematics, Department of Mathematics, Kyoto University, No. 9, Kinokuniya Book-Store, Tokyo, 1975. • [21] S. M. Webster, Pseudo-Hermitian structures on a real hypersurface, J. Differential Geom. 13 (1978), 25–41.
CMGFirstDerivatives @CMGFirstDerivatives
Hi All - I am looking to recruit multiple Java Developers to join first derivatives based in London at all levels. Ideally looking for someone with Financial Service background and knowledge of cloud technologies such as GCP/AWS/Azure

Iwan Aucamp @aucampia
I'm having this problem, [Error - 1:38:34 PM] Feb 19, 2021, 1:38:34 PM Error while handling document save. URI: file:///home/iwana/syncthing/sw-wpw/d/dev.azure.com/pdgm/BigLoop/incubator/bla-ted/teg-server-b-stub/src/main/java/shaij/Fake.java teg-server-b-stub/src/main/java/shaij [in bla-ted] does not exist Java Model Exception: Java Model Status [teg-server-b-stub/src/main/java/shaij [in bla-ted] does not exist] at org.eclipse.jdt.internal.core.JavaElement.newNotPresentException(JavaElement.java:573) But I can't see why. I am doing nothing different from other subprojects in the same project

Iwan Aucamp @aucampia
Okay, cleaning out settings helped: find .
-iregex '^.*/[.]$$classpath\|project\|settings$$' | xargs -t rm -vr \rm -vr */bin/ odd and a bit annoying I actually expected ./gradlew eclipse to do that Krishna Hrithik @kh-2000 i have created a maven-archetype-quickstart project, how to connect mysql to it? Ronak Gandhi @ronakg A question about importing large multiple Maven projects. What settings to use for java.jdt.ls.vmargs? Brian Weller @bmweller:matrix.org [m] Hello All. working with a client who does their own desktop deployment package of VSCode. The base installation is locked down under Program Files. When I use the refactoring or apply a 'fix' I get the following error 'running the contributed command: 'java.apply.workspaceEdit' failed. Attaching log and version details as images. Note that because the client's build is locked down I run code with the --extensions-dir option so that I can install this plugin. Thanks in advance. Tomer Eliyahu @tomereli Hi, not sure if this is the right place to ask - but I'm trying to run a spring boot test in vscode which depends on pom.xml with the following: <dependency> <groupId>de.flapdoodle.embed</groupId> <artifactId>de.flapdoodle.embed.mongo</artifactId> <scope>test</scope> </dependency> 2021-03-16 23:09:08.309 INFO 2925741 --- [ main] o.s.b.a.mongo.embedded.EmbeddedMongo : Download 3.2.2:Linux:B64 : starting... My proxy is set up correctly in ~/.m2/settings.xml and I can run mvn clean install which downloads all the non-test dependencies automatically during build from the command line (zsh). I tried setting the following according to https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-maven: "maven.executable.options": "-Dhttp.proxyHost=my-proxy.org.com -Dhttp.proxyPort=911", Any ideas? Tomer Eliyahu @tomereli OK so I'm pretty sure that the problem is that the language server JVM doesn't set the proxy system properties - how can I verify if this is indeed the case? Asir Shahriar Roudra @roudra323 I've set up my vs code for java. 
But while installing the Language Support for Java(TM) by Red Hat extension, it gives an error. Here is the log message:

[2021-03-22 23:31:42.665] [renderer1] [error] An unknown error occurred. Please consult the log for more details.
[2021-03-23 00:06:53.635] [renderer1] [error] Corrupt ZIP: end of central directory record signature not found: validating: Corrupt ZIP: end of central directory record signature not found
at async G.doInstallFromGallery (file:///F:/Programming/Softwares/Microsoft VS Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:47:209233)

Then I downloaded it manually and tried to install it, but the error happened again. Here is the log:

[2021-03-23 00:31:32.133] [renderer1] [error] invalid comment length. expected: 3755. found: 0: Error: invalid comment length. expected: 3755. found: 0
at S (file:///F:/Programming/Softwares/Microsoft VS Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:47:192087)
at file:///F:/Programming/Softwares/Microsoft VS Code/resources/app/out/vs/code/electron-browser/sharedProcess/sharedProcessMain.js:47:193482
at F:\Programming\Softwares\Microsoft VS Code\resources\app\node_modules.asar\yauzl\index.js:37:7
at F:\Programming\Softwares\Microsoft VS Code\resources\app\node_modules.asar\yauzl\index.js:133:16
at F:\Programming\Softwares\Microsoft VS Code\resources\app\node_modules.asar\yauzl\index.js:631:5
at F:\Programming\Softwares\Microsoft VS Code\resources\app\node_modules.asar\fd-slicer\index.js:32:7
at FSReqCallback.wrapper [as oncomplete] (fs.js:524:5)

How can I solve this problem?

krismael @krismael
Hello, I'm beginning my first solo code-editing project. How can I check that I have installed vscode-java correctly for the first time, with all the required components?

Prashanth Nandavanam @forwardmeasure
Hello all, the project I am the architect of uses Java (Spring Boot), NodeJS, and Python, among other technologies.
My Java developers use Eclipse, IntelliJ, and VS Code. I enforce our formatting guidelines via an Eclipse formatter file, and that works well for Eclipse and IntelliJ. With VS Code, however, I am unable to get it to use the Eclipse formatter file. I have tried specifying the absolute location like this:

"java.format.settings.url": "/Users/pnandavanam/eclipse-codeformatter.xml",

but VS Code simply does not use it. I've got the usual Java extensions installed, and not much else (the Red Hat and Microsoft Java extensions). Any idea on how to get VS Code to use my Eclipse formatter? Thanks in advance!

Snjeza @snjeza
@forwardmeasure could you create an issue at https://github.com/eclipse/eclipse.jdt.ls/issues/new/choose ?

shane99a @shane99a
In a monorepo with multiple Java projects, why does the extension try to build all of them? How do I get it to stop?

Fred Bricon @fbricon
@forwardmeasure you're facing redhat-developer/vscode-java#1827. Please try a CI build to check whether it's fixed in 0.77.0 for you.

HVEVB @HVEVB
I had a project which compiled completely fine on another computer, but when I cloned the repo on my current computer, I'm getting this error when trying to compile:

Exception calling "DownloadFile" with "2" argument(s): "Ett undantag uppstod under en WebClient-begäran."
At line:1 char:282
+ ... pe]::Tls12; $webclient.DownloadFile('https://repo.maven.apache.org/ma ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : WebException
Fel: kunde inte hitta eller ladda huvudklassen org.apache.maven.wrapper.MavenWrapperMain
Orsakades av: java.lang.ClassNotFoundException: org.apache.maven.wrapper.MavenWrapperMain

Translated version:

Exception calling "DownloadFile" with "2" argument(s): "An exception occurred during a WebClient request."
At line:1 char:282
+ ... pe]::Tls12;$webclient.DownloadFile('https://repo.maven.apache.org/ma ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : WebException
Error: could not find or load main class org.apache.maven.wrapper.MavenWrapperMain
Caused by: java.lang.ClassNotFoundException: org.apache.maven.wrapper.MavenWrapperMain

HVEVB @HVEVB
I can't even create a new project.

Catarina Gamboa @CatarinaGamboa
Hello everyone, I would like to create a VS Code Java extension to enhance the existing VS Code Java features with a new kind of verification/diagnostic (related to refinement types in Java). I saw that this should be possible (https://github.com/redhat-developer/vscode-java/wiki/Contribute-a-Java-Extension) and I was wondering if you could point me to a project that implements this kind of extension. Thanks in advance!

Snjeza @snjeza

Catarina Gamboa @CatarinaGamboa
Thank you for the fast response @snjeza!

charkins @charkins
Anyone know how to prevent the language server from truncating stack traces when logging an exception? I'm hitting an issue with an annotation processor, but most of the stack trace is truncated ("... 25 more") and I cannot see the root cause.

charkins @charkins
Never mind - apparently org.eclipse.jdt.internal.compiler.problem.AbortCompilation is the root cause and it doesn't provide any sort of message.

Tom-- @schittli
Good evening, does anyone know if VS Code offers an IntelliJ keyboard mapping?

Sandared @Sandared
Hi everyone. Since version 0.80.0 the build status always gets stuck at 46% for our project, without any further information or error messages. Is there a way to make this more verbose, to see what's happening? Version 0.79.2 with the same project works perfectly fine.

fvclaus @fvclaus
I am using prettier to format my Java code and have trouble getting the import order right. I want to use organizeImports on save to remove unused imports, but it seems that organizeImports runs after editor.formatOnSave and messes up the order again.
I have tried to replicate the order of prettier-java in my settings, but the way vscode-java sorts imports seems to be incompatible with prettier. prettier-java will format

import lombok.experimental.UtilityClass;
import lombok.val;

and vscode-java ("java.completion.importOrder": [])

import lombok.val;
import lombok.experimental.UtilityClass;

This is just an example; it seems to affect all import statements that have a different package "depth". Is there a setting to change the order in which formatOnSave and "java.saveActions.organizeImports": true run? I also tried

"[java]": { "editor.codeActionsOnSave": { "source.organizeImports": true } }

but this switches the output between prettier-java and vscode-java every time the file is saved. Alternatively, is there a way to configure the package order in vscode-java without listing all possible packages?

Jacob Williams @singlow
I have my Java projects configured to use JavaSE-1.8 from build.gradle. That seems to be recognized in the status bar when editing a Java file. However, it seems to allow me to use methods/classes from newer versions of Java without complaining. My Gradle build rejects it, but VS Code is just fine with me using List.of, and even autocompletes it, even though it is not available in Java 1.8.

BuZZ-dEE @buzz-dee:matrix.org [m]
Hi, I'm missing CDI support in vscode-java. Is there something I need to install?

@buzz-dee:matrix.org I'm using the Spring Boot extension to resolve my beans. It's the only extension, as far as I know, that is able to do that. Unfortunately, this extension doesn't resolve CDI annotations - only Spring is supported.

Hi, I'm trying to find out whether there is a limitation with Maven support in VS Code, as the language server doesn't resolve an additional source path added with swagger-codegen-maven-plugin.
This plugin generates Java classes starting from a Swagger definition.

<plugin>
  <groupId>io.swagger</groupId>
  <artifactId>swagger-codegen-maven-plugin</artifactId>
  <version>${swagger-codegen-maven-plugin.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <inputSpec>/sompath/to/swaggerdef.json</inputSpec>
        <modelPackage>com.my.model</modelPackage>
        <apiPackage>com.my.api</apiPackage>
        <language>jaxrs-cxf-client</language>
        <generateApiTests>false</generateApiTests>
        <configOptions>
          <dateLibrary>java8</dateLibrary>
          <sourceFolder>myfolder</sourceFolder>
        </configOptions>
        <output>target/generated-sources</output>
      </configuration>
    </execution>
  </executions>
</plugin>

Pola-Haw @Pola-Haw
Sorry, but I'm totally green and don't get anything. I'm just starting to learn and figuring out how to set everything up and what resources to use to learn comfortably (to the extent that is possible). I'm just looking for help in this regard.

phoebej89 @phoebej89
hello

Thomas Oster @t-oster
Hi, I have a project which uses my own annotation processor (https://lazysql.thomas-oster.de/). I got it working using the maven-apt plugin as described here: https://github.com/redhat-developer/vscode-java/wiki/Annotation-Processing-support-for-Maven-projects. The problem is that if the annotation processor emits a compiler message or warning, it is not displayed in the IDE.

How do I run a Java frame program in VS Code?

dr3s @dr3s
I want to confirm some behavior before opening a GitHub issue. No matter what I change, my multi-module Gradle project in VS Code creates classpath entries and compiles code to bin instead of build, whether by default or when set explicitly. Is there anything I can check that would be misconfigured? This seems like a clear bug that I can't work around, because the setting can't be changed for the JLS when Gradle is used.

dr3s @dr3s
https://docs.gradle.org/current/dsl/org.gradle.api.Project.html#org.gradle.api.Project:buildDir says the default is build.
I have searched for the terms build and bin in my source and got nothing. I tried setting buildDir in settings.gradle and nothing changed. I would be happy to provide my Gradle properties, screenshots in VS Code, and logs if you like.

dr3s @dr3s
Back to IntelliJ, I guess.

charkins @charkins
How can I tell what version of jdt.ls is being used by vscode-java? I appear to be getting hit by https://bugs.eclipse.org/bugs/show_bug.cgi?id=377850, which was fixed around 4.15.

Snjeza @snjeza
@charkins You can run "Java: Open Java Language Server Log File" and find a line like the following:

!MESSAGE Initializing Java Language Server 1.5.0.202110161628
# Find the equation of the circle, the coordinates of the end points of one of whose diameters are A(3, 2) and B(2, 5)

Question: Find the equation of the circle, the coordinates of the end points of one of whose diameters are A(3, 2) and B(2, 5).

Solution: The equation of a circle whose diameter has end points $\left(x_{1}, y_{1}\right)$ and $\left(x_{2}, y_{2}\right)$ is:

$\left(x-x_{1}\right)\left(x-x_{2}\right)+\left(y-y_{1}\right)\left(y-y_{2}\right)=0$

Substituting the values $\left(x_{1}, y_{1}\right)=(3,2)$ and $\left(x_{2}, y_{2}\right)=(2,5)$, we get:

$(x-3)(x-2)+(y-2)(y-5)=0$

$\Rightarrow x^{2}-2 x-3 x+6+y^{2}-5 y-2 y+10=0$

$\Rightarrow x^{2}+y^{2}-5 x-7 y+16=0$

Ans: $x^{2}+y^{2}-5 x-7 y+16=0$
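As a quick sanity check (ours, not part of the original solution), the following sketch verifies that both diameter end points satisfy the derived equation, and that completing the square gives the expected centre and radius:

```python
def circle_eq(x, y):
    # Left-hand side of the derived equation x^2 + y^2 - 5x - 7y + 16 = 0
    return x**2 + y**2 - 5*x - 7*y + 16

# Both end points of the diameter must lie on the circle.
assert circle_eq(3, 2) == 0
assert circle_eq(2, 5) == 0

# The centre is the midpoint of the diameter, and the radius squared
# is (half the diameter length)^2 = ((3-2)^2 + (2-5)^2) / 4 = 5/2.
cx, cy = (3 + 2) / 2, (2 + 5) / 2
r2 = ((3 - 2)**2 + (2 - 5)**2) / 4
assert (cx, cy) == (2.5, 3.5)

# Completing the square in the equation gives the same radius:
# (x - 5/2)^2 + (y - 7/2)^2 = (5/2)^2 + (7/2)^2 - 16 = 5/2
assert abs(r2 - ((5/2)**2 + (7/2)**2 - 16)) < 1e-12
print("all checks passed")
```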
# zbMATH — the first resource for mathematics

The triguarded fragment of first-order logic. (English) Zbl 1416.03008
Barthe, Gilles (ed.) et al., LPAR-22. 22nd international conference on logic for programming, artificial intelligence and reasoning, Awassa, Ethiopia, November 17–21, 2018. Selected papers. Manchester: EasyChair. EPiC Ser. Comput. 57, 604–619 (2018).

Summary: Past research into decidable fragments of first-order logic ($$\mathbb{FO}$$) has produced two very prominent fragments: the guarded fragment $$\mathbb{GF}$$, and the two-variable fragment $$\mathbb{FO}^2$$. These fragments are of crucial importance because they provide significant insights into the decidability and expressiveness of other (computational) logics like modal logics (MLs) and various description logics (DLs), which play a central role in verification, knowledge representation, and other areas. In this paper, we take a closer look at $$\mathbb{GF}$$ and $$\mathbb{FO}^2$$, and present a new fragment that subsumes them both. This fragment, called the triguarded fragment (denoted $$\mathbb{TGF}$$), is obtained by relaxing the standard definition of $$\mathbb{GF}$$: quantification is required to be guarded only for subformulae with three or more free variables. We show that, in the absence of equality, satisfiability in $$\mathbb{TGF}$$ is N2ExpTime-complete, but becomes NExpTime-complete if we bound the arity of predicates by a constant (a natural assumption in the context of MLs and DLs). Finally, we observe that many natural extensions of $$\mathbb{TGF}$$, including the addition of equality, lead to undecidability.

For the entire collection see [Zbl 1407.68021].

##### MSC:
• 03B20 Subsystems of classical logic (including intuitionistic logic)
• 03B25 Decidability of theories and sets of sentences
• 03B45 Modal logic (including the logic of norms)
• 68T27 Logic in artificial intelligence
• 68Q17 Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.)
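To make the relaxed guarding condition concrete, here is a schematic illustration (ours, not taken from the paper): quantified subformulae with at most two free variables may appear unguarded, as in $$\mathbb{FO}^2$$, while subformulae with three or more free variables must be guarded by an atom containing all their free variables, as in $$\mathbb{GF}$$.

```latex
% Allowed unguarded in TGF: at most two free variables at the quantifier,
% exactly as in FO^2:
\forall x\,\forall y\,\bigl(E(x,y) \rightarrow E(y,x)\bigr)

% With three free variables x, y, z, the quantifier must be guarded by an
% atom G(x,y,z) covering all of them, exactly as in GF:
\forall z\,\bigl(G(x,y,z) \rightarrow R(x,y,z)\bigr)
```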
# Is there a formula that gives the position of an object depending on the time, but which doesn't allow the object to surpass the speed of light? I have found these two formulas: $v = at + v_0$ $x = \frac{1}{2}at^2 + v_0t + x_0$ • a is the acceleration • v is the velocity • x is the position • t is the time • $v_0$ is the initial velocity • $x_0$ is the initial position The problem is that with an acceleration of $10$ m.s$^{-2}$ (for example) the object would surpass the speed of light in $3 \times 10^7$ s = $11$ months and $11$ days, which should not be possible. Is there some other formula that gives the position of an object depending on its acceleration and on the time, but which works and does not allow the speed of the object to surpass the speed of light? • In this configuration, as far as I know, there is no formula that limits the speed of any particle to the speed of light. So, in classical mechanics particles are allowed to move in speed of light and even to move at greater speeds. However, in special relativity we have this $\gamma = \frac{1}{\sqrt{1- v^2 / c^2}}$ factor that doesn't allow particles to pass speed of light. But, still I am not sure about the classical mechanics case; there might be other theories that change some definitions and gives you what you want. – sahin Mar 24 '15 at 8:55 • Look at sparknotes.com/physics/specialrelativity/dynamics/…, you can see $dE/dx=F$, which means if your force is constant, it is the energy that increases constantly. $E=\gamma(v) m_0 c^2$, you can deduce the v. – jaromrax Mar 24 '15 at 9:22 Have a look at the article by Phil Gibbs on the relativistic rocket. This describes the motion of a rocket that is accelerating with a constant acceleration. In this context constant acceleration means the crew of the rocket feel a constant acceleration. Technically the rocket has a constant four-acceleration. 
Anyhow, the velocity of the rocket as observed by a non-accelerating observer is given by:

$$v = \frac{at}{\sqrt{1 + (at/c)^2}}$$

where $a$ is the acceleration measured by the occupants of the rocket and $t$ is time as measured by the non-accelerating observers. At long times, when $(at/c)^2 \gg 1$, the velocity is approximately:

$$v \approx \frac{at}{\sqrt{(at/c)^2}} = c$$

So at long times the velocity approaches $c$, though it never reaches it.

Is there some other formula ... which ... does not allow the speed ... to surpass the speed of light?

That would be the equations of special relativity mentioned by sahin in a comment.

(Image from Loodog)

Another factor you have to take into account with classical mechanics is to work out how a constant force can be applied to your object over 11 months and 11 days without affecting its mass (therefore no fuel on board) and without the additional speed giving rise to large opposing forces such as friction.

• I just love the diagram! – Binary Geek Mar 24 '15 at 10:27

Look at sparknotes.com/physics/specialrelativity/dynamics/…, you can see $dE/dx=F$ - if your force is constant, it is the energy that increases constantly. $E=\gamma(v)m_0c^2$; you can deduce the $v$. Because of laziness I used mathomatic, and it gives me something like this:

$$v=c\sqrt{1-\left(\dfrac{m_0 c^2}{F\cdot x + m_0 c^2}\right)^{2}}$$

If you check it for $x=0$ and $x=\infty$, you get reasonable results.

which should not be possible.

Indeed, uniform coordinate acceleration $$a$$ is inconsistent with special relativity; however, uniform proper acceleration $$\alpha$$ is consistent. The proper acceleration is the acceleration of the object according to an attached accelerometer. For 1D motion, the relationship between $$\alpha$$ and $$a$$ is given by

$$\alpha = \gamma^3 a = \frac{a}{\left( 1 - \frac{v^2}{c^2} \right)^{\frac{3}{2}} }$$

Since the Lorentz factor goes to infinity as $$v \rightarrow c$$, $$a$$ must go to zero if $$\alpha$$ is to remain finite.
If, from an inertial reference frame, an object were observed to have uniform coordinate acceleration $$a$$, the object's proper acceleration would be arbitrarily large as the object's speed approached $$c$$ in this frame. Is there some other formula that gives the position of an object depending on its acceleration and on the time, but which works and does not allow the speed of the object to surpass the speed of light? In the context of special relativity, one must be careful to distinguish between the proper and coordinate accelerations since, as described in the link above, they are not the same thing. Assuming by, "its acceleration", you mean its (constant) proper acceleration, then, with zero initial conditions, the formula is $$x(t) = \frac{c^2}{\alpha}\left[\sqrt{1 + \frac{\alpha^2 t^2}{c^2}}-1 \right]\;,\; t \ge 0$$ See that, as $$t$$ gets very large, $$x(t)$$ asymptotically approaches $$ct$$, i.e., the speed asymptotically approaches $$c$$.
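The formulas above are easy to check numerically. The following sketch (plain Python, ours, not from any of the answers) evaluates the relativistic velocity and position for the question's $a = 10$ m/s² and confirms that the speed stays below $c$ even past the ~11-month mark where the Newtonian formula $v = at$ would cross it:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def v_relativistic(a, t):
    """Velocity under constant proper acceleration a at coordinate time t."""
    return a * t / math.sqrt(1.0 + (a * t / C) ** 2)

def x_relativistic(a, t):
    """Position under constant proper acceleration a (zero initial conditions)."""
    return (C ** 2 / a) * (math.sqrt(1.0 + (a * t / C) ** 2) - 1.0)

a = 10.0           # m/s^2, the acceleration from the question
t_cross = C / a    # ~3e7 s: when the Newtonian v = at would reach c

for t in (t_cross, 10 * t_cross, 1000 * t_cross):
    assert v_relativistic(a, t) < C   # relativistic speed never reaches c
    assert a * t >= C                 # while the Newtonian speed exceeds it

# At t = c/a the relativistic speed is c/sqrt(2), not c:
assert abs(v_relativistic(a, t_cross) - C / math.sqrt(2)) < 1e-6
```

At large $t$, `x_relativistic` grows almost exactly as $ct$, matching the asymptote described in the answer.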
Title: Search for supersymmetry in pp collisions at $\sqrt{s}=7$ TeV in events with a single lepton, jets, and missing transverse momentum
Author: Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; Bansal, M.; Bansal, S.; Cornelis, T.; de Wolf, E.A.; Janssen, X.; Luyckx, S.; Mucibello, L.; Roland, B.; Rougny, R.; Selvaggi, M.; van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; et al.
Faculty/Department: Faculty of Sciences. Physics
Publication type: article
Publication: Berlin, 2013
Subject: Physics
Source (journal): European Physical Journal C: Particles and Fields. - Berlin
Volume/pages: 73 (2013): 5, p. 1-41
ISSN: 1434-6044
ISI: 000319518900001
Carrier: E
Target language: English (eng)
Affiliation: University of Antwerp
Abstract: Results are reported from a search for new physics processes in events containing a single isolated high-transverse-momentum lepton (electron or muon), energetic jets, and large missing transverse momentum. The analysis is based on a 4.98 fb^-1 sample of proton-proton collisions at a center-of-mass energy of 7 TeV, obtained with the CMS detector at the LHC. Three separate background estimation methods, each relying primarily on control samples in the data, are applied to a range of signal regions, providing complementary approaches for estimating the background yields. The observed yields are consistent with the predicted standard model backgrounds. The results are interpreted in terms of limits on the parameter space for the constrained minimal supersymmetric extension of the standard model, as well as on cross sections for simplified models, which provide a generic description of the production and decay of new particles in specific, topology-based final states.
Full text (open access): https://repository.uantwerpen.be/docman/irua/141b67/4753.pdf
Atmos. Meas. Tech., 13, 13–38, 2020
https://doi.org/10.5194/amt-13-13-2020
Research article | 06 Jan 2020

# Performance evaluation of THz Atmospheric Limb Sounder (TALIS) of China

Wenyu Wang1,2, Zhenzhan Wang1, and Yongqiang Duan1,2
• 1Key Laboratory of Microwave Remote Sensing, National Space Science Center, Chinese Academy of Sciences, Beijing, China
• 2University of Chinese Academy of Sciences, Beijing, China

Correspondence: Zhenzhan Wang (wangzhenzhan@mirslab.cn)

Abstract

THz Atmospheric Limb Sounder (TALIS) is a microwave limb sounder being developed for vertically resolved atmospheric profile observations by the National Space Science Center, Chinese Academy of Sciences (NSSC, CAS). It is designed to measure temperature and chemical species such as O3, HCl, ClO, N2O, NO, NO2, HOCl, H2O, HNO3, HCN, CO, SO2, BrO, HO2, H2CO, CH3Cl, CH3OH, and CH3CN with a high vertical resolution from about 10 to 100 km, to improve our comprehension of atmospheric chemistry and dynamics and to monitor man-made pollution in the atmosphere. Four heterodyne radiometers, including several FFT spectrometers of 2 GHz bandwidth with 2 MHz resolution, are employed to obtain the atmospheric thermal emission in broad spectral regions centred near 118, 190, 240, and 643 GHz. A theoretical simulation is performed to estimate the retrieval precision of the main targets and to compare it with that of the Aura MLS standard spectrometers. Both single-scan and averaged measurements are considered in the simulation. The temperature profile can be obtained with a precision of <2 K for a single scan from 10 to 60 km using the 118 GHz radiometer, and the 240 and 643 GHz radiometers can provide temperature information in the upper troposphere.
Chemical species such as H2O, O3, and HCl show a relatively good single-scan retrieval precision of <20 % over most of the useful range, and ClO, N2O, and HNO3 can be retrieved with a precision of <50 %. The other species should be retrieved from averaged measurements because of their weak line intensity and/or low abundance.

1 Introduction

Precise observation of Earth's atmosphere is essential to numerical weather prediction and climate change studies. Satellites can provide daily global coverage of the atmosphere. Instruments such as nadir microwave sounders and infrared sounders have been applied to measure atmospheric temperature and humidity, but with poor vertical resolution and a limited altitude range (Swadley et al., 2008). Limb sounders can not only provide the temperature profile with better vertical resolution, but also gather information on chemical composition over a wide altitude range. In the terahertz domain, the measurement performance is independent of the day–night cycle. Microwave limb sounding is a particularly useful technique for detecting stratospheric and mesospheric temperature and chemistry, and also has large potential for global wind measurement in the middle and upper atmosphere (Wu et al., 2008; Baron et al., 2013). A few instruments have been launched in the last 20 years; their observation data have offered a better understanding of the physical and chemical processes in Earth's atmosphere. The first instrument applying the microwave limb sounding technique from space was the Microwave Limb Sounder (MLS) onboard the Upper Atmosphere Research Satellite (UARS), launched in 1991. The sounder offered unique information on temperature/pressure, O3, H2O, and ClO, and additional data products including SO2, HNO3, and CH3CN (Waters et al., 1993, 1999; Barath et al., 1993). The Sub-Millimetre Radiometer (SMR) onboard the Odin satellite, launched in February 2001, was the first radiometer to employ sub-millimetre limb sounding.
Various target species, such as O3, ClO, N2O, HNO3, H2O, CO, and NO, as well as isotopes of H2O and O3 and ice cloud, have been detected (Murtagh et al., 2002; Urban et al., 2005; Eriksson et al., 2007). The Aura MLS, the successor of the UARS MLS, onboard the Aura satellite launched in July 2004, gave successful observations of OH, HO2, H2O, O3, HCl, ClO, HOCl, BrO, HNO3, N2O, CO, HCN, CH3CN, SO2, ice cloud, and wind (Waters et al., 2004, 2006; Wu et al., 2008; Livesey et al., 2013). The Superconducting Submillimeter-wave Limb-Emission Sounder (SMILES) was launched in September 2009 onboard the Japanese Experiment Module (JEM) of the International Space Station (ISS) (Kikuchi et al., 2010). SMILES was equipped with 4 K cooled superconductor–insulator–superconductor (SIS) mixers to reduce the system noise temperature, so that its sensitivity was higher than that of other similar sensors such as MLS and SMR (Takahashi et al., 2010; Baron et al., 2011). Currently, several new instruments are being developed. Stratospheric Inferred Winds (SIW) is a Swedish mini sub-millimetre limb sounder for measuring wind, temperature, and molecules in the stratosphere. It can provide horizontal wind vectors within 30–90 km, as well as profiles of temperature, O3, H2O, and other trace chemical species (Baron et al., 2018). SIW is designed for small satellites and will be launched as early as 2020–2022. In addition, the successor of SMILES, SMILES-2, is being studied for measuring the whole vertical range of 15–180 km with low noise (Ochiai et al., 2017). THz Atmospheric Limb Sounder (TALIS) is a pre-research project in civil aerospace technology proposed by the China National Space Administration (CNSA). TALIS is being designed at the National Space Science Center, Chinese Academy of Sciences (NSSC, CAS), for good-precision measurement of atmospheric temperature and key chemical species.
It has four microwave radiometers in the frequency bands of 118, 190, 240, and 643 GHz, which are similar to the Aura MLS. The TALIS mission objectives are to provide the information for research on the dynamics and the chemistry of the middle and upper atmosphere by measuring the volume mixing ratio (VMR) profile of the chemical species and other atmospheric conditions such as cirrus with much finer spectral resolution. The pre-research will be completed in 2020 and a prototype will be tested. The satellite mission equipped with TALIS will be proposed around 2021. In this paper, we present a simulation study on precision estimates for the geophysical parameters measured by TALIS. The outline of the present study is as follows: Sect. 2 describes the instrument characteristics and spectral bands. The retrieval method and the simulation result are discussed in Sects. 3 and 4, respectively. The final section gives a conclusion about the performance and future works. 2 Instrument overview ## 2.1 Instrument characteristics The TALIS payload (Fig. 1) and its proposed scan characteristics are summarised in Table 1. The instrument will be set at a Sun-synchronous orbit at a normal altitude of 600 km. The offset parabolic antenna is made of a single reflector with a 1.6 m projective aperture and four independent feeds. The layout of four discrete feeds is shown in Fig. 2. Compared with the quasi-optical separation layout (such as MLS), this strategy is easier and has better observation precision since it needs fewer reflectors. But it will lead to a vertical observed displacement of about 20 km between 118, 190, and 643 GHz and a horizontal displacement of 240 GHz. The widths of the field of view (FOV) at the tangent point are about 5.5, 3.8, 3.3, and 0.96 km at 118, 190, 240, and 643 GHz, respectively. The two-point calibration method is adopted by TALIS, and two calibration targets are set at the end of the arm. 
The extra target can be used to improve the calibration precision and to evaluate the antenna effect and non-linearity. At the beginning of the scan, TALIS will view the hot target (ambient temperature) and the extra target (lower temperature) for 3 s; it will then scan the limb from 0 to 100 km vertically, obtaining a spectrum every 1 km with an integration time of 0.1 s; finally, it views cold space at 200 km for 5 s. The retrace follows the same process (and also records data), giving a total period (scan and retrace) of about 36 s.

Figure 1. Schematic diagram of the TALIS payload. The reflector, feeds, and receivers are formed into a whole. The scan driver controls the scan angle. The calibration system is fixed in the satellite. At the beginning, the feeds are covered by the calibration targets. Then the system scans the limb. When it rotates to the top, it views cold space.

Table 1. Characteristics of the TALIS payload.

Figure 2. The layout of the four antenna feeds. The observation displacements of the four radiometers are shown.

TALIS has four radiometers which cover the significant thermal emission spectra in the 118, 190, 240, and 643 GHz regions (see Table 2). Single-sideband (SSB) operation keeps the spectral lines complete, while double-sideband (DSB) operation covers more spectral lines because of the image band. Thus, all the radiometers of TALIS will operate in double-sideband mode except the 118 GHz radiometer. Eleven FFT spectrometers of 2 GHz bandwidth with 2 MHz resolution will be used in TALIS. The bands and system noise temperature for each radiometer are shown in Table 2.

Table 2. Spectral bands and Tsys of TALIS. * The 118 GHz value is single-sideband; the values for the other radiometers are double-sideband.
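The quoted ~36 s period follows directly from the numbers above. A small back-of-the-envelope sketch (ours, not from the TALIS design documents) that budgets one scan-plus-retrace cycle:

```python
# Timing budget for one TALIS scan cycle, using the figures quoted in the text.
CAL_VIEW_S = 3.0         # hot + extra calibration target view at scan start
N_TANGENT_HEIGHTS = 101  # limb spectra every 1 km from 0 to 100 km inclusive
INTEGRATION_S = 0.1      # integration time per spectrum
COLD_SPACE_S = 5.0       # cold-space view at 200 km tangent height

one_way = CAL_VIEW_S + N_TANGENT_HEIGHTS * INTEGRATION_S + COLD_SPACE_S
total = 2 * one_way      # the retrace also records data, so the cycle doubles

print(f"one-way scan: {one_way:.1f} s, full cycle: {total:.1f} s")
```

This gives 18.1 s one way and 36.2 s for the full cycle, consistent with the "about 36 s" period stated in the text.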
## 2.2 Spectral bands The spectral bands of TALIS are selected with the following criteria: (1) maximisation of the number of species which exert a strong influence on atmospheric chemistry and dynamics, (2) necessary spacing between the passbands, and (3) a trade-off between realisable bandwidth and resolution. TALIS covers most spectral bands of the Aura MLS and extends them (see Fig. 3), but lacks the 2.4 THz band. The broader bandwidth and finer resolution of TALIS can provide better retrieval precision and a wider effective altitude range compared with the Aura MLS. More chemical species can be measured by TALIS, such as NO2, NO, and SO2 (normal concentration). Figure 3Spectral bands of the Aura MLS and TALIS radiometers. The diamonds represent the filter centres and the solid lines indicate the bandwidths of the MLS. The 118 GHz radiometer, covering the strong O2 line at 118.75 GHz, is used to measure the atmospheric temperature and tangent pressure. Since there are few meteorological data sets covering the temperature above the middle atmosphere with good vertical resolution, it is necessary to measure the temperature profile over a wide altitude range with good vertical resolution and good precision. In addition, the Zeeman effect will affect the O2 line, and this influence should be studied (Schwartz et al., 2006). Other information such as ice cloud can be treated as an additional measurement. Figure 4 gives an overview of the 118 GHz spectral band. Figure 4Contributions of the main target chemical species to the 118 GHz spectrum. The brightness temperature is measured from the single-sideband radiometer. The tangent height is 30 km. The 190 GHz radiometer is mainly designed to cover the 183.31 GHz H2O line. Monitoring water vapour is important for understanding the mechanisms of humidity feedback on climate and is essential for improving the accuracy of weather forecasts.
Other chemical species such as N2O, ClO, O3, and HCN are also included in the 190 GHz bands (see Fig. 5). Figure 5Contributions of the main target chemical species to the 190 GHz spectra. The brightness temperature is measured from the double-sideband radiometer. The tangent height is 30 km. The top axis represents the lower sideband frequencies and the bottom axis represents the upper sideband frequencies. Each panel represents a single spectrometer. The main objective of the 240 GHz radiometer is to measure CO at 230.54 GHz and the strong O3 lines in a wide spectral band, where upper tropospheric O3 can be obtained with good precision because of the weak water vapour continuum absorption. In addition, the 233.95 GHz O2 line will be used to measure temperature and tangent pressure together with the 118.75 GHz line. SO2 is an important pollutant in Earth's atmosphere and gives rise to acid rain. With the standard profile, there is no obvious SO2 emission in the passband of the 240 GHz radiometer; the only SO2 observable by the MLS comes from volcanic eruptions. The MLS demonstrated that SO2 can be measured by the 190, 240, and 640 GHz radiometers, but only the 240 GHz SO2 product is recommended for general use (Pumphrey et al., 2015). The wide and strong lines of HNO3 allow its profile to be retrieved well. NO2 is a unique species not covered by the Aura MLS, and the wider bandwidth and finer resolution of TALIS give it the potential to measure this species. The spectra of the 240 GHz radiometer are depicted in Fig. 6. Figure 6Contributions of the main target chemical species to the 240 GHz spectra. The brightness temperature is measured from the double-sideband radiometer. The tangent height is 30 km. The top axis represents the lower sideband frequencies and the bottom axis represents the upper sideband frequencies. Each panel represents a single spectrometer.
The 643 GHz radiometer is designed to cover as many spectral lines as possible; thus, about 17 species are included. The spectral lines of O3, HCl, ClO, N2O, O2, and H2O are clearly visible (Fig. 7), and other, relatively weak lines such as those of NO, HNO3, CO, SO2, BrO, HO2, H2CO, HOCl, and CH3Cl can also be used. The O2 line at 627.75 GHz and the H2O line at 657.9 GHz have the potential to be used as supplements to the 118 and 190 GHz radiometers. O3 is the major species in the stratosphere and mesosphere and is quite important in atmospheric radiative transfer. Using the high-sensitivity lines in the 643 GHz bands, one can measure O3 with good precision (Takahashi et al., 2011; Kasai et al., 2013). The only lines of HCl below 1 THz are in the 625 GHz frequency band; thus, HCl can be measured by the 643 GHz radiometer (Lary and Aulov, 2008). ClO is a key catalyst for ozone loss, and the 649.45 GHz line is suitable for ClO observation with good precision (Santee et al., 2008; Sato et al., 2012). HOCl, which affects the stratospheric chlorine budget, has distinct lines above 600 GHz, and the 635.87 GHz line has been identified as the best line for observation (Urban, 2003). Both the 649.701 and 660.486 GHz lines can be used to measure the hydroperoxyl radical HO2, which contributes to the catalytic ozone chemistry in the upper stratosphere and mesosphere (Millán et al., 2015). Since ClO, HO2, and HOCl can all be measured, the rate of the reaction of ClO with HO2 to form HOCl in the atmosphere can be determined (Johnson et al., 1995). N2O can be measured at 652.834 GHz, which has been validated by the MLS (Lambert et al., 2007). NO has two weak signals at 651.45 and 651.75 GHz, which can be used to measure its abundance. HNO3 can be measured using the 650 GHz bands. Measuring these nitrogen species will help researchers better understand the chemistry and dynamics of the atmosphere.
BrO, which plays an important role in the depletion of ozone, can be measured using the 624.768 and 650.179 GHz lines. Because of the low abundance of BrO, measurements must be significantly averaged in order to obtain reliable results (Millán et al., 2012). CO and H2CO are the major species in the oxidation of CH4 to CO2 and H2O in the stratosphere and mesosphere (Suzuki et al., 2015). The major spectral line of CO used by the MLS is at 230.538 GHz; however, the 661.07 GHz line can also provide information (Livesey et al., 2008). H2CO has a line at 656.45 GHz, but the signal is very weak. The SO2 lines in the 660 GHz band have the potential to detect background levels of SO2. CH3Cl can be measured in the 649 GHz band near the ClO line. The MLS measured CH3OH and CH3CN in the troposphere and lower stratosphere with the 625 GHz spectrometer (Pumphrey et al., 2011). Figure 7Contributions of the main target chemical species to the 643 GHz spectra. The brightness temperature is measured from the double-sideband radiometer. The tangent height is 30 km. The top axis represents the lower sideband frequencies and the bottom axis represents the upper sideband frequencies. Each panel represents a single spectrometer. 3 Retrieval methodology ## 3.1 Forward model The retrieval of data measured by a microwave limb sounder requires the accurate simulation of the observed thermal emission spectra. The forward model is a mathematical tool used to describe the radiative transfer, spectroscopy, and instrumental characteristics. The output of the forward model is the convolution of the atmospheric radiation and the instrument response. Radiative transfer describes the emission, propagation, scattering, and absorption of electromagnetic radiation (Mätzler, 2006). Scattering can usually be neglected above the upper troposphere as the atmosphere is largely cloud-free at these altitudes, and such clouds as there are (e.g.
polar stratospheric clouds) have particle sizes much smaller than the TALIS observation wavelengths. In this way and assuming local thermodynamic equilibrium (LTE), the formal solution of the radiative transfer equation is

$$I_v(S_2)=I_v(S_1)\,e^{-\tau_v(S_1,\,S_2)}+\int_{S_1}^{S_2}\alpha_v(s)\,B_v(T)\,e^{-\tau_v(s,\,S_2)}\,\mathrm{d}s, \tag{1}$$

where Iv is the radiance at frequency v reaching the sensor, α is the absorption coefficient, and τ is the opacity or optical thickness. Bv stands for the atmospheric emission, given by the Planck function describing the radiation of a black body at temperature T and frequency v per unit solid angle, unit frequency interval, and unit emitting surface (Urban et al., 2004):

$$B_v(T)=\frac{2hv^3}{c^2}\,\frac{1}{e^{hv/k_BT}-1}, \tag{2}$$

where h is the Planck constant, c is the speed of light, and kB denotes the Boltzmann constant. Spectroscopy models and databases allow us to compute the absorption coefficient, which requires the pressure, temperature, and species concentrations along the line of sight. The basic expression can be written as

$$\alpha(v)=n\,S(T)\,F(v), \tag{3}$$

where S is the line strength, F is the line shape function, and n is the number density of the absorber.
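As a quick numerical illustration of Eq. (2), the Planck function can be evaluated directly. The snippet below is a sketch with arbitrary example values (the 183.31 GHz H2O line frequency and a representative stratospheric temperature, not a TALIS specification); it also confirms that the Rayleigh-Jeans approximation is accurate at these microwave frequencies.

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34    # Planck constant [J s]
c = 2.99792458e8      # speed of light [m s^-1]
k_B = 1.380649e-23    # Boltzmann constant [J K^-1]

def planck(v, T):
    """Black-body spectral radiance B_v(T) of Eq. (2) [W m^-2 sr^-1 Hz^-1]."""
    return (2.0 * h * v**3 / c**2) / np.expm1(h * v / (k_B * T))

# Example values (arbitrary, for illustration only)
v = 183.31e9          # frequency of the H2O line [Hz]
T = 250.0             # representative stratospheric temperature [K]
b_exact = planck(v, T)

# In the microwave region hv << k_B T, so the Rayleigh-Jeans form
# B_v ~ 2 v^2 k_B T / c^2 agrees with Eq. (2) to within about 2 %.
b_rj = 2.0 * v**2 * k_B * T / c**2
```

This closeness of the two forms is why microwave radiances are conveniently expressed as brightness temperatures.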
Sensor characteristics also have to be taken into account by the forward model, including the antenna field of view, the sideband folding, and the spectrometer channel response (Eriksson et al., 2006). Firstly, the radiance weighted by the antenna response is expressed by the integration

$$I_v^{\mathrm{a}}=\int_{\Omega}I_v(\Omega)\,W_v^{\mathrm{a}}(\Omega)\,\mathrm{d}\Omega, \tag{4}$$

where $W_v^{\mathrm{a}}$ is the normalised antenna response function. Normally, the variation of Iv in the azimuth dimension can be neglected or calculated beforehand. Secondly, a heterodyne mixer converts the signals to intermediate frequency, folding the upper and lower sideband signals together. The apparent intensity after the mixer can be modelled as

$$I_v^{\mathrm{if}}=\frac{W^{\mathrm{s}}(v)\,I_v^{\mathrm{a}}+W^{\mathrm{s}}(v')\,I_{v'}^{\mathrm{a}}}{W^{\mathrm{s}}(v)+W^{\mathrm{s}}(v')}, \tag{5}$$

where $W^{\mathrm{s}}$ is the sideband response. Finally, the signal is recorded by the spectrometers, which can be described in a similar way to the antenna response:

$$I^{\mathrm{c}}=\int_{v}I_v^{\mathrm{if}}\,W_v^{\mathrm{c}}(v)\,\mathrm{d}v. \tag{6}$$

Here $W_v^{\mathrm{c}}$ is the normalised channel response, and the resulting radiance is denoted Ic. The measured radiance is transformed to brightness temperatures using the Planck function. ## 3.2 Retrieval algorithm The optimal estimation method (OEM) is the most common method used in atmospheric sounding for retrieving vertical profiles of chemical species (Rodgers, 2000).
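Before turning to the retrieval itself, note that on a discrete grid the instrument convolution chain of Eqs. (4)-(6) reduces to weighted averages. The sketch below illustrates the sideband folding of Eq. (5) and the channel integration of Eq. (6) with made-up radiances and an ideal DSB mixer; it is not the TALIS processing code.

```python
import numpy as np

def fold_sidebands(i_usb, i_lsb, w_usb, w_lsb):
    """Sideband folding of Eq. (5): response-weighted mean of the two bands."""
    return (w_usb * i_usb + w_lsb * i_lsb) / (w_usb + w_lsb)

def channel_average(i_if, w_chan, dv):
    """Channel convolution of Eq. (6) with a normalised discrete response."""
    w = w_chan / (w_chan.sum() * dv)   # normalise so the response integrates to 1
    return np.sum(i_if * w) * dv

# An ideal DSB mixer has equal sideband responses, folding the bands 50/50.
i_folded = fold_sidebands(i_usb=200.0, i_lsb=100.0, w_usb=1.0, w_lsb=1.0)

# An ideal rectangular 2 MHz channel applied to a flat 150 K spectrum.
dv = 0.1e6                              # 0.1 MHz grid spacing [Hz]
grid = np.arange(0.0, 2e6, dv)
spectrum = np.full(grid.size, 150.0)    # flat brightness spectrum [K]
i_chan = channel_average(spectrum, np.ones(grid.size), dv)
```

For a flat input both operations simply return the input level (150 K here); a non-uniform sideband response would instead bias the folded radiance toward the stronger sideband.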
In OEM theory, a predicted noisy measurement $\hat{\mathbf{y}}$ can be expressed by a forward model F of an unknown atmospheric state x plus the system noise ϵy:

$$\hat{\mathbf{y}}=\mathbf{F}(\mathbf{x},\,\mathbf{b})+\boldsymbol{\epsilon}_y. \tag{7}$$

The noiseless predicted radiances F(x, b) are compared with the observed radiance y so that the unknown state which minimises the cost function χ2 can be found. The cost function is given by

$$\chi^2=\left[\mathbf{y}-\mathbf{F}(\mathbf{x},\mathbf{b})\right]^T\mathbf{S}_y^{-1}\left[\mathbf{y}-\mathbf{F}(\mathbf{x},\mathbf{b})\right]+\left[\mathbf{x}-\mathbf{x}_{\mathrm{a}}\right]^T\mathbf{S}_{\mathrm{a}}^{-1}\left[\mathbf{x}-\mathbf{x}_{\mathrm{a}}\right], \tag{8}$$

where xa is an a priori state vector, and Sa and Sy are the covariance matrices representing the natural variability of the state vector and the measurement error, respectively. Assuming no correlation between the channels, the off-diagonal elements of Sy are zero and the diagonal elements are set to the square of the system noise. Usually, a simple formula can be used to determine the SSB radiometric noise standard deviation:

$$\epsilon=\frac{T_{\mathrm{sys}}}{\sqrt{\beta\,\mathrm{d}\tau}}, \tag{9}$$

where Tsys is the system noise temperature (the sum of the receiver noise temperature and the atmospheric temperature received by the antenna), β is the noise-equivalent bandwidth, and dτ is the integration time for measuring a single spectrum.
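Equation (9) is the standard radiometer equation; a short sketch (with hypothetical numbers, not the TALIS specification) shows how the single-spectrum noise scales, and how averaging N independent spectra reduces it by a further factor of the square root of N:

```python
import math

def radiometric_noise(t_sys, bandwidth, t_int):
    """Radiometer equation, Eq. (9): 1-sigma noise of a single spectrum [K]."""
    return t_sys / math.sqrt(bandwidth * t_int)

# Hypothetical example values, not the TALIS specification:
# 2000 K system noise, 2 MHz channel bandwidth, 0.1 s integration time.
eps = radiometric_noise(t_sys=2000.0, bandwidth=2e6, t_int=0.1)

# Averaging 100 independent spectra scales the noise by 1/sqrt(100) = 0.1,
# which is the factor used for the "averaged" products later in the paper.
eps_avg = eps / math.sqrt(100.0)
```

With these example numbers the single-spectrum noise is about 4.5 K, dropping to about 0.45 K after averaging 100 spectra.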
The diagonal elements of Sa specify the a priori variance, and the off-diagonal terms describe the correlations between adjacent elements in order to make the retrieved profile smoother. Finally, the Levenberg–Marquardt method, a modification of the Gauss–Newton iteration, is used to solve the non-linear problem. The solution is given by

$$\mathbf{x}_{i+1}=\mathbf{x}_i+\left[(1+\gamma)\,\mathbf{S}_{\mathrm{a}}^{-1}+\mathbf{K}_i^T\mathbf{S}_y^{-1}\mathbf{K}_i\right]^{-1}\left\{\mathbf{K}_i^T\mathbf{S}_y^{-1}\left[\mathbf{y}-\mathbf{F}(\mathbf{x}_i)\right]-\mathbf{S}_{\mathrm{a}}^{-1}\left(\mathbf{x}_i-\mathbf{x}_{\mathrm{a}}\right)\right\}, \tag{10}$$

where γ denotes the Levenberg–Marquardt damping parameter and Ki represents the weighting function matrix (Jacobian) at iteration i. The OEM provides an approach to describe the retrieval error completely.
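The update of Eq. (10) can be illustrated on a toy linear problem, where F(x) = Kx and the fixed point of the iteration is the OEM (maximum a posteriori) solution. Everything below, the Jacobian, covariances, and damping, is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear problem: F(x) = Kx with a made-up Jacobian and covariances.
n, m = 5, 20
K = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
x_a = np.zeros(n)                # a priori state
S_a = 4.0 * np.eye(n)            # a priori covariance
S_y = 0.01 * np.eye(m)           # measurement noise covariance
y = K @ x_true                   # noiseless measurement for the demo

def lm_step(x_i, gamma):
    """One Levenberg-Marquardt iteration, Eq. (10), for the linear model."""
    Sy_inv = np.linalg.inv(S_y)
    Sa_inv = np.linalg.inv(S_a)
    lhs = (1.0 + gamma) * Sa_inv + K.T @ Sy_inv @ K
    rhs = K.T @ Sy_inv @ (y - K @ x_i) - Sa_inv @ (x_i - x_a)
    return x_i + np.linalg.solve(lhs, rhs)

x = x_a.copy()
for _ in range(20):
    x = lm_step(x, gamma=1.0)    # damping held fixed for simplicity
```

In a real retrieval the damping γ is adapted between iterations (raised when a step increases the cost function, lowered otherwise); here it is held fixed because the problem is linear and the iteration converges regardless.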
The averaging kernel matrix A, which represents the sensitivity of the retrieved state to the true state, is written as

$$\mathbf{A}=\mathbf{G}_y\mathbf{K}_x=\frac{\partial\hat{\mathbf{x}}}{\partial\mathbf{x}}, \tag{11}$$

where Gy is the contribution matrix, which expresses the sensitivity of the retrieved state to the measurement:

$$\mathbf{G}_y=\frac{\partial\hat{\mathbf{x}}}{\partial\mathbf{y}}=\left(\mathbf{K}_x^T\mathbf{S}_y^{-1}\mathbf{K}_x+\mathbf{S}_{\mathrm{a}}^{-1}\right)^{-1}\mathbf{K}_x^T\mathbf{S}_y^{-1}. \tag{12}$$

The retrieval resolution can be estimated from the full width at half maximum (FWHM) of the averaging kernel (Marks and Rodgers, 1993). Another useful quantity is the measurement response, which represents the contribution of the true state to the retrieval (Baron et al., 2002):

$$W(i)=\sum_j\left|A(i,\,j)\right|. \tag{13}$$

The ideal measurement response should be near 1. In practice, the reliable range of a retrieval is usually characterised by $|W-1|<0.2$. Figure 8The antenna patterns of TALIS. The retrieval error can be described by two covariance matrices: the smoothing error covariance matrix, which arises from the need for a priori information,

$$\mathbf{S}_n=(\mathbf{A}-\mathbf{I})\,\mathbf{S}_{\mathrm{a}}\,(\mathbf{A}-\mathbf{I})^T, \tag{14}$$

and the measurement error covariance matrix due to the measurement noise,

$$\mathbf{S}_m=\mathbf{G}_y\mathbf{S}_y\mathbf{G}_y^T. \tag{15}$$

The error covariance matrix used in the following simulation is the sum of Sn and Sm.
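The diagnostics of Eqs. (11)-(15) are simple matrix products once the Jacobian is available. The sketch below uses a random toy Jacobian (not a TALIS Jacobian) to show how the averaging kernel, measurement response, and the two error covariances are assembled:

```python
import numpy as np

# Toy retrieval diagnostics, Eqs. (11)-(15), with a made-up Jacobian.
rng = np.random.default_rng(1)
n, m = 4, 12
K = rng.standard_normal((m, n))
S_y = 0.01 * np.eye(m)           # measurement noise covariance
S_a = np.eye(n)                  # a priori covariance

Sy_inv = np.linalg.inv(S_y)
# Contribution (gain) matrix, Eq. (12)
G = np.linalg.solve(K.T @ Sy_inv @ K + np.linalg.inv(S_a), K.T @ Sy_inv)
A = G @ K                        # averaging kernel matrix, Eq. (11)

W = np.abs(A).sum(axis=1)        # measurement response, Eq. (13)
S_n = (A - np.eye(n)) @ S_a @ (A - np.eye(n)).T   # smoothing error, Eq. (14)
S_m = G @ S_y @ G.T                               # measurement error, Eq. (15)
total_error = np.sqrt(np.diag(S_n + S_m))         # 1-sigma precision per level
reliable = np.abs(W - 1.0) < 0.2                  # |W - 1| < 0.2 criterion
```

With a well-conditioned Jacobian and small noise, A is close to the identity, W is close to 1 everywhere, and the total error is dominated by the measurement term; as the noise grows, A flattens, W drops, and the smoothing term takes over.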
4 Measurement performance ## 4.1 Simulation setup The objective of the simulation is to evaluate the observation performance of TALIS. In this simulation, the Atmospheric Radiative Transfer Simulator (ARTS 2.3) forward model and its corresponding retrieval tool, Qpack2, are used (Eriksson et al., 2005, 2011). The instrumental setup follows the characteristics of TALIS described in Tables 1 and 2. An ideal rectangular back-end channel response function is used. The simulated antenna patterns of the four radiometers are shown in Fig. 8. The full widths at half power of the antenna patterns are used in the following simulation. In this simulation, the scan altitude range is from 10 to 90 km and the spectra are obtained every 1 km. A retrieval grid with 2.5 km spacing is used since it matches the FOV of TALIS well, and cutting down the size of the state vector gives a significant increase in speed (Livesey and Van Snyder, 2004). A mid-latitude summer atmospheric condition extracted from FASCOD as provided by ARTS (profiles of BrO and HO2 are from MLS L3 v4.2 monthly averaged data, 20–30° N, July 2018) is chosen for the simulation (Clough et al., 1986). The scattering from tropospheric clouds, refraction, and the Zeeman effect are not considered because of the large computational complexity. A spectroscopic line parameter catalogue created from the JPL catalogue (Pickett et al., 1998), the HITRAN database (Rothman et al., 2013), and the Perrin catalogue (Perrin et al., 2005) is used for the line-by-line absorption calculation. The measurement covariance matrix is set to be diagonal as described in Sect. 3 in order to reduce the computing time. The a priori covariance matrix is built from 110 % of a typical profile, with a 3 km vertical correlation between adjacent pressure levels described by a parametric exponential function. The true profiles are defined with a vertical resolution of 0.5 km.
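The a priori covariance just described can be sketched as follows; the profile value is a placeholder, and only the 2.5 km grid, the 110 % standard deviation, and the 3 km exponential correlation length follow the setup in the text:

```python
import numpy as np

def apriori_covariance(z, sigma, corr_length=3.0):
    """A priori covariance with exponentially decaying inter-level
    correlation (correlation length in km), as in the simulation setup."""
    dz = np.abs(z[:, None] - z[None, :])   # level separations [km]
    return np.outer(sigma, sigma) * np.exp(-dz / corr_length)

z = np.arange(10.0, 92.5, 2.5)             # retrieval grid, 2.5 km spacing
profile = np.full(z.size, 5.0e-6)          # placeholder VMR profile (made up)
sigma = 1.1 * profile                      # 110 % of the typical profile
S_a = apriori_covariance(z, sigma)         # symmetric, exponential correlation
```

The exponential kernel keeps the matrix positive definite while coupling adjacent levels, which is what regularises the retrieval toward smooth profiles.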
The true species profiles are multiplied by a factor of 1.1 to form the a priori profiles, and a 5 K offset is added to the true temperature profile to form its a priori profile. The molecules are retrieved simultaneously from each band. No spectrally flat extinction term was added. The expected 1σ noise is calculated by Eq. (9), and the noise is assumed to be 2.2, 2.2, 2.2, and 5.1 K at 118, 190, 240, and 643 GHz, respectively. Species such as BrO and HO2, whose emission radiances are small compared with the system noise, must be averaged to increase the precision. Here a lower noise (the 1σ noise multiplied by a factor of 0.1, equivalent to a 10° latitude weekly zonal mean) is used to represent the averaged product. Figure 9Temperature product comparison between the TALIS FFT spectrometer and the MLS “Standard” spectrometer using the 118.75 GHz line. All other factors are identical. ## 4.2 Comparison of TALIS and the Aura MLS As discussed in Sect. 2.2, TALIS has similar bands to the Aura MLS. The major difference between the two instruments is the spectrometers used in limb sounding. A simulation is performed to compare the performance of the main products between the TALIS FFT spectrometer and the Aura MLS “Standard” 25-channel spectrometer. Figures 9 to 11 show the retrieval products of TALIS and the MLS; all factors are identical except the spectrometer. Figure 10H2O product comparison between the TALIS FFT spectrometer and MLS “Standard” spectrometer using the 183.31 GHz line. All other factors are identical. Figure 11O3 product comparison between the TALIS FFT spectrometer and MLS “Standard” spectrometer using the 235.71 GHz line. All other factors are identical. According to the simulation results, TALIS performs better than the Aura MLS because of its wider bandwidth and finer resolution. The temperature precision of TALIS is 1.5 K better than that of the Aura MLS at about 15–30 km, and the vertical resolution is also improved.
The difference in precision becomes small above 50 km. H2O precision is improved by about 2 %–10 % at about 15–50 km. O3 precision is improved by about 3 %–20 % at about 10–60 km. However, the digital autocorrelator spectrometers of the MLS, which improve its performance in the mesosphere, are not considered here. ## 4.3 Retrieval precision Based on the simulation, the retrieval precision for the target species of TALIS is evaluated. Results are plotted in Figs. 12 to 28. The precision (the square root of the diagonal elements of the error covariance matrix) is given for a single scan and for an averaged measurement, and the relative error is also provided. Auxiliary information about the averaging kernel function and measurement response is also included. The results are discussed in detail in the following. Figure 12Simulation results of temperature retrieval using the 118.75 (a), 233.95 (b), and 627.75 GHz (c) lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). Figure 13Simulation results of H2O retrieval using the 183.31 (a) and 657.9 GHz (b) lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). ### 4.3.1 Better precision products Temperature, H2O, O3, HNO3, HCl, N2O, and ClO are treated as better precision products because of their good precision for a single scan measurement. These products can be used in scientific research directly. Atmospheric temperature is the most important parameter; it can be retrieved with a high signal-to-noise ratio at low frequency, or with good vertical resolution at high frequency, using O2 lines.
TALIS will use the 118 GHz radiometer to detect the atmospheric temperature profile, with the 240 and 643 GHz radiometers providing supplementary products. Results are shown in Fig. 12; the sensitivity in the 118 GHz band is notably high. Single scan precision is good from 15 to 60 km, with precision <2 K. The retrieval vertical resolution is 2.5–4 km below 50 km and 4–6 km from 50 to 80 km. The precision of the averaged measurement will be <1 K from 15 to 85 km. The “Wide” filters of the MLS extend measurements down into the troposphere, where TALIS lacks sensitivity. However, the 240 GHz product can compensate for this loss of information since its precision is better in the upper troposphere (error <1 K for a vertical resolution of 2.5–3 km between 10 and 15 km). The result for the 643 GHz band is similar to that for the 240 GHz band. Once the temperature profile is retrieved, the pressure profile can be calculated from the hydrostatic equilibrium equation using a known pressure and temperature at a reference tangent point. The pressure profile is not a direct product and is not shown here. The H2O profile, another key parameter, can be measured by the 190 and 643 GHz radiometers. The 183.31 GHz line is generally used by humidity sounders to detect water vapour with good precision. Figure 13 shows that the retrieval precision will be <10 % from 10 to 55 km for a 190 GHz single scan measurement with a vertical resolution of 2.5–4 km. Averaged measurements have retrieval precisions <1 % at 10–55 km and <5 % at 10–80 km. The profile can also be retrieved by the 643 GHz radiometer with poorer precision. O3 has quite strong intensity in most spectral regions of TALIS. All the radiometers except 118 GHz can be used to observe this gas, which is important for the energy balance (Fig. 14). The 240 GHz radiometer, which covers the 235.7 GHz line, has the highest O3 sensitivity.
The profile can be retrieved with a single scan precision <10 % from 10 to 55 km, with a vertical resolution of 2.5–3 km. The vertical resolution degrades to 3–6 km at altitudes above 70 km. By averaging the measurements, the precision will be <5 % at 10–70 km. The other two bands show good performance from 15 to 55 km with a single scan precision <10 %. Figure 14Simulation results of O3 retrieval using the 190 (a), 235.7 (b), and 657.5 GHz (c) lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). Figure 15Simulation results of HNO3 retrieval using the 244 (a) and 656 GHz (b) lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). Figure 16Simulation result of HCl retrieval using the 624.9 GHz lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). HNO3 is a common species in the stratosphere and has relatively strong lines in the 240 and 643 GHz bands. Figure 15 shows the results of the HNO3 retrievals. The 240 GHz radiometer can measure HNO3 over the 15–32 km altitude range with a single scan precision <30 % and a vertical resolution of 2.5–3 km. Averaging the measurements can improve the retrieval to a precision <10 % from 15 to 35 km. The 643 GHz signal is stronger than that in the 240 GHz band, but it is strongly absorbed by O3 below about 30 km. However, after averaging the measurements, information can be retrieved between 15 and 70 km with a precision better than 60 %. Figure 16 shows the expected precision of the HCl observation.
HCl can be measured at 15–50 km with a <20 % single scan relative error and a vertical resolution of 2.5–3 km. By averaging the measurements, the precision will be <10 % at 12–72 km. N2O can be retrieved from the 190 GHz band in the upper troposphere, while the 643 GHz band provides more information and good precision in the stratosphere. Figure 17 shows that the single scan precision at 643 GHz is <20 % at 12–32 km with a vertical resolution of 2.5 km. By averaging the measurements, the precision will be <10 % from 10 to 42 km. The 190 GHz band gives similar precision at 10–20 km. Figure 17Simulation results of N2O retrieval using the 200.98 (a) and 652.834 GHz (b) lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). ClO can be retrieved from radiances measured in the 190 and 643 GHz bands (Fig. 18). The best retrievals are obtained from the band at 643 GHz, but information can also be retrieved from the 190 GHz radiometer with poorer precision. A single scan measurement from the 643 GHz radiometer can be used to obtain ClO with <40 % precision from 30 to 45 km, and the vertical resolution is about 2.5–4 km throughout the useful range. By averaging the measurements, the precision will be <30 % from 23 to 57 km. Since ClO vanishes in the middle stratosphere (30–40 km) during nighttime, the precision will be relatively worse at night. In the polar regions, the relative precision will be better between 20 and 25 km during chlorine activation. Figure 18Simulation results of ClO retrieval using the 203.4 (a) and 649.45 GHz (b) lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e.
vertical resolution). ### 4.3.2 Medium precision products Medium precision products, including CO, HCN, and CH3Cl, are those whose single scan retrieval precision is not fully satisfactory but still usable to some degree; users can choose between the single scan and averaged products. CO can be measured using the 230.538 and 661.07 GHz lines. Figure 19 shows that the 240 GHz radiometer can provide CO information with 30 %–90 % single scan precision from 10 to 90 km. The vertical resolution is in the range 3.5–5.5 km from the upper troposphere to the lower mesosphere, degrading to 6–10 km in the upper mesosphere. Using averaged measurements, CO can be retrieved with <30 % relative error over the range of 10–90 km. However, the retrieval from the 643 GHz measurement shows poorer precision. Figure 19Simulation results of CO retrieval using the 230.538 (a) and 661.07 GHz (b) lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). HCN is measured by the 190 GHz radiometer at the 177.26 GHz line. The single scan precision is <50 % from 12 to 28 km, and the vertical resolution is about 5 km at a height of 30 km, degrading to 8 km at about 40 km (Fig. 20). By averaging the measurements, the relative error will be <30 % at 10–40 km. Figure 20Simulation result of HCN retrieval using the 177.26 GHz lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). CH3Cl can be measured by the 643 GHz radiometer. As the results show (Fig. 21), the 649.5 GHz band is suitable for CH3Cl observation in the upper troposphere and lower stratosphere.
It can be measured with <30 % single scan precision from 12 to 23 km and with <20 % averaged precision from 10 to 30 km. The vertical resolution is about 3–4 km over most of the useful range. Figure 21Simulation results of CH3Cl retrieval using the 649.5 GHz lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). ### 4.3.3 Poor precision products There are several weak lines in the spectral regions of TALIS, such as those of HOCl, BrO, and HO2. These measurements must be significantly averaged in order to obtain reliable and satisfactory precision. The 635.87 GHz line is the most appropriate line for HOCl observation. However, the single scan retrieval has a poor precision of 60 %–80 % at 25–45 km with a vertical resolution of about 4–6 km. Figure 22 reveals that HOCl can be retrieved from 20 to 50 km with an averaged measurement precision of <50 %. Figure 22Simulation result of HOCl retrieval using the 635.87 GHz lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution). BrO can be measured using the 624.768 GHz spectral line. Figure 23 shows the simulation result of the BrO retrieval. As the averaging kernel reveals, there is almost no useful information in a single scan measurement because of the quite poor signal-to-noise ratio. Therefore, averaging is needed to obtain reliable scientific results. The error is 50 % from 24 to 48 km with a vertical resolution of about 4 km. Figure 23Simulation results of BrO retrieval using the 624.768 GHz lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation.
The black solid line in the last panel represents the FWHM (i.e. vertical resolution).

HO2 can be measured by the 643 GHz radiometer with <50 % precision over the vertical range of 30–90 km by using averaged data (Fig. 24). The precision of single scan retrieval is 55 %–70 % at 40–75 km, which is not desirable because of the weak signal. The vertical resolution is about 6 km.

Figure 24 Simulation results of HO2 retrieval using the 649.701 GHz line. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution).

### 4.3.4 Promising products

The promising products are target species which are not covered by the Aura MLS but are covered by TALIS. There are four such gases: NO, NO2, H2CO, and SO2 (normal VMR). However, their signals are all weak and must be averaged to improve the retrieval precision.

NO (daytime) can be retrieved from averaged data with <50 % precision at 28–90 km (Fig. 25), while it vanishes in the nighttime. The vertical resolution is about 4–10 km, while a single scan measurement carries little information in the region where NO mostly exists.

Figure 25 Simulation result of NO retrieval using the 651.75 GHz line. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution).

NO2 (nighttime) has a weak line in the spectrum of the 240 GHz band, and it vanishes in the daytime. Figure 26 shows that only an averaged measurement can provide some information at 20–40 km, with a precision of about 40 %–60 % in the nighttime. The vertical resolution is about 5 km.

Figure 26 Simulation result of NO2 retrieval using the 232.7 GHz line. “Single” and “Average” represent the retrieval error using different noise.
“Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution).

Although H2CO has a line at 656.45 GHz, its emission radiance is too weak, and almost no useful information can be obtained (Fig. 27). However, this line has the potential to measure H2CO. More averaging or other effective methods should be applied to reach an acceptable precision.

Figure 27 Simulation result of H2CO retrieval using the 656.45 GHz line. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution).

The MLS standard SO2 product is taken from the 240 GHz retrieval, but it is only effective when the SO2 concentration is significantly enhanced. TALIS has both 240 and 643 GHz radiometers, which cover lines of SO2. The 240 GHz radiometer can be used to measure SO2 in the same way as MLS, while the 643 GHz radiometer can give the concentration of the nominal background. The averaged result shows that SO2 can be retrieved at 14–20 and 46–70 km with a relative error of <50 % (Fig. 28). The vertical resolution is about 6 km.

Figure 28 Simulation result of SO2 retrieval using the 659 GHz lines. “Single” and “Average” represent the retrieval error using different noise. “Profile” represents the typical profile used in the simulation. The black solid line in the last panel represents the FWHM (i.e. vertical resolution).

5 Conclusions

Simulation analysis for temperature and chemical species retrieval has been performed to assess the measurement performance of TALIS and to support the mission. This study mainly focuses on a large number of important chemical species in the middle and upper atmosphere which can be observed by the limb sounder. The results are summarised in Table 3.

Table 3 Simulation results of TALIS retrieval precision.
Seven species show high sensitivity, sufficient for scientifically useful single profile retrievals. The 118, 240, and 643 GHz observations of O2 are used to estimate the temperature profile, which is quite important in meteorology. The 118 GHz radiometer can obtain temperature with a precision of <2 K at 10–60 km, and the 240 and 643 GHz radiometers can provide more information in the upper troposphere (precision <1 K at 10–15 km). The 190 GHz radiometer can be used to measure H2O with a precision of <10 % at 10–55 km and gives information on upper tropospheric humidity. O3 can be measured by three radiometers, of which the 240 GHz radiometer has the best precision: <10 % from 10 to 55 km by single scan measurement. HNO3 can be derived from the 240 GHz retrieval with a precision of <30 % at 15–32 km. The precision of HCl single scan retrieval is <20 % over most of the useful range. The 643 GHz radiometer can give a good estimate of the N2O profile, with a precision of <20 % at 12–32 km. The single scan precision of ClO measured by the 643 GHz radiometer is about <40 % in the region where ClO mainly exists. CH3Cl can be measured in the upper troposphere and lower stratosphere with a precision of about 30 %. The profile of CO retrieved from the 240 GHz measurement is better than that from the 643 GHz measurement; the best sensitivity is found between 70 and 90 km, where the VMR of CO is large, and the precision is about 50 %. HCN has 50 % single scan precision at 12–28 km, so averaging may be needed. Other measurements, such as HO2, HOCl, NO, NO2, BrO, SO2, and H2CO, must be significantly averaged before scientific use because of their weak signals.

Apart from these products, some potential products will be discussed in future work. Line-of-sight wind is important information which could be measured by TALIS. Cloud ice water content (IWC) is also an essential product provided by a passive microwave radiometer.
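Several of the conclusions above rest on the familiar fact that averaging N independent scans shrinks random retrieval error roughly as 1/sqrt(N). A minimal numerical sketch of that scaling, using purely illustrative noise numbers rather than the actual TALIS noise model:

```python
import random
import statistics

def relative_error(n_scans, true_vmr=1.0, noise_sd=0.6, n_trials=4000, seed=0):
    """Estimate the random relative error of an average of n_scans
    independent measurements, each with Gaussian noise.  The numbers
    (true_vmr, noise_sd) are illustrative, not instrument values."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(true_vmr, noise_sd) for _ in range(n_scans))
        for _ in range(n_trials)
    ]
    # spread of the averaged estimate, relative to the true value
    return statistics.stdev(means) / true_vmr

single = relative_error(1)    # roughly the single scan error (~60 % here)
avg100 = relative_error(100)  # about 10x smaller: error falls like 1/sqrt(N)
```

This is why a species with a 60 % single scan error can still yield a scientifically useful product once enough profiles are averaged, at the cost of temporal or spatial resolution.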
Future studies will also investigate the Zeeman effect, since it polarises and changes the shape of the O2 lines. TALIS has the potential to monitor chemical composition throughout Earth's atmosphere, which is important for numerical weather prediction models and for characterising long-term changes in climate. The measurement data can be used for atmospheric chemistry and dynamics studies, which are quite important for geoscience. This paper is a preliminary analysis of the instrument; more studies, such as calibration research and error analysis, will be performed in the future.

Code and data availability. ARTS can be downloaded at http://www.radiativetransfer.org/getarts/ (last access: 15 December 2017; University of Hamburg and Chalmers University of Technology, 2017b). Qpack is included in Atmlab, which can be downloaded from http://www.radiativetransfer.org/tools/ (last access: 15 December 2017; University of Hamburg and Chalmers University of Technology, 2017a). Profiles and spectroscopy data of Perrin and HITRAN are included in the ARTS XML Data. The JPL molecular spectroscopy catalogue is available at https://spec.jpl.nasa.gov/ (last access: 21 February 2019; NASA JPL, 2019). MLS version 4.2 data can be obtained at https://doi.org/10.5067/Aura/MLS/DATA3020 (Millan et al., 2016).

Author contributions. ZW designed the mission concept. WW performed the simulation and wrote the manuscript. WW and YD analysed the results. ZW edited the article.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors would like to thank the ARTS and Qpack development teams for assistance in configuring and running the model. The authors thank the JPL for providing spectroscopy data and MLS data. They would also like to thank the reviewers and the editors for their valuable and helpful suggestions.

Review statement.
This paper was edited by Bernd Funke and reviewed by Hugh C. Pumphrey and two anonymous referees. References Barath, F. T., Chavez, M. C., Cofield, R. E., Flower, D. A., Frerking, M. A., Gram, M. B., Harris, W. M., Holden, J. R., Jarnot, R. F., Kloezeman, W. G., Klose, G. J., Lau, G. K., Loo, M. S., Maddison, B. J., Mattauch, R. J., McKinney, R. P., Peckham, G. E., Pickett, H. M., Pickett, G., Soltis, F. S., Suttie, R. A., Tarsala, J. A., Waters, J. W., and Wilson, W. J.: The Upper Atmosphere Research Satellite microwave limb sounder instrument, J. Geophys. Res., 98, 10751–10762, https://doi.org/10.1029/93JD00798, 1993. Baron, P., Ricaud, P., de la Nöe, J., Eriksson, P., Merino, F., Ridal, M., and Murtagh, D. P.: Studies for the Odin Sub-Millimetre Radiometer. II. Retrieval methodology, Can. J. Phys., 80, 341–356, https://doi.org/10.1139/P01-150, 2002. Baron, P., Urban, J., Sagawa, H., Möller, J., Murtagh, D. P., Mendrok, J., Dupuy, E., Sato, T. O., Ochiai, S., Suzuki, K., Manabe, T., Nishibori, T., Kikuchi, K., Sato, R., Takayanagi, M., Murayama, Y., Shiotani, M., and Kasai, Y.: The Level 2 research product algorithms for the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES), Atmos. Meas. Tech., 4, 2105–2124, https://doi.org/10.5194/amt-4-2105-2011, 2011. Baron, P., Murtagh, D. P., Urban, J., Sagawa, H., Ochiai, S., Kasai, Y., Kikuchi, K., Khosrawi, F., Körnich, H., Mizobuchi, S., Sagi, K., and Yasui, M.: Observation of horizontal winds in the middle-atmosphere between 30 S and 55 N during the northern winter 2009–2010, Atmos. Chem. Phys., 13, 6049–6064, https://doi.org/10.5194/acp-13-6049-2013, 2013. Baron, P., Murtagh, D., Eriksson, P., Mendrok, J., Ochiai, S., Pérot, K., Sagawa, H., and Suzuki, M.: Simulation study for the Stratospheric Inferred Winds (SIW) sub-millimeter limb sounder, Atmos. Meas. Tech., 11, 4545–4566, https://doi.org/10.5194/amt-11-4545-2018, 2018. Clough, S. A., Kneizys, F. X., Shettle, E. P., and Anderson, G. 
P.: Atmospheric radiance and transmittance – FASCOD2, Sixth Conference on Atmospheric Radiation, Williamsburg, VA, 13–16 May 1986, American Meteorological Society, Boston, MA, Extended Abstracts, A87-15076 04-47, 141–144, 1986. Eriksson, P., Jiménez, C., and Buehler, S. A.: Qpack, a general tool for instrument simulation and retrieval work, J. Quant. Spectrosc. Ra., 91, 47–64, https://doi.org/10.1016/j.jqsrt.2004.05.050, 2005. Eriksson, P., Ekström, M., Melsheimer, C., and Buehler, S. A.: Efficient forward modelling by matrix representation of sensor responses, Int. J. Remote Sens., 27, 1793–1808, https://doi.org/10.1080/01431160500447254, 2006. Eriksson, P., Ekström, M., Rydberg, B., and Murtagh, D. P.: First Odin sub-mm retrievals in the tropical upper troposphere: ice cloud properties, Atmos. Chem. Phys., 7, 471–483, https://doi.org/10.5194/acp-7-471-2007, 2007. Eriksson, P., Buehler, S. A., Davis, C. P., Emde, C., and Lemke, O.: ARTS, the atmospheric radiative transfer simulator, Version 2, J. Quant. Spectrosc. Ra., 112, 1551–1558, https://doi.org/10.1016/j.jqsrt.2011.03.001, 2011. Johnson, D. G., Traub, W. A., Chance, K. V., Jucks, K. W., and Stachnik, R. A.: Estimating the abundance of ClO from simultaneous remote sensing measurements of HO2, OH, and HOCl, Geophys. Res. Lett., 22, 1869–1871, https://doi.org/10.1029/95GL01249, 1995. Kasai, Y., Sagawa, H., Kreyling, D., Dupuy, E., Baron, P., Mendrok, J., Suzuki, K., Sato, T. O., Nishibori, T., Mizobuchi, S., Kikuchi, K., Manabe, T., Ozeki, H., Sugita, T., Fujiwara, M., Irimajiri, Y., Walker, K. A., Bernath, P. F., Boone, C., Stiller, G., von Clarmann, T., Orphal, J., Urban, J., Murtagh, D., Llewellyn, E. J., Degenstein, D., Bourassa, A. E., Lloyd, N. D., Froidevaux, L., Birk, M., Wagner, G., Schreier, F., Xu, J., Vogt, P., Trautmann, T., and Yasui, M.: Validation of stratospheric and mesospheric ozone observed by SMILES from International Space Station, Atmos. Meas. 
Tech., 6, 2311–2338, https://doi.org/10.5194/amt-6-2311-2013, 2013. Kikuchi, K., Nishibori, T., Ochiai, S., Ozeki, H., Irimajiri, Y., Kasai, Y., Koike, M., Manabe, T., Mizukoshi, K., Murayama, Y., Nagahama, T., Sano, T., Sato, R., Seta, M., Takahashi, C., Takayanagi, M., Masuko, H., Inatani, J., Suzuki, M., and Shiotani, M.: Overview and early results of the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES), J. Geophys. Res., 115, D23306, https://doi.org/10.1029/2010JD014379, 2010. Lambert, A., Read, W. G., Livesey, N. J., Santee, M. L., Manney, G. L., Froidevaux, L., Wu, D. L., Schwartz, M. J., Pumphrey, H. C., Jimenez, C., Nedoluha, G. E., Cofield, R. E., Cuddy, D. T., Daffer, W. H., Drouin, B. J., Fuller, R. A., Jarnot, R. F., Knosp, B. W., Pickett, H. M., Perun, V. S., Snyder, W. V., Stek, P. C., Thurstans, R. P., Wagner, P. A., Waters, J. W., Jucks, K. W., Toon, G. C., Stachnik, R. A., Bernath, P. F., Boone, C. D., Walker, K. A., Urban, J., Murtagh, D., Elkins, J. W., and Atlas, E.: Validation of the Aura Microwave Limb Sounder middle atmosphere water vapor and nitrous oxide measurements, J. Geophys. Res., 112, D24S36, https://doi.org/10.1029/2007JD008724, 2007. Lary, D. J. and Aulov, O.: Space-based measurements of HCl: Intercomparison and historical context, J. Geophys. Res., 113, D15S04, https://doi.org/10.1029/2007JD008715, 2008. Livesey, N. J. and Van Snyder, W.: EOS MLS Retrieval Processes Algorithm Theoretical Basis, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, 91109-8099, Tech. Rep. JPL D-16159/CL #04-2043, version 2.0, available at: https://mls.jpl.nasa.gov/data/eos_algorithm_atbd.pdf (last access: 26 March 2019), 2004. Livesey, N. J., Filipiak, M. J., Froidevaux, L., Read, W. G., Lambert, A., Santee, M. L., Jiang, J. H., Pumphrey, H. C., Waters, J. W., Cofield, R. E., Cuddy, D. T., Daffer, W. H., Drouin, B. J., Fuller, R. A., Jarnot, R. F., Jiang, Y. B., Knosp, B. W., Li, Q. B., Perun, V.
S., Schwartz, M. J., Snyder, W. V., Stek, P. C., Thurstans, R. P., Wagner, P. A., Avery, M., Browell, E. V., Cammas, J. P., Christensen, L. E., Diskin, G. S., Gao, R. S., Jost, H. J., Loewenstein, M., Lopez, J. D., Nedelec, P., Osterman, G. B., Sachse, G. W., and Webster, C. R.: Validation of Aura Microwave Limb Sounder O3 and CO observations in the upper troposphere and lower stratosphere, J. Geophys. Res., 113, D15S02, https://doi.org/10.1029/2007JD008805, 2008. Livesey, N. J., Logan, J. A., Santee, M. L., Waters, J. W., Doherty, R. M., Read, W. G., Froidevaux, L., and Jiang, J. H.: Interrelated variations of O3, CO and deep convection in the tropical/subtropical upper troposphere observed by the Aura Microwave Limb Sounder (MLS) during 2004–2011, Atmos. Chem. Phys., 13, 579–598, https://doi.org/10.5194/acp-13-579-2013, 2013. Marks, C. J. and Rodgers, C. D.: A retrieval method for atmospheric composition from limb emission measurements, J. Geophys. Res., 98, 14939–14953, https://doi.org/10.1029/93JD01195, 1993. Mätzler, C.: Thermal Microwave Radiation: Applications for Remote Sensing, Iee Electromagnetic Waves Series 52, 2006. Millán, L., Livesey, N., Read, W., Froidevaux, L., Kinnison, D., Harwood, R., MacKenzie, I. A., and Chipperfield, M. P.: New Aura Microwave Limb Sounder observations of BrO and implications for Bry, Atmos. Meas. Tech., 5, 1741–1751, https://doi.org/10.5194/amt-5-1741-2012, 2012. Millán, L., Wang, S., Livesey, N., Kinnison, D., Sagawa, H., and Kasai, Y.: Stratospheric and mesospheric HO2 observations from the Aura Microwave Limb Sounder, Atmos. Chem. Phys., 15, 2889–2902, https://doi.org/10.5194/acp-15-2889-2015, 2015. Millan, L., Livesey, N., and Read, W.: MLS/Aura Level 3 Bromine Monoxide (BrO) Daily 10degrees Lat Zonal Mean V004, Greenbelt, MD, USA, Goddard Earth Sciences Data and Information Services Center (GES DISC), https://doi.org/10.5067/Aura/MLS/DATA3020, 2016. 
Murtagh, D., Frisk, U., Merino, F., Ridal, M., Jonsson, A., Stegman, J., Witt, G., Eriksson, P., Jimenez, C., Mégie, G., de la Nöe, J., Ricaud, P., Baron, P., Pardo, J., Hauchcorne, A., Llewellyn, E., Degenstein, D., Gattinger, R., Lloyd, N., Evans, W., McDade, I., Haley, C., Sioris, C., von Savigny, C., Solheim, B., McConnell, J., Strong, K., Richardson, E., Leppelmeier, G., Kyrola, E., Auvinen, H., and Oikarinen, L.: An overview of the Odin atmospheric mission, Can. J. Phys., 80, 309–319, https://doi.org/10.1139/P01-157, 2002. NASA Jet Propulsion Laboratory (JPL): JPL Molecular Spectroscopy Catalogue, available at: https://spec.jpl.nasa.gov/, last access: 21 February 2019. Ochiai, S., Baron, P., Nishibori, T., Irimajiri, Y., Uzawa, Y., Manabe, T., Maezawa, H., Mizuno, A., Nagahama, T., Sagawa, H., Suzuki, M., and Shiotani, M.: SMILES-2 Mission for Temperature, Wind, and Composition in the Whole Atmosphere, SOLA, 13A, 13–18, https://doi.org/10.2151/sola.13A-003, 2017. Perrin, A., Puzzarini, C., Colmont, J.-M., Verdes, C., Wlodarczak, G., Cazzoli, G., Buehler, S., Flaud, J.-M., and Demaison, J.: Molecular Line Parameters for the “MASTER” (Millimeter Wave Acquisitions for Stratosphere/Troposphere Exchange Research) Database, J. Atmos. Chem., 51, 161–205, https://doi.org/10.1007/s10874-005-7185-9, 2005. Pickett, H. M., Poynter, R. L., Cohen, E. A., Delitsky, M. L., Pearson, J. C., and Müller, H. S. P.: Submillimeter, millimeter, and microwave spectral line catalog, J. Quant. Spectrosc. Ra., 60, 883–890, https://doi.org/10.1016/S0022-4073(98)00091-0, 1998. Pumphrey, H. C., Santee, M. L., Livesey, N. J., Schwartz, M. J., and Read, W. G.: Microwave Limb Sounder observations of biomass-burning products from the Australian bush fires of February 2009, Atmos. Chem. Phys., 11, 6285–6296, https://doi.org/10.5194/acp-11-6285-2011, 2011. Pumphrey, H. C., Read, W. G., Livesey, N. J., and Yang, K.: Observations of volcanic SO2 from MLS on Aura, Atmos. Meas.
Tech., 8, 195–209, https://doi.org/10.5194/amt-8-195-2015, 2015. Rodgers, C. D.: Inverse Methods for Atmospheric Sounding: Theory and Practice, World Scientific, Singapore, 2000. Rothman, L. S., Gordon, I. E., Babikov, Y., Barbe, A., Benner, D. C., Bernath, P. F., Birk, M., Bizzocchi, L., Boudon, V., Brown, L. R., Campargue, A., Chance, K., Cohen, E. A., Coudert, L. H., Devi, V. M., Drouin, B. J., Fayt, A., Flaud, J.-M., Gamache, R. R., Harrison, J. J., Hartmann, J.-M., Hill, C., Hodges, J. T., Jacquemart, D., Jolly, A., Lamouroux, J., Le Roy, R. J., Li, G., Long, D. A., Lyulin, O. M., Mackie, C. J., Massie, S. T., Mikhailenko, S., Müller, H. S. P., Naumenko, O. V., Nikitin, A. V., Orphal, J., Perevalov, V., Perrin, A., Polovtseva, E. R., Richard, C., Smith, M. A. H., Starikova, E., Sung, K., Tashkun, S., Tennyson, J., Toon, G. C., Tyuterev, V. G., and Wagner, G.: The HITRAN2012 molecular spectroscopic database, J. Quant. Spectrosc. Ra., 130, 4–50, https://doi.org/10.1016/j.jqsrt.2013.07.002, 2013. Santee, M. L., Lambert, A., Read, W. G., Livesey, N. J., Manney, G. L., Cofield, R. E., Cuddy, D. T., Daffer, W. H., Drouin, B. J., Froidevaux, L., Fuller, R. A., Jarnot, R. F., Knosp, B. W., Perun, V. S., Snyder, W. V., Stek, P. C., Thurstans, R. P., Wagner, P. A., Waters, J. W., Connor, B., Urban, J., Murtagh, D., Ricaud, P., Barrett, B., Kleinböhl, A., Kuttippurath, J., Küllmann, H., von Hobe, M., Toon, G. C., and Stachnik, R. A.: Validation of the Aura Microwave Limb Sounder ClO measurements, J. Geophys. Res., 113, D15S22, https://doi.org/10.1029/2007JD008762, 2008. Sato, T. O., Sagawa, H., Kreyling, D., Manabe, T., Ochiai, S., Kikuchi, K., Baron, P., Mendrok, J., Urban, J., Murtagh, D., Yasui, M., and Kasai, Y.: Strato-mesospheric ClO observations by SMILES: error analysis and diurnal variation, Atmos. Meas. Tech., 5, 2809–2825, https://doi.org/10.5194/amt-5-2809-2012, 2012. Schwartz, M. J., Read, W. G., Snyder, W.
V.: EOS MLS forward model polarized radiative transfer for Zeeman-split oxygen, IEEE T. Geosci. Remote, 44, 1182–1191, https://doi.org/10.1109/TGRS.2005.862267, 2006. Suzuki, M., Manago, N., Ozeki, H., Ochiai, S., and Baron, P.: Sensitivity study of SMILES-2 for chemical species, in: Proc. SPIE 9639, Sensors, Systems, and Next-Generation Satellites XIX, Toulouse, France, 16 October 2015, 96390M, https://doi.org/10.1117/12.2194832, 2015. Swadley, S. D., Poe, G. A., Bell, W., Hong, Y., Kunkee, D. B., McDermid, I. S., and Leblanc, T.: Analysis and Characterization of the SSMIS Upper Atmosphere Sounding Channel Measurements, IEEE T. Geosci. Remote, 46, 962–983, https://doi.org/10.1109/TGRS.2008.916980, 2008. Takahashi, C., Ochiai, S., and Suzuki, M.: Operational retrieval algorithms for JEM/SMILES level 2 data processing system, J. Quant. Spectrosc. Ra., 111, 160–173, https://doi.org/10.1016/j.jqsrt.2009.06.005, 2010. Takahashi, C., Suzuki, M., Mitsuda, C., Ochiai, S., Manago, N., Hayashi, H., Iwata, Y., Imai, K., Sano, T., Takayanagi, M., and Shiotani, M.: Capability for ozone high-precision retrieval on JEM/SMILES observation, Adv. Space Res., 48, 1076–1085, https://doi.org/10.1016/j.asr.2011.04.038, 2011. University of Hamburg and Chalmers University of Technology: Qpack2, available at: http://www.radiativetransfer.org/tools/, last access: 15 December 2017a. University of Hamburg and Chalmers University of Technology: The Atmospheric Radiative Transfer Simulator, available at: http://www.radiativetransfer.org/getarts/, last access: 15 December 2017b. Urban, J.: Optimal sub-millimeter bands for passive limb observations of stratospheric HBr, BrO, HOCl, and HO2 from space, J. Quant. Spectrosc. Ra., 76, 145–178, https://doi.org/10.1016/s0022-4073(02)00051-1, 2003. Urban, J., Baron, P., Lautié, N., Schneider, N., Dassas, K., Ricaud, P., and De la Nöe, J.: Moliere (v5): a versatile forward- and inversion model for the millimeter and sub-millimeter wavelength range, J. 
Quant. Spectrosc. Ra., 83, 529–554, https://doi.org/10.1016/S0022-4073(03)00104-3, 2004. Urban, J., Lautié, N., Le Flochmoën, E., Jiménez, C., Eriksson, P., de la Nöe, J., Dupuy, E., Ekström, M., El Amraoui, L., Frisk, U., Murtagh, D., Olberg, M., and Ricaud, P.: Odin/SMR limb observations of stratospheric trace gases: Level 2 processing of ClO, N2O, HNO3, and O3, J. Geophys. Res., 110, D14307, https://doi.org/10.1029/2004JD005741, 2005. Waters, J. W., Froidevaux, L., Read, W. G., Manney, G. L., Elson, L. S., Flower, D. A., Jarnot, R. F., and Harwood, R. S.: Stratospheric ClO and ozone from the Microwave Limb Sounder on the Upper Atmosphere Research Satellite, Nature, 362, 597–602, https://doi.org/10.1038/362597a0, 1993. Waters, J. W., Read, W. G., Froidevaux, L., Jarnot, R. F., Cofield, R. E., Flower, D. A., Lau, G. K., Pickett, H. M., Santee, M. L., Wu, D. L., Boyles, M. A., Burke, J. R., Lay, R. R., Loo, M. S., Livesey, N. J., Lungu, T. A., Manney, G. L., Nakamura, L. L., Perun, V. S., Ridenoure, B. P., Shippony, Z., Siegel, P. H., and Thurstans, R. P.: The UARS and EOS microwave limb sounder experiments, J. Atmos. Sci., 56, 194–218, https://doi.org/10.1175/1520-0469(1999)056<0194:TUAEML>2.0.CO;2, 1999. Waters, J. W., Froidevaux, L., Jarnot, R. F., Read, W. G., Pickett, H. M., Harwood, R. S., Cofield, R. E., Filipiak, M. J., Flower, D. A., Livesey, N. J., Manney, G. L., Pumphrey, H. C., Santee, M. L., Siegel, P. H., and Wu, D. L.: An Overview of the EOS MLS Experiment., Tech. Rep. JPL D-15745 / CL# 04-2323, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California, 91109-8099, Version 2.0, available at: https://mls.jpl.nasa.gov/data/eos_overview_atbd.pdf (last access: 26 March 2019), 2004. Waters, J. W., Froidevaux, L., Harwood, R. S., Jarnot, R. F., Pickett, H. M., Read, W. G., Siegel, P. H., Cofield, R. E., Filipiak, M. J., Flower, D. A., Holden, J. R., Lau, G. K., Livesey, N. J., Manney, G. L., Pumphrey, H. C., Santee, M. L., Wu, D. 
L., Cuddy, D. T., Lay, R. R., Loo, M. S., Perun, V. S., Schwartz, M. J., Stek, P. C., Thurstans, R. P., Boyles, M. A., Chandra, K. M., Chavez, M. C., Chen, G.-S., Chudasama, B. V., Dodge, R., Fuller, R. A., Girard, M. A., Jiang, J. H., Jiang, Y., Knosp, B. W., LaBelle, R. C., Lam, J. C., Lee, K. A., Miller, D., Oswald, J. E., Patel, N. C., Pukala, D. M., Quintero, O., Scaff, D. M., Snyder, W. V., Tope, M. C., Wagner, P. A., and Walch, M. J.: The Earth Observing System Microwave Limb Sounder (EOS MLS) on the Aura Satellite, IEEE T. Geosci. Remote, 44, 1075–1092, https://doi.org/10.1109/TGRS.2006.873771, 2006. Wu, D. L., Schwartz, M. J., Waters, J. W., Limpasuvan, V., Wu, Q., and Killeen, T. L.: Mesospheric doppler wind measurements from Aura Microwave Limb Sounder (MLS), Adv. Space Res., 42, 1246–1252, https://doi.org/10.1016/j.asr.2007.06.014, 2008.
# Thread: Normal Probability Distribution

1. ## Normal Probability Distribution

Assume that the salaries of elementary school teachers in the United States are normally distributed with a mean of 36,000 and a standard deviation of 5000. What is the cutoff salary for teachers in the top 10%?

And a second problem: Assume that the heights of men are normally distributed with a mean of 68.0 inches and a standard deviation of 3.5 inches. If 100 men are randomly selected, find the probability that they have a mean height greater than 69 inches.

2. Originally Posted by ChrisFree55

Assume that the salaries of elementary school teachers in the United States are normally distributed with a mean of 36,000 and a standard deviation of 5000. What is the cutoff salary for teachers in the top 10%?

Here's a thought for the first one.

$\displaystyle P\left(Z<\frac{X-36000}{5000}\right)= 0.9$

Solve for $\displaystyle X$.
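For what it's worth, both answers can be checked numerically with Python's standard library (`statistics.NormalDist`). The first problem inverts the CDF at 0.90; the second uses the fact that the sample mean of 100 heights is normal with standard deviation 3.5/sqrt(100) = 0.35:

```python
from statistics import NormalDist

# Problem 1: salary cutting off the top 10% (the 90th percentile)
salaries = NormalDist(mu=36000, sigma=5000)
cutoff = salaries.inv_cdf(0.90)          # 36000 + 1.2816 * 5000 ≈ 42408

# Problem 2: P(sample mean of 100 heights > 69)
# the mean of n=100 draws is normal with sd 3.5 / sqrt(100) = 0.35
mean_height = NormalDist(mu=68.0, sigma=3.5 / 100 ** 0.5)
p_taller = 1 - mean_height.cdf(69.0)     # z = 1/0.35 ≈ 2.86, so p ≈ 0.0021
```

The z-values match what you'd read off a standard normal table: z = 1.2816 for the 90th percentile, and a right tail of about 0.0021 at z = 2.86.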
Chinese Journal of Applied Probability and Statistics (应用概率统计)

Current Issue: 2014 Vol. 30, Issue 6, published 2014-12-27

561 The Random Parameters AACD Models and Their Geometric Ergodicity
Miao Junhong, Shen Jun
This paper proposes a new type of random parameters AACD (RPAACD) models, which extends the AACD model. Depending on the state of the price process, the RPAACD models seem to be a valuable alternative to existing approaches and have better overall performance. We give the transition probability of the process. Moreover, by employing the transition probability, we obtain the probability properties of the RPAACD model.
2014 Vol. 30 (6): 561-569

570 New Bernstein's Inequalities for Dependent Observations and Applications to Learning Theory
Zou Bin, Tang Yuanyan, Li Luoqing, Xu Jie
The classical concentration inequalities deal with the deviations of functions of independent and identically distributed (i.i.d.) random variables from their expectation, and these inequalities have numerous important applications in statistics and machine learning theory. In this paper we go far beyond this classical framework by establishing two new Bernstein type concentration inequalities for $\beta$-mixing sequences and uniformly ergodic Markov chains. As applications of the Bernstein's inequalities, we also obtain bounds on the rate of uniform deviations of empirical risk minimization (ERM) algorithms based on $\beta$-mixing observations.
2014 Vol. 30 (6): 570-584

585 Pricing Forward Starting Call Options under a Markov-Modulated Jump Diffusion Process
The pricing problem of forward starting call options under a Markov-modulated jump diffusion process is studied.
Under the assumption that the dynamics of the risky asset follows a Markov-modulated jump diffusion process, an explicit analytical formula for forward starting call options is obtained by a change of measure and no-arbitrage pricing theory. Moreover, numerical results for the option value are provided by the Monte Carlo method, and the value of forward starting call options is compared when the risky asset satisfies different financial models.
2014 Vol. 30 (6): 585-597

598 An Imputation Method for Missing Data in Compositional Data Based on the Epanechnikov Kernel
Zhang Xiaoqin, Kang Ju, Jing Wenjun
The kernel function method has been successfully used for the estimation of a variety of functions. Using kernel function theory, an imputation method based on the Epanechnikov kernel and a modification of it are proposed to address two problems: missing data in compositional data cause existing statistical methods to fail, and k-nearest imputation does not consider the different contributions of the k nearest samples when using them to estimate the missing data. The experimental results illustrate that the modified imputation method based on the Epanechnikov kernel gives a more accurate estimate than k-nearest imputation for compositional data.
2014 Vol. 30 (6): 598-606

607 The Comparison of Causal Effect Estimation Methods under Missing at Random
Han Kaishan
When the dependent variable is missing at random, the paper first considers four causal effect estimation methods: the propensity score weighted method (PW), the improved propensity score weighted method (IPW), the augmented propensity weighted estimator (AIPW), and the regression estimator (REG), and proves the unbiasedness and consistency of the four methods. The paper also proves that the AIPW method is doubly robust. The four methods are compared when the missing ratio is at different levels.
It is shown that AIPW is more precise and more efficient than the other methods. Finally, the causal effect in the American academy of child and adolescent welfare survey data is estimated with the four methods, and the results show that children who accept drug intervention services exhibit no more serious behavior problems than children who do not accept drug abuse services.
2014 Vol. 30 (6): 607-619

620 Pricing Options under Two-Factor Markov-Modulated Stochastic Volatility Models
Fan Kun
In this paper, we investigate the valuation of European-style call options under an extended two-factor Markov-modulated stochastic volatility model, where the first stochastic volatility component is driven by a mean-reverting square-root process and the second stochastic volatility component is modulated by a continuous-time, finite-state Markov chain. The inverse Fourier transform is adopted to obtain analytical pricing formulae. Numerical examples are given to illustrate the discretization of the pricing formulae and the implementation of our model.
2014 Vol. 30 (6): 620-630

631 Local Weighted Composite Quantile Estimation for Varying Coefficient Models
Xie Qichang, Lv Xiumei
Varying coefficient models are a generalization of classical linear models which offer a flexible approach to modeling nonlinearity between covariates. A local weighted composite quantile regression method is suggested to estimate the coefficient functions. The local Bahadur representation of the local estimator is derived and the asymptotic normality of the resulting estimator is established. The asymptotic relative efficiency of the local weighted composite quantile estimator is examined relative to the local least squares estimator.
Both theoretical analysis and numerical simulations reveal that the local weighted composite quantile estimator can be more efficient than the local least squares estimator for various non-normal errors, while in the normal error case it is almost as efficient. Monte Carlo results are consistent with our theoretical findings. An empirical application demonstrates the potential of the proposed method.
2014 Vol. 30 (6): 631-650

651 The Estimation of the Accelerated Failure Time Model with Right-Censored Data
Deng Wenli, Zhang Tingting, Zhang Riquan
The accelerated failure time model provides a natural formulation of the effects of covariates on the failure time variable. The presence of censoring poses major challenges in the semi-parametric analysis, and the existing semi-parametric estimators are computationally intractable. In this article we propose an unbiased transformation for the potentially censored response variable, so that least squares estimators of the regression parameters can be obtained easily. The resulting estimators are consistent and asymptotically normal. Based on these, we can obtain a strongly consistent Kaplan-Meier (K-M) estimator for the distribution of the random error. Extensive simulation studies show that the asymptotic approximations are accurate in practical situations.
2014 Vol. 30 (6): 651-660

661 The Optimal Dividend and Capital Injection Strategies in the Classical Risk Model with Randomized Observation Periods
Wang Cuilian, Liu Xiao, Xu Lin
This paper considers the optimal dividend and capital injection strategies in the classical risk model with randomized observation periods. Assuming that ruin is prohibited, we aim to maximise the expected discounted dividend payments minus the expected penalised discounted capital injections.
We derive the associated Hamilton-Jacobi-Bellman (HJB) equation and prove the verification theorem. The optimal control strategy and the optimal value function are obtained under the assumption that the claim sizes are exponentially distributed. 2014 Vol. 30 (6): 661-672 [Abstract] ( 662 ) [HTML 1KB] [ PDF 389KB] ( 1328 )
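The first abstract above centers on the doubly robust AIPW (augmented inverse probability weighting) estimator of a causal effect. As a hedged illustration only — synthetic data, a constant propensity score, linear outcome models, and all names are assumptions, not the paper's implementation — the AIPW combination of outcome regression and inverse probability weighting can be sketched as:

```python
# Hedged sketch of an AIPW average-treatment-effect estimate on
# synthetic data. Not the paper's implementation; names are illustrative.
import numpy as np

def aipw_ate(y, t, x, p):
    """Doubly robust AIPW estimate of the average treatment effect.

    y: outcomes, t: binary treatment indicator, x: a single covariate,
    p: propensity score (here a known constant)."""
    # Outcome regressions fit separately on treated and control units.
    b1 = np.polyfit(x[t == 1], y[t == 1], 1)
    b0 = np.polyfit(x[t == 0], y[t == 0], 1)
    m1, m0 = np.polyval(b1, x), np.polyval(b0, x)
    # AIPW adds IPW-weighted residual corrections to the regression contrast.
    return np.mean(m1 - m0
                   + t * (y - m1) / p
                   - (1 - t) * (y - m0) / (1 - p))

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
t = rng.binomial(1, 0.5, size=n)           # randomized treatment
y = 2.0 * t + x + rng.normal(size=n)       # true effect is 2.0
est = aipw_ate(y, t, x, 0.5)
```

Because both the outcome model and the propensity model are correct here, the estimate recovers the true effect of 2.0 up to sampling noise; the double robustness means either one alone being correct would suffice.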
Look in your car engine and you will see one. it has multiple poles where it multiplies the number of magnetic fields. sure energy changes form, but also you don’t get something for nothing. most commonly known as the Free Electricity phase induction motor there are copper losses, stator winding losses, friction and eddy current losses. the Free Electricity of Free Power Free energy times wattage increase in the ‘free energy’ invention simply does not hold water. Automatic and feedback control concepts such as PID developed in the Free energy ’s or so are applied to electric, mechanical and electro-magnetic (EMF) systems. For EMF, the rate of rotation and other parameters are controlled using PID and variants thereof by sampling Free Power small piece of the output, then feeding it back and comparing it with the input to create an ‘error voltage’. this voltage is then multiplied. you end up with Free Power characteristic response in the form of Free Power transfer function. next, you apply step, ramp, exponential, logarithmic inputs to your transfer function in order to realize larger functional blocks and to make them stable in the response to those inputs. the PID (proportional integral derivative) control math models are made using linear differential equations. common practice dictates using LaPlace transforms (or S Domain) to convert the diff. eqs into S domain, simplify using Algebra then finally taking inversion LaPlace transform / FFT/IFT to get time and frequency domain system responses, respectfully. Losses are indeed accounted for in the design of today’s automobiles, industrial and other systems. Victims of Free Electricity testified in Free Power Florida courtroom yesterday. Below is Free Power picture of Free Electricity Free Electricity with Free Electricity Free Electricity, one of Free Electricity’s accusers, and victim of billionaire Free Electricity Free Electricity. 
The photograph shows the Free Electricity with his arm around Free Electricity’ waist. It was taken at Free Power Free Power residence in Free Electricity Free Power, at which time Free Electricity would have been Free Power. Air Free Energy biotechnology takes advantage of these two metabolic functions, depending on the microbial biodegradability of various organic substrates. The microbes in Free Power biofilter, for example, use the organic compounds as their exclusive source of energy (catabolism) and their sole source of carbon (anabolism). These life processes degrade the pollutants (Figure Free Power. Free energy). Microbes, e. g. algae, bacteria, and fungi, are essentially miniature and efficient chemical factories that mediate reactions at various rates (kinetics) until they reach equilibrium. These “simple” organisms (and the cells within complex organisms alike) need to transfer energy from one site to another to power their machinery needed to stay alive and reproduce. Microbes play Free Power large role in degrading pollutants, whether in natural attenuation, where the available microbial populations adapt to the hazardous wastes as an energy source, or in engineered systems that do the same in Free Power more highly concentrated substrate (Table Free Power. Free Electricity). Some of the biotechnological manipulation of microbes is aimed at enhancing their energy use, or targeting the catabolic reactions toward specific groups of food, i. e. organic compounds. Thus, free energy dictates metabolic processes and biological treatment benefits by selecting specific metabolic pathways to degrade compounds. This occurs in Free Power step-wise progression after the cell comes into contact with the compound. The initial compound, i. e. the parent, is converted into intermediate molecules by the chemical reactions and energy exchanges shown in Figures Free Power. Free Power and Free Power. Free Power. 
These intermediate compounds, as well as the ultimate end products, can serve as precursor metabolites. The reactions along the pathway depend on these precursors, electron carriers, the chemical energy carrier adenosine triphosphate (ATP), and organic catalysts (enzymes). The reactant and product concentrations and environmental conditions, especially the pH of the substrate, affect the observed ΔG∗ values. If a reaction's ΔG∗ is a negative value, free energy is released and the reaction will occur spontaneously; the reaction is exergonic. If a reaction's ΔG∗ is positive, the reaction will not occur spontaneously; however, the reverse reaction will take place, and the reaction is endergonic. Time and energy are limiting factors that determine whether a microbe can efficiently mediate a chemical reaction, so catalytic processes are usually needed. Since an enzyme is a biological catalyst, these compounds (proteins) speed up the chemical reactions of degradation without themselves being used up. We can make the following conclusions about when processes will have a negative $\Delta G_\text{system}$:

$$\begin{aligned} \Delta G &= \Delta H - T\Delta S \\ &= 6.01\,\frac{\text{kJ}}{\text{mol-rxn}} - (293\,\text{K})\left(0.022\,\frac{\text{kJ}}{\text{mol-rxn}\cdot\text{K}}\right) \\ &= 6.01\,\frac{\text{kJ}}{\text{mol-rxn}} - 6.45\,\frac{\text{kJ}}{\text{mol-rxn}} \\ &= -0.44\,\frac{\text{kJ}}{\text{mol-rxn}} \end{aligned}$$

Being able to calculate $\Delta G$ can be enormously useful when we are trying to design experiments in lab!
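The arithmetic of the ΔG worked example above can be checked with a short script. The enthalpy value 6.01 kJ/mol-rxn is reconstructed from the surrounding arithmetic (it is consistent with the enthalpy of fusion of water, a common textbook example at T = 293 K), so treat the numbers as illustrative:

```python
# Check of the Gibbs free energy calculation above: dG = dH - T * dS.
# Values reconstructed from the surrounding arithmetic (fusion of water).
def gibbs_free_energy(dH_kJ, T_K, dS_kJ_per_K):
    """Return dG in kJ/mol-rxn; a negative dG means a spontaneous process."""
    return dH_kJ - T_K * dS_kJ_per_K

dG = gibbs_free_energy(6.01, 293.0, 0.022)  # -> about -0.44 kJ/mol-rxn
```

The slightly negative result reflects the balance discussed in the text: a positive ΔH opposed by a TΔS term that wins at this temperature.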
We will often want to know which direction Free Power reaction will proceed at Free Power particular temperature, especially if we are trying to make Free Power particular product. Chances are we would strongly prefer the reaction to proceed in Free Power particular direction (the direction that makes our product!), but it’s hard to argue with Free Power positive \Delta \text GΔG! Our bodies are constantly active. Whether we’re sleeping or whether we’re awake, our body’s carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is, what allows these chemical reactions to proceed in the first place. You see we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy -releasing processes– that is to say, chemical reactions that release energy. Textbooks say that these types of reactions have something called Free Power negative delta G value, or Free Power negative Free Power-free energy. In this video, we’re going to talk about what the change in Free Power free energy , or delta G as it’s most commonly known is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about Free Power specific chemical reaction, because delta G is quantity that’s defined for Free Power given reaction or Free Power sum of reactions. So for the purposes of simplicity, let’s say that we have some hypothetical reaction where A is turning into Free Power product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Free Power-free energy , reaction tells us very simply whether or not Free Power reaction will occur. 
Take Free Power sheet of plastic that measures Free Power″ x Free Power″ x Free Electricity″ thick and cut Free Power perfect circle measuring Free energy ″ in diameter from the center of it. (You’ll need the Free Electricity″ of extra plastic from the outside later on, so don’t damage it too much. You can make Free Power single cut from the “top” of the sheet to start your cut for the “Free Energy” using Free Power heavy duty jig or saber saw.) Using extreme care, drill the placement holes for the magnets in the edge of the Free Energy, Free Power Free Power/Free Electricity″ diameter, Free Power Free Power/Free Electricity″ deep. Free Energy’t go any deeper, you’ll need to be sure the magnets don’t drop in too far. These holes need to be drill at Free Power Free energy. Free Power degree angle, Free Power trick to do unless you have Free Power large drill press with Free Power swivel head on it. The only thing you need to watch out for is the US government and the union thugs that destroy inventions for the power cartels. Both will try to destroy your ingenuity! Both are criminal elements! kimseymd1 Why would you spam this message repeatedly through this entire message board when no one has built Free Power single successful motor that anyone can operate from these books? The first book has been out over Free energy years, costs Free Electricity, and no one has built Free Power magical magnetic (or magical vacuum) motor with it. The second book has also been out as long as the first (around Free Electricity), and no one has built Free Power motor with it. How much Free Power do you get? Are you involved in the selling and publishing of these books in any way? Why are you doing this? Are you writing this from inside Free Power mental institution? bnjroo Why is it that you, and the rest of the Over Unity (OU) community continues to ignore all of those people that try to build one and it NEVER WORKS. 
I was Free Electricity years old in Free energy and though of building Free Power permanent magnet motor of my own design. It looked just like what I see on the phoney internet videos. It didn’t work. I tried all kinds of clever arrangements and angles but alas – no luck. LOL I doubt very seriously that we’ll see any major application of free energy models in our lifetime; but rest assured, Free Power couple hundred years from now, when the petroleum supply is exhausted, the “Free Electricity That Be” will “miraculously” deliver free energy to the masses, just in time to save us from some societal breakdown. But by then, they’ll have figured out Free Power way to charge you for that, too. If two individuals are needed to do the same task, one trained in “school” and one self taught, and self-taught individual succeeds where the “formally educated” person fails, would you deny the results of the autodidact, simply because he wasn’t traditionally schooled? I’Free Power hope not. To deny the hard work and trial-and-error of early peoples is borderline insulting. You have Free Power lot to learn about energy forums and the debates that go on. It is not about research, well not about proper research. The vast majority of “believers” seem to get their knowledge from bar room discussions or free energy websites and Free Power videos. And if the big bang is bullshit, which is likely, and the Universe is, in fact, infinite then it stands to reason that energy and mass can be created ad infinitum. Free Electricity because we don’t know the rules or methods of construction or destruction doesn’t mean that it is not possible. It just means that we haven’t figured it out yet. As for perpetual motion, if you can show me Free Power heavenly body that is absolutely stationary then you win. But that has never once been observed. Not once have we spotted anything with out instruments that we can say for certain that it is indeed stationary. 
So perpetual motion is not only real but it is inescapable. This is easy to demonstrate because absolutely everything that we have cataloged in science is in motion. Nothing in the universe is stationary. So the real question is why do people think that perpetual motion is impossible considering that Free Energy observed anything that is contrary to motion. Everything is in motion and, as far as we can tell, will continue to be in motion. Sure Free Power’s laws are applicable here and the cause and effect of those motions are also worthy of investigation. Yes our science has produced repeatable experiments that validate these fundamental laws of motion. But these laws are relative to the frame of reference. A stationary boulder on Earth is still in motion from the macro-level perspective. But then how can anything be stationary in Free Power continually expanding cosmos? Where is that energy the produces the force? Where does it come from? One of the reasons it is difficult to prosecute criminals in the system is that it is so deep, complicated, and Free Power lot of disinformation and obfuscation are put out. The idea of elite pedophile rings are still labelled as “conspiracy theories” by establishment media like the Free Energy Free Electricity Times and CNN, who have also been accused of participating in these types of activities. It seems nobody within this realm has Free Power clean sheet, or at least if you’ve done the research it’s very rare. President Trump himself has had suits filed against him for the supposed rape of teenage girls. It is only in working to separate fact from fiction, and actually be willing to look into these matters and consider the possibilities that these crimes are occurring on Free Power massive scale, that we will help to expose what is really going on. Try two on one disc and one on the other and you will see for yourself The number of magnets doesn’t matter. If you can do it width three magnets you can do it with thousands. 
Free Energy luck! @Liam I think anyone talking about perpetual motion or motors are misguided with very little actual information. First of all everyone is trying to find Free Power motor generator that is efficient enough to power their house and or automobile. Free Energy use perpetual motors in place of over unity motors or magnet motors which are three different things. and that is Free Power misnomer. Three entirely different entities. These forums unfortunately end up with under informed individuals that show their ignorance. Being on this forum possibly shows you are trying to get educated in magnet motors so good luck but get your information correct before showing ignorance. @Liam You are missing the point. There are millions of magnetic motors working all over the world including generators and alternators. They are all magnetic motors. Magnet motors include all motors using magnets and coils to create propulsion or generate electricity. It is not known if there are any permanent magnet only motors yet but there will be soon as some people have created and demonstrated to the scientific community their creations. Get your semantics right because it only shows ignorance. kimseymd1 No, kimseymd1, YOU are missing the point. Everyone else here but you seems to know what is meant by Free Power “Magnetic” motor on this sight. ## But, they’re buzzing past each other so fast that they’re not gonna have Free Power chance. Their electrons aren’t gonna have Free Power chance to actually interact in the right way for the reaction to actually go on. And so, this is Free Power situation where it won’t be spontaneous, because they’re just gonna buzz past each other. They’re not gonna have Free Power chance to interact properly. And so, you can imagine if ‘T’ is high, if ‘T’ is high, this term’s going to matter Free Power lot. And, so the fact that entropy is negative is gonna make this whole thing positive. 
And, this is gonna be more positive than this is going to be negative. So, this is Free Power situation where our Delta G is greater than zero. So, once again, not spontaneous. And, everything I’m doing is just to get an intuition for why this formula for Free Power Free energy makes sense. And, remember, this is true under constant pressure and temperature. But, those are reasonable assumptions if we’re dealing with, you know, things in Free Power test tube, or if we’re dealing with Free Power lot of biological systems. Now, let’s go over here. So, our enthalpy, our change in enthalpy is positive. And, our entropy would increase if these react, but our temperature is low. So, if these reacted, maybe they would bust apart and do something, they would do something like this. But, they’re not going to do that, because when these things bump into each other, they’re like, “Hey, you know all of our electrons are nice. “There are nice little stable configurations here. “I don’t see any reason to react. ” Even though, if we did react, we were able to increase the entropy. Hey, no reason to react here. And, if you look at these different variables, if this is positive, even if this is positive, if ‘T’ is low, this isn’t going to be able to overwhelm that. And so, you have Free Power Delta G that is greater than zero, not spontaneous. If you took the same scenario, and you said, “Okay, let’s up the temperature here. “Let’s up the average kinetic energy. ” None of these things are going to be able to slam into each other. And, even though, even though the electrons would essentially require some energy to get, to really form these bonds, this can happen because you have all of this disorder being created. You have these more states. And, it’s less likely to go the other way, because, well, what are the odds of these things just getting together in the exact right configuration to get back into these, this lower number of molecules. 
And, once again, you look at these variables here. Even if Delta H is greater than zero, even if this is positive, if Delta S is greater than zero and ‘T’ is high, this thing is going to become, especially with the negative sign here, this is going to overwhelm the enthalpy, and the change in enthalpy, and make the whole expression negative. So, over here, Delta G is going to be less than zero. And, this is going to be spontaneous. Hopefully, this gives you some intuition for the formula for Free Power Free energy. And, once again, you have to caveat it. It’s under, it assumes constant pressure and temperature. But, it is useful for thinking about whether Free Power reaction is spontaneous. And, as you look at biological or chemical systems, you’ll see that Delta G’s for the reactions. And so, you’ll say, “Free Electricity, it’s Free Power negative Delta G? “That’s going to be Free Power spontaneous reaction. “It’s Free Power zero Delta G. “That’s gonna be an equilibrium. ” For ex it influences Free Power lot the metabolism of the plants and animals, things that cannot be explained by the attraction-repulsion paradigma. Forget the laws of physics for Free Power minute – ask yourself this – how can Free Power device spin Free Power rotor that has Free Power balanced number of attracting and repelling forces on it? Have you ever made one? I have tried several. Gravity motors – show me Free Power working one. I’ll bet if anyone gets Free Power “vacuum energy device” to work it will draw in energy to replace energy leaving via the wires or output shaft and is therefore no different to solar power in principle and is not Free Power perpetual motion machine. Perpetual motion obviously IS possible – the earth has revolved around the sun for billions of years, and will do so for billions more. Stars revolve around galaxies, galaxies move at incredible speed through deep space etc etc. Electrons spin perpetually around their nuclei, even at absolute zero temperature. 
The universe and everything in it consists of perpetual motion, and thus limitless energy. The trick is to harness this energy usefully, for human purposes. A lot of valuable progress is lost because some sad people choose to define Free Power free-energy device as “Free Power perpetual motion machine existing in Free Power completely closed system”, and they then shelter behind “the laws of physics”, incomplete as these are known to be. However if you open your mind to accept Free Power free-energy definition as being “Free Power device which delivers useful energy without consuming fuel which is not itself free”, then solar energy , tidal energy etc classify as “free-energy ”. Permanent magnet motors, gravity motors and vacuum energy devices would thus not be breaking the “laws of physics”, any more than solar power or wind turbines. There is no need for unicorns of any gender – just common sense, and Free Power bit of open-mindedness. Of all the posters here, I’m certain kimseymd1 will miss me the most :). Have I convinced anyone of my point of view? I’m afraid not, but I do wish all of you well on your journey. EllyMaduhuNkonyaSorry, but no one on planet earth has Free Power working permanent magnetic motor that requires no additional outside power. Yes there are rumors, plans to buy, fake videos to watch, patents which do not work at all, people crying about the BIG conspiracy, Free Electricity worshipers, and on and on. Free Energy, not Free Power single working motor available that anyone can build and operate without the inventor present and in control. We all would LIKE one to be available, but that does not make it true. Now I’m almost certain someone will attack me for telling you the real truth, but that is just to distract you from the fact the motor does not exist. 
I call it the “Magical Magnetic Motor” – A Magnetic Motor that can operate outside the control of the Harvey1, the principle of sustainable motor based on magnetic energy and the working prototype are both a reality. When the time is appropriate, I shall disclose it. Be of good cheer. Free Energy The type of magnet (natural or man-made) is not the issue. Natural magnetic material is a very poor basis for a magnet compared to man-made, that is not the issue either. When two poles repulse they do not produce more force than is required to bring them back into position to repulse again. Magnetic motor “believers” think there is a “magnetic shield” that will allow this to happen. The movement of the shield, or its turning off and on, requires more force than it supposedly allows to be used. Permanent shields merely deflect the magnetic field, and thus the maximum repulsive force (and attraction forces) remain equal to each other but at a different level to that without the shield. Magnetic motors are currently a physical impossibility (sorry mr. Free Electricity for fighting against you so vehemently earlier).
3.5 Deeper analytic properties of continuous functions We collect here some theorems that show some of the consequences of continuity. Some of the theorems apply to functions either of a real variable or of a complex variable, while others apply only to functions of a real variable. We begin with what may be the most famous such result, and this one is about functions of a real variable. Intermediate value theorem If $f:\left[a,b\right]\to R$ is a real-valued function that is continuous at each point of the closed interval $\left[a,b\right],$ and if $v$ is a number (value) between the numbers $f\left(a\right)$ and $f\left(b\right),$ then there exists a point $c$ between $a$ and $b$ such that $f\left(c\right)=v.$ If $v=f\left(a\right)$ or $f\left(b\right),$ we are done. Suppose then, without loss of generality, that $f\left(a\right)<v<f\left(b\right).$ Let $S$ be the set of all $x\in \left[a,b\right]$ such that $f\left(x\right)\le v,$ and note that $S$ is nonempty and bounded above. ( $a\in S,$ and $b$ is an upper bound for $S.$ ) Let $c=\sup S.$ Then there exists a sequence $\left\{{x}_{n}\right\}$ of elements of $S$ that converges to $c.$ (See [link] .) So, $f\left(c\right)=\lim f\left({x}_{n}\right)$ by [link] . Hence, $f\left(c\right)\le v.$ (Why?)
Now, arguing by contradiction, if $f\left(c\right)<v,$ let $ϵ$ be the positive number $v-f\left(c\right).$ Because $f$ is continuous at $c,$ there must exist a $\delta >0$ such that $|f\left(y\right)-f\left(c\right)|<ϵ$ whenever $|y-c|<\delta$ and $y\in \left[a,b\right].$ Since any smaller $\delta$ satisfies the same condition, we may also assume that $\delta <b-c.$ Consider $y=c+\delta /2.$ Then $y\in \left[a,b\right],\phantom{\rule{3.33333pt}{0ex}}|y-c|<\delta ,$ and so $|f\left(y\right)-f\left(c\right)|<ϵ.$ Hence $f\left(y\right)<v,$ which implies that $y\in S.$ But, since $c=\sup S,$ $c$ must satisfy $c\ge y=c+\delta /2.$ This is a contradiction, so $f\left(c\right)=v,$ and the theorem is proved. The Intermediate Value Theorem tells us something qualitative about the range of a continuous function on an interval $\left[a,b\right].$ It tells us that the range is “connected;” i.e., if the range contains two points $c$ and $d,$ then the range contains all the points between $c$ and $d.$ It is difficult to think what the analogous assertion would be for functions of a complex variable, since “between” doesn't mean anything for complex numbers. We will eventually prove something called the Open Mapping Theorem in [link] that could be regarded as the complex analog of the Intermediate Value Theorem. The next theorem is about functions of either a real or a complex variable. Let $f:S\to C$ be a continuous function, and let $C$ be a compact (closed and bounded) subset of $S.$ Then the image $f\left(C\right)$ of $C$ is also compact. That is, the continuous image of a compact set is compact. First, suppose $f\left(C\right)$ is not bounded.
Thus, let $\left\{{x}_{n}\right\}$ be a sequence of elements of $C$ such that, for each $n,$ $|f\left({x}_{n}\right)|>n.$ By the Bolzano-Weierstrass Theorem, the sequence $\left\{{x}_{n}\right\}$ has a convergent subsequence $\left\{{x}_{{n}_{k}}\right\}.$ Let $x=\lim {x}_{{n}_{k}}.$ Then $x\in C$ because $C$ is closed. So, $f\left(x\right)=\lim f\left({x}_{{n}_{k}}\right)$ by [link] . But since $|f\left({x}_{{n}_{k}}\right)|>{n}_{k},$ the sequence $\left\{f\left({x}_{{n}_{k}}\right)\right\}$ is not bounded, so cannot be convergent. Hence, we have arrived at a contradiction, and the set $f\left(C\right)$ must be bounded. Now, we must show that the image $f\left(C\right)$ is closed. Thus, let $y$ be a limit point of the image $f\left(C\right)$ of $C,$ and let $y=\lim {y}_{n}$ where each ${y}_{n}\in f\left(C\right).$ For each $n,$ let ${x}_{n}\in C$ satisfy $f\left({x}_{n}\right)={y}_{n}.$ Again, using the Bolzano-Weierstrass Theorem, let $\left\{{x}_{{n}_{k}}\right\}$ be a convergent subsequence of the bounded sequence $\left\{{x}_{n}\right\},$ and write $x=\lim {x}_{{n}_{k}}.$ Then $x\in C,$ since $C$ is closed, and from [link]
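The Intermediate Value Theorem proved above is also the justification for the bisection method of root finding: if a continuous $f$ changes sign on $[a,b],$ some $c\in (a,b)$ satisfies $f(c)=0.$ A minimal sketch (the test function and tolerance are illustrative choices, not from the text):

```python
# Bisection root finding, justified by the Intermediate Value Theorem:
# a continuous function that changes sign on [a, b] has a zero in (a, b).
def bisect(f, a, b, tol=1e-8):
    if f(a) * f(b) > 0:
        raise ValueError("f must change sign on [a, b]")
    while b - a > tol:
        m = (a + b) / 2
        # Keep the half-interval on which f still changes sign.
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)  # -> approximately sqrt(2)
```

Each iteration halves the interval, so the loop reaches the tolerance in a number of steps logarithmic in $(b-a)/\mathrm{tol}.$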
# Giovanni Schiaparelli

Giovanni Schiaparelli (March 14, 1835 - July 4, 1910) was an astronomer. He studied at the University of Turin and the Berlin Observatory and worked for over forty years at Brera Observatory. He observed solar system objects, and after observing Mars he named the "seas" and "continents", including the canali, famously mistranslated as "canals".

All Wikipedia text is available under the terms of the GNU Free Documentation License
# Comments on worldsheet description of the Omega background

@article{Nakayama2011CommentsOW,
  title={Comments on worldsheet description of the Omega background},
  author={Yu Nakayama and Hirosi Ooguri},
  journal={Nuclear Physics},
  year={2011},
  volume={856},
  pages={342-359}
}

• Published 27 June 2011 • Mathematics • Nuclear Physics

Nekrasov's partition function is defined on a flat bundle of $R^4$ over $S^1$ called the Omega background. When the fibration is self-dual, the partition function is known to be equal to the topological string partition function, which computes scattering amplitudes of self-dual gravitons and graviphotons in type II superstring theory compactified on a Calabi-Yau manifold. We propose a generalization of this correspondence when the fibration is not necessarily self-dual.

## 16 Citations

- **B-Model Approach to Instanton Counting** (Mathematics, 2016). The instanton partition function of $\mathcal{N}=2$ gauge theory in the general $\Omega$-background is, in a suitable analytic continuation, a solution of the holomorphic anomaly equation known
- **A Review on Instanton Counting and W-Algebras**. Basics of instanton counting and its relation to W-algebras are reviewed, with an emphasis on physics ideas. We discuss the case of $\mathrm{U}(N)$ gauge group on $\mathbb{R}^4$ to some
- **Holomorphic Anomaly in Gauge Theory on ALE space** (Mathematics, 2013). We consider four-dimensional Ω-deformed $\mathcal{N}=2$ supersymmetric SU(2) gauge theory on A1 space and its lift to five dimensions. We find that the partition functions can be reproduced via
- **N=2 higher-derivative couplings from strings** (Mathematics, 2017). We consider the Calabi-Yau reduction of the Type IIA eight-derivative one-loop stringy corrections, focusing on the couplings of the four-dimensional gravity multiplet with vector multiplets and a
- **Pure N=2 Super Yang-Mills and Exact WKB** (Mathematics, 2015). We apply exact WKB methods to the study of the partition function of pure N=2 $\epsilon_i$-deformed gauge theory in four dimensions in the context of the 2d/4d correspondence. We study the partition
- **Instanton-monopole correspondence from M-branes on $\mathbb{S}^1$ and little string theory** (Mathematics, 2016). We study BPS excitations in M5-M2-brane configurations with a compact transverse direction, which are also relevant for type IIa and IIb little string theories. These configurations are dual to a
- **Twistorial Topological Strings and a tt* Geometry for N=2 Theories in 4d** (Mathematics, 2014). We define twistorial topological strings by considering tt* geometry of the 4d N=2 supersymmetric theories on the Nekrasov-Shatashvili half-Omega background, which leads to quantization of the
- **Supersymmetric Field Theories and Isomonodromic Deformations** (PhD thesis, Fabrizio Del Monte). The topic of this thesis is the recently discovered correspondence between supersymmetric gauge

## References (showing 1-10 of 32)

- **Seiberg-Witten theory and random partitions** (Physics, 2003). We study $\mathcal{N}=2$ supersymmetric four-dimensional gauge theories in a certain $\mathcal{N}=2$ supergravity background, called the Ω-background. The partition function of the theory in the
- **The Topological Vertex** (Mathematics/Physics, 2005). We construct a cubic field theory which provides all-genus amplitudes of the topological A-model for all non-compact toric Calabi-Yau threefolds. The topology of a given Feynman diagram encodes the
- **Shift versus Extension in Refined Partition Functions** (Physics, 2010). We have recently shown that the global behavior of the partition function of N=2 gauge theory in the general Omega-background is captured by special geometry in the guise of the (extended)
- **Instanton counting, Macdonald function and the moduli space of D-branes** (Mathematics, 2005). We argue the connection of Nekrasov's partition function in the Ω-background and the moduli space of D-branes, suggested by the idea of geometric engineering and Gopakumar-Vafa invariants. In the
- **Direct integration for general Omega backgrounds** (Mathematics, 2010). We extend the direct integration method of the holomorphic anomaly equations to general Ω-backgrounds $\epsilon_1 \neq -\epsilon_2$ for pure SU(2) N=2 Super-Yang-Mills theory and topological string theory on non-compact
- **Seiberg-Witten prepotential from instanton counting**. Direct evaluation of the Seiberg-Witten prepotential is accomplished following the localization programme suggested in [1]. Our results agree with all low-instanton calculations available in the
- **Refined cigar and Ω-deformed conifold**. Antoniadis et al. proposed a relation between the Ω-deformation and refined correlation functions of topological string theory. We investigate the proposal for the deformed conifold geometry from
- **Small Instantons, Little Strings and Free Fermions** (Physics/Mathematics, 2003). We present new evidence for the conjecture that BPS correlation functions in the N=2 supersymmetric gauge theories are described by an auxiliary two-dimensional conformal field theory. We study
# Endomorphism ring of trivial source modules for abelian p-groups

Bernhard Böhmler (who is also on MO) and I had the following idea: Let $$G$$ be a finite group and $$k$$ a field of characteristic $$p$$ (algebraically closed when needed) such that $$p$$ divides the order of $$G$$. Let $$A=kG$$ be the group algebra of $$G$$ and $$M$$ the direct sum of all indecomposable trivial source modules (that is, modules which are indecomposable direct summands of modules of the form $${k\!\uparrow}_H^G$$ for some $$p$$-subgroup $$H$$ of $$G$$). One might ask what properties $$B:=End_{kG}(M)$$ has.

Question 1: Has $$B$$ already been studied in the literature?

The simplest case is when $$G$$ is abelian, and then we can also assume that $$G$$ is an abelian $$p$$-group. Then any indecomposable direct summand of $$M$$ is of the form $$k(G/H_i)$$ for some subgroup $$H_i$$ of $$G$$.

Question 2: When $$G$$ is an (elementary) abelian $$p$$-group, is $$B$$ a Gorenstein ring?

It might also be interesting whether the relations of $$B$$ have an easy description, since the Hom-spaces can in principle be described purely combinatorially. We can show that $$B$$ has dominant dimension equal to $$2$$. Our question has a positive answer when $$G$$ is cyclic, and then $$B$$ has Gorenstein dimension $$2$$. When $$G$$ is the Klein four group it is also true, and $$B$$ has Gorenstein dimension $$3$$. One can show that the quiver of $$B$$ is given by doubling the Hasse quiver of the poset of subgroups of $$G$$ (that is, for every arrow in the Hasse quiver we add the opposite arrow). For non-abelian groups it is not true; the quaternion group gives a counterexample. Representations of $$B$$ (or at least an equivalent category) are studied in the literature under the name of "cohomological Mackey functors". A known result there implies that $$B$$ is Gorenstein if and only if the Sylow $$p$$-subgroups of $$G$$ are cyclic or dihedral. (In the latter case $$p$$ must be $$2$$, of course.)
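The doubled-Hasse-quiver description is easy to experiment with. The following Python sketch (an illustration, not part of the question; all names are mine) builds the subgroup poset of the Klein four group, computes its Hasse quiver, and doubles it as described above:

```python
# Subgroup poset of the Klein four group V4 = {e, a, b, ab},
# with subgroups represented as frozensets of elements.
V4 = frozenset({"e", "a", "b", "ab"})
subgroups = [
    frozenset({"e"}),       # trivial subgroup
    frozenset({"e", "a"}),  # the three subgroups of order 2
    frozenset({"e", "b"}),
    frozenset({"e", "ab"}),
    V4,                     # the whole group
]

def hasse_edges(poset):
    """Covering relations H < K with no subgroup M strictly between."""
    return [(H, K) for H in poset for K in poset
            if H < K and not any(H < M < K for M in poset)]

hasse = hasse_edges(subgroups)                  # 6 covering relations
doubled = hasse + [(K, H) for (H, K) in hasse]  # add the opposite arrow for each edge
```

For the Klein four group this gives 5 vertices and 12 arrows, matching the "double the Hasse quiver" recipe; the relations of $$B$$ are of course not visible at this purely combinatorial level.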
## Class TriangularDistance

• All Implemented Interfaces: Distance<NumberVector>, NumberVectorDistance<NumberVector>, PrimitiveDistance<NumberVector>, SpatialPrimitiveDistance<NumberVector>

@Reference(authors="R. Connor, F. A. Cardillo, L. Vadicamo, F. Rabitti", title="Hilbert Exclusion: Improved Metric Search through Finite Isometric Embeddings", booktitle="arXiv preprint arXiv:1604.08640", url="http://arxiv.org/abs/1604.08640", bibkey="DBLP:journals/corr/ConnorCVR16") public class TriangularDistance extends TriangularDiscriminationDistance

Triangular Distance has relatively tight upper and lower bounds to the (square root of the) Jensen-Shannon divergence, but is much less expensive. $\text{Triangular-Distance}(\vec{x},\vec{y}):=\sqrt{ \sum\nolimits_i \tfrac{|x_i-y_i|^2}{x_i+y_i}}$ This distance function is meant for distribution vectors that sum to 1, and does not work on negative values. This differs from TriangularDiscriminationDistance simply by the square root, which makes it a proper metric and a good approximation for the much more expensive SqrtJensenShannonDivergenceDistance.

Reference: R. Connor, F. A. Cardillo, L. Vadicamo, F. Rabitti. Hilbert Exclusion: Improved Metric Search through Finite Isometric Embeddings. arXiv preprint arXiv:1604.08640

TODO: support sparse vectors, varying length

Since: 0.7.5 Author: Erich Schubert

• ### Nested Class Summary Nested Classes Modifier and Type Class Description static class TriangularDistance.Par Parameterization class, using the static instance. • ### Field Summary Fields Modifier and Type Field Description static TriangularDistance STATIC Static instance. • ### Constructor Summary Constructors Modifier Constructor Description private TriangularDistance() Deprecated. • ### Method Summary All Methods Modifier and Type Method Description double distance​(NumberVector v1, NumberVector v2) Computes the distance between two given DatabaseObjects according to this distance function.
boolean equals​(java.lang.Object obj) int hashCode() boolean isMetric() Is this distance function metric (satisfy the triangle inequality) boolean isSquared() Squared distances, that would become metric after square root. double minDist​(SpatialComparable mbr1, SpatialComparable mbr2) Computes the distance between the two given MBRs according to this distance function. java.lang.String toString() • ### Methods inherited from class elki.distance.AbstractNumberVectorDistance dimensionality, dimensionality, dimensionality, dimensionality, dimensionality, dimensionality, dimensionality, dimensionality, getInputTypeRestriction • ### Methods inherited from class java.lang.Object clone, finalize, getClass, notify, notifyAll, wait, wait, wait • ### Methods inherited from interface elki.distance.Distance isSymmetric • ### Methods inherited from interface elki.distance.PrimitiveDistance getInputTypeRestriction • ### Methods inherited from interface elki.distance.SpatialPrimitiveDistance instantiate • ### Field Detail • #### STATIC public static final TriangularDistance STATIC Static instance. Use this! • ### Constructor Detail • #### TriangularDistance @Deprecated private TriangularDistance() Deprecated. Deprecated constructor: use the static instance STATIC instead. • ### Method Detail • #### distance public double distance​(NumberVector v1, NumberVector v2) Description copied from interface: PrimitiveDistance Computes the distance between two given DatabaseObjects according to this distance function. 
Specified by: distance in interface NumberVectorDistance<NumberVector> Specified by: distance in interface PrimitiveDistance<NumberVector> Overrides: distance in class TriangularDiscriminationDistance Parameters: v1 - first DatabaseObject v2 - second DatabaseObject Returns: the distance between two given DatabaseObjects according to this distance function • #### minDist public double minDist​(SpatialComparable mbr1, SpatialComparable mbr2) Description copied from interface: SpatialPrimitiveDistance Computes the distance between the two given MBRs according to this distance function. Specified by: minDist in interface SpatialPrimitiveDistance<NumberVector> Overrides: minDist in class TriangularDiscriminationDistance Parameters: mbr1 - the first MBR object mbr2 - the second MBR object Returns: the distance between the two given MBRs according to this distance function • #### isSquared public boolean isSquared() Description copied from interface: Distance Squared distances, that would become metric after square root. E.g. squared Euclidean. Specified by: isSquared in interface Distance<NumberVector> Overrides: isSquared in class TriangularDiscriminationDistance Returns: true when squared. • #### isMetric public boolean isMetric() Description copied from interface: Distance Is this distance function metric (satisfy the triangle inequality) Returns: true when metric. • #### toString public java.lang.String toString() Overrides: toString in class TriangularDiscriminationDistance • #### equals public boolean equals​(java.lang.Object obj) Overrides: equals in class TriangularDiscriminationDistance • #### hashCode public int hashCode() Overrides: hashCode in class TriangularDiscriminationDistance
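Outside of ELKI, the formula in the class description is straightforward to reproduce. Here is a minimal Python sketch of the same quantity (not the library's API; skipping zero-sum components is an assumption, justified by the term vanishing in the limit):

```python
import math

def triangular_distance(x, y):
    """sqrt( sum_i |x_i - y_i|^2 / (x_i + y_i) ),
    intended for nonnegative distribution vectors that sum to 1."""
    total = 0.0
    for xi, yi in zip(x, y):
        denom = xi + yi
        if denom > 0:  # a 0/0 component contributes nothing in the limit
            total += (xi - yi) ** 2 / denom
    return math.sqrt(total)
```

For example, two identical distributions give 0.0, while two disjoint distributions such as [1, 0] and [0, 1] give sqrt(2), the maximum for the metric variant.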
# Compute a cardinality using Chinese remainder theorem

I posted the question here but got no response. I am looking to compute this cardinality: $$N(q)=\#\Bigg\{n \in \mathbb{N} \ | \ \gcd\bigg(n^2+1, \prod_{\substack{p \leqslant q \\ p\text{ prime}}}p\bigg)=1 , \ \ n^2+1 \leqslant \!\!\!\prod_{\substack{p \leqslant q \\ p\text{ prime}}}\!\!p \Bigg\},$$ by using the Chinese remainder theorem. First, for $$p$$ an odd prime and $$m\in\mathbb{Z}/p\mathbb{Z}$$, the number of solutions of the congruence $$m^2 + 1 \equiv 0 \pmod p$$ is: $$\begin{cases} 0 & \text{ if } p \equiv 3 \pmod 4 \\ 2 & \text{ if } p \equiv 1 \pmod 4 \end{cases}.$$ Using the Chinese remainder theorem and the fundamental counting principle, I get this result: $$N(q) = \bigg(\prod_{\substack{p \leqslant q \\ p \equiv 3[4] \\ p\text{ prime}}}p \bigg)\prod_{\substack{p \leqslant q \\ p \equiv 1[4] \\ p\text{ prime}}}(p-2) \label{1}\tag{1}$$ Formula \eqref{1} seems not to be correct, as when I check $$N(q)$$ numerically I do not get the same results as by counting. The true values are: $$N(7)=5, \ N(11)=15, \ N(13)=45 , \ N(17)=161, \ N(19)=698, \cdots$$ Question: Why is my formula \eqref{1} not correct? And what is the correct formula? Many thanks for any help. Numerically it is very likely that: $$N(q) \approx \dfrac{1}{\sqrt{\displaystyle \prod_{\substack{p \leqslant q \\ p\text{ prime}}}p }} \, \bigg(\prod_{\substack{p \leqslant q \\ p \equiv 3[4] \\ p\text{ prime}}}p \bigg)\prod_{\substack{p \leqslant q \\ p \equiv 1[4] \\ p\text{ prime}}}(p-2)$$
• You can see yourself, looking already at $q = 3$, that your solution overcounts: it allows all residue classes modulo $p$, but your condition $n^2 + 1 \le \prod p$ allows only some. For example, the only allowable $n$ when $q = 3$ are $n = 0$ and $n = 2$; since $n = 4$ is too big, we already get $N(3) = 2$, not $N(3) = \prod_{\substack{p \le 3 \\ p \equiv 3 \pmod4}} p = 3$ as predicted.
May 3, 2020 at 6:03 • First of all, you forgot $p=2$; secondly, your Chinese remainder method works precisely only for $n\leq \prod p$, but you need those $n$ with $n^2+1\leq \prod p$. May 3, 2020 at 6:05 • @LSpice, we have $N(3)=2$, since the only numbers coprime to $6$ and less than $6$ are $1,5$, and both are of the form $n^2+1$. As I say, formula $(1)$ is not correct. May 3, 2020 at 10:44 • @PavelKozlov, thank you, I checked and you are right. May 3, 2020 at 10:51 • It sounds like you disagree, but actually I think we (and @PavelKozlov) agree. May 3, 2020 at 13:59
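The stated true values are easy to confirm by brute force, which also makes the comments' point concrete: the count runs only over $n$ with $n^2+1 \leq \prod p$, not over a full system of residues. A small Python sketch (function names are mine):

```python
from math import gcd, isqrt, prod

def primorial(q):
    """Product of all primes <= q (trial division is fine at this scale)."""
    return prod(p for p in range(2, q + 1)
                if all(p % d for d in range(2, isqrt(p) + 1)))

def N(q):
    """Count n >= 0 with n^2 + 1 <= P and gcd(n^2 + 1, P) = 1,
    where P is the primorial of q.  Note n ranges only up to isqrt(P - 1),
    which is why a pure residue-class count overcounts."""
    P = primorial(q)
    return sum(1 for n in range(isqrt(P - 1) + 1) if gcd(n * n + 1, P) == 1)
```

This reproduces $N(3)=2$, $N(7)=5$, $N(11)=15$, in agreement with the values listed in the question and the comments.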
Journal article Open Access

# Lower Genital Tract Infections In Men - Symptoms And Treatment

Raif Bakner

### DCAT Export

<?xml version='1.0' encoding='utf-8'?> <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:adms="http://www.w3.org/ns/adms#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dct="http://purl.org/dc/terms/" xmlns:dctype="http://purl.org/dc/dcmitype/" xmlns:dcat="http://www.w3.org/ns/dcat#" xmlns:duv="http://www.w3.org/ns/duv#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:frapo="http://purl.org/cerif/frapo/" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:gsp="http://www.opengis.net/ont/geosparql#" xmlns:locn="http://www.w3.org/ns/locn#" xmlns:org="http://www.w3.org/ns/org#" xmlns:owl="http://www.w3.org/2002/07/owl#" xmlns:prov="http://www.w3.org/ns/prov#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:vcard="http://www.w3.org/2006/vcard/ns#" xmlns:wdrs="http://www.w3.org/2007/05/powder-s#"> <rdf:Description rdf:about="https://doi.org/10.5281/zenodo.4916310"> <dct:identifier rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">https://doi.org/10.5281/zenodo.4916310</dct:identifier> <foaf:page rdf:resource="https://doi.org/10.5281/zenodo.4916310"/> <dct:creator> <rdf:Description> <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/> <foaf:name>Raif Bakner</foaf:name> </rdf:Description> </dct:creator> <dct:title>Lower Genital Tract Infections In Men - Symptoms And Treatment</dct:title> <dct:publisher> <foaf:Agent> <foaf:name>Zenodo</foaf:name> </foaf:Agent> </dct:publisher> <dct:issued rdf:datatype="http://www.w3.org/2001/XMLSchema#gYear">2021</dct:issued> <dct:issued rdf:datatype="http://www.w3.org/2001/XMLSchema#date">2021-06-09</dct:issued> <owl:sameAs rdf:resource="https://zenodo.org/record/4916310"/> <skos:notation rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">https://zenodo.org/record/4916310</skos:notation> <dct:isVersionOf
rdf:resource="https://doi.org/10.5281/zenodo.4916309"/> <dct:description>&lt;p&gt;Although it is not fully understood, there are growing evidence that elevated interstitial fluid in the penis and rectum can cause penis infections. Male engorgement is the spontaneous enlargement of the corpus vasculatum, caused by small microscopic vessels that are located at the root of the penis. These cavernous vessels expand and enlarge the vessels that feed and carry blood into the penis when the penis is inflated. These structural changes are responsible for increasing blood flow to the penis, which makes the penis rigid and fully erect. The spread of this surface disease will cause persistent and even fatal sexual fluid impregnation.&lt;/p&gt; &lt;p&gt;Male engorgement in the rectum is therefore more likely to occur during the sexual state which is occurring in the phallus. The screaming signals of the spermicide cycle occur in the following emission pattern.&lt;/p&gt; &lt;p&gt;&lt;strong&gt;Ejaculation &lt;/strong&gt;&lt;/p&gt; &lt;p&gt;Killer statistic is the continuous water retention in the penis. Successful ejaculation is obtained by the valleys opening in the penis. Other offering attempts of the penis are organ and prostate stimulation and ejaculation through sexual stimulation. A combination of the attempt and approach shows successful intercourse in up to 43% of cases.&lt;/p&gt; &lt;p&gt;The future which we hope to cause can work in. At present, we provide more evidence of the need for a longer study period, because we try to understand the mechanisms that lead to this condition. We also seek to understand why recipients of the AKI have a shorter trial period than normal recipients.&lt;/p&gt; &lt;p&gt;This condition is contributors to the importance of moving daughter Exactly. The position of the feet made per the feet. 
Surgery was one of the ways that our dermatologists can work freelance.&lt;/p&gt; &lt;p&gt;Today, micro-tears, genital surgery and other corrective procedures are suggested a men who suffers from male engorgement, since most men are I believe involved in extreme conditions where his anatomy is more sensitive, but the problem suggestive.&lt;/p&gt; &lt;p&gt;Today, a better we offer a &lt;a href="https://en.wikipedia.org/wiki/Dermatology"&gt;dermatologist&lt;/a&gt; with a degree of specialized knowledge to work in upon behalf of the patient for the understanding of the day. We are here to increase knowledge your experience it feels remarkable to see the first documentary deeply on the condition of geriatric male engorgement.&lt;/p&gt;</dct:description> <dct:accessRights rdf:resource="http://publications.europa.eu/resource/authority/access-right/PUBLIC"/> <dct:accessRights> <dct:RightsStatement> <rdfs:label>Open Access</rdfs:label> </dct:RightsStatement> </dct:accessRights> <dcat:distribution> <dcat:Distribution> <dcat:accessURL rdf:resource="https://doi.org/10.5281/zenodo.4916310">https://doi.org/10.5281/zenodo.4916310</dcat:accessURL> <dcat:byteSize>808617</dcat:byteSize> <dcat:mediaType>image/jpeg</dcat:mediaType> </dcat:Distribution> </dcat:distribution> </rdf:Description> </rdf:RDF>
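A DCAT export like the one above can be consumed programmatically with any RDF/XML-aware tool. As a rough standard-library-only sketch (no full RDF parsing; `record_title` and the trimmed `sample` record are illustrative, not a Zenodo API):

```python
import xml.etree.ElementTree as ET

# Dublin Core Terms namespace, as declared in the export above.
DCT = "{http://purl.org/dc/terms/}"

def record_title(rdf_xml: str) -> str:
    """Pull dct:title out of a DCAT RDF/XML export (naive element lookup)."""
    root = ET.fromstring(rdf_xml)
    el = root.find(f".//{DCT}title")
    return el.text if el is not None else ""

# A trimmed-down record in the same shape as the export above.
sample = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                     xmlns:dct="http://purl.org/dc/terms/">
  <rdf:Description>
    <dct:title>Lower Genital Tract Infections In Men - Symptoms And Treatment</dct:title>
  </rdf:Description>
</rdf:RDF>"""
```

For anything beyond single-field extraction, a proper RDF library would be the better choice, since RDF/XML allows the same graph to be serialized in several shapes.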
Which of the following is a cultivated variety of red gram?

$\begin {array} {ll} (1)\;\text{Varalakshmi} & \quad (2)\;\text{Gangabhavani} \\ (3)\;\text{Padma} & \quad (4)\;\text{Sarada} \end {array}$