# Can the displacement current be equal and opposite to the conduction current so that the total current is zero?

Dor

If so, what will I measure on the ammeter, the zero total current or the value of the conduction current? I was thinking of the following example: a circuit consists of a current source, an ammeter, a switch, and a semiconductor. The semiconductor can carry both conduction and displacement currents, since it is a conducting dielectric. At some point in time I switch off the circuit (infinitely fast), so the current in the outer circuit is zero. The electric field then changes in time, so there will be a displacement current. For the total current to be zero, the conduction current must cancel the displacement current. Is this description correct? Thanks

Homework Helper Gold Member

"Can the displacement current be equal to and opposite in sign to the c" has been truncated, so it might seem [at a quick glance] that you are asking about the speed of light, c. I think the full title is something like "Can the displacement current be equal to and opposite in sign to the conduction current?"

Dale

Dor

Sorry Dale, and thank you robphy for drawing my attention to this. I've edited the title to a more reasonable one.

Mentor

No problem! Since my post was based on a mistaken understanding, I have deleted it.

Dor
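A minimal numerical sketch of the cancellation asked about in the question (my addition, not from the thread; the material constants below are illustrative placeholders): in an ohmic dielectric with conductivity σ and permittivity ε, the field after switch-off relaxes as E(t) = E0·exp(−σt/ε), so the conduction current density σE and the displacement current density ε·dE/dt cancel at every instant.

```python
import math

# Illustrative material constants (hypothetical values, not from the thread)
sigma = 2.0      # conductivity, S/m
eps = 5.0e-11    # permittivity, F/m
E0 = 1.0e3       # field at the moment the switch opens, V/m

def E(t):
    # Charge relaxation: the field decays with time constant eps/sigma
    return E0 * math.exp(-sigma * t / eps)

def total_current_density(t, dt=1e-15):
    J_cond = sigma * E(t)                    # conduction current density
    J_disp = eps * (E(t + dt) - E(t)) / dt   # displacement current density, eps*dE/dt
    return J_cond + J_disp

# The two terms cancel (up to finite-difference error) at any time
for t in (0.0, 1e-11, 5e-11):
    assert abs(total_current_density(t)) < 1e-4 * sigma * E(t) + 1e-9
```

The cancellation is exact analytically, since ε·dE/dt = −σE for this decay; the small residual here is only the forward-difference error.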
# Fermions in the same state I need some clarification of what is meant when someone says "fermions cannot occupy the same quantum state". Consider two bosons: $$\psi(\vec{r_1}, s_1, \vec{r_2}, s_2) = \frac{1}{\sqrt{2}} \left( \phi_A(\vec{r_1}, s_1)\phi_B(\vec{r_2}, s_2) + \phi_A(\vec{r_2}, s_2)\phi_B(\vec{r_1}, s_1) \right)$$ This is one wavefunction of two particles. A wavefunction directly corresponds to a state, and since this is only one wavefunction, it seems there is only one state -- which the two bosons occupy. But now consider two fermions: $$\psi(\vec{r_1}, s_1, \vec{r_2}, s_2) = \frac{1}{\sqrt{2}} \left( \phi_A(\vec{r_1}, s_1)\phi_B(\vec{r_2}, s_2) - \phi_A(\vec{r_2}, s_2)\phi_B(\vec{r_1}, s_1) \right)$$ Again, one wavefunction (=> one state) and two particles that occupy it. Yeah, $\psi(\vec{r_1}, s_1, \vec{r_2}, s_2) = -\psi(\vec{r_2}, s_2, \vec{r_1}, s_1)$, but it's still just one state -- occupied by two fermions. Could someone clarify? - Great question that exposes some really confusing terminology. This is a rather long answer, and the punchline is basically in the second-to-last paragraph, but I think (hope) it's worthwhile to read the whole answer because I tried to give a somewhat systematic description of fermionic states using a specific, simple example along the way. Firstly, let's use Dirac notation; it makes things a bit clearer in my opinion. Let's also restrict the initial discussion to the spin states of two spin-$1/2$ particles (which are therefore fermions) so that the Hilbert space for the state of each particle is two-dimensional. The Hilbert space $\mathcal H_{1/2}$ for a single spin-$1/2$ particle is spanned by the vectors $|+\rangle, |-\rangle$ corresponding to the spin being "up" and "down" respectively. The Hilbert space for the composite system of two distinguishable spin-$1/2$ particles is the tensor product $\mathcal H=\mathcal H_{1/2}\otimes\mathcal H_{1/2}$ of the single spin-$1/2$ Hilbert space with itself. 
This Hilbert space is four-dimensional and is spanned by the four states \begin{align} |+\rangle|+\rangle, \qquad |+\rangle|-\rangle, \qquad |-\rangle|+\rangle,\qquad |-\rangle|-\rangle \end{align} Every state of the system is some linear combination of these four. Now suppose, instead, that the spins are identical; then it turns out that the physical Hilbert space of the system is no longer the full tensor product: it is a subspace of the tensor product called the "antisymmetric subspace", which is defined as follows. We define the exchange operator $P$ on $\mathcal H$ as the unique linear operator with the following action on any tensor product basis state \begin{align} P|i\rangle|j\rangle = |j\rangle|i\rangle \end{align} In other words, the exchange operator just exchanges the two factors of any product state. We say that a state $|\psi\rangle$ in the tensor product space is antisymmetric provided \begin{align} P|\psi\rangle = -|\psi\rangle \end{align} The antisymmetric subspace of $\mathcal H$ is then defined as the set of all vectors that are antisymmetric. We then have the following physical fact: for a system consisting of two identical fermions, the state of the system must reside in the antisymmetric subspace of the tensor product of the single-particle Hilbert spaces. Now let's go back to the spin example to see what this means concretely. 
An arbitrary state $|\psi\rangle$ of the two spin-$1/2$ system can be written as \begin{align} |\psi\rangle = c_{++}|+\rangle|+\rangle + c_{+-}|+\rangle|-\rangle + c_{-+}|-\rangle|+\rangle + c_{--}|-\rangle|-\rangle \end{align} The exchange operator acting on this state gives \begin{align} P|\psi\rangle = c_{++}|+\rangle|+\rangle + c_{+-}|-\rangle|+\rangle + c_{-+}|+\rangle|-\rangle + c_{--}|-\rangle|-\rangle \end{align} but for identical fermions the state must be antisymmetric, and this implies constraints on the coefficients \begin{align} c_{++} = 0, \qquad c_{--} = 0, \qquad c_{-+} = -c_{+-} \end{align} so the most general (normalized) fermionic state for the system is \begin{align} |\psi\rangle = \frac{1}{\sqrt{2}}(|+\rangle|-\rangle - |-\rangle|+\rangle) \end{align} When we say that the particles cannot occupy the same state, this is just another way of pointing out that in this case the coefficients of the states $|+\rangle|+\rangle$ and $|-\rangle|-\rangle$ must vanish; these are the states in which either both spins are "up" or both are "down". In particular, you say "but it's still just one state -- occupied by two fermions." Well, certainly that's true, since the (pure) state of any quantum mechanical system must be some vector in some Hilbert space. The above example shows, however, that the "same state" terminology can be understood in terms of the two tensor factors in the Hilbert space; namely, the product basis vectors in which the single-particle states of both particles are the same should be excluded from the Hilbert space. Note: I have concentrated on low-dimensional examples, but the analysis goes through analogously for Hilbert spaces of any dimension; the fermionic states are always just those in the antisymmetric subspace, so any product basis vectors in which both factors are the same should be excluded from the Hilbert space basis; such vectors do not live in the antisymmetric subspace. 
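A quick numerical check of the above (a sketch I've added, using the basis ordering $|+\rangle|+\rangle, |+\rangle|-\rangle, |-\rangle|+\rangle, |-\rangle|-\rangle$): build the exchange operator as a 4×4 matrix, and verify that its −1 eigenspace is one-dimensional and spanned by the singlet $(|+\rangle|-\rangle - |-\rangle|+\rangle)/\sqrt{2}$.

```python
import numpy as np

# Basis ordering for the two-spin product space: |++>, |+->, |-+>, |-->
# The exchange operator P swaps the two tensor factors: P|i>|j> = |j>|i>
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eigh(P)

# The antisymmetric subspace is the -1 eigenspace; it is one-dimensional
antisym = eigvecs[:, np.isclose(eigvals, -1.0)]
assert antisym.shape[1] == 1

# ...and it is spanned by the singlet (|+-> - |-+>)/sqrt(2)
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
assert np.isclose(abs(antisym[:, 0] @ singlet), 1.0)
```

The remaining three eigenvectors (eigenvalue +1) span the symmetric subspace, which is where two identical bosons' spin states would live.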
- One important thing is missing: the exchange operator is applicable if and only if the particles are identical. –  Incnis Mrsi Aug 21 at 16:41 @IncnisMrsi I'm not sure what you mean by "is applicable," but it is not true that the exchange operator is "only defined if the particles are identical." For example, if the two particles are both of spin $1/2$, then the spin Hilbert space will be (a subspace of) the tensor product $\mathcal H_{1/2}\otimes\mathcal H_{1/2}$, and the exchange operator can be defined on that Hilbert space regardless of whether or not the particles are identical. The particles being identical means that the spin state must be an eigenstate of the exchange operator with the appropriate eigenvalue $(\pm 1)$. –  joshphysics Aug 21 at 18:38 Sure, I understand the difference between "can be defined" and "should be used to extract the −1 eigenspace". That's why I said what I said. –  Incnis Mrsi Aug 21 at 18:45 @IncnisMrsi In that case, I'm not sure what part of the response is missing. I explicitly address the role of the exchange operator in determining appropriate states of identical particles. –  joshphysics Aug 21 at 18:47 Only the small silly thing that if two fermions are not identical, then nothing prevents them from having the same wavefunction. In other words, "the same quantum state" is not only about the wavefunction, but necessarily about identity. –  Incnis Mrsi Aug 21 at 18:56 The idea is actually simple. However, most books use sloppy terminology or do not discuss this issue explicitly, which often confuses students. The correct phrase should be: no two fermions in a system can have the same single-particle wavefunction. It is clear that the whole system is always described by a total wavefunction $\Psi$. 
However, if the particles are not interacting, we can solve for each individual single-particle wavefunction $\psi$ separately and construct the total wavefunction as: $$\Psi(r_1, r_2, \ldots, r_n) \propto \sum_{P} \sigma_P \prod_{i=1}^{n} \psi_{P(i)}(r_i)$$ where the sum runs over all permutations $P$ of the single-particle states, and $\sigma_P$ is $+1$ for bosons and the sign of the permutation for fermions. The symmetrization and antisymmetrization are direct results of the indistinguishability of the particles. So why do we usually discuss the single-particle wavefunctions $\psi$ rather than the total wavefunction $\Psi$? Although it is possible to measure the total wavefunction, each individual particle is actually the smallest measurable subsystem (corresponding to a partial trace). When we treat each particle separately, interesting phenomena such as entanglement appear. -
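The (anti)symmetrized total wavefunction above can be sketched directly for two non-interacting particles: sum the products $\prod_i \psi_{P(i)}(r_i)$ over all permutations $P$, weighting each term by $\mathrm{sgn}(P)$ for fermions and $+1$ for bosons. This is my illustration, with hypothetical Gaussian-type orbitals standing in for the single-particle wavefunctions.

```python
import itertools, math

# Hypothetical single-particle orbitals (placeholders, not from the answer)
orbitals = [lambda x: math.exp(-x**2),
            lambda x: x * math.exp(-x**2)]

def total_wavefunction(positions, sign):
    # sign=-1 antisymmetrizes (fermions), sign=+1 symmetrizes (bosons)
    total = 0.0
    for perm in itertools.permutations(range(len(orbitals))):
        # sgn(P) = (-1)^(number of inversions of the permutation)
        inversions = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
                         if perm[a] > perm[b])
        coeff = sign ** inversions
        prod = 1.0
        for particle, orb_index in enumerate(perm):
            prod *= orbitals[orb_index](positions[particle])
        total += coeff * prod
    return total

r1, r2 = 0.3, 1.1
# Fermions: swapping the particles flips the sign of the total wavefunction
assert math.isclose(total_wavefunction((r1, r2), -1), -total_wavefunction((r2, r1), -1))
# ...and it vanishes when both particles sit at the same point (Pauli exclusion)
assert abs(total_wavefunction((r1, r1), -1)) < 1e-12
# Bosons: the total wavefunction is symmetric under exchange
assert math.isclose(total_wavefunction((r1, r2), +1), total_wavefunction((r2, r1), +1))
```

For two particles the fermionic case is just the 2×2 Slater determinant $\psi_1(r_1)\psi_2(r_2) - \psi_2(r_1)\psi_1(r_2)$.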
## Article

Print version ISSN 2007-0934

### Rev. Mex. Cienc. Agríc vol.10 no.6 Texcoco Sep. 2019  Epub Oct 02, 2020

#### https://doi.org/10.29312/remexca.v10i6.1767

InfoStat, InfoGen and SAS for mutually orthogonal contrasts in randomized complete block experiments in subdivided plots

1Facultad de Ciencias Agrícolas-Centro de Investigación y Estudios Avanzados en Fitomejoramiento-Universidad Autónoma del Estado de México. El Cerrillo Piedras Blancas, Estado de México. AP. 435. Tel. y Fax. 722 2965518, ext. 148 (djperezl@uaemex.mx; m-rubi65@yahoo.com.mx; fgrfca@hotmail.com; jrfrancom@uaemex.mx; padillalaraaraceli@hotmail.com).

Abstract

Series of experiments (SE) in time and space with split-plot arrangements are routinely used in various disciplines of science and technology, but for subdivided plots (PS) there is little information. With mutually orthogonal contrasts (CMO's), the variability of treatment means or totals is partitioned into groups within an analysis of variance for experiments with one or more factors. This study presents programs, prepared by the corresponding author, to analyze green pod weight data recorded in fava beans in an SE in randomized complete blocks with a PS arrangement, to be run in the Statistical Analysis System (SAS), InfoStat or InfoGen. The procedures for determining how many coefficients each contrast has, what their values are, and how to calculate some CMO's are indicated. A combined analysis of variance and CMO's are presented for the main factors and for their interactions, but the three statistical packages can also generate the analyses for each trial. With the platform developed in the present study, it will be easier to decompose the effects of main factors and their interactions in an SE under PS based on the construction of an appropriate set of orthogonal polynomials or a combination of these and the CMO's. 
Keywords: Vicia faba L.; factorial experiments; fixed effects model; High Valleys of central Mexico

Introduction

When designing and analyzing experiments, the analysis of variance (Anava) has conventionally been used to minimize experimental error and reliably estimate the effects of treatments (Juárez and Corona, 1990; Sahagún and Frey, 1990; Sahagún, 1997; Meneses et al., 2004). The assumptions that must be satisfied are that the chosen linear model adequately describes the observations, and that the errors follow a normal and independent distribution, with zero mean and constant, although unknown, variance (Sahagún, 1990; Sahagún et al., 2008). The analysis of quantitative variables in fixed, random or mixed effects models can refer to balanced or unbalanced cases (Matzinger et al., 1959; Sahagún, 1998; Montgomery, 2010) with qualitative or quantitative factors such as fertilization, planting or population density, insecticides, fungicides, plant hormones, cultivars, localities, years or their combinations, among others (Sahagún and Frey, 1990; Sahagún, 1997; Meneses et al., 2004; González et al., 2007). Factorial experiments save resources, increase the accuracy of estimates of main effects and make possible the study of their interactions (Sahagún et al., 2008). The use of crossing plans and experimental designs is a common procedure in plant and animal genetic improvement (Matzinger et al., 1959; Sahagún, 1997), as well as in seed production and in the generation, application, transfer and validation of technology (Sahagún and Frey, 1990; Meneses et al., 2004; González et al., 2007; Torres et al., 2017). 
Anava is a prerequisite for the comparison of treatment means, but its effects can also be partitioned into mutually orthogonal contrasts (CMO's) when one of the four basic experimental designs is chosen or when series of experiments in time and space are applied with strip and split-plot arrangements (Gomez and Gomez, 1984; Martínez, 1988; Sahagún, 1998; Rebolledo, 2002). The formation of CMO's has these advantages: a) each hypothesis test provides new, independent information; b) the interpretation of results is simpler; and c) the maximum number that can be estimated is limited. In their construction, the basic guide is their congruence with the objectives of the research, no matter whether they are mutually orthogonal or not, or how many treatments are evaluated (Sahagún et al., 2008). In incomplete factorials, the higher-order interactions have been used as experimental error, although it is not common to experiment with more than four factors and it is not frequent that all their interactions are significant (Sahagún et al., 2008; Montgomery, 2010; Walpole et al., 2012). Genotype x environment interaction, orthogonal polynomials, different regression techniques and various multivariate methods also use Anava (Gomez and Gomez, 1984; Sahagún, 1990; Sahagún et al., 2008; Torres et al., 2017). The SE in randomized complete blocks (DBCA) with divided plots is described in many publications, such as Martínez (1988) and Rebolledo (2002), who also developed several SAS programs to generate the Anava, the comparison of treatment means with Tukey's test, and the CMO's. The linear model of an SE in DBCA under PS was described by Herrera (2011) and Padilla et al. (2019), but there is still little information for random and mixed models, particularly when two factors are housed in the large, medium or small plot (Villa et al., 2010). 
The analysis of an SE in subdivided plots is subject to more errors when many variables have been recorded, but the development of programs for InfoStat, InfoGen and SAS, among others, will save time and effort if the linear model chosen is correct (Sahagún, 1998; Herrera, 2011; Padilla et al., 2019). The objective of this study was to present programs to analyze an SE in PS in DBCA with CMO's using three statistical packages in common use.

Framework

Linear model

It is established that: i= 1, 2, 3, ..., e experiments; j= 1, 2, 3, ..., r repetitions; k= 1, 2, 3, ..., a levels in the large plot; l= 1, 2, 3, ..., b levels in the medium plot; m= 1, 2, 3, ..., c levels in the small plot. Thus:

$$Y_{ijklm} = \mu + \alpha_i + \beta_{j(i)} + \gamma_k + (\alpha\gamma)_{ik} + \varepsilon_{ijk} + \delta_l + (\gamma\delta)_{kl} + (\alpha\delta)_{il} + (\alpha\gamma\delta)_{ikl} + \varepsilon_{ijkl} + \theta_m + (\gamma\theta)_{km} + (\delta\theta)_{lm} + (\gamma\delta\theta)_{klm} + (\alpha\theta)_{im} + (\alpha\gamma\theta)_{ikm} + (\alpha\delta\theta)_{ilm} + \varepsilon_{ijklm}$$

Where: $\mu$ is the grand arithmetic mean; $\alpha_i$ is the effect of the i-th experiment; $\beta_{j(i)}$ is the j-th repetition nested in the i-th experiment; $\gamma_k$ is the k-th fertilization; $\delta_l$ is the l-th population density; $\theta_m$ is the m-th cultivar; $\varepsilon_{ijk}$, $\varepsilon_{ijkl}$ and $\varepsilon_{ijklm}$ are the errors a, b and c of the large, medium and small plots; the remaining ten components are the viable interactions (Herrera, 2011; Padilla et al., 2019).

Obtaining CMO coefficients

Gomez and Gomez (1984) and Sahagún et al. (2008) define an orthogonal contrast as a linear combination of treatment effects. If $T_1, T_2, T_3, \ldots, T_t$ are unknown parameters related to the effects of t treatments and $C_1, C_2, C_3, \ldots, C_t$ are known constants, called contrast coefficients, then in single-factor experiments each contrast ($L_i$) is calculated as:

$$L_i = C_1T_1 + C_2T_2 + C_3T_3 + \cdots + C_tT_t = \sum_{i=1}^{t} C_iT_i, \qquad \sum_{i=1}^{t} C_i = 0$$

The sum of squares (SC) of $L_i$, with one degree of freedom (gl), is calculated as:

$$SC_{L_i} = \frac{\left[\sum_{i=1}^{t} C_iT_i\right]^2}{r\sum_{i=1}^{t} C_i^2}$$

With t treatments there are t-1 orthogonal contrasts. To test statistical significance in a fixed effects model, the ratio that results from dividing the mean square of $L_i$ by the mean square of the error is compared. 
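A small helper (my sketch, not part of the article's programs) implementing the two formulas above: the contrast value $L_i = \sum C_iT_i$ and its one-degree-of-freedom sum of squares $SC_{L_i} = (\sum C_iT_i)^2 / (r\sum C_i^2)$.

```python
def contrast_ss(coeffs, totals, r):
    """Sum of squares of a single-df contrast on treatment totals.

    coeffs: contrast coefficients C_i (must sum to zero)
    totals: treatment totals T_i
    r:      number of observations behind each total
    """
    assert abs(sum(coeffs)) < 1e-9, "a contrast's coefficients must sum to zero"
    L = sum(c * t for c, t in zip(coeffs, totals))
    return L**2 / (r * sum(c * c for c in coeffs))

# Toy example with 3 treatment totals (hypothetical numbers): "T2 vs T1, T3"
ss = contrast_ss([-1, 2, -1], [10.0, 16.0, 12.0], 4)
# L = -10 + 32 - 12 = 10, so SS = 10**2 / (4 * 6)
assert abs(ss - 100 / 24) < 1e-12
```

The same function applies to interaction contrasts, with `r` replaced by the product of levels and repetitions behind each cell total, as the article does below.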
The F test (1, error degrees of freedom) is used at the chosen level of significance (p= 0.05 or p= 0.01). Two contrasts $L_1$ and $L_2$, each with one degree of freedom (GL), are mutually orthogonal if the sum of the cross products of their coefficients is equal to zero; that is to say:

$$L_1 = C_{11}T_1 + C_{12}T_2 + C_{13}T_3 + \cdots + C_{1t}T_t$$

$$L_2 = C_{21}T_1 + C_{22}T_2 + C_{23}T_3 + \cdots + C_{2t}T_t$$

$$\sum_{i=1}^{t} C_{1i}C_{2i} = C_{11}C_{21} + C_{12}C_{22} + C_{13}C_{23} + \cdots + C_{1t}C_{2t} = 0$$

p contrasts with one GL (p>2) are mutually orthogonal if every pair in the group is orthogonal. Since the maximum number of mutually orthogonal contrasts with one GL is equal to the treatment degrees of freedom, then:

$$SC_{L_1} + SC_{L_2} + SC_{L_3} + \cdots + SC_{L_{t-1}} = SC \text{ of treatments}$$

To calculate the coefficients of mutually orthogonal contrasts (CMO) in interactions of any order in factorial experiments, proceed as follows: 1) to determine how many contrasts there will be, multiply the number of CMO of each factor; 2) the coefficients of each interaction contrast are obtained as the product of the factors' values and signs; 3) all the coefficients of each contrast are captured in the SAS editor program or in the InfoStat or InfoGen dialog; 4) the program is run, the outputs are verified, and the outputs are saved or printed. With 5 and 2 CMO for factors A and B there will be 10 combinations. Since there are 6 and 3 coefficients in factors A and B, any CMO of AxB will have 18 coefficients. If the coefficients of the first contrast for factors A and B are (1 -1 1 -1 1 -1) and [ -1 2 -1 ], the first combination (1 -1 1 -1 1 -1) [ -1 2 -1 ] will produce -1 2 -1 1 -2 1 -1 2 -1 1 -2 1 -1 2 -1 1 -2 1. Note: multiply 1(-1), 1(2), 1(-1), -1(-1), -1(2), -1(-1), …, -1(-1), -1(2), -1(-1). For AxC there will also be 10 contrasts, with 18 coefficients each; their values and signs are equal to those of A1B1. Thus: (1 -1 1 -1 1 -1) [ -1 2 -1 ] = -1 2 -1 1 -2 1 -1 2 -1 1 -2 1 -1 2 -1 1 -2 1. For the BxC interaction, 3x3 = 9 coefficients must be obtained for each of the four CMOs. 
For B1C1 you will have: [ -1 2 -1 ] [ -1 2 -1 ] = 1 -2 1 -2 4 -2 1 -2 1. Note: each value is obtained as -1(-1), -1(2), -1(-1), 2(-1), 2(2), 2(-1), -1(-1), -1(2), -1(-1). In AxBxC there are 6x3x3= 54 coefficients for each of the 5x2x2= 20 mutually orthogonal contrasts. For SAS, the values and signs are obtained by multiplying the coefficients of each AxB interaction contrast with those of C; for InfoStat they are generated by multiplying the values of C by those of AxB. For SAS, in A1B1C1 the values are the product of ( -1 2 -1 1 -2 1 -1 2 -1 1 -2 1 -1 2 -1 1 -2 1) with [ -1 2 -1 ] = 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1.

Statistical analysis

The green pod weights were subjected to a combined analysis of variance. The algebraic procedures are described in Herrera (2011) and Padilla et al. (2019). The outputs were obtained with the versions described in SAS Institute (1989), InfoStat (Balzarini et al., 2008; Di Rienzo et al., 2008) and InfoGen (Balzarini and Di Rienzo, 2016). Additionally, a subdivision of the three main effects and their four interactions with mutually orthogonal contrasts was made (Gomez and Gomez, 1984).

Calculation of some CMO

Padilla et al. (2019) present the green pod yield data used to perform the following calculations. With the totals of AxB, AxC and AxBxC, the CMOs for the three factors and for the BxC interaction can be calculated. 
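The coefficient construction described above amounts to a Kronecker product of the factors' coefficient vectors. A quick check (my addition, using numpy) that this reproduces the A1B1, B1C1 and A1B1C1 coefficient strings listed in the text:

```python
import numpy as np

# First contrast of each factor, as given in the text
A1 = np.array([1, -1, 1, -1, 1, -1])   # factor A (6 levels)
B1 = np.array([-1, 2, -1])             # factor B (3 levels)
C1 = np.array([-1, 2, -1])             # factor C (3 levels)

# Interaction coefficients = Kronecker product of the factors' coefficients
A1B1 = np.kron(A1, B1)        # 18 coefficients for AxB
A1B1C1 = np.kron(A1B1, C1)    # 54 coefficients for AxBxC

assert list(A1B1) == [-1, 2, -1, 1, -2, 1, -1, 2, -1, 1, -2, 1, -1, 2, -1, 1, -2, 1]
assert A1B1C1.size == 54
assert list(A1B1C1[:9]) == [1, -2, 1, -2, 4, -2, 1, -2, 1]

# B1C1 from the text is likewise kron(B1, C1)
assert list(np.kron(B1, C1)) == [1, -2, 1, -2, 4, -2, 1, -2, 1]
```

Since the Kronecker product of contrast vectors that each sum to zero also sums to zero, every interaction vector built this way is automatically a valid contrast.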
Contrast 1 of factor A (FER)

$$SC_{L_1} = \frac{\left(\sum_i C_iT_i\right)^2}{ebcr\sum_i C_i^2} = \frac{(365.11 - 375.06 + 463.38 - 344.10 + 410.86 - 353.36)^2}{2(3)(3)(3)(6)} = \frac{(1239.45 - 1072.52)^2}{324} = 86.01$$

Contrast 1 of factor B (DEN)

$$SC_{L_1} = \frac{\left(\sum_i C_iT_i\right)^2}{eacr\sum_i C_i^2} = \frac{[496.23 - 2(783.38) + 1032.39]^2}{2(6)(3)(3)(6)} = \frac{(1528.62 - 1566.76)^2}{648} = 2.245$$

Contrast 1 of factor C (CUL)

$$SC_{L_1} = \frac{\left(\sum_i C_iT_i\right)^2}{eabr\sum_i C_i^2} = \frac{[819.77 - 2(741.83) + 750.37]^2}{2(6)(3)(3)(6)} = \frac{(1570.14 - 1483.66)^2}{648} = 11.54$$

Contrast 1 of the FER x DEN interaction (AxB)

$$SC_{L_1} = \frac{\left(\sum_i C_iT_i\right)^2}{ecr\sum_i C_i^2}$$, where e, c and r are localities, cultivars and repetitions, respectively.

= [81.81 - 70.94 + 108.36 - 79.72 + 82.8 - 72.57 - 2(128.61) + 2(137.40) - 2(147.09) + 2(113.16) - 2(143.56) + 2(113.56) + 154.69 - 116.72 + 208.03 - 151.22 + 184.50 - 167.23]² / [2(3)(3)(36)] = (1548.43 - 1546.92)² / 648 = 0.0035.

Contrast 1 of the FER x CUL interaction (AxC)

$$SC_{L_1} = \frac{\left(\sum_i C_iT_i\right)^2}{ebr\sum_i C_i^2}$$, where b is the number of population density levels.

= [126.19 - 144.06 + 170.27 - 121.85 + 144.81 - 112.59 - 2(120.35) + 2(111.17) - 2(155.01) + 2(107.96) - 2(135.61) + 2(111.73) + 118.57 - 119.83 + 138.20 - 114.29 + 130.44 - 129.04]² / [2(3)(3)(36)] = 8.31.

Contrast 1 of the DEN x CUL interaction (BxC)

$$SC_{L_1} = \frac{\left(\sum_i C_iT_i\right)^2}{ear\sum_i C_i^2}$$, where a is the number of fertilization levels.

= [188.54 - 2(269.27) + 361.96 - 2(152.38) + 4(260.93) - 2(328.52) + 155.28 - 2(253.18) + 341.91]² / [2(6)(3)(36)] = 5.53.

Contrast 1 of the FER x DEN x CUL interaction (AxBxC) 
$$SC_{L_1} = \frac{\left(\sum_i C_iT_i\right)^2}{er\sum_i C_i^2}$$

= [30.31 - 241.9 + 53.98 - 29.29 + 252.3 - 62.47 + 47.78 - 251.3 + 71.19 - 30.49 + 237.81 - 53.55 + 28.84 - 250.23 + 65.74 - 21.83 + 235.73 - 55.03 - 226.82 + 445.81 - 247.72 + 215.05 - 443.92 + 2(52.2) - 231.55 + 453.34 - 270.12 + 222.7 - 437.81 + 247.45 - 229.74 + 442.67 - 263.15 + 226.47 - 437.38 + 247.88 + 24.68 - 240.90 + 52.99 - 26.6 + 241.18 - 51.95 + 29.03 - 242.45 + 66.72 - 26.53 + 237.54 - 50.22 + 24.17 - 250.66 + 55.61 - 24.27 + 240.45 - 65.32]² / [2(3)(216)] = 0.91.

The above calculations and the rest of the CMOs that can be estimated may be obtained with the following routines.

Programs for InfoStat and InfoGen

Stage 1. The data are ordered as experiments (EXP), repetitions (REP), fertilization (FER), population density (DEN), cultivars (CUL) and variable(s).

Stage 2. In the main menu choose Statistics\analysis of variance. In the define dialog box: Dependent variables: RVV. Classification variables: EXP REP FER DEN CUL. Choose accept.

Stage 3. In the specification of the terms of the model write: EXP\EXP>REP*FER; EXP>REP\EXP>REP*FER; FER\EXP>REP*FER; EXP*FER\EXP>REP*FER; EXP>REP*FER; DEN\EXP>FER>REP*DEN; DEN*EXP\EXP>FER>REP*DEN; DEN*FER\EXP>FER>REP*DEN; DEN*EXP*FER\EXP>FER>REP*DEN; EXP>FER>REP*DEN; CUL; CUL*EXP; CUL*FER; CUL*DEN; CUL*EXP*FER; CUL*EXP*DEN; CUL*FER*DEN; then choose accept. Note: in the dialog box the instructions above must be written separately, one per line and without the semicolons.

Stage 4. In the Analysis of variance dialog box choose: comparisons\contrasts\treatments\choose effects\matrix of contrasts\accept. Before clicking accept, you must choose to control orthogonality. After defining whether a main effect or an interaction will be estimated, enter the coefficients in the contrast matrix.

SAS Program

In the database, Exp = localities; A, B and C are fertilization, population density and cultivars; and the variables are identified as X1, X2, …, X16. 
Data favabean;
Input Exp rep A B C X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16;
Cards;

Here the data are captured in the order of the Input statement.

PROC SORT; BY EXP;
PROC GLM; BY EXP;
CLASS REP A B C;
MODEL X1-X16= REP A REP*A B A*B REP*B(A) C A*C B*C A*B*C;
TEST H=REP A E=REP*A;
TEST H=B A*B E=REP*B(A);

An individual analysis of variance is generated with GLM, and mutually orthogonal contrasts are calculated for the main effects; the large and medium plot errors are used for the F tests.

CONTRAST 'A1, A3, A5 vs A2, A4, A6' A 1 -1 1 -1 1 -1/E=REP*A;
CONTRAST 'A1 vs A3, A5' A 2 0 -1 0 -1 0/E=REP*A;
CONTRAST 'A3 vs A5' A 0 0 1 0 -1 0/E=REP*A;
CONTRAST 'A2 vs A4, A6' A 0 2 0 -1 0 -1/E=REP*A;
CONTRAST 'A4 vs A6' A 0 0 0 1 0 -1/E=REP*A;
CONTRAST 'B2 vs B1, B3' B -1 2 -1/E=REP*B(A);
CONTRAST 'B1 vs B3' B 1 0 -1/E=REP*B(A);

The contrasts of factor C are tested with the residual of the model, which is the small plot error; no error term needs to be specified at the end.

CONTRAST 'C2 vs C1, C3' C -1 2 -1;
CONTRAST 'C1 vs C3' C 1 0 -1;

The contrasts of the AxB interaction are tested with the medium plot error.

CONTRAST 'A1B1' A*B -1 2 -1 1 -2 1 -1 2 -1 1 -2 1 -1 2 -1 1 -2 1/E=REP*B(A);
CONTRAST 'A2B1' A*B -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1 0 0 0/E=REP*B(A);
CONTRAST 'A3B1' A*B 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0/E=REP*B(A);
CONTRAST 'A4B1' A*B 0 0 0 -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1/E=REP*B(A);
CONTRAST 'A5B1' A*B 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1/E=REP*B(A);
CONTRAST 'A1B2' A*B 1 0 -1 -1 0 1 1 0 -1 -1 0 1 1 0 -1 -1 0 1/E=REP*B(A);
CONTRAST 'A2B2' A*B 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1 0 0 0/E=REP*B(A);
CONTRAST 'A3B2' A*B 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0/E=REP*B(A);
CONTRAST 'A4B2' A*B 0 0 0 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1/E=REP*B(A);
CONTRAST 'A5B2' A*B 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1/E=REP*B(A);

The contrasts of the AxC interaction are tested with the residual of the model, so no error term needs to be indicated at the end. 
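The ten A*B CONTRAST statements above can be cross-checked for mutual orthogonality: every pair of coefficient vectors should have a zero dot product, and each vector should sum to zero. A sketch (my addition) that regenerates the same vectors by Kronecker products and verifies both properties:

```python
import numpy as np
from itertools import combinations

# Factor-level contrast vectors, as given in the CONTRAST statements
A = [np.array(v) for v in ([1, -1, 1, -1, 1, -1], [2, 0, -1, 0, -1, 0],
                           [0, 0, 1, 0, -1, 0], [0, 2, 0, -1, 0, -1],
                           [0, 0, 0, 1, 0, -1])]
B = [np.array(v) for v in ([-1, 2, -1], [1, 0, -1])]

# The ten AxB interaction contrasts, in the order A1B1..A5B1, then A1B2..A5B2
axb = [np.kron(a, b) for b in B for a in A]

assert len(axb) == 10 and all(v.size == 18 for v in axb)
# Mutually orthogonal: zero dot product for every pair
assert all(u @ v == 0 for u, v in combinations(axb, 2))
# ...and each is a valid contrast (coefficients sum to zero)
assert all(v.sum() == 0 for v in axb)
```

The first vector in `axb` matches the 'A1B1' statement's coefficients; the same check can be extended to the A*C and A*B*C sets.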
CONTRAST 'A1C1' A*C -1 2 -1 1 -2 1 -1 2 -1 1 -2 1 -1 2 -1 1 -2 1; CONTRAST 'A2C1' A*C -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1 0 0 0; CONTRAST 'A3C1' A*C 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0; CONTRAST 'A4C1' A*C 0 0 0 -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1; CONTRAST 'A5C1' A*C 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1; CONTRAST 'A1C2' A*C 1 0 -1 -1 0 1 1 0 -1 -1 0 1 1 0 -1 -1 0 1; CONTRAST 'A2C2' A*C 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1 0 0 0; CONTRAST 'A3C2' A*C 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0; CONTRAST 'A4C2' A*C 0 0 0 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1; CONTRAST 'A5C2' A*C 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1; CONTRAST 'B1C1' B*C 1 -2 1 -2 4 -2 1 -2 1; CONTRAST 'BIC2' B*C -1 0 1 2 0 -2 -1 0 1; CONTRAST 'B2C1' B*C -1 2 -1 0 0 0 1 -2 1; CONTRAST 'B2C2' B*C 1 0 -1 0 0 0 -1 0 1; The axbxc interaction coefficients are tested with the small or residual plot error of the model and are not indicated at the end. CONTRAST 'A1B1C1' A*B*C 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1; CONTRAST 'A1B1C2' A*B*C -1 0 1 2 0 -2 -1 0 1 1 0 -1 -2 0 2 1 0 -1 -1 0 1 2 0 -2 -1 0 1 1 0 -1 -2 0 2 1 0 -1 -1 0 1 2 0 -2 -1 0 1 1 0 -1 -2 0 2 1 0 -1; CONTRAST 'A1B2C1' A*B*C -1 2 -1 0 0 0 1 -2 1 1 -2 1 0 0 0 -1 2 -1 -1 2 -1 0 0 0 1 -2 1 1 -2 1 0 0 0 -1 2 -1 -1 2 -1 0 0 0 1 -2 1 1 -2 1 0 0 0 -1 2 -1; CONTRAST 'A1B2C2' A*B*C 1 0 -1 0 0 0 -1 0 1 -1 0 1 0 0 0 1 0 -1 1 0 -1 0 0 0 -1 0 1 -1 0 1 0 0 0 1 0 -1 1 0 -1 0 0 0 -1 0 1 -1 0 1 0 0 0 1 0 -1; CONTRAST 'A2B1C1' A*B*C 2 -4 2 -4 8 -4 2 -4 2 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A2B1C2' A*B*C -2 0 2 4 0 -4 -2 0 2 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A2B2C1' A*B*C -2 4 -2 0 0 0 2 -4 2 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A2B2C2' A*B*C 2 0 -2 0 0 0 -2 0 2 0 0 0 0 0 
0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B1C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 -2 1 -2 4 -2 1 -2 1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B1C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 1 2 0 -2 -1 0 1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B2C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B2C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A4B1C1' A*B*C 0 0 0 0 0 0 0 0 0 2 -4 2 -4 8 -4 2 -4 2 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1; CONTRAST 'A4B1C2' A*B*C 0 0 0 0 0 0 0 0 0 -2 0 2 4 0 -4 -2 0 2 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1; CONTRAST 'A4B2C1' A*B*C 0 0 0 0 0 0 0 0 0 -2 4 -2 0 0 0 2 -4 2 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1; CONTRAST 'A4B2C2' A*B*C 0 0 0 0 0 0 0 0 0 2 0 -2 0 0 0 -2 0 2 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1; CONTRAST 'A5B1C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 -2 1 -2 4 -2 1 -2 1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1; CONTRAST 'A5B1C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 1 2 0 -2 -1 0 1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1; CONTRAST 'A5B2C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1; CONTRAST 'A5B2C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1; PROC GLM; CLASS EXP REP A B C; MODEL X1-X16= EXP REP(EXP) A A*EXP REP*A(EXP) B A*B B*EXP A*B*EXP REP*B(A EXP) C A*C B*C A*B*C C*EXP A*C*EXP B*C*EXP; TEST H= EXP A REP(EXP) EXP*A E=REP*A(EXP); TEST H= B A*B B*EXP A*B*EXP E=REP*B(A 
EXP); A combined analysis of variance is obtained with partition of main effects with mutually orthogonal contrasts. CONTRAST 'A1, A3, A5 VS A2, A4, A6' A 1 -1 1 -1 1 -1/E=REP*A(EXP); CONTRAST 'A1 VS A3, A5' A 2 0 -1 0 -1 0/E=REP*A(EXP); CONTRAST 'A3 VS A5' A 0 0 1 0 -1 0/E=REP*A(EXP); CONTRAST 'A2 VS A4, A6' A 0 2 0 -1 0 -1/E=REP*A(EXP); CONTRAST 'A4 VS A6' A 0 0 0 1 0 -1/E=REP*A(EXP); CONTRAST 'B2 VS B1, B3' B -1 2 -1/E=REP*B(A EXP); CONTRAST 'B1 VS B3' B 1 0 -1/E=REP*B(A EXP); CONTRAST 'C2 VS C1, C3' C -1 2 -1; CONTRAST 'C1 VS C3' C 1 0 -1; CONTRAST 'A1B1' A*B -1 2 -1 1 -2 1 -1 2 -1 1 -2 1 -1 2 -1 1 -2 1/E=REP*B(A EXP); CONTRAST 'A2B1' A*B -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1 0 0 0/E=REP*B(A EXP); CONTRAST 'A3B1' A*B 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0/E=REP*B(A EXP); CONTRAST 'A4B1' A*B 0 0 0 -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1/E=REP*B(A EXP); CONTRAST 'A5B1' A*B 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1/E=REP*B(A EXP); CONTRAST 'A1B2' A*B 1 0 -1 -1 0 1 1 0 -1 -1 0 1 1 0 -1 -1 0 1/E=REP*B(A EXP); CONTRAST 'A2B2' A*B 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1 0 0 0/E=REP*B(A EXP); CONTRAST 'A3B2' A*B 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0/E=REP*B(A EXP); CONTRAST 'A4B2' A*B 0 0 0 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1/E=REP*B(A EXP); CONTRAST 'A5B2' A*B 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1/E=REP*B(A EXP); CONTRAST 'A1C1' A*C -1 2 -1 1 -2 1 -1 2 -1 1 -2 1 -1 2 -1 1 -2 1; CONTRAST 'A2C1' A*C -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1 0 0 0; CONTRAST 'A3C1' A*C 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0; CONTRAST 'A4C1' A*C 0 0 0 -2 4 -2 0 0 0 1 -2 1 0 0 0 1 -2 1; CONTRAST 'A5C1' A*C 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1; CONTRAST 'A1C2' A*C 1 0 -1 -1 0 1 1 0 -1 -1 0 1 1 0 -1 -1 0 1; CONTRAST 'A2C2' A*C 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1 0 0 0; CONTRAST 'A3C2' A*C 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0; CONTRAST 'A4C2' A*C 0 0 0 2 0 -2 0 0 0 -1 0 1 0 0 0 -1 0 1; CONTRAST 'A5C2' A*C 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1; CONTRAST 'B1C1' B*C 1 -2 1 -2 4 -2 1 -2 1; CONTRAST 'BIC2' B*C -1 0 1 2 0 -2 -1 0 
1; CONTRAST 'B2C1' B*C -1 2 -1 0 0 0 1 -2 1; CONTRAST 'B2C2' B*C 1 0 -1 0 0 0 -1 0 1; CONTRAST 'A1B1C1' A*B*C 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1 1 -2 1 -2 4 -2 1 -2 1 -1 2 -1 2 -4 2 -1 2 -1; CONTRAST 'A1B1C2' A*B*C -1 0 1 2 0 -2 -1 0 1 1 0 -1 -2 0 2 1 0 -1 -1 0 1 2 0 -2 -1 0 1 1 0 -1 -2 0 2 1 0 -1 -1 0 1 2 0 -2 -1 0 1 1 0 -1 -2 0 2 1 0 -1; CONTRAST 'A1B2C1' A*B*C -1 2 -1 0 0 0 1 -2 1 1 -2 1 0 0 0 -1 2 -1 -1 2 -1 0 0 0 1 -2 1 1 -2 1 0 0 0 -1 2 -1 -1 2 -1 0 0 0 1 -2 1 1 -2 1 0 0 0 -1 2 -1; CONTRAST 'A1B2C2' A*B*C 1 0 -1 0 0 0 -1 0 1 -1 0 1 0 0 0 1 0 -1 1 0 -1 0 0 0 -1 0 1 -1 0 1 0 0 0 1 0 -1 1 0 -1 0 0 0 -1 0 1 -1 0 1 0 0 0 1 0 -1; CONTRAST 'A2B1C1' A*B*C 2 -4 2 -4 8 -4 2 -4 2 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A2B1C2' A*B*C -2 0 2 4 0 -4 -2 0 2 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A2B2C1' A*B*C -2 4 -2 0 0 0 2 -4 2 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A2B2C2' A*B*C 2 0 -2 0 0 0 -2 0 2 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B1C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 -2 1 -2 4 -2 1 -2 1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B1C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 1 2 0 -2 -1 0 1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B2C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A3B2C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0; CONTRAST 'A4B1C1' A*B*C 0 0 0 0 0 0 0 0 0 2 -4 2 -4 8 -4 2 -4 2 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1; CONTRAST 'A4B1C2' A*B*C 0 0 0 0 
0 0 0 0 0 -2 0 2 4 0 -4 -2 0 2 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1; CONTRAST 'A4B2C1' A*B*C 0 0 0 0 0 0 0 0 0 -2 4 -2 0 0 0 2 -4 2 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1; CONTRAST 'A4B2C2' A*B*C 0 0 0 0 0 0 0 0 0 2 0 -2 0 0 0 -2 0 2 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1; CONTRAST 'A5B1C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 -2 1 -2 4 -2 1 -2 1 0 0 0 0 0 0 0 0 0 -1 2 -1 2 -4 2 -1 2 -1; CONTRAST 'A5B1C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 1 2 0 -2 -1 0 1 0 0 0 0 0 0 0 0 0 1 0 -1 -2 0 2 1 0 -1; CONTRAST 'A5B2C1' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 2 -1 0 0 0 1 -2 1 0 0 0 0 0 0 0 0 0 1 -2 1 0 0 0 -1 2 -1; CONTRAST 'A5B2C2' A*B*C 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 -1 0 0 0 -1 0 1 0 0 0 0 0 0 0 0 0 -1 0 1 0 0 0 1 0 -1; RUN; Some results obtained with the three statistical packages. Source of variation Degrees of freedom Sum of squares Squares means F value p value Model 159 4 571.71 28.75 6.58 0.0001 Experiments (Exp) 1 37.13 37.13 3.7 0.0686 Rep (Exp) 4 1 650.68 412.67 41.16 0.0001 Fertilization (Fer) 5 185.09 37.02 3.69 0.0157 Exp x Fer 5 150.73 30.15 3.01 0.0349 Error a 20 200.5 10.02 2.29 Density (Den) 2 1 333.63 666.63 61.46 0.0001 Fer x Den 10 49.90 4.99 0.46 0.9073 Exp x Den 2 67.69 33.84 3.12 0.0532 Exp x Fer x Den 10 106.91 10.69 0.99 0.4684 Error b 48 520.61 10.85 2.48 Cultivars (Cul) 2 33.85 16.92 3.87 0.0228 Cul x Fer 10 50.63 5.06 1.16 0.3228 Cul x Den 4 7.92 1.98 0.45 0.7703 Cul x Fer x Den 20 59.49 2.97 0.68 0.8416 Cul x Exp 2 56.25 28.13 6.43 0.002 Cul x Exp x Fer 10 53.98 5.40 1.23 0.2725 Cul x Exp x Den 4 7.08 1.77 0.41 0.8048 Error c 164 716.96 4.37 Total 323 5 288.67 Mutually orthogonal contrasts for fertilization (large plot). 
| Fer (A) | Degrees of freedom | Sum of squares | F value | p value | Large plot error |
|---|---|---|---|---|---|
| A1 | 1 | 86.01 | 8.58 | 0.0083 | Exp>Rep>Fer |
| A2 | 1 | 64.11 | 6.39 | 0.02 | Exp>Rep>Fer |
| A3 | 1 | 25.64 | 2.56 | 0.1255 | Exp>Rep>Fer |
| A4 | 1 | 8.56 | 0.85 | 0.3665 | Exp>Rep>Fer |
| A5 | 1 | 0.79 | 0.08 | 0.7813 | Exp>Rep>Fer |
| Total | 5 | 185.1 | 3.69 | 0.0157 | Exp>Rep>Fer |

Mutually orthogonal contrasts for population density (medium plot).

| Den (B) | Degrees of freedom | Sum of squares | F value | p value | Medium plot error |
|---|---|---|---|---|---|
| B1 | 1 | 2.25 | 0.21 | 0.65 | Exp>Fer>Rep*Den |
| B2 | 1 | 1331.02 | 122.72 | 0.001 | Exp>Fer>Rep*Den |
| Total | 2 | 1333.27 | 61.46 | 0.0001 | Exp>Fer>Rep*Den |

Mutually orthogonal contrasts for fava bean cultivars (small plot).

| Cul (C) | Degrees of freedom | Sum of squares | F value | p value | Small plot error |
|---|---|---|---|---|---|
| C1 | 1 | 11.54 | 2.64 | 0.1061 | Model Residual |
| C2 | 1 | 22.3 | 5.0 | 0.0252 | Model Residual |
| Total | 2 | 33.84 | 3.87 | 0.0228 | Model Residual |

Mutually orthogonal contrasts for the fertilization x density interaction (AxB).

| DenxFer or AxB | Degrees of freedom | Sum of squares | F value | p value | Interaction error |
|---|---|---|---|---|---|
| C1 | 1 | 0.003 | 0.0003 | 0.98 | Exp>Fer>Rep*Den |
| C2 | 1 | 2.96 | 0.27 | 0.6 | Exp>Fer>Rep*Den |
| C3 | 1 | 8.18 | 0.75 | 0.38 | Exp>Fer>Rep*Den |
| C4 | 1 | 12.94 | 1.19 | 0.28 | Exp>Fer>Rep*Den |
| C5 | 1 | 0.3 | 0.03 | 0.86 | Exp>Fer>Rep*Den |
| C6 | 1 | 0.7 | 0.6 | 0.8 | Exp>Fer>Rep*Den |
| C7 | 1 | 14.32 | 1.32 | 0.25 | Exp>Fer>Rep*Den |
| C8 | 1 | 0.06 | 0.01 | 0.94 | Exp>Fer>Rep*Den |
| C9 | 1 | 2.99 | 0.28 | 0.6 | Exp>Fer>Rep*Den |
| C10 | 1 | 7.45 | 0.69 | 0.41 | Exp>Fer>Rep*Den |
| Total | 10 | 49.9 | 0.9 | 0.90 | Exp>Fer>Rep*Den |

Mutually orthogonal contrasts for the interaction fertilization x cultivars or AxC.
| CulxFer or AxC | Degrees of freedom | Sum of squares | F value | p value | Interaction error |
|---|---|---|---|---|---|
| C1 | 1 | 8.31 | 1.9 | 0.16 | Model Residual |
| C2 | 1 | 0.05 | 0.01 | 0.91 | Model Residual |
| C3 | 1 | 0.14 | 0.03 | 0.85 | Model Residual |
| C4 | 1 | 3.08 | 0.71 | 0.4 | Model Residual |
| C5 | 1 | 0.02 | 0.004 | 0.94 | Model Residual |
| C6 | 1 | 6.94 | 1.59 | 0.2 | Model Residual |
| C7 | 1 | 4.51 | 1.03 | 0.31 | Model Residual |
| C8 | 1 | 4.35 | 1 | 0.32 | Model Residual |
| C9 | 1 | 15.23 | 3.48 | 0.06 | Model Residual |
| C10 | 1 | 8.01 | 1.83 | 0.17 | Model Residual |
| Total | 10 | 50.64 | 1.16 | 0.32 | Model Residual |

Mutually orthogonal contrasts for the interaction density x cultivars or BxC.

| CulxDen or BxC | Degrees of freedom | Sum of squares | F value | p value | Interaction error |
|---|---|---|---|---|---|
| C1 | 1 | 5.54 | 1.27 | 0.26 | Model Residual |
| C2 | 1 | 0.14 | 0.03 | 0.85 | Model Residual |
| C3 | 1 | 1.03 | 0.24 | 0.62 | Model Residual |
| C4 | 1 | 1.21 | 0.28 | 0.59 | Model Residual |
| Total | 4 | 7.92 | 0.45 | 0.77 | Model Residual |

Mutually orthogonal contrasts for the interaction fertilization x density x cultivars.

| CulxDenxFer or AxBxC | Degrees of freedom | Sum of squares | F value | p value | Interaction error |
|---|---|---|---|---|---|
| C1 | 1 | 0.91 | 0.21 | 0.64 | Model Residual |
| C2 | 1 | 3.4 | 0.78 | 0.37 | Model Residual |
| C3 | 1 | 14.76 | 3.38 | 0.06 | Model Residual |
| C4 | 1 | 0.16 | 0.04 | 0.84 | Model Residual |
| C5 | 1 | 0.12 | 0.03 | 0.87 | Model Residual |
| C6 | 1 | 0.61 | 0.14 | 0.7 | Model Residual |
| C7 | 1 | 2.8 | 0.64 | 0.42 | Model Residual |
| C8 | 1 | 2.17 | 0.5 | 0.48 | Model Residual |
| C9 | 1 | 8.06 | 1.84 | 0.17 | Model Residual |
| C10 | 1 | 7.65 | 1.75 | 0.18 | Model Residual |
| C11 | 1 | 2.15 | 0.49 | 0.48 | Model Residual |
| C12 | 1 | 0.33 | 0.08 | 0.78 | Model Residual |
| C13 | 1 | 0.71 | 0.16 | 0.68 | Model Residual |
| C14 | 1 | 1.19 | 0.27 | 0.6 | Model Residual |
| C15 | 1 | 0.57 | 0.13 | 0.71 | Model Residual |
| C16 | 1 | 1.31 | 0.3 | 0.58 | Model Residual |
| C17 | 1 | 0.001 | 0 | 0.98 | Model Residual |
| C18 | 1 | 8.12 | 1.86 | 0.17 | Model Residual |
| C19 | 1 | 3.65 | 0.84 | 0.36 | Model Residual |
| C20 | 1 | 0.81 | 0.18 | 0.66 | Model Residual |
| Total | 20 | 59.48 | 0.68 | 0.84 | Model Residual |

Conclusions

The three statistical packages generate similar information for the series of experiments in randomized complete blocks in arrangement of subdivided plots (SE in DBCA in PS) in free or student versions (without cost).
The annual PC license for SAS is USD $2,000.00, while InfoStat or InfoGen costs only USD $50.00; the latter two are friendlier than SAS, requiring less input to obtain the statistical analyses of interest. The coefficients of mutually orthogonal contrasts (CMOs) must be entered in all three packages, and their calculation for interactions becomes more laborious as the number of levels within each factor increases. The user has the option of constructing only a subset of CMOs, or non-orthogonal contrasts, congruent with the research objectives. The statistical significance of the F values of the CMOs can serve as a practical guide for designing other complementary analyses, such as multiple comparisons of treatment means or the application of multivariate techniques. With the information presented in this essay, it will be easier to extend the analysis of SE in DBCA in PS to orthogonal polynomials (PO), response surfaces, or combinations of PO with CMOs.

Balzarini, M. G.; González, L.; Tablada, M.; Casanoves, F.; Di Rienzo, J. A. y Robledo, C. W. 2008. Manual del usuario de InfoStat, Editorial Brujas, Córdoba, Argentina. 82-112 pp. [ Links ]

Balzarini, M. G. y Di Rienzo, J. A. 2016. InfoGen. FCA. Universidad Nacional de Córdoba, Argentina. http://www.info-gen.com.mx. [ Links ]

Di Rienzo, J. A.; Casanoves, F.; Balzarini, M. G.; González, L.; Tablada, M. y Robledo, C. W. 2008. InfoStat, versión 2008. Grupo InfoStat, FCA, Universidad Nacional de Córdoba. Argentina. [ Links ]

Gomez, K. A.; Gomez, A. A. 1984. Statistical procedures for agricultural research. 2nd (Ed.). John Wiley & Sons, Inc. Printed in Singapore. 680 p. [ Links ]

González, H. A.; Pérez, L. D. J.; Sahagún, C. J.; Norman, M. T. H.; Balbuena, M. A. y Gutiérrez, R. F. 2007. Análisis de una cruza dialélica completa de líneas endogámicas de maíz. Rev. Cienc. Agríc. Informa. 16(1):10-17. [ Links ]

González, H. A.; Sahagún, C. J. y Pérez, L. D. J. 2007.
Estudio de ocho líneas de maíz en un experimento dialélico incompleto. Rev. Cienc. Agríc. Informa. 16(1):3-9. [ Links ] Herrera, S. L. A. 2011. Análisis de la varianza de un grupo de experimentos en parcelas subdivididas. Revista de la Facultad de Ciencias Veterinarias, UCV. 52(1):59-72. [ Links ] Juárez, M. J. A. y Corona, S. T. 1990. El análisis de experimentos por el método Papadakis. Rev. Chapingo. 71-72:110-113. [ Links ] Martínez, G. A. 1988. Diseños experimentales. Métodos y elementos de teoría. Editorial Trillas. Primera Edición. México, DF. 756 p. [ Links ] Matzinger, D. F.; Sprague, G. F. and Cockerham, C. C. 1959. Diallel Crosses of maize in experiments repeated over locations and years. Agron. J. 51(3):346-350. [ Links ] Meneses, M.; Mejía I. C. J. A. y Villanueva, V. C. 2004. Cambios en los componentes de varianza genética al realizar selección combinada en una población de calabaza. Rev. Chapingo Ser. Hortic. 10(2):165-172. [ Links ] Montgomery, D. C. 2010. Diseño y análisis de experimentos. Limusa-Noriega Editores. Segunda Edición, México, DF. 686 p. [ Links ] Padilla, L. A.; González, H. A.; Pérez, L. D. J.; Rubí, A. M.; Gutiérrez, R. F.; Ramírez, D. J. F.; Franco, M. J. R. P. y Serrato, C. R. 2019. Programas para SAS e InfoStat para analizar una serie de experimentos en parcelas subdivididas. Universidad Autónoma del Estado de México. Primera Edición. Toluca, México. 45- 55 p. [ Links ] Rebolledo, R. H. H. 2002. Manual SAS por computadora. Análisis estadístico de datos experimentales. Editorial Trillas. Primera edición. México, DF. 208 p. [ Links ] Sahagún, C. J. 1990. Utilidad del análisis de varianza en el estudio de la interacción entre genotipos y ambientes. Xilonen. 1(1):21-32. [ Links ] Sahagún, C. J. y Frey, K. J. 1990. Eficiencia de tres diseños experimentales para la evaluación de genotipos. Revista Chapingo. 71-72:114-122. [ Links ] Sahagún, J. J. 1997. Estimación de varianzas genéticas con machos S0 y líneas hembras S1 en el Diseño II. 
Rev. Chapingo Ser. Hortic. 3(2):71-76. [ Links ] Sahagún, C. J. 1998. Construcción y análisis de los modelos fijos, aleatorios y mixtos. Universidad Autónoma Chapingo (UACH). Departamento de Fitotecnia. Boletín técnico núm. 2. 65 p. [ Links ] Sahagún, C. J.; Martínez, G. A. y Rodríguez, P. J. E. 2008. Problemas y métodos comunes del análisis de experimentos factoriales. Rev. Chapingo Ser. Hortic. 14(2):213-222. [ Links ] SAS Institute Inc. 1989. SAS/STAT User’s Guide, Version 6, Fourth Edition, Volume 1, Cary, NC, USA. 943 p. [ Links ] Torres, F. J. L.; Mendoza, G. B.; Prassana, B. M.; Alvarado, G.; San Vicente, F. M. y Crossa, J. 2017. Grain yield and stability of white early maize hybrids in the highland valleys of Mexico. Crop Sci. 57(6):3002-3015. [ Links ] Villa, M. A.; Herrera, L.; Díaz, I. y Sozzi, A. 2010. Análisis de varianza para diseños en parcelas subdivididas con tratamientos terciarios aleatorios y una factorial en las subparcelas. Ciencia. 18(2):126-136. [ Links ] Walpole, R. E.; Myers, R. H. and Ye, K. 2012. Probability and Statistics for engineers and scientists. Prentice Hall-Pearson Education. Ninth Edition. USA. 791 p. [ Links ] Received: July 01, 2019; Accepted: September 01, 2019 Este es un artículo publicado en acceso abierto bajo una licencia Creative Commons
{}
# On compositions of symmetrically and elementarily indivisible structures

Model theory seminar, Friday, May 23, 2014, 12:30 pm, GC 6417

### Ben Gurion University of the Negev

A structure M in a first order language L is indivisible if for every colouring of its universe in two colours, there is a monochromatic substructure M’ of M such that M’ is isomorphic to M. Additionally, we say that M is symmetrically indivisible if M’ can be chosen to be symmetrically embedded in M (that is, every automorphism of M’ can be extended to an automorphism of M), and that M is elementarily indivisible if M’ can be chosen to be an elementary substructure. The notion of indivisibility is a long-studied subject. We will present these strengthenings of the notion, examples and some basic properties. In [1], several questions regarding these new notions arose: If M is symmetrically indivisible, are all of its reducts to a sublanguage symmetrically indivisible? Is an elementarily indivisible structure necessarily homogeneous? Does elementary indivisibility imply symmetric indivisibility? We will define a new “product” of structures, generalising the notions of lexicographic order and lexicographic product of graphs, which preserves indivisibility properties, and use it to answer the questions above.

[1] Assaf Hasson, Menachem Kojman and Alf Onshuus, On symmetric indivisibility of countable structures, Model Theoretic Methods in Finite Combinatorics, AMS, 2011, pp. 417–452.

Posted on April 30th, 2014
{}
# 0.7 Bits to symbols to signals (Page 3/8)

A Gray code has the property that the binary representation for each symbol differs from its neighbors by exactly one bit. A Gray code for the translation of binary into 4-PAM is $\begin{array}{ccc}01\hfill & \to \hfill & +3\hfill \\ 11\hfill & \to \hfill & +1\hfill \\ 10\hfill & \to \hfill & -1\hfill \\ 00\hfill & \to \hfill & -3\hfill \end{array}$ Mimic the code in naivecode.m to implement this alternative and plot the number of errors as a function of the noise variance v. Compare your answer with [link]. Which code is better?

## Symbols to signals

Even though the original message is translated into the desired alphabet, it is not yet ready for transmission: it must be turned into an analog waveform. In the binary case, a simple method is to use a rectangular pulse of duration $T$ seconds to represent $+1$, and the same rectangular pulse inverted (i.e., multiplied by $-1$) to represent the element $-1$. This is called a polar non-return-to-zero line code. The problem with such simple codes is that they use bandwidth inefficiently. Recall that the Fourier transform of the rectangular pulse in time is the $\text{sinc}\left(f\right)$ function in frequency [link], which dies away slowly as $f$ increases. Thus, simple codes like the non-return-to-zero are compact in time, but wide in frequency, limiting the number of simultaneous nonoverlapping users in a given spectral band. More generally, consider the four-level signal of [link].
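The comparison the exercise asks for can be sketched in Python rather than MATLAB's naivecode.m (an illustrative stand-in; the helper names and parameters are mine, not from the text). Both the Gray map above and a natural binary map send bit pairs to the 4-PAM alphabet; counting bit errors after a noisy channel shows why Gray coding helps, since a decision error to an adjacent level flips only one bit.

```python
import numpy as np

# Gray map (from the table above) and a natural binary map, bit pair -> 4-PAM level
GRAY = {(0, 1): 3, (1, 1): 1, (1, 0): -1, (0, 0): -3}
NATURAL = {(1, 1): 3, (1, 0): 1, (0, 1): -1, (0, 0): -3}

def bit_errors(mapping, n_pairs=50_000, v=1.0, seed=0):
    """Transmit random bit pairs as 4-PAM symbols over additive Gaussian
    noise of variance v, decide on the nearest level, count bit errors."""
    rng = np.random.default_rng(seed)
    inverse = {level: bits for bits, level in mapping.items()}
    bits = rng.integers(0, 2, size=(n_pairs, 2))
    symbols = np.array([mapping[tuple(b)] for b in bits])
    received = symbols + np.sqrt(v) * rng.standard_normal(n_pairs)
    levels = np.array([-3, -1, 1, 3])
    decided = levels[np.abs(received[:, None] - levels).argmin(axis=1)]
    decoded = np.array([inverse[s] for s in decided])
    return int((decoded != bits).sum())

for v in (0.5, 1.0, 2.0):
    print(v, bit_errors(GRAY, v=v), bit_errors(NATURAL, v=v))
```

For moderate noise the Gray mapping should produce noticeably fewer bit errors than the natural mapping, which is the point of the exercise.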
This can be turned into an analog signal for transmission by choosing a pulse shape $p\left(t\right)$ (that is not necessarily rectangular and not necessarily of duration $T$) and then transmitting

$$\begin{array}{ll}\hfill p\left(t-kT\right)& \text{if the } k\text{th symbol is } +1\hfill \\ \hfill -p\left(t-kT\right)& \text{if the } k\text{th symbol is } -1\hfill \\ \hfill 3p\left(t-kT\right)& \text{if the } k\text{th symbol is } +3\hfill \\ \hfill -3p\left(t-kT\right)& \text{if the } k\text{th symbol is } -3\hfill \end{array}$$

Thus, the sequence is translated into an analog waveform by initiating a scaled pulse at the symbol time $kT$, where the amplitude scaling is proportional to the associated symbol value. Ideally, the pulse would be chosen so that

• the value of the message at time $k$ does not interfere with the value of the message at other sample times (the pulse shape causes no intersymbol interference),
• the transmission makes efficient use of bandwidth, and
• the system is resilient to noise.

Unfortunately, these three requirements cannot all be optimized simultaneously, and so the design of the pulse shape must consider carefully the tradeoffs that are needed.
The focus in Chapter [link] is on how to design the pulse shape $p\left(t\right)$, and the consequences of that choice in terms of possible interference between adjacent symbols and in terms of the signal-to-noise properties of the transmission. For now, to see concretely how pulse shaping works, let's pick a simple nonrectangular shape and proceed without worrying about optimality. Let $p\left(t\right)$ be the symmetrical blip shape shown in the top part of [link], and defined in pulseshape.m by the hamming command. The text string in str is changed into a 4-level signal as in Example [link], and then the complete transmitted waveform is assembled by assigning an appropriately scaled pulse shape to each data value. The output appears in the bottom of [link]. Looking at this closely, observe that the first letter T is represented by the four values $-1,\ -1,\ -1,\ -3$, which corresponds exactly to the first four negative blips, three small and one large.
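The assembly step just described — one scaled pulse per symbol — can be sketched in Python. This is a rough analogue of pulseshape.m, with my own helper names and NumPy's Hamming window standing in for the blip in the text:

```python
import numpy as np

# natural 2-bit to 4-PAM mapping used in the text's example
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 0): 1, (1, 1): 3}

def text_to_symbols(text):
    """Turn a text string into a 4-level symbol sequence, two bits per symbol."""
    bits = [int(b) for ch in text for b in format(ord(ch), '08b')]
    return [LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pulse_shape(symbols, pulse, m):
    """Superpose one scaled copy of `pulse` per symbol, spaced m samples apart."""
    out = np.zeros(len(symbols) * m + len(pulse) - m)
    for k, s in enumerate(symbols):
        out[k * m:k * m + len(pulse)] += s * pulse
    return out

symbols = text_to_symbols("Transmit this text")
pulse = np.hamming(11)        # stand-in for the 'hamming' blip in pulseshape.m
waveform = pulse_shape(symbols, pulse, m=11)
print(symbols[:4])            # first letter 'T' -> [-1, -1, -1, -3]
```

The first four symbols reproduce the three small and one large negative blips noted above for the letter T.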
{}
# Numerator

## Numerator Meaning

About the numerator definition, it is the "top part of a fraction". Here's the simple numerator definition in math you're probably looking for: The numerator is the top part of the fraction, while the denominator is the bottom part of a fraction. For example, in the fraction 5/7, the number 5 is the numerator (top) and 7 is the denominator (bottom). Moreover, note that a fraction represents a part of a whole. That being said, a numerator represents the number of parts of that whole being considered, while the denominator exhibits the total number of parts created from the whole.

### Numerator and Denominator in Division

In the fraction 5/7, the whole value (say, a pizza) has been divided into 7 equal parts. If someone has 5/7 of the pizza, they have five of those seven equal parts.

### Numerator and Denominator Definition

Let's make the numerator and denominator meaning clear. The numerator represents how many divisions are being selected out of the total number of equal parts. On the other hand, the denominator represents the number of equal parts into which the whole thing has to be divided. This is better explained using an example. 7/9 is a fraction in which the denominator 9 represents that 9 equal divisions have to be made in a circle. Selecting 7 parts out of the 9 equal parts created from 1 circle can be represented as 7/9. The numerator and denominator diagram clearly shows seven equal parts taken out when the whole circle is divided into nine equal parts.

### Definition of Whole Number

The complete set of natural numbers together with '0' is called the whole numbers. That said, the whole numbers are the part of the number system that takes into account all the integers from zero (0) to infinity. Since these numbers lie on the number line, they are all known as real numbers.
With this, we can also conclude that all whole numbers are real numbers, but not all real numbers are whole numbers. Examples include: 0, 11, 25, 36, 999, 1200, etc. The whole numbers are the numbers without fractions: an assemblage of the positive integers and zero. The set is denoted by the symbol "W" and consists of {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...}. Zero as a whole number denotes nothing, or a null value.

### Properties of Whole Numbers

Following are the properties of whole numbers:

• Whole numbers are closed under the operations of addition and multiplication
• Multiplication and addition of whole numbers are associative
• Multiplication and addition of whole numbers are commutative
• They obey the distributive property of multiplication over addition
• The additive identity of whole numbers is 0
• The multiplicative identity of whole numbers is 1

### Solved Examples on Numerator

Now that you are well aware of what a numerator is and of the numerator and denominator definitions, let's do some practice examples.

Question: Is 15/9 a Fraction?

Solution: Yes, it is. It is known as an improper fraction.

Question: Convert 150.1400 into a Fraction.

Solution: Here, we use the method of converting decimals into fractions:

150.1400 = 1501400/10000 = 15014/100 = 7507/50

### Fun Facts

• The term "numerator" is derived from the Latin word numerātor, which means counter.
• If the numerator is 0, then the whole fraction becomes zero, irrespective of what the denominator is! For example, 0 ⁄ 50 is 0; 0 ⁄ 4 is 0, and so on.
• If the numerator is the same as the denominator of a fraction, then the value of the fraction becomes 1. For example, if the fraction is 70 ⁄ 70, then its value will be 1.
• A major misconception about numerators is that they are always smaller than the denominator. The numerator is not necessarily smaller than the denominator.
For example, 38/26 is a fraction wherein 38, the numerator, is greater than the denominator 26.
• Fractions whose numerator is greater than the denominator are referred to as improper fractions and are always greater than 1.

## FAQs on Numerator

1. What is the Difference Between a Numerator and a Denominator?

Answer: In a fraction, the top number is what we call the numerator while the bottom number is what we call the denominator. For example, 9/11 is a fraction. Here, 9 is the numerator whereas 11 is the denominator. The numerator describes the number of parts we have, and the denominator describes the total number of equal parts the object is divided into.

2. What are Fractions?

Answer: In Mathematics, a fraction represents a numerical value that defines parts of a whole. The whole can be a number, any particular value, or an object. In other words, it is also referred to as a section or portion of any quantity. It is represented using the '/' symbol, as in a/b. That is to say, if a number has to be divided into five parts, then it is denoted as x/5. Thus, the fraction x/5 describes 1/5th of the number x. For example, 6/9 is a fraction where the upper part is the numerator while the lower part is the denominator. The term "fraction" originated from Latin: "fractus" means "broken". In real life, when we cut a portion from a whole apple pie, say 2/5th of it, then that portion is a fraction of the pie.

3. How Many Types of Fractions are There?

Answer: Depending upon the properties of the numerator and denominator, fractions are classified into different types. They are:

• Proper fractions
• Improper fractions
• Like fractions
• Unlike fractions
• Mixed fractions
• Equivalent fractions

Remember that a numerator greater than the denominator makes for an improper fraction.
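The decimal-to-fraction conversion in the solved example can be double-checked with Python's `fractions` module (an illustrative aside, not part of the original lesson):

```python
from fractions import Fraction

# 150.1400 as an exact fraction, automatically reduced to lowest terms
f = Fraction("150.1400")
print(f)                           # 7507/50
print(f == Fraction(15014, 100))   # True

# a zero numerator makes the whole fraction zero
print(Fraction(0, 50))             # 0

# numerator equal to denominator gives 1
print(Fraction(70, 70))            # 1
```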
{}
# More Questions from Mathematical Analysis by Apostol

I was solving the exercise questions of the book "Mathematical Analysis - 2nd Edition" by Tom Apostol and I came across the questions mentioned below. While I was able to solve a few questions, the others I did not even get any hint of!

1. (a) By equating imaginary parts in DeMoivre's Formula prove that $$\sin {n\theta} = \sin^n\theta \left\lbrace \binom{n}{1} \cot^{n - 1}\theta - \binom{n}{3} \cot^{n - 3}\theta + \binom{n}{5} \cot^{n - 5}\theta - + \cdots \right\rbrace$$

(b) If $0 < \theta < \dfrac{\pi}{2}$, prove that $$\sin{\left( 2m + 1 \right)\theta} = \sin^{2m+1}\theta . P_m\left( \cot^2 \theta \right)$$ where $P_m$ is the polynomial of degree $m$ given by $$P_m(x) = \binom{2m + 1}{1} x^m - \binom{2m + 1}{3} x^{m - 1} + \binom{2m + 1}{5} x^{m - 2} - + \cdots$$ Use this to show that $P_m$ has zeros at $m$ distinct points $x_k = \cot^2 \left( \dfrac{k\pi}{2m + 1} \right)$ for $k = 1, 2, \dots, m$.

(c) Show that the sum of zeros of $P_m$ is given by $$\sum\limits_{k = 1}^{m} \cot^2 \dfrac{k\pi}{2m + 1} = \dfrac{m \left( 2m - 1 \right)}{3}$$ and that the sum of their squares is given by $$\sum\limits_{k = 1}^{m} \cot^4 \dfrac{k\pi}{2m + 1} = \dfrac{m\left( 2m - 1 \right) \left( 4m^2 + 10m - 9 \right)}{45}$$

2. Prove that $z^n - 1 = \prod\limits_{k = 1}^{n} \left( z - e^{\dfrac{2ki\pi}{n}} \right)$ for all complex $z$. Use this to derive the formula $$\prod\limits_{k = 1}^{n - 1} \sin \dfrac{k\pi}{n} = \dfrac{n}{2^{n - 1}}$$

As far as the solutions are concerned, I am through with the first part of the first question and even half of the second part. But proving the zeros and their sum (and the sum of their squares) in parts (b) and (c) is getting really difficult. I am not getting any sort of hint as to how to prove it further. And for the second question, I could do the first half since it was essentially finding the $n$ roots of unity.
But for the second part, I have nearly proved everything but what was asked. Many times I came to the conclusion that $$\prod\limits_{k = 1}^{n} \sin \dfrac{k\pi}{n} = 0$$ which is obvious because at $k = n$, we have a term of $\sin \pi$ which is equal to $0$. I am not getting how to remove that last term from the product using the result we just proved above! Help will be appreciated!

• For the future, try to avoid asking multiple questions in one post. – rtybase Feb 13 '18 at 22:46
• @rtybase Surely, I will take care of this from next time! – Aniruddha Deshmukh Feb 14 '18 at 6:09

With the 2nd question, 2nd part, you are asked to show that $$z^n - 1 = \prod\limits_{k = 1}^{n} \left( z - e^{\dfrac{2ki\pi}{n}} \right) \Rightarrow \prod\limits_{k = 1}^{n - 1} \sin \dfrac{k\pi}{n} = \dfrac{n}{2^{n - 1}}$$ Note that when $k=n$ $$e^{\frac{2ki\pi}{n}}=e^{2i\pi}=1$$ also $$z^n-1=(z-1)(z^{n-1}+z^{n-2}+z^{n-3}+...+z^2+z+1)$$ altogether $$\color{red}{(z-1)}(z^{n-1}+z^{n-2}+z^{n-3}+...+z^2+z+1)=\color{red}{(z-1)}\prod\limits_{k = 1}^{n-1} \left( z - e^{\frac{2ki\pi}{n}} \right)$$ which is $$z^{n-1}+z^{n-2}+z^{n-3}+...+z^2+z+1=\prod\limits_{k = 1}^{n-1} \left( z - e^{\frac{2ki\pi}{n}} \right)$$ and substituting $z=1$ $$n=\prod\limits_{k = 1}^{n-1} \left( 1 - e^{\frac{2ki\pi}{n}} \right)= \prod\limits_{k = 1}^{n-1} e^{\frac{ki\pi}{n}} \left( e^{-\frac{ki\pi}{n}} - e^{\frac{ki\pi}{n}} \right)=\\ (2i)^{n-1} (-1)^{n-1} \cdot \prod\limits_{k = 1}^{n-1} e^{\frac{ki\pi}{n}} \left(\frac{ e^{\frac{ki\pi}{n}} - e^{-\frac{ki\pi}{n}}}{2i} \right)=\\ 2^{n-1} (-i)^{n-1} \cdot \prod\limits_{k = 1}^{n-1} e^{\frac{ki\pi}{n}} \sin{\left(\frac{k\pi}{n}\right)}=\\ 2^{n-1} (-i)^{n-1} e^{\sum\limits_{k=1}^{n-1}\frac{ki\pi}{n}} \cdot \prod\limits_{k = 1}^{n-1} \sin{\left(\frac{k\pi}{n}\right)}=\\ 2^{n-1} (-i)^{n-1} e^{\frac{i\pi}{n}\frac{n(n-1)}{2}} \cdot \prod\limits_{k = 1}^{n-1} \sin{\left(\frac{k\pi}{n}\right)}=\\ 2^{n-1} (-i)^{n-1} i^{n-1} \cdot \prod\limits_{k = 1}^{n-1}
\sin{\left(\frac{k\pi}{n}\right)}=2^{n-1} \cdot \prod\limits_{k = 1}^{n-1} \sin{\left(\frac{k\pi}{n}\right)}$$ • Any hints for 1st question? – Aniruddha Deshmukh Feb 14 '18 at 6:10
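Both the product formula from question 2 and the cotangent sums from question 1(c) can be spot-checked numerically; here is a small NumPy sketch (my own verification aid, not part of the original thread):

```python
import numpy as np

def sin_product(n):
    """Compute prod_{k=1}^{n-1} sin(k*pi/n), expected to equal n / 2^(n-1)."""
    k = np.arange(1, n)
    return np.prod(np.sin(k * np.pi / n))

def cot2_sum(m, power):
    """Compute sum_{k=1}^{m} cot^(2*power)(k*pi/(2m+1))."""
    k = np.arange(1, m + 1)
    return np.sum(1.0 / np.tan(k * np.pi / (2 * m + 1)) ** (2 * power))

for n in (2, 5, 12):
    assert np.isclose(sin_product(n), n / 2 ** (n - 1))

m = 7
assert np.isclose(cot2_sum(m, 1), m * (2 * m - 1) / 3)
assert np.isclose(cot2_sum(m, 2), m * (2 * m - 1) * (4 * m ** 2 + 10 * m - 9) / 45)
print("all identities verified")
```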
{}
1. ## A divisible subset

I got the following from a book:

Let $\displaystyle n \in \mathbb{N}$. Show that for every set S of $\displaystyle n$ integers, there is a nonempty subset $\displaystyle \color{blue}T$ of $\displaystyle \color{blue}S$ such that $\displaystyle \color{blue}n$ divides the sum of the elements of $\displaystyle \color{blue}T$.

$\displaystyle \text{Proof:}$ Let $\displaystyle S_k=\{a_1,a_2,...,a_k\}$ for each integer $\displaystyle k$ with $\displaystyle 1\leq k \leq n$. For each integer $\displaystyle k\ (1\leq k \leq n)$, $\displaystyle \Sigma_{i=1}^k a_i\equiv r \: \text{(mod n)}$ for some integer $\displaystyle r$, where $\displaystyle 0 \leq r \leq n-1$. We consider two cases:

Case 1: $\displaystyle \Sigma_{i=1}^k a_i \equiv 0\: \text{(mod n)}$ for some integer $\displaystyle k$. Then $\displaystyle n|\Sigma_{i=1}^k a_i$, that is, $\displaystyle n$ divides the sum of the elements of $\displaystyle S_k$.

Case 2: $\displaystyle \Sigma_{i=1}^k a_i \not\equiv 0\: \text{(mod n)}$ for all integers $\displaystyle k\ (1\leq k \leq n)$. Then the $\displaystyle n$ partial sums fall into only $\displaystyle n-1$ nonzero residue classes mod $\displaystyle n$, so by the pigeonhole principle there exist integers $\displaystyle s$ and $\displaystyle t$ with $\displaystyle 1\leq s<t\leq n$ such that $\displaystyle \Sigma_{i=1}^s a_i \equiv r\: \text{(mod n)}$ and $\displaystyle \Sigma_{i=1}^t a_i \equiv r\: \text{(mod n)}$ for some integer $\displaystyle r$ with $\displaystyle 1\leq r \leq n-1$. Therefore, $\displaystyle \Sigma_{i=1}^s a_i \equiv \Sigma_{i=1}^t a_i \: \text{(mod n)}$ and so $\displaystyle n|(\Sigma_{i=1}^t a_i - \Sigma_{i=1}^s a_i )$. Hence $\displaystyle n|\Sigma_{i=s+1}^t a_i$, and $\displaystyle T=\{a_{s+1},...,a_t\}$ is the required subset.

Remark:

1. I have tested sets of $\displaystyle n$ elements on the computer and found that at least one of the $\displaystyle 2^n-1$ nonempty subsets has sum divisible by $\displaystyle n$.

2.
In regard to case 2 in the proof above, I can't see how $\displaystyle \Sigma_{i=1}^k a_i \not\equiv 0\: \text{(mod n)}$ implies the existence of $\displaystyle \Sigma_{i=1}^s a_i \equiv r\: \text{(mod n)}$ and $\displaystyle \Sigma_{i=1}^t a_i \equiv r\: \text{(mod n)}$. It might be true by chance.

Question: Does anyone here have a better proof than this?

2. Originally Posted by novice

I got the following from a book:

In regard to case 2 in the proof above, I can't see how $\displaystyle \Sigma_{i=1}^k a_i \not\equiv 0\: \text{(mod n)}$ implies the existence of $\displaystyle \Sigma_{i=1}^s a_i \equiv r\: \text{(mod n)}$ and $\displaystyle \Sigma_{i=1}^t a_i \equiv r\: \text{(mod n)}$. It might be true by chance.

Question: Does anyone here have a better proof than this?

What do you mean, true by chance? There are $\displaystyle n$ partial sums $\displaystyle \Sigma_{i=1}^k a_i$ ($\displaystyle 1\leq k\leq n$) but only $\displaystyle n-1$ possible values for $\displaystyle \Sigma_{i=1}^k a_i \bmod n$, since the value $\displaystyle 0$ is never taken; by the pigeonhole principle, two of the values must coincide. I don't think there is a better proof. I believe that this theorem and its proof are due to Erdős, and, in fact, I think that it appears in "Proofs from the Book": meaning that a better proof is incredibly unlikely!
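The pigeonhole argument in the proof is constructive, and a short Python sketch (my own illustration, not from the book) makes it concrete: scan the prefix sums, and the first repeated residue mod n delimits a block whose sum is divisible by n.

```python
from itertools import accumulate
import random

def divisible_subset(a):
    """Return a contiguous block of `a` whose sum is divisible by n = len(a),
    following the prefix-sum pigeonhole argument from the proof."""
    n = len(a)
    seen = {0: 0}                  # residue -> index of the prefix sum
    for t, p in enumerate(accumulate(a), start=1):
        r = p % n
        if r in seen:              # two prefix sums share a residue mod n
            s = seen[r]
            return a[s:t]          # sum of a[s+1..t] is divisible by n
        seen[r] = t
    # unreachable: n+1 prefix sums (including the empty one) but only n residues

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 12)
    a = [random.randint(-100, 100) for _ in range(n)]
    block = divisible_subset(a)
    assert block and sum(block) % n == 0
print("pigeonhole argument verified on 1000 random sets")
```

Seeding `seen` with the empty prefix (residue 0 at index 0) handles case 1 of the proof automatically.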
Is partial derivative a vector or dual vector? + 7 like - 0 dislike 3325 views

The textbook (Introduction to the Classical Theory of Particles and Fields, by Boris Kosyakov) defines a hypersurface by $$F(x)~=~c,$$ where $F\in C^\infty[\mathbb M_4,\mathbb R]$. Differentiating gives $$dF~=~(\partial_\mu F)dx^\mu~=~0.$$ The text then says $dx^\mu$ is a covector and $\partial_\mu F$ a vector. I learnt from another book that $dx^\mu$ are 4 dual vectors (in Minkowski space), where $\mu$ indexes the dual vectors themselves, not the components of a single dual vector. So I think $\partial_\mu F$ should also be 4 vectors, each being the directional derivative along a coordinate axis. But this book later states that $(\partial_\mu F)dx^\mu=0$ describes a hyperplane $\Sigma$ with normal $\partial_\mu F$ spanned by vectors $dx^\mu$, and calls $\Sigma$ a tangent plane (pages 33-34). This time, it seems to treat $\partial_\mu F$ as a single vector and $dx^\mu$ as vectors. But I think $dx^\mu$ should span a cotangent space. I need some help to clarify these things.

[edit by Ben Crowell] The following appears to be the text the question refers to, from Appendix A (which Amazon let me see through its peephole):

Elie Cartan proposed to use differential coordinates $dx^i$ as a convenient basis of 1-forms. The differentials $dx^i$ transform like covectors [...] Furthermore, when used in the directional derivative $dx^i \partial F/\partial x^i$, $dx^i$ may be viewed as a linear functional which takes real values on vectors $\partial F/\partial x^i$. The line elements $dx^i$ are called [...] 1-forms.
This post imported from StackExchange Physics at 2014-11-11 14:50 (UTC), posted by SE-user elflyao

Related: physics.stackexchange.com/q/79013/2451 This post imported from StackExchange Physics at 2014-11-02 19:45 (UTC), posted by SE-user Qmechanic

After writing an answer and then succeeding in getting a look at what Kosyakov wrote, I'm just as confused as elflyao. I would be interested in hearing from others who might have broader experience or be able to explain whether Kosyakov has an unusual point of view. This post imported from StackExchange Physics at 2014-11-02 19:45 (UTC), posted by SE-user Ben Crowell

I compared Kosyakov's definitions with Wald, General Relativity, pp. 15 and 20ff. There are some inconsistencies. Wald defines $\mathscr{F}$ as the set of smooth scalar fields on a manifold $M$, and defines a tangent vector $v\in V_p$ as a map $v:\mathscr{F}\rightarrow \mathbb{R}$ that is linear and obeys the Leibniz rule at $p$, so that it can be interpreted as a directional derivative at $p$. This makes a partial derivative a vector as a matter of definition, and it also means that for $F\in\mathscr{F}$, $\partial F/\partial x^i$ is a real number, not a vector as Kosyakov describes it.
This post imported from StackExchange Physics at 2014-11-11 14:50 (UTC), posted by SE-user Ben Crowell + 7 like - 0 dislike Below follows a handful of excerpts from the book Introduction to the Classical Theory of Particles and Fields (2007) by B. Kosyakov. Controversial/misleading/wrong statements are marked in $\color{Red}{\rm red}$. We agree with OP that the statements marked in $\color{Red}{\rm red}$ are opposite standard terminology/conventions. Some (not all) correct statements are marked in $\color{Green}{\rm green}$. 1.2 Affine and Metric Structures [...] Let ${\bf e}_1$, $\ldots$, ${\bf e}_n$ and ${\bf e}^{\prime}_1$, $\ldots$, ${\bf e}^{\prime}_n$ be two arbitrary bases. Each $\color{Green}{\rm vector}$ of the latter basis can be expanded in terms of $\color{Green}{\rm vectors}$ of the former basis: $${\bf e}^{\prime}_i ~=~ {\bf e}_j~L^j{}_i .\tag{1.37}$$ [...] Thus, linear functionals form the dual vector space $V^{\prime}$. If $V$ is $n$-dimensional, so is $V^{\prime}$. Indeed, let ${\bf e}_1$, $\ldots$, ${\bf e}_n$ be a basis in $V$. Then any $\omega\in V^{\prime}$ is specified by $n$ real numbers $\omega_1=\omega({\bf e}_1)$, $\ldots$, $\omega_n=\omega({\bf e}_n)$, and the value of $\omega$ on ${\bf a} = a^i {\bf e}_i$ is given by $$\omega({\bf a}) ~=~ \omega_i a^i .\tag{1.52}$$ We see that $V^{\prime}$ is isomorphic to $V$. That is why we sometimes refer to linear functionals as $\color{Green}{covectors}$. A closer look at (1.52) shows that a $\color{Green}{\rm vector}$ ${\bf a}$ can be regarded as a linear functional on $V^{\prime}$. One can show (Problem 1.2.3) that changing the basis (1.37) implies the transformation of $\omega_i$ according to the same law: $$\omega^{\prime}_i ~=~\omega_j ~L^j{}_i .\tag{1.53}$$ We will usually suppress the argument of $\omega({\bf a})$, and identify $\omega$ with its components $\omega_i$. [...] 1.3 Vectors, Tensors, and $n$-Forms [...] A simple generalization of vectors and covectors are tensors. 
Algebraically, a tensor $T$ of rank $\color{Green}{(m,n)}$ is a multilinear mapping $$\color{Green}{T: \underbrace{V^{\prime} \times\ldots\times V^{\prime}}_{m\text{ times}} \times \underbrace{V \times\ldots\times V}_{n\text{ times}} \to \mathbb{R}}. \tag{1.112}$$ We have already encountered examples of tensors in the previous section: a scalar is a rank $(0,0)$ tensor, a $\color{Green}{\rm vector}$ is a rank $\color{Green}{(1,0)}$ tensor, a $\color{Green}{\rm covector}$ is a rank $\color{Green}{(0,1)}$ tensor, the metric $g_{ij}$ is a rank $(0,2)$, while $g^{ij}$ is a rank $(2,0)$ tensor, and the Kronecker delta $\delta^i{}_j$ is a rank $(1,1)$ tensor. Just as $\color{Green}{\rm four~vectors}$ can be regarded as objects which transform according to the law $$a^{\prime \mu} ~=~ \color{Green}{\Lambda^{\mu}{}_{\nu}} ~a^{\nu} ,\tag{1.113}$$ where $\Lambda^{\mu}{}_{\nu}$ is the Lorentz transformation matrix relating the two frames of reference, so tensors of rank $(m,n)$ can be described in terms of Lorentz group representations by the requirement that their transformation law be $$T^{\prime\mu_1\cdots \mu_m}{}_{\nu_1\cdots \nu_n} ~=~\color{Green}{\Lambda^{\mu_1}{}_{\alpha_1}\ldots\Lambda^{\mu_m}{}_{\alpha_m}}~ T^{\alpha_1\cdots \alpha_m}{}_{\beta_1\cdots \beta_n}~ \color{Red}{\Lambda^{\beta_1}{}_{\nu_1}\ldots\Lambda^{\beta_n}{}_{\nu_n}}. \tag{1.114}$$ [...]The differential operator $$\partial_{\mu}~=~\frac{\partial}{\partial x^{\mu}} \tag{1.140}$$ transforms like a $\color{Green}{\rm covariant~vector}$. To see this, we use the chain rule for differentiation: $$\frac{\partial}{\partial x^{\mu}}~=~\frac{\partial x^{\prime \nu}}{\partial x^{\mu}} \frac{\partial}{\partial x^{\prime \nu}}, \tag{1.141}$$ and note that, for linear coordinate transformations $x^{\prime\mu} = \color{Green}{\Lambda^{\mu}{}_{\nu}}~x^{\nu} + a^{\mu}$ $$\frac{\partial x^{\prime \mu}}{\partial x^{\nu}} ~=~\color{Green}{\Lambda^{\mu}{}_{\nu}}. 
\tag{1.142}$$ We will always use the shorthand notation $\partial_{\mu}$, and treat this differential operator as an ordinary $\color{Green}{\rm vector}$. [...] 1.4 Lines and Surfaces [...] We define a hypersurface $M_{n−1}$ by $$F(x) ~=~ C , \tag{1.176}$$ where $F$ is an arbitrary smooth function $\mathbb{M}_4 \to \mathbb{R}$. Differentiating (1.176) gives $$(\partial_{\mu}F) dx^{\mu} ~=~ 0 . \tag{1.177}$$ One may view $dx^{\mu}$ as a $\color{Green}{\rm covector}$, and $\partial_{\mu}F$ as a $\color{Red}{\rm vector}$. Indeed, $dx^{\mu}$ transforms like a $\color{Red}{\rm covector}$ under linear coordinate transformations $x^{\prime\mu} = \color{Green}{\Lambda^{\mu}{}_{\nu}}~x^{\nu} + a^{\mu}$, $$dx^{\prime\mu} ~=~ \frac{\partial x^{\prime\mu}}{\partial x^{\nu}}dx^{\nu} ~=~\color{Green}{\Lambda^{\mu}{}_{\nu}}~dx^{\nu}, \tag{1.178}$$ and $\partial_{\mu}F$ transforms like a $\color{Red}{\rm vector}$: $$\frac{\partial F}{\partial x^{\prime\mu}} ~=~\frac{\partial F}{\partial x^{\nu}} \frac{\partial x^{\nu}}{\partial x^{\prime\mu}} ~=~\frac{\partial F}{\partial x^{\nu}}\color{Red}{\Lambda^{\nu}{}_{\mu}}. \tag{1.179}$$ In Minkowski space, vectors and covectors can be converted to each other according to (1.121). For this reason, we will often regard $dx^{\mu}$ as vectors. [...] A. Differential Forms [...] Elie Cartan proposed to use differential coordinates $dx^i$ as a convenient basis of $\color{Green}{\rm one~forms}$. The differentials $dx^i$ transform like $\color{Red}{\rm covectors}$ under a local coordinate change, $$dx^{\prime j}~=~ \frac{\partial x^{\prime j}}{\partial x^i}dx^i. \tag{A.1}$$ [If the coordinate change is specialized to Euclidean transformations $x^{\prime j} =\color{Red}{L^j{}_i} ~x^i + c^j$, then $\partial x^{\prime j} /\partial x^i$ reduces to $\color{Red}{L^j{}_i}$, an orthogonal matrix with constant entries, and (A.1) $\color{Red}{\rm becomes}$ (1.53), the transformation law for $\color{Green}{\rm covectors}$.] [...] Notes: 1. The corrected eq. 
(1.114) reads $$T^{\prime\mu_1\cdots \mu_m}{}_{\nu_1\cdots \nu_n} ~=~\Lambda^{\mu_1}{}_{\alpha_1}\ldots\Lambda^{\mu_m}{}_{\alpha_m}~ T^{\alpha_1\cdots \alpha_m}{}_{\beta_1\cdots \beta_n}~ (\Lambda^{-1})^{\beta_1}{}_{\nu_1}\ldots(\Lambda^{-1})^{\beta_n}{}_{\nu_n}. \tag{1.114}$$

2. The corrected eq. (1.179) reads $$\frac{\partial F}{\partial x^{\prime\mu}} ~=~\frac{\partial F}{\partial x^{\nu}} \frac{\partial x^{\nu}}{\partial x^{\prime\mu}} ~=~\frac{\partial F}{\partial x^{\nu}}(\Lambda^{-1})^{\nu}{}_{\mu}. \tag{1.179}$$

3. To explain why (A.1) does not become (1.53), let ${\bf e}^1$, $\ldots$, ${\bf e}^n$, be a (dual) basis in $V^{\prime}$. In light of (1.53), in order for a covector $\omega=\omega_i{\bf e}^i\in V^{\prime}$ to be independent of the choice of basis, the dual basis must transform as $${\bf e}^{\prime i} ~=~ M^i{}_j ~{\bf e}^j, \tag{*}$$ where $$M~=~L^{-1}. \tag{1.45}$$ Identifying the dual bases ${\bf e}^i\leftrightarrow dx^i$, the above eq. (*) becomes (A.1). Moreover, in the sentence below eq. (A.1), the $L$ matrix should be replaced with the $M$ matrix in two places.

4. Finally, let us answer OP's title question: A partial derivative $\partial_{\mu}F$ (of a scalar function $F$) is a component of a cotangent vector $dF=(\partial_\mu F)dx^\mu$, while the un-applied partial derivative $\partial_{\mu}$ is a local basis element of a tangent vector. Both $\partial_{\mu}F$ and $\partial_{\mu}$ transform as covectors.

This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user Qmechanic answered Nov 3, 2014 by (3,110 points)

Well, point 4 is nicely put, but I don't like the red/green tags so much, as this is more convention than anything else (of course, any book errata excepted). This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user Nikos M.

@NikosM.: Qmechanic's points 1 and 2 are not matters of convention; Kosyakov has made mistakes in those spots.
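The corrected transformation law (1.179) can be checked numerically. The sketch below is my own illustration (not from the book): it takes an invertible linear coordinate change $x' = \Lambda x$ (a stand-in for a Lorentz matrix, since only invertibility matters here), computes the partial derivatives of a scalar field $F$ in both coordinate systems by finite differences, and verifies that the primed components equal the unprimed ones contracted with $\Lambda^{-1}$:

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference gradient of a scalar function f at the point x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# an arbitrary smooth scalar field on M_4
F = lambda x: x[0]**2 + 3*x[1]*x[2] - x[3]

# an invertible linear coordinate change x' = Lam @ x
Lam = np.array([[2., 1., 0., 0.],
                [0., 1., 0., 0.],
                [0., 0., 1., 3.],
                [0., 0., 0., 1.]])
Lam_inv = np.linalg.inv(Lam)

Fp = lambda xp: F(Lam_inv @ xp)    # the same field expressed in primed coordinates

x = np.array([1., 2., -1., 0.5])
xp = Lam @ x

lhs = grad(Fp, xp)                 # dF/dx'^mu, computed directly
rhs = grad(F, x) @ Lam_inv         # (dF/dx^nu) (Lam^{-1})^nu_mu, per eq. (1.179)
print(np.allclose(lhs, rhs, atol=1e-5))   # True
```

So the components $\partial_\mu F$ indeed transform with $\Lambda^{-1}$, i.e. covariantly, in agreement with note 4.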
This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user Ben Crowell + 6 like - 0 dislike I took a quick look at pages 59 and 60 of "Gravitation", section 2.6 "Gradients and Directional Derivatives", to see if there's anything there we can use to clarify this issue. In this section, the gradient of $f$ is $\mathbf df$, the directional derivative along the vector $\mathbf v$ is $\partial_{\mathbf v}f$ and the following relationship holds: $$\partial_{\mathbf v}f = \langle\mathbf df, \mathbf v \rangle$$ Then assuming a set of basis forms $\mathbf dx^{\mu}$ and dual basis vectors $\mathbf e_{\mu}$ we have $$\partial_{\mu} f \equiv \partial_{\mathbf e_{\mu}}f = \langle\mathbf df, \mathbf e_{\mu} \rangle = \frac{\partial f}{\partial x^{\mu}}$$ So, according to MTW in this section, $\partial_{\mu} f$ are the components of $\mathbf df$ on this basis. Thus, it must be that, per the 2nd equation in the question, $$\mathbf df = (\partial_{\mu} f) \mathbf dx^{\mu}$$ which is just the expansion of the form $\mathbf df$ on the basis forms $\mathbf dx^{\mu}$ As to why Kosyakov would identify this as a contraction of a form and a vector I haven't a clue. This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user Alfred Centauri answered Nov 2, 2014 by (110 points) + 4 like - 0 dislike I believe this is just imprecise use of language by the author - there is nothing mysterious happening, it is just not well stated: As stated in the question, for a hypersurface $\Sigma$ defined by $$F(x) = c \in \mathbb{R}$$ we find that $$\mathrm{d}F = 0$$ must hold on $\Sigma$. 
This is crucial - it means that the 1-form $\mathrm{d}F$ acting upon tangent vectors of $\Sigma$ must vanish identically: $$\forall v \in T_x\Sigma : \mathrm{d}F(v) = (\partial_\mu F)v^\mu = 0$$ But we can recognize $(\partial_\mu F)v^\mu$ as the scalar product of the vectors $v$ and $g(\mathrm{d}F,\cdot\,)$, the latter being the usual dual of $\mathrm{d}F$ with components $\partial^\mu F$. Since $T_x\Sigma \subset T_x\mathbb{M}^4$ naturally, this means that $\mathrm{d}F = 0$ indeed sweeps out a hypersurface in the tangent space that has, in sloppy diction, the gradient as its normal (although it is really its dual).

This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user ACuriousMind answered Nov 2, 2014 by (910 points)

I think the crucial point is in the sentence "But we can recognize...," and it's here that I don't follow you. Since the gradient $dF$ is a 1-form, its dual $\partial^\mu F$ is a vector. If so, then we can't take the scalar product of the vector $\partial^\mu F$ with a vector $v^\mu$. I don't see any reason for taking duals anywhere at all. Even if we didn't have a metric, and therefore couldn't take duals, we could simply have $(\partial_\mu F)v^\mu$, the scalar product of a covector with a vector. This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user Ben Crowell

@BenCrowell: Yes, indeed. That will still define a hypersurface in the tangent space, but we will not have a "normal vector" to describe it. I agree that there is no need to take duals - but I think Kosyakov implicitly does exactly that when he talks of $\partial_\mu F$ being a normal vector. Your nomenclature seems a bit unorthodox to me though - a scalar product is between two vectors or two covectors (and usually induced by the metric) - applying a covector to a vector is not a scalar product.
This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user ACuriousMind

OK, by "scalar product" I simply meant a product that transforms as a scalar. So taking "scalar product" in your answer to mean $g(\cdot,\cdot)$, I don't understand why one would describe $(\partial_\mu F)v^\mu$ as a scalar product of $v^\mu$ with $\partial^\mu F$. That might indicate that we take the gradient, raise its index, lower its index, and then contract. I don't see the point of raising an index and then immediately lowering it again. Or we could raise the gradient's index, lower $v^\mu$'s index, and contract. Again, why raise or lower at all? This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user Ben Crowell

@BenCrowell: Because it yields the geometric interpretation of $\partial^\mu F$ as the normal vector to the hyperplane of tangent vectors of $\Sigma$. I don't think there's anything deeper than that here. This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user ACuriousMind

+ 3 like - 0 dislike

On a manifold without extra structure, $dF$ makes sense only as a covector (= 1-form = a section of the cotangent bundle). For example, like any 1-form, one can integrate $dF$ along a path (but unlike for a general 1-form, the integral only depends on the endpoints). To interpret $dF$ as a vector, one needs a metric that can be used to identify vectors and covectors. You can see that $dF$ must be a covector from your formula, since as a linear combination of the covectors $dx^\mu$, $dF$ is itself a covector - with components $dF_\mu$. The term $dF_\mu(x)$ itself is just a number, and $dF_\mu$ a noncovariant function. Physicists using the Einstein summation convention refer to a tensor by its indices; then $dF_\mu$ is what the mathematicians call $dF$ - a covector.
(But the physicists' notation gets confusing when it is applied to $dx^\mu$, which looks like a vector according to its indices, but is in fact, for each fixed $\mu$, a covector.)

answered Nov 9, 2014 by (15,488 points)

+ 3 like - 1 dislike

Hmm, I guess the simplest way to think about it is the following. If you have a differentiable manifold $X$ (which for the sake of argument let us suppose enjoys all the nice properties we like), then it is possible to define the tangent and cotangent space at each point $p$, such that the partial derivatives $\partial_i \Big|_{p \in X}$ form an orthonormal basis in the tangent space $T_pX$ and the 1-forms $dx^{i}$ are elements of the cotangent space $T_p^*X$. Both of these quantities form a real vector space and they are isomorphic to each other. By the way, your first formula gives zero because $F$ is constant; $F(x)=c$ defines a hypersurface. In general we have $dF = \sum_i \partial_i F \, dx^i$. You can understand the partial derivative as a vector if you really think of what it is doing upon acting on a scalar. It gives it a direction! Draw a graph of a smooth scalar function and take the tangent at a point. Intuitively you have drawn a vector (the directional derivative, as you mention) from the point of contact in some direction. Finally, if your space is Minkowski, then you have $i,j=0,1,2,3$. Do not get confused by the notation. Each element $dx^{i}$ denotes a cotangent vector. For example, in 4d Minkowski you have $dx^{0}, dx^{1}, dx^{2}, dx^{3}$. You can find some nice explanations in Nakahara's book Geometry, Topology and Physics and also in almost any General Relativity textbook. Also check out this. Hope it helps.

answered Nov 8, 2014 by (3,625 points)

$dF_\mu$ cannot have the meaning of a vector unless one has a (pseudo-)Riemannian manifold! See my answer. This should be included in "all the nice properties". To the best of my knowledge we, physicists, almost always discuss (pseudo-)Riemannian manifolds.
To a physicist, I think, my answer is quite right. I guess a mathematician would not like it though. At the end of the day, it seems we are talking about simple GR, nor did I say anywhere that $dF_{\mu}$ is a vector. I said it for the partial derivative. As for the metric, well, M4 is clearly a metric space, so I am not sure what the target of your comment is, although your answer is very nice! But the question itself distinguishes between vectors and covectors, hence assumes no metric. Otherwise there wouldn't be any confusion to begin with. In GR we often call the partial derivatives vectors and the one-forms co-vectors, in the sense that they belong to the tangent and cotangent bundles at the specified point. Furthermore, the whole question assumes Minkowski 4, at least this is what I got. I completely agree with your answer, and specifically that sometimes we mix notation, but I think that, at least intuitively, there is no error in my answer )

+ 1 like - 0 dislike

I believe you are confused because you are mixing up related but slightly different quantities. Yes, a partial derivative is a vector and yes, a vector is an object with an upper index. The above statement may seem contradictory, but in fact it is not, for the following reason. A vector is an abstract quantity that is an element of a "vector space". In this case, the vector space being discussed is the tangent space. On a vector space, one can choose a basis, any basis. Once a basis has been chosen, any other vector in the vector space can be described by simply prescribing a set of numbers. For instance in ${\mathbb R}^2$ (rather, the corresponding affine space), one can choose a basis of vectors as ${\hat x}$ and ${\hat y}$. Once this has been done, any other vector can be described simply by 2 numbers. For instance, the numbers $(1,2)$ really imply that we are talking about the vector ${\hat x} + 2 {\hat y}$. How does the discussion above apply here?
On the tangent space, a natural choice of basis is the set of partial derivatives $\partial_\mu = \{ \partial_0 , \partial_1 , \partial_2 , \partial_3 \}$ (assuming we are in $M_4$). Each partial derivative is in itself a vector. Now, once this basis has been chosen, every other vector can be described by a set of 4 numbers $v^\mu = (v^0 , v^1 , v^2 , v^3)$ which corresponds to the vector $v^\mu \partial_\mu$. It is in this sense that the statement above is true. Often, since the basis of partial derivatives is obvious, one simply describes a vector as an object with an upper index $v^\mu$. Next, let us discuss co-vectors (quantities with a lower index). These are elements of the dual vector space (which is the space of linear functions on the vector space) of the tangent space. Given the partial derivative basis on the tangent space, one then has a natural basis in the cotangent space denoted by $dx^\mu = \{ dx^0 , dx^1 , dx^2 , dx^3 \}$. Note that each differential itself is a covector. This natural basis is defined by the relation $dx^\mu (\partial_\nu ) = \delta^\mu_\nu$. As before, once this natural basis has been chosen, any element of the cotangent space can be described by 4 numbers, namely $v_\mu = \{ v_0, v_1 , v_2 , v_3 \}$, which corresponds to the covector $v_\mu dx^\mu$. In summary, $\partial_\mu$ for each $\mu$ is itself a 4-dimensional vector, whereas the four numbers $v^\mu$ are the components of a single vector. Similarly, $dx^\mu$ for each $\mu$ is itself a 4-dimensional covector, whereas the four numbers $v_\mu$ are the components of a single covector.

PS 1 - Sometimes people like to use bases other than $\partial_\mu$ and $dx^\mu$ on the tangent and cotangent spaces respectively. These are known as non-coordinate bases.
PS 2 - Just to be clear, $\partial_\mu$ is a vector, but $\partial_\mu F$ is a function This post imported from StackExchange Physics at 2014-11-11 14:51 (UTC), posted by SE-user Prahar answered Nov 3, 2014 by (540 points)
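The duality relation $dx^\mu(\partial_\nu)=\delta^\mu_\nu$ and the expansion $dF = (\partial_\mu F)\,dx^\mu$ described in this answer can be mimicked in a few lines of code. This is my own illustrative sketch (not part of the original answer): tangent vectors act as directional derivatives on scalar fields, and the coordinate 1-forms simply extract components:

```python
import math

DIM = 4

def directional_derivative(F, x, v, h=1e-6):
    """The tangent vector with components v acting on the scalar field F at x."""
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return (F(xp) - F(xm)) / (2 * h)

def dx(mu):
    """The coordinate 1-form dx^mu: a linear functional returning the mu-th component."""
    return lambda v: v[mu]

def basis_vector(nu):
    """Components of the coordinate basis vector partial_nu."""
    return [1.0 if i == nu else 0.0 for i in range(DIM)]

# duality: dx^mu(partial_nu) = delta^mu_nu
delta = [[dx(mu)(basis_vector(nu)) for nu in range(DIM)] for mu in range(DIM)]
print(delta)   # the 4x4 identity matrix

# dF(v) = (partial_mu F) v^mu equals the directional derivative of F along v
F = lambda x: x[0] ** 2 + math.sin(x[1]) * x[3]
x0 = [1.0, 0.5, -2.0, 3.0]
v = [1.0, 2.0, 0.0, -1.0]
dF = [directional_derivative(F, x0, basis_vector(mu)) for mu in range(DIM)]
pairing = sum(dF[mu] * v[mu] for mu in range(DIM))
print(abs(pairing - directional_derivative(F, x0, v)) < 1e-6)   # True
```

The point of the sketch is that the four numbers `dF[mu]` are the components of one covector, while each `basis_vector(nu)` is one whole vector — exactly the distinction the answer draws.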
Lemma 15.28.10. Let $R$ be a ring. Let $\varphi : E \to R$ be an $R$-module map. Let $f, g \in R$. Set $E' = E \oplus R$ and define $\varphi '_ f, \varphi '_ g, \varphi '_{fg} : E' \to R$ by $\varphi$ on $E$ and multiplication by $f, g, fg$ on $R$. The complex $K_\bullet (\varphi '_{fg})$ is isomorphic to the cone of a map of complexes $K_\bullet (\varphi '_ f)[1] \longrightarrow K_\bullet (\varphi '_ g).$

Proof. By Lemma 15.28.7 the complex $K_\bullet (\varphi '_ f)$ is isomorphic to the cone of multiplication by $f$ on $K_\bullet (\varphi )$ and similarly for the other two cases. Hence the lemma follows from Lemma 15.28.9. $\square$
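As a quick sanity check of the statement (my own remark, not part of the Stacks project text), one can specialize to $E = 0$, so that $\varphi = 0$. By Lemma 15.28.7, $K_\bullet(\varphi'_f)$ is then the cone of multiplication by $f$ on $R$ placed in degree $0$, i.e. the usual two-term Koszul complex on $f$. In this case the lemma recovers the familiar distinguished triangle relating the Koszul complexes on $f$, $g$, and $fg$:

```latex
% Special case E = 0: K_\bullet(\varphi'_f) \simeq (R \xrightarrow{f} R) = K_\bullet(f),
% and the isomorphism K_\bullet(fg) \cong \mathrm{cone}\bigl(K_\bullet(f)[1] \to K_\bullet(g)\bigr)
% is equivalent to a distinguished triangle
K_\bullet(f)[1] \longrightarrow K_\bullet(g) \longrightarrow K_\bullet(fg) \longrightarrow K_\bullet(f)[2].
```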
# Why was question closed without any consideration around the content?

Server responds with empty packet during session negotiation resulting in client giving a malformed packet error

I laid out clear terms as to what was going on, suggesting it was likely a configuration issue on the specific server I was trying to connect to (e.g., it happens on server 1 but not on server 2, though they are the same version). That sort of thing is on-topic:

If you have a question about... Database Administration including configuration and backup / restore

I provided clear and detailed evidence and output of what I had tried and what failed, with exact syntax errors and pcap data. Yet it got closed as off-topic, as possibly a typo or command error? That is the sort of action I'd expect from someone who doesn't understand, or doesn't want to try to understand, how a protocol works - not what I'd expect from a community of experts...

In any case, I dug through the protocol myself and figured out it was in fact a configuration issue, not simply a "too old" version, as suggested in comments without any evidence or reasoning as to why he thought that...

In any case, I still think the question is valid (yes, 5.0 is old, but old does not mean out of production). Lots of enterprises still use old software simply because it still works, and that makes this sort of issue more likely to be seen. I added all the proper documentation from the MySQL site showing the configuration that caused the problem.

I was not one of the reviewers or close voters, but a contributory factor may have been the first comment (now deleted): A vote to close as too localized around the same time initiated a Close Review process. Privileged users taking part in that review had the following options:

• Vote to leave the question open; (three needed)
• Edit the question to improve it; (one needed, leaves the question open)
• Skip the item;
• Add another vote to close.
(five needed) Three votes to close were acquired via the review process. None of the other available options were selected by reviewers. A further two close votes were placed outside the review process, bringing the total to the five community close votes needed to place the question on hold. It seems (to me) at least possible that reviewers and direct voters were influenced by the apparent "basic error" pointed out in the first comment. This is one of the reasons I dislike answers in comments so much; alongside the fact that 'comment answers' cannot be downvoted if incorrect (and other factors, but I will try not to rant). The question has now been reopened (by five community votes). You did the right thing by bringing this up on meta, thank you. Note: I have invited all the closing process participants to contribute here, should they wish to explain their own reasoning. The "close reason" text: Too localized - this could be because your code has a typo, basic error, or is not relevant to most of our audience. I've italicized the part I feel is relevant to your question. Please don't take this personally; I think you did a fantastic job of providing a lot of detail, even going so far as to provide packet data, which most people don't even know how to do, let alone that they might provide it. In my mind, since the question appears to be related to the protocol, it would only appeal to a very small slice of DBAs; those with network-sniffer experience. I've voted to re-open the question since you've added details that I think help make the question have broader appeal. Assuming the question gets enough "re-open" votes, you might want to provide the answer, which I'm certain will be up-voted. • Question has been reopened. – ypercubeᵀᴹ Apr 15 '16 at 5:42 All my fault. I missed: I would accept that answer if it didn't work perfectly with another server of the same version already ... 
so you can blame me :-) That part completely changed the question, and I skimmed over it. I enjoy admitting being wrong, so it's all good.
# Abstract Nonsense

## The Dimension Theorem

Point of post: In this post we prove what is called the 'dimension' theorem, which in essence says that the degree of any irrep of a finite group divides the order of the group.

Motivation

So far we've obtained some interesting information about the degrees of the irreps of a finite group $G$. We've proven that the sum of the squares of the degrees must equal the order of the group. Also, we've proven that the number of degree one irreps of $G$ is equal to the order of the abelianization of $G$. In this post we'll prove the supremely interesting result that the degree of any irrep must divide the order of the group. One of the many uses for this will be that we will be able to prove some interesting results about finite groups.

The Dimension Theorem

Let, as always, $G$ be a finite group, and suppose that we've chosen particular representatives $\displaystyle D^{(\alpha)}$, so that the irreducible characters $\chi^{(\alpha)}$ have the representation $\displaystyle \chi^{(\alpha)}(x)=\sum_{j=1}^{d_\alpha}D^{(\alpha)}_{j,j}(x)$. Our first theorem will show the connection between irreducible characters and algebraic integers. Namely:

Theorem: Let $G$ be a finite group. Then for every $\alpha\in\widehat{G}$ and every $g\in G$ one has that $\chi^{(\alpha)}(g)\in\mathbb{A}$.

Proof: We merely note that since $|G|<\infty$ one has that $g^{|G|}=e$, and so $D^{(\alpha)}(g)^{|G|}=D^{(\alpha)}\left(g^{|G|}\right)=D^{(\alpha)}(e)=I$. Thus, by basic matrix analysis, every eigenvalue of $D^{(\alpha)}(g)$ is a $|G|$-th root of unity and so trivially an algebraic integer. Consequently $\chi^{(\alpha)}(g)$, being the sum of these algebraic integers (it, of course, being the trace of $D^{(\alpha)}(g)$), is an algebraic integer (since the algebraic integers form a ring). $\blacksquare$

We now use this to show that $d_\alpha\mid |G|$ for every $\alpha\in\widehat{G}$.

Theorem: Let $G$ be a finite group and $\alpha\in\widehat{G}$. Then, $d_\alpha\mid |G|$.
Proof: Choose an ordering for $G$ so that we can list $G$ as $(g_1,\cdots,g_{n})$ where $|G|=n$. We then define a matrix $A$ by $A_{i,j}=\chi^{(\alpha)}\left(g_i g_j^{-1}\right)$. We then define $v\in\mathbb{C}^n$ by $v=(\chi^{(\alpha)}(g_1),\cdots,\chi^{(\alpha)}(g_n))^{\top}$; note that $v\neq 0$, since one of its entries is $\chi^{(\alpha)}(e)=d_\alpha\neq 0$. We note then by the convolution relations that \displaystyle \begin{aligned}Av &=\left(\sum_{k=1}^{n}\chi^{(\alpha)}\left(g_1 g_k^{-1}\right)\chi^{(\alpha)}(g_k),\cdots,\sum_{k=1}^{n}\chi^{(\alpha)}\left(g_n g_k^{-1}\right)\chi^{(\alpha)}\left(g_k\right)\right)^{\top}\\ &= \left(\frac{|G|}{d_\alpha}\chi^{(\alpha)}(g_1),\cdots,\frac{|G|}{d_\alpha}\chi^{(\alpha)}(g_n)\right)^{\top}\\ &= \frac{|G|}{d_\alpha}v\end{aligned} and thus $\displaystyle \frac{|G|}{d_\alpha}$ is an eigenvalue for $A$. By the previous theorem every entry of $A$ is an algebraic integer, and since the eigenvalues of a matrix with algebraic integer entries are themselves algebraic integers (being roots of the characteristic polynomial, a monic polynomial with algebraic integer coefficients), we may conclude that $\displaystyle \frac{|G|}{d_\alpha}\in\mathbb{A}$. But, since evidently $\displaystyle \frac{|G|}{d_\alpha}\in\mathbb{Q}$, and a rational algebraic integer is an integer, we may conclude that $\displaystyle \frac{|G|}{d_\alpha}\in\mathbb{Z}$ and thus $d_\alpha\mid |G|$ as desired. $\blacksquare$

References:

1. Simon, Barry. Representations of Finite and Compact Groups. Providence, RI: American Mathematical Society, 1996. Print.

March 3, 2011
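The divisibility statement is concrete enough to check by machine. The following Python sketch (my own illustration, not part of the post) verifies, for the degree-2 standard irrep of $S_3$ with character $\chi(g)=\#\{\text{fixed points of }g\}-1$, both that $d_\alpha\mid|G|$ and the eigenvalue relation $Av=(|G|/d_\alpha)v$ used in the proof:

```python
from itertools import permutations

# S_3 as permutations of {0, 1, 2}; |G| = 6.
G = list(permutations(range(3)))

def compose(p, q):                       # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def chi(g):                              # character of the standard (degree-2) irrep
    return sum(1 for i in range(3) if g[i] == i) - 1

n, d = len(G), 2
assert n % d == 0                        # the dimension theorem: d | |G|

# The eigen-relation from the proof: with A_ij = chi(g_i g_j^{-1}) and
# v = (chi(g_1), ..., chi(g_n)), the convolution relations give A v = (|G|/d) v.
v = [chi(g) for g in G]
for gi in G:
    row = sum(chi(compose(gi, inverse(gj))) * chi(gj) for gj in G)
    assert row == (n // d) * chi(gi)
```

Here the eigenvalue $|G|/d_\alpha = 3$ comes out as an honest integer, as the theorem demands.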
# Efficient nonparametric Bayesian inference for $X$-ray transforms

@article{Monard2017EfficientNB, title={Efficient nonparametric Bayesian inference for \$X\$-ray transforms}, author={François Monard and Richard Nickl and Gabriel P. Paternain}, journal={The Annals of Statistics}, year={2017} }

• Published 21 August 2017 • Mathematics • The Annals of Statistics

We consider the statistical inverse problem of recovering a function $f: M \to \mathbb R$, where $M$ is a smooth compact Riemannian manifold with boundary, from measurements of general $X$-ray transforms $I_a(f)$ of $f$, corrupted by additive Gaussian noise. For $M$ equal to the unit disk with 'flat' geometry and $a=0$ this reduces to the standard Radon transform, but our general setting allows for anisotropic media $M$ and can further model local 'attenuation' effects -- both highly relevant…

43 Citations

### Bernstein-von Mises Theorems and Uncertainty Quantification for Linear Inverse Problems
• Mathematics SIAM/ASA J. Uncertain. Quantification • 2020
It is proved that semiparametric posterior estimation and uncertainty quantification are valid and optimal from a frequentist point of view, and frequentist guarantees for certain credible balls centred at $\bar{f}$ are derived.

### Statistical guarantees for Bayesian uncertainty quantification in nonlinear inverse problems with Gaussian process priors
• Mathematics The Annals of Statistics • 2021

### Nonparametric Bernstein–von Mises theorems in Gaussian white noise
• Mathematics • 2013
Bernstein-von Mises theorems for nonparametric Bayes priors in the Gaussian white noise model are proved.
It is demonstrated how such results justify Bayes methods as efficient frequentist inference

### The Bayesian Approach to Inverse Problems
• Mathematics • 2017
These lecture notes highlight the mathematical and computational structure relating to the formulation of, and development of algorithms for, the Bayesian approach to inverse problems in

### Computationally Efficient Markov Chain Monte Carlo Methods for Hierarchical Bayesian Inverse Problems
• Computer Science, Mathematics • 2016
A computationally efficient MCMC sampling scheme for ill-posed Bayesian inverse problems by employing a Metropolis-Hastings-within-Gibbs (MHwG) sampler with a proposal distribution based on a low-rank approximation of the prior-preconditioned Hessian.

### Bayesian inverse problems with non-conjugate priors
We investigate the frequentist posterior contraction rate of nonparametric Bayesian procedures in linear inverse problems in both the mildly and severely ill-posed cases. A theorem is proved in a

### Stability estimates for the X-ray transform of tensor fields and boundary rigidity
• Mathematics • 2004
We study the boundary rigidity problem for domains in $\mathbb{R}^n$: is a Riemannian metric uniquely determined, up to an action of diffeomorphism fixing the boundary, by the distance function $g(x,y)$ known for
# zbMATH — the first resource for mathematics

A new approach to Tikhonov well-posedness for Nash equilibria. (English) Zbl 0881.90136

Summary: A new approach to Tikhonov well-posedness for Nash equilibria is suggested. Loosely speaking, Tikhonov well-posedness of a problem means that approximate solutions converge to the true solution when the degree of approximation goes to zero. The novelty of our approach consists in a suitable definition of what could be considered an approximate solution of a Nash equilibrium problem. We add to the requirement of being an $$\varepsilon$$-equilibrium also that of being $$\varepsilon$$-close in value to some Nash equilibrium. In this way, we can get rid of some problems which affect Tikhonov well-posedness when the last condition is not taken into account, like the usual lack of uniqueness for Nash equilibria. Furthermore, it can be proved that this property of well-posedness is preserved under monotonic transformations of the payoffs: a result which is relevant in view of economic interpretation.

##### MSC:

91A10 Noncooperative games
49J45 Methods involving semicontinuity and convergence; relaxation

Full Text:

##### References:

[1] Bednarczuk E., Control and Cybern 23 pp 107– (1994)
[2] DOI: 10.1287/moor.17.3.715 · Zbl 0767.49011 · doi:10.1287/moor.17.3.715
[3] Cavazzuti, E. and Morgan, J. Optimization, theory and algorithms. Proc. Conf. Confolant. 1981, France. Edited by: Hiriart-Urruty, J.B., Oettli, W. and Stoer, J. pp. 61–76. Well-Posed Saddle Point Problems
[4] Dontchev, A. and Zolezzi, T. 1993. ”Well-Posed Optimization Problems”. Berlin: Springer. · Zbl 0797.49001
[5] DOI: 10.1006/game.1995.1012 · Zbl 0835.90122 · doi:10.1006/game.1995.1012
[6] DOI: 10.1007/BF00927717 · Zbl 0177.12904 · doi:10.1007/BF00927717
[7] Levitin E.S., Soviet Math.
Dokl 7 pp 764– (1966)
[8] Loridan P., Recent Developments in Well-Posed Variational Problems pp 171– (1995)
[9] DOI: 10.1080/01630568108816100 · Zbl 0479.49025 · doi:10.1080/01630568108816100
[10] DOI: 10.1080/01630568308816145 · Zbl 0517.49007 · doi:10.1080/01630568308816145
[11] Lucchetti, R. and Revalski, J. 1995. ”Recent Developments in Well-Posed Variational Problems”. Edited by: Lucchetti, R. and Revalski, J. Kluwer: Dordrecht. · Zbl 0823.00006
[12] Morgan J., Non-Smooth Optimization and Related Topics (1989)
[13] Myerson, R.B. 1991. ”Game Theory: Analysis of Conflict”. Cambridge, MA: Harvard University Press. · Zbl 0729.90092
[14] Patrone F., Riv. Mat. Pura Appl 1 pp 95– (1987)
[15] Patrone F., Recent Developments in Well-Posed Variational Problems pp 211– (1995)
[16] Patrone F., Pusillo Chicco L. Antagonism for two-person games: taxonomy and applications to Tikhonov well-posedness 1995 preprint · Zbl 0881.90136
[17] Revalski, J.P. Mathematics and Education in Mathematics. Proc. 14th Spring Confer. of the Union of Bulgarian Mathematicians. Sofia. Variational inequalities with unique solution
[18] Revalski J.P., Acta Univ. Carolinae Math. et Phys 28 pp 117– (1987)
[19] Tikhonov A.N., USSR J. Comp. Math. Math. Phys 6 pp 631– (1966)
[20] DOI: 10.1137/0121011 · doi:10.1137/0121011

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
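The summary's notion of approximate solution combines two requirements: being an $\varepsilon$-equilibrium and being $\varepsilon$-close in value to some Nash equilibrium. The first requirement is easy to illustrate concretely (a toy example of my own, not from the paper):

```python
# A 2x2 bimatrix game (Prisoner's-Dilemma-style payoffs).
# A[i][j]: row player's payoff, B[i][j]: column player's payoff.
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]

def is_eps_equilibrium(i, j, eps):
    """Pure profile (i, j) is an eps-equilibrium when no player can
    gain more than eps by a unilateral deviation."""
    row_gain = max(A[k][j] for k in range(2)) - A[i][j]
    col_gain = max(B[i][k] for k in range(2)) - B[i][j]
    return row_gain <= eps and col_gain <= eps

assert is_eps_equilibrium(1, 1, 0)       # (defect, defect): an exact Nash equilibrium
assert not is_eps_equilibrium(0, 0, 1)   # (cooperate, cooperate): a deviation gains 2
assert is_eps_equilibrium(0, 0, 2)       # ...so it is only a 2-equilibrium
```

The paper's point is that $\varepsilon$-equilibrium alone is too weak a notion for well-posedness, which is why the extra closeness-in-value condition is added.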
The randNorm() Command

Command Summary

Generates a random normally-distributed number.

Command Syntax

randNorm(mean,std-dev)

Menu Location

• Press 2nd MATH to enter the MATH popup menu.
• Press 7 to enter the Probability submenu.
• Press 5 to select randNorm(.

This command works on all calculators.

1 byte

The randNorm() command generates a random number that is normally distributed, with a given mean (first parameter) and standard deviation (second parameter). This means that on average, randNorm(x,y) will give a result of about x; it's 95% certain to be within 2*y of x.

See rand() and RandSeed for more details on the random number generator.

# Formula

The formula for randNorm() is different from the one used by the TI-83 series. To generate normally-distributed values from the output of rand(), the calculator uses the polar form of the Box-Muller transform. The algorithm goes as follows:

First, generate two uniformly distributed numbers u and v in [-1,1]. Keep doing this until the point (u,v) lies inside the unit circle; the point (0,0), though it's unlikely to occur, is discarded as well. Let s equal u^2+v^2.

Usually, Box-Muller is used to produce two normally-distributed random numbers, by the formula below. The TI only uses the second of the results:

(1) \begin{align} z_0=u\cdot\sqrt{\frac{-2\ln s}{s}}\hspace{2em}z_1=v\cdot\sqrt{\frac{-2\ln s}{s}} \end{align}

The result is distributed according to the standard normal distribution: that is, with mean 0 and standard deviation 1. It's easy to get any other normal distribution by scaling: multiplying by the standard deviation, and then adding the mean. In TI-Basic, the code for randNorm(μ,σ) would be:

:Loop
: 2*rand()-1→u
: 2*rand()-1→v
: u^2+v^2→s
: If 0<s and s<1
: Return μ+σ*v*√(-2*ln(s)/s)
:EndLoop

# Related Commands
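Off-calculator, the algorithm above can be prototyped in Python (an illustrative port, not official TI code). Like the calculator, it returns only the v-based variate:

```python
import math
import random

def rand_norm(mean, std_dev):
    """Polar Box-Muller, mirroring the calculator's algorithm:
    draw (u, v) uniformly in [-1, 1]^2 until 0 < s < 1, then
    return only the v-based normal variate, scaled and shifted."""
    while True:
        u = 2.0 * random.random() - 1.0
        v = 2.0 * random.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:
            return mean + std_dev * v * math.sqrt(-2.0 * math.log(s) / s)
```

Averaging many draws recovers the requested mean and standard deviation, matching the "95% certain to be within 2*y of x" description above.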
# Florida's coastline is 118 miles shorter than four times the coastline of Texas. It is also 983 miles longer than the coastline of Texas. What are the lengths of the coastlines of Florida and Texas?

Mar 26, 2018

The coastline of Florida is $1350$ miles; the coastline of Texas is $367$ miles.

#### Explanation:

Let $F$ be Florida's coastline and $T$ be Texas' coastline:

Florida's coastline is 118 miles shorter than four times the coastline of Texas. We can express this as follows:

$F = 4T - 118$

Florida's coastline is 983 miles longer than the coastline of Texas:

$F = T + 983$

We now solve these equations simultaneously. Substituting $F = T + 983$ into $F = 4T - 118$:

$T + 983 = 4T - 118$

Add 118 to both sides: $T + 1101 = 4T$

Subtract $T$ from both sides: $3T = 1101$

Divide by 3: $T = 367$

Substituting this into $F = T + 983$: $F = 367 + 983 = 1350$

The coastline of Florida is $1350$ miles; the coastline of Texas is $367$ miles.
# IMac G5

1. Sep 1, 2004

### Dagenais

The G5 iMac is finally out after a very long wait from Apple Computer enthusiasts! All new look too!

2. Sep 1, 2004

### faust9

I'm undecided as of yet. I may get a new iMac for my wife, but I'll hold off until Tiger is released. By then a 2G iMac should be available.

Removed commentary about 5200 chipset.

Last edited: Sep 1, 2004

3. Sep 7, 2004

### Dagenais

Wow, he's pretty harsh on the new iMac.

Putting the computer in the back of the monitor isn't a new thing. Gateway and Sony have done it before.

He also says that Apple is concentrating too much on its line of MP3 players and that you can't "change monitors" with the iMac. Why would you even buy an iMac if you wanted to change monitors?

I wonder when benchmarks are going to come out for the iMac.

4. Sep 8, 2004

### Tom McCurdy

You could mount the iMac on the wall, then just have the wires already connected and get a USB hub. It's just so elegant looking.

5. Sep 19, 2004

### Moonbear

Staff Emeritus

Dagenais, apparently that critique didn't come from someone forced to work in a cubicle! I just took a look at it and that design would be perfect for the cramped spaces of our students' cubicles in the lab. We have two old G3 iMacs that are getting ready for retirement (they take up probably a 1/4 of the desk and these are pretty old...the gray ones that came out right after the first fruit-flavored ones). Having all the cords come out the back would be perfect, then they could all head straight down through the little hole at the back of the desk made for cords. The only complaint I've ever had with iMacs is the cord from the computer to the keyboard is always too short, so you need to buy an extension to run it down the back of the desk and under to a keyboard tray. Considering most people use keyboard trays in offices (all those ergonomic police running around), it would be nice if they'd make a longer cord standard.
After all, with these space-saving designs, what's the point of a smaller footprint if you still need to pull the computer toward the front of the desk to reach the keyboard? The only hesitation I have about buying the new iMacs for our lab is that then my students would have faster computers than I do!

6. Sep 25, 2004

### modmans2ndcoming

Best thing about the new iMac is that you can spend a few more dollars and get it totally wireless, except for the power.

7. Sep 28, 2004

### Dagenais

I rarely hear or see of people that purchase expensive all-in-ones for students. It's not very cost-efficient, nor do they have a very long life (you can't upgrade easily).

Most of the computers I've seen in schools, including post-secondary, are towers. Is there any specific reason that your lab and students use iMacs?

(It would be interesting to know why, in the science field, a Mac is better than a PC running Windows).

8. Sep 29, 2004

### Sirus

One reason I can think of is that PCs (particularly those running Windows) cannot compare to Macs in stability and reliability. If his students are also lucky enough to have access to programs such as Wolfram's amazing Mathematica, they probably appreciate this greatly. If you have the $ to spend on it, go for it. I am buying an iMac G5 in a year (hopefully the one-step-up model).

9. Oct 2, 2004

### modmans2ndcoming

Macs last quite a long time actually. My cousin's school is still using the original iMacs. That is a very long time to use a computer without needing an upgrade. All-in-ones are not bad things at all, especially in schools.

10. Oct 2, 2004

### Tom McCurdy

Does anyone have the percentage of Macs vs PCs on college campuses? Since most campuses have both, I mean what percentage per campus.

11. Oct 2, 2004

### Moonbear

Staff Emeritus

A lot of reasons for running Macs.
They don't become obsolete as quickly as PCs, so actually are pretty cost-efficient (even our G3's are still running current software, while PCs of the same generation are completely obsolete). They are far more stable, as in, they don't crash twice daily like PCs. And with OSX, even if one program shuts down, you don't lose everything that's open. Our biggest reason for using Macs, though, is we do a lot of image analysis (a lot of microscopy...the software for our microscope cameras runs on Macs too). Plus, towers take up a lot of space and sit on the floor where the janitor can destroy them with the mop (don't ask), and the all-in-ones aren't that expensive at all. And the best part, when the students open every attachment they receive in email, Macs don't catch viruses as easily. 12. Oct 3, 2004 ### Moonbear Staff Emeritus The lab I did my postdoc in is still using Mac Pluses...at least they were a few years ago. We had ancient custom software that would only run on those that was essential for our data analysis. It was quite traumatic to my post-doc mentor when we finally convinced him we needed to upgrade for Y2K, but he still clings to one last Mac Plus. They were painfully slow, but still running (we put a sign up over it, something like, "I'm slow and stupid, please be patient with me.") But, you know what? That thing could be left on for months and wouldn't crash or freeze or anything. I miss the days when computers were that stable! 13. Oct 3, 2004 ### hitssquad Mac flexibility vs PC flexibility How could two artifacts from the same technology category obsolesce at different rates? Could you give an example of a specific Macintosh and a specific x86 PC obsolescing at different rates? For 2+ years I have owned one particular Dell Inspiron 4100 laptop running Windows XP SP1. I have used this computer for ~8,000 total hours and have never experienced on it an operating system crash. Do you have some PC vs Mac crash statistics? Ditto for XP. 
This software is not ported to XP?

What brand/model microscope cameras do you use?

PC solutions come in many different form factors. If you don't like towers you can have your students build lanboys or pizza boxes instead. You might even want to get rid of the lab computers and instead hook up the microscope cameras to servers and let the students download the images wirelessly to their laptops. (And if they forgot to grab all the images, leaving them on the server would allow the students to download them while they are walking down the street, in a coffeeshop, in another class, etc.)

Here are some all-in-one PCs. Here are some PCs you can wear. Here is a PC the size of a paperback book. A typical modern PC can be built by your students for around $200-$300 in parts.

Would you write a virus for an operating system hardly anyone uses? The Mac security model is security through obscurity, not rock-solid security.

14. Oct 3, 2004

### Dr-NiKoN

What obscurity? You are aware that the various security implementation details of "OS X", both in kernel-space and user-space, are open-sourced?

15. Oct 4, 2004

### Anttech

The OS X kernel is based on BSD... AFAIK... I think what he means by "security model is security through obscurity" is that nobody has written a virus for Macs, but this does not mean they are more secure... To add to that, people also have not prodded and tested Mac security to the extent they have UNIX or Windows, thus you could conclude that a Windows or UNIX box would be MORE secure, as vulnerabilities are well known (in security circles), unlike Macs' vulnerabilities. Thus they are "secure through obscurity"... Not because the OS is written with security in mind....

16. Oct 4, 2004

What do you mean by that? Can you paraphrase it again?

17.
Oct 4, 2004

### Anttech

Macs are "thought" of as more secure because nobody has bothered a great deal to find vulnerabilities, thus they are deemed "secure through obscurity" by people who know about security

http://computing-dictionary.thefreedictionary.com/security through obscurity

Windows, UNIX and Linux vulnerabilities are documented and well known... so you can patch up against attacks, thus your system will have a better security index (in the long run)

Last edited: Oct 4, 2004

18. Oct 4, 2004

### Dr-NiKoN

Uhm? Let me see if I can get this straight. Darwin, which is 100% open-sourced, and probably consists of 90% code that also is in other operating systems, is less secure than those operating systems because it's Mac OS?

Let me ask one question: is OpenBSD more secure than OS X? Let me remind you that there are fewer users of OpenBSD and more security auditing done of OS X code than OpenBSD. OpenBSD has a bigger amount of security through obscurity than OS X, so OS X has a better "security index" than OpenBSD?

OS X is not less secure just because people are finding fewer security holes in it as compared to Windows, or other operating systems. On the contrary. If you find a security hole in OS X that only applies to OS X, it is either very high-level in the user interface or very low-level in the Mach3 micro-kernel. All other holes would probably also be applicable to the *BSDs, thus your point about obscurity is kind of moot.

Also, Windows is not better documented than OS X. That's a silly statement. What's better documentation than the source code?

19. Oct 4, 2004

### Anttech

"If you find a security hole in OS X that only applies to OS X, it is either very high-level in the user interface or very low-level in the Mach3 micro-kernel. All other holes would probably also be applicable to the *BSDs, thus your point about obscurity is kind of moot."

No, it is not moot; as you said, BSD has not been probed as much as other flavours of UNIX / Linux / Windows...
Thus the "security through obscurity" of OS X still holds.... Mark or don't mark my words, but a lot of top security specialists' views on Mac security are changing... and for the worse...

http://www.newsfactor.com/story.xhtml?story_title=How_Secure_Is_OS_X_&story_id=23467
http://www.securityfocus.com/columnists/256
http://www.techworld.com/security/news/index.cfm?newsid=1798
http://tonytalkstech.com/2004/07/06/mac-os-x-not-so-secure-according-to-security-statistics/

I am not an OS bigot at all, I just agree with the fact that OS X is secured through obscurity.... only time will tell... This does not mean that it isn't a very secure OS (currently)...

What I do find interesting is that Mac users are very aggressive towards any criticism... This was not directed to you or anyone on this board, but from reading other boards out there on the WWW.

20. Oct 4, 2004

### ComputerGeek

OS X is as secure (design-wise) as BSD is, and BSD has a great security record
# Rename caption category

Scenario: I am working with Writer and I have been using automatic captions for images, so they are labeled Image xx. However, I have changed my mind and would like to change the category of all the already existing captions so they are called Figure xx instead. Unfortunately it is not as easy as simply renaming the caption text, as I encounter two problems:

1. I have plenty of images and renaming them one by one is quite time-consuming. Since the word Image is very common, I cannot perform a regular search and replace.
2. I am using an automatic image index, so the new and old images need to belong to the same category, as indexes only allow one category at a time.

So my question is: Is there any automated way to rename an already used caption category?

In the caption dialog box you can simply overwrite the default category name, e.g. where it says 'Illustration' and you want 'Figure', double-click 'Illustration' and type 'Figure', then add the caption in the field above. Subsequently, when you add a caption, I find that it defaults to 'Figure'. (2018-05-11 07:12:04 +0200)

You can do the search & replace in steps:

1. Search for styles, picking the paragraph style used for captions
2. Select all
3. Turn back to normal search but pick "selection only"
4. Now, search for Image and replace with Figure
5. Replace all

That's all.

Thank you very much for your answer! It is almost what I was looking for, as it is partially working. Although you provided an easy way to change the label, it only does that: the category is not changed. (In my example, I would have a caption saying Figure 1 instead of Image 1, but its category would still be Image, which means that the image index would still refer to the Image category.) (don't know if I made myself clear enough) (2017-06-08 08:47:12 +0200)

Perfectly clear! Now I understand your problem better.
Sorry, but the only way to do that that I can think of is to "hack" the .odt file. If you change the file extension from odt to zip and uncompress it, you'll find a series of folders and xml files: in content.xml the content is declared, and with a bit of RegEx magic maybe you'll be able to change the categories. Exactly what you need to change I do not know for sure, but most of the time the file syntax is quite straightforward. (2017-06-08 11:02:43 +0200)

Oh, what a pity. I see your point. I tried to do what you said but I'm afraid I corrupted the file. I don't know if it was because I used a slash as caption category or because compressing in zip format does not work well in my Linux distribution. I'll give it a further try, although I venture that it would be a hard task, as I have plenty of figures and content now and editing the XML is quite cumbersome :( (2017-06-08 17:27:36 +0200)

## Stats

Seen: 595 times

Last updated: Jun 06 '17
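For the record, the "hack the .odt" route sketched in the comments can be automated. The snippet below is an illustrative Python sketch only: the `text:name="Image"` attribute pattern and the visible-label replacement are assumptions about how the category appears in content.xml, so inspect your own file first and always operate on a copy of the document.

```python
import re
import zipfile

def rename_caption_category(src, dst, old="Image", new="Figure"):
    """Copy an .odt, rewriting caption-category names in content.xml.

    Illustrative sketch only: the text:name="..." pattern and the
    visible-label replacement below are assumptions about how the
    category shows up in content.xml -- inspect your own file first,
    and always work on a copy of the document.
    """
    name_attr = re.compile(r'(text:name=")%s(")' % re.escape(old))
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "content.xml":
                text = data.decode("utf-8")
                text = name_attr.sub(r"\g<1>%s\g<2>" % new, text)
                # crude pass for the visible label, e.g. ">Image <"
                text = text.replace(">%s <" % old, ">%s <" % new)
                data = text.encode("utf-8")
            zout.writestr(item, data)
```

This sidesteps the manual unzip/rezip step that corrupted the file above, since the archive is rewritten entry by entry instead of being recompressed by hand.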
# A gas canister can tolerate internal pressures up to 210 atmospheres. If a 2.0 L canister holding 3.5 moles of gas is heated to 1350° C, will the canister explode?

Mar 5, 2016

The canister will explode.

#### Explanation:

This appears to be an Ideal Gas Law problem. The Ideal Gas Law is

color(blue)(|bar(ul(PV = nRT))|), where

• $P$ is the pressure
• $V$ is the volume
• $n$ is the number of moles
• $R$ is the Universal Gas Constant
• $T$ is the temperature

$V = \text{2.0 L}$
$n = \text{3.5 mol}$
R = "0.082 06 L·atm·K"^"-1""mol"^"-1"
$T = \text{(1350 + 273.15) K = 1623.15 K}$

We can rearrange the Ideal Gas Law to get

P = (nRT)/V = (3.5 color(red)(cancel(color(black)("mol"))) × "0.082 06" color(red)(cancel(color(black)("L")))"·atm·"color(red)(cancel(color(black)("K"^"-1""mol"^"-1"))) × 1623.15 color(red)(cancel(color(black)("K"))))/(2.0 color(red)(cancel(color(black)("L")))) = "230 atm"

This exceeds the burst pressure of the canister. The canister will explode!
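The same calculation is easy to reproduce numerically (a quick check of the Ideal Gas Law arithmetic above):

```python
# Ideal Gas Law check: P = nRT / V
R = 0.08206            # L·atm·K^-1·mol^-1
n = 3.5                # mol
V = 2.0                # L
T = 1350 + 273.15      # °C converted to K

P = n * R * T / V      # atm
print(round(P))        # -> 233, i.e. about 230 atm to two significant figures
assert P > 210         # exceeds the canister's 210 atm limit
```

The unrounded value, about 233 atm, is what the cancelled-units expression above evaluates to before rounding to two significant figures.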
# Dynamics - Cylindrical Coordinates

## Homework Statement

A cam has a shape that is described by the function r = r_0(2 - cos $$\theta$$), where r_0 = 2.25 ft. A slotted bar is attached to the origin and rotates in the horizontal plane with a constant angular velocity $$\dot{\theta}$$ of 0.85 radians/s. The bar moves a roller weighing 25.6 lb along the cam's perimeter. A spring holds the roller in place; the spring's spring constant is 1.75 lb/ft. The friction in the system is negligible.

When $$\theta$$ = 102 degrees, what are F_r and F_$$\theta$$, the magnitudes of the cylindrical components of the total force acting on the roller?

## Homework Equations

r = r_0(2 - cos $$\theta$$) = 2.25(2 - cos 102°) = 4.9678 ft
$$\dot{\theta}$$ = 0.85 rad/s
$$\ddot{\theta}$$ = 0 rad/s^2
$$\dot{r}$$ = (r_0 sin $$\theta$$)($$\dot{\theta}$$) = 2.25(0.85)sin(102°) = 1.87 ft/s
$$\ddot{r}$$ = (r_0 cos $$\theta$$)($$\dot{\theta}$$)^2 + (r_0 sin $$\theta$$)($$\ddot{\theta}$$) = 2.25(0.85^2)cos(102°) = -0.338 ft/s^2
a_r = $$\ddot{r}$$ - r($$\dot{\theta}$$)^2
a_$$\theta$$ = r($$\ddot{\theta}$$) + 2$$\dot{r}$$$$\dot{\theta}$$
F_r = W/g * a_r
F_$$\theta$$ = W/g * a_$$\theta$$
F_s = ks (spring)

## The Attempt at a Solution

a_r = $$\ddot{r}$$ - r($$\dot{\theta}$$)^2 = -0.338 - 4.9678(0.85^2) = -3.927 ft/s^2
a_$$\theta$$ = r($$\ddot{\theta}$$) + 2$$\dot{r}$$$$\dot{\theta}$$ = 4.9678*0 + 2*1.87*0.85 = 3.179 ft/s^2
F_r = W/g * a_r = (25.6/32.2)*(-3.927) = -3.12 lb
F_$$\theta$$ = W/g * a_$$\theta$$ = (25.6/32.2)*(3.179) = 2.53 lb

But it's wrong and I didn't consider the spring force. How do I incorporate Fs? Please help. I'm stuck

it opposes the acceleration along r

I found F_r = -3.122 lb and F_$$\theta$$ = 2.531 lb when a_r = -3.927 ft/s^2 and a_$$\theta$$ = 3.179 ft/s^2

What are F_t and N, the magnitudes of the tangential force, F_t, and the normal force, N, acting on the roller when $$\theta$$ = 102 degrees?
a = a_t + a_n
a_t = dv/dt (rate of change of speed)
a_n = v^2/$$\rho$$ where $$\rho$$ = radius of curvature

Equations of Motion:
F_t = W/g * a_t
F_n = W/g * a_n
where g = 32.2 ft/s^2

How do I find a_t and a_n to get the tangential force and the normal force? Please help.
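A quick numerical check of the cylindrical-coordinate kinematics in the attempt above (plain Python, using the thread's numbers; note that F_r and F_θ are components of the total force m·a, into which the spring force is already folded):

```python
import math

# Given data (US customary units, as in the problem)
r0 = 2.25          # ft
theta = math.radians(102)
theta_dot = 0.85   # rad/s (theta_ddot = 0, constant angular velocity)
W, g = 25.6, 32.2  # lb, ft/s^2

r     = r0 * (2 - math.cos(theta))             # ft
r_dot = r0 * math.sin(theta) * theta_dot       # ft/s
r_dd  = r0 * math.cos(theta) * theta_dot**2    # ft/s^2 (theta_ddot term vanishes)

a_r  = r_dd - r * theta_dot**2                 # ft/s^2
a_th = 2 * r_dot * theta_dot                   # ft/s^2 (r*theta_ddot = 0)

F_r  = W / g * a_r                             # lb
F_th = W / g * a_th                            # lb
print(f"a_r={a_r:.3f}, a_theta={a_th:.3f}, F_r={F_r:.3f}, F_theta={F_th:.3f}")
```

This reproduces the later values in the thread: a_r ≈ -3.927 ft/s², a_θ ≈ 3.180 ft/s², F_r ≈ -3.122 lb, F_θ ≈ 2.528 lb.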
A limit process for partial match queries in random quadtrees and 2-d trees

Abstract: We consider the problem of recovering items matching a partially specified pattern in multidimensional trees (quadtrees and k-d trees). We assume the classical model where the data consist of independent and uniform points in the unit square. For this model, in a structure on $n$ points, it is known that the complexity, measured as the number of nodes $C_n(\xi)$ to visit in order to report the items matching a random query $\xi$, independent and uniformly distributed on $[0,1]$, satisfies $E{C_n(\xi)}\sim \kappa n^{\beta}$, where $\kappa$ and $\beta$ are explicit constants. We develop an approach based on the analysis of the cost $C_n(s)$ of any fixed query $s\in [0,1]$, and give precise estimates for the variance and limit distribution. Moreover, a functional limit law for a rescaled version of the process $(C_n(s))_{0\le s\le 1}$ is derived in the space of càdlàg functions with the Skorokhod topology. For the worst case complexity $\max_{s\in [0,1]} C_n(s)$ the order of the expectation as well as a limit law are given.

Document type: Journal articles

Complete list of metadata: https://hal.inria.fr/hal-00773363

Contributor: Nicolas Broutin
Submitted on: Sunday, January 13, 2013 - 4:03:47 PM
Last modification on: Monday, February 18, 2019 - 7:52:04 PM

Citation

Nicolas Broutin, Ralph Neininger, Henning Sulzbach. A limit process for partial match queries in random quadtrees and 2-d trees. Annals of Applied Probability, Institute of Mathematical Statistics (IMS), 2013, 23 (6), pp.2560-2603. ⟨10.1214/12-AAP912⟩. ⟨hal-00773363⟩
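The quantity $C_n(s)$ being analysed is easy to simulate. Below is a small, self-contained Python sketch (my own illustration, not code from the paper) that grows a random 2-d tree on $n$ uniform points and counts the nodes visited by the partial match query $\{x=s\}$, for comparison with the $\kappa n^{\beta}$ asymptotics; for 2-d trees the exponent is $\beta=(\sqrt{17}-3)/2\approx 0.5616$:

```python
import random

# A node is [point, axis, left, right]; axis 0 splits on x, axis 1 on y.
def insert(node, p, depth=0):
    if node is None:
        return [p, depth % 2, None, None]
    axis = node[1]
    side = 2 if p[axis] < node[0][axis] else 3
    node[side] = insert(node[side], p, depth + 1)
    return node

def cost(node, s):
    """Nodes visited by the partial-match query {x = s, y unspecified}."""
    if node is None:
        return 0
    p, axis, left, right = node
    if axis == 0:                   # x-node: only one subtree can match
        return 1 + cost(left if s < p[0] else right, s)
    return 1 + cost(left, s) + cost(right, s)   # y-node: visit both sides

random.seed(1)
n = 2000
root = None
for _ in range(n):                  # random insertion order, as in the model
    root = insert(root, (random.random(), random.random()))

queries = [random.random() for _ in range(20)]
avg = sum(cost(root, s) for s in queries) / len(queries)
beta = (17 ** 0.5 - 3) / 2          # ~0.5616
print(f"avg C_n = {avg:.1f}  vs  n^beta = {n ** beta:.1f}")
```

The average visit count grows like $n^{0.56}$ rather than linearly, which is the sublinear complexity the abstract refers to.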
# Plotting the direction field of a differential equation I have to sketch the direction field for the following differential equation: $$\frac{dy}{dx}=\frac{-0.02 y +0.00002 xy}{0.08 x-0.001xy}$$ This is the code I used, which gives an incorrect plot: StreamPlot[Normalize[{1, (y (-0.016 + 0.00008 x))/(x (0.12 - 0.006 y))}], {x, -200, 200}, {y, -200, 200}, Axes -> True] The following picture shows what I need to get: - I think this is duplicate mathematica.stackexchange.com/questions/8841/… "How can I plot the direction field for a differential equation?" –  Nasser Sep 19 '13 at 0:20 @Nasser The user has already tried StreamPlot as shown there and says the result is incorrect, so I don't think it is a duplicate outright. Thanks for looking for duplicates however! –  Mr.Wizard Sep 19 '13 at 0:22 data = Table[{1, dyx}, {x, 1, 3000, 10}, {y, 1, 150}]; ListStreamPlot[data] comes close. I'm not saying it's an answer, just 'closer'. –  Ian Schumacher Sep 19 '13 at 3:31 I believe this is getting close to what you want: f[x_, y_] := (-0.02 y + 0.00002 x y)/(0.08 x - 0.001 x y) VectorPlot[{1, f[x, y]}, {x, 0, 3100}, {y, 0, 150}, VectorStyle -> Arrowheads[{}]] - VectorPlot is addressed in the post that Nasser linked to... besides, OP's main fault is plotting a different equation than the one that they actually want! –  rm -rf Sep 19 '13 at 0:57 @rm-rf Oh, well I guess we can delete this in a little while then. –  Mr.Wizard Sep 19 '13 at 0:58 VectorPlot[{5 , f[x, y]}, {x, 0, 3100}, {y, 0.1, 150}, PlotRange -> {{0, 3100}, {0, 150}}, VectorScale -> .03] is a little more aligned with the OP's plot –  belisarius Sep 19 '13 at 1:04 By the way, how to get arrows with the equal lengths? Because of the large aspect ratio (3100/150), they always have different lengths... –  ybeltukov Sep 19 '13 at 1:05 @ybeltukov I'll admit I tried for a few minutes to get that (I was using VectorScale) but I failed. I intend to return to it later. Let me know if you figure it out before I do! 
–  Mr.Wizard Sep 19 '13 at 1:11 I think the following is a bit closer f[x_, y_] := (-0.02 y + 0.00002 x y)/(0.08 x - 0.001 x y) {X1, X2} = {0, 3100}; {Y1, Y2} = {0, 150}; AR = 0.5; length = 0.04; VectorPlot[{1, f[x, y]}, {x, X1, X2}, {y, Y1, Y2}, AspectRatio -> AR, PlotRange -> {{X1, X2}, {Y1, Y2}}, VectorStyle -> Arrowheads[{}], VectorScale -> {length, Automatic, If[#5 > 0, 1/Sqrt[#3^2 + (AR(X1-X2)#4/(Y1-Y2))^2],0] &}] This solution gives equal lengths for all the arrows while taking the aspect ratio into account. - +1 for getting it done. :-) –  Mr.Wizard Sep 19 '13 at 5:11 I'll assume that what you really want is a StreamPlot and not a vector plot, because that's what's in your code. The equation your code plots isn't the one stated in the question. But even if we correct this, the StreamPlot looks bad because it cuts off the automatically generated streamlines before they are long enough to outline the shape of the slope field. To remedy this, you can try specifying a minimum length for the streamlines, and also choose them to go through the interesting points in the plot. I've taken your (corrected) StreamPlot and added the necessary StreamPoints option: StreamPlot[ Normalize[{1, (-0.02 y + 0.00002 x y)/(0.08 x - 0.001 x y)}], {x, 0, 3000}, {y, 0, 150}, Axes -> True, StreamPoints -> {Table[{1040, i}, {i, 13, 150, 5}], Automatic, 3000}, PlotRange -> All] -
# Multiplication allows for different units: why can't we multiply apples by apples?

In school, we are taught that addition must use the same units (we can't add 3 apples + 4 bananas). On the other hand, multiplication (and division) is allowed between quantities of different units, and is used quite a lot in physics. And it has a meaning: I can understand what $$m/s$$, $$m^2$$, $$Nm$$, etc. represent.

Now, if multiplication is allowed between different units, what is the result of 3 apples times 4 apples? I know it sounds naïve, but if $$m\cdot m$$ = surface, why is "apples times apples" meaningless? (and no, in my head at least, an "apple square" does not represent anything...)

Do we live in a multidimensional space but only a "one-dimensional-applewise" world? (maybe this question is purely philosophical... if so, let me know and I'll delete it)

• And it has a meaning: I can understand what $m/s$, $m^2$, $Nm$, etc. represent. Can you understand $1\text{ cm}^{3/2}⋅\text{g}^{1/2}⋅\text{s}^{−1}$ (a statcoulomb)? Units do not have to have a “meaning” beyond what they are. Jan 15, 2021 at 20:52
• No physicist would write “$N_\text{apples}=3\text{ apples}$”. They would write “$N_\text{apples}=3$”. Jan 15, 2021 at 21:00
• I see your point, and I can't indeed understand this statcoulomb. But, on the other hand, apart from the understanding, it seems to me that a "square meter" has an existence, a reality (it represents a "surface", something I can see), that a "square apple" lacks... no? Jan 15, 2021 at 21:04
• Quantities which are the number of something are dimensionless. Jan 15, 2021 at 21:11
• I’m voting to close this question because it's not about physics. Jan 16, 2021 at 1:59

Simple, it's $$3\times 4 = 12 ~\text{apple}^2 \equiv \text{Area}$$ in terms of apples: the definition of area doesn't change if you use $$m^2$$ or $$cm^2$$ or $$\text{apple}^2$$, or anything else. So to speak, the mathematical expression is invariant under the choice of units.
One can even say that $$t^2$$ is a hyper-surface in time. No problems at all.

• Thanks for the answer (and for the nice drawing), but I must still insist: I do not want to use apples to measure a length, but just to measure the number of apples. Can't "number of apples" be a physical quantity, like distance, speed, time, electric charge? Jan 15, 2021 at 21:08
• I realise now that I should maybe completely edit my question. Take Coulomb instead of apples. Do we live in a (spatially speaking) 3-dimensional universe that is ("electrical charge-ly" speaking) only one-dimensional? Jan 15, 2021 at 21:12
• @xdutoit Take Coulomb instead of apples It’s common to see, for example, square coulombs. There is nothing 1D about it. Jan 15, 2021 at 21:18
• I gave you an example in my post with $t^2$; area is area, no matter what units you use for it. $A=a \cdot b$ is a valid formula for any unit, be it Coulomb, second, $\text{Hz}$ or anything else. The difference is that if you use a non-spatial unit which doesn't have a direct relationship to length (like Coulomb), then you'll get a "hyper-area", a type of area which exists on some hyper-surface (not spatial, of course). Jan 15, 2021 at 21:18
• I have problems with this answer. You don't make it clear that you start from understanding the unit apple as a length unit derived from some standard apple or some averaging. This is not in any sense the "true" physics answer, which doesn't exist. It's just one way of giving the question a meaning, but it is an arbitrary choice, because the unit apple has no standard meaning in physics, since it would be horribly ill-defined. Jan 16, 2021 at 2:57

One often arrives at delicate points when trying to find a literal connection between mathematical and physical approaches to problems. As G. Smith already commented, the important aspect here is modeling. Roughly, this goes in three steps:

1. You have a real (physical) system and you map it to some mathematical structure.
2.
You analyze the properties of the mathematical structure and find some connections.
3. In the end, you map the result back to reality, giving you a prediction.

Typical example: You want to calculate the path of an arrow, and map its position to a position and momentum variable in spacetime with time evolution embedded in Newtonian laws. You do some math manipulations and get, say, the coordinates describing the impact point of your arrow. You conclude from these coordinates (which are numbers) where the arrow will hit (which is a real position, and someone might go ouch).

The problematic bit about your question is that it asks about a way to intuitively understand some math without stating which physical theory the math should come from. When you talk about apples squared and adding apples and bananas, you are implying there is some canonical (standard) way to incorporate a measurement quantity like "apple" into existing theories of physics. But that isn't the case. You are at liberty to define what you would consider the unit of apple to be in your model.

To make this more concrete, let's challenge the everybody-learned-this-in-school statement you started out from. You say that addition must use the same units. But that's not necessarily true. As long as you haven't specified how you describe your quantities, there are ample options where you can add apples and bananas. If you say that the tuple $$Q = \begin{pmatrix} N_A \\ N_B \end{pmatrix}$$ describes the number of apples $$N_A$$ and bananas $$N_B$$ in a box, then you can easily add them by the usual vector space addition, $$Q + Q' = \begin{pmatrix} N_A \\ N_B \end{pmatrix} + \begin{pmatrix} N_A' \\ N_B' \end{pmatrix} = \begin{pmatrix} N_A + N_A' \\ N_B + N_B' \end{pmatrix}.$$

This isn't meant to say that the statement "You can't add apples and bananas" is wrong. Understood in the typical intuitive way, it is quite right.
It just goes to say that the connection between mathematical formalism and reality isn't "obvious", nor that all alternatives must be arcane bullshit. The tuple notation above is basically what any spreadsheet table does, and it is essential in bookkeeping.

That said, there is no physical reality to "apple$$^2$$" because there is no theory that fixes its meaning. This makes answering this question hard - you can define some sense in which the unit apple can mean something, as Agnius Vasiliauskas did by using apple as a (possibly anisotropic) length unit, just like the size of some bishop's feet was used as a length unit in the middle ages. But that isn't the "counting" unit for apples.

As others already pointed out, there are situations where a squared quantity has a clear-cut meaning. That often comes from proportionality concepts. The product of the masses (with units of mass squared) appears in gravitational laws because doubling either of the masses will double the strength of the force. And doubling the distance will cut the force by four. If you were to change the measurement units, then the numbers in front of them would have to change, too. In that way, when we leave the units in the equation, we have an intuitive and straightforward way to calculate how the numbers change if we apply a different unit convention.

• Thanks for your answer. I realize now that my question is indeed almost philosophical. The way you point out the relation between math and physics is enlightening! Jan 16, 2021 at 7:16

The thing missing from the units given in examples (m, C, etc.) in multiple dimensions is a qualifier of which dimension. For example, when we identify an area and give it a unit, say $$m^2$$, we really mean $$m$$ in direction x times $$m$$ in a direction perpendicular to x. When we say $$C^2$$ (Coulomb), we mean $$C$$ of one entity times $$C$$ of a different entity.
These important qualifiers ("of one entity" and "of a different entity") disappear - become implicit - when we multiply the quantities and assign units. We can, of course, have $$C^2$$ refer to the same apparent entity, like the nucleus of an atom (e.g. $$Z^2$$), but, really, terms like these still imply separate charges existing within the nucleus, or sometimes the charge of an electron interacting with the charge of a nucleus, or sometimes just are indirect but convenient references by proportionality to the surface area, volume, or various symmetries of nuclear constituents (https://en.wikipedia.org/wiki/Semi-empirical_mass_formula). So, for apples, only if there is a quantity proportional to apple would a unit be derived. For example, the quantity of applesauce created when smashing apples together inside a piston is proportional to the volume of apples, which is proportional to the number of apples. So $$Q \propto A$$. But the rate at which applesauce is created by smashing apples together in a shaking machine (?) is proportional to the rate of collision of (different) apples, which is $$R \propto A^2$$. (See reaction rate of colliding particles in a population of particles, so called "Collision Theory") Here, each $$A$$ in $$A^2$$ is implicitly referring to one of each apple in a pair (different apples). The reason we don't keep these qualifiers of units or, in the case of numbering objects, units themselves, is mere convenience; however, when performing dimensional analysis, it may be extremely important to keep the qualifiers so as not to mistake a green apple for a red. A perfectly precise description would include the unit apple, and for that matter which apple. Just as we accept that frequency has a unit $$s^{-1}$$ and leave it to the reader to understand that we really mean $$\frac{\text{cycles of whatever wave we're discussing}}{s}$$. 
• Thanks for the answer and for the nice example of the applesauce maker (I like how a naïve example can be taken literally and can lead to meaningful extensions...) Jan 16, 2021 at 7:15 If $$apple$$ is a valid unit then $$apple^2$$ is also a valid unit. Simply because it is valid does not imply that it is meaningful. Whether or not a unit is meaningful has to do with whether it is used in any physical formulas. I don’t know of any formulas that involve the unit $$apple$$, nor any that take products of quantities measured in $$apples$$. So to my knowledge it is not meaningful. If there were such a formula then the meaning of the unit $$apple^2$$ would be as a reference standard for the quantity described by the corresponding variable in the formula. You cannot talk about the meaning of a unit in the absence of a physical formula using that unit. For example the unit $$\text{kg m}^2 \text{ s}^{-2}$$ means different things in different contexts.
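The thread's two rules (any units may be multiplied, while addition requires identical dimensions) can be condensed into a toy quantity type that tracks integer exponents per named base unit. This is a hypothetical sketch for illustration, not any standard units library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A number together with integer exponents for each named base unit."""
    value: float
    units: tuple = ()  # sorted tuple of (unit_name, exponent) pairs

    def __mul__(self, other):
        # Multiplication merges exponents: any two units may be combined.
        exps = dict(self.units)
        for name, exp in other.units:
            exps[name] = exps.get(name, 0) + exp
        merged = tuple(sorted((n, e) for n, e in exps.items() if e != 0))
        return Quantity(self.value * other.value, merged)

    def __add__(self, other):
        # Addition demands identical dimensions: "apples + bananas" fails.
        if self.units != other.units:
            raise TypeError(f"cannot add {self.units} to {other.units}")
        return Quantity(self.value + other.value, self.units)

apple = Quantity(1.0, (("apple", 1),))
banana = Quantity(1.0, (("banana", 1),))

# 3 apples times 4 apples: formally valid, with units apple^2
area_of_apples = (Quantity(3.0) * apple) * (Quantity(4.0) * apple)
```

With this, 3 apples times 4 apples is the perfectly well-formed quantity 12 apple²; whether apple² *means* anything is, as the answers stress, a modeling question rather than a mathematical one.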
# IEEE Transactions on Signal Processing

Displaying Results 1 - 25 of 51.

• Publication Year: 2010, Page(s):C1 - C4

• ### IEEE Transactions on Signal Processing publication information
Publication Year: 2010, Page(s): C2

• ### Learning Graphical Models for Hypothesis Testing and Classification
Publication Year: 2010, Page(s):5481 - 5495. Cited by: Papers (19), Patents (1)
Sparse graphical models have proven to be a flexible class of multivariate probability models for approximating high-dimensional distributions. In this paper, we propose techniques to exploit this modeling ability for binary classification by discriminatively learning such models from labeled training data, i.e., using both positive and negative samples to optimize for the structures of the two mo...

• ### Data-Aided SNR Estimation in Time-Variant Rayleigh Fading Channels
Publication Year: 2010, Page(s):5496 - 5507. Cited by: Papers (18)
This paper addresses the data-aided (DA) signal-to-noise ratio (SNR) estimation for constant modulus modulations over time-variant flat Rayleigh fading channels. The time-variant fading channel is modeled by considering the Jakes' model and the first order autoregressive (AR1) model. Closed-form expressions of the Cramér-Rao bound (CRB) for DA SNR estimation are derived for known and unkno...

• ### Enhanced Illumination Sensing Using Multiple Harmonics for LED Lighting Systems
Publication Year: 2010, Page(s):5508 - 5522. Cited by: Papers (12)
This paper considers frequency division multiplexing (FDM) based illumination sensing in light emitting diode (LED) lighting systems. The purpose of illumination sensing is to identify the illumination contributions of spatially distributed LEDs at a sensor location, within a limited response time. In the FDM scheme, LEDs render periodical illumination pulse trains at different frequencies with pr...

• ### Bearings-Only Target Motion Analysis via Instrumental Variable Estimation
Publication Year: 2010, Page(s):5523 - 5533. Cited by: Papers (10)
This paper deals with the instrumental variable (IV) estimation for the problem of target motion analysis from bearing-only measurements (BO-TMA). By taking asymptotical analysis of the IV estimation, a systematic method for developing consistent IV estimate and the sufficient condition for its asymptotical normality are proposed. The asymptotical covariance of IV estimate is also derived explicit...

• ### Barankin-Type Lower Bound on Multiple Change-Point Estimation
Publication Year: 2010, Page(s):5534 - 5549. Cited by: Papers (7)
We compute lower bounds on the mean-square error of multiple change-point estimation. In this context, the parameters are discrete and the Cramér-Rao bound is not applicable. Consequently, we focus on computing the Barankin bound (BB), the greatest lower bound on the covariance of any unbiased estimator, which is still valid for discrete parameters. In particular, we compute the multi-para...

• ### On Construction and Simulation of Autoregressive Sources With Near-Laplace Marginals
Publication Year: 2010, Page(s):5550 - 5559. Cited by: Papers (1)
In this paper, we focus upon the problem of modeling and simulation of stationary non-Gaussian time series. In particular, we consider a first order autoregressive process whose marginal distribution is close to the Laplace density. This model allows us to simulate correlated non-Gaussian signals typically appearing in speech analysis, compression, and noise synthesis. The Monte Carlo rejection me...

• ### A Hierarchical Bayesian Model for Frame Representation
Publication Year: 2010, Page(s):5560 - 5571. Cited by: Papers (19)
In many signal processing problems, it is fruitful to represent the signal under study in a frame. If a probabilistic approach is adopted, it then becomes necessary to estimate the hyperparameters characterizing the probability distribution of the frame coefficients. This problem is difficult since in general the frame synthesis operator is not bijective. Consequently, the frame coefficients are n...

• ### Adaptive Target Detection With Application to Through-the-Wall Radar Imaging
Publication Year: 2010, Page(s):5572 - 5583. Cited by: Papers (22)
An adaptive detection scheme is proposed for radar imaging. The proposed detector is a postprocessing scheme derived for one-, two-, and three-dimensional data, and applied to through-the-wall imaging using synthetic aperture radar. The target image statistics depend on the target three-dimensional orientation and position. The statistics can also vary with the standoff distance of the imaging sys...

• ### Effortless Critical Representation of Laplacian Pyramid
Publication Year: 2010, Page(s):5584 - 5596. Cited by: Papers (5)
The Laplacian pyramid (LP) is a multiresolution representation introduced originally for images, and it has been used in many applications. A major shortcoming of the LP representation is that it is oversampled. The dependency among the LP coefficients is studied in this paper. It is shown that whenever the LP compression filter is interpolatory, the redundancy in the LP coefficients can be remove...

• ### Two-Dimensional $2\times$ Oversampled DFT Modulated Filter Banks and Critically Sampled Modified DFT Modulated Filter Banks
Publication Year: 2010, Page(s):5597 - 5611. Cited by: Papers (7)
This paper investigates two-dimensional (2D) 2× oversampled DFT modulated filter banks and 2D critically sampled modified DFT (MDFT) modulated filter banks as well as their design. The structure and perfect reconstruction (PR) condition of 2D 2× oversampled DFT modulated filter banks are presented in terms of the polyphase decompositions of prototype filters (PFs). In the double-prototype ca...

• ### A Spectral Approach for Sifting Process in Empirical Mode Decomposition
Publication Year: 2010, Page(s):5612 - 5623. Cited by: Papers (14)
In this paper, we propose an alternative to the algorithmic definition of the sifting process used in the original Huang's empirical mode decomposition (EMD) method. Although it has been proven to be particularly effective in many applications, the EMD method has several drawbacks. The major problem with EMD is the lack of a theoretical framework, which leads to difficulties for the characterization and ...

• ### Regularized Sampling of Multiband Signals
Publication Year: 2010, Page(s):5624 - 5638. Cited by: Papers (4)
This paper presents a regularized sampling method for multiband signals that makes it possible to approach the Landau limit, while keeping the sensitivity to noise and perturbations at a low level. The method is based on band-limited windowing, followed by trigonometric approximation in consecutive time intervals. The key point is that the trigonometric approximation “inherits” the ...

• ### No-Go Theorem for Linear Systems on Bounded Bandlimited Signals
Publication Year: 2010, Page(s):5639 - 5654
In this paper we analyze the existence of efficient bandpass-type systems for the space of bounded bandlimited signals. Here efficient means that the system fulfills the following properties: every output signal contains only frequencies within the passband; every input signal that has only frequencies within the passband is not disturbed by the system; and the system is stable. Without using any ...

• ### A Closed-Form Robust Chinese Remainder Theorem and Its Performance Analysis
Publication Year: 2010, Page(s):5655 - 5666. Cited by: Papers (46)
The Chinese remainder theorem (CRT) reconstructs an integer from its multiple remainders, and is well known to be non-robust in the sense that a small error in a remainder may cause a large error in the reconstruction. A robust CRT has recently been proposed for the case when all the moduli have a common factor; that robust CRT is a searching-based algorithm and no closed form is given. In this paper, a closed-form ro...

• ### Distributed Learning in Multi-Armed Bandit With Multiple Players
Publication Year: 2010, Page(s):5667 - 5681. Cited by: Papers (121)
We formulate and study a decentralized multi-armed bandit (MAB) problem. There are M distributed players competing for N independent arms. Each arm, when played, offers i.i.d. reward according to a distribution with an unknown parameter. At each time, each player chooses one arm to play without exchanging observations or any information with other players. Players choosing the same arm collide, an...

• ### Single Antenna Power Measurements Based Direction Finding
Publication Year: 2010, Page(s):5682 - 5692. Cited by: Papers (9)
In this paper, the problem of estimating the direction-of-arrival (DOA) of multiple uncorrelated sources from single antenna power measurements is addressed. Utilizing the fact that the antenna pattern is bandlimited and can be modeled as a finite sum of complex exponentials, we first show that the problem can be transformed into a frequency estimation problem. Then, we explain how the annihilating fi...

• ### Tensor Algebra and Multidimensional Harmonic Retrieval in Signal Processing for MIMO Radar
Publication Year: 2010, Page(s):5693 - 5705. Cited by: Papers (69), Patents (1)
Detection and estimation problems in multiple-input multiple-output (MIMO) radar have recently drawn considerable interest in the signal processing community. Radar has long been a staple of signal processing, and MIMO radar presents challenges and opportunities in adapting classical radar imaging tools and developing new ones. Our aim in this article is to showcase the potential of tensor algebra...

• ### Low Complexity Equalization for Doubly Selective Channels Modeled by a Basis Expansion
Publication Year: 2010, Page(s):5706 - 5719. Cited by: Papers (33), Patents (4)
We propose a novel equalization method for doubly selective wireless channels, whose taps are represented by an arbitrary Basis Expansion Model (BEM). We view such a channel in the time domain as a sum of product-convolution operators created from the basis functions and the BEM coefficients. Equivalently, a frequency-domain channel can be represented as a sum of convolution-products. The product-...

• ### Tensor-Based Channel Estimation and Iterative Refinements for Two-Way Relaying With Multiple Antennas and Spatial Reuse
Publication Year: 2010, Page(s):5720 - 5735. Cited by: Papers (65)
Relaying is one of the key technologies to satisfy the demands of future mobile communication systems. In particular, two-way relaying is known to exploit the radio resources in a very efficient manner. In this contribution, we consider two-way relaying with amplify-and-forward (AF) MIMO relays. Since AF relays do not decode the signals, the separation of the data streams has to be performed by th...

• ### A Class of Channels Resulting in Ill-Convergence for CMA in Decision Feedback Equalizers
Publication Year: 2010, Page(s):5736 - 5743. Cited by: Papers (4)
This paper analyzes the convergence of the constant modulus algorithm (CMA) in a decision feedback equalizer using only a feedback filter. Several works had already observed that the CMA presented a better performance than the decision directed algorithm in the adaptation of the decision feedback equalizer, but theoretical analysis always proved to be difficult, especially due to the analytical difficul...

• ### An Optimal Basis of Band-Limited Functions for Signal Analysis and Design
Publication Year: 2010, Page(s):5744 - 5755. Cited by: Papers (4)
This paper studies signal concentration in the time and frequency domains using the general constrained variational method of Franks. The minimum $k$th ($k = 0, 2, 4, \dots$) moment time-duration measure for band-limited signals is formulated. A complete, orthonormal set of band-limited functions in $L_2([-W,W])$ with the minimum fourth-moment time-duration measure is obtained. Numerical investi...

• ### A Monte Carlo Implementation of the SAGE Algorithm for Joint Soft-Multiuser Decoding, Channel Parameter Estimation, and Code Acquisition
Publication Year: 2010, Page(s):5756 - 5766. Cited by: Papers (2)
This paper presents an iterative scheme for joint timing acquisition, multi-channel parameter estimation, and multiuser soft-data decoding. As an example, an asynchronous convolutionally coded direct-sequence code-division multiple-access system is considered. The proposed receiver is derived within the space-alternating generalized expectation-maximization framework, implying that convergence in ...

• ### A Nondata-Aided SNR Estimation Technique for Multilevel Modulations Exploiting Signal Cyclostationarity
Publication Year: 2010, Page(s):5767 - 5778. Cited by: Papers (9)
Signal-to-noise ratio (SNR) estimators of linear modulation schemes usually operate at one sample per symbol at the matched filter output. In this paper we propose a new method for estimating the SNR in the complex additive white Gaussian noise (AWGN) channel that operates directly on the oversampled cyclostationary signal at the matched filter input. Exploiting cyclostationarity proves to be adva...

## Aims & Scope

IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals.

## Meet Our Editors

Editor-in-Chief: Sergios Theodoridis, University of Athens
# nLab singular homology

# Contents

## Idea

The singular homology of a topological space $X$ is the simplicial homology of its singular simplicial complex: a singular $n$-chain on $X$ is a formal linear combination of singular simplices $\sigma : \Delta^n \to X$, and a singular $n$-cycle is such a chain whose oriented boundary in $X$ vanishes. Two singular chains are homologous if they differ by a boundary. The singular homology of $X$ in degree $n$ is the group of $n$-cycles modulo those that are boundaries.

Singular homology of a topological space coincides with its ordinary homology as defined more abstractly (see at generalized homology theory).

## Definition

Let $X \in$ Top be a topological space. Write $Sing X \in$ sSet for its singular simplicial complex.

###### Definition

For $n \in \mathbb{N}$, a singular $n$-chain on $X$ is an element in the free abelian group $\mathbb{Z}[(Sing X)_n]$: a formal linear combination of singular simplices in $X$.

###### Remark

These are the chains on the simplicial set $Sing X$. The groups of singular chains combine to the simplicial abelian group $\mathbb{Z}[Sing X] \in Ab^{\Delta^{op}}$.

###### Definition

$C_\bullet(X) \coloneqq C_\bullet(\mathbb{Z}[Sing X]) \in Ch_\bullet$

is the singular complex of $X$. Its chain homology is the ordinary singular homology of $X$. One usually writes $H_n(X, \mathbb{Z})$ or just $H_n(X)$ for the singular homology of $X$ in degree $n$. See also at ordinary homology.
###### Remark

So we have

$C_\bullet(X) = [ \cdots \stackrel{\partial_2}{\to} \mathbb{Z}[(Sing X)_2] \stackrel{\partial_1}{\to} \mathbb{Z}[(Sing X)_1] \stackrel{\partial_0}{\to} \mathbb{Z}[(Sing X)_0] ]$

where the differentials are defined on basis elements $\sigma \in (Sing X)_n$ by

$\partial_n \sigma = - \sum_{i = 0}^n (-1)^i d_i \sigma$

(with $d_i$ the $i$th simplicial face map) and then extended linearly. (One may change the global signs and obtain a quasi-isomorphic complex, in particular with the same homology groups.)

###### Remark

This means that a singular chain is a cycle if the formal linear combination of the oriented boundaries of all its constituent singular simplices sums to 0. See the basic examples below.

More generally, for $R$ any unital ring one can form the degreewise free module $R[Sing X]$ over $R$. The corresponding homology is the singular homology with coefficients in $R$, denoted $H_n(X,R)$.

###### Definition

Given a continuous map $f : X \to Y$ between topological spaces, and given $n \in \mathbb{N}$, every singular $n$-simplex $\sigma : \Delta^n \to X$ in $X$ is sent to a singular $n$-simplex

$f_* \sigma : \Delta^n \stackrel{\sigma}{\to} X \stackrel{f}{\to} Y$

in $Y$. This is called the push-forward of $\sigma$ along $f$. Accordingly there is a push-forward map on groups of singular chains

$(f_*)_n : C_n(X) \to C_n(Y) \,.$

###### Proposition

These push-forward maps make all diagrams of the form

$\array{ C_{n+1}(X) &\stackrel{(f_*)_{n+1}}{\to}& C_{n+1}(Y) \\ \downarrow^{\mathrlap{\partial^X_n}} && \downarrow^{\mathrlap{\partial^Y_n}} \\ C_n(X) &\stackrel{(f_*)_n}{\to}& C_n(Y) }$

commute. In other words, push-forward along $f$ constitutes a chain map

$f_* : C_\bullet(X) \to C_\bullet(Y) \,.$

###### Proof

It is in fact evident that push-forward yields a functor of singular simplicial complexes

$f_* : Sing X \to Sing Y \,.$

From this the statement follows since $\mathbb{Z}[-] : sSet \to sAb$ is a functor.
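The defining property of the differential, namely that the boundary of a boundary vanishes, can be checked mechanically on simplicial chains. The following illustrative sketch (not part of the article) represents a chain as a dict from vertex tuples to integer coefficients and takes the boundary as the alternating sum over omitted vertices; the global sign convention does not affect the conclusion.

```python
from collections import defaultdict

def boundary(chain):
    """Alternating-sum boundary of a formal chain.

    A chain is a dict {simplex: coefficient}, where a simplex is a tuple
    of vertex labels (v_0, ..., v_n) and coefficients are integers.
    """
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]  # d_i: omit the i-th vertex
            out[face] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

sigma = {(0, 1, 2): 1}        # a single 2-simplex with vertices 0, 1, 2
d_sigma = boundary(sigma)     # its three oriented edges, with alternating signs
dd_sigma = boundary(d_sigma)  # empty: the boundary of a boundary vanishes
```

Running this on the 2-simplex $(0,1,2)$ reproduces, up to the global sign convention, the worked 2-simplex example in the Examples section.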
Accordingly we have:

###### Proposition

Sending a topological space to its singular chain complex $C_\bullet(X)$, def. 2, and a continuous map to its push-forward chain map, prop. 1, constitutes a functor

$C_\bullet(-,R) : Top \to Ch_\bullet(R Mod)$

from the category Top to the category of chain complexes.

In particular for each $n \in \mathbb{N}$ singular homology extends to a functor

$H_n(-,R) : Top \to R Mod \,.$

## Examples

### Basic examples

###### Example

Let $X$ be a topological space. Let $\sigma^1 : \Delta^1 \to X$ be a singular 1-simplex, regarded as a 1-chain

$\sigma^1 \in C_1(X) \,.$

Then its boundary $\partial \sigma^1 \in C_0(X)$ is

$\partial \sigma^1 = \sigma(0) - \sigma(1)$

or graphically (using notation as for orientals)

$\partial \left( \sigma(0) \stackrel{\sigma}{\to} \sigma(1) \right) = (\sigma(0)) - (\sigma(1)) \,.$

Let $\sigma^2 : \Delta^2 \to X$ be a singular 2-simplex, regarded as a 2-chain. The boundary is

$\partial \left( \array{ && \sigma(1) \\ & {}^{\mathllap{\sigma(0,1)}}\nearrow & \Downarrow^{\mathrlap{\sigma}}& \searrow^{\mathrlap{\sigma(1,2)}} \\ \sigma(0) &&\underset{\sigma(0,2)}{\to}&& \sigma(2) } \right) = \left( \array{ && \sigma(1) \\ & {}^{\mathllap{\sigma(0,1)}}\nearrow & & \\ \sigma(0) } \right) - \left( \array{ && \\ & & & \\ \sigma(0) &\underset{\sigma(0,2)}{\to}& \sigma(2) } \right) + \left( \array{ && \sigma(1) \\ & & & \searrow^{\mathrlap{\sigma(1,2)}} \\ && && \sigma(2) } \right) \,.$

Hence the boundary of the boundary is

\begin{aligned} \partial \partial \sigma &= \partial \left( \left( \array{ && \sigma(1) \\ & {}^{\mathllap{\sigma(0,1)}}\nearrow & & \\ \sigma(0) } \right) - \left( \array{ && \\ & & & \\ \sigma(0) &\underset{\sigma(0,2)}{\to}& \sigma(2) } \right) + \left( \array{ && \sigma(1) \\ & & & \searrow^{\mathrlap{\sigma(1,2)}} \\ && && \sigma(2) } \right) \right) \\ & = \left( \array{ && \\ & & & \\ \sigma(0) } \right) - \left( \array{ && \sigma(1) \\ & & & \\ } \right) - \left( \array{ && \\ & & & \\ \sigma(0) && } \right) + \left(
\array{ && \\ & & & \\ && \sigma(2) } \right) + \left( \array{ && \sigma(1) \\ & & & \\ && && } \right) - \left( \array{ && \\ & & & \\ && && \sigma(2) } \right) \\ & = 0 \end{aligned} For more illustrations see for instance (Ghrist, (4.5)). ### Homology of cells: disks and spheres ###### Proposition For all $n \in \mathbb{N}$ the reduced singular homology of the $n$-sphere $S^n$ is $\tilde H_k(S^n) = \left\{ \array{ \mathbb{Z} & if\; k = n \\ 0 & otherwise } \right. \,.$ ###### Proof The $n$-sphere may be realized as the pushout $S^n \simeq D^n/S^{n-1} \coloneqq D^{n} \coprod_{S^{n-1}} *$ which is the $n$-ball with its boundary $(n-1)$-sphere identified with the point. The inclusion $S^{n-1} \hookrightarrow D^n$ is a “good pair” in the sense of def. 5, and so the long exact sequence from prop. 7 yields a long exact sequence $\cdots \to \tilde H_{k+1}(S^n) \to \tilde H_k(S^{n-1}) \to \tilde H_k(D^n) \to \tilde H_k(S^n) \to \tilde H_{k-1}(S^{n-1}) \to \cdots \,.$ Since the disks are all contractible topological spaces we have $H_k(D^n) \simeq 0$ for all $k,n$ by this example at reduced homology. This means that in the above long exact sequence all the morphisms $\tilde H_{k+1}(S^{n+1}) \to \tilde H_k(S^n)$ are isomorphisms, for all $k \in \mathbb{N}$. Since $\tilde H_n(S^0) \simeq \left\{ \array{ \mathbb{Z} & if \; n = 0 \\ 0 & otherwise } \right.$ (by this example at reduced homology) the statement follows by induction on $n$. ## Properties ### Homotopy invariance Singular homology is homotopy invariant: ###### Proposition If $f : X \to Y$ is a continuous map between topological spaces which is a homotopy equivalence, then the induced morphism on singular homology groups $H_n(f) : H_n(X) \to H_n(Y)$ is an isomorphism. In other words: the singular chain functor of prop. 2 sends weak homotopy equivalences to quasi-isomorphisms. A proof (via CW approximations) is spelled out for instance in (Hatcher, prop. 4.21). 
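Since singular homology is homotopy invariant and agrees with the simplicial homology of a triangulation, small instances of the sphere computation above can be cross-checked numerically. As an illustrative sketch (not from the article): over the rationals, the Betti numbers of the minimal triangulation of the circle fall out of the rank of a single boundary matrix.

```python
import numpy as np

# Minimal triangulation of the circle S^1: a hollow triangle with
# vertices 0, 1, 2 and edges (0,1), (1,2), (0,2); there are no 2-simplices.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]

# Boundary matrix d1 : C_1 -> C_0, with d1(a, b) = (b) - (a)
d1 = np.zeros((len(vertices), len(edges)))
for j, (a, b) in enumerate(edges):
    d1[a, j] = -1.0
    d1[b, j] = 1.0

rank_d1 = np.linalg.matrix_rank(d1)
b0 = len(vertices) - rank_d1  # dim C_0 - rank d1  (there is no d0)
b1 = len(edges) - rank_d1     # dim ker d1 - rank d2, and d2 = 0 here
```

This yields $b_0 = b_1 = 1$, matching $\tilde H_k(S^1)$ as computed above (rational ranks only see Betti numbers, which suffices here since the homology of spheres is free). The sign convention $\partial(a \to b) = (b) - (a)$ differs from the article's by a global sign, which changes neither ranks nor homology.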
### Relation to homotopy groups The singular homology groups of a topological space serve to some extent as an approximation to the homotopy groups of that space. ###### Definition (Hurewicz homomorphism) For $(X,x)$ a pointed topological space, the Hurewicz homomorphism is the function $\Phi : \pi_k(X,x) \to H_k(X)$ from the $k$th homotopy group of $(X,x)$ to the $k$th singular homology group defined by sending $\Phi : (f : S^k \to X)_{\sim} \mapsto f_*[S^k]$ a representative singular $k$-sphere $f$ in $X$ to the push-forward along $f$ of the fundamental class $[S^k] \in H_k(S^k) \simeq \mathbb{Z}$. ###### Proposition For $X$ a topological space the Hurewicz homomorphism in degree 0 exhibits an isomorphism between the free abelian group $\mathbb{Z}[\pi_0(X)]$ on the set of connected components of $X$ and the degree-0 singular homology: $\mathbb{Z}[\pi_0(X)] \simeq H_0(X) \,.$ Since a homotopy group in positive degree depends on the homotopy type of the connected component of the base point, while the singular homology does not depend on a basepoint, it is interesting to compare these groups only for the case that $X$ is connected. ###### Proposition For $X$ a connected topological space the Hurewicz homomorphism in degree 1 $\Phi : \pi_1(X,x) \to H_1(X)$ is surjective. Its kernel is the commutator subgroup of $\pi_1(X,x)$. Therefore it induces an isomorphism from the abelianization $\pi_1(X,x)^{ab} \coloneqq \pi_1(X,x)/[\pi_1,\pi_1]$: $\pi_1(X,x)^{ab} \stackrel{\simeq}{\to} H_1(X) \,.$ For higher connected $X$ we have the ###### Theorem If $X$ is (n-1)-connected for $n \geq 2$ then $\Phi : \pi_n(X,x) \to H_n(X)$ is an isomorphism. This is known as the Hurewicz theorem. ### Relation to relative homology For the present purpose one makes the following definition. ###### Definition A topological subspace inclusion $A \hookrightarrow X$ in Top is called a good pair if 1. $A$ is inhabited and closed in $X$; 2. 
$A$ has a neighbourhood in $X$ of which it is a deformation retract. Write $X/A$ for the cokernel of the inclusion, hence for the pushout $\array{ A &\hookrightarrow& X \\ \downarrow && \downarrow \\ * &\to& X/A }$ in Top. ###### Proposition If $A \hookrightarrow X$ is a good pair, def. 5, then the singular homology of $X/A$ coincides with the relative homology of $X$ relative to $A$. In particular, therefore, it fits into a long exact sequence of the form $\cdots \to \tilde H_n(A) \to \tilde H_n(X) \to \tilde H_n(X/A) \to \tilde H_{n-1}(A) \to \tilde H_{n-1}(X) \to \tilde H_{n-1}(X/A) \to \cdots \,.$ For instance (Hatcher, theorem 2.13). ### Relation to generalized homology Singular homology computes the generalized homology with coefficients in the Eilenberg-MacLane spectrum $H \mathbb{Z}$ or $H R$. ## References ### General Lecture notes include Textbook discussion in the context of homological algebra is around Application 1.1.4 of and in the context of algebraic topology in chapter 2.1 of and chapter 4 of Discussion in the context of computing homotopy groups is in Lecture notes include
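As a concrete sanity check on these definitions, here is a small Python sketch (my own illustration, not from the article) that builds the boundary matrix of the simplicial circle, i.e. the boundary of a triangle, and recovers the Betti numbers $b_0 = b_1 = 1$ matching $H_k(S^1)$ above. It uses the convention $\partial[v_0,v_1] = (v_1) - (v_0)$; the opposite sign convention used in the Example above changes nothing about the ranks.

```python
from fractions import Fraction

# Simplicial circle: 3 vertices, 3 edges, no 2-simplices.
vertices = [0, 1, 2]
edges = [(0, 1), (0, 2), (1, 2)]

# Boundary matrix d1: rows = vertices, columns = edges.
d1 = [[Fraction(0)] * len(edges) for _ in vertices]
for j, (a, b) in enumerate(edges):
    d1[a][j] -= 1   # -(v_0)
    d1[b][j] += 1   # +(v_1)

def rank(M):
    """Rank over the rationals by Gauss-Jordan elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

r1 = rank(d1)
b0 = len(vertices) - r1   # dim H_0 = dim C_0 - rank d1
b1 = len(edges) - r1      # dim H_1 = dim ker d1 (no 2-simplices, so rank d2 = 0)
print(b0, b1)  # -> 1 1
```

The rank computation is exact (rational arithmetic), so the Betti numbers come out exactly as the proposition on $\tilde H_k(S^n)$ predicts for $n = 1$.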
Resources tagged with Working systematically, similar to Dicey Operations (333 results). Two and Two Stage: 2 and 3 Challenge Level: How many solutions can you find to this sum? Each of the different letters stands for a different number. Football Sum Stage: 3 Challenge Level: Find the values of the nine letters in the sum: FOOT + BALL = GAME Cayley Stage: 3 Challenge Level: The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"? Special Numbers Stage: 3 Challenge Level: My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be? Stage: 3 Challenge Level: How many different symmetrical shapes can you make by shading triangles or squares? Weights Stage: 3 Challenge Level: Different combinations of the weights available allow you to make different totals. Which totals can you make? Number Daisy Stage: 3 Challenge Level: Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25? Stage: 3 Challenge Level: If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products. Why? Inky Cube Stage: 2 and 3 Challenge Level: This cube has ink on each face which leaves marks on paper as it is rolled. Can you work out what is on each face and the route it has taken? Twinkle Twinkle Stage: 2 and 3 Challenge Level: A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour. Sociable Cards Stage: 3 Challenge Level: Move your counters through this snake of cards and see how far you can go. Are you surprised by where you end up? 
Medal Muddle Stage: 3 Challenge Level: Countries from across the world competed in a sports tournament. Can you devise an efficient strategy to work out the order in which they finished? More Plant Spaces Stage: 2 and 3 Challenge Level: This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items. Pair Sums Stage: 3 Challenge Level: Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15 What are the five numbers? Crossing the Town Square Stage: 2 and 3 Challenge Level: This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares. More Children and Plants Stage: 2 and 3 Challenge Level: This challenge extends the Plants investigation so now four or more children are involved. 9 Weights Stage: 3 Challenge Level: You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance? How Old Are the Children? Stage: 3 Challenge Level: A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?" More Magic Potting Sheds Stage: 3 Challenge Level: The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it? Ones Only Stage: 3 Challenge Level: Find the smallest whole number which, when multiplied by 7, gives a product consisting entirely of ones. M, M and M Stage: 3 Challenge Level: If you are given the mean, median and mode of five positive whole numbers, can you find the numbers? Ben's Game Stage: 3 Challenge Level: Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. 
After this they all had the same number of counters. Counting on Letters Stage: 3 Challenge Level: The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern? Isosceles Triangles Stage: 3 Challenge Level: Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw? Difference Sudoku Stage: 3 and 4 Challenge Level: Use the differences to find the solution to this Sudoku. Summing Consecutive Numbers Stage: 3 Challenge Level: Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way? Stage: 3 Challenge Level: Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar". American Billions Stage: 3 Challenge Level: Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3... Masterclass Ideas: Working Systematically Stage: 2 and 3 Challenge Level: A package contains a set of resources designed to develop students’ mathematical thinking. This package places a particular emphasis on “being systematic” and is designed to meet. . . . More on Mazes Stage: 2 and 3 There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper. Colour Islands Sudoku Stage: 3 Challenge Level: An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine. One to Fifteen Stage: 2 Challenge Level: Can you put the numbers from 1 to 15 on the circles so that no consecutive numbers lie anywhere along a continuous straight line? 
Squares in Rectangles Stage: 3 Challenge Level: A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all? Stage: 3 Challenge Level: A few extra challenges set by some young NRICH members. Consecutive Numbers Stage: 2 and 3 Challenge Level: An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore. Consecutive Negative Numbers Stage: 3 Challenge Level: Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? Oranges and Lemons, Say the Bells of St Clement's Stage: 3 Challenge Level: Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own. Making Maths: Double-sided Magic Square Stage: 2 and 3 Challenge Level: Make your own double-sided magic square. But can you complete both sides once you've made the pieces? Creating Cubes Stage: 2 and 3 Challenge Level: Arrange 9 red cubes, 9 blue cubes and 9 yellow cubes into a large 3 by 3 cube. No row or column of cubes must contain two cubes of the same colour. Coins Stage: 3 Challenge Level: A man has 5 coins in his pocket. Given the clues, can you work out what the coins are? First Connect Three for Two Stage: 2 and 3 Challenge Level: First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line. Tea Cups Stage: 2 and 3 Challenge Level: Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour. Intersection Sums Sudoku Stage: 2, 3 and 4 Challenge Level: A Sudoku with clues given as sums of entries. Fault-free Rectangles Stage: 2 Challenge Level: Find out what a "fault-free" rectangle is and try to make some of your own. 
Stage: 3 and 4 Challenge Level: Four numbers on an intersection that need to be placed in the surrounding cells. That is all you need to know to solve this sudoku. Cuboids Stage: 3 Challenge Level: Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all? Corresponding Sudokus Stage: 3, 4 and 5 This second Sudoku article discusses "Corresponding Sudokus" which are pairs of Sudokus with terms that can be matched using a substitution rule. Being Thoughtful - Primary Number Stage: 1 and 2 Challenge Level: Number problems at primary level that require careful consideration. 1 to 8 Stage: 2 Challenge Level: Place the numbers 1 to 8 in the circles so that no consecutive numbers are joined by a line. Factors and Multiple Challenges Stage: 3 Challenge Level: This package contains a collection of problems from the NRICH website that could be suitable for students who have a good understanding of Factors and Multiples and who feel ready to take on some. . . .
# smallest number of vertices of degree $1$ in tree with $3$ vertices of degree $4$ and $2$ of degree $5$ Find, with proof, the smallest number of vertices of degree $$1$$ in a tree with $$3$$ vertices of degree $$4$$ and $$2$$ of degree $$5$$. Provide an example of such a tree. I'm not sure how to find this. I know that every tree with at least $$2$$ vertices has at least $$2$$ leaves, trees are bipartite, trees have no cycles. Also, every tree is connected. But I'm not sure how to make use of these properties to get the desired proof. So far, the smallest value I've been able to come up with is $$14,$$ though I think that number can be made smaller. My basic idea is to make it so that as many vertices are as "tightly joined or connected" as possible, so as to maximize the number of vertices with degree greater than $$1$$. Hint: The main property of a tree you want to use here is its number of edges. A tree with $$n$$ vertices has $$m=n-1$$ edges. You can use the fact that the sum of the degrees is $$2m$$ (twice the number of edges) to prove how many degree $$1$$ vertices you need. Let $$T=(V,E)$$ be a tree with $$n$$ vertices. Consider the sum $$\sum_{u\in V} d(u) = 2|E| = 2n-2$$. From this we get that $$\sum_{u\in V}(d(u) - 2) = -2$$. Note that vertices of degree $$1$$ contribute $$-1$$ to the left and vertices of degree $$2$$ contribute $$0$$. All other vertices add a positive amount to the left. If we have three vertices of degree $$4$$ and two of degree $$5$$, these give a sum of $$3\cdot(4-2) + 2\cdot(5-2) = 12$$. Thus your figure of $$14$$ was correct ($$-14 + 12 = -2$$): you need at least $$14$$ degree $$1$$ vertices to make the total left hand side sum to $$-2$$. Since your minimum number was correct, I assume you've found a tree that works as an example (any tree with exactly three degree $$4$$, two degree $$5$$ and 14 degree $$1$$ vertices will work).
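A quick computational check of both the bound and an example (a hypothetical construction, standard library only): chain the five high-degree vertices in a path and attach leaves until each reaches its target degree. The result is a connected graph with $n-1$ edges, hence a tree, and it has exactly 14 leaves.

```python
from collections import defaultdict, deque

# Chain the five internal vertices A..E (target degrees 4,4,4,5,5),
# then hang leaves to fill each vertex up to its target degree.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
target = {"A": 4, "B": 4, "C": 4, "D": 5, "E": 5}

deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

leaf_id = 0
for v, d in target.items():
    while deg[v] < d:
        leaf = f"L{leaf_id}"
        leaf_id += 1
        edges.append((v, leaf))
        deg[v] += 1
        deg[leaf] += 1

n = len(deg)
assert len(edges) == n - 1       # a tree has n-1 edges

# connectivity check via BFS: together with n-1 edges, this proves "tree"
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
seen, q = {"A"}, deque(["A"])
while q:
    x = q.popleft()
    for y in adj[x]:
        if y not in seen:
            seen.add(y)
            q.append(y)
assert len(seen) == n            # connected

leaves = sum(1 for v in deg if deg[v] == 1)
print(leaves)  # -> 14
```

The degree-sum identity in the answer says 14 is a lower bound; this construction shows it is attained.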
# Exponential Distribution of Independent Events A computer system has two independent processors, and functions as long as at least one processor has not failed. The times to failure of each processor are independent, each exponentially distributed with parameter $1$. Let $T_{1}$ be the time when the first processor fails, and let $T_{2}$ be the remaining time until the second processor fails, so the total life of the system is $T_{1} + T_{2}$. 1. Find $\mathbb{E}\,T_{1}$ 2. Find $\mathbb{E}\,(T_{1}+T_{2})$ 3. What is $\mathrm{Cov}(T_{1},T_{2})$ 4. Find $\mathrm{Var}(T_{1}+T_{2})$ Here $T_1$ is the minimum of two independent $\mathrm{Exp}(1)$ times, so $T_1 \sim \mathrm{Exp}(2)$, giving $\mathbb{E}\,T_1 = \frac12$ and $\mathrm{Var}(T_1) = \frac14$. By the memoryless property, once the first processor fails the survivor's remaining lifetime is again $\mathrm{Exp}(1)$, independent of $T_1$; hence $T_2 \sim \mathrm{Exp}(1)$ with $\mathbb{E}\,T_2 = 1$ and $\mathrm{Var}(T_2) = 1$, and the independence gives $\mathrm{Cov}(T_1,T_2) = 0$, which answers the 3rd question. Therefore $\mathbb{E}(T_1+T_2) = \frac12 + 1 = \frac32$ and $\mathrm{Var}(T_1+T_2) = \mathrm{Var}(T_1) + \mathrm{Var}(T_2) = \frac14 + 1 = \frac54$.
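A short Monte Carlo sketch (not part of the original exercise, seeded for reproducibility) confirms these values: note that $T_1 = \min(a,b)$ and $T_2 = |a-b|$, since the total life $T_1+T_2$ equals $\max(a,b)$.

```python
import random

random.seed(0)
N = 200_000
t1s, t2s = [], []
for _ in range(N):
    a = random.expovariate(1.0)   # failure time of processor 1
    b = random.expovariate(1.0)   # failure time of processor 2
    t1s.append(min(a, b))         # time of first failure
    t2s.append(abs(a - b))        # remaining time until second failure

m1 = sum(t1s) / N                                                 # ~ 1/2
m2 = sum(t2s) / N                                                 # ~ 1
cov = sum((x - m1) * (y - m2) for x, y in zip(t1s, t2s)) / N      # ~ 0
var_tot = sum((x + y - m1 - m2) ** 2
              for x, y in zip(t1s, t2s)) / N                      # ~ 5/4
print(round(m1, 2), round(m2, 2), round(var_tot, 2))
```

The sample covariance hovering around zero is the memoryless property showing up empirically.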
1 ### WB JEE 2009 MCQ (Single Correct Answer) Angle between y^2 = x and x^2 = y at the origin is A $$2{\tan ^{ - 1}}\left( {{3 \over 4}} \right)$$ B $${\tan ^{ - 1}}\left( {{4 \over 3}} \right)$$ C $$\pi$$/2 D $$\pi$$/4 ## Explanation The tangent at (0, 0) on the curve y^2 = x is the y-axis, while the tangent at (0, 0) on the curve x^2 = y is the x-axis, so the angle between the curves is the angle between the axes, 90$$^\circ$$. 2 ### WB JEE 2009 MCQ (Single Correct Answer) If the rate of increase of the radius of a circle is 5 cm/sec., then the rate of increase of its area, when the radius is 20 cm, will be A 10$$\pi$$ B 20$$\pi$$ C 200$$\pi$$ D 400$$\pi$$ ## Explanation Let the radius be r. $$\therefore$$ $${{dr} \over {dt}} = 5$$ cm/sec (given) Area of circle $$(A) = \pi {r^2}$$ $$\therefore$$ $${{dA} \over {dt}} = {{d(\pi {r^2})} \over {dt}} = \pi (2r){{dr} \over {dt}}$$ when r = 20 then $${{dA} \over {dt}} = \pi \,.\,2\,.\,20\,.\,5 = 200\pi$$. 3 ### WB JEE 2009 MCQ (Single Correct Answer) The distance covered by a particle in t seconds is given by x = 3 + 8t $$-$$ 4t^2. After 1 second its velocity will be A 0 unit/second B 3 units/second C 4 units/second D 7 units/second ## Explanation x = 3 + 8t $$-$$ 4t^2 $${{dx} \over {dt}} = 8 - 8t$$ $$\therefore$$ Velocity at $$t = 1 = {\left( {{{dx} \over {dt}}} \right)_{t = 1}} = 8 - 8\,.\,1 = 0$$ 4 ### WB JEE 2009 MCQ (Single Correct Answer) Rolle's theorem is applicable in the interval $$-$$1 $$\le$$ x $$\le$$ 1 for the function A f(x) = x B f(x) = x^2 C f(x) = 2x^3 + 3 D f(x) = |x| ## Explanation (a) f(x) = x $$f'(x) = {{df(x)} \over {dx}} = 1$$ which is greater than zero $$\therefore$$ f(x) is strictly increasing in [$$-$$1, 1], so f($$-$$1) = $$-$$1 $$\ne$$ 1 = f(1) and Rolle's theorem is not applicable. 
(b) $$\because$$ f($$-$$1) = f(1) = 1 Also f(x) = x2 is continuous in [$$-$$1, 1] and differentiable in ($$-$$1, 1) $$\therefore$$ Rolle's theorem is applicable. (c) f(x) = 2x3 + 3 $$\Rightarrow$$ f'(x) = 6x2 > 0 $$\therefore$$ f(x) is strictly increasing in [$$-$$1, 1]. So, Rolle's theorem is not applicable. (d) f(x) = |x| = x, x $$\ge$$ 0 and $$-$$x, x < 0 f(1) = f($$-$$1) = 1, also f(x) is continuous but f(x) is not differentiable at x = 0 $$\in$$ ($$-$$1, 1). So all conditions of Rolle's theorem is not satisfied. ### Joint Entrance Examination JEE Main JEE Advanced WB JEE ### Graduate Aptitude Test in Engineering GATE CSE GATE ECE GATE EE GATE ME GATE CE GATE PI GATE IN NEET Class 12
## Moments of net-charge multiplicity distribution in Au+Au collisions measured by the PHENIX experiment at RHIC    [PDF] P. Garg The Beam Energy Scan (BES) program at RHIC is important in the search for the critical point in the QCD phase diagram. Lattice QCD calculations have shown that the susceptibilities of the medium formed in heavy-ion collisions can be sensitive to the various moments (mean $\mu = \langle x \rangle$, variance $\sigma^2 = \langle (x-\mu)^2 \rangle$, skewness $S = \frac{\langle (x-\mu)^3 \rangle}{\sigma^3}$ and kurtosis $\kappa = \frac{\langle (x-\mu)^4 \rangle}{\sigma^4} - 3$) of conserved quantities like net-baryon number ($\Delta$B), net-electric charge ($\Delta$Q) and net-strangeness ($\Delta$S). Any non-monotonic behavior of the higher moments would confirm the existence of the QCD critical point. The recent results on the higher moments of net-charge multiplicity distributions for Au+Au collisions at $\sqrt{s_{NN}}$ varying from 7.7 GeV to 200 GeV from the PHENIX experiment at RHIC are presented. The energy and centrality dependence of the higher moments and their products (S$\sigma$ and $\kappa\sigma^{2}$) are shown for the net-charge multiplicity distributions. Furthermore, the results are compared with the values obtained from heavy-ion collision models in which there is no QCD phase transition or critical point. View original: http://arxiv.org/abs/1305.7327
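To make the moment definitions concrete, here is a toy sketch (purely illustrative, not PHENIX analysis code) computing $S\sigma$ and $\kappa\sigma^2$ for a Skellam distribution, the difference of two independent Poisson counts often used as a critical-point-free baseline for net charge. Its cumulants are $\kappa_n = \lambda_1 + (-1)^n \lambda_2$, so $S\sigma = (\lambda_1-\lambda_2)/(\lambda_1+\lambda_2)$ and $\kappa\sigma^2 = 1$.

```python
import math

l1, l2 = 3.0, 2.0
M = 60  # truncation: Poisson(3) mass beyond k = 60 is negligible

def pois(lam):
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(M)]

# pmf of the net quantity X = N1 - N2 by direct convolution
p1, p2 = pois(l1), pois(l2)
pmf = {}
for i, a in enumerate(p1):
    for j, b in enumerate(p2):
        pmf[i - j] = pmf.get(i - j, 0.0) + a * b

mu  = sum(k * p for k, p in pmf.items())
var = sum((k - mu) ** 2 * p for k, p in pmf.items())
sd  = var ** 0.5
S   = sum((k - mu) ** 3 * p for k, p in pmf.items()) / sd ** 3
kap = sum((k - mu) ** 4 * p for k, p in pmf.items()) / sd ** 4 - 3

# Skellam baseline: S*sigma = (l1-l2)/(l1+l2) = 0.2, kappa*sigma^2 = 1
print(round(S * sd, 3), round(kap * var, 3))  # -> 0.2 1.0
```

Deviations of measured $S\sigma$ and $\kappa\sigma^2$ from such a baseline are exactly the kind of non-monotonic signal the abstract refers to.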
## Handy Makefile The following is a snippet for a handy and concise Makefile that will make executables for all C files in a directory. It’s good for “sandboxing”, and it illustrates some of Make’s useful features without your needing to consult a larger resource first. Tip of the hat to Erik for helping me polish it up.

```make
# Flags for gcc
FLAGS = -D_GNU_SOURCE -O3 -g

# All C files as sources, and chop off the .c for targets
SOURCES = $(wildcard *.c)
TARGETS = $(patsubst %.c, %, $(SOURCES))

all: $(TARGETS)

# All targets without an extension depend on their .c files.
# The "@" prefix suppresses Make actually displaying the command.
%: %.c
	@echo "Building $@"
	@gcc $(FLAGS) $< -o $@

clean:
	@echo "Removing hidden files"
	@rm -rf .*.swp *.dSYM ._* 2> /dev/null
	@echo "Removing executables"
	@rm -rf $(TARGETS) 2> /dev/null
```

The nice thing about Make is that it’s useful not only for things like C code. I’ve even used it (quite some time ago) to piece together tracks of music using ecasound. ## MacBook and 64 bit address spaces I occasionally like to show off how I can do some pretty sophisticated calculations and experiments right on my MacBook. For these tasks, over-specced machines and special purpose hardware are sometimes assumed to be the right solution, but a judicious use of the resources on a MacBook may get the job done in a similar time. One thing that would be nice is if the OS actually supported 64 bit address spaces (after all, the Core 2 Duo processor is a 64 bit one).  This comes in handy when one wants to mmap large files (e.g. greater than a few gigabytes).  On 32-bit architectures, the address space is limited by the word size (in this case $2^{32}$ bytes, or about 4 GB of space, and even less in practice due to kernel limitations). Supposedly the next version of OS X will have support for this (and will come on the MacBook Pro installations, but I assume not the MacBook).
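The mmap point can be illustrated with a short Python sketch (a throwaway temp file stands in for the multi-gigabyte data file). The mapping consumes address space rather than RAM, which is why a 32-bit process hits a wall near 4 GB while a 64-bit one does not:

```python
import mmap
import os
import tempfile

# Write a tiny stand-in file, then map it read-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, mmap!")
    path = f.name

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        data = m[:5]   # random access into the mapping, no read() calls
print(data.decode())   # -> hello
os.remove(path)
```

The same `mmap` call on a file larger than the remaining address space fails with an error on a 32-bit build; on a 64-bit build it just works.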
Geometry # 3D Coordinate Geometry - Equation of a Plane What is the equation of the plane which contains the following two parallel lines: $\frac{x+1}{6} = \frac{y-2}{7} = z \hspace{.3cm} \text{ and } \hspace{.3cm} \frac{x-3}{6} = \frac{y+4}{7} = z-1?$ What is the equation of the plane that meets perpendicularly with the line $x-3=\frac{y-4}{3}=\frac{z-2}{-2}$ at $(2,1,4)?$ The equation of the plane that passes through the three points: $\begin{array}{c}&(0,-2,3),&&(1,0,1),&&(-1,-1,0)\end{array}$ is $ax+by+cz+1=0.$ Find the value of $a+b+c.$ Which of the following is the equation of the $xy$-plane? What is the normal vector of the following plane: $\frac{x-2}{-3}+\frac{y+3}{3}+\frac{z-5}{4}=0\text{?}$
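As a worked sketch for the three-point problem above (my own code, not part of the quiz): the normal is the cross product of two edge vectors, and rescaling so the constant term is $+1$ (which assumes the plane misses the origin) yields $a$, $b$, $c$:

```python
from fractions import Fraction as F

P1, P2, P3 = (0, -2, 3), (1, 0, 1), (-1, -1, 0)

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

n = cross(sub(P2, P1), sub(P3, P1))          # normal vector of the plane
d = -sum(ni * pi for ni, pi in zip(n, P1))   # n . r + d = 0 through P1

# scale the equation so the constant term is exactly +1
a, b, c = (F(ni, d) for ni in n)
print(a, b, c, a + b + c)  # -> -4 5 3 4
```

So the plane is $-4x + 5y + 3z + 1 = 0$ and $a+b+c = 4$; substituting any of the three points back in gives zero, which is the easy correctness check.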
# separating solution A separating solution of a smooth planar dynamical system is an oriented smooth curve such that • the curve is composed of trajectories of the dynamical system, • the curve does not go through any equilibrium point, • the curve is the boundary of some region in the plane. The orientation of the curve is the same as the orientation of the solutions. In addition the curve does not need to be connected. [KGA] ## References
# JBMO! HELP! Hey guys, so just a couple of days ago I was a contestant in my country Albania's JBMO (Junior Balkan Mathematical Olympiad) test. I think I did pretty well on it and I might have a real chance to qualify. So I wanted your help: can you suggest some books that can prepare me for this olympiad? Balkan brillantiers, and of course other excellent brillantiers, help me! Note by Lawrence Bush 2 years, 10 months ago Sort by: Well, I'm not an excellent Brilliant user, but I will suggest that you download pdfs from the Internet; that's the best choice instead of going out and buying books. Combinatorics: A Course in Combinatorics, Introductory Combinatorics - by Richard Brualdi Calculus: I use Paul's Online Notes There are a lot of other books too, but use only those which you'll be able to work with comfortably :) - 2 years, 10 months ago
# Tag Info ## Hot answers tagged resonance 45 If there were only one prong (imagine holding a metal rod in your hand), then the oscillation energy of the prong would quickly be dissipated by its contact with your hand. On the other hand, a fork with two prongs oscillates in such a way that the point of contact with your hand does not move much due to the oscillation of the fork. This causes the ... 36 I am by no means an expert in tuning fork design, but here are some physical considerations: Different designs may have different "purities," but don't take this too far. It is certainly possible to tune to something not a pure tone; after all, orchestras usually tune to instruments, not tuning forks. Whatever mode(s) you want to excite, you don't want to ... 24 The reason for having two prongs is that they oscillate in antiphase. That is, instead of both moving to the left, then both moving to the right, and so on, they oscillate "in and out" - they move towards each other then move away from each other, then towards, etc. That means that the bit you hold doesn't vibrate at all, even though the prongs do. You ... 12 Q. How do two coupled vibrating prongs isolate a single frequency? howstuffworks.com has an article on How Tuning Forks Work The way a tuning fork's vibrations interact with the surrounding air is what causes sound to form. When a tuning fork's tines are moving away from one another, it pushes surrounding air molecules together, forming small, ... 10 There seem to be a lot of human body mechanical models, such as this one: As for applications, I have heard that sub-audio frequency vibrations have been considered as nonlethal weapons for riot control. 6 It would depend on damping effects being taken into account or not. 
Invoking Newton's 2nd Law of motion, a differential equation for the motion of a damped harmonic oscillator can be written (including an external, sinusoidal driving force term): $m\frac{d^2x}{dt^2}+2m\xi\omega_0\frac{dx}{dt}+m\omega_0^2x=F_0\sin\left(\omega t\right)$ Where $m$ is the ... 5 The first generation of elementary particles are by observation not composite and therefore not seen to decay. They are shown in this table of the standard model of particle physics in column I. The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column and the Higgs boson in the fifth. All ... 4 Any physics-oriented FEM solver should do this. I have only done it with COMSOL, which is proprietary and expensive, but searching Ubuntu's repository of free software turns up at least two promising candidates: Elmer and FreeFEM. I'm trying out Elmer now. http://www.csc.fi/english/pages/elmer http://en.wikipedia.org/wiki/Elmer_FEM_solver This example ... 4 I have just noticed the question. Indeed, the body does have very clear resonances. Nature has prioritised speed of movement over stability so limbs are underdamped and naturally resonant. It is likely that many rhythmic movements occur at the resonant frequency of the body parts involved (rather similar to the oscillation of some insect wings). A ... 4 The Moon moves away from the Earth by about four inches a year, while the Earth's rotation is slowing down; in the distant future total solar eclipses will cease to occur, because the Moon will no longer have sufficient apparent size to cover the solar disk. In theory, this separation should continue until the Moon takes 47 days to complete one orbit around our planet, at which point our planet ... 4 In an experiment in which particles are collided, a resonance is a large peak in a cross section (rate at which a process occurs) when plotted against the energy of the incoming particles. 
For example, when LEP collided electrons with positrons, they saw a resonance when the energy of the incoming particles equalled the mass of the $Z$-boson. Resonances ... 3 No, because in a vacuum, there is no way for the two tuning forks (I think you meant this, rather than pendulums) to communicate. The reason a second tuning fork with the same resonance frequency will begin resonating is because, physically, sound waves are hitting it at its natural frequency. Sound waves travel in a medium, so in a vacuum, there's nothing ... 3 The oscillator frequency $\omega$ says nothing about the actual oscillator phase. Let us suppose that your oscillator oscillates freely like this: $$x(t) = A_0\cdot\cos(\omega t + \phi_0),\; t<0.$$ At $t=0$ it has a phase $\phi_0$. Depending on its value the oscillator can be moving forward or backward with some velocity. If you switch your external force ... 3 The inductor and capacitor form a resonant circuit, which will pass only a specific frequency - the one you are tuning the radio to recieve. You normally tune it by making either the inductor or capacitor adjustable. edit: As described in How does radio receives signal from particular station? it's very much like a pendulum. Current flows freely in the ... 3 Both "perfectly open" (zero acoustic impedance) and "perfectly closed" (infinite acoustic impedance) boundary conditions are only idealizations that never occur in practice. For the case of the human vocal tract, they aren't even very good approximations. The "bottom end" of the resonating cavity is not, in fact, the lungs, but the vocal folds (as Georg ... 3 The first resonant vibrational mode for a string clamped at both ends looks like: You should be able to deduce the wavelength from that diagram. The second mode looks like: Both of the images above are from http://www.clickandlearn.org/Physics/sph3u/Music/Music.htm and that site will spell it out in more detail for you. If your string length is ... 
3 For an account of modern instances of resonance damaging structures, see the Sketpics SE post listed by Dr. RedGrittyBrick listed in the comments. I don't know of any historians' recording of events such as you describe, so hopefully another answer can do this. As for understanding the $Q$-factor and its effect on resonance: classical resonances comprise ... 3 Why does maximum resonance occur at triple the length of air column for the previous maximum resonance? Because resonance in a pipe that is closed at one end occurs when a standing wave of air is generated within the pipe, and this can only happen if the open end of the air column is a displacement antinode (where the wave is at its max amplitude), ... 3 Re question 1: when you learn this stuff in school you usually simplify the system by modelling it as a simple harmonic oscillator so the amplitude of the system will be given by some equation like: $$A(t) = A_0 e^{i\omega_0 t}$$ where $\omega_0$ is the natural frequency of oscillation. Typically you study what happens if you apply a force that also ... 2 There's an interesting question in here if you look hard enough. First of all, there's nothing special about the resonant frequency of something made of little magnets. It might as well be a piece of ordinary string, a metal bar, or whatever. In fact I think the fact that it's made of separate little magnets stuck together would give it a much lower Q ... 2 If you have two decoupled oscillators, they satisfy differential equations $$-\frac{d^2}{dt^2}x_i=\omega^2_{i} x_i$$ where $i=1,2$. The solutions are clearly multiples of $\cos(\omega_i t+\phi_i)$. Now, consider two interacting oscillators. Each oscillator must know about the phase of the other, so the simplest dependence is to add a multiple of $x_2$ (a ... 2 OK, the simple answer: When there is a resonance in the antenna you have a coherent phenomenon. All the bands of electrons of the antenna are marching in tune. 
The black body radiation is an incoherent phenomenon coming from the individual atoms of the antenna. Even if the peak of the black body radiation were sitting on the resonance of the antenna it ... 2 The derivative-like line shape is a result of the use of field modulation. In order to get sufficient signal to noise, the $B_0$ field (large, static field) is modulated (usually at 100kHz) and a lock-in amplifier (or equivalent) is used to reject any frequencies beside 100kHz. The result of this field modulation is that the signal that is obtained is not ... 2 Real LC circuits have some resistance, which wastes some of the energy as thermal radiation, and the cycling eventually dies. I think they also have some other non-idealities that allow energy to escape as far field electromagnetic radiation, correct? What are these non-idealities? Are they independent of the resistive component? ... 2 This must be impossible, even for lady Castafiore with her earthquake voice. For a glass to break by sheer sound you need to produce a tone equal to the glass's natural frequency - the frequency at which a body vibrates with the least amount of energy. In other words: there you get the most vibration with a minimum of effort. This is also called resonance. ... 2 Vibrations begin to resonate together into sound waves we can hear. We can make the sounds loud or soft depending on how much pressure we place on the finger. The pitch of the sound can also be changed by adjusting the amount of water in the glass. As you rub your finger on the rim, your finger first sticks to the glass and then slides. This stick and slide ...
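The driven-oscillator equation quoted in one of the answers above has the familiar steady-state amplitude $A(\omega) = (F_0/m)\big/\sqrt{(\omega_0^2-\omega^2)^2 + (2\xi\omega_0\omega)^2}$. A short sketch (illustrative parameter values of my choosing) locates the resonant peak, which for light damping sits at $\omega_0\sqrt{1-2\xi^2}$, just below $\omega_0$:

```python
import math

def amplitude(w, w0=1.0, xi=0.05, F0=1.0, m=1.0):
    """Steady-state amplitude of the driven, damped harmonic oscillator."""
    return (F0 / m) / math.sqrt((w0**2 - w**2)**2 + (2 * xi * w0 * w)**2)

# scan driving frequencies and find where the response is largest
ws = [i / 1000 for i in range(1, 2001)]
w_peak = max(ws, key=amplitude)
print(round(w_peak, 2))  # -> 1.0 (peak at w0*sqrt(1 - 2*xi^2) ~ 0.997)
```

Making `xi` larger flattens and shifts the peak downward; this is the quantitative version of the $Q$-factor discussion in the answers above.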
This preview shows page 11 - 13 out of 15 pages.

... $|x| < 25$, and (ii) diverges when $|x| > 25$. But then the series $$(2)\qquad \sum_{n=0}^{\infty} c_n x^{2n}$$ will (i) converge when $|x^2| < 25$, and (ii) diverge when $|x^2| > 25$. Consequently, series (2) (i) converges when $|x| < 5$, and (ii) diverges when $|x| > 5$, and so series (2) will have radius of convergence $R = 5$.

keywords: PowerSeries, PowerSeriesExam,

023 10.0 points

If the series $\sum_{n=0}^{\infty} c_n x^n$ converges when $x = -3$ and diverges when $x = 5$, which of the following series must converge without further restrictions on $\{c_n\}$?

A. $\sum_{n=0}^{\infty} c_n (-2)^n$

B. $\sum_{n=0}^{\infty} c_n (-3)^{n+1}$

1. neither of them
2. A only
3. B only
4. both of them (correct)

Explanation:

A. The interval of convergence of the series $\sum_{n=0}^{\infty} c_n x^n$ contains $(-3, 3)$. Since $x = -2$ belongs to this interval, the series $\sum_{n=0}^{\infty} c_n (-2)^n$ converges also.

B. Since $$\sum_{n=0}^{\infty} c_n (-3)^{n+1} = -3\left(\sum_{n=0}^{\infty} c_n (-3)^n\right),$$ the series $\sum_{n=0}^{\infty} c_n (-3)^{n+1}$ converges.

024 10.0 points

Determine the interval of convergence of the power series $$\sum_{n=1}^{\infty} \frac{(-3)^n}{\sqrt{n}}\,(x-1)^n.$$

keller (mjk2535) – HW03 – kalahurka – (55250)

1. interval of cgce $= \left[\frac{2}{3}, \frac{4}{3}\right]$
2. interval of cgce $= \left[-\frac{4}{3}, -\frac{2}{3}\right]$
3. interval of cgce $= \left[-\frac{4}{3}, -\frac{2}{3}\right)$
4. interval of cgce $= \left(\frac{2}{3}, \frac{4}{3}\right)$
5. interval of cgce $= \left(-\frac{4}{3}, -\frac{2}{3}\right)$
6. interval of cgce $= \left(\frac{2}{3}, \frac{4}{3}\right]$ (correct)
7. interval of cgce $= \left(-\frac{4}{3}, -\frac{2}{3}\right]$
8. interval of cgce $= \left[\frac{2}{3}, \frac{4}{3}\right)$

Explanation: The given series has the form $\sum_{n=1}^{\infty} (-1)^n a_n (x-1)^n$ where $a_n = \frac{3^n}{\sqrt{n}}$. But then $$\left|\frac{a_{n+1}}{a_n}\right| = 3\left(\frac{\sqrt{n}}{\sqrt{n+1}}\right) = 3\sqrt{\frac{n}{n+1}},$$ in which case $$\lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right| = 3.$$ By the Ratio Test, therefore, the given series (i) converges when $|x-1| < 1/3$, and (ii) diverges when $|x-1| > 1/3$; in particular, it converges when $$-\frac{1}{3} + 1 < x < \frac{1}{3} + 1,$$ i.e., on the interval $\left(\frac{2}{3}, \frac{4}{3}\right)$.

To check for convergence at the endpoints of this interval, observe first that when $x = \frac{2}{3}$ the series becomes $\sum_{n=1}^{\infty} \frac{1}{\sqrt{n}}$, which diverges by the p-series test. On the other hand, when $x = \frac{4}{3}$ the series becomes $\sum_{n=1}^{\infty} \frac{(-1)^n}{\sqrt{n}}$, which converges by the Alternating Series Test. Consequently, the given series has interval of cgce $= \left(\frac{2}{3}, \frac{4}{3}\right]$.
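The Ratio Test conclusion in problem 024 is easy to sanity-check numerically: inside $|x-1| < 1/3$ the terms of the series shrink geometrically, while outside that interval they blow up, so the series cannot converge there. A quick sketch (not part of the original worksheet):

```python
from math import sqrt

def term(n, x):
    # n-th term of the series in problem 024: (-3)^n / sqrt(n) * (x - 1)^n
    return (-3) ** n * (x - 1) ** n / sqrt(n)

inside = abs(term(50, 1.2))    # |x-1| = 0.2 < 1/3: term ~ 0.6^50, vanishingly small
outside = abs(term(50, 1.5))   # |x-1| = 0.5 > 1/3: term ~ 1.5^50, huge
```

Vanishing terms alone don't prove convergence (that is what the Ratio Test argument above is for), but growing terms do prove divergence, which is why the radius $1/3$ shows up so clearly in this experiment.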
A case against the x<>y key

05-09-2015, 10:49 PM (This post was last modified: 05-10-2015 11:53 AM by hansklav.) Post: #1
hansklav Member Posts: 58 Joined: Jun 2014

A case against the x<>y key

Lately I had ample time to delve into some of the example calculations described in an attractive User’s Manual of one of my classical RPN calculators. I was especially interested in a challenging calculation given as an illustration of the rules of operator precedence, because in my RPN Tutorial I’ve also written on this subject. I was a bit irritated when I didn’t obtain the given solution in one try, so I tried again. When I got the same answer the second time I took my Prime and entered the calculation in Textbook Mode. This confirmed my previous answers, different from the manual’s. Then I scrutinized the keystroke sequence that led to the flawed solution in the manual, and it was not difficult to see the mistake.

Before reading further try the calculation for yourself, preferably using a classical RPN calculator with four stack levels (angles in sexagesimal degrees):

If your answer starts with negative one seven one nine then you made the same mistake, if it starts with one six five seven then congratulations!

Most people will start from within the innermost parentheses and then work backwards to the start of the calculation. The tricky part is how to proceed after calculating $$2\times3^4\div5$$ (and before adding this term to the sum), so the part $$1-…+…$$

If you use the x<>y (exchange x and y) key indiscriminately every time you encounter a minus ’backwards‘ (ending the key sequence as the manual does with … + 1 x<>y ) things go wrong, because: $$1-b+c+d \ne 1-(b+c+d)$$

The implied parentheses should be placed as follows: $$1 - b + c + d = (1 - b) + c + d$$

So you have to do this part of the calculation backwards using …CHS + 1 +: $$1 - b + c + d = 1 + -b + c + d$$ or you should do the $$1 - b$$ before adding it to the rest of the calculation: … 1 x<>y – +.
For me encountering this mistake is an eye-opener because the above mentioned manual is written by an academically educated professional and (I’m sure) seasoned RPN-user. So even experienced and intelligent RPN-users are not immune to making this type of mistake while using the x<>y key. This is evidence that the use of the x<>y key to do non-commutative operations backwards should be considered potentially harmful. It is safe to use this key to do an isolated non-commutative operation backwards, but you cannot use it when this operation is part of a series of operations of equal operator precedence on the same level of parentheses (e.g. a series of additions and subtractions or a series of multiplications and divisions), unless it’s the rightmost one. The HP Manuals that mention the use of the x<>y key in case of non-commutative operations (e.g. HP-41CX, HP 33s, HP 35s) do not warn against its misuse. The HP-41CX Owner’s Manual (Vol. 1) is the only one I know that mentions the use of CHS or 1/x in such situations. My RPN Tutorial contains a warning against this type of error and gives the methods of changing a subtraction into addition of a negative using CHS and + or changing division into multiplication of a reciprocal using 1/x and $$\times$$ as safe alternatives. But having seen the ease with which mistakes can be made using the x<>y key I will change 6. of the ’Cheat Sheet‘ of my RPN Tutorial. In the new version (not finished yet) I will advise to use CHS and 1/x preferentially when doing subtractions and divisions backwards and x<>y only when the user knows exactly what the consequences are. Hans P.S. the correct answer to eight decimals is 1657.00894809 05-10-2015, 12:56 AM Post: #2 Les Bell Member Posts: 188 Joined: Dec 2013 RE: A case against the x<>y key (05-09-2015 10:49 PM)hansklav Wrote:  I was a bit irritated when I didn’t obtain the given solution in one try, so I tried again. 
When I got the same answer the second time I took my Prime and entered the calculation in Textbook Mode. This confirmed my previous answers, different from the manual’s. [...] P.S. the correct answer to eight decimals is 1657.00894809 I just banged through that calculation near-subconsciously, without giving any thought to the alleged evils of x<>y, and got the same answer. Which manual did you take this from? I'd like to see the original. But I suspect you're massively over-complicating something that is really quite simple, and possibly misinterpreting the original example. --- Les [http://www.lesbell.com.au] 05-10-2015, 01:13 AM Post: #3 Paul Dale Senior Member Posts: 1,473 Joined: Dec 2013 RE: A case against the x<>y key There is only one point in the formula where you overflow a four level stack working left to right (45 to the six sevenths term). In SSIZE8 on my trusty 34S, there is no issue at all. - Pauli 05-10-2015, 02:20 AM Post: #4 Mark Hardman Senior Member Posts: 484 Joined: Dec 2013 RE: A case against the x<>y key (05-09-2015 10:49 PM)hansklav Wrote:  The implied parentheses should be placed as follows: $$1 - b + c + d = (1 - b) + c + d$$ So you have to do this part of the calculation backwards using …CHS 1 +: $$1 - b + c + d = 1 + -b + c + d$$ or you should do the $$1 - b$$ before adding it to the rest of the calculation: … 1 x<>y – +. Your point is valid. But, why do you feel that a negative b term (-b) is even necessary? Simply let the stack do its job: Code: 1 [Enter] 3 [Enter] 4 Y^X 2 x 5 / - etc. "Inside out, left to right" Ceci n'est pas une signature. 05-10-2015, 04:06 AM (This post was last modified: 05-10-2015 04:18 AM by Tugdual.) Post: #5 Tugdual Senior Member Posts: 736 Joined: Dec 2013 RE: A case against the x<>y key I would spontaneously not use a stack swap, just push things on stack as they come and read from left to right (as Mark said). 
05-10-2015, 05:16 AM Post: #6 d b Senior Member Posts: 489 Joined: Dec 2013 RE: A case against the x<>y key I seldom use the X<>Y in hand calculations but have used it in programs more. For instance entering N & E for a point together then the same for another point so as to use the distance formula (or R-P) on them. Surveyors call this inversing and we do it often. I let the program do the ordering and re-ordering and that uses X<>Y each go-around. BTW: the reason surveyors use programs to do something this simple is that we do do it a lot and sooner or later we all fat-finger a key. Data entry itself is enough of a chance for error to creep in. 05-10-2015, 10:25 AM Post: #7 hansklav Member Posts: 58 Joined: Jun 2014 RE: A case against the x<>y key (05-10-2015 12:56 AM)Les Bell Wrote: (05-09-2015 10:49 PM)hansklav Wrote:  I was a bit irritated when I didn’t obtain the given solution in one try, so I tried again. When I got the same answer the second time I took my Prime and entered the calculation in Textbook Mode. This confirmed my previous answers, different from the manual’s. [...] P.S. the correct answer to eight decimals is 1657.00894809 I just banged through that calculation near-subconsciously, without giving any thought to the alleged evils of x<>y, and got the same answer. Which manual did you take this from? I'd like to see the original. But I suspect you're massively over-complicating something that is really quite simple, and possibly misinterpreting the original example. I sent you a copy of the relevant page. Hans 05-10-2015, 10:37 AM (This post was last modified: 05-10-2015 10:38 AM by hansklav.) Post: #8 hansklav Member Posts: 58 Joined: Jun 2014 RE: A case against the x<>y key (05-10-2015 01:13 AM)Paul Dale Wrote:  There is only one point in the formula where you overflow a four level stack working left to right (45 to the six sevenths term). 
That's why it’s wise to start such a calculation from within the innermost parentheses using an RPN calculator with only four stack levels. Quote:In SSIZE8 on my trusty 34S, there is no issue at all. True, but you still have to count the number of stack levels you use to be sure that your answer is correct (because there is no stack overflow sensing built into the WP 34S). So to avoid that even on the WP 34S or WP 31S in SSIZE8-mode I personally would start from within the innermost parentheses. Hans 05-10-2015, 10:43 AM (This post was last modified: 05-10-2015 11:12 AM by hansklav.) Post: #9 hansklav Member Posts: 58 Joined: Jun 2014 RE: A case against the x<>y key (05-10-2015 02:20 AM)Mark Hardman Wrote: (05-09-2015 10:49 PM)hansklav Wrote:  The implied parentheses should be placed as follows: $$1 - b + c + d = (1 - b) + c + d$$ So you have to do this part of the calculation backwards using …CHS 1 +: $$1 - b + c + d = 1 + -b + c + d$$ or you should do the $$1 - b$$ before adding it to the rest of the calculation: … 1 x<>y – +. Your point is valid. But, why do you feel that a negative b term (-b) is even necessary? Simply let the stack do its job: Code: 1 [Enter] 3 [Enter] 4 Y^X 2 x 5 / - etc. "Inside out, left to right" Well, as Paul Dale pointed out, on a four level stack RPN calculator then you will run into trouble (stack overflow, leading to a wrong answer, without warning), unless, of course, you make use of STO and RCL. Hans 05-10-2015, 10:59 AM (This post was last modified: 10-12-2015 11:42 PM by hansklav.) Post: #10 hansklav Member Posts: 58 Joined: Jun 2014 RE: A case against the x<>y key (05-10-2015 05:16 AM)Den Belillo (Martinez Ca.) Wrote:  I seldom use the X<>Y in hand calculations but have used it in programs more. For instance entering N & E for a point together then the same for another point so as to use the distance formula (or R-P) on them. Surveyors call this inversing and we do it often. 
I let the program do the ordering and re-ordering and that uses X<>Y each go-around. BTW: the reason surveyors use programs to do something this simple is that we do do it a lot and sooner or later we all fat-finger a key. Data entry itself is enough of a chance for error to creep in.

OK, that is a valid and safe use of the x<>y key. Also I think when doing several ’stacked‘ power calculations backwards, like in $$9^{2^{3}}$$, the use of this key is safe. My point is that when doing the most frequently occurring non-commutative calculations (subtractions and divisions) backwards its use is unsafe in some situations. And you don't have to be a newbie to make a mistake in such situations, as this case proves. Hans

P.S. The title exaggerates the problem a bit, on purpose ;-)

05-10-2015, 12:35 PM Post: #11 Thomas Radtke Senior Member Posts: 729 Joined: Dec 2013
RE: A case against the x<>y key

(05-10-2015 10:25 AM)hansklav Wrote:  I sent you a copy of the relevant page.

Would you mind stating which calculator manual this is from?

05-10-2015, 12:49 PM Post: #12 Thomas Radtke Senior Member Posts: 729 Joined: Dec 2013
RE: A case against the x<>y key

(05-10-2015 10:43 AM)hansklav Wrote:  Well, as Paul Dale pointed out, on a four level stack RPN calculator then you will run into trouble (stack overflow, leading to a wrong answer, without warning), unless, of course, you make use of STO and RCL.

No need to. You can evaluate from right to left until the first two terms 1-2*3^4/5, which have to be calculated before adding the sum of the two rightmost terms. 1688... in x, then [3][ENTER][4][y^x][5][/][2][*][1][x<>y][-][+] Fits a 4-level stack perfectly. .

05-10-2015, 02:32 PM Post: #13 Mark Hardman Senior Member Posts: 484 Joined: Dec 2013
RE: A case against the x<>y key

(05-10-2015 10:43 AM)hansklav Wrote: (05-10-2015 02:20 AM)Mark Hardman Wrote:  Your point is valid. But, why do you feel that a negative b term (-b) is even necessary?
Simply let the stack do its job: Code: 1 [Enter] 3 [Enter] 4 Y^X 2 x 5 / - etc. "Inside out, left to right"

Well, as Paul Dale pointed out, on a four level stack RPN calculator then you will run into trouble (stack overflow, leading to a wrong answer, without warning), unless, of course, you make use of STO and RCL. Hans

No you don't. You say you've written a tutorial on RPN? Ceci n'est pas une signature.

05-10-2015, 02:32 PM Post: #14 hansklav Member Posts: 58 Joined: Jun 2014
RE: A case against the x<>y key

(05-10-2015 10:43 AM)hansklav Wrote:  Well, as Paul Dale pointed out, on a four level stack RPN calculator then you will run into trouble (stack overflow, leading to a wrong answer, without warning), unless, of course, you make use of STO and RCL.

No need to. You can evaluate from right to left until the first two terms 1-2*3^4/5, which have to be calculated before adding the sum of the two rightmost terms. 1688.40894809 in x, then [3][ENTER][4][y^x][5][/][2][*][1][x<>y][-][+] Fits a 4-level stack perfectly.

True, but my reply was to Mark Hardman’s post, who started the whole calculation from the left side, and then a 4-level stack will not suffice. But Mark also wrote "Inside out, left to right", and possibly he meant for this calculation "first do the rightmost part from inside out, and then the rest (his listing) from left to right". Your solution partially uses that adage.

(05-10-2015 12:35 PM)Thomas Radtke Wrote:  Would you mind stating which calculator manual this is from?

The author himself of said manual (the spiral bound version of the WP 31S User’s Manual) sent me the simplest fix, which is even one keystroke shorter than yours: 1688.40894809 in x, then [3][ENTER][4][y^x][5][/][2][*][-][1][+] So no need to use either CHS or x<>y !
The relevant part of the corrected page will now look as follows: I must admit that I didn’t think of the latter possibility myself, probably because mentally it also uses the addition of a negative and then the use of CHS is more intuitive. So now we’re left with several solutions to the same problem. The question is: which one should we teach to newbies? I’ll come back to that later. Hans Post: #15 Thomas Radtke Senior Member Posts: 729 Joined: Dec 2013 RE: A case against the x<>y key (05-10-2015 02:32 PM)hansklav Wrote:  True, but my reply was to Mark Hardman’s post, who started the whole calculation from the left side, and then a 4-level stack will not suffice. I see (not everything-should work from left to right, too), sorry! (05-10-2015 02:32 PM)hansklav Wrote:  The question is: which one should we teach to newbies? The simplest not requiring doing any calculations in mind, but pointing out that algebraic considerations are unfortunately necessary: Mine . 05-10-2015, 03:12 PM (This post was last modified: 05-10-2015 08:16 PM by Mark Hardman.) Post: #16 Mark Hardman Senior Member Posts: 484 Joined: Dec 2013 RE: A case against the x<>y key (05-10-2015 02:32 PM)hansklav Wrote:  True, but my reply was to Mark Hardman’s post, who started the whole calculation from the left side, and then a 4-level stack will not suffice. Again, you've written an RPN tutorial!?! 
Code:               x          y          z          t 1 [Enter]     1          -          -          - 3 [Enter]     3          1          -          - 4             4          3          1          - y^x          81          1          -          - 2             2         81          1          - x           162          1          -          - 5             5        162          1          - /            32.4        1          -          - -           -31.4        -          -          - 6 [Enter]     6        -31.4        -          - 7             7          6        -31.4        - x^2          49          6        -31.4        - 3             3         49          6        -31.4 1/x           0.3333    49          6        -31.4 y^x           3.6593     6        -31.4      -31.4 -             2.3407   -31.4      -31.4      -31.4 sin           0.0408   -31.4      -31.4      -31.4 8             8          0.0408   -31.4      -31.4 x!        40320          0.0408   -31.4      -31.4 x          1646.7276   -31.4      -31.4      -31.4 +          1615.3276   -31.4      -31.4      -31.4 45 [Enter]   45       1615.3276   -31.4      -31.4 6  [Enter]    6         45       1615.3276   -31.4 7             7          6         45       1615.3276 /             0.8571    45       1615.3276  1615.3276 y^x          26.1240  1615.3276  1615.3276  1615.3276 2 [Enter]     2         26.1240  1615.3276  1615.3276 3             3          2         26.1240  1615.3276 y^x           8         26.1240  1615.3276  1615.3276 9 [Chs]      -9          8         26.1240  1615.3276 x<>y          8         -9         26.1240  1615.3276 y^x        4.3047e07    26.1240  1615.3276  1615.3276 x          1.1246e09  1615.3276  1615.3276  1615.3276 x^2        1.2646e18  1615.3276  1615.3276  1615.3276 ln        41.6813     1615.3276  1615.3276  1615.3276 +       1657.0089     1615.3276  1615.3276  1615.3276 Ceci n'est pas une signature. 
05-10-2015, 03:48 PM Post: #17 J-F Garnier Senior Member Posts: 304 Joined: Dec 2013 RE: A case against the x<>y key (05-10-2015 02:32 PM)hansklav Wrote: (05-10-2015 12:35 PM)Thomas Radtke Wrote:  Would you mind stating which calculator manual this is from? The author himself of said manual (the spiral bound version of the WP 31S User’s Manual) sent me the simplest fix ... (emphasis is mine) Oh, I thought you were speaking about "one of [your] classical RPN calculators". So I don't need to worry any more, no mistake in a HP classical RPN calculator manual :-) J-F 05-10-2015, 03:51 PM (This post was last modified: 05-10-2015 03:56 PM by hansklav.) Post: #18 hansklav Member Posts: 58 Joined: Jun 2014 RE: A case against the x<>y key (05-10-2015 03:12 PM)Mark Hardman Wrote: (05-10-2015 02:32 PM)hansklav Wrote:  True, but my reply was to Mark Hardman’s post, who started the whole calculation from the left side, and then a 4-level stack will not suffice. Again, you've written an RPN tutorial!?! You don’t believe it? 
Please, feel free to let me know where it can be made better ;-) Quote: Code:               x          y          z          t 1 [Enter]     1          -          -          - 3 [Enter]     3          1          -          - 4             4          3          1          - y^x          81          1          -          - 2             2         81          1          - x           162          1          -          - 5             5        162          1          - /            32.4        1          -          - -           -31.4        -          -          - 6 [Enter]     6        -31.4        -          - 7             7          6        -31.4        - x^2          49          6        -31.4        - 3             3         49          6        -31.4 1/x           0.3333    49          6        -31.4 y^x           3.6593     6        -31.4      -31.4 -             2.3407   -31.4      -31.4      -31.4 sin           0.0408   -31.4      -31.4      -31.4 8             8          0.0408   -31.4      -31.4 x!        40320          0.0408   -31.4      -31.4 x          1646.7276   -31.4      -31.4      -31.4 +          1615.3276   -31.4      -31.4      -31.4 45 [Enter]   45       1615.3276   -31.4      -31.4 6  [Enter]    6         45       1615.3276   -31.4 7             7          6         45       1615.3276 /             0.8571    45       1615.3276  1615.3276 y^x          26.1240  1615.3276  1615.3276  1615.3276 2 [Enter]     2         26.1240  1615.3276  1615.3276 3 [Enter]     3          2         26.1240  1615.3276 y^x           8         26.1240  1615.3276  1615.3276 9 [Chs]      -9          8         26.1240  1615.3276 x<>y          8         -9         26.1240  1615.3276 y^x        4.3047e07    26.1240  1615.3276  1615.3276 x          1.1246e09  1615.3276  1615.3276  1615.3276 x^2        1.2646e18  1615.3276  1615.3276  1615.3276 ln        41.6813     1615.3276  1615.3276  1615.3276 +       1657.0089     1615.3276  1615.3276  1615.3276 Amazing! 
If you hadn’t written it out so neatly I wouldn’t have believed it possible. I took Paul Dale's verdict for granted and didn’t even try to do it like this. I certainly wouldn’t advise a starting RPN-user to do it like this on a 4-level calculator without SOS. Hans 05-10-2015, 04:10 PM Post: #19 Massimo Gnerucci Senior Member Posts: 1,747 Joined: Dec 2013 RE: A case against the x<>y key (05-10-2015 03:12 PM)Mark Hardman Wrote:  Again, you've written an RPN tutorial!?! Code:               x          y          z          t 1 [Enter]     1          -          -          - 3 [Enter]     3          1          -          - 4             4          3          1          - y^x          81          1          -          - 2             2         81          1          - x           162          1          -          - 5             5        162          1          - /            32.4        1          -          - -           -31.4        -          -          - 6 [Enter]     6        -31.4        -          - 7             7          6        -31.4        - x^2          49          6        -31.4        - 3             3         49          6        -31.4 1/x           0.3333    49          6        -31.4 y^x           3.6593     6        -31.4      -31.4 -             2.3407   -31.4      -31.4      -31.4 sin           0.0408   -31.4      -31.4      -31.4 8             8          0.0408   -31.4      -31.4 x!        
40320          0.0408   -31.4      -31.4 x          1646.7276   -31.4      -31.4      -31.4 +          1615.3276   -31.4      -31.4      -31.4 45 [Enter]   45       1615.3276   -31.4      -31.4 6  [Enter]    6         45       1615.3276   -31.4 7             7          6         45       1615.3276 /             0.8571    45       1615.3276  1615.3276 y^x          26.1240  1615.3276  1615.3276  1615.3276 2 [Enter]     2         26.1240  1615.3276  1615.3276 3 [Enter]     3          2         26.1240  1615.3276 y^x           8         26.1240  1615.3276  1615.3276 9 [Chs]      -9          8         26.1240  1615.3276 x<>y          8         -9         26.1240  1615.3276 y^x        4.3047e07    26.1240  1615.3276  1615.3276 x          1.1246e09  1615.3276  1615.3276  1615.3276 x^2        1.2646e18  1615.3276  1615.3276  1615.3276 ln        41.6813     1615.3276  1615.3276  1615.3276 +       1657.0089     1615.3276  1615.3276  1615.3276 Well done Mark! Greetings, Massimo -+×÷ ↔ left is right and right is wrong 05-10-2015, 06:49 PM Post: #20 Thomas Klemm Senior Member Posts: 1,448 Joined: Dec 2013 RE: A case against the x<>y key (05-09-2015 10:49 PM)hansklav Wrote: (05-10-2015 03:12 PM)Mark Hardman Wrote: Code:               x          y          z          t 2 [Enter]     2         26.1240  1615.3276  1615.3276 3 [Enter]     3          2         26.1240  1615.3276 y^x           8         26.1240  1615.3276  1615.3276 9 [Chs]      -9          8         26.1240  1615.3276 x<>y          8         -9         26.1240  1615.3276 y^x        4.3047e07    26.1240  1615.3276  1615.3276 That's not how to calculate $$-9^{2^3}$$. The negative sign changes the whole expression: 2 ENTER 3 y^x 9 x<>y y^x CHS Only since the whole product is squared this doesn't matter. Cheers Thomas PS: The 2nd ENTER after 3 is probably a typo. « Next Oldest | Next Newest » User(s) browsing this thread: 1 Guest(s)
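For anyone who wants to replay Mark Hardman's left-to-right keystroke listing without a calculator, here is a small Python sketch. The token list is my own transcription of his listing, angles are in degrees, and the stack is an unbounded Python list; the deepest the stack ever gets is recorded to check the four-level claim:

```python
import math

# My transcription of Mark Hardman's left-to-right keystroke listing
# (numbers push onto the stack; names are operations; angles in degrees).
KEYS = [1, 3, 4, "y^x", 2, "*", 5, "/", "-",
        6, 7, "x^2", 3, "1/x", "y^x", "-", "sin",
        8, "x!", "*", "+",
        45, 6, 7, "/", "y^x",
        2, 3, "y^x", 9, "chs", "x<>y", "y^x", "*", "x^2", "ln", "+"]

UNARY = {"x^2": lambda x: x * x,
         "1/x": lambda x: 1 / x,
         "sin": lambda x: math.sin(math.radians(x)),  # sexagesimal degrees
         "x!":  lambda x: math.factorial(int(x)),
         "chs": lambda x: -x,
         "ln":  math.log}
BINARY = {"+": lambda y, x: y + x, "-": lambda y, x: y - x,
          "*": lambda y, x: y * x, "/": lambda y, x: y / x,
          "y^x": lambda y, x: y ** x}

def run(keys):
    stack, deepest = [], 0
    for k in keys:
        if isinstance(k, int):
            stack.append(k)                       # number entry
        elif k == "x<>y":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif k in UNARY:
            stack.append(UNARY[k](stack.pop()))
        else:                                     # binary op: pop x, then y
            x, y = stack.pop(), stack.pop()
            stack.append(BINARY[k](y, x))
        deepest = max(deepest, len(stack))
    return stack[-1], deepest

result, deepest = run(KEYS)
```

Running it gives a value agreeing with Hans's 1657.00894809 and a maximum stack depth of exactly 4, matching Mark's claim that the sequence just fits a classical four-level stack. The sketch ignores HP's T-register duplication on stack drop, which this particular sequence never relies on.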
# Thread: why is this false? 1. ## why is this false? Two vectors u, v are orthogonal if and only if ||u+v||^2 +||u-v||^2 = 2||u||^2 - 2||v||^2? Doesn't (u+v)(u+v) + (u-v)(u-v) = u^2 + v^2? 2. edited for stupidity 3. Good question man, I don't see how orthogonality plays any role. $||u+v||^2+ ||u-v||^2=(u+v)\cdot(u+v)+(u-v)\cdot(u-v)$ $||u||^2+u\cdot v + v\cdot u +||v||^2 + ||u||^2-u\cdot v - v\cdot u +||v||^2 = 2||u||^2+2||v||^2$
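The identity derived in the last reply (the parallelogram law, with a plus sign on the right-hand side) holds for every pair of vectors, orthogonal or not, and the minus-sign version from the question already fails for a simple orthogonal pair. A quick check in plain Python (my own illustration, not from the thread):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm2(a):          # squared norm ||a||^2
    return dot(a, a)

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

# Parallelogram law: ||u+v||^2 + ||u-v||^2 = 2||u||^2 + 2||v||^2 for ANY u, v
u, v = [1.0, 2.0, -1.0], [0.5, -3.0, 2.0]
lhs = norm2(add(u, v)) + norm2(sub(u, v))
rhs_plus = 2 * norm2(u) + 2 * norm2(v)

# The minus-sign claim fails even for an orthogonal pair:
u, v = [1.0, 0.0], [0.0, 2.0]                  # u . v == 0
actual = norm2(add(u, v)) + norm2(sub(u, v))   # parallelogram law gives 10
claimed = 2 * norm2(u) - 2 * norm2(v)          # the question's version gives -6
```

So orthogonality plays no role at all here, which is exactly the point of the last answer.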
# It's as easy as 1,2,3 (except you have to add them up)

Let's consider a list \(L\) (initially empty) and a pointer \(p\) into this list (initialized to \(0\)). Given a pair of integers \((m,n)\), with \(m\ge 0\) and \(n>0\):

1. We set all uninitialized values in \(L\) up to \(p+m+n\) (excluded) to \(0\).
2. We advance the pointer by adding \(m\) to \(p\).
3. We create a vector \([1,2,\dots,n]\) and 'add' it to \(L\) at the position \(p\) updated above. More formally: \(L_{p+k} \gets L_{p+k}+k+1\) for each \(k\) in \([0,\dots,n-1]\).

We repeat this process with the next pair \((m,n)\), if any.

Your task is to take a list of pairs \((m,n)\) as input and to print or return the final state of \(L\).

## Example

Input: [[0,3],[1,4],[5,2]]

• initialization: p = 0, L = []
• after [0,3]: p = 0, L = [0,0,0] + [1,2,3] = [1,2,3]
• after [1,4]: p = 1, L = [1,2,3,0,0] + [1,2,3,4] = [1,3,5,3,4]
• after [5,2]: p = 6, L = [1,3,5,3,4,0,0,0] + [1,2] = [1,3,5,3,4,0,1,2]

## Rules

• Instead of a list of pairs, you may take the input as a flat list \((m_0,n_0,m_1,n_1,\dots)\) or as two separated lists \((m_0,m_1,\dots)\) and \((n_0,n_1,\dots)\).
• You may assume that the input is non-empty.
• The output must not contain any trailing \(0\)'s. However, all intermediate or leading \(0\)'s must be included (if any).
• This is code-golf.

## Test cases

Input:
[[0,3]]
[[2,3]]
[[0,4],[0,5],[0,6]]
[[0,3],[2,2],[2,1]]
[[0,1],[4,1]]
[[0,3],[1,4],[5,2]]
[[3,4],[3,4],[3,4]]
[[0,1],[1,2],[2,3],[3,4]]
[[2,10],[1,5],[2,8],[4,4],[6,5]]

Output:
[1,2,3]
[0,0,1,2,3]
[3,6,9,12,10,6]
[1,2,4,2,1]
[1,0,0,0,1]
[1,3,5,3,4,0,1,2]
[0,0,0,1,2,3,5,2,3,5,2,3,4]
[1,1,2,1,2,3,1,2,3,4]
[0,0,1,3,5,8,11,14,11,14,17,20,12,0,0,1,2,3,4,5]

• @KevinCruijssen Given that I usually try to avoid edge cases and given that another answer is already doing something special with the empty list, supporting it is now optional.
– Arnauld Oct 25 '19 at 7:39

• Thanks, that saves me a bit of trouble fixing it, since my program was outputting 0 due to the sum at the end. – Kevin Cruijssen Oct 25 '19 at 7:40

## 18 Answers

# J, 27 bytes

[:+/[(>:@i.@[,~0~])"0+/\@]

Try it online! Takes two separate lists for m and n - the list for n is the left argument of the function, the list for m - the right one.

# K (ngn/k), 42 bytes

{+/{x,'((|/#'x)-#'x)#'0}((+\x)#'0),'1+!'y}

Try it online! Takes two separate lists for m and n. It's too long currently, I'll try to golf it.

• I tried solving this independently and came up with something very similar, but 25 bytes. Kind of fun 3 nested dyadic hooks +/@(((,~#&0)~1+i.)"0~+/\) Try it online! – Jonah Oct 26 '19 at 3:42
• @Jonah Great! I don't mind if you post it separately. – Galen Ivanov Oct 26 '19 at 4:08

# Jelly, 10 9 bytes

Ä0ẋżR}F€S

A dyadic Link accepting a list of integers on each side, \(M\) on the left and \(N\) on the right, which yields a list of integers.

Try it online! Or see the test-suite.

### How?

Ä0ẋżR}F€S - Link: list of integers, M; list of integers, N
Ä         - cumulative sums (M) = [m1, m1+m2, m1+m2+m3, ...]
0ẋ        - zero repeated = [[0]*m1,[0]*(m1+m2),[0]*(m1+m2+m3), ...]
R}        - range (right=N) = [[[1,2,3,...,n1]],[[1,2,3,...n2]],[[1,2,3,...,n3]], ...]
ż         - zip together = [[[0]*m1,[[1,2,3,...,n1]]],[[0]*(m1+m2),[[1,2,3,...,n2]]],[[0]*(m1+m2+m3),[[1,2,3,...,n3]]], ...]
F€        - flatten each = [[0,0,...,0,1,2,3,...,n1],[0,0,...,0,1,2,3,...,n2],[0,0,...,0,1,2,3,...,n3], ...]
S         - sum

• Nice. I’d missed the fact that two lists were acceptable input. – Nick Kennedy Oct 25 '19 at 18:28

# Jelly, 12 11 bytes

+ɼ0x;R}ʋ/€S

Try it online!

A full program that takes a list of lists of integers as its argument and returns a list of integers. Can be adapted to work as a link by resetting the register to zero after each call (as implemented in the footer on TIO). Saved a byte now that the list is guaranteed to be non-empty.
# Java 8, 148 147 bytes

N->M->{int l=M.length,s=M[l-1],p=0,L[],i=0,j;for(int n:N)s+=n;for(L=new int[s];i<l;i++)for(p+=N[i],j=s;j-->0;)L[j]-=j<p|j>=p+M[i]?0:~j+p;return L;}

-1 byte thanks to @ceilingcat. Takes both integer-arrays as separated inputs. Try it online.

Explanation:

N->M->{            // Method with integer-array as two parameters as well as return-type
  int l=M.length,  //  Length of the input-arrays
      s=           //  Length of the output-array,
       M[l-1],     //   starting at the last pointer-item of array M
      p=0,         //  Position p as specified in the challenge, starting at 0
      L[],         //  Output list, starting uninitialized
      i=0,j;       //  Index integers
  for(int n:N)     //  Loop over the value-list N:
    s+=n;          //   And add all of them to the output-length
  for(L=new int[s];//  Now initialize the output-list of size s, filled with 0s by default
      i<l;i++)     //  Loop i in the range [0, l):
    for(p+=N[i],   //   Increase position p by the i'th pointer of N
        j=s;j-->0;)//   Inner loop j in the range (s,0]:
      L[j]-=       //    Decrease the j'th item of the output-list by:
        j<p        //     If j is smaller than position p
        |j>=       //     or j is larger than or equal to
          p+M[i]?  //     pointer p and the i'th value of M combined
         0         //      Leave the j'th item of output-array the same by adding 0
        :          //     Else:
         ~j+p;     //      Decrease the j'th item of the output-array by -j-1+p
                   //      (increase it by the difference between j and p, plus 1)
  return L;}       //  And after the nested loops: return the resulting array

# Zsh, 59 56 54 bytes

-3 bytes by changing to my Bash strategy, -2 bytes by switching back to my original strategy, now that the rules specify the list is non-empty.

for m n;for ((i=0,p+=m;i<n;a[p+i]+=++i)): <<<${a/#%/0}

Setting a[5]=1 causes a[1] through a[4] to be initialized empty. The final parameter expansion replaces all empty elements with 0.
for m n # implicit 'in "$@"' for ((i=0, p+=m; i<n; a[p+i]+=++i)) # increment p by m, add to a[p,p+n-1] : # ':' is a no-op builtin <<<${a/#%/0} # the glob #% matches empty elements ## Haskell, 90 87 bytes (foldl(%)[].).zipWith(\o p->(0<$[1..o])++[1..p]).scanl1(+) (a:b)%(c:d)=a+c:b%d b%d=b++d Takes the input as two separate lists [m0,m1,...] and [n0,n1,...]. Try it online! A variant (function % same as above), also 87 bytes: # Python 3, 144119118111110 106 bytes def f(m,n): p=k=0;l=[] for i,j in zip(n,m):p+=j;l+=[0]*(p+i-len(l));exec("l[~k]+=i-k;k+=1;"*i) return l Try it online! Thanks to: -@mypetlion for saving me 25 bytes # Wolfram Language (Mathematica), 62605653 50 bytes Plus@@PadRight[a=0;Ramp[Range[(a+=#)+#2]-a]&@@@#]& Try it online! -3 with guarantee that input is non-empty. Takes a list of pairs as the argument. # Ruby, 9288 84 bytes ->x{n,l=0,[];x.map{|a,b|0.upto(b-1+n+=a){|i|l[i]||=0};1.upto(b){|i|l[-i]+=1+b-i}};l} Input is a list of pairs, in the form [[m1, n1], [m2, m2], ...]. The approach is pretty much the algorithm as described. Golfy tricks I've used include: • l[i]||=0 setting the value for L to 0 if it isn't already set • That's pretty much it. Thanks to Value Ink for Try it online! Ruby can be pretty terse. When the version with numbered block parameters comes out, this can be reduced by a few bytes to this (note the @1 ins: ->x{n,l=0,[];x.map{|a,b|0.upto(b-1+n+=a){l[@1]||=0};1.upto(b){l[-@1]+=1+b-@1}};l} • Why are you ending your map block with returning l and then taking [-1], when you can just do ;l after the map? Like so. – Value Ink Oct 26 '19 at 23:46 • Two more things I thought up: n,*l=0 will initiate l to the empty array for -2 bytes, and you can wrap the n+=a into your upto call like so: 0.upto(b-1+n+=a) – Value Ink Oct 26 '19 at 23:51 # J, 20 bytes +/@(0>.>:@i.@+-])+/\ Try it online! I guess this is quite different from both Galen Ivanov's and Jonah's answers. A dyadic train that takes ns as left argument and ms as right. 
One trick was to avoid (...)"0 (apply to each item) in favor of ...@+. The conjunction u@v has the effect of u@:v"v. The rank-forcing effect is usually not desirable when v is an arithmetic verb, but it works perfectly here. ### How it works +/@(0>.>:@i.@+-])+/\ Left(n): range generators ex) 3 4 2 Right(m): offsets ex) 0 1 5 +/\ cm: Cumulative sum of offsets ex) 0 1 6 + Add n and cm element-wise ex) 3 5 8 i.@ Generate 0..x-1 for each x above ex) 0 1 2;0 1 2 3 4;0 1 2 3 4 5 6 7 >:@ Increment each value ex) 1 2 3;1 2 3 4 5;1 2 3 4 5 6 7 8 (Implicit) Form a 2D array, padding with 0s where needed -] Subtract each item of cm from each row of above ex) 1 2 3 0 0 0 0 0 0 1 2 3 4 -1 -1 -1 -5 -4 -3 -2 -1 0 1 2 0>. Max with 0; Change negative numbers to 0s +/@( ) Take sum in column direction • very nice @Bubbler – Jonah Oct 30 '19 at 11:51 # Bash, 77 bytes for N;{ b=($N) for((i=0,p+=b;i<p+b[1];a[i]+=++i>p?i-p:0)){ :;} } echo${a[@]} Try it online! I couldn't port my first Zsh answer directly, since \${a[@]/#%/0} doesn't work in Bash. So instead of fixing the empty elements at the end, I set all the elements with a[i]+=0 along the way. In the end, this strategy works out better for Zsh anyway! 
# C, 204196195192 185 bytes *c(m,n,q,x,p,z)int*m,*n;{int i,j,*a=malloc(4);for(i=p=z=0;m[i]+1;i++){q=n[i]+(p+=m[i]);a=realloc(a,8*((x=z)>q?z:(z=q)));bzero(a+x,(z-x)*8);for(j=p;j<q;j++)a[j]+=j-p+1;}a[z]--;return a;} De-golfed version: * count (m, n, size_of_array, pointer_into_array, previous_size_of_array, final_element_counted_to) //all but m and n are implicitly int - return type of function is implicitly int pointer int* m, * n; { pointer_into_array = size_of_array = 0; int* array = malloc(sizeof(int)); //get a pointer so we can realloc later for (int i = 0; m[i] + 1; i++) { //iterate through m and n, stopping at -1 sentinel value final_element_counted_to = n[i] + (pointer_into_array += m[i]); //update pointer, get final element counted to on this pair array = realloc(array, sizeof(int) * 2 * ((previous_size_of_array = size_of_array) > final_element_counted_to ? size_of_array : (size_of_array = final_element_counted_to) + 1)); //reallocate array to size of max(size_of_array, final_element_counted_to+1) * sizeof(int) * 2 [multiplying by 2 for the final statement of a[z] = -1] bzero(array + previous_size_of_array, (size_of_array - previous_size_of_array) * sizeof(int) * 2); //initialises new memory to zero for(int j = pointer_into_array; j < final_element_counted_to; j++) //do the actual counting + adding array[j] += j - pointer_into_array + 1; } array[size_of_array]--; //make sure it's terminated by -1 return array; } Try it online! Function c takes two arguments (rest are for free declaration) m and n, both -1 terminated arrays corresponding respectively to the two separate lists for m and n. It returns a -1 terminated array. Please note that this requires sizeof(int) to be 4, and also requires the non-standard and deprecated but widely-implemented function bzero. • Welcome to Code Golf! You can use the header and footer sections of TIO to store boilerplate code: like this. They do not count towards your byte score. 
– Arnauld Oct 26 '19 at 17:39 • And it seems like you current score is actually 204. – Arnauld Oct 26 '19 at 17:39 • 171 bytes – ceilingcat Oct 30 '19 at 8:00 # K (ngn/k), 30 bytes {+/0^(1+!'y)@'-n-\:!*|y+n:+\x} Try it online! • i tried to beat this with "amend" but i couldn't. my best is: {@[&0|/y+j;i+j:+\x;+;1+i:!'y]} – ngn Oct 31 '19 at 23:39 # Clojure, 170 Bytes (fn [a](reduce #(mapv +(concat %1(repeat 0))%2)(mapv (fn [[x y]](concat(repeat x 0)(range 1(+ 1 y))))(reduce #(conj %1(mapv +[(nth(last %1)0)0]%2))[(first a)](rest a))))) Try it online! • Hello and welcome to PPCG! – Jonathan Frech Oct 30 '19 at 19:20 # Java, 642 Bytes This I think is kind of awful, but doing it with Java who knows what I was expecting. If someone could help me make this better I would really appreciate that, a bunch of bytes are taken up dealing with the fact that combiner can get two different length lists and need to add them, and also with the fact that there is a type needed for the BiConsumer lambda. I think there are a few ways, but it's getting late... Input is n, expected to be an array of arrays. IntStream.range(0, n.length).mapToObj((x) -> new int[] { n[x][0] + (x > 0 ? n[x - 1][0] : 0), n[x][1]}) .map((v) -> IntStream.range(0, v[0] + v[1]).map((x) -> x >= v[0] ? 
1 + x - v[0] : 0).toArray()) .collect(Collector.of( () -> new ArrayList<>(), (BiConsumer<List<Integer>, int[]>) (l, v) -> IntStream.range(0, Math.max(l.size(), v.length)).forEach((i) -> { if (i < v.length) l.set(i, l.get(i) + v[i]); }), (a, b) -> { return IntStream.range(0, a.size()).mapToObj((i) -> a.get(i) + b.get(i)).collect(Collectors.toList()); }, Characteristics.IDENTITY_FINISH )); Improved and golf'd by Kevin Cruijssen: n->IntStream.range(0,n.length).mapToObj(x->new int[]{n[x][0]+(x>0?n[x-1][0]:0),n[x][1]}).map(v->IntStream.range(0,v[0]+v[1]).map(x->x<v[0]?0:1+x-v[0]).toArray()).collect(Collector.of(()->new Stack<>(),(java.util.function.BiConsumer<List<Integer>,int[]>)(l,v)->IntStream.range(0,Math.max(l.size(),v.length)).forEach(i->{if(i>=l.size())l.add(0);if(i<v.length)l.set(i,l.get(i)+v[i]);}),(a,b)->{while(a.size()>b.size())b.add(0);while(b.size()>a.size())a.add(0);return IntStream.range(0,a.size()).mapToObj(i->a.get(i)+b.get(i)).collect(Collectors.toList());},Collector.Characteristics.IDENTITY_FINISH)) • How does this receive input? And surely you can golf away pretty much all the whitespace at least – Jo King Oct 25 '19 at 5:32 • I'm not sure how you ended up with 581 bytes, because your current code above is 842 bytes and still missing required imports and n-> as input. Removing all whitespaces; parenthesis around the (L)-> everywhere; changing ArrayList to Stack; reversing some checks so we could use a<b?0:1 instead of a>=b?1:0; and some other basic golfs, I end up at 642 bytes. I'm sure this can be a lot shorter without the stream builtins, though. – Kevin Cruijssen Oct 25 '19 at 9:56 • Anyway, welcome to CGCC! If you haven't seen them yet tips for golfing in Java and tips for golfing in <all languages> might both be interesting to read through. Enjoy your stay! :) – Kevin Cruijssen Oct 25 '19 at 9:57 • To get 581 bytes I minified with codebeautify.org/javaviewer and pasted into mothereff.in/byte-counter . 
Should I be pasting the minified version into the code area when posting here? – Disco Mike Oct 25 '19 at 16:42 • I realize now that I excluded (n) -> though – Disco Mike Oct 25 '19 at 16:43
## UNIT TEST PAPER - MATRIX, DETERMINANT AND APPLICATIONS OF INTEGRALS

Time – 1:30 Hr                M.M. 50

Very Short Answer Type Questions (1 Mark)

1. If matrix A = $\left[\begin{array}{cc}2& 3\\ 1& 2\end{array}\right]$ and matrix B = $\left[\begin{array}{cc}2& -3\\ -1& 2\end{array}\right]$, then show that $A^{-1}=B$ and $B^{-1}=A$.

2. Using properties of determinants, show that $\left|\begin{array}{ccc}0& 99& -998\\ -99& 0& 997\\ 998& -997& 0\end{array}\right|=0$

Short Answer Type Questions (2 Marks)

3. Using properties of determinants, show that $\Delta =\left|\begin{array}{ccc}1& {\mathrm{log}}_{x}y& {\mathrm{log}}_{x}z\\ {\mathrm{log}}_{y}x& 1& {\mathrm{log}}_{y}z\\ {\mathrm{log}}_{z}x& {\mathrm{log}}_{z}y& 1\end{array}\right|=0$

4. If A = $\left[\begin{array}{ccc}2& 3& 1\\ 1& 2& -1\\ 3& 4& 2\end{array}\right]$ is a non-singular matrix of order 3×3, show that |adj A| = |A|².

5. Using elementary column operations, find the inverse of the matrix $\left[\begin{array}{cc}2& 3\\ 3& 5\end{array}\right]$

6. If A = $\left[\begin{array}{cc}\alpha & \beta \\ \gamma & -\alpha \end{array}\right]$ is such that ${A}^{2}=I$, then show that ${\alpha }^{2}+\beta \gamma =1$

7. Find the number of all possible matrices of order 3 × 2 with each entry 0, 1 or 2.
Long Answer Type Questions – I (4 Marks)

8. If matrix A = $\left[\begin{array}{ccc}2& 3& 1\\ 1& 4& 5\end{array}\right]$ and matrix B = $\left[\begin{array}{cc}3& 4\\ 1& 5\\ 2& 3\end{array}\right]$, then verify (AB)′ = B′A′.

9. If A = $\left[\begin{array}{cc}3& -5\\ -4& 2\end{array}\right]$, show that ${A}^{2}-5A-14I=0$. Hence, find ${A}^{-1}$.

10. Using properties of determinants, show that $\left|\begin{array}{ccc}{\mathrm{sin}}^{2}A& {\mathrm{cos}}^{2}A& \mathrm{sin}A\mathrm{cos}A\\ {\mathrm{sin}}^{2}B& {\mathrm{cos}}^{2}B& \mathrm{sin}B\mathrm{cos}B\\ {\mathrm{sin}}^{2}C& {\mathrm{cos}}^{2}C& \mathrm{sin}C\mathrm{cos}C\end{array}\right|=\mathrm{sin}(A-B)\mathrm{sin}(B-C)\mathrm{sin}(C-A)$

11. Using elementary row operations, find the inverse of matrix A = $\left[\begin{array}{ccc}2& 1& 3\\ 1& 0& 2\\ 1& 2& 1\end{array}\right]$

12. Vertices of a $\Delta ABC$ are A(2, 3), B(4, 2), C(x, 0). Area of the $\Delta ABC$ is 5 sq. units. Find the value of x.

Long Answer Type Questions – II (6 Marks)

13. Using integration, find the area of the $\Delta ABC$ whose vertices are A(1, 0), B(2, 2) and C(3, 1).

14. Using matrices, solve: $\left\{\begin{array}{c}5x-y+z=4\\ 3x+2y-5z=2\\ x+3y-2z=5\end{array}\right.$
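A worked sketch for question 12 (added for illustration, not part of the test paper), using the standard determinant formula for the area of a triangle:

```latex
\text{Area}=\frac{1}{2}\left|x_1(y_2-y_3)+x_2(y_3-y_1)+x_3(y_1-y_2)\right|
=\frac{1}{2}\left|2(2-0)+4(0-3)+x(3-2)\right|
=\frac{1}{2}\left|x-8\right|=5
\;\Longrightarrow\; x=18 \ \text{or}\ x=-2.
```

Both values of x give a triangle of area 5 sq. units, so each is an acceptable answer.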
zone.tetra.strain.rate

Syntax

t := zone.tet.strain.rate(z<,ioverlay><,itetra>)

Get the zone tetra strain rate based on the current velocity field. If ioverlay is specified (an integer from 1 to 2), then only the strain rates of the tetra of that overlay are returned. If itetra is specified as well, then only the strain rate of that specific tetra is returned. The total number of tetra in each overlay depends on the zone type (brick, degenerate-brick, wedge, pyramid, or tetrahedron). See also the functions zone.overlays and zone.tet.num.

Returns: t - zone tetra strain rate tensor or value
z - zone pointer
ioverlay - optional overlay index of the tetra, from 1 to 2
itetra - optional index of the tetra in the overlay, from 1 to 5 for a brick zone type.

Component Access

f := zone.tet.strain.rate(z)->xx
f := zone.tet.strain.rate.xx(z)

Get the $$xx$$-component of the zone tetra strain rate based on the current velocity field.

Returns: f - $$xx$$-component of the zone tetra strain rate tensor or value
z - zone pointer

Access other tensor components ($$yy$$, $$zz$$, $$xy$$, $$xz$$, $$yz$$) by substituting the component name where $$xx$$ appears above. See Member Access Operator for information about accessing members from data types using ->.

Deprecated Component Access

Component access by adding an optional integer in the function arguments (zone.tet.strain.rate(z,<int>,<int>,<int, <int>>)) is deprecated. It remains available but will not be supported in future versions. See Component Access with Integers in FISH on the Deprecated Commands and FISH page for details.

The remaining component access functions follow the same pattern:

f := zone.tet.strain.rate.yy(z)
f := zone.tet.strain.rate.zz(z)
f := zone.tet.strain.rate.xy(z)
f := zone.tet.strain.rate.xz(z)
f := zone.tet.strain.rate.yz(z)
# mars.tensor.special.gamma# mars.tensor.special.gamma(x, **kwargs)[source]# gamma function. The gamma function is defined as $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} dt$ for $$\Re(z) > 0$$ and is extended to the rest of the complex plane by analytic continuation. See [dlmf] for more details. Parameters z (array_like) – Real or complex valued argument Returns Values of the gamma function Return type scalar or ndarray Notes The gamma function is often referred to as the generalized factorial since $$\Gamma(n + 1) = n!$$ for natural numbers $$n$$. More generally it satisfies the recurrence relation $$\Gamma(z + 1) = z \cdot \Gamma(z)$$ for complex $$z$$, which, combined with the fact that $$\Gamma(1) = 1$$, implies the above identity for $$z = n$$. References dlmf NIST Digital Library of Mathematical Functions https://dlmf.nist.gov/5.2#E1
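The factorial identity and the recurrence in the notes are easy to spot-check with the scalar gamma function from the standard library; `mars.tensor.special.gamma` applies the same function elementwise to a tensor (this sketch does not exercise the Mars API itself):

```python
import math

# Γ(n + 1) = n! for natural numbers n
for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# Recurrence Γ(z + 1) = z·Γ(z) for a non-integer argument
z = 3.6
assert math.isclose(math.gamma(z + 1), z * math.gamma(z))
```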
# Linear approach to the orbiting spacecraft thermal problem José Gaite and Germán Fernández-Rico IDR, ETSI Aeronáuticos, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, E-28040 Madrid, Spain January 18, 2012 ###### Abstract We develop a linear method for solving the nonlinear differential equations of a lumped-parameter thermal model of a spacecraft moving in a closed orbit. Our method, based on perturbation theory, is compared with heuristic linearizations of the same equations. The essential feature of the linear approach is that it provides a decomposition in thermal modes, like the decomposition of mechanical vibrations in normal modes. The stationary periodic solution of the linear equations can be alternately expressed as an explicit integral or as a Fourier series. We apply our method to a minimal thermal model of a satellite with ten isothermal parts (nodes) and we compare the method with direct numerical integration of the nonlinear equations. We briefly study the computational complexity of our method for general thermal models of orbiting spacecraft and conclude that it is certainly useful for reduced models and conceptual design but it can also be more efficient than the direct integration of the equations for large models. The results of the Fourier series computations for the ten-node satellite model show that the periodic solution at the second perturbative order is sufficiently accurate. spacecraft thermal control, lumped-parameter models, perturbation methods ## I Introduction The thermal control of a spacecraft ensures that the temperatures of its various parts are kept within their appropriate ranges Kreith (); therm-control (); therm-control_2 (); therm-control_3 (). The simulation and prediction of temperatures in a spacecraft during a mission are usually carried out by commercial software packages.
These software packages employ “lumped parameter” models that describe the spacecraft as a discrete network of nodes, with one energy-balance equation per node. The equations for the thermal state evolution are coupled nonlinear first-order differential equations, which can be integrated numerically. Given the thermal parameters of the model and its initial thermal state, the numerical integration of the differential equations yields the solution of the problem, namely, the evolution of the node temperatures. However, a detailed model with many nodes is difficult to handle, and its integration for a sufficiently long time of evolution can take considerable computer time and resources. Therefore, it is very useful to study simplified models and approximate methods of integrating the differential equations. Many spacecraft missions, in particular, satellite missions, consist of an initial transient part and then a stationary part, in which the spacecraft just goes around a closed orbit, in which the heat inputs are periodic. These periodic heat inputs are expected to induce periodic temperature variations, with a maximum and a minimum temperature in each orbit. This suggests a conservative approach that consists in computing only the temperatures for the hot and cold cases of the given orbit, defining them as the two steady cases with the maximum and minimum heat loads, respectively. Naturally, the real temperature variations in the orbit are smaller, because there is not enough time for the hot and cold cases to establish themselves. In fact, the temperature variations can be considerably smaller, to such a degree that it is necessary to integrate the differential equations, at least approximately. The differential equations for energy balance are nonlinear due to the presence of radiation couplings, which follow the Stefan-Boltzmann quartic law. 
A common approach to these equations involves a linearization of the radiation terms that approximate them by heat conduction terms anal-sat (); therm-control_2 (); anal-sat_2 (); IDR (). This approach transforms the nonlinear equations into standard linear heat conduction equations. But this approach has not been sufficiently justified, is of a heuristic nature and does not constitute a systematic approximation. In fact, nonlinear equations are very different from linear equations and, in particular, a periodic driving may not induce periodic solutions but much more complex solutions, namely, chaotic solutions. Therefore, we have carried out in preceding papers a full nonlinear analysis of spacecraft thermal models NoDy (); NoDy1 (). The conclusion of the analysis is that the complexities of nonlinear dynamics, such as multiple equilibria and chaos, do not appear in these models. While the existence of only one equilibrium state can be proved in general, the absence of chaos under driving by variable external heat loads can only be proved for a limited range of magnitudes of the driving loads. This range presumably includes the magnitudes involved in typical spacecraft orbits. The proofs in Refs. 9 and 10 are constructive and are based on a perturbation method that is expected to be sound when the linear equations corresponding to the first perturbative order constitute a good approximation of the nonlinear equations. This implies that the fully nonlinear solution describes a weakly nonlinear oscillator. Since the perturbative approximation is mathematically rigorous and systematic, it is worthwhile to study in detail the scope of the perturbative linear equations and, furthermore, to compare them with previous linear approaches of a heuristic nature. The main purpose of this paper is to study the linear method of predicting the thermal behavior of spacecraft in stationary orbits (Sect. 
II and III) and to test it on a minimally realistic thermal model of a satellite in a circular orbit. Since the general one and two-node models analyzed in Refs. 9 and 10, respectively, are too simple, we define in this paper a ten-node thermal model of a small Moon-orbiting satellite (Sect. IV). This model is simple enough to allow us to explicitly show all the quantities involved (thermal couplings and capacities, heat inputs, etc.) and it is sufficient for illustrating the main features of the linear approach. As realistic thermal models have many more nodes, we consider in Sect. V the important issue of scalability of the method and, hence, its practical applications. Computational aspects of the steady-state problem have been studied by Krishnaprakas Kris1 (); Kris2 () and by Milman and Petrick MiPe (), while computational aspects of the direct integration of the nonlinear equations for the unsteady problem have been studied by Krishnaprakas Kris3 (). Here we focus on the linear equations for the stationary but unsteady case and survey its computational aspects. A note on notation: In the equations that contain matrix or vector quantities, sometimes we use component notation (with indices) while other times we use compact matrix notation (without indices), according to the nature of the equations. ## Ii Linearization of the heat-balance equations A lumped-parameter thermal model of a continuous system consists of a discrete network of isothermal regions (nodes) that represent a partition of the total thermal capacitance and that are linked by thermal conduction and radiation couplings Kreith (); therm-control (); therm-control_2 (); therm-control_3 (); anal-sat (). 
This discretization reduces the integro-differential heat-transfer equations to a set of energy-balance ODEs, one per node, which control the evolution of the nodes’ temperatures anal-sat ():

$$C_i \dot{T}_i = \dot{Q}_i(t) - \sum_{j=1}^{N}\left[K_{ij}(T_i-T_j) + R_{ij}(T_i^4-T_j^4)\right] - R_i(T_i^4-T_0^4), \qquad i=1,\ldots,N, \qquad (1)$$

where $N$ is the number of nodes and $\dot{Q}_i$ contains the total heat input to the $i$th node from external radiation and from internal energy dissipation (if there is any). The conduction and radiation coupling matrices are denoted by $K_{ij}$ and $R_{ij}$, respectively; they are symmetric ($K_{ij}=K_{ji}$ and $R_{ij}=R_{ji}$) and $K_{ii}=R_{ii}=0$; so there are $N(N-1)$ independent coupling coefficients altogether, but many vanish, usually. The temperature $T_0 = 2.7$ K is the temperature of the environment, namely, the cosmic microwave background radiation. The $i$th-node coefficient of radiation to the environment is given by $R_i = A_i\varepsilon_i\sigma$, where $A_i$ denotes the outward facing area, $\varepsilon_i$ its (infrared) emissivity, and $\sigma$ is the Stefan-Boltzmann constant. The constant term $R_iT_0^4$ can be included in $\dot{Q}_i$ or ignored altogether, if each $T_i \gg T_0$. Equations (1) coincide with the ones implemented in commercial software packages, for example, ESATAN ESATAN (). There is no systematic procedure for finding the analytical solution of a system of nonlinear differential equations, except in some particularly simple cases. Of course, nonlinear systems can always be integrated numerically with finite difference schemes. Methods of this kind are employed in commercial software packages. When a nonlinear system can be approximated by a linear system and, hence, an approximate analytic solution can be found, this solution constitutes a valuable tool. Actually, one can always resort to some kind of perturbation method to linearize a nonlinear system. Therefore, we now study the rigorous linearization of Eqs. (1) based on a suitable perturbation method, and we also describe, for the sake of a comparison, a heuristic linearization, which actually is best understood in light of the results of the perturbation method.
### ii.1 Perturbative linearization

If we assume that the heat inputs in the energy-balance Eqs. (1) are periodic, namely, that there is a time interval $T$ such that $\dot{Q}_i(t+T)=\dot{Q}_i(t)$, then it seems sensible to study first the effect of the mean heat inputs in a period. This averaging method, introduced in Refs. 9 and 10, relies on the fact that the autonomous nonlinear system of ODEs for constant $\dot{Q}_i$ can be thoroughly analyzed with analytical and numerical methods. For example, it is possible to determine that there is a unique steady thermal state and that it is (locally) stable MiPe (); NoDy1 (). The actual values of the steady temperatures can be found efficiently with various numerical methods Kris1 (); Kris2 (); MiPe (). Furthermore, the eigenvalues and eigenvectors of the Jacobian matrix of the nonlinear system of ODEs provide us with useful information about the dynamics, in particular, about the approach to steady-state: the eigenvectors represent independent thermal modes and the eigenvalues represent their relaxation times NoDy1 (). Once the averaged equations are solved, the variation of the heat inputs can be considered as a driving of the averaged solutions. Thus, we can define the driving function

$$F_i(t)=\frac{\dot{Q}_i(t)-\langle\dot{Q}_i\rangle}{C_i}, \qquad i=1,\ldots,N,$$

where $\langle\dot{Q}_i\rangle$ denotes the mean value of $\dot{Q}_i$ over the period of oscillation. A weak driving function must not produce a notable deviation from the averaged dynamics. In particular, the long-term thermal state of an orbiting spacecraft must oscillate about the corresponding steady-state. To embody this idea, we introduce a formal perturbation parameter $\epsilon$, to be set to the value of unity at the end, and write Eqs. (1) as

$$\dot{T}_i=\epsilon F_i(t)+\frac{\langle\dot{Q}_i\rangle}{C_i}-\sum_{j=1}^N\left[\frac{K_{ij}}{C_i}(T_i-T_j)+\frac{R_{ij}}{C_i}(T_i^4-T_j^4)\right]-\frac{R_i}{C_i}T_i^4, \qquad i=1,\ldots,N. \qquad (2)$$

Then, we assume an expansion of the form

$$T_j(t)=\sum_{n=0}^{\infty}\epsilon^n T_j^{(n)}(t). \qquad (3)$$

When we substitute this expansion into Eqs. (2), we obtain for the zeroth order of $\epsilon$

$$\dot{T}_i^{(0)}=\frac{\langle\dot{Q}_i\rangle}{C_i}-\sum_{j=1}^N\left[\frac{K_{ij}}{C_i}\bigl(T_i^{(0)}-T_j^{(0)}\bigr)+\frac{R_{ij}}{C_i}\Bigl(\bigl(T_i^{(0)}\bigr)^4-\bigl(T_j^{(0)}\bigr)^4\Bigr)\right]-\frac{R_i}{C_i}\bigl(T_i^{(0)}\bigr)^4, \qquad i=1,\ldots,N, \qquad (4)$$

that is to say, the averaged equations.
The initial conditions for these equations are the same as for the unaveraged equations. For the first order in $\epsilon$, we obtain the following system of linear equations:

$$\dot{T}_i^{(1)}=\sum_{j=1}^N J_{ij}(t)\,T_j^{(1)}+F_i(t), \qquad i=1,\ldots,N. \qquad (5)$$

Here, $J_{ij}$ is the Jacobian matrix

$$J_{ij}(t)=\left.\frac{\partial\dot{T}_i(T)}{\partial T_j}\right|_{T=T^{(0)}(t)},$$

where $T^{(0)}(t)$ is the solution of the zeroth order equation. Equations (5) are to be solved with the initial condition $T^{(1)}(0)=0$. The elements of the Jacobian matrix at a generic point in the temperature space are calculated to be:

$$J_{ij}=C_i^{-1}\bigl(K_{ij}+4R_{ij}T_j^3\bigr), \quad \text{if } i\neq j, \qquad (6)$$
$$J_{ii}=C_i^{-1}\Bigl[-\sum_{k=1}^N\bigl(K_{ik}+4R_{ik}T_i^3\bigr)-4R_iT_i^3\Bigr]. \qquad (7)$$

This matrix has interesting properties. First of all, it has negative diagonal and nonnegative off-diagonal elements. In other words, $-J$ is a Z-matrix Ber-Plem (). Furthermore, it fulfills a semipositivity condition that qualifies it as a nonsingular M-matrix NoDy1 (). Since the eigenvalues of an M-matrix have positive real parts, the opposite holds for $J$, namely, its eigenvalues have negative real parts. One more interesting property of $J$, related to semipositivity, is that it possesses a form of diagonal dominance: it is similar to a diagonally dominant matrix and the similarity is given by a positive diagonal matrix. Naturally, this property is shared by $-J$. These properties are useful to prove some desirable properties of the solutions of Eqs. (5). The chief property of $J$ is that $-J$ is a nonsingular M-matrix. In particular, it implies that $(-J)^{-1}$ is non-negative and, therefore, that the Perron-Frobenius theory is applicable to it Ber-Plem (). The relevant results to be applied are: (i) Perron’s theorem, which states that a strictly positive matrix has a unique real and positive eigenvalue with a positive eigenvector and that this eigenvalue has maximal modulus among all the eigenvalues; (ii) a second theorem, stating that if a Z-matrix that is a nonsingular M-matrix is also “irreducible”, then its inverse is strictly positive. The irreducibility of $-J$ follows from the symmetry of the matrices $K$ and $R$ NoDy1 ().
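As a concrete sketch of Eqs. (6) and (7), one can assemble the Jacobian of a two-node model and check the sign pattern and stability property stated above; for a 2×2 matrix, both eigenvalues have negative real parts exactly when the trace is negative and the determinant is positive. All parameter values below are invented for illustration, not taken from the paper's satellite model:

```python
# Two-node Jacobian built from Eqs. (6)-(7); all values are invented.
C = [5.0, 8.0]            # nodal heat capacities
K12, R12 = 0.4, 1.5e-9    # conduction and radiation couplings between the nodes
R = [2e-9, 3e-9]          # radiation couplings to the environment
T = [300.0, 280.0]        # temperatures at which J is evaluated

J = [[0.0, 0.0], [0.0, 0.0]]
J[0][1] = (K12 + 4 * R12 * T[1] ** 3) / C[0]   # Eq. (6): note the T_j^3 factor
J[1][0] = (K12 + 4 * R12 * T[0] ** 3) / C[1]
J[0][0] = -(K12 + 4 * R12 * T[0] ** 3 + 4 * R[0] * T[0] ** 3) / C[0]  # Eq. (7)
J[1][1] = -(K12 + 4 * R12 * T[1] ** 3 + 4 * R[1] * T[1] ** 3) / C[1]

# Stability check for a 2x2 matrix: negative trace and positive determinant
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
```

With these numbers `tr < 0` and `det > 0`, so both eigenvalues are negative, in agreement with the M-matrix argument.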
As the positive (Perron) eigenvector of $(-J)^{-1}$ is the eigenvector of $J$ that corresponds to its smallest magnitude eigenvalue, it defines the slowest relaxation mode (for a given set of temperatures). Therefore, in the evolution of temperatures given by Eqs. (4), steady-state is eventually approached from the zone corresponding to simultaneous temperature increments (or decrements). The matrix $J(t)$ in Eqs. (5) is obtained by substituting $T^{(0)}(t)$ for $T$ in Eqs. (6) and (7). Then, the nonhomogeneous linear system with variable coefficients, Eqs. (5), can be solved by variation of parameters NoDy1 (), yielding the expression:

$$T^{(1)}(t)=U(t)\int_0^t U(\tau)^{-1}\cdot F(\tau)\,d\tau, \qquad (8)$$

where $U(t)$ is a matrix formed by columns that are linearly independent solutions of the corresponding homogeneous equation, with the condition that $U(0)=I$ (the identity matrix). The difficulty in applying this formula lies in computing $U(t)$, that is, in computing the solutions of the homogeneous equation. Moreover, this computation demands the previous computation of the solution $T^{(0)}(t)$ of the averaged equations. Since we are only interested in the stationary solutions of the heat-balance equations rather than in transient thermal states, it is possible to find an expression of these solutions that is more manageable than Eq. (8). The transient thermal state relaxes exponentially to the stationary solution, which is a limit cycle of the nonlinear equations, technically speaking NoDy (); NoDy1 (). Therefore, the stationary solution is given by the solution of Eqs. (5) with the constant Jacobian matrix calculated at the steady-state temperatures, which we name $\bar{T}$.¹ (¹ The solution can also be derived as the limit of Eq. (8) in which $J(t)$ is replaced by its constant steady-state value.) This solution is simply NoDy1 ():

$$T^{(1)}(t)=\int_0^t \exp[\tau J]\cdot F(t-\tau)\,d\tau, \qquad (9)$$

with $J$ calculated at the point $\bar{T}$. Furthermore, the periodic stationary solution is obtained by extending the upper integration limit from $t$ to infinity:

$$T_\infty^{(1)}(t)=\int_0^\infty \exp[\tau J]\cdot F(t-\tau)\,d\tau. \qquad (10)$$

This function is indeed periodic, unlike the one defined by Eq. (9), so it is determined by its values for $t\in[0,T)$.
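A minimal one-node numerical sketch of this scheme (all parameter values are invented, not the paper's model): the averaged equation gives the steady state, Eq. (7) reduces to a scalar Jacobian, and the first-order linear equation with constant J is integrated alongside the full nonlinear equation. After the transient, the nonlinear temperature should stay close to the steady state plus the first-order correction:

```python
import math

C, R = 1.0, 1e-8            # heat capacity and radiation coefficient (invented)
Q0, Q1 = 100.0, 5.0         # mean heat input and oscillation amplitude
w = 2 * math.pi / 6000.0    # orbital angular frequency

Tbar = (Q0 / R) ** 0.25     # steady state of the averaged equation
J = -4 * R * Tbar ** 3 / C  # scalar Jacobian at the steady state

def F(t):                   # driving function (Qdot - <Qdot>)/C
    return Q1 * math.sin(w * t) / C

dt, T, T1 = 0.05, Tbar, 0.0
for n in range(int(5 * 6000 / dt)):     # five orbital periods, explicit Euler
    t = n * dt
    T += dt * ((Q0 + Q1 * math.sin(w * t)) - R * T ** 4) / C   # nonlinear Eq. (1)
    T1 += dt * (J * T1 + F(t))                                 # linear Eq. (5)

err = abs(T - (Tbar + T1))  # small once the transient has decayed
```

With the small driving amplitude chosen here (Q1/Q0 = 5%), the remaining discrepancy is of second order in the perturbation, consistent with the convergence of the expansion (3).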
Note that $T^{(1)}(t)\to T_\infty^{(1)}(t)$ as $t\to\infty$. For numerical computations, it can be convenient to express the integral from 0 to $\infty$ as an integral from 0 to $T$, taking advantage of the periodicity as follows:

$$\int_0^\infty \exp[\tau J]\cdot F(t-\tau)\,d\tau=\sum_{n=0}^\infty\int_{nT}^{(n+1)T}\exp[\tau J]\cdot F(t-\tau)\,d\tau=$$
$$\sum_{n=0}^\infty \exp(nTJ)\int_0^T \exp[\tau J]\cdot F(t-\tau)\,d\tau=\bigl[I-\exp(TJ)\bigr]^{-1}\int_0^T \exp[\tau J]\cdot F(t-\tau)\,d\tau$$

(the series converges because the eigenvalues of $J$ have negative real parts). In the last integral, the argument of $F$ can be transferred to the interval $[0,T)$:

$$\int_0^T \exp[\tau J]\cdot F(t-\tau)\,d\tau=\int_0^t \exp[\tau J]\cdot F(t-\tau)\,d\tau+\int_t^T \exp[\tau J]\cdot F(t-\tau+T)\,d\tau,$$

where $t\in[0,T)$. Note that the one-period shift in the argument of the last $F$ is necessary for the argument to be in $[0,T)$. Some remarks are in order. First of all, we have assumed that there is one asymptotic periodic solution of the nonlinear Eqs. (2) and only one (a unique limit cycle). Equivalently, we have assumed that the perturbation series converges. This assumption holds in an interval of the amplitude of heat input-variations NoDy1 (). Besides, for the integrals in Eq. (10) and the following equations to make sense, it is required that $\exp[\tau J]\to 0$ as $\tau\to\infty$. This is guaranteed, because the eigenvalues of $J$ have negative real parts, as is necessary for the steady-state to be stable. In fact, the eigenvalues are expected to be negative real numbers and $J$ is expected to be diagonalizable but both properties are not rigorously proven NoDy1 () (however, see Sect. II.2). If $J$ is diagonalizable, that is to say, there is a real matrix $P$ such that $P^{-1}JP$ is diagonal, then the calculation of the integrals is best carried out on the eigenvector basis, given by the matrix $P$. Using this basis, Eq. (10) is expressed as

$$[T_\infty^{(1)}]_i(t)=\sum_{a=1}^N P_{ia}\int_0^\infty \exp[\tau\lambda_a]\sum_{j=1}^N P^{-1}_{aj}F_j(t-\tau)\,d\tau, \qquad i=1,\ldots,N, \qquad (11)$$

where the first sum runs over the eigenvectors and their corresponding eigenvalues $\lambda_a$. Expression (11) allows us to compare the contribution of the different thermal modes.
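The folding of the infinite integral into a single period can be checked numerically in the scalar (single-mode) case, where J reduces to one eigenvalue λ < 0 (the values below are invented for the test):

```python
import math

lam, P = -0.8, 10.0                       # eigenvalue and period (invented)
F = lambda t: math.sin(2 * math.pi * t / P)

def quad(a, b, g, n=20000):               # simple midpoint quadrature rule
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

t = 3.7
# Truncated "infinite" integral: exp(40*P*lam) is utterly negligible here
full = quad(0.0, 40 * P, lambda u: math.exp(lam * u) * F(t - u))
# One-period integral folded with [1 - exp(P*lam)]^(-1)
one = quad(0.0, P, lambda u: math.exp(lam * u) * F(t - u))
folded = one / (1.0 - math.exp(lam * P))
gap = abs(full - folded)                  # agrees to quadrature accuracy
```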
In particular, for the fast modes, such that $|\lambda_a|$ is large, we can use Watson’s lemma pert-methods () to derive the asymptotic expansion:

$$\int_0^\infty \exp[\tau\lambda_a]F_a(t-\tau)\,d\tau=\frac{F_a(t)}{-\lambda_a}-\frac{\dot{F}_a(t)}{\lambda_a^2}+O\!\left(\frac{1}{\lambda_a^3}\right),$$

where $F_a=\sum_j P^{-1}_{aj}F_j$. When $|\lambda_a|$ is large, the first term suffices (unless $\dot{F}_a$ is also large, for some reason); and the first term is small, unless $F_a$ is large. In essence, if the fast modes are not driven strongly, they can be neglected in the sum over $a$ in Eq. (11).

#### ii.1.1 Second order perturbative equation

For second order in $\epsilon$, a straightforward calculation NoDy1 () yields the following linear equation:

$$\dot{T}^{(2)}=J(t)\cdot T^{(2)}+G(t), \qquad (12)$$

where $J(t)$ is the same Jacobian matrix that appears in the first-order Eq. (5) and

$$G_i=\sum_{j=1}^N\frac{6R_{ij}}{C_i}\bigl(T_j^{(0)}\bigr)^2\bigl(T_j^{(1)}\bigr)^2-\frac{6}{C_i}\Bigl(\sum_{j=1}^N R_{ij}+R_i\Bigr)\bigl(T_i^{(0)}\bigr)^2\bigl(T_i^{(1)}\bigr)^2, \qquad i=1,\ldots,N. \qquad (13)$$

The initial condition for Eq. (12) is $T^{(2)}(0)=0$, as for Eqs. (5). Therefore, the first-order and second-order equations have identical solutions in terms of their respective driving terms, although $G(t)$, Eqs. (13), is a known function of $t$ only when the lower order equations have been solved. The integral expression, Eqs. (10), of the stationary solution is also valid for $T^{(2)}$, after replacing $F$ with $G$ and using in Eq. (13) the stationary values $T^{(0)}=\bar{T}$ and $T^{(1)}=T_\infty^{(1)}$ (which make $G$ periodic). It is possible to carry on the perturbation method to higher orders, and it always amounts to solving the same linear equation with increasingly complicated driving terms that involve the solutions of the lower order equations. The example of Sect. IV shows that, in a typical case, $T^{(2)}$ is a small correction to $T^{(1)}$, and further corrections are not necessary. This confirms that the perturbation method is reliable for a realistic case.

### ii.2 Heuristic linearization

A linearization procedure frequently used in problems of radiation heat transfer therm-control_2 (); anal-sat_2 (); IDR () consists of using the algebraic identity

$$T_i^4-T_j^4=(T_i+T_j)(T_i^2+T_j^2)(T_i-T_j)$$

to define an effective conductance for the radiation coupling between nodes $i$ and $j$.
The equation
$$R_{ij}(T_i^4-T_j^4)=K^R_{ij}(T_i-T_j)$$
defines the effective conductance
$$K^R_{ij}=R_{ij}(T_i+T_j)(T_i^2+T_j^2)$$
for specified values of the node temperatures $T_i$ and $T_j$. For an orbiting spacecraft, the natural base values of the node temperatures are the ones that correspond to the steady-state solution of the averaged equations, namely, the temperatures $T_{(0)i}$. In the special case of radiation to the environment, the terms $R_iT_i^4$ can be replaced with linear terms $K^R_iT_i$ such that $K^R_i=R_iT_{(0)i}^3$. The resulting linear equations are:
$$C_i\dot T_i=\dot Q_i(t)-\sum_{j=1}^N(K_{ij}+K^R_{ij})(T_i-T_j)-K^R_iT_i,\quad i=1,\dots,N.\qquad(14)$$
These equations have only conduction couplings, so they are a discretization of the partial differential equation of heat conduction. As a linear system of ODEs, the standard form is
$$\dot T_i=\sum_j J_{ij}T_j+\frac{\dot Q_i(t)}{C_i},\quad i=1,\dots,N,\qquad(15)$$
where $J$ (the Jacobian matrix) is now given by:
$$J_{ij}=C_i^{-1}(K_{ij}+K^R_{ij}),\quad\text{if }i\neq j,\qquad(16)$$
$$J_{ii}=C_i^{-1}\Bigl[-\sum_{k=1}^N(K_{ik}+K^R_{ik})-K^R_i\Bigr].\qquad(17)$$
The linear system of Eqs. (15) can be solved in the standard way, yielding:
$$T(t)=e^{tJ}\Bigl(T(0)+\int_0^t e^{-\tau J}\cdot q(\tau)\,d\tau\Bigr),\qquad(18)$$
where we have introduced the vector $q$, with components $q_i=\dot Q_i/C_i$. We can also express the solution in terms of the driving function $F=q-\langle q\rangle$:
$$T(t)=e^{tJ}T(0)+\int_0^t e^{(t-\tau)J}\cdot\bigl(F(\tau)+\langle q\rangle\bigr)\,d\tau\qquad(19)$$
$$=e^{tJ}T(0)+\int_0^t e^{(t-\tau)J}\cdot F(\tau)\,d\tau+J^{-1}\bigl(e^{tJ}-I\bigr)\langle q\rangle.\qquad(20)$$
For large $t$, this solution tends to the periodic stationary solution
$$T_\infty(t)=\int_0^\infty e^{\tau J}\cdot F(t-\tau)\,d\tau-J^{-1}\langle q\rangle,\qquad(21)$$
assuming that $e^{tJ}\to0$ as $t\to\infty$. This is a consequence of the structure of $J$, as in the preceding section. In the present case, the eigenvalues of $J$, beyond having negative real parts, are actually negative real numbers, as we show below. The total conductance matrix, with entries $K_{ij}+K^R_{ij}$, is symmetric, but this does not imply that $J$ is symmetric. Nevertheless, if we define $C^{1/2}=\operatorname{diag}(\sqrt{C_i})$, the matrix $C^{1/2}JC^{-1/2}$ is symmetric, because its off-diagonal matrix elements are:
$$(C^{1/2}JC^{-1/2})_{ij}=\frac{K_{ij}+K^R_{ij}}{\sqrt{C_iC_j}},\quad i\neq j.$$
Hence, the matrix $J$, being similar to $C^{1/2}JC^{-1/2}$, has real eigenvalues. Furthermore, $C^{1/2}JC^{-1/2}$ is diagonalized by an orthogonal transformation; that is to say, there is an orthogonal matrix $O$ such that
$$O^{\mathsf T}\cdot(C^{1/2}JC^{-1/2})\cdot O=(C^{-1/2}O)^{-1}\cdot J\cdot(C^{-1/2}O)$$
is diagonal.
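Both ingredients of this section, the effective-conductance identity and the symmetrizing similarity transformation, can be checked numerically. The numbers below (coupling constants, temperatures, heat capacities) are made-up illustrative values, not the paper's data:

```python
import numpy as np

# (1) The linearization identity behind the radiation conductance:
#     R_ij (Ti^4 - Tj^4) = K^R_ij (Ti - Tj),
#     with K^R_ij = R_ij (Ti + Tj)(Ti^2 + Tj^2). Exact, not approximate.
Rij = 5.1e-10            # hypothetical radiation coupling, W/K^4
Ti, Tj = 290.0, 260.0    # hypothetical base node temperatures, K
K_rad = Rij * (Ti + Tj) * (Ti**2 + Tj**2)
assert abs(Rij*(Ti**4 - Tj**4) - K_rad*(Ti - Tj)) < 1e-9

# (2) For J_ij = (K_ij + K^R_ij)/C_i with a symmetric conductance matrix,
#     C^{1/2} J C^{-1/2} is symmetric, so J has real eigenvalues.
C = np.array([300.0, 500.0])                  # heat capacities, J/K
Ktot = np.array([[0.0, 0.35], [0.35, 0.0]])   # symmetric total conductances, W/K
J = (Ktot - np.diag(Ktot.sum(axis=1))) / C[:, None]
S = np.diag(np.sqrt(C)) @ J @ np.diag(1/np.sqrt(C))
assert np.allclose(S, S.T)
assert np.all(np.abs(np.linalg.eigvals(J).imag) < 1e-12)
```

Point (2) is exactly the argument used above to conclude that the thermal relaxation rates are real even though $J$ itself is not symmetric.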
Therefore, the thermal modes are actually normal; that is to say, the modes, which are the eigenvectors of $J$ and hence the columns of the matrix $P=C^{-1/2}O$, are related to the eigenvectors of $C^{1/2}JC^{-1/2}$, which are normal and are given by the columns of $O$. Alternatively, one can say that the eigenvectors of $J$ are normal in the "metric" defined by $C$; namely,
$$\sum_{i=1}^N C_iP_{ia}P_{ib}=\delta_{ab},$$
which can be written in matrix form as $P^{\mathsf T}\cdot C\cdot P=I$. Naturally, the orthogonality of modes greatly simplifies some computations. Furthermore, the symmetry of the conductance matrix implies that the sum in Eq. (14) can be written as the action of a graph Laplacian [Chung] on the temperature vector. Naturally, the graph is formed by the nodes and the linking conductances. A graph Laplacian is a discretization of the ordinary Laplacian and is conventionally defined with the sign that makes it positive semidefinite. The zero eigenvalue corresponds to a constant function, that is, a constant temperature in the present case. A vector with equal components, say, all equal to one, is the positive (Perron) eigenvector of the matrix. With more generality, the Laplacian of a graph can be defined as a symmetric matrix with off-diagonal entries that are negative if the nodes are connected and null if they are not [graph_eigenvec]. This definition does not constrain the diagonal entries and, therefore, does not imply that a graph Laplacian is positive semidefinite. It can be made positive definite (or just semidefinite) by adding to it a multiple of the identity matrix, which does not alter the eigenvectors. Of course, the eigenvector corresponding to the smallest eigenvalue does not have to be constant, but the Perron-Frobenius theorem [Ber-Plem] tells us that it is positive. By this general definition of a graph Laplacian, the matrix $-C\cdot J$ is a different Laplacian for the same graph, and Eqs. (15) contain the action of this Laplacian on the temperature vector $T$.
Notice that this general definition of a graph Laplacian is connected with the definition of an $M$-matrix [Ber-Plem] and, actually, a symmetric $M$-matrix is a graph Laplacian. If such a matrix is positive definite, then it is equivalent to a Stieltjes matrix, namely, a symmetric nonsingular $M$-matrix [Ber-Plem]. The general Jacobian obtained in Sect. II.1 is also such that $-J$ and $-C\cdot J$ are both nonsingular $M$-matrices, but they need not be symmetric. To investigate the accuracy of the approximation of the radiation terms by conduction terms, let us compare the periodic solution given by Eq. (21) with the first-order perturbative solution found in Sect. II.1, namely, $T_{(0)}+T^{(1)}_\infty$. Of course, the Jacobian matrices in the respective integrals differ, as do the constant temperature vectors added to the integrals, namely, $-J^{-1}\langle q\rangle$ or $T_{(0)}$. While $T_{(0)}$ corresponds to the authentic steady state of the nonlinear averaged equations, $-J^{-1}\langle q\rangle$ corresponds to the steady state of Eqs. (14) after averaging, which is a state without significance, since we have already used the set of temperatures of the authentic steady state to define the radiation conductances in Eq. (14). Therefore, the only sensible linear solution is the perturbative solution $T_{(0)}+T^{(1)}_\infty$, even if we replace the Jacobian matrix given by Eqs. (6) and (7) with the one given by Eqs. (16) and (17). In our context, the notion of radiation conductance actually follows from the symmetry of the matrices involved. Therefore, the most natural definition of the radiation conductance probably is the symmetrization of the radiation term in Eq. (6). This symmetrization has been tested by Krishnaprakas [Kris2], considering the steady-state problem for models of various sizes and working with various resolution algorithms. He found that the effect of symmetrization is not appreciable. To estimate the effect of the antisymmetric part of the matrix $C^{1/2}JC^{-1/2}$, namely, $\delta A$, on the eigenvalue problem for the Jacobian, we proceed as follows.
We formulate this eigenvalue problem in terms of the matrix $C^{1/2}JC^{-1/2}$, so that it is an eigenvalue problem for a symmetric matrix perturbed by a small antisymmetric part. This problem is well conditioned, because the eigenvectors of the symmetric matrix (the columns of the matrix $O$) are orthogonal. In particular, the perturbed eigenvalues are still real. Furthermore, the first-order perturbation formula for the eigenvalue associated with an eigenvector $e_a$ [pert-methods] yields:
$$\delta\lambda_a=\sum_{i,j=1}^N\delta A_{ij}\,e_{ai}e_{aj}=0,$$
vanishing because the perturbation matrix is antisymmetric. So the nonvanishing perturbative corrections begin at the second order in the perturbation matrix and, in this sense, they are especially small.

## III Fourier analysis of the periodic solution

Given that $F$ is a periodic function, it can be expanded in a Fourier series. To derive this series, let us first introduce the Fourier series of $F$,
$$F(t)=\sum_{m=-\infty}^{\infty}\hat F(m)\,e^{2\pi imt/T}.$$
Inserting this series in the integral of Eq. (10) and integrating term by term, we obtain the Fourier series for $T^{(1)}_\infty$. Alternatively, we can substitute the Fourier series of both $T^{(1)}$ and $F$ into Eqs. (5), where $J$ is taken to be constant; then we can solve for the Fourier coefficients of $T^{(1)}$. The result is
$$T^{(1)}_\infty(t)=\sum_{m=-\infty}^{\infty}e^{2\pi imt/T}\,(2\pi imI/T-J)^{-1}\cdot\hat F(m).\qquad(22)$$
The Fourier coefficients are obtained by integration:
$$\hat F(m)=\frac{1}{T}\int_0^T F(t)\,e^{-2\pi imt/T}\,dt.\qquad(23)$$
Given that $F$ is a real function,
$$\hat F(-m)=\hat F^*(m).\qquad(24)$$
Furthermore, $\langle F\rangle=0$ implies
$$\hat F(0)=0.\qquad(25)$$
So $F$ is defined by the sequence of Fourier coefficients $\hat F(m)$ for positive $m$. This sequence must fulfill the requirement that $\hat F(m)\to0$ as $m\to\infty$, so a limited number of the initial coefficients may suffice. Actually, for numerical work, Eq. (23) can be conveniently replaced by the discrete Fourier transform
$$\hat F(m)=\frac{1}{n}\sum_{k=0}^{n-1}F(kT/n)\,e^{-2\pi imk/n},\qquad(26)$$
which only requires a sampling of the values $F(kT/n)$ for $k=0,\dots,n-1$, but also only defines a finite number of independent Fourier coefficients, because $\hat F(m+n)=\hat F(m)$.
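The mode-by-mode solve of Eq. (22) is straightforward to implement with a discrete Fourier transform. A minimal sketch on a toy two-node system follows; the matrix `J`, the period and the driving term are made-up values, not the satellite model of Sect. IV:

```python
import numpy as np

# Solve dT/dt = J.T + F(t) for the periodic stationary solution,
# mode by mode, as in Eq. (22): T_hat(m) = (2*pi*i*m*I/T - J)^{-1} . F_hat(m)
J = np.array([[-2.0, 0.5],
              [0.3, -1.0]])            # made-up Jacobian, eigenvalues with Re < 0
T_per = 1.0
n = 64                                 # samples per period
t = np.arange(n) * T_per / n
F = np.stack([np.cos(2*np.pi*t/T_per),
              np.sin(4*np.pi*t/T_per)])   # zero-mean driving, shape (2, n)

F_hat = np.fft.fft(F, axis=1) / n      # discrete Fourier coefficients, Eq. (26)
freqs = np.fft.fftfreq(n, d=1.0/n)     # signed integer mode numbers m
T_hat = np.zeros_like(F_hat)
for k, m in enumerate(freqs):
    A = 2j*np.pi*m/T_per * np.eye(2) - J
    T_hat[:, k] = np.linalg.solve(A, F_hat[:, k])

T_inf = np.real(np.fft.ifft(T_hat * n, axis=1))   # sampled periodic solution
```

Because $\hat F(0)=0$ and $J$ is nonsingular, the $m=0$ mode poses no problem, and conjugate symmetry of `T_hat` guarantees that the recovered `T_inf` is real.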
Notice that we usually have available just a sampling of the heat inputs at regular time intervals, rather than the analytical form of $F(t)$. To calculate the exact number of independent Fourier coefficients provided by Eq. (26), we must take into account Eqs. (24) and (25). If $n$ is an odd number, the independent Fourier coefficients are the ones with $1\le m\le(n-1)/2$; that is to say, there are $n-1$ independent real numbers. If $n$ is even, the independent Fourier coefficients are the ones with $1\le m\le n/2$, and
$$\hat F(n/2)=\frac{1}{n}\sum_{k=0}^{n-1}(-1)^k F(kT/n)$$
is real, so there are $n-1$ independent real numbers as well. For definiteness, let $n$ be odd. Then, we can express $F$ as
$$F(t)=2\,\mathrm{Re}\Biggl[\sum_{m=1}^{(n-1)/2}\hat F(m)\,e^{2\pi imt/T}\Biggr].$$
Of course, the values of $F$ at $t=kT/n$ are the sampled values employed in Eq. (26), but the expression is valid for any $t$ and constitutes an interpolation of the sampled values. Naturally, the higher the sampling frequency $n/T$, the more independent Fourier coefficients we have and the more accurate the representation of $F$ is. As is well known, the Fourier series of a function that is piecewise smooth converges to the function, except at its points of discontinuity, where it converges to the arithmetic mean of the two one-sided limits [Fourier]. However, the convergence is not uniform, so the partial sums oscillate about the true value of the function near each point of discontinuity and "overshoot" the two one-sided limits in opposite directions. This overshooting is known as the Gibbs phenomenon and, in our case, produces typical errors near the discontinuities of the driving function $F$. These discontinuities are due to the sudden obstructions of the radiation on parts of the spacecraft that occur at certain orbital positions, for example, when the Sun is eclipsed. (Strictly speaking, the function is always continuous, but it undergoes sharp variations at some times. These sharp variations can be considered as discontinuities, especially if the function is sampled.)
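The coefficient counting above, and the trigonometric interpolation formula for odd $n$, can be verified directly on made-up samples (a small sketch, with hypothetical random data):

```python
import numpy as np

# For a real, zero-mean sampled signal of odd length n, the DFT is fully
# determined by the coefficients m = 1..(n-1)/2: conjugate symmetry, Eq. (24),
# plus F_hat(0) = 0, Eq. (25), i.e. n-1 independent real numbers.
n = 7
rng = np.random.default_rng(0)
f = rng.standard_normal(n)
f -= f.mean()                      # enforce zero mean -> F_hat(0) = 0
F_hat = np.fft.fft(f) / n          # coefficients as in Eq. (26)

assert abs(F_hat[0]) < 1e-12                                          # Eq. (25)
assert np.allclose(F_hat[n-1:n//2:-1], np.conj(F_hat[1:(n-1)//2+1]))  # Eq. (24)

# Reconstruct f from only the (n-1)/2 complex coefficients, as in the
# interpolation formula F(t) = 2 Re[ sum_m F_hat(m) exp(2*pi*i*m*t/T) ]:
t = np.arange(n) / n
rec = 2*np.real(sum(F_hat[m]*np.exp(2j*np.pi*m*t) for m in range(1, (n-1)//2+1)))
assert np.allclose(rec, f)
```

The same reconstruction evaluated at arbitrary `t` (not just the sample points) gives the band-limited interpolation of the samples discussed above.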
Section IV.2 shows that the Gibbs phenomenon at eclipse points can be responsible for the largest part of the error of the linear method when the discrete Fourier transform is used. The approximation of $T^{(1)}_\infty$ provided by the samples of $F$ is, of course,
$$T^{(1)}_\infty(t)=2\,\mathrm{Re}\Biggl[\sum_{m=1}^{(n-1)/2}e^{2\pi imt/T}\,(2\pi imI/T-J)^{-1}\cdot\hat F(m)\Biggr],\qquad(27)$$
and it is valid for any $t$. However, if we are only interested in $T^{(1)}_\infty$ at $t=kT/n$, we can compute these values with the inverse discrete Fourier transform
$$T^{(1)}_\infty(kT/n)=\sum_{m=0}^{n-1}e^{2\pi imk/n}\Bigl(2\pi i\bigl[\operatorname{mod}\bigl(m+\tfrac{n-1}{2},n\bigr)-\tfrac{n-1}{2}\bigr]I/T-J\Bigr)^{-1}\cdot\hat F(m),\qquad(28)$$
where $\hat F(m)=\hat F(m-n)$ for $m>(n-1)/2$, and where $\operatorname{mod}(a,n)$ gives the remainder of the integer division of $a$ by $n$. This inverse discrete Fourier transform can be more convenient for a fast numerical computation. Regarding computational convenience, the discrete Fourier transform, be it direct or inverse, is best performed with a fast Fourier transform (FFT) algorithm. The classic FFT algorithm requires $n$ to be a power of two [Num_rec; Gol-Van]; in particular, it has to be even. The function $T^{(1)}_\infty$, computed by Fourier analysis from samples of $F$, is to be compared with the one computed by a numerical approximation of the integral formula, Eq. (10), in terms of the same samples. Naturally, we can use, instead of the integral over $[0,\infty)$, the integral over $[0,T]$ below Eq. (10). This integral can be computed from the samples of $F$ by an interpolation formula, say the trapezoidal rule. It is not easy to decide whether this procedure is more efficient than Fourier transforms. Considering that the substitution of the continuous Fourier transform, Eq. (23), by the discrete transform, Eq. (26), is equivalent to computing the former with the trapezoidal rule, the integral formula may seem more direct. In particular, this formula allows us to select the values of $t$ for which we compute $T^{(1)}_\infty(t)$ independently of the sampling frequency, so we can choose just a few distinguished orbital positions and avoid the computation of all the integrals (one is removed by the condition $\langle F\rangle=0$). Note that the computation of all of the independent $\hat F(m)$ with Eq.
(26) is equivalent to the computation of the corresponding integrals. However, the efficiency of the FFT reduces the natural operation count of this computation, of order $n^2$, to order $n\log n$; so its use can be advantageous, nevertheless. It goes without saying that the second-order perturbative contribution to the stationary solution is given by the right-hand side of Eq. (27) with the Fourier coefficients of $F$ replaced by the Fourier coefficients of the function $G$ defined in Sect. II.1.1.

## IV Ten-node model of a Moon-orbiting satellite

To test the previously explained methods, we construct a small thermal model of a simple spacecraft, namely, a ten-node model of a Moon-orbiting satellite. Our ten-node model supports a basic thermal structure and is simple enough to allow one to explicitly display the main mathematical entities, e.g., the matrices $(K_{ij})$, $(R_{ij})$ and $J$. The satellite consists of a rectangular parallelepiped (a cuboid) of square base plus a small cylinder on one of its sides that simulates an observation instrument, as represented in Fig. 1. In addition, at a height of two thirds of the total height, there is an inner tray with the electronic equipment. The cylinder has a length of 0.1 m and a radius of 0.04 m. The satellite's frame is made of aluminum alloy, using plates 1 mm thick, except the bottom plate, which is 2 mm thick. This plate plays the role of a radiator, and its outer surface is painted white to have high solar reflectance. The cylinder is made of the same aluminum alloy, as is the tray; they are 0.5 mm and 2 mm thick, respectively. The sides of the satellite, except the one with the instrument, are covered with solar cells, which increase the sides' thickness to 2.25 mm. The thermal model of the satellite assigns one node to each face of the cuboid, one more to the cylinder and another to the tray, that is, eight nodes altogether.
Furthermore, to conveniently split the total heat capacitance of the electronic equipment, it is convenient to add two extra nodes with (large) heat capacitances but with no surface that could exchange heat by radiation. Nodes of this type are called "non-geometrical nodes". In the present case, they represent two boxes with equipment placed above and below the tray, respectively. We order the ten nodes as shown in Fig. 1. The lower box (node 10) is connected to the radiator by a thermal strap. Given the satellite's structure and assuming appropriate values of the specific heat capacities, it is possible to compute the capacitances $C_i$, with the result given in Table 1. Using the value of the aluminum-alloy heat conductivity and assuming perfect contact between plates, we compute the conduction coupling constants between the corresponding nodes. The remaining conduction coupling constants are given reasonable values, shown in Eq. (29). The computation of the radiation coupling constants $R_{ij}$ and $R_i$, and indeed the computation of the external radiation heat inputs, requires a detailed radiative model of the satellite, consisting of the geometrical view factors and the detailed thermo-optical properties of all surfaces. This radiative model allows us to compute the respective absorption factors [Kreith]. The thermo-optical properties of the surfaces are assumed to be as realistic as possible, given the simplicity of the thermal model. All radiation reflection is assumed to be diffuse, as is common for many types of surfaces. The inner surfaces are painted black and have high emissivity, to favor the uniformization of the interior temperature. The outer surfaces are of three types. The three sides covered with solar cells also have high emissivity, to favor the cooling of the solar cells. On the other hand, they have high solar absorptivity, 0.75. Of this 0.75, 0.18 is processed into electricity and the remaining 0.57 dissipates as heat in the solar cells.
The top surface, the surface with the cylinder, and the cylinder itself (its two sides) have low emissivity and low solar absorptivity, which are chosen to simulate the effect of a multilayer insulator. In contrast, the bottom surface simulates a radiator, with low solar absorptivity and high emissivity (like an optical solar reflector). All of these thermo-optical properties are summarized in Fig. 2. For the computation of the corresponding absorption factors, we employ the ray-tracing Monte-Carlo simulation method provided by ESARAD (ESATAN's radiation module) [ESATAN]. Taking into account the above information, one obtains the following conduction (in W/K) and radiation (in W/K$^4$) matrices:
(Kij) = 110⎝03.4705.642.862.004.500003.4703.4701.671.333.503.000003.4705.642.862.004.500005.6405.6402.862.004.500002.861.672.862.86000003.002.001.332.002.000000004.503.504.504.5000004.506.0003.00000000000000004.5000000003.0006.00000⎠, (29)
$$(R_{ij})=10^{-10}\begin{pmatrix}0&5.06&4.63&5.05&3.68&2.71&6.39&0&0&0\\5.06&0&5.05&4.63&3.68&2.70&6.39&0.13&0&0\\4.63&5.05&0&5.06&3.69&2.71&6.39&0&0&0\\5.05&4.63&5.06&0&3.69&2.70&6.38&0&0&0\\3.68&3.68&3.69&3.69&0&0&3.57&0&0&0\\2.71&2.70&2.71&2.70&0&0&7.19&0&0&0\\6.39&6.39&6.39&6.38&3.57&7.19&0&0&0&0\\0&0.13&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0\end{pmatrix},\qquad(30)$$
$$(R_i)=10^{-9}\times(2.86,\,0.32,\,2.86,\,2.86,\,1.81,\,0.23,\,0,\,0.23,\,0,\,0).\qquad(31)$$
The satellite's thermal characteristics are defined by the above data set, but the radiation heat exchange depends on the nodal temperatures, which in turn depend on the heat input. As explained in Sect. II.1, the appropriate set of nodal temperatures corresponds to the steady state for averaged heat inputs, given by the algebraic equation that results from setting $\dot T=0$ in Eq. (4). Since we need the external heat inputs and, therefore, the orbit, we proceed to define the orbit characteristics. We choose a circular equatorial orbit above the Moon's surface.
The radiation heat input to the satellite consists, on the one hand, of the solar irradiation and the Moon's albedo and, on the other hand, of the Moon's constant IR radiation. We take 0.12 for the Moon's mean albedo and 270 K for the black-body equivalent temperature of the Moon. There is also heat produced by the dissipation of electrical power in the equipment (nodes 9 and 10). For the sake of simplicity, the dissipation rate is assumed to be constant, equal to the mean electrical power generated in an orbit. In a part of the orbit, the Moon eclipses the Sun, so the satellite receives no direct sunlight or albedo, although there is always IR radiation from the Moon. The satellite is stabilized such that the cylinder (the "observation instrument") always points to the Moon and the longer edges are perpendicular to the orbit. The radiation heat input can be computed by taking into account the given orbital characteristics and the satellite's thermo-optical characteristics, in particular, the absorption factors. It has been computed with ESARAD, taking 111 positions on the orbit, that is, at intervals of one minute. To determine the hot and cold cases of the orbit, we compute the total heat load on the satellite for each position in the orbit, finding a maximum of 90.59 W at position 14 and a minimum of 18.65 W at any position in the eclipse, during which all the heat loads stay constant. The solution of the corresponding steady-state problems, at position 14 and at a position in the eclipse, yields the two extreme sets of nodal temperatures (hot and cold) for the given node order. The results in Sect. IV.1 show that the periodic thermal state does not reach these extreme temperatures, which could endanger the performance of the satellite.

### IV.1 Jacobian matrix and periodic solution

We compute the averaged heat inputs $\langle\dot Q_i\rangle$ and substitute them, together with the data set, in Eq. (4) to find the steady-state temperatures. The results are given in Table 1. Then, according to Eqs.
(6) and (7), the Jacobian matrix is (in s$^{-1}$)
J=10−3⎝−6.991.180.121.830.950.671.520002.64−12.932.640.261.331.062.752.04000.121.17−6.991.830.950.671.520001.830.121.83−7.640.950.671.520001.611.011.611.61−8.2600.16001.532.271.592.272.270−9.200.640002.562.062.562.560.150.31−15.6002.293.0509.4300000−10.05000000000.560−0.56000000.2100.4300−0.64⎠.
By inspection, one can check that it has nonnegative off-diagonal and negative diagonal elements; that is to say, $-J$ is a $Z$-matrix. It is also diagonally dominant, namely, $|J_{ii}|\ge\sum_{j\neq i}J_{ij}$. The eigenvalues of $J$ are
$$-10^{-4}\times\{182.20,\,154.30,\,103.40,\,98.03,\,86.12,\,71.09,\,71.04,\,14.90,\,5.70,\,1.72\}\ \mathrm{s^{-1}}.$$
Their inverses (in absolute value) give us the typical relaxation times of the corresponding thermal modes. Thus, we deduce that the relaxation time of the fastest mode is about 55 s, whereas the relaxation time of the slowest one is about 5800 s. The latter time is comparable to the orbital period. The eigenvalues are real numbers and, furthermore, $J$ is diagonalizable, because the eigenvalues are different. Both properties also follow from $J$ being almost symmetric: its antisymmetric part, $(J-J^{\mathsf T})/2$, is relatively small with respect to $J$, where the matrix norm is the Frobenius norm (other standard matrix norms yield similar values). Therefore, the notion of "radiation conductance" (Sect. II.2) is appropriate in this case, as concerns its use in the linear equations. The thermal modes are almost normal; namely, the eigenvector matrix $P$ satisfies $P^{\mathsf T}\cdot C\cdot P\approx I$ with a small error. The most interesting eigenvector of $J$ is, of course, the positive (Perron) eigenvector, which corresponds to the slowest mode. The normalized positive eigenvector is
$$(0.259,\,0.276,\,0.259,\,0.257,\,0.275,\,0.267,\,0.327,\,0.264,\,0.471,\,0.423)\ \mathrm{K}.$$
Note that the temperature increments are of a similar magnitude, except the ones of node 7 and, especially, nodes 9 and 10, which are associated, respectively, with the tray and with the boxes of electronic equipment.
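The structural properties just cited (sign pattern, diagonal dominance, stability, near-symmetry) are easy to verify programmatically. Below is a sketch on a made-up 3×3 Jacobian, not the 10×10 matrix above:

```python
import numpy as np

# Checks on a toy (hypothetical) Jacobian of the kind described:
# nonnegative off-diagonal entries, negative diagonal, diagonal dominance,
# stable eigenvalues, and a small antisymmetric part (Frobenius norm).
J = np.array([[-3.0, 1.0, 0.5],
              [ 1.1, -2.5, 0.4],
              [ 0.5, 0.5, -2.0]])

off = J - np.diag(np.diag(J))
assert np.all(off >= 0) and np.all(np.diag(J) < 0)   # -J has the Z-matrix pattern
assert np.all(-np.diag(J) >= off.sum(axis=1))        # diagonal dominance

eig = np.linalg.eigvals(J)
assert np.all(np.real(eig) < 0)                      # stability (Gershgorin)

asym = 0.5 * (J - J.T)
ratio = np.linalg.norm(asym) / np.linalg.norm(J)     # relative antisymmetric part
assert ratio < 0.1                                   # "almost symmetric" here
```

By Gershgorin's theorem, the sign pattern plus diagonal dominance already place all eigenvalues in the left half-plane, which is the stability argument used throughout this section.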
The next mode, corresponding to the eigenvalue $-5.70\times10^{-4}\ \mathrm{s^{-1}}$, has one negative component (the ninth), and the remaining modes have more than one. To calculate $T^{(1)}_\infty$, we choose the Fourier series of Eq. (27) or, rather, the inverse discrete Fourier transform of Eq. (28), which can be computed with an FFT algorithm. The Fourier coefficients $\hat F(m)$ can also be computed with the FFT, according to Eq. (26). Once the vector $T^{(1)}_\infty$ at the 111 positions is available, the set of nodal temperatures corresponding to the first-order perturbative solution is plotted in Fig. 5. A measure of the accuracy of this perturbative calculation is given by the second-order calculation in the next section. The truncation of the Fourier series imposed by the sampling of $F$ is also a source of error, unrelated to perturbation theory. The piecewise smoothness of the function suggests that this error is small (but see Sect. IV.2). It is also interesting to see whether the first-order perturbative calculation is affected by neglecting the fastest modes: according to the analysis at the end of Sect. II.1, these modes are expected to contribute in proportion to their relaxation times. The fastest mode relaxes in about 55 s, a short but non-negligible time. As a consequence, its contribution to $T^{(1)}_\infty$, which we find to have a maximum magnitude of 0.8 K, is small but non-negligible. But we can deduce that still faster modes, which would appear in a thermal model of the satellite with more nodes, are hardly necessary. From the engineering standpoint, note that this satellite thermal model is successful, insofar as it predicts that all nodal temperatures stay within adequate ranges. In particular, nodes 9 and 10, corresponding to the boxes with electronic equipment, stay within an adequate range, up to 23 °C. These nodes are inner nodes with large thermal capacity and, hence, are protected against the larger changes in the external heat inputs.
In contrast, the outer nodes are very exposed and undergo considerable variations in temperature, with especially sharp changes at the beginning and end of the eclipse.

#### IV.1.1 Second-order correction

According to Sect. II.1.1, the second-order perturbative correction to the periodic stationary solution is obtained by the same procedure as that for $T^{(1)}_\infty$, but using a different driving function, $G$, which is computed from $T_{(0)}$ and from $T^{(1)}_\infty$ itself. The computations are straightforward and yield the correction plotted in Fig. 6. This correction is always negative, because the negative term in the expression for $G_i$, Eq. (13), dominates over the positive term. Equation (10) for $T^{(1)}_\infty$ and the corresponding equation for $T^{(2)}_\infty$ are both linear, so
# Survey Weights The survey’s individual response files contain respondent weights calculated by Facebook. These weights are also used to produce our public contingency tables and the geographic aggregates in the COVIDcast Epidata API. Facebook has developed a User Guide for the CTIS Weights (updated May 2022). This manual explains the weight methodology, gives examples of how to use the weights when calculating estimates, and states the known limitations of the weights. We also have separate information about the survey’s limitations that affect what conclusions can be drawn from the survey data.
# SCIENTIFIC PROGRAMS AND ACTIVITIES March 7, 2014 ## Toronto Probability Seminar 2007-08 held at the Fields Institute Organizers Bálint Virág , Benedek Valkó University of Toronto, Mathematics and Statistics For questions, scheduling, or to be added to the mailing list, contact the organizers at: probsem-at-math-dot-toronto-dot-edu 2008 Speaker and Talk Title June 16 4:10-5 Fields Library Eckhard Schlemm, FU Berlin (visiting U of T) will present a talk about his master's thesis (Diplomarbeit) on First-passage percolation on width-two stretches Tuesday, April 8, 2008 4:30 p.m. 215 Huron, Room 1018 *Note Unusual Time and Place* Mate Matolcsi (Renyi Institute of Mathematics, Hungary) The real polarization problem We study a conjecture of Benitez, Sarantopoulos and Tonge concerning a lower bound on the norm of products of real linear functionals. The conjecture is that the lower bound is attained if and only if the vectors corresponding to the functionals are orthogonal. There are several approaches to the problem, analytic (Revesz, Pappas, 2004), geometric (Matolcsi, 2005), and probabilistic (Frenkel, 2007), yielding partial results. The probabilistic approach of Frenkel, 2007, deduces a lower bound from the following theorem: If X1, ... , Xn are jointly Gaussian random variables with zero expectation, then E(X1^2 ... Xn^2) >= EX1^2 ... EXn^2. Equality holds if and only if they are independent or at least one of them is almost surely zero. A similar result for higher moments would imply the conjecture. Monday, March 24, 2008 4:00 pm, Stewart Library Fields Lincoln Chayes, UCLA On the absence of ferromagnetism in typical 2D ferromagnets. Monday, March 17, 2008 10:10 am, 215 Huron St. B. Valko and B. Virag, University of Toronto The Brownian Carousel In the fourth and final part of this epic trilogy we explain some details of the proof of the result that connects random matrices to hyperbolic Brownian motion. Monday, March 10, 2008 4:00 pm, Stewart Library Fields B. Valko and B.
Virag, University of Toronto The Brownian Carousel, part 2b. The eigenvalues of a random Hermitian matrix form a random set of points on the real line. As the matrix size converges to infinity, the eigenvalues, after appropriate scaling, converge to a point process. The possible limit processes, called Sine-beta processes, are fundamental objects of probability theory. They are famous for their conjectured relationship to the Riemann zeta zeros, Dirichlet eigenvalues of Euclidean domains, random Young tableaux, and non-colliding walks. This series of informal talks is about a new description of these processes in terms of Brownian motion in the hyperbolic plane, called the Brownian carousel. We plan to have three lectures: 1. Introduction to random matrix eigenvalues, definition and basic properties of the Brownian Carousel 2. Computing with the Brownian carousel; continuity, phase transitions, Dyson's predictions 3. Convergence of finite random matrix eigenvalues to the Brownian carousel Monday, March 3, 2008 4:00 pm, Room TBA B. Valko and B. Virag, University of Toronto The Brownian Carousel, part 2 Monday, Feb. 25, 2008 4:30 pm, Stewart Library Fields B. Valko and B. Virag, University of Toronto The Brownian Carousel The eigenvalues of a random Hermitian matrix form a random set of points on the real line. As the matrix size converges to infinity, the eigenvalues, after appropriate scaling, converge to a point process. The possible limit processes, called Sine-beta processes, are fundamental objects of probability theory. They are famous for their conjectured relationship to the Riemann zeta zeros, Dirichlet eigenvalues of Euclidean domains, random Young tableaux, and non-colliding walks. This series of informal talks is about a new description of these processes in terms of Brownian motion in the hyperbolic plane, called the Brownian carousel. We plan to have three lectures: 1.
Introduction to random matrix eigenvalues, definition and basic properties of the Brownian Carousel 2. Computing with the Brownian carousel; continuity, phase transitions, Dyson's predictions 3. Convergence of finite random matrix eigenvalues to the Brownian carousel Monday, Feb. 11, 2008 4:30 pm, Stewart Library Fields Brian Rider (University of Colorado at Boulder) Diffusion at RMT's hard edge The RMT hard edge refers to the behavior of the minimal eigenvalues of a (natural) one-parameter generalization of Gaussian sample covariance matrices. We show that, in the large dimensional limit, the law of these points is shared by that of the spectrum of a certain random second-order differential operator. The latter may be viewed as the generator of a Brownian motion with white noise drift. By a Riccati transform, we get a second diffusion description of the hard edge in terms of hitting times. This is joint work with J. Ramirez and should be compared with slightly less recent results of J. Ramirez, B. Virag, and myself on the RMT "soft" edge. Monday, Feb. 4, 2008 4:10pm, Stewart Library Fields Omer Angel (University of Toronto) Monday, Dec. 10, 2007 4:10pm, Stewart Library Fields James Mingo (Queen's University) Free Cumulants: First and Second Order Twenty years ago Voiculescu showed that the limiting distribution of sums and products of some ensembles of random matrices could be computed using some algebraic methods of "free" probability. At the core of free probability are the "free" cumulants. In recent years I have developed with Roland Speicher a theory of second order cumulants to do for global fluctuations what Voiculescu's theory did for limiting distributions. Monday, Dec. 3, 2007 4:10pm, Stewart Library Fields Omer Angel (University of Toronto) Minimal Spanning Trees revisited Given a graph with weighted edges it is easy to find the spanning tree with minimal total weight.
If the graph is the complete graph K_n and the weights are independent uniform on [0,1], the MST weight converges in distribution to \zeta(3). I will discuss two variations on this result. If the diameter of the tree is constrained to be at most K, what is the minimal weight? Turns out that there is a transition at K=\log_2\log n. If the edges are presented sequentially, and an algorithm must make a decision on each edge with only partial information, what can be achieved? Some heuristics lead to algorithms related to coalescent processes. I will give some bounds on the optimal expected weight. Monday, Nov. 26, 2007 4:10pm, Room 210 Fields Balazs Szegedy, University of Toronto Forcing Randomness. A surprising theorem by Chung, Graham and Wilson says that if a graph has edge density close to 1/2 and four cycle density close to 1/16 then the structure of the graph is close to "random looking". The natural question arises: What structures can be forced upon a graph by a finite family of subgraph densities? These structures are interesting combinations of algebraic structure and randomness. We present recent results in this topic. This is joint work with Laszlo Lovasz. Monday, Nov. 19, 2007 4:10pm, Stewart Library Fields Manjunath Krishnapur (University of Toronto) From random matrices to random analytic functions. Peres and Virag proved that the zeros of the power series a_0+za_1+z^2a_2+..., with i.i.d. standard complex Gaussian coefficients, form a determinantal point process on the unit disk. Extending this result, I proved recently that the set of singular points of the power series A_0+zA_1+z^2A_2+..., where A_i are k x k matrices with i.i.d. standard complex Gaussian coefficients, is also determinantal. As this was presented as a conjecture in earlier talks, the emphasis will be on the proof and its connection to truncations of unitary random matrices sampled according to Haar measure. Monday, Oct.
29, 2007, 4:10 pm, Stewart Library, Fields
Mathieu Merle (University of British Columbia)
Voter, Lotka-Volterra models and super-Brownian motion
The voter model was initially interpreted as representing the spread of an opinion, but, like the Lotka-Volterra model, it can also be interpreted as a stochastic model for competing species. Super-Brownian motion is a model for a population undergoing both spatial displacement and a continuous branching phenomenon. Recently, it was shown by Bramson, Cox, Durrett, Le Gall and Perkins that these objects are closely related, as super-Brownian motion appears as the scaling limit of both the voter and Lotka-Volterra models, in dimension greater than two. Known properties of super-Brownian motion can then be exploited in order to gain information on these discrete models. We will see how this leads to asymptotic results for the hitting probabilities of the voter model started with a single one, in dimensions 2 and 3. We will also briefly survey recent work of Cox and Perkins, who obtain results on survival and coexistence for the Lotka-Volterra model in dimension greater than 3.

Monday, Oct. 15, 2007, 4:10 pm, Stewart Library, Fields
Gidi Amir (University of Toronto)
Excited random walk against a wall
We analyze a random walk in the upper half of a three dimensional lattice which goes down whenever it encounters a new vertex, reflects on the plane $z=0$, and behaves like a simple random walk otherwise, a.k.a. an excited random walk. We show that it is recurrent with an expected number of returns of $\sqrt{\log n}$. (Joint work with Itai Benjamini and Gady Kozma)

Monday, Oct. 1, 2007, 4:10 pm, Stewart Library, Fields
Gabor Pete (Microsoft Research)
The exact noise and dynamical sensitivity of critical percolation, via the Fourier spectrum
Let each site of the triangular lattice (or edge of the \Z^2 lattice) have an independent Poisson clock switching between open and closed. So, at any given moment, the configuration is just critical percolation.
In particular, the probability of a left-right open crossing in an n*n box is roughly 1/2, and, on the infinite lattice, almost surely there are only finite open clusters. In the box, how long do we have to wait before we lose essentially all correlation between having a left-right open crossing now and then? In the infinite lattice, are there random exceptional times when there are infinite clusters? In joint work with Christophe Garban and Oded Schramm, we give quite complete answers: e.g., exceptional times do exist on both lattices, and the Hausdorff dimension of their set is computed to be 31/36 for the triangular lattice. The indicator function of a percolation crossing event is a function on the hypercube {-1,+1}^{sites or edges}, and thus it has a Fourier-Walsh expansion. Our proofs are based on giving sharp estimates on the "weight" of the Fourier coefficients at different frequencies.
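The Fourier-Walsh expansion mentioned in the last abstract is easy to see on a small example. The sketch below computes the coefficients of the 3-bit majority function (my own illustration of the expansion, not anything from the talk itself):

```python
from itertools import combinations, product
from math import prod

n = 3
cube = list(product([-1, 1], repeat=n))

def maj(x):
    """Majority of three +/-1 bits."""
    return 1 if sum(x) > 0 else -1

def coeff(f, S):
    """Fourier-Walsh coefficient: E[f(x) * prod_{i in S} x_i]."""
    return sum(f(x) * prod(x[i] for i in S) for x in cube) / len(cube)

coeffs = {S: coeff(maj, S)
          for r in range(n + 1)
          for S in combinations(range(n), r)}

# Parseval: the squared coefficients sum to E[f^2] = 1.
assert abs(sum(c * c for c in coeffs.values()) - 1.0) < 1e-12
```

Majority comes out as maj(x) = (x_1 + x_2 + x_3)/2 - x_1 x_2 x_3 / 2; the noise-sensitivity analysis described in the abstract rests on how such coefficient weight distributes across the frequency sizes |S|.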
• Sep 11th 2008, 08:05 AM
algebrapro18
A) For each part, find a function f: R -> R that has the desired properties: neither onto nor one-to-one.
B) Under what conditions does A\(A\B) = B?
C) Define f: J -> N (natural numbers) where f(n) = 2n-1 for each n element of N (natural numbers).
D) Given A = {1,2,3,4,5}, B = {2,3,4,5,6,7} and C = {a,b,c,d,e}, state an example of f: A -> B, g: B -> C, such that g(f) (the composition of g with f) is 1-1 but g is not 1-1.
• Sep 11th 2008, 08:14 AM
bkarpuz
Quote:

Originally Posted by algebrapro18
B) Under what conditions does A\(A\B) = B?

See #2 of http://www.mathhelpforum.com/math-he...ts-proofs.html by kathrynmath.
• Sep 11th 2008, 10:29 AM
algebrapro18
• Sep 11th 2008, 10:47 AM
bkarpuz
Solution for B
As we have mentioned previously, $A\backslash B=A\cap B^{c}$, where $B^{c}$ is the complement set of $B$. Noting that $(B^{c})^{c}=B$, and simplifying the given expression, we get
$A\backslash(A\backslash B)=A\backslash(A\cap B^{c})$
$=A\cap(A\cap B^{c})^{c}$
$=A\cap(A^{c}\cup B)$
$=\underset{\emptyset}{\underbrace{(A\cap A^{c})}}\cup(A\cap B)$
$=A\cap B.\qquad(*)$
Since (*) equals $B$ exactly when $A\cap B=B$, the identity $A\backslash(A\backslash B)=B$ holds if and only if $B\subset A$.
• Sep 11th 2008, 10:57 AM
algebrapro18
Thanks, that helps me with part B. Now all I need help with is C, because I got A and D done by myself.
• Sep 11th 2008, 11:11 AM
bkarpuz
Quote:

Originally Posted by algebrapro18
Thanks, that helps me with part B. Now all I need help with is C, because I got A and D done by myself.

Can you please explain C a little bit more? What is J, or should we find what it is? Also, do we still want the function f not to be onto and not one-to-one again?
• Sep 11th 2008, 11:14 AM
algebrapro18
C) Define f: J -> N (natural numbers) where f(n) = 2n-1 for each n which is an element of N.
I need to use what was given (the line above is all that is given) to find: the image of f, whether f is 1-1, whether f is onto, and, if f is onto, its inverse, the domain of the inverse, and the range of the inverse.
Here is what I have so far: Im(f) has to be all odd numbers, because that is what you get when you plug numbers into f(n)=2n-1. From there I get stumped.
• Sep 11th 2008, 11:37 AM
bkarpuz
Solution for C
Quote:

Originally Posted by algebrapro18
C) Define f: J -> N (natural numbers) where f(n) = 2n-1 for each n element of N (natural numbers).

Although the following remark is not applicable for this exercise, I would like to mention it.
Remark. Let $f:A\to B$ be a function and $A,B$ be finite sets with the same number of elements. If f is onto, then it is one-to-one; vice versa, if f is one-to-one, then it is onto.
If $J\not\subset K:=\Big\{1,\frac{3}{2},2,\frac{5}{2},\ldots\Big\}$, then $f$ cannot be a function with an image which is a subset of $\mathbb{N}$ (pick an element which is not in the set $K$, and see that it is mapped into the set $\mathbb{R}\backslash\mathbb{N}$). Therefore, we must have $J\subset K$.
If $J=K$, then $f$ is one-to-one and onto.
If $J\neq K$, then $f$ is only one-to-one.
Just try to figure it out by yourself by letting $f(n)=2n-1=m$, where $n\in K$ and $m\in\mathbb{N}$. Note that $f$ is strictly increasing, which indicates that it is one-to-one. Then obtain the set $K$ by isolating $n$... (Wink)
• Sep 11th 2008, 12:00 PM
algebrapro18
• Sep 11th 2008, 12:50 PM
bkarpuz
In English
Quote:

Originally Posted by algebrapro18

If $f$ is a function from $J$ to $\mathbb{N}$ defined by $f(n)=2n-1$, we see that $2n-1\in\mathbb{N}$ for all $n\in J$. This means that for every $n\in J$, there exists $m\in\mathbb{N}$ such that $2n-1=m$ holds. Note that for every $m\in\mathbb{N}$, we may not find such an $n\in J$. Therefore, we see that $n=\frac{m+1}{2}\text{ for }m\in\mathbb{N}$ holds.
Since $m\in\mathbb{N}$, this indicates that $n\in K:=\Big\{1,\frac{3}{2},2,\frac{5}{2},3,\frac{7}{2},\ldots\Big\}$. Hence, the domain of $f$ can be picked to be any subset of $K$, i.e. $J\subset K$. Now consider the following possible cases.
If $J=K$, then $f$ is one-to-one and onto.
If $J\neq K$, then $f$ is only one-to-one.
Hint. $f$ is strictly increasing, and it is hence one-to-one.
I guess it is more clear now?
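The case analysis above can be spot-checked numerically. A minimal sketch, using the set K from the post and exact rational arithmetic so no floating-point issues arise:

```python
from fractions import Fraction

# K = {1, 3/2, 2, 5/2, ...} = {(m + 1)/2 : m a natural number}
K = [Fraction(m + 1, 2) for m in range(1, 21)]

def f(n):
    return 2 * n - 1

image = [f(n) for n in K]
# f maps K one-to-one onto 1, 2, ..., 20: with J = K the map hits
# every natural number in this initial segment exactly once.
assert image == list(range(1, 21))
```

Restricting J to a proper subset of K leaves f one-to-one but no longer onto; for instance with J = N the image is only the odd numbers, as noted earlier in the thread.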
# Maxwell equations

Content of the video:

1. [00:14] Applications of the Maxwell equations.
2. [02:08] Electric field vector
3. [05:12] Magnetic field vector
4. [10:15] Divergence integral theorem - this mathematical theorem combines the volume integral of the divergence of a vector field with the surface integral of this vector field.
5. [17:50] Curl integral theorem - this mathematical theorem combines the surface integral of the curl of a vector field with the line integral of this vector field.
6. [23:58] The FIRST Maxwell's equation - states that the charges are sources and sinks of the electric field.
7. [27:46] The SECOND Maxwell's equation - states that magnetic charges always occur as dipoles. There are no magnetic monopoles.
8. [30:23] The THIRD Maxwell's equation (Faraday's law of induction) - states that a time-varying magnetic field generates an electric field and vice versa. This Maxwell equation also contains Lenz's rule.
9. [35:02] The FOURTH Maxwell's equation - states that the magnetic field can be generated by electric currents and time-varying electric fields (displacement current).
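For reference, the four equations the timestamps describe, written in differential form in SI units (a standard rendering, not taken from the video itself):

```latex
\begin{align}
\nabla \cdot \vec{E} &= \frac{\rho}{\varepsilon_0}
  && \text{(1st: charges are sources and sinks of } \vec{E}\text{)} \\
\nabla \cdot \vec{B} &= 0
  && \text{(2nd: no magnetic monopoles)} \\
\nabla \times \vec{E} &= -\frac{\partial \vec{B}}{\partial t}
  && \text{(3rd: Faraday's law; the minus sign encodes Lenz's rule)} \\
\nabla \times \vec{B} &= \mu_0 \vec{J}
  + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}
  && \text{(4th: Amp\`ere's law with the displacement-current term)}
\end{align}
```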
This function takes the results from mr() and is particularly useful if the MR has been applied using multiple exposures and multiple outcomes. It creates a new data frame with the following:

• Variables: exposure, outcome, category, outcome sample size, effect, upper ci, lower ci, pval, nsnp
• only one estimate for each exposure-outcome pair
• exponentiated effects if required

format_mr_results(
  mr_res,
  exponentiate = FALSE,
  single_snp_method = "Wald ratio",
  multi_snp_method = "Inverse variance weighted",
  ao_slc = T,
  priority = "Cardiometabolic"
)

## Arguments

mr_res: Results from mr().

exponentiate: Convert effects to OR? The default is FALSE.

single_snp_method: Which of the single SNP methods to use when only 1 SNP was used to estimate the causal effect? The default is "Wald ratio".

multi_snp_method: Which of the multi-SNP methods to use when more than 1 SNP was used to estimate the causal effect? The default is "Inverse variance weighted".

ao_slc: Logical; retrieve sample size and subcategory using available_outcomes(). If set to FALSE, mr_res must contain the following additional columns: subcategory and sample_size.

priority: Name of category to prioritise at the top of the forest plot. The default is "Cardiometabolic".

## Value

A data frame.

## Details

By default it uses the available_outcomes() function to retrieve the study level characteristics for the outcome trait, including sample size and outcome category. This assumes the MR analysis was performed using outcome GWAS(s) contained in MR-Base. If ao_slc is set to FALSE then the user must supply their own study level characteristics. This is useful when the user has supplied their own outcome GWAS results (i.e. they are not in MR-Base).
# 30. Solving for materials quantities and costs

Nate’s Pool Services uses from one to three chemicals to clean swimming pools. Variance data for the month follow (F indicates favorable variance; U indicates unfavorable variance):

|                                       | Chemical A | Chemical B | Chemical C |
|---------------------------------------|------------|------------|------------|
| Materials Price Variance              | $42,000 F  | $25,000 F  | $21,000 U  |
| Materials Efficiency Variance         | 40,000 U   | 30,000 U   | 48,000 U   |
| Net Materials Variance                | $2,000 F   | $5,000 U   | $69,000 U  |
| Pools Cleaned Requiring This Chemical | 100,000    | 110,000    | 125,000    |

The budget allowed two pounds of each kind of chemical for each pool cleaning requiring that kind of chemical. For chemical A, the average price paid was $0.20 per pound less than standard; for chemical B, $0.10 less; for chemical C, $0.07 greater. The firm purchased and used all chemicals during the month.

For each of the three types of chemicals, calculate the following:

a. Number of pounds of material purchased.
b. Standard price per pound of material.
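A minimal sketch of how parts (a) and (b) can be set up from the table, using the standard variance relations (price variance = (standard price − actual price) × pounds purchased; efficiency variance = (standard pounds − actual pounds) × standard price, with favorable variances taken as positive). The figures come from the problem statement; treat this as one possible setup, not the textbook's own solution:

```python
# Signed variances: favorable = +, unfavorable = -.
# dprice = standard price minus actual price paid per pound.
data = {
    "A": dict(price_var=+42_000, eff_var=-40_000, dprice=+0.20, pools=100_000),
    "B": dict(price_var=+25_000, eff_var=-30_000, dprice=+0.10, pools=110_000),
    "C": dict(price_var=-21_000, eff_var=-48_000, dprice=-0.07, pools=125_000),
}

answers = {}
for chem, d in data.items():
    # (a) price variance = dprice * pounds  =>  pounds purchased and used
    pounds = d["price_var"] / d["dprice"]
    # budget allows 2 lb of the chemical per pool cleaned with it
    std_pounds = 2 * d["pools"]
    # (b) efficiency variance = (std_pounds - pounds) * standard price
    std_price = d["eff_var"] / (std_pounds - pounds)
    answers[chem] = (pounds, std_price)
```

Under these conventions the sketch gives, for example, 210,000 lb at a $4.00 standard price for chemical A, which is consistent with the $2,000 F net variance in the table.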
# Boundary Question in $\mathbb{R}^{2}$ (Manifolds)

Given a subset $A$ of $\mathbb{R}^{n}$, a point $x \in \mathbb{R}^{n}$ is said to be in the boundary of $A$ if and only if for every open rectangle $B\subseteq\mathbb{R}^{n}$ with $x\in B$, $B$ contains both a point of $A$ and a point of $\mathbb{R}^{n}\setminus A$. My question is from Spivak's Calculus on Manifolds:

Construct a set $A \subseteq [0,1]\times [0,1]$ such that $A$ contains at most one point on each horizontal and vertical line but has boundary equal to $[0,1]\times[0,1]$.

Since $\mathbb Q^2$ and the set of primes are both countably infinite we can write $$\mathbb Q^2 = \{ (x_p,y_p) : p \text{ prime} \}$$ where $p \mapsto (x_p,y_p)$ is a bijection. Now let $$A := \{(x_p + \sqrt{p}/2^p, y_p + \sqrt{p}/2^{p}) : p \text { prime} \} \cap [0,1]^2.$$ To show that $A$ contains at most one point on every vertical or horizontal line it suffices to show that the maps $p \mapsto x_p + \sqrt{p}/2^p$ and $p \mapsto y_p + \sqrt{p}/2^p$ are injective. Suppose $x_p + \sqrt{p}/2^p = x_q + \sqrt{q}/2^q$ for primes $p$ and $q$. Then $\sqrt{p}/2^p - \sqrt{q}/2^q = x_q - x_p \in \mathbb Q$, so $\sqrt{p}$ and $\sqrt{q}$ are linearly dependent over $\mathbb Q$, which is only possible if $p = q$. Since $A$ contains at most one point on every vertical or horizontal line we already know that every open set in $[0,1]^2$ contains some points outside of $A$. Therefore, it remains to show that $A$ is dense in $[0,1]^2$ (or, equivalently, in $(0,1)^2$). If $(x,y)$ is any point in $(0,1)^2$ then, since $\mathbb Q^2 \cap (0,1)^2$ is dense in $(0,1)^2$, there is a subsequence $(p_k)$ of the primes such that $(x_{p_k},y_{p_k})$ is a sequence in $(0,1)^2$ which approaches $(x,y)$. But then also $(x_{p_k} + \sqrt{p_k}/2^{p_k},y_{p_k} + \sqrt{p_k}/2^{p_k}) \to (x,y)$ as $k \to \infty$. For large $k$ this is a sequence in $A$, thus $A$ is dense in $(0,1)^2$.
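The construction can be illustrated numerically. The sketch below pairs small primes with an (arbitrary, my own) enumeration of the rational points of $(0,1)^2$ and checks that no two perturbed points share a horizontal or vertical coordinate. Primes are capped at 40 because beyond that the perturbation $\sqrt{p}/2^p$ falls near floating-point resolution:

```python
from fractions import Fraction
from itertools import count
from math import isqrt, sqrt

def primes():
    """Trial-division prime generator (fine for tiny primes)."""
    for n in count(2):
        if all(n % d for d in range(2, isqrt(n) + 1)):
            yield n

def rationals_in_unit_square():
    """Some fixed enumeration of Q^2 in (0,1)^2, without repeats."""
    seen = set()
    for s in count(2):
        for a in range(1, s):
            for b in range(1, s):
                q = (Fraction(a, s), Fraction(b, s))
                if q not in seen:
                    seen.add(q)
                    yield q

pts = []
for p, (x, y) in zip(primes(), rationals_in_unit_square()):
    if p > 40:
        break
    eps = sqrt(p) / 2 ** p   # the perturbation from the answer
    pts.append((float(x) + eps, float(y) + eps))

xs = [x for x, _ in pts]
ys = [y for _, y in pts]
# At most one point of A on each vertical and horizontal line:
assert len(set(xs)) == len(pts) and len(set(ys)) == len(pts)
```

Even when two rational points share an x-coordinate, the prime-indexed perturbations separate them, which is exactly the injectivity argument above.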
# ISL Colloquium

Topic: The Power of Bidirectional Estimators: Personalized Search Markov Chain Estimation and Beyond
Thursday, February 25, 2016 - 4:15pm to 5:15pm
Venue: Packard 101
Speaker: Siddhartha Banerjee (Cornell University)

Abstract / Description: A fundamental problem in Markov chains is estimating the probability of transitioning from a given starting state to a given terminal state in a fixed number of steps. This has received much attention in recent years, as Markov chains form the basis of many graph centrality measures, in particular PageRank and Personalized PageRank (PPR). Standard approaches to this problem use either linear-algebraic iterative techniques (such as the power iteration) or Monte Carlo sampling; both, however, have a running time which scales linearly in the size of the network. This is too slow for real-time computation on large networks; consequently, PPR, which has long been recognized as an effective measure for ranking search results, is rarely used in practice. I'll present a new approach to designing bidirectional estimators, which combines linear algebraic and random walk techniques. Our approach provides the first algorithm for PageRank estimation which has sublinear running-time guarantees in theory, and which is much faster than existing algorithms in practice. In particular, we show that it returns estimates with additive error $O(1/n)$ in time $O(\sqrt{n})$ in undirected networks, and in sparse directed networks. Moreover, our approach extends to general Markov chains -- this may have applications in many diverse settings, and I look forward to some suggestions from the audience! This is joint work with Peter Lofgren and Ashish Goel.

Bio: Sid Banerjee is an assistant professor in the School of Operations Research and Information Engineering (ORIE) at Cornell, where he works on stochastic modeling, and the design of algorithms and incentives for large-scale systems.
He received his PhD in 2013 from the ECE Department at UT Austin, and was a postdoctoral researcher in the Social Algorithms Lab at Stanford from 2013 to 2015. He was a technical consultant at Lyft in 2014, and has previously interned at the Technicolor Paris Research Lab and Alcatel-Lucent Bell Labs. His current research focuses on the design of scalable algorithms for large networks, online marketplaces and social-computing platforms.
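To make the "combine linear algebra with random walks" idea concrete, here is a toy sketch in the spirit of the talk (my own simplified reconstruction, not the speaker's algorithm or code): a reverse push from the target settles most of the probability mass, and alpha-terminated random walks from the source pay off the remaining residuals.

```python
import random

# Toy directed graph as adjacency lists (every node has out-degree >= 1).
OUT = {0: [1, 2], 1: [2], 2: [0, 3], 3: [0]}
IN = {v: [u for u in OUT for w in OUT[u] if w == v] for v in OUT}
ALPHA = 0.2  # teleport (restart) probability

def exact_ppr(s):
    """Personalized PageRank from s by power iteration (slow baseline)."""
    pi = {v: 1.0 if v == s else 0.0 for v in OUT}
    for _ in range(200):
        nxt = {v: (ALPHA if v == s else 0.0) for v in OUT}
        for u in OUT:
            share = (1 - ALPHA) * pi[u] / len(OUT[u])
            for w in OUT[u]:
                nxt[w] += share
        pi = nxt
    return pi

def bidirectional_ppr(s, t, r_max=1e-3, walks=5000, seed=0):
    """Estimate ppr(s, t): reverse push from t, then walks from s."""
    rng = random.Random(seed)
    p = {v: 0.0 for v in OUT}  # settled contributions
    r = {v: 0.0 for v in OUT}  # residuals; invariant for every s:
    r[t] = 1.0                 #   ppr(s,t) = p[s] + sum_v ppr(s,v) * r[v]
    stack = [t]
    while stack:
        v = stack.pop()
        if r[v] <= r_max:
            continue
        res, r[v] = r[v], 0.0
        p[v] += ALPHA * res
        for u in IN[v]:
            r[u] += (1 - ALPHA) * res / len(OUT[u])
            if r[u] > r_max:
                stack.append(u)
    # Endpoints of alpha-terminated walks from s are ppr(s, .)-distributed,
    # so averaging r over them estimates the sum in the invariant.
    total = 0.0
    for _ in range(walks):
        v = s
        while rng.random() > ALPHA:
            v = rng.choice(OUT[v])
        total += r[v]
    return p[s] + total / walks
```

After the push, every residual is at most r_max, so the Monte Carlo stage only has to estimate a quantity bounded by r_max; balancing the two stages is what yields the sublinear running times the abstract refers to.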
Optimizing respiratory virus surveillance networks using uncertainty propagation

Abstract

Infectious disease prevention, control and forecasting rely on sentinel observations; however, many locations lack the capacity for routine surveillance. Here we show that, by using data from multiple sites collectively, accurate estimation and forecasting of respiratory diseases for locations without surveillance is feasible. We develop a framework to optimize surveillance sites that suppresses uncertainty propagation in a networked disease transmission model. Using influenza outbreaks from 35 US states, the optimized system generates better near-term predictions than alternate systems designed using population and human mobility. We also find that monitoring regional population centers serves as a reasonable proxy for the optimized network and could direct surveillance for diseases with limited records. The proxy method is validated using model simulations for 3,108 US counties and historical data for two other respiratory pathogens (human metapneumovirus and seasonal coronavirus) from 35 US states and can be used to guide systemic allocation of surveillance efforts.

Introduction

Respiratory viruses impose a high morbidity and mortality burden on human health globally: influenza alone claims 290,000 to 650,000 lives worldwide each year1. Sentinel surveillance and operational real-time forecasting systems are decision support tools that help improve the prevention and control of these pathogens2. A number of forecasting methods for influenza have been developed recently3,4,5,6,7,8,9,10,11,12,13,14.
In the last few years, some of these systems have been applied operationally to forecast influenza outbreaks in the United States15,16,17, demonstrating the feasibility of real-time prediction. Surveillance data are necessary to support real-time operational forecasting. However, many locations lack sufficient resources to maintain high-quality, continuous surveillance18,19,20. This data shortcoming limits infectious disease monitoring and forecasting at those sites. At the same time, network modeling approaches that dynamically couple disease transmission across multiple locations are widely used for infectious disease simulation21,22,23,24. These models have been recently leveraged to simulate, monitor, and forecast epidemic outbreaks. For instance, metapopulation models informed by observed human movement (air-transportation25,26,27, mobile phone location28,29, work commuting30,31,32, etc.) have supported better understanding and forecasting of the spatial spread of influenza13,26,27,33,34, dengue29, malaria28, and COVID-1935,36,37,38. Further, statistical correlations of disease activity at multiple sites have enabled improved surveillance of real-time influenza incidence (i.e., nowcasting)39. This coupling of disease activity through time and across locations suggests that infectious disease monitoring and forecasting at locations lacking surveillance capacity may be possible. To support such efforts, there is a need for developing methods that optimize disease surveillance and forecasting using incomplete data. A number of studies have explored the optimization of disease surveillance systems from a variety of perspectives. Approaches include the development of a method to select sentinel providers for influenza in Iowa that maximizes the population covered by the surveillance network18 and the design of surveillance systems that sequentially recruit sentinel sites that most improve system estimation of influenza-like illness hospitalizations19. 
This latter optimization method, applied to influenza surveillance in Texas19 and arbovirus surveillance in Puerto Rico40, employs submodular optimization to provide a performance guarantee41. Another approach evaluated strategies for selecting sensors in a social network and found that the optimal choice depends on public health goals, network structure, and disease transmissibility42. More recently, there has been a growing interest in combining and optimizing the inclusion of non-traditional data sources such as online search queries and social media activities43,44. In this study, we demonstrate that forecasting for locations without surveillance is possible using data streams from multiple other locations collectively in a networked, mechanistic, forecasting system informed by human movement (see “Materials and Methods”). In this system, a mobility-driven metapopulation model describing the spatiotemporal transmission of respiratory virus across locations is iteratively updated using the latest observed incidence13. Observations from one location are used to adjust the model state and estimate incidence in other locations, including those without surveillance. The optimized model is then evolved into the future to generate forecasts (Fig. 1a). Such networked systems enable inference and prediction of local disease activity in locations lacking observations and provide a framework for designing cost-effective surveillance and forecasting systems in circumstances constrained by limited resources.

Results

Forecasting with incomplete information

We performed a preliminary forecasting experiment for influenza outbreaks in 35 US states in which data from a single surveillance site were omitted. Specifically, we used the ILI (influenza-like illness) rate among all people seeking medical attention multiplied by the percentage of patients with laboratory-confirmed influenza type A, termed ILI+45, to estimate local influenza activity (Fig.
1b, Methods, Supplementary Note 1 and Supplementary Fig. 1). In the experiment, a set of forecasts was generated with data inputs from all 35 states over 9 seasons, and a second set of forecasts over 9 seasons was generated with data inputs from 34 states, omitting data from one state in turn (Supplementary Note 2). The forecast mean absolute error in the omitted state was averaged over all 35 locations for versions with and without surveillance data. Forecast errors of near-term predictions for 1- to 4-week ahead ILI+ indicate that omitting data from a single surveillance site does not seriously degrade forecast accuracy in the omitted locations (Fig. 1c and Supplementary Fig. 2). The ability to estimate and forecast disease activity for locations without observations poses an additional question: can a limited number of surveillance sites be optimally identified in order to support accurate estimation and forecasting of disease activity at all sites in a network? This question motivates the design of a quantitative framework for the optimal selection of surveillance sites within a network. In disease surveillance, incomplete and imperfect observation leads to uncertainty in the estimation of disease activity, which disrupts surveillance, forecasting, and prevention and control efforts. This uncertainty should be minimized (see discussions in Supplementary Note 3 and Supplementary Fig. 3); however, due to the nonlinear evolution of infectious disease transmission, uncertainty can grow over time46,47. This uncertainty propagation compromises the accuracy of both surveillance and forecasting: accumulated uncertainty growth from prior observations can undermine the understanding of the current disease situation (i.e., surveillance), and prospective uncertainty growth can limit prediction of future incidence (i.e., forecast). 
This effect is clearly evident in influenza forecasting for which smaller uncertainty across a forecast ensemble generally implies a better prediction3,5,10,13. Leveraging this relationship, an effective surveillance network should be designed to collect the most informative data that best suppresses uncertainty growth.

Uncertainty propagation

Here we develop a framework to quantify the spatiotemporal propagation of uncertainty in a networked forecasting system. We characterize the evolution of uncertainty in the estimated infected and susceptible populations. For m locations, a binary vector p = (p1,…, pm)T is used to record whether location i is selected for surveillance (pi = 1) or omitted (pi = 0). We denote the vector of uncertainty as $${\mathbf{x}} = \left( {\sigma _{I_1}, \ldots ,\sigma _{I_m},\sigma _{S_1}, \ldots ,\sigma _{S_m}} \right)^{\rm{T}}$$, where $$\sigma _{I_i}$$ and $$\sigma _{S_i}$$ represent the uncertainty (here measured by standard deviation) in the estimated infected and susceptible populations at location i. The propagation of x undergoes two interacting processes during the generation of a forecast: uncertainty reduction during model update using data assimilation methods and uncertainty growth during model integration (Fig. 1d). The evolution of the uncertainty vector during a short time interval can be approximated using a linear operation: x → MPx, where the diagonal matrix P quantifies the uncertainty reduction during data assimilation, and the matrix M estimates uncertainty growth in the dynamical model. Disease transmission dynamics in different locations are coupled in the mobility-driven metapopulation model. The adjacency matrix and numbers of commuters among the examined 35 US states are presented in Fig. 2a–b. The dynamical coupling enables the adjustment of infected and susceptible populations in one location using surveillance data from another.
To quantify uncertainty reduction during this adjustment, we introduce a diagonal matrix P = diag (P1,…, Pm, Pm+1, …, P2m) with the diagonal elements defined as $$P_j = \sqrt {\mathop {\prod}\nolimits_{i = 1}^m {\left( {1 - p_iu_{j \leftarrow i}^I} \right)} } ,\,P_{j + m} = \sqrt {\mathop {\prod}\nolimits_{i = 1}^m {\left( {1 - p_iu_{j \leftarrow i}^S} \right)} }$$ (1) for j = 1,…, m. Here, $$u_{j \leftarrow i}^I$$ and $$u_{j \leftarrow i}^S$$ are the fractional variance reduction for the infected and susceptible populations in location j attributed to the observation from location i. The matrix P encodes information about the surveillance network configuration p: if a location has observations (i.e., pi = 1), uncertainty in this location and other dynamically coupled locations is reduced; otherwise (i.e., pi = 0), this location makes no contribution to uncertainty reduction. After data assimilation, the prior model state is adjusted to a posterior, with the uncertain vector x updated to Px. The surveillance network configuration p determines the diagonal elements of P, thus controls the reduction of the uncertainty vector x. The values of $$u_{j \leftarrow i}^I$$ and $$u_{j \leftarrow i}^S$$ depend on the quality of the observation in location i. Particularly, surveillance data with less uncertainty, characterized by a smaller observational error variance (OEV), lead to a larger reduction of uncertainty in x. Thus, to calculate $$u_{j \leftarrow i}^I$$ and $$u_{j \leftarrow i}^S$$, a precise estimation of OEV is required; however, in practice, this is a challenging task as only one data point (ILI+) is observed per location per week. We therefore developed a method to quantify the OEV of these observations and reveal that the OEV of ILI+ is predominantly affected by the number of laboratory tests (Supplementary Note 4). 
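Equation (1) and the x → MPx propagation step can be sketched in a few lines of NumPy. The network size, the fractional variance reductions u, and the Jacobian below are synthetic stand-ins (the paper estimates them from the assimilation system), so this only illustrates the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5                          # number of locations
p = np.array([1, 0, 1, 0, 0])  # surveillance at locations 0 and 2

# Hypothetical fractional variance reductions u[j, i]: effect of an
# observation at location i on location j, for the I and S components.
uI = 0.3 * np.eye(m) + 0.02 * rng.random((m, m))
uS = 0.2 * np.eye(m) + 0.01 * rng.random((m, m))

# Eq. (1): P_j = sqrt(prod_i (1 - p_i * u^I_{j<-i})), likewise for S.
PI = np.sqrt(np.prod(1 - p[None, :] * uI, axis=1))
PS = np.sqrt(np.prod(1 - p[None, :] * uS, axis=1))
P = np.diag(np.concatenate([PI, PS]))   # 2m x 2m assimilation operator

# Linearized propagator M ~ I + J*dt for a short step dt.
J = 0.1 * rng.standard_normal((2 * m, 2 * m))  # stand-in Jacobian
M = np.eye(2 * m) + 0.01 * J

x = np.ones(2 * m)       # uncertainty vector (standard deviations)
x_next = M @ P @ x       # assimilation shrinks x, integration grows it
```

Observed locations (and, through the off-diagonal u terms, their coupled neighbors) get diagonal entries of P below 1, which is what suppresses the subsequent growth under M.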
In order to properly represent the uncertainty of observations, we optimized the OEV of ILI+ from different locations in retrospective forecasting so that near-term forecast error is minimized (Supplementary Fig. 4). The forms of cross-location uncertainty reduction $$u_{j \leftarrow i}^I$$ and $$u_{j \leftarrow i}^S$$ are derived using a state-space framework (Supplementary Note 5 and Supplementary Fig. 5) and reported in Methods. We computed the mean values of $$u_{j \leftarrow i}^I$$ and $$u_{j \leftarrow i}^S$$ averaged over weekly influenza forecasts during 9 seasons. The surveillance data from one location i mostly affect the uncertainty of its own infected and susceptible populations (Fig. 2c–d, diagonal elements); however, for certain locations that are adjacent to location i or exchange a large number of commuters (Fig. 2a–b), the variances of infected and susceptible populations are reduced by the observation from location i as well (Fig. 2c–d, off-diagonal elements). Such cross-site uncertainty reduction indicates dynamical coupling between these pairs of locations. The reduced uncertainty Px will propagate in the networked system during model integration. The evolution of Px within a short time interval can be approximated using the linear propagator M of the transmission model that characterizes the uncertainty growth driven by the linearized model dynamics: Px → MPx. Specifically, for a short time interval δt, the linear propagator M is estimated by M ≈ I + Jδt, where I is a 2m × 2m unit matrix and J is the Jacobian matrix of the full nonlinear system (Supplementary Note 5). The linear approximation was shown to be valid for a few days for influenza transmission models47, and has been previously applied in numerical weather prediction46. Typical respiratory disease surveillance releases data once per week2; at this rate the linear approximation may become less accurate. As a consequence, we here limit our attention to short-term uncertainty propagation.
Later retrospective forecast results indicate that this setting can improve near-term forecasts for ILI+ up to 4 weeks ahead.

The optimal surveillance problem

To minimize uncertainty growth during short-term forecast, we aim to minimize the uncertainty growth rate, quantified by $$\left\| {{\mathbf{MPx}}} \right\|^2/\left\| {\mathbf{x}} \right\|^2 = ({\mathbf{x}}^{\rm T}{\mathbf{P}}^{\rm T}{\mathbf{M}}^{\rm T}{\mathbf{MPx}})/({\mathbf{x}}^{\rm T}{\mathbf{x}})$$46,47. This equation indicates that the uncertainty growth rate is determined by the dominant eigenvalue, λ1, of the matrix L ≡ $${\mathbf{P}}^{\rm T}{\mathbf{M}}^{\rm T}{\mathbf{MP}}$$. In operation, the matrices P and M vary by forecast time (i.e., how far into an outbreak a forecast is initiated) and system state. Thus, to design an optimal surveillance network for a wide range of unknown, potential outbreaks, we minimize the mean value, 〈λ1〉, averaged over different forecast initiation time points and system states. Mathematically, the task of selecting K optimal sentinel sites from m locations is transformed to the combinatorial optimization problem of finding p that minimizes 〈λ1〉 under the constraint $$\mathop {\sum}\nolimits_{i = 1}^m {p_i = K}$$: $${\mathbf{p}} \ast = \arg \min \left\langle {\lambda _1\left( {{\mathbf{p}},t,{\mathbf{z}}} \right)} \right\rangle {\mathrm{subject}}\,{\mathrm{to}}\mathop {\sum}\limits_{i = 1}^m {p_i = K,p_i \in \left\{ {0,1} \right\}} .$$ (2) Here λ1 (p, t, z) is the dominant eigenvalue of L at time t with system state z given the configuration of the surveillance network p. In order to calculate λ1, we run weekly data assimilation in multiple seasons to estimate the system state z at each week. Using the surveillance network configuration p and the posterior model state z at time t, we obtain the matrices P and M, and then compute the dominant eigenvalue λ1 of L using the power method48. The mean eigenvalue is averaged over λ1(p, t, z) for different weeks and seasons. The above optimal surveillance problem is a combinatorial optimization, as the inclusion of one location is impacted by other selected locations.
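A toy version of the Eq. (2) search can be written compactly: the dominant eigenvalue of L = PᵀMᵀMP via the power method, wrapped in a simulated-annealing loop over K-subsets. Everything here is synthetic (one uncertainty component per site, random stand-in matrices), so it shows the mechanics rather than the paper's actual pipeline:

```python
import math
import random

import numpy as np

rng = np.random.default_rng(1)
m, K = 8, 3
# Hypothetical variance reductions and a stand-in linear propagator.
u = 0.4 * np.eye(m) + 0.05 * rng.random((m, m))
M = np.eye(m) + 0.05 * rng.standard_normal((m, m))

def dominant_eig(p, iters=200):
    """lambda_1 of L = P^T M^T M P by the power method."""
    P = np.diag(np.sqrt(np.prod(1 - p[None, :] * u, axis=1)))
    L = P.T @ M.T @ M @ P
    v, lam = np.ones(m), 0.0
    for _ in range(iters):
        w = L @ v
        lam = np.linalg.norm(w)
        v = w / lam
    return lam

def as_vector(sel):
    return np.array([1.0 if i in sel else 0.0 for i in range(m)])

def anneal(steps=1000, t0=0.1, seed=0):
    """Minimize lambda_1 over K-subsets by swapping one site at a time."""
    ra = random.Random(seed)
    sel = set(ra.sample(range(m), K))
    cur = dominant_eig(as_vector(sel))
    best, best_val = set(sel), cur
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-6
        cand = set(sel)
        cand.remove(ra.choice(sorted(cand)))
        cand.add(ra.choice([i for i in range(m) if i not in cand]))
        val = dominant_eig(as_vector(cand))
        if val < cur or ra.random() < math.exp((cur - val) / temp):
            sel, cur = cand, val
            if val < best_val:
                best, best_val = set(cand), val
    return best, best_val

best, best_val = anneal()
```

For m = 8 the 56 possible subsets could of course be enumerated; simulated annealing is sketched because it is the technique the text names, and it scales to selection problems where brute force does not.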
Solving this problem for large-scale systems is challenging as the number of configurations grows exponentially with the system size. However, for a small system forecasting respiratory disease at the US state level, this problem can be solved using standard iterative optimization techniques such as simulated annealing (SA)49 (Methods).

Influenza surveillance networks

We validated the proposed framework using influenza outbreaks in 35 US states. In order to perform the optimization, historical outbreak data are required to infer model parameters and state variables so that simulation dynamics are representative of real-world influenza transmission patterns (e.g., seasonality, spatiotemporal spread, typical attack rate, etc.). Although sentinel providers tend to work locally, in practice, surveillance data collected from local sentinel providers are aggregated to coarser geographical scales for public health use. In particular, the US Centers for Disease Control and Prevention (CDC) releases ILI surveillance data at the state, HHS (the US Department of Health and Human Services) regional and national levels50. Here we work at this operational spatial resolution and optimize the surveillance networks at the state level. For a given number of observation locations, K, we optimize the surveillance network using SA. As short-term uncertainty propagation is suppressed, we expect that the forecast accuracy of the selected network for near-term targets, for instance, 1- to 4-week ahead ILI+, will outperform surveillance systems designed using heuristic strategies that favor locations with either larger population size, a larger number of commuters (both incoming and outgoing directions), or a higher population gradient (defined as the ratio of a location's population size to the average population of its adjacent neighbors). Among all strategies, SA is best at minimizing the average dominant eigenvalue (Fig.
3a), and the selected states (for K = 5, 10, 15, 20) are spread across the country (Fig. 3b). We next performed retrospective forecasting for 9 seasons at the state level (Methods, Supplementary Note 6 and Supplementary Fig. 6). In retrospective forecasting, all 35 states were included in the metapopulation model, but only surveillance data from selected states were used to calibrate the model (i.e., observations from unselected states were omitted). Using the surveillance networks optimized by SA, the forecast error of near-term predictions in the states without surveillance decreases as more states are observed, and eventually converges to the forecast error of the states with observations (Fig. 3c). To evaluate the performance of surveillance networks selected using different methods, we compared the forecast error (mean absolute error) for 1-week ahead ILI+ predictions in all states, including those with and without surveillance data. In most cases, the SA approach significantly outperforms the other heuristic methods by generating surveillance networks that support lower forecast error (Fig. 3d, Wilcoxon signed-rank test, Methods and Supplementary Fig. 7). The marginal gain of observing more locations gradually decreases, highlighting the dominant role that observations from certain key locations play in constraining influenza forecast accuracy. Comparison for 2- to 4-week ahead predictions (Supplementary Fig. 7) additionally corroborates the effective minimization of uncertainty growth by SA optimization. The forecasting system generates probabilistic forecasts. Mean absolute errors reported in Fig. 3c only measure the error of point prediction (i.e., the mean value of each ensemble forecast). In order to evaluate the full probabilistic forecasts, we compared the “log score” (Methods), defined as the logarithmic value of the probability assigned to an interval around the observed target.
In essence, the log score is a summary statistic measuring the distribution of ensemble forecast error. This probabilistic scoring rule has been used in the CDC FluSight forecast challenge15,16,17. Consistent with the results for forecast error, the SA approach outperforms the other three strategies (Fig. 3e). We further examined the forecast error and log score at different times relative to the predicted peak week (Supplementary Figs. 8 and 9). As an example, retrospective forecasts were generated for all 35 states over 9 seasons using surveillance networks consisting of 20 states. At most predicted lead weeks, the SA optimization supports better predictions. To understand the features of networks selected by SA, we examined their similarity with networks identified using alternate heuristic methods. In addition to population size, number of commuters, and population gradient, we also investigated three other feature-driven surveillance location selection methods and compared their results with those selected by SA. These features are: (1) Absolute humidity. In temperate regions, influenza transmission is favored during periods of lower absolute humidity51. As a result, we preferentially selected locations with lower average absolute humidity. (2) Population density. Higher population density may facilitate influenza transmission due to higher person-to-person contact frequency. Locations are ranked by their population density in descending order. (3) Random walk centrality. In contrast to other local features, random walk centrality is a global metric determined by the connectivity among all locations. Specifically, the random walk centrality ri for location i is the stationary visiting probability of a random walker who travels in the network following the transfer probability specified by the commuting matrix.
The values of ri satisfy the self-consistent equation: $$r_i = \mathop {\sum}\nolimits_j {C_i^jr_j/N_j}$$, and can be calculated through iteration ($$C_i^j$$ is the number of commuters from location j to i, and Nj is the population in location j). For random walk centrality, locations are ranked according to ri in descending order. Among all examined measures, the Gradient approach is most similar to the eigenvalue minimization approach using SA (Fig. 3f), indicating that the optimized network has a tendency to first select locations with a high population gradient. For example, Washington state ranks only 11th and 25th by population and number of commuters among 35 examined US states; however, it ranks 3rd according to Gradient and is selected with high priority by the eigenvalue minimization approach. An attractive alternative to SA for solving the optimal surveillance problem is to sequentially add locations that produce the largest marginal reduction of the eigenvalue. This greedy approach is less computationally demanding than the SA algorithm, and could have a performance guarantee if the objective function satisfies the submodular property41. A function is submodular if the marginal gain of including an additional location decreases with the number of existing surveillance sites. Unfortunately, the eigenvalue function we use here does not have this diminishing-return property. We nevertheless tested a greedy algorithm and compared the resulting eigenvalue with the one obtained from the SA algorithm (Supplementary Fig. 10). The eigenvalue curves are identical for surveillance systems with fewer than 15 states and remain similar for larger systems. These findings indicate that the greedy approach is effective for this 35-state model, and may be applicable to small- and medium-sized systems.
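The fixed-point iteration for the random walk centrality can be sketched as follows; the commuting matrix and populations here are hypothetical toy values, not census data:

```python
import numpy as np

def random_walk_centrality(C, N, n_iter=1000, tol=1e-12):
    """Solve r_i = sum_j C[i, j] * r_j / N_j by fixed-point iteration.
    C[i, j] is the number of commuters from location j to i; N[j] is the
    population of location j.  r is normalized to a probability vector."""
    m = len(N)
    r = np.full(m, 1.0 / m)
    T = C / N  # column j scaled by 1/N_j (NumPy broadcasting over rows)
    for _ in range(n_iter):
        r_new = T @ r
        r_new /= r_new.sum()  # keep r a visiting-probability vector
        if np.abs(r_new - r).max() < tol:
            r = r_new
            break
        r = r_new
    return r

# Hypothetical 3-location commuting matrix (C[i, j]: commuters from j to i).
C = np.array([[ 0.0, 30.0, 10.0],
              [20.0,  0.0, 40.0],
              [ 5.0, 15.0,  0.0]])
N = np.array([100.0, 120.0, 80.0])
r = random_walk_centrality(C, N)
```

The iteration converges to the leading eigenvector of the column-scaled commuting matrix, i.e., the stationary visiting probabilities described in the text.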
However, for large systems like the county-level transmission model, the greedy algorithm is computationally prohibitive due to the cost of calculating eigenvalues for large-scale matrices. The surveillance network optimization requires historical records to compute the matrices M and P. However, disease surveillance data are typically sparse in underdeveloped settings, especially for emerging infectious diseases. Moreover, the SA algorithm is computationally expensive and prohibitive for systems with more than a few hundred locations49. For large-scale systems or diseases with limited historical records, a practical strategy to design surveillance networks is needed. Given the similarity between the surveillance networks selected by SA and Gradient, we propose that Gradient, a metric that is broadly available, can be used to select surveillance sites. We examined the performance of Gradient at finer spatial resolution using synthetic influenza outbreaks generated at the county level. Specifically, error-laden observations of ILI+ for 20 outbreaks in the 3108 continental US counties were generated using the mobility-driven metapopulation model (Supplementary Note 7). We then compared the forecasting accuracy of surveillance networks constructed using various, alternate strategies. Specifically, we considered four other heuristic approaches: site selection informed by population coverage, number of commuters, diversity of commuters’ residential counties, and random selection. A recent study found that selecting sentinel surveillance sites based on the geographical diversity of patients visiting healthcare facilities performs well for arbovirus disease systems40. Here we examined a similar strategy in which counties with more diverse commuters, quantified by the Shannon diversity H = − ∑i hi ln hi, where hi is the fraction of incoming commuters living in county i, are preferentially selected.
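The diversity score is simply the Shannon entropy of a site's incoming-commuter composition; a minimal sketch with invented counts:

```python
import math

def commuter_diversity(incoming):
    """Shannon diversity H = -sum_i h_i * ln(h_i) of a site's incoming
    commuters, where h_i is the fraction living in residential county i."""
    total = sum(incoming)
    h = [c / total for c in incoming if c > 0]
    return -sum(x * math.log(x) for x in h)

# Hypothetical counts of incoming commuters grouped by residential county.
uniform = commuter_diversity([100, 100, 100, 100])  # maximally diverse
skewed = commuter_diversity([370, 10, 10, 10])      # dominated by one county
```

A site drawing commuters evenly from many counties scores higher (ln 4 for four equal sources) than one dominated by a single residential county.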
To provide an alternate strategy that avoids geographical clustering, we also included a strategy that randomly selects surveillance sites. Surveillance networks with K of 5%, 10%, 20% up to 100% of counties were compared. Gradient outperformed the competing strategies (Fig. 4a, Supplementary Fig. 11). Additionally, the marginal reduction of forecast error becomes negligible once 10% of counties are observed. This indicates that observing a small fraction of dynamically central counties is sufficient to generate satisfactory estimates and forecasts for both observed and unobserved locations, and that observing additional sites with potentially larger noise does not necessarily improve forecast accuracy. When we compare results at the state level (Fig. 3d), the advantage of using Gradient to design surveillance networks over population and human mobility becomes even more pronounced. This indicates that spatial scale matters in selecting optimal surveillance sites. Indeed, determining the appropriate observational spatial scale that can damp excessive noise while not compromising resolution is a critical, outstanding problem in operational forecasting. We next compared the overlap of counties selected by Gradient with those selected by other attributes including local absolute humidity, population, number of commuters, population density, random walk centrality, commuter diversity and random selection (Fig. 4b). With limited overlap, surveillance networks designed using alternate measures differ considerably from the network selected by Gradient, especially for small numbers of surveillance sites. This comparison indicates that the information conveyed by Gradient cannot be represented by the other examined metrics. The competitive performance of Gradient is explained by its characteristic of avoiding redundant information from clusters of locations: only one population center tends to dominate a cluster of counties.
The benefit of avoiding informational redundancy has previously been highlighted19. To detail this further, we visualize the surveillance networks composed of 10% of counties as selected by the Gradient, Population, Commuter, Diversity, and Random approaches (Fig. 4c). Counties selected by the population, commuter and diversity approaches are densely clustered in a few metropolitan areas. In stark contrast, the networks selected by Gradient are more evenly distributed across the US and are thus more representative of disease activity throughout the country. The randomly selected sites are also distributed across the US; however, many selected counties have small populations with possibly large observational noise that could compromise forecasting accuracy. We quantify geographical clustering using the distribution of distance between nearest neighbors within the surveillance network. The population-, commuter- and diversity-based surveillance networks have on average a closer nearest neighbor (Fig. 4d), indicating a more clustered structure. The networks selected by the random strategy are less clustered, but the distance between nearest neighbors is still slightly lower than that of the Gradient-based networks. For the random strategy, more counties are selected in the eastern and middle US, where counties are more densely distributed. We note that the Gradient strategy does not merely seek spatial homogeneity; it also reflects the spatial distribution of population: the surveillance sites are denser in areas with more population (Fig. 4c). The SA algorithm also exhibits cluster-avoiding tendencies: during combinatorial optimization, once a location is selected, the chance of selecting an adjacent neighbor is low because the marginal gain diminishes. This mechanism partially explains why the surveillance sites selected by the eigenvalue minimization approach are spread broadly across the US.
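The nearest-neighbor statistic used to quantify clustering can be sketched as follows; the site coordinates are hypothetical:

```python
import numpy as np

def mean_nearest_neighbor_distance(coords):
    """Average distance from each site to its closest other site;
    smaller values indicate a more clustered surveillance network."""
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    return d.min(axis=1).mean()

# Hypothetical projected site coordinates for two candidate networks.
clustered = mean_nearest_neighbor_distance([(0, 0), (0.1, 0), (0, 0.1), (5, 5)])
spread = mean_nearest_neighbor_distance([(0, 0), (3, 0), (0, 3), (5, 5)])
```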
We further validated site selection by Gradient using historical outbreaks for two additional respiratory pathogens: human metapneumovirus (HMPV) and coronavirus (CoV) in 35 US states from 2013–2014 to 2016–2017 (Fig. 5a and Supplementary Note 8). HMPV and CoV are common ILI-causing respiratory viruses, and typically circulate in winter and early spring. In the dataset, their surveillance records are only available in 4 seasons, providing an example of a disease with limited historical data. Retrospective forecasts for HMPV and CoV outbreaks were generated using surveillance networks composed of different numbers of sentinel sites. Although the signals of HMPV and CoV are noisier than ILI+, due to fewer laboratory tests, the networked forecasting system is still able to predict near-term incidence using partial observations, and the Gradient site selection approach identifies key surveillance locations that support forecasts with lower errors (Fig. 5b–c and Supplementary Fig. 12). The findings demonstrate that forecasting for a range of respiratory viruses is possible in locations without surveillance. Discussion While similar in performance to SA optimization, Gradient remains a static metric, reflecting only the geographical distribution of population. In contrast, the combinatorial optimization approach using SA accounts for connectivity between locations, observation uncertainty, and evolving model dynamics, and thus more flexibly responds to surveillance practices and outbreak patterns. Nevertheless, should insufficient data (e.g., historical records or estimates of observational error) exist to perform SA optimization, the population gradient method could serve as a reasonable proxy for network site selection. Recent work has revealed the crucial role that urban centers play in incubating and driving influenza transmission52; here we identify the significant role metropolises and centers of population play in suppressing uncertainty growth.
As an approximate solution to a combinatorial optimization problem, the optimized surveillance network may have multiple configurations with similar performance49, i.e., the network constructed using SA is only one of these possible choices. If certain locations are already monitored, such constraints could be incorporated into the optimization problem to find the conditional optimal design for adding more surveillance sites. Network approaches are increasingly employed in infectious disease modeling, surveillance, and forecasting. In these applications, networked models are usually fitted to real-world observations using computational Bayesian techniques (e.g., Markov Chain Monte Carlo53, particle filters54, Kalman filters55, approximate Bayesian computation56, etc.). Through this model calibration process, distributions of prior and posterior model states are obtained. This allows the direct quantification of uncertainty propagation when theoretical analysis is intractable and facilitates the generalization of the framework proposed in this study. One possible application would be to assess the value of specific observations and design proactive and adaptive observations (in space and time) in response to an ongoing outbreak. In the framework used here, important factors affecting influenza outbreaks (e.g., vaccination coverage and effectiveness, mixing patterns within and across age groups, antigenic drift, etc.) were not explicitly represented in the dynamical model. Directly accounting for those factors could potentially further reduce model misspecification and improve the selection of an optimal network. We also only compared the optimization framework with simple location features such as population size and number of commuters. In the future, other more sophisticated strategies for designing surveillance networks could be considered should data and resource availability be sufficient to support proper implementation.
Also, the framework only considers the short-term evolution of uncertainty in a linearized approximation. A quantification of longer-term uncertainty propagation in the full nonlinear model would be needed to enhance and optimize the forecast of seasonal targets such as peak timing and peak intensity. Methods Data description We used patient syndromic influenza-like illness (ILI) data and laboratory test results from the US Armed Forces Health Surveillance Branch (AFHSB) to estimate state-level respiratory disease activity (Supplementary Note 1). We focused on the 35 US states in the AFHSB dataset with substantive ILI and test records. For influenza, we used ILI+, defined as the weekly ILI rate among patients seeking medical attention multiplied by the concurrent weekly positivity rate for influenza type A in laboratory testing, to reflect local influenza activity spanning 9 seasons from 2008–2009 to 2016–2017. For HMPV and CoV, laboratory test results are only available for 4 seasons from 2013–2014 to 2016–2017. Similarly, we used ILI multiplied by concurrent positivity rates for these viruses, termed HMPV+ and CoV+ respectively, to estimate disease activity in each state. The ILI visit and laboratory test data were stored in MySQL 8.0 and analyzed in MATLAB 2015b. The use of the deidentified dataset in this study was approved by AFHSB. All relevant ethical regulations were followed. Local absolute humidity (AH) conditions for each state and county were obtained from North American Land Data Assimilation System data57. A daily AH climatology of conditions averaged over a 24-year period from 1979 to 2002 was used. County-to-county commuting data, obtained from the 2009–2013 American Community Surveys, were used to approximate human movement. This dataset, publicly available from the United States Census Bureau website, provides commuting population estimates across all US counties58. 
Given that the survey period (2009–2013) is close to the forecast seasons, we assume the commuting patterns reported in the census survey data are representative of the study period. Forecasting framework We describe the transmission of respiratory pathogens using a metapopulation SIRS (susceptible-infected-recovered-susceptible) model, in which different locations are connected by human mobility. In practice, detailed information about human movement is not available in real time. To address this issue, we assume the volume of human movement between two locations is proportional to the average number of commuters between them. Denote $$C_j^i$$ as the number of commuters living in location i and commuting to work in location j. The number of visitors from location i to j is assumed to be $$\theta \bar C_j^i$$, where θ is an adjustable parameter and $$\bar C_j^i$$ is the average number of commuters between locations i and j. The evolution of transmission is then described by $$\frac{{{\rm{d}}I_i}}{{{\rm{d}}t}} = \frac{{\beta _iS_iI_i}}{{N_i}} - \frac{{I_i}}{D} - \frac{{\theta I_i}}{{N_i}}\mathop {\sum}\limits_{j \ne i} {\bar C_j^i} + \theta \mathop {\sum}\limits_{j \ne i} {\frac{{\bar C_i^jI_j}}{{N_j}}} ,$$ (3) $$\frac{{{\rm{d}}S_i}}{{{\rm{d}}t}} = \frac{{N_i - S_i - I_i}}{L} - \frac{{\beta _iS_iI_i}}{{N_i}} - \frac{{\theta S_i}}{{N_i}}\mathop {\sum}\limits_{j \ne i} {\bar C_j^i} + \theta \mathop {\sum}\limits_{j \ne i} {\frac{{\bar C_i^jS_j}}{{N_j}}} .$$ (4) Here Ni, Si, and Ii are the total, susceptible, and infected populations in location i; D is the average duration of infection; L is the average duration of immunity; and βi is the transmission rate in location i. The last two terms in the above equations describe the exchange of population due to human movement. For influenza, the transmission rate is modulated by local AH conditions through βi(t) = [exp(a × qi(t) + log(R0max − R0min)) + R0min]/D, where qi(t) is daily specific humidity, a measure of AH.
The parameter a = − 180 is estimated from laboratory experiments of the impact of AH on influenza virus survival. R0max and R0min are the maximum and minimum daily basic reproductive numbers inferred during data assimilation. For HMPV and CoV, we assume the transmission rate is constant and identical across locations. The transmission model is coupled with a data assimilation algorithm to optimize the model state using observed incidence data in real time. Specifically, we used the Ensemble Adjustment Kalman Filter (EAKF)59 in which the distribution of the model state is represented by an ensemble of state vectors. During data assimilation, this ensemble is iteratively updated so that the model better estimates the underlying unknown truth. The optimized dynamical model is then integrated into the future to generate probabilistic forecasts. Similar model-data assimilation forecast frameworks have been successfully used for forecasting and inference of a variety of infectious diseases3,60,61,62,63,64,65. Details about the system configuration can be found in Supplementary Note 2. The EAKF algorithm was coded in MATLAB 2015b. 
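To make Eqs. (3)-(4) concrete, the model right-hand side with the AH-forced transmission rate can be sketched as below; all parameter values are illustrative placeholders rather than the inferred estimates from the data assimilation:

```python
import numpy as np

def beta_t(q, a=-180.0, R0max=2.5, R0min=1.2, D=3.0):
    """AH-forced transmission rate beta_i(t) = R0_i(t)/D with
    R0(t) = exp(a*q(t) + log(R0max - R0min)) + R0min.
    R0max, R0min, and D here are placeholder values."""
    return (np.exp(a * q + np.log(R0max - R0min)) + R0min) / D

def sirs_rhs(S, I, N, q, Cb, theta=0.1, D=3.0, L=1040.0):
    """Right-hand sides of Eqs. (3)-(4).  Cb[j, i] holds the average
    number of commuters between locations i and j (C-bar_j^i in the
    text), with a zero diagonal so the j != i sums are automatic."""
    beta = beta_t(q, D=D)
    out_commuters = Cb.sum(axis=0)  # sum_j C-bar_j^i for each i
    dI = (beta * S * I / N - I / D
          - theta * I / N * out_commuters + theta * (Cb @ (I / N)))
    dS = ((N - S - I) / L - beta * S * I / N
          - theta * S / N * out_commuters + theta * (Cb @ (S / N)))
    return dS, dI

# Tiny two-location example with symmetric commuting (hypothetical numbers).
N = np.array([1e6, 5e5])
S = 0.6 * N
I = np.array([1000.0, 100.0])
q = np.array([0.008, 0.010])  # specific humidity (kg/kg)
Cb = np.array([[0.0, 2e4],
               [2e4, 0.0]])
dS, dI = sirs_rhs(S, I, N, q, Cb)
```

Note that the movement terms cancel when summed over locations, so commuting redistributes infections without creating or destroying them.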
Cross-location uncertainty reduction We derived the form of $$u_{j \leftarrow i}^I$$ and $$u_{j \leftarrow i}^S$$ analytically using a state-space framework (Supplementary Note 5): $$u_{j \leftarrow i}^I = \frac{{\sigma _{y_iI_j}^2}}{{\left( {R_i + \sigma _{y_i}^2} \right)\sigma _{I_j}^2}},u_{j \leftarrow i}^S = \frac{{\sigma _{y_iS_j}^2}}{{\left( {R_i + \sigma _{y_i}^2} \right)\sigma _{S_j}^2}},$$ (5) where yi is the prior incidence (i.e., simulated ILI+ rate) in location i, $$\sigma _{y_iI_j}$$ ($$\sigma _{y_iS_j}$$) is the covariance between the prior incidence in location i and the prior infected (susceptible) population in location j, Ri is the OEV of the observation from location i, $$\sigma _{y_i}^2$$ is the variance of the prior incidence in location i, and $$\sigma _{I_j}^2$$ ($$\sigma _{S_j}^2$$) is the variance of the prior infected (susceptible) population in location j. Note that $$\sigma _{y_iI_j}$$ ($$\sigma _{y_iS_j}$$) quantifies the dynamical coupling between the observed state variable (simulated ILI+) in location i and the infected (susceptible) population in location j. In addition, a more uncertain observation in location i (i.e., a larger Ri) leads to a smaller reduction of uncertainty in Ij and Sj. In practice, the quantities defining $$u_{j \leftarrow i}^I$$ and $$u_{j \leftarrow i}^S$$ in Eq. (5) can be computed numerically using the state-vector ensemble during data assimilation. We validated Eq. (5) in retrospective forecasts of influenza outbreaks over 9 seasons (Supplementary Fig. 5). The actual uncertainty reduction in the state-vector ensemble agrees well with the values calculated using Eq. (5). Optimization using simulated annealing The configuration vector p can be optimized using general iterative optimization algorithms such as simulated annealing (SA)49. In SA, the energy function E(p) is defined as E(p) = 〈λ1(p, t, z)〉. 
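A minimal numerical sketch of Eq. (5), using synthetic prior ensembles with an assumed linear coupling between the simulated ILI+ at location i and the infected population at location j:

```python
import numpy as np

def uncertainty_reduction(y_i, x_j, oev_i):
    """Eq. (5): expected fractional reduction of variance in a state
    variable x_j (e.g., I_j or S_j) from assimilating the observation
    at location i.  y_i, x_j are prior ensembles; oev_i is R_i."""
    cov = np.cov(y_i, x_j, ddof=1)
    sigma2_y, sigma_yx, sigma2_x = cov[0, 0], cov[0, 1], cov[1, 1]
    return sigma_yx ** 2 / ((oev_i + sigma2_y) * sigma2_x)

# Synthetic prior ensembles (hypothetical values, not study data).
rng = np.random.default_rng(2)
y = rng.normal(0.02, 0.005, 500)            # simulated ILI+ at location i
x = 3e4 * y + rng.normal(0.0, 50.0, 500)    # coupled infected count at j
u = uncertainty_reduction(y, x, oev_i=1e-5)
```

By Cauchy-Schwarz, u lies in [0, 1], and a larger observational error variance R_i shrinks the reduction, as stated in the text.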
Starting from a random initial configuration that satisfies $$\mathop {\sum}\nolimits_{i = 1}^m {p_i = K}$$, at each step k, the current configuration vector pk is perturbed to $${\mathbf{p}}_k^\prime$$ under constraint of the number of selected locations. This procedure can be realized by swapping the states of a randomly chosen couple of selected and omitted locations. The change in energy, $${\Delta} E = E\left( {{\mathbf{p}}_k^\prime } \right) - E({\mathbf{p}}_k)$$, can then be calculated directly from the ensemble of eigenvalues. If ΔE < 0, the perturbation is accepted and the new configuration is used as the starting point for the next step $${\mathbf{p}}_{k + 1} = {\mathbf{p}}_k^\prime$$. Otherwise, the new configuration is only accepted with a probability $$P({\Delta} E) = \exp \left( { - {\Delta} E/\left( {\kappa _BT_k} \right)} \right)$$, where κB is a constant and Tk is a time-varying parameter called temperature. In implementation, the annealing schedule starts from a high temperature T0, where essentially all perturbations can be accepted, and then gradually cools down to a low temperature with a decreasing probability of accepting worse configurations. The algorithm stops when the number of attempts exceeds a certain threshold value before a new configuration is accepted. The final configuration p is the estimated optimal solution to the optimization problem. In our implementation, we used κB = 0.1, an exponentially decreasing temperature $$T_k = 0.9997^k$$ and a maximal iteration number of kmax = 30,000. The stopping threshold was set at 3000. Evaluation of retrospective forecasting We examined forecast accuracy for 4 short-term targets: 1- to 4-week ahead ILI+ rates. The performance of forecast accuracy is evaluated using two measures: mean absolute error (MAE) and log score. MAE is calculated as the absolute difference between the predicted ensemble mean and the observed ILI+ rate.
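The SA search described above (κB = 0.1, Tk = 0.9997^k, kmax = 30,000, stopping threshold 3000) can be sketched as follows; the toy objective is a cheap stand-in for the mean dominant eigenvalue, with a known optimum:

```python
import math
import random

def simulated_annealing(m, K, energy, kappa_b=0.1, k_max=30000, stop=3000):
    """Select K of m sites by SA with the schedule from the text:
    temperature T_k = 0.9997**k and acceptance probability
    exp(-dE / (kappa_b * T_k)) for energy-increasing swaps."""
    selected = set(random.sample(range(m), K))
    e = energy(selected)
    rejected = 0
    for k in range(k_max):
        T = 0.9997 ** k
        i = random.choice(sorted(selected))                  # site to drop
        j = random.choice(sorted(set(range(m)) - selected))  # site to add
        cand = (selected - {i}) | {j}
        de = energy(cand) - e
        if de < 0 or random.random() < math.exp(-de / (kappa_b * T)):
            selected, e = cand, e + de
            rejected = 0
        else:
            rejected += 1
            if rejected >= stop:  # too many consecutive rejections
                break
    return selected, e

# Toy objective with a known optimum {0, 1, 2}: a stand-in for the costly
# mean-eigenvalue energy used in the real problem.
random.seed(0)
best, e_best = simulated_annealing(10, 3, energy=lambda s: sum(s))
```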
Log score is defined as the log value of the probability assigned to the interval of width 0.01 centered at the observed ILI+ rate (0.005 on each side)15,16,17. In order to examine whether the SA algorithm statistically significantly outperforms the other three strategies in retrospective forecasting for influenza outbreaks, we performed a Wilcoxon signed-rank test on three pairs of methods: SA-Population, SA-Commuter, and SA-Gradient. The Wilcoxon signed-rank test is a non-parametric statistical test that compares two paired samples (here, paired MAEs or log scores generated by both examined methods for the same location at the same forecast week) to assess whether their mean ranks differ66. We performed a two-sided test to obtain a p-value for each comparison. We calculated the p-values for the three pairs of comparisons (SA-Population, SA-Commuter, and SA-Gradient) for each of the four targets. The p-values reported in Fig. 3d–e are the maximal p-values among all three tests (i.e., the worst case). The same analysis was performed for forecasting at the county level and for HMPV and CoV. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The US commuting data is available at https://www2.census.gov/programs-surveys/demo/tables/metro-micro/2015/commuting-flows-2015/table1.xlsx. The disease surveillance data that support the findings of this study are available from AFHSB, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of AFHSB. Source data are provided with this paper.
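The two forecast-evaluation measures can be sketched as follows; the five-member ensemble is invented, and the floor returned for empty bins is our own choice, not specified in the text:

```python
import math
import numpy as np

def mae(ens, observed):
    """Absolute error of the point (ensemble-mean) prediction."""
    return abs(np.mean(ens) - observed)

def log_score(ens, observed, half_width=0.005):
    """Log of the ensemble probability assigned to the interval of width
    0.01 centered at the observed ILI+ rate (0.005 on each side)."""
    ens = np.asarray(ens)
    in_bin = (ens >= observed - half_width) & (ens <= observed + half_width)
    p = in_bin.mean()
    return math.log(p) if p > 0 else -10.0  # floor for empty bins (our choice)

# Invented five-member ensemble forecast of ILI+ and an observed value.
ens = np.array([0.018, 0.020, 0.021, 0.023, 0.035])
err = mae(ens, observed=0.021)
score = log_score(ens, observed=0.021)
```

Here four of five members fall within ±0.005 of the observation, so the assigned probability is 0.8 and the log score is ln(0.8).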
Code availability The code for the networked forecasting system is deposited in GitHub at https://github.com/SenPei-CU/SurveillanceOptimization. References 1. World Health Organization, Influenza (seasonal). Fact Sheet No. 211, www.who.int/mediacentre/factsheets/fs211/en/index.html (2009). 2. U.S. Department of Health and Human Services, FluSight: Seasonal Influenza Forecasting. Epidemic Prediction Initiative, https://predict.cdc.gov/ (accessed 1 Dec 2020). 3. Shaman, J. & Karspeck, A. Forecasting seasonal outbreaks of influenza. Proc. Natl Acad. Sci. USA 109, 20425–20430 (2012). 4. Tizzoni, M. et al. Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm. BMC Med. 10, 165 (2012). 5. Shaman, J., Karspeck, A., Yang, W., Tamerius, J. & Lipsitch, M. Real-time influenza forecasts during the 2012–2013 season. Nat. Commun. 4, 2837 (2013). 6. Axelsen, J. B., Yaari, R., Grenfell, B. T. & Stone, L. Multiannual forecasting of seasonal influenza dynamics reveals climatic and evolutionary drivers. Proc. Natl Acad. Sci. USA 111, 9538–9542 (2014). 7. Brooks, L. C., Farrow, D. C., Hyun, S., Tibshirani, R. J. & Rosenfeld, R. Flexible modeling of epidemics with an empirical Bayes framework. PLOS Comput. Biol. 11, e1004382 (2015). 8. Ben-Nun, M., Riley, P., Turtle, J., Bacon, D. P. & Riley, S. Forecasting national and regional influenza-like illness for the USA. PLOS Comput. Biol. 15, e1007013 (2019). 9. Du, X., King, A. A., Woods, R. J. & Pascual, M. Evolution-informed forecasting of seasonal influenza A (H3N2). Sci. Transl. Med. 9, eaan5325 (2017). 10. Pei, S. & Shaman, J. Counteracting structural errors in ensemble forecast of influenza outbreaks. Nat. Commun. 8, 925 (2017). 11. Osthus, D., Gattiker, J., Priedhorsky, R. & Del Valle, S. Y. Dynamic Bayesian influenza forecasting in the United States with hierarchical discrepancy. Bayesian Anal. https://doi.org/10.1214/18-BA1117 (2018). 12. Ray, E. L.
& Reich, N. G. Prediction of infectious disease epidemics via weighted density ensembles. PLOS Comput. Biol. 14, e1005910 (2018). 13. Pei, S., Kandula, S., Yang, W. & Shaman, J. Forecasting the spatial transmission of influenza in the United States. Proc. Natl Acad. Sci. USA 115, 2752–2757 (2018). 14. Reich, N. G. et al. A collaborative multiyear, multimodel assessment of seasonal influenza forecasting in the United States. Proc. Natl Acad. Sci. USA 116, 3146–3154 (2019). 15. Biggerstaff, M. et al. Results from the centers for disease control and prevention’s predict the 2013-2014 Influenza Season Challenge. BMC Infect. Dis. 16, 357 (2016). 16. Biggerstaff, M. et al. Results from the second year of a collaborative effort to forecast influenza seasons in the United States. Epidemics 24, 26–33 (2018). 17. McGowan, C. J. et al. Collaborative efforts to forecast seasonal influenza in the United States, 2015-2016. Sci. Rep. 9, 683 (2019). 18. Polgreen, P. M. et al. Optimizing influenza sentinel surveillance at the state level. Am. J. Epidemiol. 170, 1300–1306 (2009). 19. Scarpino, S. V., Dimitrov, N. B. & Meyers, L. A. Optimizing provider recruitment for influenza surveillance networks. PLOS Comput. Biol. 8, e1002472 (2012). 20. Lee, E. C. et al. Deploying digital health data to optimize influenza surveillance at national and local scales. PLOS Comput. Biol. 14, e1006020 (2018). 21. Keeling, M. J. & Rohani, P. Estimating spatial coupling in epidemiological systems: a mechanistic approach. Ecol. Lett. 5, 20–29 (2002). 22. Riley, S. Large-scale spatial-transmission models of infectious disease. Science 316, 1298–1301 (2007). 23. Balcan, D. et al. Multiscale mobility networks and the spatial spreading of infectious diseases. Proc. Natl Acad. Sci. USA 106, 21484–21489 (2009). 24. Belik, V., Geisel, T. & Brockmann, D. Natural human mobility patterns and spatial spread of infectious diseases. Phys. Rev. X 1, 011001 (2011).
25. Colizza, V., Barrat, A., Barthélemy, M. & Vespignani, A. The role of the airline transportation network in the prediction and predictability of global epidemics. Proc. Natl Acad. Sci. USA 103, 2015–2020 (2006). 26. Brockmann, D. & Helbing, D. The hidden geometry of complex, network-driven contagion phenomena. Science 342, 1337–1342 (2013). 27. Wang, L. & Wu, J. T. Characterizing the dynamics underlying global spread of epidemics. Nat. Commun. 9, 218 (2018). 28. Wesolowski, A. et al. Quantifying the impact of human mobility on malaria. Science 338, 267–270 (2012). 29. Wesolowski, A. et al. Impact of human mobility on the emergence of dengue epidemics in Pakistan. Proc. Natl Acad. Sci. USA 112, 11887–11892 (2015). 30. Viboud, C. et al. Synchrony, waves, and spatial hierarchies in the spread of influenza. Science 312, 447–451 (2006). 31. Gog, J. R. et al. Spatial transmission of 2009 pandemic influenza in the US. PLOS Comput. Biol. 10, e1003635 (2014). 32. Charu, V. et al. Human mobility and the spatial transmission of influenza in the United States. PLOS Comput. Biol. 13, e1005382 (2017). 33. Yang, W., Olson, D. R. & Shaman, J. Forecasting influenza outbreaks in boroughs and neighborhoods of New York City. PLOS Comput. Biol. 12, e1005201 (2016). 34. Kramer, S., Pei, S. & Shaman, J. Forecasting influenza in Europe using a metapopulation model incorporating cross-border commuting and air travel. PLOS Comput. Biol. 16, e1008233 (2020). 35. Li, R. et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2). Science 368, 489–493 (2020). 36. Wu, J. T., Leung, K. & Leung, G. M. Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. Lancet 395, 689–697 (2020). 37. Chinazzi, M. et al. The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak.
Science 368, 395–400 (2020). 38. 38. Pei, S., Kandula, S. & Shaman, J. Differential effects of intervention timing on COVID-19 spread in the United States. Sci. Adv. 6, eabd6370 (2020). 39. 39. Lu, F. S., Hattab, M. W., Clemente, C. L., Biggerstaff, M. & Santillana, M. Improved state-level influenza nowcasting in the United States leveraging Internet-based data and network approaches. Nat. Commun. 10, 147 (2019). 40. 40. Scarpino, S. V., Meyers, L. A. & Johansson, M. A. Design strategies for efficient Arbovirus Surveillance. Emerg. Infect. Dis. 23, 642–644 (2017). 41. 41. Das, Am & Kempe, D. Algorithms for subset selection in linear regression. In Proc. 40th Annual ACM Symposium on Theory of computing 45–54 (ACM Press, 2008). https://doi.org/10.1145/1374376.1374384. 42. 42. Herrera, J. L., Srinivasan, R., Brownstein, J. S., Galvani, A. P. & Meyers, L. A. Disease surveillance on complex social networks. PLOS Comput. Biol. 12, e1004928 (2016). 43. 43. Santillana, M. et al. Combining search, social media, and traditional data sources to improve influenza surveillance. PLOS Comput. Biol. 11, e1004513 (2015). 44. 44. Ertem, Z., Raymond, D. & Meyers, L. A. Optimal multi-source forecasting of seasonal influenza. PLOS Comput. Biol. 14, e1006236 (2018). 45. 45. Goldstein, E., Viboud, C., Charu, V. & Lipsitch, M. Improving the estimation of influenza-related mortality over a seasonal baseline. Epidemiology 23, 829–838 (2012). 46. 46. Palmer, T. N. Predicting uncertainty in forecasts of weather and climate. Rep. Prog. Phys. 63, 71–116 (2002). 47. 47. Pei, S., Cane, M. A. & Shaman, J. Predictability in process-based ensemble forecast of influenza. PLOS Comput. Biol. 15, e1006783 (2019). 48. 48. Saad, Y. Numerical Methods for Large Eigenvalue Problems Revised edition (SIAM, Philadelphia, 2011). 49. 49. Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. Optimization by simulated annealing. Science 220, 671–680 (1983). 50. 50. The U.S. 
Centers for Disease Control and Prevention, FluView Interactive, www.cdc.gov/flu/weekly/fluviewinteractive.htm (accessed on Nov 18, 2019). 51. 51. Shaman, J. & Kohn, M. Absolute humidity modulates influenza survival, transmission, and seasonality. Proc. Natl Acad. Sci. USA. 106, 3243–3248 (2009). 52. 52. Dalziel, B. D. et al. Urbanization and humidity shape the intensity of influenza epidemics in US cities. Science 362, 75–79 (2018). 53. 53. Gelman, A. et al. Bayesian Data Analysis (Chapman and Hall/CRC, Boca Raton, FL, 2013). 54. 54. Arulampalam, M. S., Maskell, S., Gordon, N. & Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 50, 174–188 (2002). 55. 55. Evensen, G. Data Assimilation: The Ensemble Kalman Filter (Springer Science & Business Media, Heidelberg, 2009). 56. 56. Beaumont, M. A., Zhang, W. & Balding, D. J. Approximate Bayesian computation in population genetics. Genetics 162, 2025–2035 (2002). 57. 57. Cosgrove, B. A. et al. Real-time and retrospective forcing in the North American Land Data Assimilation System (NLDAS) project. J. Geophys. Res. 108, 8842 (2003). 58. 58. United States Census Bureau, County to county commuting data. www.census.gov/topics/employment/commuting.html (accessed Nov 18, 2019). 59. 59. Anderson, J. L. An ensemble adjustment Kalman filter for data assimilation. Mon. Weather Rev. 129, 2884–2903 (2001). 60. 60. Kandula, S. et al. Evaluation of mechanistic and statistical methods in forecasting influenza-like illness. J. R. Soc. Interface 15, 20180174 (2018). 61. 61. DeFelice, N. B., Little, E., Campbell, S. R. & Shaman, J. Ensemble forecast of human West Nile virus cases and mosquito infection rates. Nat. Commum. 8, 14592 (2017). 62. 62. Pei, S., Morone, F., Liljeros, F., Makse, H. & Shaman, J. Inference and control of the nosocomial transmission of methicillin-resistant Staphylococcus aureus. eLife 7, e40977 (2018). 63. 63. Kandula, S., Pei, S. & Shaman, J. 
Improved forecasts of influenza-associated hospitalization rates with Google Search Trends. J. R. Soc. Interface 16, 20190080 (2019). 64. 64. Bomfim, R. et al. Predicting dengue outbreaks at neighbourhood level using human mobility in urban areas. J. R. Soc. Interface 17, 20200691 (2020). 65. 65. Pei, S. & Shaman, J. Aggregating forecasts of multiple respiratory pathogens supports more accurate forecasting of influenza-like illness. PLOS Comput. Biol. 16, 1008301 (2020). 66. 66. Wilcoxon, F. Individual comparisons by ranking methods. Biometrics Bull. 1, 80–83 (1945). Acknowledgements This work was supported by US National Institutes of Health grant GM110748, Defense Advanced Research Projects Agency contract W911NF-16-2-0035, and a gift from the Morris-Singer Foundation. Author information Authors Contributions S.P. and J.S. designed the research; S.P. and X.T. performed the experiments and analysis; P.L. curated the data; S.P., X.T., P.L., and J.S. interpreted the results and wrote the manuscript. Corresponding authors Correspondence to Sen Pei or Jeffrey Shaman. Ethics declarations Competing interests J.S. and Columbia University disclose partial ownership of SK Analytics. J.S. discloses consulting for BNI. All other authors declare no competing interests. Peer review information Nature Communications thanks Jonathan Dushoff and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Rights and permissions Reprints and Permissions Pei, S., Teng, X., Lewis, P. et al. Optimizing respiratory virus surveillance networks using uncertainty propagation. Nat Commun 12, 222 (2021). https://doi.org/10.1038/s41467-020-20399-3 • Accepted: • Published:
{}
To analyze how Arizona workers ages 16 or older travel to work, the percentage of workers using carpool, private vehicle (alone), and public transportation was collected. On a histogram, the frequency is measured by the area of the bar. Frequency polygons are analogous to line graphs, and just as line graphs make continuous data visually easy to interpret, so too do frequency polygons. The relative frequency bar graph looks exactly the same as the frequency bar graph. Eyeglassomatic manufactures eyeglasses for different retailers. There are spreadsheet software packages that will create most of these graphs, and it is better to look at them to see what can be done. Find the median values. Another type of graph for qualitative data is a pie chart. Below is a frequency table and charts of the results: out of a total of 128 responses, 41% (or 52/128) of students reported that Batman would win the battle, followed by Iron Man with 27%, Captain America with 19%, and Superman with 13%. Bar charts may be needed to compare data. A relative or proportional comparison is usually more useful than a comparison of absolute frequencies. Learn how to create a box plot. A bar graph (or bar chart) is perhaps the most common statistical data display used by the media. There should be a scaling on the frequency axis, and the categories should be listed on the category axis. As an example, if you ask people what their favorite national park is and tell them to pick their top three choices, then the total number of answers can add up to more than 100% of the people involved. Learn how to analyze and interpret variation in data by using stem and leaf plots and histograms. Draw a pie chart of the data in Example 2.1.1. In This Part: Relative Frequency Groups: Small groups can present their bar graphs to the whole group at this time.
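Tallying a frequency table like the ones described above can be done directly from the raw category list. Here is a minimal sketch (Python assumed), using an abbreviated version of the car-type data that appears in full later in this section:

```python
from collections import Counter

# Abbreviated car-type responses (the full 50-car list appears later in this section)
cars = ["Ford", "Chevy", "Honda", "Toyota", "Toyota", "Nissan", "Kia",
        "Nissan", "Chevy", "Toyota", "Honda", "Chevy", "Toyota", "Nissan"]

freq = Counter(cars)                                     # frequency of each category
total = sum(freq.values())                               # number of observations
rel_freq = {car: n / total for car, n in freq.items()}   # relative frequency column

# The relative frequency column should add up to 1 (up to round-off).
assert abs(sum(rel_freq.values()) - 1.0) < 1e-9
print(freq.most_common(3))
```

`most_common` already returns categories sorted from highest to lowest frequency, which is exactly the ordering a Pareto chart uses.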
Notice from the graph that Toyota and Chevy are the more popular cars, with Nissan not far behind. A third type of qualitative data graph is a Pareto chart, which is just a bar chart with the bars sorted so the highest frequencies are on the left. Accordingly, the total percentage may not sum to exactly 100%. Now draw the pie chart using a compass, protractor, and straight edge. There are several different graphs that are used for qualitative data. Now just count how many of each type of car there are. Cumulative frequency graph: plot the cumulative frequency curve. A listing of data is too hard to look at and analyze, so you need to summarize it. A histogram represents the frequency distribution of continuous variables. Graph 2.1.3: Pie Chart for Type of Car Data. An alternative is to use relative frequency, or frequency as a proportion of the whole set. In that case it is easier to make a category called other for the ones with low values. Frequency density: the major difference between a bar graph and a histogram is the way in which the frequencies of each class or interval are represented. The line plot is a useful graph for examining small sets of data. All you have to do to find the angle is multiply the relative frequency by 360 degrees. You can also use MS Excel or Google Sheets to create a bar graph from the frequency table. He seems to spend less time on strength exercises on a given day. It really is a personal preference and also what information you are trying to address. Graph 2.1.1: Bar Graph for Type of Car Data. Table 2.1.5: Data of Travel Mode for Arizona Workers, Table 2.1.6: Data of Number of Deaths Due to CO Poisoning, Table 2.1.7: Data of Household Heating Sources, Graph 2.1.6: Multiple Bar Chart for Contraceptive Types.
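The angle rule just stated (angle = relative frequency × 360 degrees) can be checked with a short computation. A minimal sketch, Python assumed; the Ford, Chevy, and Honda counts are stated in this section, while the Toyota, Nissan, and "other" counts are tallied from the 50-car list given later:

```python
# Pie-chart slice angles: relative frequency times 360 degrees.
freq = {"Ford": 5, "Chevy": 12, "Honda": 6, "Toyota": 12, "Nissan": 10, "Other": 5}
total = sum(freq.values())  # 50 observations

angles = {car: n / total * 360 for car, n in freq.items()}

print(round(angles["Ford"], 1))  # 5/50 of the circle = 36.0 degrees
# The slices must complete the full 360-degree circle.
assert abs(sum(angles.values()) - 360.0) < 1e-9
```

With a protractor, these angles reproduce the pie chart in Graph 2.1.3.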
A frequency distribution in statistics provides the number of occurrences (frequency) of the distinct values in a list, table, or graphical representation, within a given period of time or interval. Grouped and ungrouped are the two types of frequency distribution. Explore the concept of the mean and how variation in data can be described relative to the mean. Bar graphs or charts consist of the frequencies on one axis and the categories on the other axis. One advantage to using relative frequencies is that the total of all relative frequencies in a data set should be 1 (or very close to 1, depending on round-off error), or 100%. If you use technology, there is no need to compute the relative frequencies or the angles by hand. A bar chart or bar graph presents categorical data with rectangular bars whose heights or lengths are proportional to the values that they represent. Find the inter-quartile range; learn how to draw a cumulative frequency curve for grouped data and how to find the median and quartiles from the cumulative frequency diagram, with video lessons, examples, and step-by-step solutions. Investigate some basic concepts of probability and the relationship between statistics and probability. Ford seems to be the least liked of the cars you can identify from the graph, though each car in the other category was chosen less often than a Ford. The following interactive activity (the Flash interactive has been disabled) allows you to review and compare the various representations of data we have explored, both graphical and tabular. The graph will have the same shape with either label.
2.2: Bar Graphs, Pareto Charts and Pie Charts. 2.3: Stem-and-Leaf Graphs (Stemplots), Line Graphs, and Bar Graphs. Suppose you have the following data for which type of car students at a college drive. In this video segment, meteorologist Kim Martucci demonstrates how she solves the statistical problem of predicting the weather. Understand numerical and graphic representations of the minimum, the maximum, the median, and quartiles. To find the percentage, multiply the decimal by 100 to obtain 29.4%. A frequency table is a summary of the data with counts of how often a data value (or category) occurs. As you can see from the graph, Toyota and Chevy are more popular, while the cars in the other category are liked the least. The Wii system keeps track of how many minutes you spend on each of the exercises every day. The percentages are given in, The percentages of people who use certain contraceptives in Central American countries are displayed in. It’s especially helpful as a device for learning basic statistical ideas. Frequency table: the frequency of a data value is equal to the number of times that the value occurs, and a frequency table arranges data values in order from least to greatest with their corresponding frequencies. There are several days when the amount of exercise in the different categories is almost equal.
Statistical analysis allows us to organize data in different ways so that we can draw out potential patterns in the variation and give better answers to the questions posed. To decrease round-off error, we would have to increase the number of decimal places used when rounding. Frequency distribution table using a pivot table: a spreadsheet program like Excel can make both of them. Do your graphs look the same? Although the vertical axis of both graphs is discrete, the horizontal axis of a bar graph is categorical while that of a histogram is numerical. The usefulness of a multiple bar graph is the ability to compare several different categories over another variable; in Example 2.1.4 the variable would be time. But using a pivot table to create an Excel frequency … Graph 2.1.4: Pareto Chart for Type of Car Data. Notice that the relative frequencies expressed as fractions add up to 17/17, which equals 1. Find the upper and lower quartiles. Investigate various approaches for summarizing variation in data, and learn how dividing data into groups can help provide other types of answers to statistical questions. Continue learning about organizing and grouping data in different graphs and tables. Thus, a histogram is a graphical representation of a frequency distribution with class intervals or attributes as the base and frequency as the height. Pie chart or circle graph. Learning Math: Data Analysis, Statistics, and Probability. The total of the relative frequencies expressed as decimals, however, may not always be exactly 1 due to round-off error; they will occasionally add to 1.002 or 0.997, for example, or something very close to 1. A bar graph is a pictorial representation of data that uses bars to compare different categories of data. Clearly, we need a better way to summarize the data.
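The round-off behavior described above is easy to reproduce. A minimal sketch, Python assumed; the five counts below are hypothetical except that the text tells us the 17 raisin boxes split so that one count occurs 5 times:

```python
# Relative frequencies rounded to three decimal places need not sum to exactly 1.
counts = [5, 4, 3, 3, 2]   # hypothetical frequencies for five raisin counts, totaling 17
total = sum(counts)

exact = [c / total for c in counts]                 # fractions sum to 17/17 = 1
rounded = [round(c / total, 3) for c in counts]     # e.g. 5/17 -> 0.294

assert abs(sum(exact) - 1.0) < 1e-12
print(round(sum(rounded), 3))  # 0.999 here, not exactly 1 -- the round-off error in the text
```

Carrying more decimal places before rounding shrinks this gap, which is exactly the remedy the text suggests.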
In this case, the relative frequency of the count 5 is 5/17, which can also be written in decimal form as .294 (rounded to three digits). Bar charts show similar information. Since raw numbers are not as useful for telling other people about the data, it is better to create a third column that gives the relative frequency of each category. How can we analyze the data? Make a bar chart and a pie chart of this data. Draw a bar graph of the data in Example 2.1.1. In this way, a relative frequency bar graph allows you to think of the data in terms of the whole set, in contrast to a frequency bar graph, which only provides you with individual counts. A frequency distribution lists each category of data and the number of occurrences for each category. There are many other types of graphs that can be used on qualitative data. Bar graphs are one of the means of data handling in statistics. Variables, bias, and random sampling are introduced. Remember that the number of dots over each value on the horizontal axis corresponds to the frequency of that data value. Now draw a rectangle over each value, with a height corresponding to the frequency of that value. Now remove the dots, and add a vertical scale that indicates the frequency of each value on the horizontal scale. The frequency bar graph contains the same information as the line plot for the counts of raisin boxes, but it doesn’t indicate the raisin count for each individual box. To determine the relative frequency for each class we first add the total number of data points: 7 + 9 + 18 + 12 + 4 = 50. Examine how to collect and compare data from observational and experimental studies, and learn how to set up your own experimental studies. Technology like MS Excel or Google Sheets will create pie charts very quickly.
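The class computation above (7 + 9 + 18 + 12 + 4 = 50) continues by dividing each class frequency by the total; a minimal sketch, Python assumed:

```python
# Relative frequency of each class = class frequency / total number of data points.
class_freqs = [7, 9, 18, 12, 4]
total = sum(class_freqs)
assert total == 50  # 7 + 9 + 18 + 12 + 4 = 50, as in the text

rel = [f / total for f in class_freqs]
print([round(r, 2) for r in rel])  # [0.14, 0.18, 0.36, 0.24, 0.08]
assert abs(sum(rel) - 1.0) < 1e-9  # the relative frequencies total 1
```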
The relative frequencies expressed as decimals also sum to 1, and the relative frequencies expressed as percentages add up to 100%. Eyeglassomatic manufactures eyeglasses for different retailers. This means that 29.4% of the raisin boxes contain 28 raisins. They test to see how many defective lenses they made during the time period of January 1 to March 31. Explore different ways of representing, analyzing, and interpreting data, including line plots, frequency tables, cumulative and relative frequency tables, and bar graphs. It appears that Dylan spends more time on yoga exercises than on any other exercises on any given day. It depends on your data as to which may be useful. You can also draw a bar graph using relative frequency on the vertical axis. Watch this segment after you have completed Session 2. Learn about random events, games of chance, mathematical and experimental probability, tree diagrams, and the binomial probability model. You now have a relative frequency distribution: Table 2.1.2: Relative Frequency Table for Type of Car Data. In the raisin example, the height of each bar is the relative frequency of the corresponding raisin count, expressed as a percentage. Create a bar chart and pie chart of the data in. A bar chart is a great way to display categorical variables on the x-axis. The first step for either graph is to make a frequency or relative frequency table. It’s important to note that several kinds of answers can be given when there is variation in your data. This type of graph is … The collection, presentation, analysis, organization, and interpretation of observations of data are known as statistics. Bar Graphs, Frequency Tables, and Histograms. The advantage of Pareto charts is that you can visually see the answers from the most popular down to the least popular.
Although the frequency bar graph is useful in many ways, it, like the line plot, can be an awkward graph for large data sets, since the vertical axis corresponds to the frequency of each data value. A bar graph breaks categorical data down by group, and represents these amounts by using bars of different lengths. Create a bar chart and pie chart of this data. A pie chart and bar chart of these results are shown below; officially, we call this a frequency distribution. It is important to note that the sizes of the bars remain the same whether you use frequencies or relative frequencies. That’s a lot of dots for data sets with hundreds or thousands of values! In this case it is relatively easy; just use the car type. The height of each bar or rectangle tells us the frequency for the corresponding raisin count. Learn how to describe variation in estimates, and the effect of sample size on an estimate's accuracy. The second part of this segment begins approximately 24 minutes and 48 seconds after the Annenberg Media logo. Some answers may be stated as intervals, and some answers, like the mode and the median, use a specific value to represent all the different data values. As an example, for the Ford category: relative frequency $$= \frac{5}{50} = 0.10$$. Create tables and graphs from given data. Use underline '_' for space in data labels: 'name_1' will be viewed as 'name 1'. The continuous data takes the form of class intervals.
The relative frequency graph and the frequency graph should look the same, except for the scaling on the frequency axis. Explore scatter plots, the least squares line, and modeling linear relationships. Here are some examples using fabricated data. On a bar graph, the frequency is the height of the bar. Here is the Pareto chart for the data in Example 2.1.1. Imagine the sheet of paper you’d need for the economy-size box of raisins! The height of each bar or rectangle tells us the frequency for the corresponding raisin count. Table 2.1.4: Data for Eyeglassomatic. Here is a frequency table for the raisin count, with the corresponding relative frequencies written as fractions, decimals, and percentages; complete the table above. There are 360 degrees in a full circle. Then you draw rectangles for each category with a height (if frequency is on the vertical axis) or length (if frequency is on the horizontal axis) that is equal to the frequency. In data analysis, bar graphs are used to measure the frequency of categorical data, while histograms measure ordinal and quantitative (interval and ratio) data. Conversely, a bar graph is a diagrammatic comparison of discrete variables. In the Wii Fit game, you can do four different types of exercises: yoga, strength, aerobic, and balance. Reverse order X: option to reverse the order of the categories of the first variable. Bar charts and frequency distributions. However, there are several cars that have only one entry in the list. Remember that 180 degrees is half a circle and 90 degrees is a quarter of a circle. This is just the frequency divided by the total. State any findings you see from the graphs. Using bar charts, pie charts and frequency diagrams can make information easier to digest.
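A Pareto chart is just a bar chart with its categories sorted by descending frequency, as described above. A minimal sketch, Python assumed (the Ford, Chevy, and Honda counts are stated in the text; the remaining counts are tallied from the 50-car list given later in this section):

```python
# Pareto ordering: sort categories from highest to lowest frequency.
freq = {"Ford": 5, "Chevy": 12, "Honda": 6, "Toyota": 12, "Nissan": 10, "Other": 5}

pareto = sorted(freq.items(), key=lambda item: item[1], reverse=True)
print([car for car, _ in pareto])  # most popular categories come first
assert pareto[0][1] == 12  # Chevy and Toyota tie for the most popular
```

Plotting the bars in this order gives the left-to-right, highest-to-lowest layout of Graph 2.1.4.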
Learn how to select a random sample and use it to estimate characteristics of an entire population. In statistics, it can be difficult to provide a specific answer to a question because of the variation present in the data. Of the cars that you can determine from the graph, Ford is liked less than the others. Graph 2.1.5: Multiple Bar Chart for Wii Fit Data. The height of the bar is the frequency. The histogram (like the stemplot) can give you the shape of the data, the center, and the spread of the data. A frequency plot is a graph that shows the pattern in a set of data by plotting how often particular values of a measure occur. Put the frequency on the vertical axis and the category on the horizontal axis. • Create and interpret frequency distribution tables, bar graphs, histograms, and line graphs • Explain when to use a bar graph, histogram, and line graph • Enter data into SPSS and generate frequency distribution tables and graphs. Consequently, the vertical axis would have to be scaled according to the largest frequency. Next, we divide each frequency by … For large data sets, some data values occur many times and have a high frequency. Explore how the concepts developed in this course can be applied through case studies of a grade 3-5 teacher, Suzanne L'Esperance, and grade 6-8 teacher, Paul Snowden, both former course participants who have adapted their new knowledge to their classrooms. Concepts include fair and unfair allocations, and how to measure variation about the mean. You can, however, replace a line plot with a frequency bar graph. For the continuous (numeric) variables, see the page Histograms, Descriptive Stats and Stem and Leaf. The first step for either graph is to make a frequency or relative frequency table. For example, there are 5 Fords, 12 Chevys, and 6 Hondas. A simple bar chart may look like this. Let’s look at the transition from line plot to frequency bar graph. In a HISTOGRAM, the INDEPENDENT VARIABLE is found on the _____? A. Abscissa B. Ordinate C. Neither A nor B D. Both A and B
First you need to decide the categories. Pie charts are useful for comparing sizes of categories. A relative frequency bar graph looks just like a frequency bar graph except that the units on the vertical axis are expressed as percentages. It will help you learn other actions in descriptive statistics, such as cross tabulation, finding the mean or the standard deviation, or creating a box plot, bar graph, or histogram. We start with the line plot we’ve been using. A frequency table is a summary of the data with counts of how often a data value (or category) occurs. Learn about relative and cumulative frequency. The relative frequency is equal to the frequency for an observed value of the data divided by the total number of data values in the sample. Ford, Chevy, Honda, Toyota, Toyota, Nissan, Kia, Nissan, Chevy, Toyota, Honda, Chevy, Toyota, Nissan, Ford, Toyota, Nissan, Mercedes, Chevy, Ford, Nissan, Toyota, Nissan, Ford, Chevy, Toyota, Nissan, Honda, Porsche, Hyundai, Chevy, Chevy, Honda, Toyota, Chevy, Ford, Nissan, Toyota, Chevy, Honda, Chevy, Saturn, Toyota, Chevy, Chevy, Nissan, Honda, Toyota, Toyota, Nissan. VCE Further Maths Tutorials. Composite bar charts. Data Organization and Representation > 2.5 Part E: Bar Graphs and Relative Frequencies (30 Minutes). Example $$\PageIndex{4}$$: multiple bar graph. HOW TO BE SUCCESSFUL IN THIS COURSE. This can be written as a decimal, fraction, or percent. Example: Companies.jmp (Help > Sample Data). Bar charts and frequency distributions: use to display the distribution of categorical (nominal or ordinal) variables. Example $$\PageIndex{3}$$: drawing a pie chart. People in Bangladesh were asked to state what type of birth control method they use. Relative frequency is just the percentage as a decimal. If you are working with actual raisins, draw a frequency bar graph and a relative frequency bar graph with your data.
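Cumulative frequency, mentioned above, is simply a running total of the class frequencies. A minimal sketch, Python assumed, reusing the class counts (7, 9, 18, 12, 4) from earlier in this section:

```python
from itertools import accumulate

# Cumulative frequency: a running total of the class frequencies.
class_freqs = [7, 9, 18, 12, 4]          # class counts used earlier in this section
cum = list(accumulate(class_freqs))

print(cum)  # [7, 16, 34, 46, 50]
assert cum[-1] == sum(class_freqs)  # the last cumulative value equals the total count
```

Plotting these running totals against the class upper boundaries gives the cumulative frequency curve used to read off the median and quartiles.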
For example, the statement “Five of the 17 boxes have 28 raisins” is more useful than the statement “Five boxes have 28 raisins.” Data is represented in many different forms. A pie chart is where you have a circle and you divide pieces of the circle into pie shapes that are proportional to the size of the relative frequency. The first one counts the number of occurrences between groups. Sometimes, we really want to know the frequency of a particular category in referenc… Data is a collection of numbers or values and it must be organized for it to be useful. This allows a person to interpret the data with a little more ease. Unlike a bar graph that depicts discrete data, histograms depict continuous data. Remember, qualitative data are words describing a characteristic of the individual. The statistical data can be represented by various methods such as tables, bar graphs, pie charts, histograms, frequency polygons, etc. The frequency bar graph contains the same information as the line plot for the counts of raisin boxes, but it doesn’t indicate the raisin count for each individual box. After creating a frequency distribution table, you might like to make a bar graph or a pie chart using the Data Graphs (Bar, Line and Pie) page. This is especially useful in business applications, where you want to know what services your customers like the most, what processes result in more injuries, which issues employees find more important, and other types of questions like these. Learn how to use intervals to describe variation in data. How are they different? Core (Data Analysis) Tutorial 4 - Frequency Histograms and Bar Charts. Let's suppose you give a survey concerning favorite color, and the data you collect looks something like the table below. There should be labels on each axis and a title for the graph.
The next transition in the representation is to replace frequencies with relative frequencies. This can be put in a frequency distribution: Table 2.1.1: Frequency Table for Type of Car Data. A histogram is another kind of graph that uses bars in its display. What are Kim Martucci’s strategies for predicting the weather? What that means is that you can use a histogram with different interval or class widths to represent data with varying densities. Technology is preferred. It uses either the number of individuals in each group (also called the frequency) or the percentage in each group (called the relative frequency). Consider statistics as a problem-solving process and examine its four components: asking questions, collecting appropriate data, analyzing the data, and interpreting the results. In Connecticut, households use gas, fuel oil, or electricity as a heating source. Give decimals to three decimal places and percentages to the nearest tenth of a percent. Example $$\PageIndex{2}$$: drawing a bar graph. Learning Math: Data Analysis, Statistics, and Probability. Grind means that they ground the lenses and put them in frames, multicoat means that they put tinting or scratch-resistance coatings on lenses and then put them in frames, assemble means that they receive frames and lenses from other sources and put them together, make frames means that they make the frames and put in lenses from other sources, receive finished means that they received glasses from another source, and unknown means they do not know where the lenses came from. Now that you have the frequency and relative frequency table, it would be good to display this data using a graph. This option is applicable to the Simple and Clustered column charts only. All of the rectangles should be the same width, and there should be equal-width gaps between each bar. These graphs include bar graphs, Pareto charts, and pie charts.
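The remark above about histograms with different class widths is the idea of frequency density: when widths differ, the bar height should be the frequency divided by the class width, so that area (not height) represents frequency, consistent with the earlier statement that on a histogram frequency is measured by the area of the bar. A minimal sketch, Python assumed, with made-up class boundaries:

```python
# Frequency density = frequency / class width; the AREA of each bar is then its frequency.
classes = [(0, 10, 8), (10, 20, 12), (20, 40, 10)]  # (lower, upper, frequency); unequal widths

densities = []
for lower, upper, f in classes:
    width = upper - lower
    d = f / width
    densities.append(d)
    assert abs(d * width - f) < 1e-9  # bar area recovers the class frequency

print(densities)  # [0.8, 1.2, 0.5]
```

Note that the wider 20-40 class gets a shorter bar than its raw count would suggest, which is the point: equal counts spread over wider intervals are less dense.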
The total of the frequency column should be the number of observations in the data. Bar charts can be horizontal or vertical; in Excel, the vertical version is referred to as a column chart. Then just draw a box above each category whose height is the frequency. The relative frequency column should add up to 1.00. How are they similar to your strategies for counting raisins? The Stacked column graph always shows counts; the 100% Stacked column graph always shows percentages. Bar graph: a bar graph is a chart that plots data with rectangular bars representing the total amount of data for that category. However, pie charts are best when you only have a few categories and the data can be expressed as a percentage. How to Make a Frequency Table and Bar Graph, by Amanda Craft and Betsy Lawson. Discuss any indication you can infer from the graph. Pie charts and bar graphs are the most common ways of displaying qualitative data. Graph 2.1.2: Relative Frequency Bar Graph for Type of Car Data. To construct a frequency polygon, first examine the data and decide on the number of intervals, or class intervals, to use on the x-axis and y-axis. It really doesn’t matter which one you use.
Categories is almost equal graphs from given data % Progress with varying densities scatter plots the. Ones with low values allows a person to interpret the data with varying densities type of data. Second part of this segment on the frequency is the height of the data in example 2.1.1 noted LibreTexts! A given day amounts by using bars of different lengths watch this segment the! Numerical and graphic representations of the bars remain the same whether you.. More useful than a comparison of absolute frequencies data in different graphs and.... For large data sets with hundreds or thousands of values % of the variation present in the Wii system track. Is short and concise C. it allows ease of comprehension D. data is represented in different! % Progress with hundreds or thousands of values display categorical variables in the representation is to make a bar for! You use technology, there are several different graphs that can be put in a frequency bar graph the! Of using a Pivot table to frequency bar graph statistics an Excel frequency … bar graphs Pareto. Excel or Google Sheets will create pie charts very quickly ( or category ) occurs table below association co-variation! Draw a bar graph, plot the cumulative frequency table is a pie chart this. And understand the concepts of association and co-variation between two quantitative variables Analysis, organization, and these! The whole group at this time 2020 Annenberg Foundation: yoga, strength aerobic!, with Nissan not far behind relationship between statistics and probability to which may useful! Categories is almost equal to 100 % from line plot with a little due to rounding errors for over... Learning Math: data Analysis chart using a PICTOGRAPH: a bar graph ( or category ) occurs example Ford. Use it to estimate characteristics of an entire population categories is almost equal people in Bangladesh were to... And 90 degrees is a collection of numbers or values and it must be organized it! 
That can be written as a multiple bar chart for type of graph uses! The 100 % Stacked column graph always shows percentages appears that Dylan spends more time strength! Looks just like a frequency table, it can be used on qualitative data these amounts by using and! On each of the data in example 2.1.1 session video approximately 23 minutes and seconds. Segment, meteorologist Kim Martucci ’ s important to note that the relative frequency table C. chart... Sets, some data values occur many times and have a high frequency a. Axis are expressed as percentages add up to 1.00 a relative or proportional comparison is usually more than! Just like a frequency table you have the frequency axis and the binomial probability model C. it allows of... For example, there are several days when the amount of exercise in the different categories is almost equal a... Plots data with varying densities used when rounding dots for data sets, some data occur! When you want to compare different categories is almost equal whole group at time. Problem of predicting the weather lot of dots for data sets with hundreds or of... S important to note that the sizes of categories, there is a diagrammatic comparison of absolute.... On one axis and the data in example 2.1.1 is variation in data by using Stem and Leaf 3. Is that you have the following graph is to use relative frequency bar graph maker online pie! Example for Ford category: relative frequency table comparing sizes of categories a compass, protractor, and there be! By 360 degrees for the ones with low values the sizes of the mean LibreFest. Co-Variation between two quantitative variables these types known as a device for learning statistical. Squares line, and modeling linear relationships or thousands of values how strong in your memory concept... Missed the LibreFest that depicts discrete data, Histograms depict continuous data takes the form of class intervals to.
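The frequency and relative-frequency computations described above are easy to carry out in code. Below is a minimal Python sketch; the car-type data here are made up for illustration, not taken from Table 2.1.1:

```python
from collections import Counter

# hypothetical "type of car" data
cars = ["Ford", "Chevy", "Ford", "Nissan", "Ford", "Nissan", "Chevy", "Ford"]

freq = Counter(cars)              # frequency: count of each category
n = sum(freq.values())            # total number of observations
rel = {car: count / n for car, count in freq.items()}  # relative frequency

for car in freq:
    print(f"{car}: frequency={freq[car]}, relative={rel[car]:.3f}")

# sanity checks from the text: frequencies sum to n, relative frequencies to 1
assert sum(freq.values()) == n
assert abs(sum(rel.values()) - 1.0) < 1e-9
```

The relative frequencies are exactly the heights you would use for a relative frequency bar graph, and multiplying each by 360 gives the pie-chart angles.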
# Proper and flat morphism implies finitely presented? I've been reading the Deligne-Mumford construction of the moduli of curves of a given genus, and I have some questions about the article http://www.numdam.org/article/PMIHES_1969__36__75_0.pdf 1) When the authors talk about a scheme, what are they referring to? In EGA, a scheme is what we call a separated scheme, so I don't know whether they are working in the category of separated schemes or just schemes (in our terminology). 2) My second question is about Definition 1.1. They say that a stable curve is a proper flat morphism of schemes $$f:X\rightarrow S$$ whose geometric fibers are reduced, connected, 1-dimensional schemes such that: • $$X_{s}$$ has only ordinary double points; • if $$E$$ is a non-singular rational component of $$X_{s}$$, then $$E$$ meets the other components of $$X_{s}$$ in more than 2 points; • $$\rm{dim}\rm{H}^{1}(\mathcal{O}_{X_{s}})=g$$. In general, a relative curve is defined as a flat, finitely presented morphism of schemes $$X\rightarrow S$$ of relative dimension 1. My question is whether proper + flat in this particular case implies finitely presented. Is the same true if $$f:X\rightarrow S$$ is a proper and flat morphism whose geometric fibers are complete integral algebraic curves of arithmetic genus $$g$$?
Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed # A Case Study of Smart Grid Station in Guri Branch Office of KEPCO Version 1 : Received: 10 May 2018 / Approved: 10 May 2018 / Online: 10 May 2018 (08:47:19 CEST) How to cite: Whang, J.; Hwang, W.; Yoo, Y.; Jang, G. A Case Study of Smart Grid Station in Guri Branch Office of KEPCO. Preprints 2018, 2018050161 (doi: 10.20944/preprints201805.0161.v1). ## Abstract Climate change and global warming are becoming important problems around the globe. To mitigate these environmental problems, many countries are trying to reduce their emissions of greenhouse gases (GHG) and manage their energy consumption. In Korea, the Korea Electric Power Corporation (KEPCO) introduced Smart Grid (SG) technologies to one of its branch offices in 2014. This was the first demonstration of a smart grid on a building, called a Smart Grid Station. This paper evaluates the achievements of the Smart Grid Station (SGS) against its initial targets in three respects: peak reduction, power consumption reduction, and electricity fee savings. The authors analyzed the achievements by comparing the data of 2015 with the data of 2014. Through this evaluation, the authors studied the case, demonstrated the advantages of the SGS, and identified the requirements for improving the system and the direction it should take. ## Subject Areas smart grid; Smart Grid Station; renewable energy sources; energy management system
### astSetRefPos Set the reference position in a specified celestial coordinate system #### Description: This function sets the reference position (see attributes RefRA and RefDec) using axis values (in radians) supplied within the celestial coordinate system represented by a supplied SkyFrame. #### Synopsis void astSetRefPos( AstSpecFrame $\ast$this, AstSkyFrame $\ast$frm, double lon, double lat ) #### Parameters: ##### this Pointer to the SpecFrame. ##### frm Pointer to the SkyFrame which defines the celestial coordinate system in which the longitude and latitude values are supplied. If NULL is supplied, then the supplied longitude and latitude values are assumed to be FK5 J2000 RA and Dec values. ##### lon The longitude of the reference point, in the coordinate system represented by the supplied SkyFrame (radians). ##### lat The latitude of the reference point, in the coordinate system represented by the supplied SkyFrame (radians).
## Mangoldt Function The Mangoldt function is defined by $$\Lambda(n) \equiv \begin{cases} \ln p & \text{if } n = p^k \text{ for } p \text{ a prime and } k \ge 1, \\ 0 & \text{otherwise.} \end{cases} \tag{1}$$ The function defined by (1) is also given by $\Lambda(n) = \ln([1, 2, \ldots, n]/[1, 2, \ldots, n-1])$, where $[1, 2, \ldots, n]$ denotes the Least Common Multiple of the integers from 1 to $n$. The first few values of $e^{\Lambda(n)}$ for $n = 1$, 2, ..., plotted above, are 1, 2, 3, 2, 5, 1, 7, 2, ... (Sloane's A014963). The Mangoldt function is related to the Riemann Zeta Function $\zeta(s)$ by $$\frac{\zeta'(s)}{\zeta(s)} = -\sum_{n=1}^{\infty} \frac{\Lambda(n)}{n^s}, \tag{2}$$ where $\Re[s] > 1$. The Summatory Mangoldt function, illustrated above, is defined by $$\psi(x) \equiv \sum_{n \le x} \Lambda(n), \tag{3}$$ where $\Lambda(n)$ is the Mangoldt Function. This has the explicit formula $$\psi(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho} - \ln(2\pi) - \tfrac{1}{2}\ln(1 - x^{-2}), \tag{4}$$ where the second Sum is over all complex zeros $\rho$ of the Riemann Zeta Function and is interpreted as $$\lim_{T \to \infty} \sum_{|\Im[\rho]| < T} \frac{x^{\rho}}{\rho}. \tag{5}$$ Vardi (1991, p. 155) also gives an interesting formula (6) expressing $\psi(n)$ in terms of the Nint (nearest integer) function and a Factorial. de la Vallée Poussin's version of the Prime Number Theorem states that $$\psi(x) = x + \mathcal{O}\!\left(x e^{-c\sqrt{\ln x}}\right) \tag{7}$$ for some constant $c > 0$ (Davenport 1980, Vardi 1991). The Riemann Hypothesis is equivalent to $$\psi(x) = x + \mathcal{O}\!\left(\sqrt{x}\,\ln^2 x\right) \tag{8}$$ (Davenport 1980, p. 114; Vardi 1991). See also Bombieri's Theorem, Greatest Prime Factor, Lambda Function, Least Common Multiple, Least Prime Factor, Riemann Function References Davenport, H. Multiplicative Number Theory, 2nd ed. New York: Springer-Verlag, p. 110, 1980. Sloane, N. J. A. "Sequence A014963 in The On-Line Version of the Encyclopedia of Integer Sequences." http://www.research.att.com/~njas/sequences/eisonline.html. Vardi, I. Computational Recreations in Mathematica. Reading, MA: Addison-Wesley, pp. 146-147, 152-153, and 249, 1991.
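The LCM formula for $\Lambda(n)$ is easy to check numerically. A minimal Python sketch, using only the standard library:

```python
from math import gcd, log

def lcm_upto(n):
    # [1, 2, ..., n]: least common multiple of the integers 1..n (1 if n < 1)
    acc = 1
    for k in range(2, n + 1):
        acc = acc * k // gcd(acc, k)
    return acc

def mangoldt(n):
    # Lambda(n) = ln([1,...,n] / [1,...,n-1]); equals ln p if n = p^k, else 0
    return log(lcm_upto(n) // lcm_upto(n - 1))

# e^Lambda(n) for n = 1..8 reproduces Sloane's A014963: 1, 2, 3, 2, 5, 1, 7, 2
print([lcm_upto(n) // lcm_upto(n - 1) for n in range(1, 9)])
```

For instance, $\Lambda(8) = \ln(840/420) = \ln 2$ since $8 = 2^3$, while $\Lambda(6) = \ln(60/60) = 0$ since 6 is not a prime power.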
# NetworkFeatures NetworkFeatures contains a set of features relevant for the output of network models, calculated using the Elephant software. This set of features requires that the model returns the simulation end time and a list of spiketrains, which are the times at which a given neuron spikes. The implemented features are: 1. average_firing_rate – Mean firing rate (for a single recorded neuron). 2. instantaneous_rate – Instantaneous firing rate (averaged over all recorded neurons within a small time window). 3. mean_isi – Average interspike interval (averaged over all recorded neurons). 4. cv – Coefficient of variation of the interspike interval (for a single recorded neuron). 5. average_cv – Average coefficient of variation of the interspike interval (averaged over all recorded neurons). 6. local_variation – Local variation (variability of interspike intervals for a single recorded neuron). 7. average_local_variation – Mean local variation (variability of interspike intervals averaged over all recorded neurons). 8. fanofactor – Fano factor (variability of spiketrains). 9. victor_purpura_dist – Victor-Purpura distance (spiketrain dissimilarity between two recorded neurons). 10. van_rossum_dist – van Rossum distance (spiketrain dissimilarity between two recorded neurons). 11. binned_isi – Histogram of the interspike intervals (for all recorded neurons). 12. corrcoef – Pairwise Pearson's correlation coefficients (between the spiketrains of two recorded neurons). 13. covariance – Covariance (between the spiketrains of two recorded neurons). The use of the NetworkFeatures class in Uncertainpy follows the same logic as the use of the other feature classes, and custom features can easily be included. As with SpikingFeatures, NetworkFeatures implements a preprocess() method. This preprocess returns the following objects: 1. End time of the simulation (end_time). 2. A list of NEO spiketrains (spiketrains). 
Each feature function therefore requires the same objects as input arguments. Note that an info object is not used. ## API Reference class uncertainpy.features.NetworkFeatures(new_features=None, features_to_run=u'all', interpolate=None, labels={}, units=None, instantaneous_rate_nr_samples=50, isi_bin_size=1, corrcoef_bin_size=1, covariance_bin_size=1, logger_level=u'info')[source] Network features of a model result; works with all models that return the simulation end time and a list of spiketrains. Parameters: new_features ({None, callable, list of callables}) – The new features to add. The feature functions have the requirements stated in reference_feature. If None, no features are added. Default is None. features_to_run ({“all”, None, str, list of feature names}, optional) – Which features to calculate uncertainties for. If "all", the uncertainties are calculated for all implemented and assigned features. If None, or an empty list [], no features are calculated. If str, only that feature is calculated. If list of feature names, all the listed features are calculated. Default is "all". interpolate ({None, “all”, str, list of feature names}, optional) – Which features are irregular, meaning they have a varying number of time points between evaluations. An interpolation is performed on each irregular feature to create regular results. If "all", all features are interpolated. If None, or an empty list, no features are interpolated. If str, only that feature is interpolated. If list of feature names, all listed features are interpolated. Default is None. labels (dictionary, optional) – A dictionary with key as the feature name and the value as a list of labels for each axis. The number of elements in the list corresponds to the dimension of the feature. 
Example: new_labels = {"0d_feature": ["x-axis"], "1d_feature": ["x-axis", "y-axis"], "2d_feature": ["x-axis", "y-axis", "z-axis"] } units ({None, Quantities unit}, optional) – The Quantities unit of the time in the model. If None, ms is used. The default is None. instantaneous_rate_nr_samples (int) – The number of samples used to calculate the instantaneous rate. Default is 50. isi_bin_size (int) – The size of each bin in the binned_isi method. Default is 1. corrcoef_bin_size (int) – The size of each bin in the corrcoef method. Default is 1. covariance_bin_size (int) – The size of each bin in the covariance method. Default is 1. logger_level ({“info”, “debug”, “warning”, “error”, “critical”, None}, optional) – Set the threshold for the logging level. Logging messages less severe than this level is ignored. If None, no logging is performed. Default logger level is “info”. features_to_run (list) – Which features to calculate uncertainties for. interpolate (list) – A list of irregular features to be interpolated. utility_methods (list) – A list of all utility methods implemented. All methods in this class that is not in the list of utility methods is considered to be a feature. labels (dictionary) – Labels for the axes of each feature, used when plotting. logger (logging.Logger) – Logger object responsible for logging to screen or file. instantaneous_rate_nr_samples (int) – The number of samples used to calculate the instantaneous rate. Default is 50. isi_bin_size (int) – The size of each bin in the binned_isi method. Default is 1. corrcoef_bin_size (int) – The size of each bin in the corrcoef method. Default is 1. covariance_bin_size (int) – The size of each bin in the covariance method. Default is 1. 
Notes Implemented features are: cv, average_cv, average_isi, local_variation, average_local_variation, average_firing_rate, instantaneous_rate, fanofactor, van_rossum_dist, victor_purpura_dist, binned_isi, corrcoef, and covariance. All features in this set of features take the following input arguments: simulation_end : float The simulation end time neo_spiketrains : list A list of Neo spiketrains. The model must return: simulation_end : float The simulation end time spiketrains : list A list of spiketrains, where each spiketrain is a list of the times when a given neuron spikes. Raises: ImportError – If elephant or quantities is not installed. uncertainpy.features.Features.reference_feature reference_feature showing the requirements of a feature function. add_features(new_features, labels={}) Parameters: new_features ({callable, list of callables}) – The new features to add. The feature functions have the requirements stated in reference_feature. labels (dictionary, optional) – A dictionary with the labels for the new features. The keys are the feature function names and the values are a list of labels for each axis. The number of elements in the list corresponds to the dimension of the feature. Example: new_labels = {"0d_feature": ["x-axis"], "1d_feature": ["x-axis", "y-axis"], "2d_feature": ["x-axis", "y-axis", "z-axis"] } TypeError – Raises a TypeError if new_features is not callable or list of callables. Notes The features added are not added to features_to_run. features_to_run must be set manually afterwards. uncertainpy.features.Features.reference_feature() reference_feature showing the requirements of a feature function. average_cv(simulation_end, spiketrains)[source] Calculate the average coefficient of variation. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) values (float) – The average coefficient of variation of each spiketrain. 
average_firing_rate(simulation_end, spiketrains)[source] Calculate the mean firing rate. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) average_firing_rate (float) – The mean firing rate of all neurons. average_isi(simulation_end, spiketrains)[source] Calculate the average interspike interval (isi) variation for each neuron. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) average_isi (float) – The average interspike interval. average_local_variation(simulation_end, spiketrains)[source] Calculate the average of the local variation. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) average_local_variation (float) – The average of the local variation for each spiketrain. binned_isi(simulation_end, spiketrains)[source] Calculate a histogram of the interspike interval. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (array) – The center of each bin. binned_isi (array) – The binned interspike intervals. calculate_all_features(*model_results) Calculate all implemented features. Parameters: *model_results – Variable length argument list. Is the values that model.run() returns. By default it contains time and values, and then any number of optional info values. results – A dictionary where the keys are the feature names and the values are a dictionary with the time values time and feature results on values, on the form {"time": t, "values": U}. dictionary TypeError – If feature_name is a utility method. Notes Checks that the feature returns two values. uncertainpy.features.Features.calculate_feature() Method for calculating a single feature. calculate_feature(feature_name, *preprocess_results) Calculate feature with feature_name. 
Parameters: feature_name (str) – Name of feature to calculate. *preprocess_results – The values returned by preprocess. These values are sent as input arguments to each feature. By default preprocess returns the values that model.run() returns, which contains time and values, and then any number of optional info values. The implemented features require that info is a single dictionary with the information stored as key-value pairs. Certain features require specific keys to be present. time ({None, numpy.nan, array_like}) – Time values, or equivalent, of the feature, if no time values returns None or numpy.nan. values (array_like) – The feature results, values must either be regular (have the same number of points for different paramaters) or be able to be interpolated. TypeError – If feature_name is a utility method. uncertainpy.models.Model.run() The model run method calculate_features(*model_results) Calculate all features in features_to_run. Parameters: *model_results – Variable length argument list. Is the values that model.run() returns. By default it contains time and values, and then any number of optional info values. results – A dictionary where the keys are the feature names and the values are a dictionary with the time values time and feature results on values, on the form {"time": time, "values": values}. dictionary TypeError – If feature_name is a utility method. Notes Checks that the feature returns two values. uncertainpy.features.Features.calculate_feature() Method for calculating a single feature. corrcoef(simulation_end, spiketrains)[source] Calculate the pairwise Pearson’s correlation coefficients. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) values (2D array) – The pairwise Pearson’s correlation coefficients. covariance(simulation_end, spiketrains)[source] Calculate the pairwise covariances. Parameters: simulation_end (float) – The simulation end time. 
neo_spiketrains (list) – A list of Neo spiketrains. time (None) values (2D array) – The pairwise covariances. cv(simulation_end, spiketrains)[source] Calculate the coefficient of variation for each neuron. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) values (array) – The coefficient of variation for each spiketrain. fanofactor(simulation_end, spiketrains)[source] Calculate the fanofactor. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) fanofactor (float) – The fanofactor. features_to_run Which features to calculate uncertainties for. Parameters: new_features_to_run ({“all”, None, str, list of feature names}) – Which features to calculate uncertainties for. If "all", the uncertainties are calculated for all implemented and assigned features. If None, or an empty list , no features are calculated. If str, only that feature is calculated. If list of feature names, all listed features are calculated. Default is "all". A list of features to calculate uncertainties for. list implemented_features() Return a list of all callable methods in feature, that are not utility methods, does not starts with “_” and not a method of a general python object. Returns: A list of all callable methods in feature, that are not utility methods. list instantaneous_rate(simulation_end, spiketrains)[source] Calculate the mean instantaneous firing rate. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (array) – Time of the instantaneous firing rate. instantaneous_rate (float) – The instantaneous firing rate. interpolate Features that require an interpolation. Which features are interpolated, meaning they have a varying number of time points between evaluations. An interpolation is performed on each interpolated feature to create regular results. 
Parameters: new_interpolate ({None, “all”, str, list of feature names}) – If "all", all features are interpolated. If None, or an empty list, no features are interpolated. If str, only that feature is interpolated. If list of feature names, all listed features are interpolated. Default is None. A list of irregular features to be interpolated. list labels Labels for the axes of each feature, used when plotting. Parameters: new_labels (dictionary) – A dictionary with key as the feature name and the value as a list of labels for each axis. The number of elements in the list corresponds to the dimension of the feature. Example: new_labels = {"0d_feature": ["x-axis"], "1d_feature": ["x-axis", "y-axis"], "2d_feature": ["x-axis", "y-axis", "z-axis"] } local_variation(simulation_end, spiketrains)[source] Calculate the measure of local variation. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) local_variation (list) – The local variation for each spiketrain. preprocess(simulation_end, spiketrains) Preprossesing of the simulation end time simulation_end and spiketrains spiketrains from the model, before the features are calculated. Parameters: simulation_end (float) – The simulation end time spiketrains (list) – A list of spiketrains, each spiketrain is a list of the times when a given neuron spikes. simulation_end (float) – The simulation end time neo_spiketrains (list) – A list of Neo spiketrains. ValueError – If simulation_end is np.nan or None. Notes This preprocessing makes it so all features get the input simulation_end and spiketrains. uncertainpy.models.Model.run() The model run method reference_feature(simulation_end, neo_spiketrains) An example of an GeneralNetworkFeature. The feature functions have the following requirements, and the given parameters must either be returned by model.run or features.preprocess. 
Parameters: simulation_end (float) – The simulation end time neo_spiketrains (list) – A list of Neo spiketrains. time ({None, numpy.nan, array_like}) – Time values, or equivalent, of the feature, if no time values return None or numpy.nan. values (array_like) – The feature results, values. Returns None if there are no feature results and that evaluation are disregarded. uncertainpy.features.GeneralSpikingFeatures.preprocess() The GeneralSpikingFeatures preprocess method. uncertainpy.models.Model.run() The model run method validate(feature_name, *feature_result) Validate the results from calculate_feature. This method ensures each returns time, values. Parameters: model_results – Any type of model results returned by run. feature_name (str) – Name of the feature, to create better error messages. ValueError – If the model result does not fit the requirements. TypeError – If the model result does not fit the requirements. Notes Tries to verify that at least, time and values are returned from run. model_result should follow the format: return time, values, info_1, info_2, .... Where: • time_feature : {None, numpy.nan, array_like} Time values, or equivalent, of the feature, if no time values return None or numpy.nan. • values : {None, numpy.nan, array_like} The feature results, values must either be regular (have the same number of points for different paramaters) or be able to be interpolated. If there are no feature results return None or numpy.nan instead of values and that evaluation are disregarded. van_rossum_dist(simulation_end, spiketrains)[source] Calculate van Rossum distance. Parameters: simulation_end (float) – The simulation end time. neo_spiketrains (list) – A list of Neo spiketrains. time (None) van_rossum_dist (2D array) – The van Rossum distance. victor_purpura_dist(simulation_end, spiketrains)[source] Calculate the Victor-Purpura’s distance. Parameters: simulation_end (float) – The simulation end time. 
neo_spiketrains (list) – A list of Neo spiketrains. time (None) values (2D array) – The Victor-Purpura’s distance.
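The interspike-interval statistics behind several of these features can be illustrated without Elephant. The following plain-Python sketch is an illustration of what mean_isi and cv compute, not Uncertainpy's actual implementation, and it assumes each spiketrain is simply a list of spike times:

```python
import statistics

def interspike_intervals(spiketrain):
    # differences between consecutive spike times of one neuron
    return [t1 - t0 for t0, t1 in zip(spiketrain, spiketrain[1:])]

def mean_isi(spiketrains):
    # average interspike interval, pooled over all recorded neurons
    all_isis = [isi for st in spiketrains for isi in interspike_intervals(st)]
    return statistics.mean(all_isis)

def cv(spiketrain):
    # coefficient of variation of the ISIs of a single neuron:
    # std(ISI) / mean(ISI); 0 for a perfectly regular spiketrain
    isis = interspike_intervals(spiketrain)
    return statistics.pstdev(isis) / statistics.mean(isis)

regular = [0, 10, 20, 30, 40]      # hypothetical spike times (ms)
print(cv(regular))                 # a regular train has cv == 0
print(mean_isi([[0, 10, 20], [0, 5, 15]]))
```

A Poisson spiketrain would instead give a cv close to 1, which is why cv is commonly used as a measure of spiking irregularity.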
Math 424 - Fundamental Concepts of Analysis I See UW General Catalog for course description and prerequisite information. Textbooks • Advanced Calculus (Second Edition) by Patrick M. Fitzpatrick • Principles of Mathematical Analysis (Third Edition) by Walter Rudin Suggested Syllabus Continuity • epsilon-delta definition and sequence definition of continuity • epsilon-delta and sequence definitions of a limit • pointwise convergence, uniform convergence, uniformly Cauchy sequences • uniform limits and continuity Differentiation • definition of the derivative • tangent line approximation • algebra of derivatives and the chain rule • Rolle’s Theorem and the Mean Value Theorem • higher derivatives and Taylor’s Theorem • Intermediate Value Theorem for derivatives The Riemann-Stieltjes Integral • definition of the Riemann-Stieltjes integral • integrability of continuous and monotone functions • properties of the integral • change of variables • Fundamental Theorem of Calculus • integration by parts • Mean Value Theorem for Riemann integrals Uniform Convergence and Power Series • uniform convergence and integration • uniform convergence and differentiation • Weierstrass M-test • power series and radius of convergence • differentiation of power series • Weierstrass Approximation Theorem
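For reference, the epsilon-delta definition of continuity listed above can be stated as follows (standard formulation, equivalent to the sequence definition):

```latex
% f : D \subseteq \mathbb{R} \to \mathbb{R} is continuous at c \in D if
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\text{such that}\;
x \in D \text{ and } |x - c| < \delta \;\Longrightarrow\; |f(x) - f(c)| < \varepsilon.
% sequence form: f is continuous at c iff f(x_n) \to f(c)
% for every sequence (x_n) in D with x_n \to c.
```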
# Factoring Special Case Polynomials The authors Eugene Lee ## Basics on the topic: Factoring Special Case Polynomials Writing and factoring polynomials can have special cases. Learning to recognize these special cases is definitely worth your time, because it may save you some hair pulling later on. First is the square of a binomial sum: (a + b)² = (a + b)(a + b) = a² + 2ab + b². Similar to this case is the square of a binomial difference: (a – b)² = (a – b)(a – b) = a² – 2ab + b². The third special case is the difference of two squares: (a + b)(a – b) = a² – b². Pay attention to the patterns of these three special cases, and rather than spending time factoring, FOILing, and distributing terms, you will be doing a victory dance to celebrate how well you are doing in your Algebra class. For example, suppose your teacher assigns you this problem to factor: x⁴ – 49. Whoa, this looks very difficult. Relax – it's just the difference of two squares, or a DOTS problem. (x²)² is equal to x⁴, and 7² is equal to 49, so the difference of the two squares factors as (x² – 7)(x² + 7), and when you FOIL or use the distributive property, you are right back where you started: x⁴ – 49. To see more examples of special case polynomials and have a laugh too, watch this video. Understand the relationship between zeros and factors of polynomials. CCSS.MATH.CONTENT.HSA.APR.B.2 ### Transcript: Factoring Special Case Polynomials Daniella Feinberg, who recently retired from the fur business, and her husband are trying to enjoy a relaxing afternoon in front of their pool. She wants her husband, who makes a handsome living designing pools, to design one so she can throw a pool party... What's this? The neighbor's dog just came through the fence, and it looks like he has plans to do a little bit of surfing... Uh-oh... it looks like this pug's doggie instincts are taking over and he's ripping up the inflatable pool! I guess he's all about that pug life. 
Daniella's up in arms about the neighbor's dog. I guess her husband has his work cut out for him. ### Side lengths of the pool Mr. Feinberg suggests that they build a square pool with side lengths 'a'. Using what we learned about area, we know we can find the area by multiplying the two side lengths together. Doing so gives us a · a = a², but Daniella doesn't like this option because it's too "normal". To make the pool less "normal", Mr. Feinberg suggests side measurements of (a - b), giving them a pool area of (a - b)(a - b), or (a - b)². ### The FOIL method Being the consummate professional, Mr. Feinberg uses the FOIL method to figure out exactly what he's dealing with. First he subtracts the b term from his existing plans. Since he already knows a(a) is a², he can concentrate on the other terms. a(-b) is -ab, and since we have this twice, we can write it here... and here. Finally, he has two '-b's that he needs to multiply together, giving him +b². Now he just adds all the terms together and combines like terms, leaving him with a² - 2ab + b². Daniella thinks it'd be silly if she threw a pool party with this small of a pool, so then Mr. Feinberg suggests (a + b)(a + b) to give Daniella the biggest pool that'll fit in their yard. ### Write this mathematically We can write this mathematically as (a + b)(a + b) or (a + b)². Mr. Feinberg's training is coming in handy, so when he applies the FOIL method to (a + b)², he knows he'll get a positive a² and b², and 'ab' twice gives him 2ab, making his final expression a² + 2ab + b². Since Daniella doesn't want to give up her rose garden, and since square pools are SO five minutes ago, she rejects this option as well. Then, in a stroke of genius, Mr. Feinberg comes up with (a + b)(a - b). Mr. Feinberg uses the FOIL method one last time and gets a², one '+ab', one '-ab', and a -b². The two 'ab' terms cancel out, leaving Mr. Feinberg with a² - b². He draws it out for Daniella, who looks at him quite curiously. 
She shows him that he's only cut out a little block from the area, not to mention the pool looks funny! ### The area Eureka again! Mr. Feinberg can move this piece here to make the pool not look as funny! All this talk of As and Bs is making Daniella a bit dizzy. She wants to know the area of the pool so that she can start planning the party. The Special Case Polynomials that Mr. Feinberg suggested were: a², (a - b)², (a + b)², and (a + b)(a - b). ### Use PEMDAS for calculation We can use any numbers for 'a' and 'b', but we'll use 10 feet for 'a' and 5 feet for 'b'. When we plug numbers in, instead of applying FOIL to our expression, we can use PEMDAS to make our calculations easier. Plugging 10 feet in for 'a' and 5 feet for 'b' gives us 100 ft.² for a², 25 ft.² for (a - b)², 225 ft.² for (a + b)², and 75 ft.² for (a + b)(a - b). Good things usually come in threes, and Mr. Feinberg has had another brilliant idea... All in a hard day's work! They're finally done! Now Daniella can throw her blowout pool party! What's this?!? Looks like a pugly situation... ## Factoring Special Case Polynomials exercise Would you like to apply the knowledge you've learned? You can review and practice it with the tasks for the video Factoring Special Case Polynomials. • ### Determine the area of the pool. Hints If you've got any square with side $s$, then the area is given by $s\times s=s^2$. Use FOIL for multiplying two binomials. For example, let's multiply $(a+2)\times (a+2)$: First - multiply the first $a\times a=a^2$ Outer - multiply the outer $a\times 2=2a$ Inner - multiply the inner $2\times a=2a$ Last - multiply the last $2\times 2=4$ Adding all of the results together gives us $(a+2)^2=a^2+4a+4$. You can use $(a+b)(a-b)$ to compute numbers; for example: $102\times 98=(100+2)(100-2)=100^2-4=9996$. Solution We know the side lengths for each pool Mrs. Feinberg can choose from. 
We need to figure out the area of each pool so that she can make the most educated decision about which pool she wants for her epic pool party.

Any square with the side lengths $a$ has the area $a\times a=a^2$. With this fact and the FOIL method for multiplication, we can figure these areas out! Let's recall the FOIL method for multiplication: First - multiply the first Outer - multiply the outer Inner - multiply the inner Last - multiply the last

For the leftmost pool, the sides are $a\times a=a^2$.

For the second pool, we need to multiply $(a-b)\times(a-b)$ to find the area. Using the FOIL method, we have: (1) "multiply the first" to get $a^2$. (2) "multiply the outer" to get $-ab$. (3) "multiply the inner" to get $-ab$. (4) "multiply the last" to get $b^2$. Adding all of these together gives us our area: $(a-b)\times(a-b)=a^2-2ab+b^2$.

Using the FOIL method in the same way for the third pool, we get that the area is $(a+b)\times(a+b)=a^2+2ab+b^2$.

For the fourth pool, we get $(a+b)\times(a-b)=a^2-b^2$.

• ### Calculate the area of each pool using the given values for $a$ and $b$.

Hints

First calculate the quantity inside the parenthesis and then multiply. Here is an example. Remember: PEMDAS; Parenthesis first and then Exponents.

Solution

From the leftmost pool to the rightmost: First Pool: • This pool is a square with side lengths $a=10~ft$. • So we have to square $a$ to get $(10~ft)^2=100~ft^2$. Second Pool: • This pool is a square with side lengths $a-b$. • So first we calculate the difference, $10~ft-5~ft=5~ft$. • Then we square the difference $5~ft$ to get $(5~ft)^2=25~ft^2$. Third Pool: • This pool is a square with side lengths $a+b$. • So we calculate the sum $10~ft+5~ft=15~ft$. • Then we square it to get $(15~ft)^2=225~ft^2$. Fourth Pool: • This last pool is a rectangle with side lengths $10~ft+5~ft=15~ft$ and $10~ft-5~ft=5~ft$. • To get the corresponding area we multiply these values to get $15~ft\times 5~ft=75~ft^2$.
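The four pool areas can be double-checked by computing each one both ways: PEMDAS on the factored form and the FOIL-expanded form. A small illustrative Python sketch, not part of the exercise:

```python
# Pool areas for a = 10 ft and b = 5 ft, each computed two ways:
# PEMDAS on the factored form, and its FOIL-expanded form.
a, b = 10, 5

areas = {
    "a^2":        (a**2,             a*a),
    "(a-b)^2":    ((a - b)**2,       a*a - 2*a*b + b*b),
    "(a+b)^2":    ((a + b)**2,       a*a + 2*a*b + b*b),
    "(a+b)(a-b)": ((a + b)*(a - b),  a*a - b*b),
}

for name, (pemdas, foil) in areas.items():
    assert pemdas == foil                  # both routes must agree
    print(f"{name} = {pemdas} ft^2")       # 100, 25, 225, 75
```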
• ### Calculate the different possible sizes of the rose bed.

Hints

Calculate the quantity inside the parenthesis first. You'll use the following expressions: • $(a+b)^2$ • $(a-b)^2$ • $(a+b)(a-b)$

Solution

(1) The square rose bed with side lengths $a$ has area $a\times a=a^2=(15)^2=225~ft^2$. (2) The square rose bed with side lengths $a+7$ has area $(a+7)^2=(15+7)^2=22^2=484~ft^2$. (3) The square rose bed with side lengths $a-7$ has area $(a-7)^2=(15-7)^2=8^2=64~ft^2$. (4) A rectangular rose bed with one side length of $a+3$ and another side length of $a-3$ has area $(a+3)(a-3)=(15+3)(15-3)=18\times12=216~ft^2$.

• ### Find the carpet sizes using the FOIL method.

Hints

First write down the different binomial products for the area of each carpet. After writing down all the binomial products, calculate the quantities inside the parenthesis for each one.

Solution

Let's start with the smallest carpet, with side lengths $12~ft+3~ft$ and $12~ft-3~ft$. The size of this carpet is $15\times 9=135~ft^2$.

Next we have the square carpet with side lengths $16~ft-3~ft$. The size of this carpet is $13^2=169~ft^2$.

The rectangular carpet with side lengths $15~ft+1~ft$ and $15~ft-1~ft$ is $16\times 14=224~ft^2$.

The square carpet with side lengths $15~ft$ is $15^2=225~ft^2$.

The biggest carpet is the rectangular one with side lengths $16~ft+3~ft$ and $16~ft-3~ft$. So it is $19\times 13=247~ft^2$.

• ### Explain the FOIL method for multiplication.

Hints

The F step using the FOIL method for $(a+b)(a+b)$ is $a\times a=a^2$. The I step using the FOIL method for $(a+b)(a+b)$ is $b\times a=ab$.

Solution

To explain the FOIL method for multiplication, let's have a look at the following example: To multiply the binomials $(a+2)(b+3)$, • First - multiply the first $a\times b=ab$ • Outer - multiply the outer $a\times 3=3a$ • Inner - multiply the inner $2\times b=2b$ • Last - multiply the last $2\times 3=6$ Adding all the results leads to $(a+2)(b+3)=ab+3a+2b+6$.
• ### Use the product of binomials to calculate $43\times 37$.

Hints

Use one of the equations: • $(a+b)^2=a^2+2ab+b^2$ • $(a-b)^2=a^2-2ab+b^2$ • $(a+b)(a-b)=a^2-b^2$ Decide what the values of $a$ and $b$ need to be. We have to solve the two equations $a+b=43$ and $a-b=37$.

Solution

We can use the formula $(a+b)(a-b)=a^2-b^2$ to help us solve $43\times 37$.

With this formula, we know that we have to solve the equations $a+b=43$ and $a-b=37$. Adding these equations together, we get $2a=80$. Then dividing by $2$ gives us $a=40$, and we can see that $b=3$. Let's check: $(a+b)(a-b)=(40+3)(40-3)=43\times 37$ $\surd$

Looking at the right-hand side of the equation, we have $a^2-b^2$. Substituting $a=40$ and $b=3$, we get $40^2-3^2=1600-9=1591$.

With this trick, multiplying numbers can seem like magic! Keep this in mind for the next time you want to impress your friends with your mental math skills.
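The 43 × 37 trick generalizes to any two numbers sitting an equal distance above and below a convenient midpoint. A short sketch (the function name is mine, not from the exercise):

```python
# Multiply p and q by writing them as (m + d)(m - d) = m^2 - d^2,
# where m is their midpoint and d the half-distance.
# Assumes p and q have the same parity so m and d are integers.
def product_via_squares(p, q):
    m = (p + q) // 2          # midpoint, e.g. 40 for 43 and 37
    d = (p - q) // 2          # half-distance, e.g. 3
    return m*m - d*d

assert product_via_squares(43, 37) == 43 * 37 == 1591
assert product_via_squares(102, 98) == 102 * 98 == 9996
print(product_via_squares(43, 37))   # 1591
```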
Chem1 General Chemistry Virtual Textbook → acid-base equilibria

Weak acids and bases

Finding the pH of solutions of acids, bases, and salts

Most acids are weak; there are hundreds of thousands of them, whereas there are fewer than a dozen strong acids. We can treat weak acid solutions in much the same general way as we did for strong acids. The only difference is that we must now take into account the incomplete "dissociation" of the acid. We will start with the simple case of the pure acid in water, and then go from there to the more general one in which salts of the acid are present. The latter mixtures are known as buffer solutions and are extremely important in chemistry, physiology, industry and in the environment.

In order to keep our notation as simple as possible, we will refer to “hydrogen ions” and [H+] for brevity, and, wherever it is practical to do so, will assume that the acid HA "ionizes" or "dissociates" into H+ and its conjugate base A−.

1  Aqueous solutions of weak acids or bases

A weak acid (represented here as HA) is one in which the reaction

HA → A− + H+    (1-1)

is incomplete. This means that if we add 1 mole of the pure acid HA to water and make the total volume 1 L, the equilibrium concentration of the conjugate base A− will be smaller (often much smaller) than 1 mol/L, while that of undissociated HA will be only slightly less than 1 mol/L.

### What does the "concentration of the acid" mean?

The above equation tells us that dissociation of a weak acid HA in pure water yields identical concentrations of its conjugate species. Let us represent these concentrations by x. Then, in our "1 M" solution, the concentration of each species is as shown here:

[HA] = (1 – x) M,  [A−] = [H+] = x M    (1-2)

#### "Concentration of the acid" and [HA] are not the same

When dealing with problems involving acids or bases, bear in mind that when we speak of "the concentration", we usually mean the nominal or analytical concentration which is commonly denoted by Ca.
For a solution made by combining 0.10 mol of pure formic acid HCOOH with sufficient water to make up a volume of 1.0 L, Ca = 0.10 M. But we know that the concentration of the actual species [HCOOH] will be smaller than 0.10 M because some of it ends up as the formate ion HCOO−. It will, of course, always be the case that the sum [HCOOH] + [HCOO−] = Ca.

For the general case of an acid HA, we can write a mass balance equation

Ca = [HA] + [A−]    (1-3)

which reminds us that the "A" part of the acid must always be somewhere!

For a strong acid such as hydrochloric, its total dissociation means that [HCl] = 0, so the mass balance reduces to the trivial expression Ca = [Cl−]. Any acid for which [HA] > 0 is by definition a weak acid.

Similarly, for a base B we can write

Cb = [B] + [HB+]    (1-4)

### Equilibrium concentrations of the acid and its conjugate base

*This will be true as long as the acid is not so weak or dilute that we can neglect the small quantity of H+ contributed by the autoprotolysis of H2O.

According to the above equations, the equilibrium concentrations of A− and H+ will be identical*. Let us represent these concentrations by x. Then, in a solution containing 1 mol/L of a weak acid, the concentration of each species is as shown here:

[HA] = (1 – x) M,  [A−] = [H+] = x M    (1-5)

Substituting these values into the equilibrium expression for this reaction, we obtain

Ka = x² / (1 – x)    (1-6)

In order to predict the pH of this solution, we must solve for x. The presence of terms in both x and x² here tells us that this is a quadratic equation. But don't panic! In most practical cases in which Ka is 10–4 or smaller, we can assume that x is much smaller than 1 M, allowing us to make the simplifying approximation

(1 – x) ≈ 1    (1-7)

so that x² ≈ Ka and thus

x ≈ √Ka    (1-8)

This approximation will not generally be valid when the acid is very weak or very dilute — but more on this later.
#### Solutions of arbitrary concentration

The above development was for a solution made by taking 1 mole of acid and adding sufficient water to make its volume 1.0 L. In such a solution, the nominal concentration of the acid, denoted by Ca, is 1 M. We can easily generalize this to solutions in which Ca has any value:

Ca = [HA] + [A−]    (1-9)

The above relation is known as a "mass balance on A". It expresses the simple fact that the "A" part of the acid must always be somewhere — either attached to the hydrogen, or in the form of the hydrated anion A−.

The corresponding equilibrium expression is

Ka = x² / (Ca – x)    (1-10)

and the approximations (when justified) (1-7) and (1-8) become

(Ca – x) ≈ Ca    (1-11)

x ≈ (Ca Ka)½    (1-12)

Problem Example 1 - approximate pH of an acetic acid solution

Estimate the pH of a 0.20 M solution of acetic acid, Ka = 1.8 × 10–5.

Solution: For brevity, we will represent acetic acid CH3COOH as HAc, and the acetate ion by Ac−. As before, we set x = [H+] = [Ac−], neglecting the tiny quantity of H+ that comes from the dissociation of water. Substitution into the equilibrium expression yields

Ka = x² / (0.20 – x) = 1.8E–5

The rather small value of Ka suggests that we can drop the x term in the denominator, so that

(x² / 0.20) ≈ 1.8E-5  or  x ≈ (0.20 × 1.8E–5)½ = 1.9E-3 M

The pH of the solution is –log (1.9E–3) = 2.7

### Degree of dissociation

Even though we know that the process HA → H+ + A− does not correctly describe the transfer of a proton to H2O, chemists still find it convenient to use the term "ionization" or "dissociation". The "degree of dissociation" (denoted by α, alpha) of a weak acid is just the fraction

α = [A−] / Ca    (1-13)

which is often expressed as a per cent (α × 100).

#### Degree of dissociation depends on the concentration

It's important to understand that whereas Ka for a given acid is essentially a constant, α will depend on the concentration of the acid.

Note that these equations are also valid for weak bases if Kb and Cb are used in place of Ka and Ca.
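Problem Example 1 can be reproduced numerically from approximation 1-12; a short Python sketch (the helper name is mine, standard library only):

```python
# pH of a weak monoprotic acid via x = [H+] = (Ka * Ca)^(1/2),
# valid when x is much smaller than Ca (the 1-12 approximation).
from math import sqrt, log10

def approx_ph(Ka, Ca):
    x = sqrt(Ka * Ca)          # [H+], neglecting x relative to Ca
    return -log10(x)

# 0.20 M acetic acid, Ka = 1.8e-5
print(round(approx_ph(1.8e-5, 0.20), 1))   # 2.7, as in the example
```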
This can be shown by substituting Eq 1-13 into the expression for Ka:

Ka = α²Ca / (1 – α)    (1-14)

Solving this for α results in a quadratic equation, but if the acid is sufficiently weak that we can say (1 – α) ≈ 1, the above relation becomes

α ≈ (Ka / Ca)½    (1-15)

so the fraction of HA that dissociates varies inversely with the square root of the concentration; as Ca approaches zero, α approaches unity and [A−] approaches Ca.

This principle is an instance of the Ostwald dilution law, which relates the dissociation constant of a weak electrolyte (in this case, a weak acid), its degree of dissociation, and the concentration.

A common, but incorrect explanation of this law in terms of the Le Châtelier principle states that dilution increases the concentration of water in the equation HA + H2O → H3O+ + A−, thereby causing this equilibrium to shift to the right. The error here is that [H2O] in most aqueous solutions is so large (55.5 M) that it can be considered constant; this is the reason the [H2O] term does not appear in the expression for Ka. Another common explanation is that dilution reduces [H3O+] and [A−], thus shifting the dissociation process to the right. But dilution similarly reduces [HA], which would shift the process to the left. In fact, these two processes compete, but the former has greater effect because two species are involved.

It is probably more satisfactory to avoid Le Châtelier-type arguments altogether, and regard the dilution law as an entropy effect, a consequence of the greater dispersal of thermal energy throughout the system. This energy is carried by the molecular units within the solution; dissociation of each HA unit produces two new particles which then go their own ways, thus spreading (or "diluting") the thermal energy more extensively and massively increasing the number of energetically-equivalent microscopic states, of which entropy is a measure.
(More on this here)

#### Degree of dissociation depends on the pH

Plots of this kind are discussed in more detail in the next lesson in this set under the heading ionization fractions.

The Le Châtelier principle predicts that the extent of the reaction HA → H+ + A− will be affected by the hydrogen ion concentration, and thus by the pH. This is illustrated here for the ammonium ion. Notice that when the pH is the same as the pKa, the concentrations of the acid and base forms of the conjugate pair are identical.

Problem Example 2 - percent dissociation

A 0.75 M solution of an acid HA has a pH of 1.6. What is its percent dissociation?

Solution: The dissociation stoichiometry HA → H+ + A− tells us the concentrations [H+] and [A−] will be identical. Thus [H+] = 10–1.6 = 0.025 M = [A−]. The dissociation fraction α = [A−] / Ca = 0.025 / 0.75 = 0.033, and thus the acid is 3.3% dissociated at 0.75 M concentration.

Sometimes the percent dissociation is given, and Ka must be evaluated.

Problem Example 3 - Ka from degree of dissociation

A weak acid HA is 2 percent dissociated in a 1.00 M solution. Find the value of Ka.

Solution: The equilibrium concentration of HA will be 2% smaller than its nominal concentration, so [HA] = 0.98 M, [A−] = [H+] = 0.02 M. Substituting these values into the equilibrium expression gives

Ka = (0.02)² / 0.98 = 4.1 × 10–4
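Examples 2 and 3 are each only a couple of lines of arithmetic; sketched here in Python for checking (the variable names are mine):

```python
# Example 2: degree of dissociation from a measured pH.
from math import isclose

Ca = 0.75                      # nominal acid concentration, M
H = 10**-1.6                   # [H+] = [A-] from pH 1.6, about 0.025 M
alpha = H / Ca                 # fraction dissociated
assert isclose(alpha, 0.033, rel_tol=0.05)   # about 3.3%

# Example 3: Ka back-calculated from a known percent dissociation.
Ca, alpha = 1.00, 0.02
x = alpha * Ca                 # [H+] = [A-] = 0.02 M
Ka = x*x / (Ca - x)            # (0.02)^2 / 0.98
print(f"Ka = {Ka:.1e}")        # about 4.1e-04
```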
#### Degree of dissociation varies inversely with the concentration

If we represent the dissociation of a Ca M solution of a weak acid by

HA → H+ + A−,  [HA] = Ca – x,  [A−] = [H+] = x    (1-18)

then its dissociation constant is given by

Ka = x² / (Ca – x)    (1-19)

Because the Ca term is in the denominator here, we see that the fraction of HA that dissociates increases as the solution is diluted; as Ca approaches zero, the acid becomes completely dissociated.

If we represent the fraction of the acid that is dissociated as

α = x / Ca    (1-20)

then Eq 1-19 becomes

Ka = α²Ca / (1 – α)    (1-21)

If the acid is sufficiently weak that x does not exceed 5% of Ca, the α-term in the denominator can be dropped, yielding

Ka ≈ Ca α²    (1-22)

Note that the above equations are also valid for weak bases if Kb and Cb are used in place of Ka and Ca.

Problem Example 4 - effects of dilution

Compare the percent dissociation of 0.10 M and .0010 M solutions of boric acid (Ka = 5.8E–10).

Solution: Boric acid is sufficiently weak that we can use the approximation of Eq 1-22 to calculate α: α = (5.8E–10 / 0.10)½ = 7.6E–5; multiply by 100 to get 0.0076% dissociation. For the more dilute acid, a similar calculation yields 7.6E–4, or 0.076%.

2  Carrying out acid-base calculations

### What you need to know before you start

In Problem Example 1, we calculated the pH of a monoprotic acid solution, making use of an approximation in order to avoid the need to solve a quadratic equation. This raises the question: how "exact" must calculations of pH be?

It turns out that the relation between pH and the nominal concentration of an acid, base, or salt (and especially arbitrary mixtures of these) can become quite complicated, requiring the solution of sets of simultaneous equations.
But for almost all practical applications, and certainly those encountered in a General Chemistry course, one can make some approximations that simplify the math without detracting significantly from the accuracy of the results.

#### Equilibrium constants are rarely exactly known

As we pointed out in the preceding lesson, the "effective" value of an equilibrium constant (the activity) will generally be different from the value given in tables in all but the most dilute ionic solutions. Even if the acid or base itself is dilute, the presence of other "spectator" ions such as Na+ at concentrations much in excess of 0.001 M can introduce error. The usual advice is to consider Ka values to be accurate to ±5 percent at best, and even more uncertain when total ionic concentrations exceed 0.1 M.

As a consequence of this uncertainty, there is generally little practical reason to express the results of a pH calculation to more than two significant digits.

### Finding the pH of a solution of a weak monoprotic acid

This is by far the most common type of problem you will encounter in a first-year Chemistry class. You are given the concentration of the acid, expressed as Ca moles/L, and are asked to find the pH of the solution. The very important first step is to make sure you understand the problem by writing down the equation expressing the concentrations of each species in terms of a single unknown, which we represent here by x:

[HA] = Ca – x,  [A−] = [H+] = x    (2-1)

Substituting these values into the expression for Ka, we obtain

Ka = x² / (Ca – x)    (2-2)

Don't bother to memorize these equations! If you understand the concept of mass balance on "A" expressed in (2-1), and can write the expression for Ka, you just substitute the x's into the latter, and you're off!

If you feel the need to memorize stuff you don't need, it is likely that you don't really understand the material — and that should be a real worry!

In order to predict the pH of this solution, we must first find [H+], that is, x.
The presence of terms in both x and x² here tells us that this is a quadratic equation. It can be rearranged into x² = Ka (Ca – x) which, when written in standard polynomial form, becomes the quadratic

[H+]² + Ka [H+] – Ka Ca = 0    (2-3)

But don't panic! As we will explain farther on, in most practical cases we can make some simplifying approximations which eliminate the need to solve a quadratic. And when, as occasionally happens, a quadratic is unavoidable, we will show you some relatively painless ways of dealing with it.

### How to deal with quadratic equations

What you do will depend on what tools you have available. If you are only armed with a simple calculator, then there is always the venerable quadratic formula that you may have learned about in high school, but if at all possible, you should avoid it: its direct use in the present context is somewhat laborious and susceptible to error.

Use of the standard quadratic formula on a computer or programmable calculator can lead to weird results! The reason for this is that if b² >> |4ac|, one of the roots will require the subtraction of two terms whose values are very close; this can lead to considerable error when carried out by software that has finite precision. One can get around this by computing the quantity Q = –[b + sgn(b) × sqrt(b² – 4ac)]/2, from which the roots are x1 = Q/a and x2 = c/Q. (See any textbook on numerical computing for more on this and other methods.)

But who wants to bother with this stuff in order to solve typical chemistry problems? Better to avoid quadratics altogether if at all possible!

Remember: there are always two values of x (two roots) that satisfy a quadratic equation. For all acid-base equilibrium calculations that are properly set up, these roots will be real, and only one will be positive; this is the one you take as the answer.
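The sgn-trick just described can be sketched directly in code (an assumed helper, standard library only):

```python
# Numerically stable real roots of ax^2 + bx + c = 0 using
# Q = -(b + sgn(b)*sqrt(b^2 - 4ac))/2; the roots are Q/a and c/Q.
# This avoids subtracting two nearly equal numbers when b^2 >> |4ac|.
from math import sqrt, copysign

def stable_roots(a, b, c):
    disc = b*b - 4*a*c
    if disc < 0:
        raise ValueError("no real roots")
    Q = -(b + copysign(sqrt(disc), b)) / 2
    return Q / a, c / Q

# The HClO2 equation of Problem Example 9: x^2 + 0.01x - 0.0010 = 0
r1, r2 = stable_roots(1.0, 0.01, -0.0010)
print(sorted([r1, r2]))   # roots near -0.0370 and +0.0270
```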
#### Approximations, judiciously applied, simplify the math

We have already encountered two of these approximations in the examples of the preceding section:

1. In all but the most dilute solutions, we can assume that the dissociation of the acid HA is the sole source of H+, with the contribution due to water autoprotolysis being negligible.

2. We were able to simplify the equilibrium expressions by assuming that the x-term, representing the quantity of acid dissociated, is so small compared to the nominal concentration of the acid Ca that it can be neglected. Thus in Problem Example 1, the term in the denominator that has the form (0.20 – x), representing the equilibrium concentration of the undissociated acid, is replaced by 0.20.

Most people working in the field of practical chemistry will never encounter situations in which the first of these approximations is invalid. This is not the case, however, for the second one.

#### Should I drop the x, or forge ahead with the quadratic form?

If the acid is fairly concentrated (usually with Ca > 10–3 M) and sufficiently weak that most of it remains in its protonated form HA, then the concentration of H+ it produces may be sufficiently small that the expression for Ka reduces to

Ka ≈ [H+]² / Ca  so that  [H+] ≈ (Ka Ca)½    (2-4)

This can be a great convenience because it avoids the need to solve a quadratic equation. But it also exposes you to the danger that this approximation may not be justified.

The usual advice is that if this first approximation of x exceeds 5 percent of the value it is being subtracted from (0.20 in the present case), then the approximation is not justified. We will call this the "five percent rule".

This plot shows the combinations of Ka and Ca that generally yield satisfactory results with the approximation of Eq 2-4.

Problem Example 5 - pH and degree of dissociation

a) Estimate the pH of a 0.20 M solution of acetic acid, Ka = 1.8 × 10–5.
b) What percentage of the acid is dissociated?
Solution: For brevity, we will represent acetic acid CH3COOH as HAc, and the acetate ion by Ac−. As before, we set x = [H+] = [Ac−], neglecting the tiny quantity of H+ that comes from the dissociation of water. Substitution into the equilibrium expression yields

Ka = x² / (0.20 – x) = 1.8E–5

Can we simplify this by applying the approximation 0.20 – x ≈ 0.20? Looking at the number on the right side of this equation, we note that it is quite small. This means the left side must be equally small, which requires that the denominator be fairly large, so we can probably get away with dropping x. Doing so yields

(x² / 0.20) = 1.8E-5  or  x = (0.20 × 1.8E–5)½ = 1.9E-3 M

The "5 per cent rule" requires that the above result be no greater than 5% of 0.20, or 0.010. Because 0.0019 meets this condition, we can set x = [H+] ≈ 1.9 × 10–3 M, and the pH will be –log (1.9 × 10–3) = 2.7

b) Percent dissociation: 100% × (1.9 × 10–3 M) / (0.20 M) = 0.95%

Weak bases are treated in an exactly analogous way:

Problem Example 6 - Weak base

Methylamine CH3NH2 is a gas whose odor is noticed around decaying fish. A solution of CH3NH2 in water acts as a weak base. A 0.10 M solution of this amine in water is found to be 6.4% ionized. Use this information to find Kb and pKb for methylamine.

Solution: When methylamine "ionizes", it takes up a proton from water, forming the methylaminium ion:

CH3NH2 + H2O → CH3NH3+ + OH−

Let x = [CH3NH3+] = [OH−] = .064 × 0.10 = 0.0064
[CH3NH2] = (0.10 – .0064) = 0.094

Substitute these values into the equilibrium expression for Kb:

Kb = (0.0064)² / 0.094 = 4.4 × 10–4

Because the percent ionization is given directly, no approximation is needed here; the exact equilibrium value [CH3NH2] = 0.094 M goes in the denominator.

pKb = – log Kb = – log (4.4 × 10–4) = 3.36

But one does not always get off so easily!
Problem Example 7 - pH of a chlorous acid solution

With a Ka of 0.010, HClO2 is one of the "stronger" weak acids, thanks to the two oxygen atoms whose electronegativity withdraws some negative charge from the chlorine atom, making it easier for the hydrogen to depart as a proton.

Find the pH of a 0.10 M solution of chlorous acid in pure water.

(i) The approximation 0.10 – x ≈ 0.10 gives us

x ≈ (Ka Ca)½ = (0.10 × .010)½ = (.001)½ = .032

(ii) This result should sound alarm bells in your head right away: 0.032 M is nearly a third of the acid's nominal concentration, far more than the "small-x" assumption can tolerate. The difficulty, in this case, arises from the numerical value of Ka differing from the nominal concentration 0.10 M by only a factor of 10. As a result,

x / Ca = .032 / 0.10 = 0.32

which clearly exceeds the 5% limit; we have no choice but to face the full monty of the quadratic solution. (See Problem Example 8 below.)

#### Successive approximations will get you there with minimal math

In the method of successive approximations, you start with the value of [H+] (that is, x) you calculated according to (2-4), which becomes the first approximation. You then substitute this into (2-2), which you solve to get a second approximation. This cycle is repeated until differences between successive answers become small enough to ignore.

Problem Example 8 - Method of successive approximations

Estimate the pH of a 0.10 M aqueous solution of HClO2, Ka = 0.010, using the method of successive approximations.

Solution: The equilibrium condition is

(i)  Ka = x² / (0.10 – x),  so  x = [0.010 × (0.10 – x)]½

We solve this for x, resulting in the first approximation x1, and then successively plug each result into the previous equation, yielding approximations x2 and x3:

(ii)  x1 = (0.010 × 0.10)½ = 0.032
      x2 = [0.010 × (0.10 – 0.032)]½ = 0.026
      x3 = [0.010 × (0.10 – 0.026)]½ = 0.027

The last two approximations x2 and x3 are within 5% of each other. Note that if we had used x1 as the answer, the error would have been 18%.
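The iteration in Problem Example 8 is easy to automate; a sketch (the 5% stopping rule and the function name are my choices):

```python
# Successive approximations for a weak acid: start from
# x = sqrt(Ka*Ca), then repeat x <- sqrt(Ka*(Ca - x)) until two
# successive values agree to within 5%.
from math import sqrt

def successive_approximations(Ka, Ca, tol=0.05):
    x = sqrt(Ka * Ca)                    # first approximation
    while True:
        x_new = sqrt(Ka * (Ca - x))
        if abs(x_new - x) / x_new < tol:
            return x_new
        x = x_new

# 0.10 M HClO2, Ka = 0.010: settles near the exact root 0.027
x = successive_approximations(0.010, 0.10)
print(round(x, 3))   # 0.027
```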
(An exact numeric solution yields the roots 0.027016 and –0.037016.)

#### Use a graphic calculator or computer to find the positive root

The real roots of a polynomial equation can be found simply by plotting its value as a function of one of the variables it contains. In this example, the pH of a 10–6 M solution of hypochlorous acid (HOCl, Ka = 2.9E–8) was found by plotting the value of y = ax² + bx + c, whose roots are the two values of x that correspond to y = 0. This method generally requires a bit of informed trial-and-error to make the locations of the roots visible within the scale of the axes.

#### Be lazy, and use an on-line quadratic equation solver

If you google "quadratic equation solver", you will find numerous on-line sites that offer quick-and-easy "fill-in-the-blanks" solutions. Unfortunately, few of these will be useful for acid-base problems involving numbers that must be expressed in "E-notation" (e.g., 2.7E-11). Of those that do, the one at the MathIsFun site is highly recommended; others can be found here and at the Quad2Deg site.

If you can access a quad equation solver on your personal electronic device or through the Internet, this is quick and painless. All you need to do is write the equation in polynomial form ax² + bx + c = 0, insert values for a, b, and c, and away you go! This is so easy that many people prefer to avoid the "5% test" altogether, and go straight to an exact solution. But make sure you can do the 5%-thing for exams where Internet-accessible devices are not permitted!

Problem Example 9 - chlorous acid, again

Estimate the pH of a 0.10 M aqueous solution of HClO2, Ka = 0.010.

Solution: The reaction equation HClO2 → H+ + ClO2− defines the equilibrium expression

Ka = x² / (0.10 – x) = 0.010

Multiplying the right half of the above expression yields x² = 0.010 × (0.10 – x) = .0010 – .01 x, which we arrange into standard polynomial form:

x² + 0.01 x – 0.0010 = 0

Entering the coefficients {1 .01 –.001} into an online quad solver yields the roots .027 and –.037.
Taking the positive one, we have [H+] = .027 M; the solution pH is – log .027 = 1.6.

Note: a common error is to forget to enter the minus sign for the last term; try doing this and watch the program blow up!

#### Avoid math altogether and make a log-C vs pH plot

This is not only simple to do (all you need is a scrap of paper and a straightedge), but it will give you far more insight into what's going on, especially in polyprotic systems. All explained in Section 3 of the next lesson.

### Solutions of salts

#### Most salts do not form pH-neutral solutions

Salts such as sodium chloride that can be made by combining a strong acid (HCl) with a strong base (NaOH, KOH) have a neutral pH, but these are exceptions to the general rule that solutions of most salts are mildly acidic or alkaline.

Salts of a strong base and a weak acid yield alkaline solutions. "Hydro-lysis" literally means "water splitting", as exemplified by the reaction A− + H2O → HA + OH−. The term describes what was believed to happen prior to the development of the Brønsted-Lowry proton transfer model. This important property has historically been known as hydrolysis — a term still used by chemists. Some examples:

Potassium cyanide KCN can be thought of as the salt made by combining the strong base KOH with the weak acid HCN: KOH + HCN → KCN + H2O. When solid KCN dissolves in water, this process is reversed, yielding a solution of the two ions. K+, being a "strong ion", does not react with water. But the "weak ion" CN−, being the conjugate base of a weak acid, will have a tendency to abstract protons from water: CN− + H2O → HCN + OH−, causing the solution to remain slightly alkaline.

Sodium bicarbonate NaHCO3 (more properly known as sodium hydrogen carbonate) dissolves in water to yield a solution of the hydrogen carbonate ion HCO3−.
This ion is an ampholyte — that is, it is both the conjugate base of the weak carbonic acid H2CO3, as well as the conjugate acid of the carbonate ion CO32–. The HCO3− ion is therefore amphiprotic: it can both accept and donate protons, so both processes take place:

HCO3− → H+ + CO32–  (acting as an acid)
HCO3− + H2O → H2CO3 + OH−  (acting as a base)

But if we compare the Ka and Kb of HCO3−, it is apparent that its basic nature wins out, so a solution of NaHCO3 will be slightly alkaline. (The value of pKb is found by recalling that pKa + pKb = 14.)

Sodium acetate A solution of CH3COONa serves as a model for the strong-ion salt of any organic acid, since all such acids are weak: CH3COO− + H2O → CH3COOH + OH−

#### Salts of most cations (positive ions) give acidic solutions

The protons can either come from the cation itself (as with the ammonium ion NH4+), or from waters of hydration that are attached to a metallic ion. This latter effect happens with virtually all salts of metals beyond Group I; it is especially noticeable for salts of transition ions such as hexaaquoiron(III) ("ferric ion"):

Fe(H2O)63+ → Fe(H2O)5OH2+ + H+    (2-5)

This comes about because the positive field of the metal enhances the ability of H2O molecules in the hydration shell to lose protons. In the case of the hexahydrated ion shown above, a whole succession of similar steps can occur, but for most practical purposes only the first step is significant.

Problem Example 10 - Aluminum chloride solution

Find the pH of a 0.15 M solution of aluminum chloride.

Solution: The aluminum ion exists in water as hexaaquoaluminum Al(H2O)63+, whose pKa = 4.9, Ka = 10–4.9 = 1.3E–5. Setting x = [H+] = [Al(H2O)5OH2+], the equilibrium expression is

Ka = x² / (0.15 – x) ≈ x² / 0.15

Using the above approximation, we get x ≈ (1.96E–6)½ = 1.4E–3, corresponding to pH = 2.8. Finally, we compute x/Ca = 1.4E–3 ÷ 0.15 = .0093, confirming that we are within the "5% rule".
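Problem Example 10's arithmetic, with the five percent rule checked explicitly before the approximation is accepted (an illustrative sketch; raising an exception on failure is my choice of behavior):

```python
# pH of a weakly acidic cation (e.g. hydrated Al3+) by the usual
# approximation, refusing to answer when the 5% rule is violated.
from math import sqrt, log10

def acidic_salt_ph(Ka, Ca):
    x = sqrt(Ka * Ca)                 # [H+], assuming x << Ca
    if x / Ca > 0.05:                 # five percent rule fails:
        raise ValueError("solve the quadratic instead")
    return -log10(x)

# 0.15 M Al(H2O)6^3+, Ka = 1.3e-5: pH comes out near 2.85,
# i.e. the 2.8 quoted above
print(round(acidic_salt_ph(1.3e-5, 0.15), 2))
```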
The only commonly-encountered salts in which the proton is donated by the cation itself are those of the ammonium ion:

NH4+ → NH3(aq) + H+    (2-6)

Problem Example 11 - Ammonium chloride solution

Calculate the pH of a 0.15 M solution of NH4Cl. The ammonium ion Ka is 5.5E–10.

Solution: According to Eq 2-6 above, we can set [NH3] = [H+] = x, obtaining the same equilibrium expression as in the preceding problem. Because Ka is quite small, we can safely use the approximation 0.15 – x ≈ 0.15, which yields

x = (5.5E–10 × 0.15)½ = 9.1E–6,  so  pH = –log (9.1E–6) = 5.0

#### Most salts of weak acids form alkaline solutions

Because an ion derived from a weak acid such as HF is the conjugate base of that acid, it should not surprise you that a salt such as NaF forms an alkaline solution, even if the equilibrium greatly favors the left side:

F− + H2O ⇌ HF + OH−    (2-7)

As indicated in this example, such equilibria strongly favor the left side; the stronger the acid HA, the less alkaline the salt solution will be.

Problem Example 12 - Solution of sodium fluoride

Find the pH of a 0.15 M solution of NaF. (HF Ka = 6.7E–4)

Solution: The reaction is F− + H2O ⇌ HF + OH−; because HF is a weak acid, this equilibrium strongly favors the left side, and the relevant constant is Kb = Kw/Ka = 1.0E–14 ÷ 6.7E–4 = 1.5E–11. The usual approximation yields

x = [OH−] ≈ (Kb Cb)½ = (1.5E–11 × 0.15)½ = 1.5E–6

Checking x/Cb = 1.5E–6 ÷ 0.15 = 1.0E–5 shows that we are comfortably within the "5% rule". Thus pOH = –log (1.5E–6) = 5.8, and pH = 14.0 – 5.8 = 8.2; as expected, the solution is mildly alkaline.

#### What about a salt of a weak acid and a weak base?

A salt of a weak acid gives an alkaline solution, while that of a weak base yields an acidic solution. What happens if we dissolve a salt of a weak acid and a weak base in water?
Ah, this can get a bit tricky!  Nevertheless, this situation arises very frequently in applications as diverse as physiological chemistry and geochemistry. As an example of how one might approach such a problem, consider a solution of ammonium formate, which contains the ions NH4+ and HCOO. Formic acid, the simplest organic acid, has a pKa of 3.7; for NH4+, pKa = 9.3.

Three equilibria involving these ions are possible here; in addition to the reactions of the ammonium and formate ions with water, we must also take into account their tendency to react with each other to form the parent neutral species:

NH4+ → NH3 + H+   K1 = 10–9.3
HCOO + H2O → HCOOH + OH   K2 = (10–14/10–3.7) = 10–10.3
NH4+ + HCOO → NH3 + HCOOH   K3

Inspection reveals that the last equation above is the sum of the first two, plus the reverse of the dissociation of water

H+ + OH → H2O   K4 = 1/Kw

The value of K3 is therefore

K3 = K1K2/Kw = (10–9.3 × 10–10.3) ÷ 10–14 = 10–5.6   (2-8)

A rigorous treatment of this system would require that we solve these equations simultaneously with the charge balance and the two mass balance equations. However, because K3 is several orders of magnitude greater than K1 or K2, we can greatly simplify things by neglecting the other equilibria and considering only the reaction between the ammonium and formate ions. Notice that the products of this reaction will tend to suppress the extent of the first two equilibria, reducing their importance even more than the relative values of the equilibrium constants would indicate.

Problem Example 12 - ammonium formate

Estimate the pH of a 0.0100 M solution of ammonium formate in water.

Solution: From the stoichiometry of HCOONH4,

[NH4+] = [HCOO]    and    [NH3] = [HCOOH]   (i)

then, from Eq 8 above,

(ii)

in which Kb is the base constant of ammonia, Kw/10–9.3.
From the formic acid dissociation equilibrium we have

(iii)

We now rewrite the expression for K3

(iv)

which yields

[H+] = (10–3.7 × 10–9.3)½ = 10–6.5   (v)

and thus the pH is 6.5.

What is interesting about this last example is that the pH of the solution is apparently independent of the concentration of the salt. If Ka = Kb, then this is always true and the solution will be neutral (neglecting activity effects in solutions of high ionic strength). Otherwise, it is only an approximation that remains valid as long as the salt concentration is substantially larger than the magnitude of either equilibrium constant. Clearly, the pH of any solution must approach that of pure water as the solution becomes more dilute.

#### Salts of ampholyte ions

Polyprotic acids form multiple anions; those that can themselves donate protons, and are thus amphiprotic, are called ampholytes.  The most widely known of these is the bicarbonate (hydrogen carbonate) ion, HCO3, which we commonly know in the form of its sodium salt NaHCO3 as baking soda. The other ampholyte series that is widely encountered, especially in biochemistry, is that derived from phosphoric acid.

The solutions of ampholyte ions we most often need to deal with are those of their salts with "strong ions", usually Na+, but sometimes those of Group 2 cations such as Ca2+. The exact treatment of these systems is generally rather complicated, but for the special cases in which the successive Ka's of the parent acid are separated by several orders of magnitude (as in the two systems illustrated above), a series of approximations reduces the problem to the simple expression

[H+] ≈ (K1K2)½, that is, pH = ½(pK1 + pK2)   (2-9)

which, you will notice, predicts (as does the treatment of a weak acid–weak base salt in the preceding subsection) that the pH is independent of the salt's concentration. This, of course, is a sure indication that this treatment is incomplete.  Fortunately, however, it works reasonably well for most practical purposes, which commonly involve buffer solutions.
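Both of these results, the weak acid–weak base salt and the ampholyte expression 2-9, boil down to averaging two pK values, which makes them easy to apply in a couple of lines. A sketch (the helper name is mine):

```python
def ph_two_pk(pk_first, pk_second):
    """pH estimate as the mean of the two relevant pKa's of the system."""
    return (pk_first + pk_second) / 2

# ammonium formate: pKa(HCOOH) = 3.7, pKa(NH4+) = 9.3
print(round(ph_two_pk(3.7, 9.3), 2))    # 6.5, as in Problem Example 12

# sodium bicarbonate: pK1 = 6.4, pK2 = 10.3 of the carbonate system
print(round(ph_two_pk(6.4, 10.3), 2))   # 8.35, slightly alkaline as expected
```

Note that the estimate contains no concentration term at all, which is exactly the limitation discussed above: it fails once the solution becomes dilute enough that the neglected equilibria matter.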
### Exact treatment of solutions of weak acids and bases

When dealing with acid-base systems having very small Ka's, and/or solutions that are extremely dilute, it may be necessary to consider all the species present in the solution, including minor ones such as OH. This is almost never required in first-year courses.  But for students in more advanced courses, this "comprehensive approach" (as it is often called) illustrates the important general methodology of dealing with more complex equilibrium problems. It also shows explicitly how making various approximations gradually simplifies the treatment of more complex systems. In order to keep the size of the present lesson within reasonable bounds (and to shield the sensitive eyes of beginners from the shock of confronting simultaneous equations), this material has been placed in a separate lesson.

3  Solutions of polyprotic acids

A diprotic acid H2A can donate its protons in two steps:

H2A → HA– → A2–

and similarly, for a triprotic acid H3A:

H3A → H2A– → HA2– → A3–

In general, we can expect Ka2 for the "second ionization" to be smaller than Ka1 for the first step because it is more difficult to remove a proton from a negatively charged species.  The magnitude of this difference depends very much on whether the two removable protons are linked to the same atom, or to separate atoms spaced farther apart.

### Some polyprotic acids you should know

These acids are listed in the order of decreasing Ka1. The numbers above the arrows show the successive Ka's of each acid.

• Notice how successive Ka's of each acid become smaller, and how their ratios relate to the structures of each acid.
• Sulfuric acid is the only strong polyprotic acid you are likely to encounter.
• Sulfurous acid has never been detected; what we refer to as H2SO3 is more properly represented as a hydrated form of sulfur dioxide, which dissociates into the bisulfite ion: SO2·H2O → HSO3 + H+.
The ion HSO3 exists only in solution; solid bisulfite salts are not known.

• Similarly, pure carbonic acid has never been isolated, but it does exist as a minority species in an aqueous solution of CO2: [CO2(aq)] = 650[H2CO3]. The formula H2CO3 ordinarily represents the combination of CO2(aq) and "true" H2CO3.  The latter is actually about a thousand times stronger than is indicated by the pKa of 6.3, which is the weighted average of the equilibrium mixture.

pH of a polyprotic acid (LindaHanson, 17 min)

Problem Example 13 - Comparison of two diprotic acids

Compare the successive pKa values of sulfuric and oxalic acids (see their structures in the box, above right), and explain why they should be so different.

Solution: The two pKa values of sulfuric acid differ by 3.0 – (–1.9) = 4.9, whereas for oxalic acid the difference is 4.3 – 1.3 = 3.0. Thus the ratio Ka1/Ka2 is nearly 100 times greater for sulfuric acid than for oxalic acid. Removal of a second proton from a molecule that already carries some negative charge is always expected to be less favorable energetically. In sulfuric acid, the two protons come from –OH groups connected to the same sulfur atom, so the negative charge that impedes loss of the second proton is more localized near the site of its removal. In oxalic acid, the two protons are removed from –OH groups attached to separate carbon atoms, so the negative charge of the mono-negative ion will exert less restraint on loss of the second proton.

### Solutions of polyprotic acids in pure water

With the exception of sulfuric acid (and some other seldom-encountered strong diprotic acids), most polyprotic acids have sufficiently small Ka1 values that their aqueous solutions contain significant concentrations of the free acid as well as of the various dissociated anions.
Thus for a typical diprotic acid H2A, we must consider the equilibria

H2A → H+ + HA–    K1
HA– → H+ + A2–    K2
H2O → H+ + OH    Kw

An exact treatment of such a system of four unknowns [H2A], [HA–], [A2–] and [H+] requires the solution of a quartic equation. If we include [OH], it's even worse!

In most practical cases, we can make some simplifying approximations:

• Unless the solution is extremely dilute or K1 (and all the subsequent K's) are extremely small, we can forget that any hydroxide ions are present at all.
• If K1 is quite small, and the ratios of succeeding K's are reasonably large, we may be able, without introducing too much error, to neglect the other K's and treat the acid as monoprotic.

In addition to the three equilibria listed above, a solution of a polyprotic acid in pure water is subject to the following two conditions:

Material balance: although the distribution of species between the acid form H2A and its base forms HA– and A2– may vary, their sum (defined as the "total acid concentration" Ca) is a constant for a particular solution:

Ca = [H2A] + [HA–] + [A2–]

Charge balance: the solution may not possess a net electrical charge:

[H3O+] = [OH] + [HA–] + 2 [A2–]

Why do we multiply [A2–] by 2?  Two moles of H3O+ are needed in order to balance out the charge of 1 mole of A2–.

### Simplified treatment of polyprotic acid solutions

The calculations shown in this section are all you need for the polyprotic acid problems encountered in most first-year college chemistry courses. More advanced courses may require the more exact methods in Lesson 7.

Because the successive equilibrium constants for most of the weak polyprotic acids we ordinarily deal with diminish by several orders of magnitude, we can usually get away with considering only the first ionization step. To the extent that this is true, there is nothing really new to learn here.
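The balance equations above also show that the "exact" treatment is really a one-variable root-finding problem: express every species in terms of h = [H+] and find where the charge balance is satisfied. A sketch using a simple bisection (the constants are those of the 0.050 M CO2 example treated below):

```python
import math

# Exact diprotic-acid pH via the charge balance
#   [H3O+] = [OH-] + [HA-] + 2[A2-]
# with each species concentration written as a function of h = [H+].
K1, K2, Kw, Ca = 4.5e-7, 5.0e-11, 1.0e-14, 0.050

def charge_imbalance(h):
    D = h*h + K1*h + K1*K2          # common denominator of the fractions
    ha = Ca * K1 * h / D            # [HA-]
    a2 = Ca * K1 * K2 / D           # [A2-]
    return h - (Kw / h + ha + 2 * a2)

lo, hi = 1e-12, 1.0                 # bracketing guesses for [H+]
for _ in range(100):                # bisection on a log scale
    mid = math.sqrt(lo * hi)        # geometric mean suits the wide h range
    if charge_imbalance(mid) > 0:
        hi = mid
    else:
        lo = mid

print(round(-math.log10(hi), 1))    # pH ≈ 3.8 for 0.050 M CO2
```

The answer agrees with the simplified first-ionization estimate, which is the point of the approximations that follow: for well-separated K's, the extra terms in the charge balance change almost nothing.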
However, without getting into a lot of complicated arithmetic, we can often go farther and estimate the additional quantity of H+ produced by the second ionization step.

Problem Example 14 - Solution of CO2 in water

a) Calculate the pH of a 0.050 M solution of CO2 in water. For H2CO3, K1 = 10–6.4 = 4.5E–7, K2 = 10–10.3 = 5.0E–11.
b) Estimate the concentration of carbonate ion CO32– in the solution.

Solution: a) Because K1 and K2 differ by almost four orders of magnitude, we will initially neglect the second dissociation step. Because this latter step produces only a tiny additional concentration of H+, we can assume that [H+] = [HCO3] = x:

(i)

Can we further simplify this expression by dropping the x in the denominator? Let's try:  x = [0.05 × (4.5E–7)]½ = 1.5E–4. Applying the "5-percent test", the quotient x/Ca must not exceed 0.05.  Working this out yields (1.5E–4)/(.05) = .003, so we can avoid a quadratic. Thus x = [H+] ≈ (KaCa)½ = [(4.5E–7) × .05]½ = 1.5E–4 M, and the pH = –log(1.5E–4) = 3.8.

b) We now wish to estimate [CO32–] ≡ x. Examining the second dissociation step, it is evident that this will consume x mol/L of HCO3, and produce an equivalent amount of H+ which adds to the quantity we calculated in (a).

(ii)

Owing to the very small value of K2 compared to K1, we can assume that the concentrations of HCO3 and H+ produced in the first dissociation step will not be significantly altered in this step. This allows us to simplify the equilibrium constant expression and solve directly for [CO32–]:

(iii)

It is of course no coincidence that this estimate of [CO32–] yields a value identical with K2; this is entirely a consequence of the simplifying assumptions we have made. Nevertheless, as long as K2 << K1 and the solution is not highly dilute, the result will be sufficiently accurate for most purposes.

#### Sulfuric acid: a special case

Although this is a strong acid, it is also diprotic, and in its second dissociation step it acts as a weak acid.
Because sulfuric acid is so widely employed both in the laboratory and industry, it is useful to examine the result of taking its second dissociation into consideration.

Problem Example 15 - solution of H2SO4

Estimate the pH of a 0.010 M solution of H2SO4.  K1 ≈ 1000, K2 = 0.012

Solution: Because K1 >> 1, we can assume that a solution of this acid will be completely dissociated into H3O+ and bisulfate ions HSO4. Thus the only equilibrium we need to consider is the further dissociation of the 0.010 M bisulfate:

HSO4 + H2O → SO42– + H3O+

Setting [SO42–] = x, so that [H3O+] = 0.010 + x and [HSO4] = 0.010 – x, and dropping x from both of these terms, yields x ≈ K2 = 0.012. Applying the "five percent rule", we find that x/Ca = .012/.010 = 1.2, which is far over the allowable error, so we must proceed with the quadratic form. Rewriting the equilibrium expression x(0.010 + x)/(0.010 – x) = 0.012 in polynomial form gives

x2 + 0.022x – 1.2E–4 = 0

Inserting the coefficients {1 .022 –.00012} into a quad-solver utility yields the roots 4.5E–3 and –2.7E–2. Taking the positive root, [H3O+] = 0.010 + .0045 = .0145 M, so we obtain pH = –log(.0145) = 1.8. Thus the second "ionization" of H2SO4 has reduced the pH of the solution by about 0.2 unit from the value (2.0) it would theoretically have if only the first step were considered.

4  Amino acids and Zwitterions

Most first-year General Chemistry students may skip this section. This material is covered mainly in biochemistry courses.

#### Zwitterions: molecular hermaphrodites

Amino acids, the building blocks of proteins, contain amino groups –NH2 that can accept protons, and carboxyl groups –COOH that can lose protons. Under certain conditions, these events can occur simultaneously, so that the resulting molecule becomes a “double ion” which goes by its German name Zwitterion.

### Glycine: the simplest amino acid

The simplest of the twenty natural amino acids that occur in proteins is glycine H2N–CH2–COOH, which we use as an example here.
Solutions of glycine are distributed between the acidic, zwitterion, and basic species:

(3-1)

Although the zwitterionic species is amphiprotic, it differs from a typical ampholyte such as HCO3 in that it is electrically neutral owing to the cancellation of the opposite electrical charges on the amino and carboxyl groups. Amino acids are the most commonly-encountered kind of zwitterions, but other substances, such as quaternary ammonium compounds, also fall into this category.  For more on Zwitterions, see this Wikipedia article or this UK ChemGuide page.

In the following development, we use the abbreviations H2Gly+ (glycinium), HGly (zwitterion), and Gly (glycinate) to denote the dissolved forms. The two acidity constants are

(3-2)

If glycine is dissolved in water, charge balance requires that

[H2Gly+] + [H+] = [Gly] + [OH]   (3-3)

Substituting the equilibrium constant expressions (including that for the autoprotolysis of water) into the above relation yields

(3-4)

These videos discuss the pH-related chemistry of amino acids:

Amino acid pKa’s and their acid/base forms (UCBerkeley, 7 min)
Net charge and isoelectric point of an amino acid (Khan, 5½ min)
Amino acid effective charge - quiz (UCBerkeley, 3 min)

If Eqs ii and iii in the Problem Example below are recalculated for a range of pH values, one can plot the concentrations of each species against pH for 0.10 M glycine in water. This distribution diagram shows that the zwitterion is the predominant species between the pH values corresponding to the two pKas given in Eq 3-1.

Problem Example 16 - glycine solution speciation

Calculate the pH and the concentrations of the various species in a 0.100 M solution of glycine.
Solution: Substitution into Eq 4 above yields

(i)

The concentrations of the acid and base forms are found from their respective equilibrium constant expressions (Eqs 2):

(ii)

(iii)

The small concentrations of these singly-charged species in relation to Ca = 0.10 show that the zwitterion is the only significant glycine species in the solution.

What you should be able to do

Make sure you thoroughly understand the following essential concepts that have been presented above.

• Explain the difference between the terms Ca and [HA] as they relate to an aqueous solution of the acid HA.
• Define degree of dissociation and sketch a plot showing how the values of α for a conjugate pair HA and A relate to each other and to the pKa.
• Derive Eq 2-2 relating the acid concentration and dissociation constant to the hydrogen ion concentration in a solution of a weak acid.
• Re-write the above relation in polynomial form.
• Derive the approximation (Eq 2-4) that is often used to estimate the pH of a solution of a weak acid in water.
• Calculate the pH of a solution of a weak monoprotic acid or base, employing the "five-percent rule" to determine if the approximation 2-4 is justified.
• Predict whether an aqueous solution of a salt will be acidic or alkaline, and explain why by writing an appropriate equation.
zbMATH — the first resource for mathematics Asymptotic behaviors of radially symmetric solutions of $$\square u=| u| ^p$$ for super critical values $$p$$ in high dimensions. (English) Zbl 0925.35101 MSC: 35L70 Second-order nonlinear hyperbolic equations 35B40 Asymptotic behavior of solutions to PDEs 35B45 A priori estimates in context of PDEs 35D10 Regularity of generalized solutions of PDE (MSC2000) Keywords: scattering operator
## Reading the Comics, November 13, 2019: I Could Have Posted This Wednesday Edition Now let me discuss the comic strips from last week with some real meat to their subject matter. There weren’t many: after Wednesday of last week there were only casual mentions of any mathematics topic. But one of the strips got me quite excited. You’ll know which soon enough. Mac King and Bill King’s Magic in a Minute for the 10th uses everyone’s favorite topological construct to do a magic trick. This one uses a neat quirk of the Möbius strip: that if sliced along the center of its continuous loop you get not two separate shapes but one Möbius strip of greater length. There are more astounding feats possible. If the strip were cut one-third of the way from an edge it would slice the strip into two shapes, one another Möbius strip and one a simple loop. Or consider not starting with a Möbius strip. Make the strip of paper by taking one end and twisting it twice around, for a full loop, before taping it to the other end. Slice this down the center and what results are two interlinked rings. Or place three twists in the original strip of paper before taping the ends together. Then, the shape, cut down the center, unfolds into a trefoil knot. But this would take some expert hand work to conceal the loops from the audience while cutting. It’d be a neat stunt if you could stage it, though. Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 10th uses mathematics as obfuscation. We value mathematics for being able to make precise and definitely true statements. And for being able to describe the world with precision and clarity. But this has got the danger that people hear mathematical terms and tune out, trusting that the point will be along soon after some complicated talk. Brian Boychuk and Ron Boychuk’s The Chuckle Brothers for the 11th would be a Pi Day joke if it hadn’t run in November. 
But when this strip first ran, in 2010, Pi Day was not such a big event in the STEM/Internet community. The Boychuks couldn’t have known. The formulas on the blackboard are nearly all legitimate, and correct, formulas for the value of π. The upper-left and the lower-right formulas are integrals, and ones that correspond to particular trigonometric formulas. The middle-left and the upper-right formulas are series, the sums of infinitely many terms. The one in the upper right, $\sum \frac{1}{n^2} = \frac{\pi^2}{6}$, was roughly proven by Leonhard Euler. Euler developed a proof that’s convincing, but that assumed that infinitely-long polynomials behave just like finitely-long polynomials. In this context, he was correct, but this can’t be generally trusted to happen. We’ve got proofs that, to our eyes, seem rigorous enough now. The center-left formula doesn’t look correct to me. To my eye, this looks like a mistaken representation of the formula $\pi = 2 \sum_{k = 0}^{\infty} \frac{2^k \cdot k!^2}{\left(2k + 1\right)!}$. But it’s obscured by Haskins’s head. It may be that this formula’s written in a format that, in full, would be correct. There are many, many formulas for π (here’s Mathworld’s page of them and here’s Wikipedia’s page of π formulas); it’s impossible to list them all. The center-right formula is interesting because, in part, it looks weird. It’s written out as $\pi = \frac{4}{6+}\frac{1^2}{6+}\frac{3^2}{6+}\frac{5^2}{6+}\frac{7^2}{6+} \cdots$ That looks at first glance like something’s gone wrong with one of those infinite-product series for π. Not so; this is a notation used for continued fractions. A continued fraction has a string of denominators that are typically some whole number plus another fraction. Often the denominator of that fraction will itself be a whole number plus another fraction. This gets to be typographically challenging. So we have this notation instead.
Its syntax is that $a + \frac{b}{c + \frac{d}{e + \frac{f}{g}}} = a + \frac{b}{c+} \frac{d}{e+} \frac{f}{g}$ There are many attractive formulas for π. It’s tempting to say this is because π is such a lovely number it naturally has beautiful formulas. But more likely humans are so interested in π we go looking for formulas with some appealing sequence to them. There are some awful-looking formulas out there too. I don’t know your tastes, but for me I feel my heart cool when I see that π is equal to four divided by this number: $\sum_{n = 0}^{\infty} \frac{(-1)^n (4n)! (21460n + 1123)}{(n!)^4 441^{2n + 1} 2^{10n + 1}}$ however much I might admire the ingenuity which found that relationship, and however efficiently it may calculate digits of π. Glenn McCoy and Gary McCoy’s The Duplex for the 13th uses skill at arithmetic as shorthand for proving someone’s a teacher. There’s clearly some implicit idea that this is a school teacher, probably for elementary schools, and doesn’t have a particular specialty. But it is only three panels; they have to get the joke done, after all. And that’s all for the comic strips this week. Come Sunday I should have another Reading the Comics post. And the Fall 2019 A-to-Z draws closer to its conclusion with two more essays, trusting that I can indeed write them, for Tuesday and Thursday. I also have something disturbing to write about for Wednesday. Can’t wait. ## Reading the Comics, August 29, 2018: The Week I Missed One Edition Have you ever wondered how I gather comic strips for these Reading the Comics posts? Sure, why not go along with me. Well, I do it by this: I read a lot of comic strips. When I run across one that’s got some mathematical theme, I copy the URL for it over to a page of notes. Then I go back to those notes and write up a paragraph or so on each. That is, I do it exactly the way you might imagine if you weren’t very imaginative or trying hard. I explain all this to say that I made a note that I then didn’t notice.
So I missed a comic strip. And opened myself up to wondering if there’s an etymological link between “note” and “notice”. Anyway, it’s here. I’m just explaining why it’s late. Jim Toomey’s Sherman’s Lagoon for the 19th of August is the belated inclusion. It’s a probability strip. It’s built partly on how badly people estimate probability, especially of rare events. And of how badly people estimate the consequences of rare events. For anything that isn’t common, our feelings about the likelihood of events are garbage. And even for common events we’re not that good. But then it’s hard to quantify a low-probability event too. Take the claim that a human has one chance in 3.7 million of being attacked by a shark. We’ll pretend that’s the real number; I don’t know what is. (I’m suspicious of the ‘3-7’. People picking a random two-digit number are surprisingly likely to pick 37 because, I guess, it ‘feels’ random.) Is that over their lifetime? Over a summer? In a single swimming event? In any case it’s such a tiny chance it’s not worth serious worry. But even then, a person who lives in Wisconsin and only ever swims in Lake Michigan has a considerably smaller chance of shark attack than a person from New Jersey who swims at the Shore. At least some of these things are probabilities we can affect. So the fellow may be irrational, denying himself something he’d enjoy based on a fantastically unlikely event. But he is acting to avoid something he’s decided he doesn’t want to risk. And, you know, we all act irrationally at times, or else I couldn’t justify buying a lottery ticket every eight months or so. Also is Fillmore (the turtle) the person who needs to hear this argument? Gary McCoy and Glenn McCoy’s The Duplex for the 26th is an accounting joke. And a cry about poverty, with the idea that one could make the adding up of one’s assets and debts work only by making mathematics logically inconsistent. Or maybe inconsistent. 
Arithmetic modulo a particular number could be said to make zero equal to some other number, after all, and that’s all valid. Useful, too, especially in enciphering messages and in generating random numbers. It’s less useful for accounting, though. At least it would draw attention if used unilaterally. Steve Kelley and Jeff Parker’s Dustin for the 28th is roughly a student-resisting-the-homework problem. From the first panel I thought Hayden might be complaining that ‘x’ was used, once again, as the variable to be solved for. It is the default choice, made because we all grew up learning of ‘x’ as the first choice for a number with a not-yet-known identity. ‘y’ and ‘z’ come in as second and third choices, most likely because they’re quite close to ‘x’. Sometimes another letter stands out, usually because the problem compels it. If the framing of the problem is about when events happen then ‘t’ becomes the default choice. If the problem suggests circular motion then ‘r’ or ‘θ’ — radius and angle — become compelling. But if we know no context, and have only the one variable, then ‘x’ it is. It seems weird to do otherwise. Bill Holbrook’s On The Fastrack for the 28th is part of a week of Fi talking about mathematics to kids. She occasionally delivers seminars meant to encourage enthusiasm about mathematics. I love the principle although I don’t know how long the effect lasts. (Although it is kind of what I’m doing here. Except I think maybe Fi gets paid.) Holbrook’s strips of this mode often include nice literal depictions of metaphors. This week didn’t offer much chance for that particular artistic play. I have at least one, and often several, Reading the Comics posts, each week. They should all appear at this link. Other essays with Sherman’s Lagoon will appear at this link when they’re written. I’m surprised to learn that’s a new tag. Essays that mention The Duplex are at this link. 
Other appearances by Dustin, a character who does not appear in this particular essay’s strips, are at this link. And On The Fastrack mentions should appear at this link. Thank you. ## Reading the Comics, July 21, 2018: Infinite Hotels Edition Ryan North’s Dinosaur Comics for the 18th is based on Hilbert’s Hotel. This is a construct very familiar to eager young mathematicians. It’s an almost unavoidable pop-mathematics introduction to infinitely large sets. It’s a great introduction because the model is so mundane as to be easily imagined. But you can imagine experiments with intuition-challenging results. T-Rex describes one of the classic examples in the third through fifth panels. The strip made me wonder about the origins of Hilbert’s Hotel. Everyone doing pop mathematics uses the example, but who created it? And the startling result is, David Hilbert, kind of. My reference here is Helge Kragh’s paper The True (?) Story of Hilbert’s Infinite Hotel. Apparently in a 1924-25 lecture series in Göttingen, Hilbert encouraged people to think of a hotel with infinitely many rooms. He apparently did not use it for so many examples as pop mathematicians would. He just used the question of how to accommodate a single new guest after the infinitely many rooms were first filled. And then went to imagine an infinite dance party. I don’t remember ever seeing the dance party in the wild; perhaps it’s a casualty of modern rave culture. Hilbert’s Hotel seems to have next seen print in George Gamow’s One, Two Three … Infinity. Gamow summoned the hotel back from the realms of forgotten pop mathematics with a casual, jokey tone that fooled Kragh into thinking he’d invented the model and whimsically credited Hilbert with it. (Gamow was prone to this sort of lighthearted touch.) He came back to it in The Creation Of The Universe, less to make readers consider the modern understanding of infinitely large sets than to argue for a universe having infinitely many things in it. 
And then it disappeared again, except for cameo appearances trying to argue that the steady-state universe would be more bizarre than what we actually see. The philosopher Pamela Huby seems to have made Hilbert’s Hotel a thing to talk about again, as part of a debate about whether a universe could be infinite in extent. William Lane Craig furthered using the hotel, as part of the theological debate about whether there could be an infinite temporal regress of events. Rudy Rucker and Eli Maor wrote descriptions of the idea in the 1980s, with vague ideas about whether Hilbert actually had anything to do with the place. And since then it’s stayed, a famous fictional hotel. David Hilbert was born in 1862; T-Rex misspoke. Ernie Bushmiller’s Nancy Classics for the 20th gets me out of my Olivia Jaimes rut. We could probably get a good discussion going about whether giving an example of a sphere is an adequate description of a sphere. Granted that a bubble-gum bubble won’t be perfectly spherical; neither will any example that exists in reality. We always trust that we can generalize to an ideal example of this thing. I did get to wondering, in Sluggo’s description of the octagon, why the specification of eight sides and eight angles. I suspect it’s meant to avoid calling an octagon something that, say, crosses over itself, thus having more angles than sides. Not sure, though. It might be a phrasing intended to make sure one remembers that there are sides and there are angles and the polygon can be interesting for both sets of component parts. John Atkinson’s Wrong Hands for the 20th is the Venn Diagram joke for the week. The half-week anyway. Also a bunch of other graph jokes for the week. Nice compilation of things. I love the paradoxical labelling of the sections of the Venn Diagram. Tom II Wilson’s Ziggy for the 20th is a plaintive cry for help from a despairing soul. Who’s adding up four- and five-digit numbers by hand for some reason. 
Ziggy’s got his projects, I guess is what’s going on here. Glenn McCoy and Gary McCoy’s The Duplex for the 21st is set up as an I-hate-word-problems joke. The cop does ask something people would generally like to know, though: how much longer would it take, going 60 miles per hour rather than 70? It turns out it’s easy to estimate what a small change in speed does to arrival time. Roughly speaking, reducing the speed one percent increases the travel time one percent. Similarly, increasing speed one percent decreases travel time one percent. Going about five percent slower should make the travel time a little more than five percent longer. Going from 70 to 60 miles per hour reduces the speed about fifteen percent. So travel time is going to be a bit more than 15 percent longer. If it was going to be an hour to get there, now it’ll be an hour and ten minutes. Roughly. The quality of this approximation gets worse the bigger the change is. Cutting the speed 50 percent increases the travel time rather more than 50 percent. But for small changes, we have it easier. There are a couple ways to look at this. One is as an infinite series. Suppose you’re travelling a distance ‘d’, and had been doing it at the speed ‘v’, but now you have to decelerate by a small amount, ‘s’. Then this is something true about your travel time ‘t’, and I ask you to take my word for it because it has been a very long week and I haven’t the strength to argue the proposition: $t = \frac{d}{v - s} = \frac{d}{v}\left(1 + \left(\frac{s}{v}\right) + \left(\frac{s}{v}\right)^2 + \left(\frac{s}{v}\right)^3 + \left(\frac{s}{v}\right)^4 + \left(\frac{s}{v}\right)^5 + \cdots \right)$ ‘d’ divided by ‘v’ is how long your travel took at the original speed. And, now, $\left(\frac{s}{v}\right)$ — the fraction of how much you’ve changed your speed — is, by assumption, small. The speed only changed a little bit. So $\left(\frac{s}{v}\right)^2$ is tiny. And $\left(\frac{s}{v}\right)^3$ is impossibly tiny. 
And $\left(\frac{s}{v}\right)^4$ is ridiculously tiny. You make an error in dropping these $\left(\frac{s}{v}\right)$ squared and cubed and fourth-power and higher terms. But you don’t make much of one, not if s is small enough compared to v. And that means your estimate of the new travel time is:

$\frac{d}{v} \left(1 + \frac{s}{v}\right)$

Or, that is, if you reduce the speed by (say) five percent of what you started with, you increase the travel time by five percent. Varying one important quantity by a small amount is what we know as a “perturbation”. Working out the approximate change in one quantity based on a perturbation is a key part of a lot of calculus, and a lot of mathematical modeling. It can feel illicit; after a lifetime of learning how mathematics is precise and exact, it’s hard to deliberately throw away stuff you know is not zero. It gets you to good places, though, and fast.

Morrie Turner’s Wee Pals for the 21st shows Wellington having trouble with partitions. We can divide any counting number up into the sum of other counting numbers in, usually, many ways. I can kind of see his point; there is something strange that we can express a single idea in so many different-looking ways. I’m not sure how to get Wellington where he needs to be. I suspect that some examples with dimes, quarters, and nickels would help.

And this is marginal but the “Soul Circle” personal profile for the 20th of July — rerun from the 20th of July, 2013 — was about Dr Cecil T Draper, a mathematics professor.

You can get to this and more Reading the Comics posts at this link. Other essays mentioning Dinosaur Comics are at this link. Essays that describe Nancy, vintage and modern, are at this link. Wrong Hands gets discussed in essays on this link. Other Ziggy-based essays are at this link. The Duplex will get mentioned in essays at this link if any other examples of the strip get tagged here. And other Wee Pals strips get reviewed at this link.
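By the way, the approximation is easy to check numerically. A minimal sketch, in Python, using the 70-to-60 miles per hour example from the strip (the function names are mine):

```python
def exact_time(d, v, s):
    """Exact travel time over distance d at the reduced speed v - s."""
    return d / (v - s)

def approx_time(d, v, s):
    """First-order perturbation estimate: (d/v) * (1 + s/v)."""
    return (d / v) * (1 + s / v)

# Driving 70 miles at 70 mph takes an hour; slow down by s = 10 mph.
exact = exact_time(70, 70, 10)     # 70/60 hours, i.e. 70 minutes
approx = approx_time(70, 70, 10)   # (1 + 1/7) hours, about 68.6 minutes
print(round(exact * 60, 1), round(approx * 60, 1))   # 70.0 68.6
```

The estimate comes in a couple minutes under the exact answer, as expected: the dropped squared-and-higher terms are all positive, so the first-order guess always slightly understates the delay.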
## Calculus: Early Transcendentals (2nd Edition)

$f(x)=x^{2}-4x+5$

$f^{-1}(x)=2+\sqrt{x-1}$

$f(x)=x^{2}-4x+5$, for $x\gt2$

Substitute $f(x)$ by $y$: $y=x^{2}-4x+5$

Group the first two terms on the right side of the equation together: $y=(x^{2}-4x)+5$

Complete the square for the expression inside parentheses. Do so by adding $\Big(\dfrac{b}{2}\Big)^{2}$ to the expression inside parentheses and subtracting it from the expression outside of the parentheses. In this case, $b=-4$:

$y=\Big[x^{2}-4x+\Big(-\dfrac{4}{2}\Big)^{2}\Big]+5-\Big(-\dfrac{4}{2}\Big)^{2}$

$y=(x^{2}-4x+4)+5-4$

$y=(x^{2}-4x+4)+1$

Factor the expression inside parentheses, which is a perfect square trinomial: $y=(x-2)^{2}+1$

Take $1$ to the left side and rearrange: $y-1=(x-2)^{2}$

$(x-2)^{2}=y-1$

Take the square root of both sides, keeping the positive root since $x\gt2$ means $x-2\gt0$:

$\sqrt{(x-2)^{2}}=\sqrt{y-1}$

$x-2=\sqrt{y-1}$

Take $2$ to the right side: $x=2+\sqrt{y-1}$

Interchange $x$ and $y$: $y=2+\sqrt{x-1}$

Substitute $y$ by $f^{-1}(x)$: $f^{-1}(x)=2+\sqrt{x-1}$

The graph of the function and its inverse is shown in the answer section.
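As a quick sanity check (not part of the textbook's solution), one can verify numerically that the two functions undo each other on the restricted domain:

```python
import math

def f(x):
    # Original function, restricted to x > 2
    return x**2 - 4*x + 5

def f_inv(y):
    # Derived inverse, defined for y >= 1
    return 2 + math.sqrt(y - 1)

# f and f_inv should be mutual inverses on the restricted domain.
for x in [2.5, 3, 4, 10]:
    assert abs(f_inv(f(x)) - x) < 1e-9
for y in [1, 2, 5, 26]:
    assert abs(f(f_inv(y)) - y) < 1e-9
print("inverse verified")
```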
Offline Model-Based Reinforcement Learning for Tokamak Control

Ian Char · Joseph Abbate · Laszlo Bardoczi · Mark Boyer · Youngseog Chung · Rory Conlin · Keith Erickson · Viraj Mehta · Nathan Richner · Egemen Kolemen · Jeff Schneider

Unlocking the potential of nuclear fusion as an energy source would have profound impacts on the world. Nuclear fusion is an attractive energy source since the fuel is abundant, there is no risk of meltdown, and there are no high-level radioactive byproducts \citep{walker2020introduction}. Perhaps the most promising technology for harnessing nuclear fusion as a power source is the tokamak: a device that relies on magnetic fields to confine a torus-shaped plasma. While strides are being made to prove that net energy output is possible with tokamaks \citep{meade200950}, there are still crucial control challenges that exist with these devices \citep{humphreys2015novel}. In this work, we focus on learning controls via offline model-based reinforcement learning for DIII-D, a device operated by General Atomics in San Diego, California. This device has been in operation since 1986, during which there have been over one hundred thousand ``shots'' (runs of the device). We use approximately 15k shots to learn a dynamics model that can predict the evolution of the plasma subject to different actuator settings. This dynamics model can then be used as a simulator that generates experience for the reinforcement learning algorithm to train on. We apply this method to train a controller that uses DIII-D's eight neutral beams to achieve desired $\beta_N$ (the normalized ratio between plasma pressure and magnetic pressure) and differential rotation targets. This controller was then evaluated on the DIII-D device. This work marks one of the first efforts for doing feedback control on a tokamak via a reinforcement learning agent that was trained on historical data alone.
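The pipeline described above (fit a dynamics model to logged transitions, then use it as a simulator for policy training) can be sketched in a toy form. Everything below is an illustrative assumption, not the authors' code: the dynamics are linear, the shapes are invented, and the "policy" is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for logged "shots": transitions (state, action, next state).
S = rng.normal(size=(1000, 4))        # plasma state features (invented)
A = rng.normal(size=(1000, 2))        # actuator settings (invented)
true_W = rng.normal(size=(6, 4))      # unknown dynamics to be recovered
S_next = np.hstack([S, A]) @ true_W

# Step 1: fit a dynamics model offline (here: plain least squares).
X = np.hstack([S, A])
W_hat, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def model_step(s, a):
    """Learned simulator: predicted next state for a state-action pair."""
    return np.concatenate([s, a]) @ W_hat

# Step 2: the learned model generates experience for policy training,
# without touching the real device.  A real system would run an RL
# algorithm here; this loop just rolls out a stand-in controller.
s = S[0]
for _ in range(10):
    a = -0.1 * s[:2]
    s = model_step(s, a)
```

The actual work trains a learned dynamics model on roughly 15k real shots and a proper RL agent on top of it; the sketch only shows how a fitted model slots in as a simulator.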
## Saturday, March 24, 2018

### Hejný's method: solve the problems

Most of the problems aren't the problem, the framing is

I want some of you to spend some time by playing the games – or solving the mathematics problems – at MATIKA.IN (English, play now)

The website includes the interactive environments or "schemes" which are mostly equivalent to everything that pupils exposed to Hejný's method, the most prominent orthodox constructivist education method we know in Czechia these days, have to solve during their 8 years in the elementary school. Helpfully enough, the website is available in English (and 7 other languages). It was coded by Andrej Probst, a young guy doing these things as a charity (who is mostly independent of the movement that spreads the method, as far as I can say), and is used by schools in Czechia, Slovakia, and Hungary.

In Czechia, there are some 4,100 elementary schools. 800 of them teach mathematics by Hejný's method. 200 of those have learned about the website and at least recommend it to the kids. The website is some straightforward JavaScript but it looks fresh and helpful, partly because of illustrations by kovidesign. I surely believe that such websites should be used in conventional mathematics education – and in other subjects, too.

You will know basically everything about Hejný's method – and what it tries to teach the first graders (who turn 7 sometime during the school year)... and up to the eighth graders (who turn 14 sometime during that last year) – if you spend an hour with this website, and if you learn about the broader philosophy of the method: The teacher always wants the kids to feel good, she never teaches any theory, there aren't any textbooks with theory, they don't learn any general rules or formulae, there are just textbooks with similar exercises, kids always work collectively and correct each other, they must learn everything by themselves. The teacher is sidelined and ideally eliminated.
This is in no way a Czech or Slovak (Hejný is mixed) invention. Americans may remember the math wars of the 1990s that were mostly about some radical constructivists' efforts to eliminate teachers or instructors from the math education.

It would be silly to denounce every problem on the page. After all, many of them are analogous to exercises that almost any approach to mathematics education has to include. So in my eyes, most of the lethal defects in the method are in the philosophy (including the relative truth and the idea that kids should totally ignore everything that was invented by other people); in the excessive repetition of these puzzles we will discuss in detail; and especially in the things that are missing for the same reason (the time is already spent on the repetition of the standardized childish puzzles). But sometimes, even what is included is just painful.

OK, return to the website. First graders do lots of stepping, sometimes on the horizontal floor, sometimes on staircases (see YouTube videos with stepping; some of them are from kindergartens). The arrow to the right or left means to add or subtract one, respectively. So you're supposed to solve things like "10 ← ← ←" and write "7" as the answer, OK? Stated with the jargon, the kids use some "Abelian group theory". They remember their state (location) before the steps, and after the steps (adding or subtracting one). That can help them to internalize the numbers and perhaps learn about zero and negative numbers, too.

I have nothing against this method by itself – as something that belongs "somewhere". I just think that the amount of time they (and much older kids) spend by marching and stepping is orders of magnitude higher than it should be. The kids are effectively trained to add larger numbers, like 8+9, by thinking about 17 steps, one by one, in their heads. The method is absolutely obsessed with the elimination or delaying any memorization – such as the fact that 8+9=17.
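Stripped of the pedagogy, the stepping environment is just repeated ±1. A sketch (the arrow encoding is mine: ">" for a right arrow, "<" for a left one):

```python
def step_walk(start, arrows):
    """Each right arrow adds one; each left arrow subtracts one."""
    pos = start
    for a in arrows:
        pos += 1 if a == ">" else -1
    return pos

# The example from the text: 10 followed by three left arrows.
print(step_walk(10, "<<<"))   # 7
```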
You just can't get too far in mathematics if you're scared of similar "memorizable results" that dramatically speed up some problems or "trivialize" a class of tasks. In some sense, all the power of mathematics is about this speedup and trivialization! OK, first graders do see some conventional addition and subtraction of the small numbers, after all. Then they deal with pyramids. There's a triangle of boxes, like in Pascal's triangle, and the top box or bottom box is assigned the number that is the sum of two boxes underneath or above this box. You fill in the missing integers. Again, there's nothing wrong about this kind of a puzzle if it appears in isolation. What's wrong is the obsession with this problem – which, in Hejný's classrooms, becomes basically 1/10 of all of mathematics of the elementary school. Pyramids continue up to the sixth grade. The only progress is that one quickly gets from 3-level pyramids to 4-level pyramids; occasionally, you need to subtract because some other numbers in the boxes are missing, not the sums; and the sixth graders add not just integers but numbers like 7.9 – one digit after the decimal point. Is that the appropriate progress after four years of playing with this exercise? I don't think so. Triangles are virtually the same thing as pyramids except that more numbers are missing. That would make it too hard or ambiguous so you're often given the list of numbers that are missing – in the wrong order. Or you're told some extra condition that the missing numbers satisfy. So all these problems end up being "solved" by brute force, simply by trying all possible permutations or small integers that can be filled somewhere. That's how completely uninformed people may solve Sudoku – after all, it's almost the same thing. But there's no mathematics in it because mathematics only starts once you find some cleverness to deal with the simple objects around you. 
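For concreteness, the pyramid environment is entirely mechanical once the bottom row is known; a sketch (the layout is my reading of the scheme, not taken from the textbooks):

```python
def build_pyramid(bottom):
    """Fill a pyramid upward: each box is the sum of the two
    boxes directly beneath it."""
    levels = [list(bottom)]
    while len(levels[-1]) > 1:
        row = levels[-1]
        levels.append([row[i] + row[i + 1] for i in range(len(row) - 1)])
    return levels

# A 3-level pyramid with bottom row 1, 2, 3:
print(build_pyramid([1, 2, 3]))   # [[1, 2, 3], [3, 5], [8]]
```

The variants with missing boxes elsewhere reduce to the same additions and subtractions run in the other direction.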
Triangles are a bit less straightforward than pyramids so they continue up to the sixth grade. Six years of repeating the same kind of problems that you're supposed to solve almost entirely by brute force, without learning any methods, formulae, tricks, anything. You know, mathematics has a lot of levels of abstraction – everyone who is close to mathematics must totally know what I am talking about. The kids in Hejný's classroom always stay on the ground floor mentally; they only get more experienced with the ground floor. Snakes and spider webs (they look like similar problems). Boxes are connected with arrows. Each oriented arrow means "plus 3 equals" or "times 4 equals" and you have to fill in the number to the boxes or to the arrows. Thank God, that environment only continues through the third grade. Spider webs are a bit more complex and "two-dimensional", snakes are mostly one-dimensional. In spider webs, only addition is trained, it seems, and it continues through the fifth grade. In the spider web above, only three colored "additive arrows" are used. Fill in the numbers so that the sum of 3 neighbors is always 8. Obviously, the rule only works for 3x1 or 1x3 neighbors, not for L-shape neighbors around a corner. Neighbors. Another puzzle similar to pyramids or triangles but the numbers are in a rectangle and kids memorize some particular conditions on neighbors that has to be true in all the problems of this kind. You may see that the exercises become increasingly arbitrary and kids are memorizing an increasing number of nonsensical rules that have nothing to do with mathematics or its application anywhere in the world. Neighbors continue to the third grade. Fairground (well, I think that the correct translation is an exhibition hall but who cares). You go along a path in a 3 by 3 square and the digits 1,2,...9 successively appear on your path. 2 or 3 digits are there, you need to complete the remaining 7 or 6. 
A straightforward puzzle but it "enriches" the kid in a direction that has little to do with what I call mathematics. Like in most similar problems, one just tries the possibilities by brute force (it's usually trivial).

Buses. There are 7 people in the bus before one stop. Then 3 people get in or get out. How many people are in the bus afterwards? Through several stations, you need to keep track of the number of people in the bus. The first graders have about 10 people in the bus. The exercise appears in the sixth grade as well – and the progress? The sixth graders have 18 people in the bus or so. Wow, what an impressive progress in just 5 years.

Word problems. John has 10 dumplings, Anne has more by 2, how much is 2+2? The answer is 22.

Now, to wrap the first grade, there are lots of "games" that are variations of Don't get angry, buddy, a German children's game popular in Czechia where you roll a dice and advance your piece along a path. Sometimes you may have to solve some problems at some positions. Shockingly, these games continue up to the eighth grade. When I was a kid, no one pretended that "we were learning mathematics" when playing games like Don't get angry, buddy. ;-) But this is an example of the general goal: to sell ordinary games that kids play outside school as "mathematics".

OK, so lots of these problems have appeared in the first grade! So the first grade could have been OK. Kids aren't learning any "real mathematics" in the first grade, anyway? Indeed, I think that the method becomes increasingly indefensible if you look at the older kids. You just see that there's no progress.

Dog handlers. One counts how many dogs an owner has and how many eyes the dogs have. I could solve 99% of the problems immediately but I just couldn't figure out what I should add into a 4 by 4 table with 20 in the lower right corner. The problem looks totally underdefined to me.
I suspect that they're adding eyes and dogs – probably because they didn't have enough apples and oranges. ;-) There's some "multiplication square", it's like the snakes or spider webs with the operations' being multiplication and some special rules. Again, this type of puzzle – with the exact same geometry of the square and basic rules – is taught between 2nd grade and 5th grade. Biland. The kids are basically learning to convert small enough integers to the binary form. It continues to the third grade. It's an exceptional case in which one could argue that too young kids are exposed to some material. But maybe it's OK. Binary code may be important for some portions of computer science but it's a classic "dead end" that leads nowhere. Parquettes. Cover some rectangle with several prescribed tetris-like or smaller pieces. Is that really mathematics? The symbols "ice cream, cat, peace, camel, and bird" actually denote "mouse, cat, goose, dog, goat". The most shocking addition: daddy Forrest. It should really be granddaddy but OK. Kids have to memorize that 1,2,3,4,5, [??], 10, 20 should be written as mouse, cat, goose, dog, goat... [sheep, ram], cow, horse. See this video by a bunch of fifth-graders explaining the values. The kids have to memorize special icons for every animal – a new system to write digits and beyond. On matika.in, this continues up to the fourth graders but classes have been seen where fifth graders still do it – after all, the video I just linked to was created by fifth graders. (I think it's no coincidence that the icons look similar to the letters in Glagolitic script, the first alphabet used on the Czech territory since 863 when it was brought here by St Cyril and Methodius who constructed it artificially even though they could have simply added some accents above Greek letters to enrich the Greek alphabet and make it usable here. 
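If Biland is indeed binary representation, the whole environment compresses into one short routine; a sketch:

```python
def to_binary(n):
    """Convert a non-negative integer to its binary digit string
    by repeatedly splitting off the last binary digit (n % 2)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))
        n //= 2
    return "".join(reversed(digits))

print(to_binary(13))   # 1101
```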
OK, that's basically what Russians did when they invented the Cyrillic script – and we did the same with the Latin alphabet. The Glagolitic letters were rather stupid, excessively symmetric symbols. Should all the schoolkids in Czech or mathematics classes learn the Glagolitic script again? Would it bring something to them? Maybe if a few kids learn such things, it's fine, but as a mandatory stuff for everybody?) Every kid who is at least above the average in mathematics must know that this is just plain retarded and has nothing to do with mathematics. But in Hejný's method, tons of problems are being solved where you have to use the animals and their icons instead of regular numbers. "Mouse plus goat equals cat plus goose": erase one animal so that it's true. You have to erase the mouse. But what do you exactly learn by this game based on the assignment of bogus values to animal species? Clearly, kids who love to memorize nonsense may love it but smarter kids suffer. They may have literally physical obstacles that prevent them from memorizing similar bogus stuff. I was surely like that and I've heard about kids who suffer in such contexts, too. The smart kids are clearly the "villains" who must be punished. Games, board games everywhere. Don't get angry, buddy. Indian multiplication is added. You use an ancient, alternative method to calculate 85*23. Should that alternative really replace the normal way how we calculated such things? The Indian multiplication isn't just some curiosity that appears in one or two classes. It's an environment studied repeatedly between 3rd and 6th grade as if it were a foundation of mathematics! Just to be sure, they also learn the normal method to multiply larger numbers. There's some quasi-conventional division with remainder as well. As far as I can say, the ancient Indian method only differs from the normal modern one by "details" and the teaching of two methods may only confuse the kids. Cards. 
You use some cards with digits to create a number that is closest to another one and stuff like that. You can do various things with digits and cards with digits but is that really the young mathematician's way of dealing with digits? It reminds me of the cave men who found the Škoda car. ;-) Cycle route. Pick segments in a map so that the route doesn't intersect itself. An OK puzzle, the relationships with mathematics is limited. Hours ago, you could find YouTube videos from such classrooms with a cycle route. The teacher was doing literally nothing and the kids didn't seem to know what they were doing. Just an hour of chaos. ABCD – algebrograms. A,B,C stand for digits, you're given some constraints, find the value of A,B,C. As in other cases, a normal problem when used in isolation. Here it is trained in the fourth and fifth grade. It's not too much time but it's still a bigger exposure to this kind of puzzle than what I find appropriate. (We encountered it in the fifth grade, too.) In principle, exercises like that lead to a set of equations with many variables. In practice, the kids don't solve them as equations but as a combinatoric problem that needs brute force to test a finite number of possible answers. Area. They're calculating areas except that in all cases, it seems just counting the integer value of some boxes in a grid. If some geometry doesn't fit into a grid (2D grid or a 3D grid of cubes), the kids probably learn nothing about it at all. I have already covered over 1/2 of the environments or templates of the problems. It goes on and on. Mostly the same environments that were explained above are repeated throughout the elementary school, some of them sometimes end, others appear. But where do the kids get? You see some slightly advanced stuff – primes and factorization – in the seventh grade. But the seventh grade and eighth grade also teach kids to do "digit sums". Now, digit sums are a very immature activity to do with a number. 
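For the record, the rule the kids are not told is simple: a number and its digit sum leave the same remainder modulo 9, so a number is divisible by 9 exactly when its digit sum is. A quick check:

```python
def digit_sum(n):
    """Sum of the decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

# A number and its digit sum agree modulo 9.
for n in [18, 81, 1234, 99999, 1000003]:
    assert n % 9 == digit_sum(n) % 9

print(digit_sum(1234), 1234 % 9)   # 10 1
```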
It's exactly like the cave men who found the car. Unless you want to learn whether something is divisible by 9, you don't do digit sums because it's really a very unnatural operation from a mathematical viewpoint. (They don't learn the rules about the divisibility by 9, I think. Remember they don't learn any rules that they couldn't discover by themselves.) A shocking combination is that eighth graders are supposed to play "Don't get angry, buddy", and it's decorated by tasks to compute the digit sums. This is really retarded. Eighth graders get the task to solve a set of linear equations with two unknowns. I am not sure how they can suddenly do it, without any theory and with the childish problems they did before. There's also a task to solve a quadratic equation. But I think that they just never learn anything such as the general formula – or what the discriminant is. So they just guess the possible solutions and the problems are constructed so that the roots are 7 and 8 so there's a chance for them to guess correctly, after all. If the roots were more complicated, they would have no chance. I can't believe this. When I was a third grader, I heard about the general solution to the quadratic equation, it intrigued me, and I derived the general formula, and was somewhat happy about it. But most kids never derive such things. They still need a huge amount of things that someone else was able to derive and discover; and that they can't derive or discover but they may learn how to use it. It's just completely sick to demand – and Hejný's method demands it – that kids may only be taught what they discover themselves. It's exactly as stupid as to demand that people can only drive cars that they have made themselves. The greatness of important inventions and discoveries is exactly in their ability to be used by lots of people who weren't able to invent or discover them. 
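The derivation mentioned above is a few lines of completing the square, for the record. Start with $ax^2 + bx + c = 0$, $a \neq 0$, divide by $a$, and move the constant over:

$x^2 + \frac{b}{a}x = -\frac{c}{a}$

Add $\left(\frac{b}{2a}\right)^2$ to both sides to complete the square:

$\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}$

Take the square root of both sides and solve for $x$:

$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

The expression under the root, $b^2 - 4ac$, is the discriminant that the Hejný kids reportedly never meet.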
The eighth graders' magic additions, magic equations, and magic square don't intellectually surpass the problems that already appeared in the first grade. It's the same kind of standardized schemes, environments, there are no new ideas, no new levels of abstractions, no building on the previous insights. So the pupils learn basically nothing deep during these subsequent 7 years. You know, if it were mathematics, they should learn lots of formulae, rules to logically think, methods to prove and disprove things, numerous particular proofs, algorithms to draw and calculate areas and convert completely new problems and situations to variables and equations. They just learn nothing of the sort. They just spend their 8 years with the same kind of puzzles that the first graders or kids in the kindergarten may immediately understand. They are just substituting somewhat larger numbers, somewhat larger animals to the collection of granddaddy Forrest's animals, add fractions of the form x/2 and x/3 to the integers that were present from the beginning, and that's it. It's bad not only because they learn nothing advanced; they don't even learn that something that is more advanced exists. So they end up thinking that the greatest mathematicians in the world are those who can compute not with 20 but 100 or more animals of granddaddy Forrest. ;-) Such kids just end up being intellectual cripples. The progress is simply not equivalent to the progress in a conventional classroom. But if I had to say what the alumni of Hejný's elementary school are roughly equivalent to, well, I would say that the old eighth graders from Hejný's classroom are roughly equivalent to the conventional fifth graders. By constantly repeating these childish problems and puzzles, it's surely the case that numerous slower kids "get it" after 8 years, even though in the conventional classroom, a smaller fraction would "get it" after 5 years. 
But lots of kids who had As (and sometimes Bs) in the conventional classroom are really learning much more advanced, mature stuff in mathematics. And kids with the same or similar skills at age of 14-15 would be basically non-existent if Hejný's method became dominant. This would be a catastrophe for the nation because lots of occupations really depend on the former kids who got As or at least Bs in mathematics in conventional classrooms – and the knowledge that they could have accumulated at the higher rate that was expected.

Kids who got somewhere – as kids or adults – had to use lots of skills and methods that they were able to reproduce but they just didn't exactly understand, especially not "instinctively", how they were derived or why they work. If you slowed down this whole progress by a factor of two, and if the high schools and colleges adapted to the slowdown in the elementary schools – and obviously, they will have to adapt and slow down as well if Hejný's method becomes really widespread at basic schools – it would mean that the college alumni in 2025 will be literally equivalent to high school alumni in 2010. So all the jobs that had good reasons to demand university diplomas in 2010 will be impossible to meaningfully fill. (Maybe, full professors in the new Hejný system will be equivalent to the contemporary bachelors. And there will also be a Hejný-Nobel prize which will be on par with the contemporary masters of science.)

Sure, it's convenient for kids, especially those who had trouble with normal mathematics; for their parents who are rich enough to fund the kids even when they grow older and the immediate happiness of the children is more important than other things; and for the teachers who don't really have to do almost anything in Hejný's classroom. But it's not a good way to spend the taxpayer money and it's a threat for the future of the Czech economy and the intellectual credentials of the whole nation.
But please, be more than free to take the problems from Hejný's method and spread them in your country, too. While I think that the blanket application of the method would be rather devastating for every civilized nation, I also think that there are always pieces of the methods that you may pick or helpfully use in your education. Bonus: the levels of abstraction in mathematics I have made it clear that the method ignores the characteristic of mathematics (and physics) that one needs to build on the previous insights, and get familiar with increasingly abstract levels of abstraction. That makes mathematics (but also physics) different from common sense. It makes these subjects different from a set of isolated insights – which is what many other subjects are all about (memorization is always enough; and skipping some classes usually doesn't hurt later). What do I mean by these levels of sophistication? One starts to learn small integers. And addition. At some point, he learns numbers greater than 9. You can write them using several digits. You may suddenly do things with higher numbers or several digits at the same moment. Of course, they sort of get there in that alternative classroom. They also understand the generalization to negative numbers etc. although I feel that it's really hard to find negative numbers in Hejný's books of exercises. But the real mathematics doesn't end there. One learns that aside from integers, there are fractions. Fractions already become almost outlawed in Hejný's classroom. And if fractions appear, they have small integers in the denominator. The general methods to add fractions and do other things are being delayed and suppressed. We could call fractions the second floor of the mathematical skyscraper. Rational numbers may be extended to real numbers. The Hejný kids learn something about real numbers but they don't practice the decimal system. I suppose it's being assumed that they use calculators for everything. 
But calculators can't teach them about various relationship between rational and real numbers, about solutions to simple problems that shouldn't require any calculators, and so on. I think that calculators should justify the reduction of time that kids spend by doing numerical calculation because it can indeed be done by machines; but I still think that they should learn the same "theory". Adults' mathematics extends the fields – to complex numbers, quaternions, Octonions, but also other algebraic structures, rings, semigroups, groups, vector spaces, algebras, Lie algebras, manifolds with atlases, fiber bundles, sheaves, derived categories, be my guest. Obviously, Hejný's method is meant to be for elementary schools so there's no room for those advanced structures. But the kids don't learn "any" structure whose nature isn't obvious to the smart kids in the kindergarten. And that's a problem because the kids aren't just ignorant about particular advanced things, they're ignorant about the existence of advanced things. Now, extending the "set of numbers" is far from the only direction how new floors are being built in mathematics. In some sense, they're still some of the most trivial ways to extend and generalize in mathematics. Elementary schools should teach variables, the power of $$x$$. It's really a taboo. You can find $$x$$ and $$y$$ in the seventh grade but none of these things are really trained so the kids can't get what it's good for. One should learn how to manipulate with equations. How to understand that two lines are exactly equivalent to each other. Sometimes, the manipulation only works in one direction and one needs to check the candidate results. Some of them will work. This kind of logical thinking isn't trained. Now, there are general formulae where you may substitute any value. Formulae for solutions of equations or sets of equations, formulae for circumferences, areas, volumes. All these things are suppressed for ideological reasons. 
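Since the bank-interest example came up: compound interest is exactly exponentiation, and the doubling time is exactly a logarithm. A tiny illustration (the rate is invented for the example):

```python
import math

principal = 1000.0
rate = 0.05   # 5% per year, an invented example rate

# Compound interest is exponentiation of (1 + rate).
after_10 = principal * (1 + rate) ** 10

# The doubling time is a logarithm.
doubling_years = math.log(2) / math.log(1 + rate)

print(round(after_10, 2), round(doubling_years, 1))   # 1628.89 14.2
```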
Formulae are evil, Hejný and his disciples repeat. In fact, they even say that a kid who knows some formulae is an "intellectual parasite", I kid you not. But in reality, they're absolutely essential in mathematics in the normal meaning of the word. Aside from exercises designed for schools, mathematics may actually be used in lots of situations in the real world – to some extent, in all of them. This requires the kid to be able to extract the mathematical description of the relevant real-world questions, learn how to overlook the distracting features of a specific situation that have nothing to do with the calculation, and how to use the general mathematical ways to solve the essence of the problem. Again, this very philosophy is considered a taboo. The kids are never supposed to solve "problems of any new kinds". The repetition of the standardized exercises or "environments" is where the kids should live forever. In some sense, these environments are new examples of the "safe spaces". The kids are being protected against the real world! In mathematics, there are lots of general formulae and rules to solve classes of problems and Hejný's kids aren't learning them at all. So they can't even start with the numerous levels of abstractions and broader lessons that arise further in that direction. What do I mean? Well, some problems need a lot of space to be formulated but the solution is simple. When I was 6, my grandfather asked me how much is 34+78-56-29-34+56+29-78. I don't remember the numbers, they may have been smaller. I got the correct result, zero, and he told me: But look, I can get zero easily, by rearranging the terms. You can find pairs that cancel. My general point is that the solution may often be shorter than the formulation of the problem. Some things are trivial to solve if you're clever. And when you're learning mathematics, you should have the goal to be more clever in this sense. 
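The grandfather's point is worth spelling out: the terms can be reordered so that every value cancels against its negative, and no arithmetic is needed at all. Checking the example by both routes:

```python
from collections import Counter

terms = [34, 78, -56, -29, -34, 56, 29, -78]

# Brute force: just add everything up.
total = sum(terms)

# The clever way: each value appears exactly as often as its negative,
# so the pairs cancel and the sum must be zero.
counts = Counter(terms)
pairs_cancel = all(counts[t] == counts[-t] for t in terms)

print(total, pairs_cancel)   # 0 True
```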
On the other hand, some problems, like Fermat's Last Theorem, may be easily formulated but it's incredibly hard to solve them or find a proof. Even if you don't know any really mature examples of that, you should be persuaded about the general fact that solutions may be extremely complicated and it's sometimes OK to spend hours or centuries on a problem written on one line. It's important because people realize that mathematicians etc. cannot be paid just for the number of characters used to formulate a problem. It's needed for the people to have at least an adequate respect for the folks doing intellectual things – whose content isn't understandable to the public. There may be some valuable essence in that work even if it seems that they're just solving something very simple. Let me return closer to the beginning. I talked about the extension of integers to rational numbers and real numbers. But there's also the increasing sequence of operations. You do addition, subtraction, multiplication, division. But already at the basic school, kids should learn something about exponentiation. Clearly, Hejný's classroom teaches nothing like the identities involving powers, let alone logarithms. But then you have functions that are "not rational", starting with logarithms and exponentials. You need to learn how they work for non-integer exponents, how they're inverses of each other, how they're used to calculate the interest in banks. And then there may be special functions. And operations one may do with functions, starting with derivatives and integrals. And differential equations etc. Again, most of these things don't belong to the elementary schools. But the kids don't even learn any kids' version of the insight that increasingly non-obvious operations with numbers, generalized numbers, and operations with operations (functions) may be made and there may be good reasons for that.
The kids in that alternative method don't really understand anything about the value of any of the stuff they learn. All they learn is some recreational mathematics that is only justified because the teacher wants the puzzles to be solved. One other branch of the skyscraper is probability theory and statistics. So the kids do some things in that direction. They roll dice for an hour and calculate how many times they got 6. But they don't learn any laws. It's forbidden for the teacher to tell them the laws. So they don't really learn anything like general combinatorial numbers, rules for probabilities of independent events that multiply, let alone standard deviations, linear regression etc. An average person cannot rediscover most of these things which is why no one is allowed to learn such things in Hejný's classroom. The kids who have to rely on the direction they're led to end up believing that it makes sense to number granddaddy Forrest's animals, and it doesn't make sense to do what advanced mathematicians are doing. It would surely be silly to consider general functions and do operations with the functions – like integration. The school just teaches them wrong ideas about what is meaningful, useful, and natural to do; and what isn't. The method's focus on brute force is entirely pathological for the kids' opinion about "brain activities" in general because the kids measure everything by the amount of brute force. It's being assumed that every problem is "solved" by trying all possibilities for answers. When the number of possible answers is infinite, e.g. when it is a generic real number, the kids are screwed. But methods to get the answers right away – or at least faster, by many, 100, or a googol of orders of magnitude – often exist. They should really be the content of mathematics education but they're treated as taboos by Hejný's method, too.
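A standard example of such a shortcut, sketched here for concreteness (a textbook illustration, not taken from Hejný's materials): the sum 1+2+…+n by brute force versus Gauss's closed formula, which gets the answer in one multiplication.

```python
# Brute force vs. closed form for 1 + 2 + ... + n.
n = 1_000_000

brute_force = sum(range(1, n + 1))   # about a million additions
closed_form = n * (n + 1) // 2       # Gauss's formula: one multiply, one divide

print(brute_force == closed_form)  # True
```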
What actually makes me rather emotional is that I know lots of people who really believe that rational reasoning doesn't work, physics and science don't work, and mathematics is at most useful for the solution of some special puzzles in recreational mathematics. They misunderstand everything about mathematics and science and why the whole Universe basically works on the basis of rules that may be mathematically formulated and mathematically analyzed. And what makes me angry is that I feel that Hejný's method is designed to "confirm" these utterly idiotic misconceptions about the power and role of mathematics, mathematically formulated laws, and mathematical reasoning in the whole world! While the puzzles may be fine in isolation, the message conveyed between the lines is a set of misconceptions that a person who doesn't understand mathematics at all may believe. Hejný's method is a political program to legitimize the stupid people's opinions about mathematics. The kids end up being innumerate at all levels (starting from the multiplication tables and ending with all the levels of abstraction sketched above) and if they ever represent a substantial fraction of the productive generation in a nation, the nation will cease to be civilized. Maybe such a nation will still provide the world with sufficiently many workers who may press some buttons in a factory. But it can't expect to be among the nations with the highest GDP per capita.
### Abstract

A new challenge for surgical robotics lies in the use of flexible manipulators, to perform procedures that are impossible for currently available rigid robots. Since the surgeon only controls the end-effector of the manipulator, new control strategies need to be developed to correctly move its flexible body without damaging the surrounding environment. This paper shows how a positional controller for a new surgical robot (STIFF-FLOP) can be learnt from the demonstrations given by an expert user. The proposed algorithm exploits the variability of the task to comply with the constraints only when needed, by implementing a minimal intervention principle control strategy. The results are applied to scenarios involving movements inside a constrained environment and disturbance rejection.

### Bibtex reference

```
@inproceedings{Bruno14ICRA,
  author="Bruno, D. and Calinon, S. and Caldwell, D. G.",
  title="Null Space Redundancy Learning for a Flexible Surgical Robot",
  booktitle="Proc. {IEEE} Intl Conf. on Robotics and Automation ({ICRA})",
  year="2014",
  month="May-June",
  address="Hong Kong, China",
  pages="2443--2448"
}
```

### Video

In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations to current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks by actively controlling selected body parts of the robot. The flexibility of the robot allows the surgeon to move within organs to reach remote areas of the body and perform challenging procedures in laparoscopy. The surgeon controls the end-effector during the surgical task, leaving the motion of the whole arm to the control and learning modules. The latter should drive the body of the robot along the trajectory followed by the surgeon, without applying pressure to or damaging the internal organs of the patient.
The proposed learning algorithm works in the null space of the surgical manipulator, avoiding interference with the surgeon while exploiting redundancy in an optimal way.
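The null-space idea can be sketched with the classic resolved-rate redundancy formula — a hypothetical illustration with made-up dimensions and random matrices, not the STIFF-FLOP controller itself:

```python
import numpy as np

# Classic redundancy resolution: q_dot = J+ x_dot + (I - J+ J) q0_dot.
# The second term moves the arm's body in the null space of the
# end-effector Jacobian, so it never disturbs the surgeon's task.
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 7))        # 3-DoF task, 7-DoF redundant arm (made-up sizes)
x_dot = np.array([0.1, 0.0, -0.05])    # desired end-effector velocity
q0_dot = rng.standard_normal(7)        # secondary motion (e.g. a learnt body posture)

J_pinv = np.linalg.pinv(J)             # Moore-Penrose pseudoinverse
N = np.eye(7) - J_pinv @ J             # null-space projector
q_dot = J_pinv @ x_dot + N @ q0_dot

# The secondary motion does not perturb the end-effector task:
print(np.allclose(J @ q_dot, x_dot))   # True
```

The design point is that `N` annihilates any joint velocity's effect on the end-effector, which is exactly the "null space redundancy" exploited by the paper's learning module.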
# MTP format 19 {number of normally included comment lines}

Here's a brief free-form tutorial on how to decipher the MTP data. Data groups consist of the following lines per 15-second observing cycle. The first line is: UTSEC, number of retrieval levels in the following table, pressure altitude, pitch, roll, outside air temp (K), tropopause altitude #1 (km), tropopause altitude #2 (km) [if present], potential temperatures of tropopause #1 and #2, latitude, longitude, & lapse rate near flight level. The 1-liners (for each cycle) can be stripped & imported into a spreadsheet for convenient plotting of trop altitude, lapse rate, etc. The tropopause altitudes are calculated by cubic spline interpolation of the retrieved altitudes using the WMO definition (that is, trop #1 is the lowest altitude where the average lapse rate is > -2 K/km from the initial -2 K/km point to any point within 2 km; trop #2 occurs above the first trop after the lapse rate is < -3 K/km for > 1 km, and then the first-trop definition applies, possibly from within the 1 km region). The remaining set of lines for each cycle consists of 5 columns: col#1 is pressure altitude (meters), col#2 is temperature from MTP (Kelvin), col#3 is temperature error estimate (K), col#4 is geometric altitude (meters), based on GPS altitude, and col#5 is molecular air density [1E+21/m3].
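The "stripped 1-liners" lend themselves to simple scripted parsing. Here is a hypothetical sketch of mapping a cycle header line to named fields, following the order given above; the field names, the whitespace-delimited layout, and the sample numbers are assumptions for illustration, not taken from a real MTP file:

```python
# Field order taken from the description above; names are made up.
HEADER_FIELDS = [
    "utsec", "n_levels", "press_alt_km", "pitch_deg", "roll_deg",
    "oat_k", "trop1_km", "trop2_km", "theta_trop1_k", "theta_trop2_k",
    "lat_deg", "lon_deg", "lapse_rate_k_per_km",
]

def parse_cycle_header(line):
    """Map the whitespace-separated numbers of a cycle's first line to names."""
    values = [float(tok) for tok in line.split()]
    if len(values) == len(HEADER_FIELDS) - 2:
        # Assume trop #2 is absent: insert placeholders for its altitude
        # and its potential temperature at their positions.
        values.insert(7, None)   # trop2_km
        values.insert(9, None)   # theta_trop2_k
    return dict(zip(HEADER_FIELDS, values))

# Made-up sample line with both tropopauses present:
rec = parse_cycle_header(
    "73215 33 11.2 1.8 -0.3 214.6 10.9 16.4 330.1 401.2 37.4 -122.1 -1.9")
print(rec["trop1_km"])  # 10.9
```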
Maths: Exponential Notation: Comparison of Surds and Simplest Form (For CBSE, ICSE, IAS, NET, NRA 2022)

Comparison of Surds

To know which surd is greater and which one is smaller, we first make them of equal orders and then compare their radicands together with their coefficients.

Simplest Form of a Surd

A surd is said to be in its simplest form if it has:

• the smallest possible order
• no fraction under the radical (root) sign
• no factor of the form , where is a positive integer, under the radical sign of order

Basic Mathematical Operations on Surds

Addition and Subtraction: To add or subtract surds, they must be similar or like surds. If they are not, they are first made similar by making their radicands and orders the same. When their radicands and orders are the same, their coefficients are added or subtracted.

Multiplication and Division: Two surds can be multiplied or divided if they are of the same order. So, before multiplying or dividing, we change them to surds of the same order.

Rationalization of Surds

Conversion of surds into rational numbers is known as rationalization of surds. For converting a surd into a rational number, we need to multiply it with another surd. In such a case, each surd is called the rationalizing factor of the other surd. Rationalization is usually done on the denominator of an expression involving irrational surds.
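The comparison rule above can be checked mechanically: to compare a^(1/p) with b^(1/q), raise both to the common order lcm(p, q) and compare the resulting integers. A small sketch (illustrative, not from the lesson's own examples):

```python
import math

def compare_surds(a, p, b, q):
    """Return -1, 0 or 1 comparing a**(1/p) with b**(1/q) (positive integers)."""
    n = math.lcm(p, q)                       # common order of the two surds
    left, right = a ** (n // p), b ** (n // q)
    return (left > right) - (left < right)

# sqrt(3) vs cbrt(5): common order 6 gives 3**3 = 27 vs 5**2 = 25,
# so sqrt(3) is the larger surd.
print(compare_surds(3, 2, 5, 3))   # 1
```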
# Solving Nonlinear Optimal Control Problems With State and Control Delays by Shooting Methods Combined with Numerical Continuation on the Delays

Riccardo Bonalli Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France and Onera - The French Aerospace Lab, F - 91761 Palaiseau, France (riccardo.bonalli@onera.fr)    Bruno Hérissé Onera - The French Aerospace Lab, F - 91761 Palaiseau, France (bruno.herisse@onera.fr).    Emmanuel Trélat Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France (emmanuel.trelat@upmc.fr, https://www.ljll.math.upmc.fr/trelat/).

###### Abstract

In this paper we introduce a new procedure to solve nonlinear optimal control problems with delays, which exploits indirect methods combined with numerical homotopy procedures. It is known that solving this kind of problem via indirect methods (which arise from the Pontryagin Maximum Principle) is complex and computationally demanding because their implementation faces two main difficulties: the extremal equations involve forward and backward terms, and, besides, the related shooting method has to be carefully initialized. Here, starting from the solution of the non-delayed version of the optimal control problem, delays are introduced by a numerical continuation. This creates a sequence of optimal delayed solutions that converges to the desired solution. We establish a convergence theorem ensuring the continuous dependence w.r.t. the delay of the optimal state, of the optimal control (in a weak sense) and of the corresponding adjoint vector. The convergence of the adjoint vector represents the most challenging step to prove and it is crucial for the well-posedness of the proposed homotopy procedure.
Two numerical examples are proposed and analyzed to show the efficiency of this approach.

Key words. Optimal control, time-delayed systems, indirect methods, shooting methods, numerical homotopy methods, numerical continuation methods.

AMS subject classifications. 49J15, 65H20.

## 1 Introduction

### 1.1 Delayed Optimal Control Problems

Let $n$, $m$ be positive integers, $\Delta$ a positive real number, $\Omega \subseteq \mathbb{R}^m$ a measurable subset, and define an initial state function $\phi_1$ on $[-\Delta, 0]$ and an initial control function $\phi_2$ on $[-\Delta, 0)$. For every couple of delays $\tau = (\tau_1, \tau_2)$ and every positive final time $T$, consider the following nonlinear control system on $\mathbb{R}^n$ with constant delays

$$\begin{cases}
\dot x_\tau(t) = f(t, x_\tau(t), x_\tau(t-\tau_1), u_\tau(t), u_\tau(t-\tau_2)), & t \in [0, T] \\
x_\tau(t) = \phi_1(t), & t \in [-\Delta, 0] \\
u_\tau(t) = \phi_2(t), & t \in [-\Delta, 0) \\
u_\tau(\cdot) \in L^\infty([-\Delta, T], \Omega)
\end{cases} \tag{1}$$

where the dynamics $f$ is of class (at least) $C^1$ w.r.t. its second and third variables. Control systems of the form (1) play an important role in describing many phenomena in physics, biology and economics (see, e.g. [1]). Let the target set be a subset of $\mathbb{R}^n$. Assume that it is reachable from $\phi_1(0)$ for the control system (1), that is, for every couple of delays $\tau$, there exist a final time $T_\tau$ and a control $u_\tau(\cdot)$ such that the trajectory $x_\tau(\cdot)$, solution of (1) on $[-\Delta, T_\tau]$, reaches the target set at $T_\tau$. Such a control is called admissible; in the sequel we consider the sets of admissible controls of (1), either defined on a given time interval or with free final time. Given a couple of delays $\tau$, we consider the Optimal Control Problem with Delays (OCP)$_\tau$ consisting in steering the control system (1) to the target set, while minimizing the cost function

$$C_{T_\tau}(\tau, u_\tau(\cdot)) := \int_0^{T_\tau} f^0(t, x_\tau(t), x_\tau(t-\tau_1), u_\tau(t), u_\tau(t-\tau_2)) \, dt \tag{2}$$

where the running cost $f^0$ is of class (at least) $C^1$ w.r.t. its second and third variables. The final time may be fixed or not. The literature abounds with numerical methods to solve (OCP)$_\tau$. Most of them rely on direct methods, which basically consist in discretizing all the variables concerned and in reducing (OCP)$_\tau$ to a finite-dimensional problem.
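Once the history functions are fixed, a delayed control system of this form can be simulated forward in time. A minimal sketch with made-up scalar dynamics and histories (not from the paper), using explicit Euler on a grid where the delays are integer multiples of the step:

```python
import math

# Toy scalar delayed system x'(t) = f(t, x(t), x(t-tau1), u(t), u(t-tau2)).
tau1, tau2, T, h = 0.2, 0.1, 2.0, 0.01
k1 = round(tau1 / h)                  # delay tau1 in grid steps
n_hist = max(k1, round(tau2 / h))     # history samples needed before t = 0

def f(t, x, x_del, u, u_del):
    return -x + 0.5 * x_del + u + 0.25 * u_del   # hypothetical dynamics

def u_of_t(t):
    return math.sin(t) if t >= 0 else 0.0        # control, with zero history

xs = [1.0] * (n_hist + 1)             # constant state history phi1 = 1
for i in range(round(T / h)):
    t = i * h
    x_del = xs[-1 - k1]               # x(t - tau1) read off the stored grid
    xs.append(xs[-1] + h * f(t, xs[-1], x_del, u_of_t(t), u_of_t(t - tau2)))

print(round(xs[-1], 3))
```

The same bookkeeping (a stored state history indexed backwards by the delay) is what any forward pass over system (1) needs, whatever integrator replaces the Euler step.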
The works [2, 3, 4, 5, 6] develop several numerical techniques to convert the optimal control problem with delays into nonlinear constrained optimization problems. On the other hand, [7, 8, 9, 10, 11] propose different approaches that approximate the solution of (OCP)$_\tau$ by truncated orthogonal series and reduce the optimal control problem with delays to a system of algebraic equations. Yet other contributions (see, e.g. [12]) propose an approximating sequence of non-delayed optimal control problems whose solutions converge to the optimal solution of (OCP)$_\tau$. However, the dimension induced by these approaches grows larger as the discretization becomes finer. Some applications, many of which are found in aerospace engineering, like atmospheric reentry and satellite launching (for example in [13]), require great accuracy, which can be reached by indirect methods. Moreover, the computational load needed by indirect methods remains minimal compared to the one needed to obtain a good solution with direct approaches. It is then interesting to solve (OCP)$_\tau$ efficiently via these procedures.

### 1.2 Indirect Methods Applied to (OCP)$_\tau$

The core of indirect methods relies on solving, thanks to Newton-like algorithms, the two-point or multi-point boundary value problem which arises from the necessary optimality conditions coming from the application of the Pontryagin Maximum Principle (PMP) [14]. The paper [15] was the first to provide a maximum principle for optimal control problems with a constant state delay while [16] obtains the same conditions by a simple substitution-like method. In [17] a similar result is achieved for control problems with pure control delays. In [18, 19], necessary conditions are obtained for optimal control problems with multiple constant delays in state and control variables. Moreover, [20] derives a maximum principle for control systems with a time-dependent delay in the state variable.
Finally, [21] provides necessary conditions for optimal control problems with multiple constant delays and mixed control-state constraints. The advantages of indirect methods, whose most basic version is known as the shooting method, are their extremely good numerical accuracy and the fact that, if they converge, the convergence is very quick. Indeed, since they rely on the Newton method, they inherit the convergence properties of the Newton method. Nevertheless, their main drawback is their difficult initialization (see for example [13]). This becomes apparent as soon as the necessary optimality conditions are computed for (OCP)$_\tau$. It is known (see, e.g. [14, 22, 21]) that, if $(x_\tau(\cdot), u_\tau(\cdot))$ is an optimal solution of (OCP)$_\tau$ with optimal final time $T_\tau$, there exist a nonpositive scalar $p^0_\tau$ and an absolutely continuous mapping $p_\tau(\cdot)$, called the adjoint vector, with $(p_\tau(\cdot), p^0_\tau) \neq (0, 0)$, such that the so-called extremal $(x_\tau(\cdot), p_\tau(\cdot), p^0_\tau, u_\tau(\cdot))$ satisfies

$$\begin{cases}
\dot x_\tau(t) = \dfrac{\partial H}{\partial p}(t, x_\tau(t), x_\tau(t-\tau_1), p_\tau(t), p^0_\tau, u_\tau(t), u_\tau(t-\tau_2)), & t \in [0, T_\tau] \\[1ex]
\dot p_\tau(t) = -\dfrac{\partial H}{\partial x}(t, x_\tau(t), x_\tau(t-\tau_1), p_\tau(t), p^0_\tau, u_\tau(t), u_\tau(t-\tau_2)) \\
\qquad\qquad\; - \dfrac{\partial H}{\partial y}(t+\tau_1, x_\tau(t+\tau_1), x_\tau(t), p_\tau(t+\tau_1), p^0_\tau, u_\tau(t+\tau_1), u_\tau(t+\tau_1-\tau_2)), & t \in [0, T_\tau - \tau_1] \\[1ex]
\dot p_\tau(t) = -\dfrac{\partial H}{\partial x}(t, x_\tau(t), x_\tau(t-\tau_1), p_\tau(t), p^0_\tau, u_\tau(t), u_\tau(t-\tau_2)), & t \in (T_\tau - \tau_1, T_\tau]
\end{cases} \tag{3}$$

where $H$ is the Hamiltonian, and the maximization condition

$$\begin{aligned}
& H(t, x_\tau(t), x_\tau(t-\tau_1), p_\tau(t), p^0_\tau, u_\tau(t), u_\tau(t-\tau_2)) \\
&\quad + \mathds{1}_{[0, T_\tau - \tau_2]}(t)\, H(t+\tau_2, x_\tau(t+\tau_2), x_\tau(t+\tau_2-\tau_1), p_\tau(t+\tau_2), p^0_\tau, u_\tau(t+\tau_2), u_\tau(t)) \\
&\ge H(t, x_\tau(t), x_\tau(t-\tau_1), p_\tau(t), p^0_\tau, v, u_\tau(t-\tau_2)) \\
&\quad + \mathds{1}_{[0, T_\tau - \tau_2]}(t)\, H(t+\tau_2, x_\tau(t+\tau_2), x_\tau(t+\tau_2-\tau_1), p_\tau(t+\tau_2), p^0_\tau, u_\tau(t+\tau_2), v) \qquad \forall\, v \in \Omega
\end{aligned} \tag{4}$$

holds almost everywhere on $[0, T_\tau]$. Moreover, if the final time is free and, without loss of generality, one supposes that $T_\tau$ and $T_\tau - \tau_2$ are points of continuity of $u_\tau(\cdot)$,

$$H(T_\tau, x_\tau(T_\tau), x_\tau(T_\tau - \tau_1), p_\tau(T_\tau), p^0_\tau, u_\tau(T_\tau), u_\tau(T_\tau - \tau_2)) = 0 \tag{5}$$

(see [22] for a more general condition using the concept of Lebesgue approximate continuity). The extremal is said to be normal whenever $p^0_\tau \neq 0$, and in that case it is usual to normalize the adjoint vector so that $p^0_\tau = -1$; otherwise it is said to be abnormal.
Assuming that the optimal control is known as a function of the state and the adjoint vector (by the maximization condition (4)), each iteration of a shooting method consists in solving the coupled dynamics (3), where a value of the final adjoint vector $p_\tau(T_\tau)$ is provided. This means that one has to solve a Differential-Difference Boundary Value Problem (DDBVP), where both forward and backward terms of time appear within mixed-type differential equations. The difficulty is then the lack of global information, which forbids a purely local integration by usual iterative methods for ODEs. Some techniques to solve mixed-type differential equations have been developed. For example, [23] proposes an analytical decomposition of the solutions as sums of forward solutions and backward solutions, while [24] provides a numerical solving scheme. However, these approaches either treat only linear cases or require the inversion of matrices whose dimension increases as the numerical accuracy is raised. In order to initialize a shooting method correctly for (3), a guess of the final value of the adjoint vector is not sufficient; rather, a good numerical guess of the whole adjoint function must be provided to make the procedure converge. This represents an additional difficulty with respect to the usual shooting method and it requires a global discretization of (3). It seems that this topic has been little addressed in the literature. The paper [25] proposes a collocation method to solve the DDBVP arising from (3) that turns out to solve several optimal control problems with delays successfully. However, as a consequence of the collocation method, the degree of the interpolating polynomials grows fast for hard problems. Moreover, a precomputation of the points where the solution of (3) has a discontinuous derivative is needed to make the whole approach feasible, increasing the amount of numerical computation.
### 1.3 Numerical Homotopy Approach

The basic idea of homotopy methods is to solve a difficult problem step by step, starting from a simpler problem, by parameter deformation. Theory and practice of continuation methods are well known (see, e.g. [26]). Combined with the shooting problem derived from the PMP, a homotopy method consists in deforming the problem into a simpler one (that can be easily solved) and then in solving a series of shooting problems step by step to come back to the original problem. The main difficulty of homotopy methods lies in the choice of a sufficiently regular deformation that allows the convergence of the method. The starting problem should be easy to solve, and the path between this starting problem and the original problem should be handy to model. This path is parametrized by a deformation parameter and, when the homotopic parameter is a real number and the path is linear in it, the homotopy method is rather called a continuation method. Consider the Optimal Control Problem without Delays (OCP)$_0$, which consists in steering the control system

$$\begin{cases}
\dot x(t) = f(t, x(t), x(t), u(t), u(t)), & t \in [0, T] \\
x(0) = \phi_1(0), \\
u(\cdot) \in L^\infty([0, T], \Omega)
\end{cases} \tag{6}$$

to the target set, while minimizing the cost function

$$C_T(u(\cdot)) := C_T(0, u(\cdot)) = \int_0^T f^0(t, x(t), x(t), u(t), u(t)) \, dt. \tag{7}$$

In many situations, exploiting the non-delayed version of the PMP mixed with other techniques (such as geometric control, dynamical system theory applied to mission design, etc.; we refer the reader to [13] for a survey on these procedures), one is able to initialize efficiently a shooting method on (OCP)$_0$. Thus, it is legitimate to wonder if one may solve (OCP)$_\tau$ by indirect methods by starting a homotopy method where $\tau$ represents the deformation parameter and (OCP)$_0$ is taken as the starting problem.
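The continuation idea can be illustrated on a toy root-finding problem rather than a full shooting function — a hypothetical sketch, not the paper's algorithm: solve $F(z,\lambda)=0$ at $\lambda=1$ by stepping $\lambda$ from 0 (where the problem is easy) and warm-starting Newton's method with the previous solution.

```python
def F(z, lam):
    return z**3 + lam * z - 1.0        # toy "shooting function"

def dF(z, lam):
    return 3 * z**2 + lam              # its derivative in z

def newton(z, lam, tol=1e-12, max_it=50):
    for _ in range(max_it):
        z -= F(z, lam) / dF(z, lam)
        if abs(F(z, lam)) < tol:
            return z
    raise RuntimeError("Newton did not converge")

z = newton(1.0, 0.0)                   # lam = 0: z**3 = 1, so z = 1 exactly
for lam in [0.2, 0.4, 0.6, 0.8, 1.0]:  # linear continuation on lam
    z = newton(z, lam)                 # previous solution initializes the next solve

print(round(z, 6))  # 0.682328
```

Each intermediate solve converges quickly because its initial guess is already close; this is precisely the role the non-delayed solution plays for the delayed shooting problem.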
This approach is a way to address the flaws of indirect methods applied to (OCP)$_\tau$: on one hand, the global adjoint vector of (OCP)$_0$ could be used to initialize efficiently a shooting method on (3) (for $\tau$ small enough) and, on the other hand, we could solve (3) via usual iterative methods for ODEs (for example, by using the global state solution at the previous iteration). However, one should be careful when using homotopy methods. As we pointed out previously, the existence of a sufficiently regular deformation curve of delays that allows the convergence of the method must be ensured. In [13], it was proved that, in the case of unconstrained optimal control problems without delays, the existence of a parameter deformation curve is equivalent to asking that neither abnormal minimizers nor conjugate points occur along the homotopy path. Similar assumptions must be made to apply this procedure to solve (OCP)$_\tau$ successfully by indirect methods.

### 1.4 Main Contribution and Paper Structure

The idea proposed in this paper consists in introducing a general method that allows one to solve (OCP)$_\tau$ using indirect methods combined with homotopy procedures, with $\tau$ as deformation parameter, starting from the solution of its non-delayed version (OCP)$_0$. The main contribution of the paper is a convergence theorem that ensures the continuous dependence w.r.t. the delay of the optimal state, of the optimal control (in a weak sense) and of the corresponding adjoint vector of (OCP)$_\tau$. This makes it possible to reach the optimal solution of (OCP)$_\tau$ starting from the optimal solution of (OCP)$_0$, iteratively, by travelling across a sequence of delays converging to $\tau$. The most challenging and most important nontrivial conclusion is the continuous dependence of the adjoint vectors of (OCP)$_\tau$ w.r.t. $\tau$. This last fact is crucial because it allows indirect methods to solve (OCP)$_\tau$ by starting a homotopy method on $\tau$ with (OCP)$_0$ as initial problem. The article is structured as follows.
Section 2 presents the assumptions and the statement of the convergence theorem; moreover, a practical algorithm to solve (OCP)$_\tau$ by homotopy is provided. In Section 3 the efficiency of this approach is illustrated by testing the proposed algorithm on two examples. Finally, in the appendices, the technical details of the proof of the main result are provided.

## 2 Convergence Theorem for (OCP)$_\tau$

Within the proposed convergence result, it is crucial to distinguish the case in which a delay on the control variable appears from the one which considers only pure state delays. The context of control delays proves to be more complex, especially in proving the existence of an optimal control for (OCP)$_\tau$. Indeed, a standard approach to prove existence would consider the usual Filippov assumptions (as in the classical reference [27]) which, in the case of control delays, must be extended. In particular, using Guinn's reduction (see, e.g. [16]), the control system with delays turns out to be equivalent to a non-delayed system with a larger number of variables depending on the value of the delay. Such an extension was used in [28]. However, it is not difficult to see that the usual assumption concerning the convexity of the epigraph of the extended dynamics is not sufficient to prove Lemma 2.1 in [28]. More details are provided in the remark on Guinn's reduction within the appendix on existence.

### 2.1 Main Result

We make the following assumptions.

Common assumptions:

1. $\Omega$ is a compact and convex subset of $\mathbb{R}^m$ and the target set is a compact subset of $\mathbb{R}^n$;
2. (OCP)$_0$ has a unique solution, defined on a neighborhood of its optimal time interval;
3. The optimal trajectory of (OCP)$_0$ has a unique extremal lift (up to a multiplicative scalar), which is normal, solution of the Maximum Principle;
4. There exists a positive real number $b$ such that, for every delay $\tau$ and every admissible control $v$, denoting by $x_{\tau,v}(\cdot)$ the related trajectory arising from the control system (1) with final time $T_{\tau,v}$, we have
$$\forall\, t \in [-\Delta, T_{\tau,v}]: \quad T_{\tau,v} + \|x_{\tau,v}(t)\| \le b.$$

In the case of pure state delays: 1.
For every delay $\tau$, every optimal control of (OCP)$_\tau$ is continuous;

2. The sets
$$\Big\{ \Big( f_1(t,x,y,u),\; f^0_1(t,x,y,u) + \gamma,\; \frac{\partial \tilde f_1}{\partial x}(t,x,y,u),\; \frac{\partial \tilde f_1}{\partial y}(t,x,y,u) \Big) : u \in \Omega,\ \gamma \ge 0 \Big\}$$
are convex for every $t$ and every $(x,y)$, where $f_1$, $f^0_1$ denote the dynamics and the running cost in the pure-state-delay case and $\tilde f_1 := (f_1, f^0_1)$.

In the case of delays both in state and control variables:

1. For every delay $\tau$, every optimal control of (OCP)$_\tau$ takes its values at extremal points of $\Omega$. Moreover, the optimal final time $T_\tau$ and $T_\tau - \tau_2$ are points of continuity of the optimal control;
2. The vector field $f$ and the cost function $f^0$ are locally Lipschitz w.r.t. the control variables, i.e., for every point there exist a neighborhood and a continuous function $\alpha$ such that
$$\|f(t,x,y,u_1,v_1) - f(t,x,y,u_2,v_2)\| \le \alpha(t,x,y)\, (\|u_1 - u_2\| + \|v_1 - v_2\|)$$
for every $(u_1, v_1)$, $(u_2, v_2)$ in that neighborhood (the same statement holds for $f^0$);
3. The sets
$$\{ (f(t,x,y,u,v),\; f^0(t,x,y,u,v) + \gamma) : u, v \in \Omega,\ \gamma \ge 0 \}$$
are convex for every $t$ and every $(x,y)$.

Some remarks on these assumptions are in order. First of all, the assumptions on the uniqueness of the solution of (OCP)$_0$ and on the uniqueness of its extremal lift are related to the differentiability properties of the value function (see, e.g. [29, 30, 31]). They are standard in optimization and are just made to keep a nice statement (see Theorem 2.1). These assumptions can be weakened as follows. If we replace them with the assumption "every extremal lift of every solution of (OCP)$_0$ is normal", then the conclusion provided in Theorem 2.1 hereafter still holds, except that the convergence properties must be written in terms of closure points. The proof of this fact follows the same guideline used to prove Theorem 2.1 and we do not report the details. The remaining assumptions play a complementary role in proving the convergence property for the adjoint vectors. Moreover, the assumption that optimal controls take their values at extremal points of $\Omega$ also becomes crucial to ensure the convergence of optimal controls and trajectories when considering delays both in state and control variables. Without this assumption of nonsingular controls, proving these last convergences becomes a hard task. The issue is related to the following fact.
Let $X$, $Y$ be Banach spaces and $g : X \to Y$ be a continuous map. Suppose that $(x_k)_{k \in \mathbb{N}}$ is a sequence converging weakly to some $x$ and such that $(g(x_k))_{k \in \mathbb{N}}$ converges weakly to some $y$. Then, in general, we cannot ensure that $y = g(x)$. A way to overcome this flaw is to ensure the equivalence between weak convergence and strong convergence under some additional assumptions, and, in our main result, this is achieved thanks to the extremal-point assumption on optimal controls (see, e.g. [32]).

###### Theorem 2.1.

Assume that the common assumptions hold. Consider first the context of pure state delays, i.e., problems (OCP)$_\tau$ with $\tau = (\tau_1, 0)$, and assume that the pure-state-delay assumptions hold. Then, there exists a threshold on the delay such that, for every smaller delay $\tau$, (OCP)$_\tau$ has at least one solution, every extremal lift of which is normal. Let $(x_\tau(\cdot), p_\tau(\cdot), -1, u_\tau(\cdot))$ be such a normal extremal lift and let $(x(\cdot), p(\cdot), -1, u(\cdot))$, with final time $T$, be the normal extremal lift of (OCP)$_0$. Then, up to continuous extensions, as $\tau$ tends to 0,

• the optimal final time $T_\tau$ converges to $T$;
• the optimal trajectory $x_\tau(\cdot)$ converges uniformly to $x(\cdot)$;
• the adjoint vector $p_\tau(\cdot)$ converges uniformly to $p(\cdot)$;
• the optimal control $u_\tau(\cdot)$ converges to $u(\cdot)$ in $L^\infty$ for the weak star topology.

If the final time of (OCP)$_\tau$ is fixed, then $T_\tau = T$ for every $\tau$. On the other hand, consider general problems (OCP)$_\tau$ with delays in both state and control variables. If one assumes that, for every $\tau$, whenever (OCP)$_\tau$ is controllable it admits an optimal solution, then, under the common assumptions and the assumptions for state and control delays, there exists a threshold such that, for every smaller delay $\tau$, the same convergence conclusions given above hold and, in addition, as $\tau$ tends to 0, $u_\tau(\cdot)$ converges to $u(\cdot)$ almost everywhere. Moreover, if the dynamics and the cost are affine w.r.t. the two control variables, the existence of an optimal solution for every such $\tau$ is ensured. Finally, for every delay $\tau$, by extending all the previous assumptions to that delay, we have that the optimal solutions of (OCP)$_\tau$ and their related adjoint vectors are continuous w.r.t. the delay at $\tau$ for the above topologies.

The proof of Theorem 2.1 is technical and lengthy. We report it in the appendix. The last statement of Theorem 2.1 (the continuous dependence w.r.t. the delay, for every $\tau$) is the most general conclusion achieved and extends the first part of the theorem.
The proof of this generalization follows the same guidelines as the proof of the continuity at $\tau = 0$ and we do not report the details. We want to stress the fact that the continuous dependence w.r.t. $\tau$ of the adjoint vectors of (OCP)$_\tau$ represents the most challenging and the most important result achieved by Theorem 2.1. It represents the essential step that allows the proposed homotopy method to converge robustly for every small enough couple of delays. The proof of this fact is not easy. An accurate analysis of the convergence of Pontryagin cones in the case of the delayed version of the PMP is required.

###### Remark 2.2.

Theorem 2.1 can be extended to obtain stronger convergence conclusions, under weaker assumptions, in the particular case of dynamics that are affine in the two control variables, and costs of type
$$\int_0^{T_\tau} \left[ C_1 \|x_\tau(t)\|^2 + C_2 \|x_\tau(t-\tau_1)\|^2 + C_3 \|u_\tau(t)\|^2 + C_4 \|u_\tau(t-\tau_2)\|^2 \right] dt$$
where $C_1, \dots, C_4$ are constants. Indeed, considering the common assumptions together with either set of additional assumptions, the convergence properties established in Theorem 2.1 for the trajectories and the adjoint vectors still hold and, moreover, $u_\tau(\cdot)$ converges to $u(\cdot)$ for the weak topology, as $\tau$ tends to 0. The proof of this fact follows easily by adapting the scheme in the appendix. For the sake of brevity, we do not give these technical details.

### 2.2 The Related Algorithm and Its Convergence

Exploiting the statement of Theorem 2.1, we may conceive a general algorithm, based on indirect methods, capable of solving (OCP)$_\tau$ by applying homotopy procedures on the parameter $\tau$, starting from the solution of its non-delayed version (OCP)$_0$. As we explained in the previous sections, the critical issue in this approach is the integration of the mixed-type equations that arise from System (3). The previous convergence result suggests that we may solve (3) via usual iterative methods for ODEs, for example, by using the global state solution at the previous iteration.
Moreover, the global adjoint vector of (OCP) could be used to initialize, from the beginning, the whole shooting. These considerations lead us to Algorithm LABEL:alg. To prove the convergence of Algorithm LABEL:alg we apply Theorem LABEL:theoMain. We focus on the case of general state and control delays, highlighting that the same conclusion holds for problems with pure state delays provided that optimal controls can be expressed as continuous functions of the state and the adjoint vector (by using the maximality condition on the Hamiltonian). Suppose that assumptions , , , , , and hold and that the delay considered is such that . Then, we know that for every in the open ball , (OCP) has at least an optimal solution with normal extremal lift. The first consequence is that, referring to Algorithm LABEL:alg, we can put for every integers , . Thanks to Theorem LABEL:theoMain, , and as soon as . Then, the indirect method inside Algorithm LABEL:alg turns out to be well defined and well initialized by the adjoint vector of (OCP). Indeed, necessarily, the algorithm will travel backward along one of the subsequences converging to the solution of (OCP). Since for every sequence converging to the related extremal lifts of (OCP) converge to the one of (OCP) (for the evident topologies), every homotopy method on leads to the same optimal solution of (OCP).

###### Remark 2.3.

It is interesting to remark that, at least formally, there is no difficulty in applying Algorithm LABEL:alg to more general (OCP) which consider locally bounded varying delays that are functions of the time and the state, i.e., . In this context, relations close to (LABEL:dynDual)-(LABEL:freeT) are still available (see, e.g. [33]), so that the proposed numerical continuation scheme remains well-defined.

## 3 Numerical Examples

In order to prove the effectiveness and robustness of our approach, we test it on two examples.
As a matter of standard analysis for numerical approaches to solve optimal control problems with delays, we follow the guidelines provided by [6]. The first test is an academic example, while the second one considers the nontrivial problem of a continuous nonlinear two-stage stirred tank reactor system (CSTR), proposed by [34] and [35]. We stress the fact that, in this paper, we are interested in solving an optimal control problem with delays (OCP) starting from its non-delayed version (OCP), without taking care of how (OCP) is solved. Even if we are aware of the fact that this task is far from easy, here we focus our attention on the performance achieved once the solution of (OCP) is known. However, as suggested in Section LABEL:secIntro, in many situations one is able to correctly initialize a shooting method on (OCP) (see [13]).

### 3.1 Setting Preliminaries

The numerical examples proposed are solved by applying Algorithm LABEL:alg verbatim. Good solutions are obtained using a basic linear continuation on . Moreover, an explicit second-order Runge-Kutta method is used to solve all the ODEs coming from the dual formulation, while the routine hybrd [36] is used to solve the shooting problem. The procedure is initialized using the solution of (OCP) provided by the optimization software AMPL [37] combined with the interior point solver IPOPT [38]. We stress the fact that one has to be careful when passing the numerical approximation of the extremals in step B.3) of the previous algorithm. Indeed, it is known that, using collocation methods like Runge-Kutta schemes, the error between the solution and its numerical approximation remains bounded throughout and decreases with , where is the time step while is the order of the method, only if this numerical approximation is obtained by interpolating the numerical values within each subinterval of integration with a polynomial of order .
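The warm-started continuation just described (hybrd is a Newton-type root finder) can be sketched on a toy problem. The scalar map F(z, tau) below is a hypothetical stand-in for the actual shooting function of Algorithm LABEL:alg, and the hand-rolled Newton step stands in for hybrd; the point is only the homotopy loop, which reuses each solution to initialize the next solve:

```python
import math

def newton(F, z0, tau, tol=1e-12, max_iter=50):
    # Newton iteration with a finite-difference slope (a crude stand-in for hybrd).
    z = z0
    for _ in range(max_iter):
        f = F(z, tau)
        if abs(f) < tol:
            break
        h = 1e-7
        slope = (F(z + h, tau) - f) / h
        z -= f / slope
    return z

def F(z, tau):
    # Hypothetical "shooting function": its root z(tau) plays the role of the
    # unknown initial adjoint; tau plays the role of the delay.
    return z - tau * math.cos(z)

# Linear continuation on tau: each solve is warm-started from the previous one.
z = 0.0                  # solution of the "non-delayed" problem F(z, 0) = 0
for k in range(1, 11):
    tau = 0.2 * k        # step tau from 0 toward the target delay 2.0
    z = newton(F, z, tau)

print(abs(F(z, 2.0)))    # residual at the target delay (essentially zero)
```

On the real shooting problem the warm start is essential, and Theorem LABEL:theoMain is what guarantees that the extremal computed at the previous continuation step is a good initial guess for the next one.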
From this remark, it is straightforward that the dimension of the shooting considered in Algorithm LABEL:alg not only increases w.r.t. , but is also proportional to . In the particular case of an explicit second-order Runge-Kutta method, the dimension of the shootings is bounded above by (where is the dimension of the state). The numerical calculations are computed on a machine with an Intel(R) Xeon(R) CPU E5-1607 v2 3.00GHz and 8.00 Gb of RAM.

### 3.2 Analytical Example

Consider the Optimal Control Problem with Delays (OCP) which consists in minimizing the cost subject to

⎧⎪⎨⎪⎩˙xτ(t)=xτ(t−τ1)uτ(t−τ2) ,t∈[0,3]xτ(t)=1 , t∈[−τ1,0],uτ(t)=0 , t∈[−τ2,0)uτ(⋅)∈L∞([−τ2,3],R),τ=(τ1,τ2)=(1,2)

Since no terminal conditions are imposed, this particular (OCP) can only have normal extremals. Then, the Hamiltonian is and the adjoint equation is , with . Finally, we infer from the maximization condition (LABEL:maxCond) that optimal controls are given by . The paper [6] shows that the optimal synthesis of (OCP) can be obtained analytically. In particular, one has

u2(t)=\mathds1[0,1](t)e2+1(et−e2−t),x1(t)=\mathds1[0,2](t)+\mathds1[2,3](t)e2+1(et−2+e4−t) (8)

Considering Remark LABEL:remarkExample, we apply Algorithm LABEL:alg to solve (OCP) with Runge-Kutta time steps, a tolerance of , and maximal iterations for the hybrd routine. Using Simpson's rule, the optimal value is obtained in just one iteration of the continuation scheme. Moreover, global errors in the sup norm between (LABEL:contEx1) and their numerical approximations, respectively of for the control and of for the state, are obtained. Figure LABEL:figEx1 shows the optimal quantities for (OCP), its non-delayed version, and an intermediate solution when .

### 3.3 A Nonlinear Chemical Tank Reactor Model

Let us consider a two-stage nonlinear continuous stirred tank reactor (CSTR) system with a first-order irreversible chemical reaction occurring in each tank.
The system was studied by [35] and successively by [34] in the framework of dynamic programming. This Optimal Control Problem with Delays (OCP) consists in minimizing the cost subject to

⎧⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪⎩˙x1τ(t)=0.5−x1τ(t)−R1(x1τ(t),x2τ(t)) , t∈[0,T]˙x2τ(t)=R1(x1τ(t),x2τ(t))−(u1τ(t)+2)(x2τ(t)+0.25) , t∈[0,T]˙x3τ(t)=x1τ(t−τ)−x3τ(t)−R2(x3τ(t),x4τ(t))+0.25 , t∈[0,T]˙x4τ(t)=x2τ(t−τ)−2x4τ(t)−u2τ(t)(x4τ(t)+0.25)+R2(x3τ(t),x4τ(t))−0.25 , t∈[0,T]x1τ(t)=0.15 , x2τ(t)=−0.03 , t∈[−τ,0],x3τ(0)=0.1 , x4τ(0)=0u1τ(⋅) , u2τ(⋅)∈L∞([0,T],R)

where, now, we have a fixed scalar delay which is chosen in the interval and acts on the state only, the final time is fixed, and the functions , are given by , . Since no terminal conditions are imposed, (OCP) has only normal extremals. The Hamiltonian is

H=p1(0.5−x1−R1(x1,x2))+p2(R1(x1,x2)−(u1+2)(x2+0.25)) +p3(y1−x3−R2(x3,x4)+0.25)+p4(y2−2x4−u2(x4+0.25)+R2(x3,x4)−0.25) −((x1)
{}
## Algebra 2 (1st Edition)

Published by McDougal Littell

# Chapter 8 Rational Functions - 8.4 Multiply and Divide Rational Expressions - 8.4 Exercises - Mixed Review - Page 580: 58

22, 462

#### Work Step by Step

The greatest common factor is the largest whole number that divides each of the given numbers evenly. The least common multiple is the smallest positive whole number that is a multiple of both numbers. Knowing this, we find that the least common multiple is $462$ and the greatest common factor is $22$.
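The exercise's two input numbers aren't shown in this excerpt, but answers like these can be checked mechanically for any candidate pair using the identity gcd(a, b) * lcm(a, b) = a * b. The pair 66 and 154 below is a hypothetical example that produces exactly these answers:

```python
import math

def lcm(a, b):
    # gcd(a, b) * lcm(a, b) == a * b for positive integers a, b
    return a * b // math.gcd(a, b)

# Hypothetical input pair (the exercise's actual numbers aren't shown here):
a, b = 66, 154
print(math.gcd(a, b))  # 22
print(lcm(a, b))       # 462
```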
{}
# Which statistics are sufficient?

Can anyone help explain why the following statistics are either Sufficient or not Sufficient? Mode, Mean, Median, Standard Deviation, Skewness, Kurtosis, Range, IQR

- Perhaps stats.stackexchange.com would be a better venue for this sort of question. –  Jebruho Nov 4 '12 at 22:57
- Every one of them is sufficient for some family of distributions. Sufficiency is relative to a set of probability distributions. This is explained in this article: jstor.org/stable/2683116 –  Michael Hardy Nov 4 '12 at 23:38

A statistic is not 'sufficient' on its own. Sufficiency is with respect to a distribution (or rather a family). For example, Wikipedia's article on sufficient statistics has

A statistic $T(X)$ is sufficient for underlying parameter $θ$ precisely if the conditional probability distribution of the data $X$, given the statistic $T(X)$, does not depend on the parameter $θ$

So a statistic may be sufficient for some parameter of one distribution but not sufficient for another parameter of another distribution. I can think of a number of distributions in which the sample mean is sufficient for some parameter ... and vastly more where it isn't. You don't specify parameters or distributions in your question, so we can't really say whether any of them are sufficient. It depends on the situation.
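To make the definition concrete with an example of my own (not from the thread): for independent Bernoulli(p) trials, the sum T = X1 + ... + Xn is sufficient for p, and for a small n one can verify by direct enumeration that the conditional law of the data given T is the same for every p:

```python
from itertools import product

def conditional_given_sum(p, n=3):
    # P(X = x | sum(X) = t) for each binary outcome x under i.i.d. Bernoulli(p)
    joint = {x: p**sum(x) * (1 - p)**(n - sum(x)) for x in product((0, 1), repeat=n)}
    cond = {}
    for t in range(n + 1):
        total = sum(pr for x, pr in joint.items() if sum(x) == t)
        for x, pr in joint.items():
            if sum(x) == t:
                cond[x] = pr / total  # uniform over the C(n, t) arrangements
    return cond

a = conditional_given_sum(0.2)
b = conditional_given_sum(0.9)
same = all(abs(a[x] - b[x]) < 1e-12 for x in a)
print(same)  # True: the conditional distribution does not depend on p
```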
{}
## Section: New Results

### Quasi-optimal computation of the $p$-curvature

The $p$-curvature of a system of linear differential equations in positive characteristic $p$ is a matrix that measures to what extent the system is close to having a fundamental matrix of rational function solutions. This notion, originally introduced in the arithmetic theory of differential equations, has recently been used as an effective tool in computer algebra and in combinatorial applications. We have described in [6] a recent algorithm for computing the $p$-curvature, whose complexity is almost optimal with respect to the size of the output. The new algorithm performs remarkably well in practice. Its design relies on the existence of a well-suited ring of so-called Hurwitz series, for which an analogue of the Cauchy–Lipschitz theorem holds, and on an FFT-like method in which the “evaluation points” are Hurwitz series.
{}
# Air driven generator?

#### Dan Thomas ##### Well-Known Member I forgot to add 'auto engine' and I guess I should have BOLD/CAPITALIZED/flashing neon colored the 'emergency' part. While a second Lithium battery might be a better bet, we're talking emergency situations. It doesn't need to be perfect, it just needs to do what it needs to do. Say 6-10A for a fuel pump, an amp or two for the ECU, a couple for the coils, say 30-40A for an engine and radio on an emergency bus. Car motors need a lot of gadgets to run... I've accepted any/all performance hits for the simple ability to actually afford to be able to fly. That 'minipod' was 2lbs for 4A. Duralast Gold New Alternator 14130N 40A, 11lbs with pulley. Not a huge penalty. You could shave off some weight. Getting it to spin 2000rpm to get enough out of it... that might be an issue. Batteries age, and need to be maintained/charged. A second alt. would just sit there until you needed it. ULpower's UL260 uses a 30-amp PM generator and rectifier-regulator. The engine and fuel pump alone eat up 15 amps. Seems that EFI and EI need plenty of power. UL260i | ULPower Aero Engines #### pfarber ##### Well-Known Member HBA Supporter ULpower's UL260 uses a 30-amp PM generator and rectifier-regulator. The engine and fuel pump alone eat up 15 amps. Seems that EFI and EI need plenty of power. UL260i | ULPower Aero Engines The fuel pump is the largest load and even then its less than 20 amps, closer to 10. ECU/spark/radio would be less than 10amps. A full load calc would need to be done. My specific install is planning to use an electric water pump, so that's another 10-15 amps. I don't see a secondary battery as a viable solution, they just could not handle the load. If you think about it, the weight is not a significant penalty as a certified motor is dragging around an alternator AND TWO generators AND a battery.
I think with EFI, a lithium main battery, a primary AND secondary alternator, weight-wise it's not a significant difference. Even if you hook up the second alt to the engine (and not try to use wind power) it's not a significant hp load. ##### Well-Known Member The Gennipod was developed by Jim Hardy if the Gen 2 GenniPod prop is flipped it can support speeds over 120 otherwise it is optimal for 60-90 I think. Can you advise what speed generated 4A? Matt It was on an Airbike ultralight. Speed 55-60 mph. #### mcrae0104 ##### Well-Known Member HBA Supporter Log Member This project will be a great opportunity to collect some data points once the system is operational. It's probably the first water pump driven by electricity driven by a turbine/generator driven by thrust from a propeller driven by internal combustion. Let's see how it does. #### pfarber ##### Well-Known Member HBA Supporter It was on an Airbike ultralight. Speed 55-60 mph. #### pfarber ##### Well-Known Member HBA Supporter This project will be a great opportunity to collect some data points once the system is operational. It's probably the first water pump driven by electricity driven by a turbine/generator driven by thrust from a propeller driven by internal combustion. Let's see how it does. I'm still working out how Bumblebees fly first. Then I plan to reconcile Newtons 3rd law and Bouronilli's Principal. Someone needs to figure out how an airplane wing works. Then I'll hit the water pump/propeller/turbine/alternator. Should be done by noon tomorrow. But early results seem to indicate the answer is 7. #### mcrae0104 ##### Well-Known Member HBA Supporter Log Member Bonus points for sarcastic wit, pfarber. I was actually being serious though. I'm interested to hear how it works out. #### pfarber ##### Well-Known Member HBA Supporter Bonus points for sarcastic wit, pfarber. I was actually being serious though. I'm interested to hear how it works out.
I've actually thought of a good viability test, simply get an alternator (amp from an econo-box from the junkyard) and mount a rough propeller (basically take some sheet metal and bend it so I can get some of that newtons 3rd law action), bolt it to a pipe, hang it out the window and drive 75mph. See what kind of RPMs I can get. If I can't spin it at least 2000rpm I don't think I will get enough out of it. From there find a 'better' prop (another reason to play with SolidWorks, thanks EAA!) and get it 3D printed. This would be an emergency use item, so 'working' is more important than 'efficient'. #### TFF ##### Well-Known Member A driven propeller is different from a driving one. You would have to get that right to see any speed. Ram Air Turbine, RAT. They are quite noisy. Depends on load needed. The ones I had to work on ran the emergency hydraulic system. No hydraulics, no flight controls. It had an adjustable pitch blade too. If it's some radios and maybe an EFI, a battery is way better when it comes to complexity. A friend's plane would have to run out of gas more than twice to use up his backup battery. Checking was a pain to spin up one of the generators. Lots of drag to spin one. ##### Well-Known Member I've actually thought of a good viability test, simply get an alternator (amp from an econo-box from the junkyard) and mount a rough propeller (basically take some sheet metal and bend it so I can get some of that newtons 3rd law action), I used a homemade sheet metal fan to turn my air generator. In a short while one blade broke off and you can imagine the vibration that resulted. Thought the plane might come apart. Made a quick landing in a field and removed the fan so I could continue the flight. The next time I used a large wood model airplane prop and that worked well. #### Dan Thomas ##### Well-Known Member Way back in the 1970s I was towing gliders with an Auster, the same airplane I later bought to restore.
This aircraft had no engine-driven generator, and in its original configuration it had a wind-driven generator mounted in the wing leading edge. That generator was long gone. So we had to occasionally recharge the battery for the radio and starter, though I usually hand-propped it. One of the guys in the club took an old generator (from a VW, I think) and mounted the two-blade radiator fan from an old Austin on it. (Austins were cars made in England and plentiful in the '50s and '60s before the Japanese imports outclassed and underpriced them. Japanese cars didn't use Lucas electrics, for one thing.) Mounted it on a wing strut. That fan spun the generator OK, until one day, flying back from a glider meet, it started to howl. The fan was just twisted sheet steel, and twisting shortens the blade length. Fan RPM at flight speeds was higher than it was in the car, and centrifugal force caused the blade to untwist and go to a lower pitch, which increased the RPM, which increased the centrifugal force, which untwisted it more, which made it go faster......... I was terrified that the fan would break and either send a chunk of steel through me or a fuel tank, or the imbalance would tear the strut off. I slowed down to about 50 MPH (stall was mid-30s) and crawled toward the home airport, the fan howling like a siren. Made it, and bent the blade back over the strut when I got out. They replaced it with the plastic fan off a Datsun, which turned much slower. It hadn't overrevved before that trip because all the flying was towing gliders at 70 MPH and descending at 60 and 70, landing at 40. Cruising at 110 caused it to start untwisting. That's one reason I made the little 3" prop for my 1.5-amp generator out of welded-up steel. Stout and stiff. #### pfarber ##### Well-Known Member HBA Supporter I get that thin metal props are not safe or a long term solution. It would be used for nothing more than a viability test.
At 75mph can I even hope to get the RPMs high enough to make +12v and a couple dozen amps. But I guess 'doom and gloom' is the theme of 2020. Instead of long missives about old cars and bent parts, put some thought into 'how could you NOT break the thing' or something helpful along those lines. If I get an hour or two I'll fiddle with it more. #### Dana ##### Super Moderator Staff member The fan on the Gennipod looks like it came from a big computer fan. How much current does a starter draw? Lessee... the Odyssey battery in my plane is rated at 170 CCA and 16Ah nominal capacity. Say the starter draws that full 170 for, say, 10 seconds, that's about 1/2 Ah so you might get 30 starts. The Gennipod puts out 4A so it'd take 7.5 minutes to recharge the battery from a single start, not counting any other loads. That sounds reasonable. Last edited: #### pfarber ##### Well-Known Member HBA Supporter The primary engine alt. would do normal charging duties. Since I am running a car motor with EFI I would like (but not 100% positive I need) another power source. Although if the battery dies and the alternator breaks then things have gone very wrong.
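Dana's back-of-envelope numbers can be sanity-checked directly (same assumed figures: 170 A starter draw for 10 s, a 16 Ah battery, 4 A of generator output):

```python
# Assumed figures from the post above: 170 A for 10 s per start, 16 Ah battery, 4 A output.
starter_amps = 170.0
crank_seconds = 10.0
battery_ah = 16.0
gen_amps = 4.0

ah_per_start = starter_amps * crank_seconds / 3600.0  # amp-hours used per start
starts = battery_ah / ah_per_start                    # starts on a full battery
recharge_min = ah_per_start / gen_amps * 60.0         # minutes to replace one start

print(round(ah_per_start, 2), round(starts), round(recharge_min, 1))  # 0.47 34 7.1
```

Close to the quoted ~30 starts and 7.5 minutes, which rounded the draw up to 1/2 Ah per start.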
{}
# How do you simplify 5t − 3(1 + t) ?

Apr 5, 2018

$2 t - 3$

Here's how I did it:

#### Explanation:

$5 t - 3 \left(1 + t\right)$

The first thing we want to do is distribute (multiply) the $- 3$ to everything in the parentheses:

$- 3 \cdot 1 = - 3$
$- 3 \cdot t = - 3 t$

And when we combine them we get $- 3 - 3 t$. Now the expression is:

$5 t - 3 - 3 t$

Now we do $5 t - 3 t$:

$2 t - 3$ (final simplified answer)

Apr 5, 2018

Your answer is $2 t - 3$

Firstly open up the bracket $- 3 \left(1 + t\right)$; this is equal to $- 3 - 3 t$. I am sure you noticed the sign change: this is because minus times plus equals minus. Your expression now is $5 t - 3 - 3 t$, which is then $5 t - 3 t - 3$. The simplification is $2 t - 3$. Hope this helps
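The simplification can also be spot-checked numerically: two linear expressions that agree at two or more points must be identical, so evaluating both at a handful of values of t confirms the algebra.

```python
def original(t):
    return 5 * t - 3 * (1 + t)

def simplified(t):
    return 2 * t - 3

# Two agreement points already pin down a linear expression; check a few extra anyway.
check = all(original(t) == simplified(t) for t in (-2, 0, 1, 3.5, 10))
print(check)  # True
```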
{}
Figure 1: Example of a (a) diblock copolymer, (b) gradient copolymer and (c) random copolymer

Copolymers are polymers that are synthesized with more than one kind of repeat unit (or monomer). A gradient copolymer exhibits a gradual change in monomer composition from predominantly one species to predominantly the other,[1] unlike block copolymers, which have an abrupt change in composition,[2] and random copolymers, which have no continuous change in composition (see Figure 1).[3][4] As a result of the gradual compositional change along the length of the polymer chain, a gradient copolymer shows less intrachain and interchain repulsion.[5] The development of controlled radical polymerization as a synthetic methodology in the 1990s allowed for increased study of the concepts and properties of gradient copolymers because the synthesis of this group of novel polymers was now straightforward. Due to their properties being similar to those of block copolymers, gradient copolymers have been considered as a cost-effective alternative in applications for other preexisting copolymers.[5]

## Polymer Composition

Figure 2: Graphical depiction of the composition of a gradient copolymer

In the gradient copolymer, there is a continuous change in monomer composition along the polymer chain (see Figure 2). This change in composition can be expressed mathematically. The local composition gradient fraction $g(X)$ is described by the molar fraction of monomer 1 in the copolymer $(F_1)$ and the degree of polymerization $(X)$, and the relationship is as follows:[5]

$g(X)=\frac{dF_1(X)}{dX}$

The above equation assumes that the local monomer composition is continuous.
To compensate for this assumption, another equation, an ensemble average, is used:[5]

$F^{(loc)}_1(X)=\frac{1}{N}\sum_{i=1}^NF_{1,i}(X)$

Here $F^{(loc)}_1(X)$ refers to the ensemble average of the local chain composition, $X$ to the degree of polymerization, $N$ to the number of polymer chains in the sample, and $F_{1,i}(X)$ to the composition of polymer chain $i$ at position $X$. This second equation identifies the average composition over all present polymer chains at a given position, $X$.[5]

## Synthesis

Prior to the development of controlled radical polymerization (CRP), gradient copolymers (as distinguished from statistical copolymers) were not synthetically possible. While a "gradient" can be achieved through compositional drift due to a difference in reactivity of the two monomers, this drift will not encompass the entire possible compositional range. All of the common CRP methods,[6] including atom transfer radical polymerization and reversible addition−fragmentation chain transfer polymerization, as well as other living polymerization techniques including anionic addition polymerization and ring-opening polymerization, have been used to synthesize gradient copolymers.[5] The gradient can be formed through either a spontaneous or a forced gradient. Spontaneous gradient polymerization is due to a difference in reactivity of the monomers. The resulting change in composition throughout the polymerization creates an inconsistent gradient along the polymer. Forced gradient polymerization involves varying the comonomer composition of the feed throughout the reaction time. Because the rate of addition of the second monomer influences the polymerization, and therefore the properties of the formed polymer, continuous information about the polymer composition is vital.
The online compositional information is often gathered through automatic continuous online monitoring of polymerization reactions, a process which provides in situ information allowing for constant composition adjustment to achieve the desired gradient composition.

## Properties

The wide range of compositions possible in a gradient polymer, due to the variety of monomers incorporated and the change of composition along the chain, results in a large variety of properties. In general, the glass transition temperature (Tg) is broad in comparison with that of the homopolymers. Micelles of the gradient copolymer can form when the gradient copolymer concentration is too high in a block copolymer solution. As the micelles form, the micelle diameter actually shrinks, creating a "reel in" effect. The general structure of these copolymers in solution is not yet well established. The composition can be determined by gel permeation chromatography (GPC) and nuclear magnetic resonance (NMR). Generally the composition has a narrow polydispersity index (PDI), and the molecular weight increases with time as the polymer forms.

## Applications

• Compatibilizing Phase-Separated Polymer Blends

Figure 3: a) random copolymer blend with annealing b) gradient copolymer blend with annealing

For the compatibilization of immiscible blends, the gradient copolymer can be used to improve the mechanical and optical properties of immiscible polymers and to decrease the dispersed-phase droplet size.[7] The compatibilization has been tested by reduction in interfacial tension and steric hindrance against coalescence. This application is not available for block and graft copolymers because of their very low critical micelle concentration (cmc). However, the gradient copolymer, which has a higher cmc and exhibits a broader interfacial coverage, can be applied as an effective blend compatibilizer.[8] A small amount of gradient copolymer (i.e. styrene/4-hydroxystyrene) is added to a polymer blend (i.e.
polystyrene/polycaprolactone) during melt processing. The resulting interfacial copolymer helps to stabilize the dispersed phase due to the hydrogen-bonding effects of hydroxystyrene with the polycaprolactone ester group.

• Impact Modifiers and Sound or Vibration Dampers

Gradient copolymers have a very broad glass transition temperature (Tg) in comparison with other copolymers, at least four times broader than that of a random copolymer. This broad glass transition is one of the important features for vibration and acoustic damping applications. The broad Tg gives a wide range of mechanical properties of the material. The glass transition breadth can be adjusted by the selection of monomers with different degrees of reactivity in their controlled radical polymerization (CRP). The strongly segregated styrene/4-hydroxystyrene (S/HS) gradient copolymer is used to study damping properties due to its unusually broad glass transition breadth.[5]

• Potential Applications

There are many possible applications for gradient copolymers, such as pressure-sensitive adhesives, wetting agents, coatings, or dispersions. However, the practical performance and stability of gradient copolymers in these applications have not yet been demonstrated.[5]

## References

1. ^ Kryszewski, M (1998). "Gradient Polymers and Copolymers". Polymers for Advanced Technologies (John Wiley & Sons, Ltd.) 9: 224–259. ISSN 1042-7147.
2. ^ Beginn, Uwe (2008). "Gradient Copolymer". Colloid Polym Sci (Springer) 286: 1465–1474. doi:10.1007/s00396-008-1922-y.
3. ^ Matyjaszewski, Krzysztof; Michael J. Ziegler, Stephen V. Arehart, Dorota Greszta and Tadeusz Pakula (2000). "Gradient Copolymers by Atom Transfer Radical Copolymerization". J. Phys. Org. Chem. (John Wiley & Sons, Ltd.) 13: 775–786. doi:10.1002/1099-1395.
4. ^ Cowie, J.M.G.; Valeria Arrighi (2008). Polymers: Chemistry and Physics of Modern Materials (Third ed.). CRC Press. pp. 147–148. ISBN 9780849398131.
5. Mok, Michelle; Jungki Kim, John M. Torkelson (2008).
"Gradient Copolymers with Broad Glass Transition Temperature Regions: Design of Purely Interphase Compositions for Damping Applications". Journal of Polymer Science (Wiley Periodicals, Inc.) 46: 48–58. doi:10.1002/polb.
6. ^ Davis, Kelly; Krzysztof Matyjaszewski (2002). "Statistical, Gradient, Block, and Graft Copolymers by Controlled/Living Radical Polymerizations". Advances in Polymer Science (Springer) 159: 1–13. doi:10.1007/3-540-45806-9_1.
7. ^ Ramic, Anthony J.; Julia C. Stehlin, Steven D. Hudson, Alexander M. Jamieson, and Ica Manas-Zloczower (2000). "Influence of Block Copolymer on Droplet Breakup and Coalescence in Model Immiscible Polymer Blends". Macromolecules (American Chemical Society) 33: 371–374. Bibcode:2000MaMol..33..371R. doi:10.1021/ma990420c.
8. ^ Kim, Jungki; Maisha K. Gray, Hongying Zhou, SonBinh T. Nguyen, and John M. Torkelson (February 22, 2005). "Polymer Blend Compatibilization by Gradient Copolymer Addition during Melt Processing: Stabilization of Dispersed Phase to Static Coarsening". Macromolecules (American Chemical Society) 38: 1037–1040. Bibcode:2005MaMol..38.1037K. doi:10.1021/ma047549t.
{}
# Inverse and original function relationships 1. Jan 8, 2014 ### MathewsMD Just curious: Are there any unique relationships b/w the inverse of a function and the original, specifically when considering the derivative and integral? 2. Jan 8, 2014 ### DrewD wiki for derivative and Wolfram for the integral. Is that what you're looking for? 3. Jan 9, 2014 ### HallsofIvy Use the chain rule on $f(f^{-1}(x))$ to get the first and "integration by parts" to get the second. 4. Jan 9, 2014 ### MathewsMD Just wondering...if you have an original function $f(x) = y$ and the inverse, $g(y) = x = f^{-1} (f(x))$. Then differentiating $g(y)$ wrt to x you get: $\frac {d(g(y)}{dy} \frac {dy}{dx} = g'(y)y' = g'(f(x))f'(x) = f'^{-1} (f(x)) f'(x)$ I'm just slightly confused on how the article derived $f'^{-1} (x) = \frac {1}{f' ( f^{-1} (x))}$ since all it says is that chain rule is used, specifically here: http://en.wikipedia.org/wiki/Inverse_functions_and_differentiation#Additional_properties. Also, just a little lower in the article, it states: $\frac {d^2y}{dx^2} . \frac {dx}{dy} + \frac {d^2x}{dy^2} . (\frac {dy}{dx})^2 = 0$ All I can really simplify this to by simple manipulation is: $\frac {(d^2y)(dy^2)}{(d^2x)(dx^2)} = -(\frac {dy}{dx})^3$ This isn't really helpful and I don't exactly know what else to do from here. Any insight on taking a different approach or how to move on would be greatly appreciated. I feel like I'm just not applying chain rule correctly, but any help would be great! Last edited: Jan 9, 2014 5. Jan 9, 2014 ### DrewD Just use $g(f(x))=x$ and differentiate. I think it actually works faster if you use $f(g(x))=x$ like HoI, but both work fine. 6. Jan 10, 2014 ### MathewsMD Hmm....I must say that my understanding of Liebniz notation is not too strong. For example, what exactly does this represent (if $y = f(x)$): $\frac {d^3y}{dx^3}$ Is it simply $\frac {1}{f'''^{-1}(x)}$ ? Also, is $\frac {d^2y}{dx^2} . \frac {dx}{dy} + \frac {d^2x}{dy^2} . 
(\frac {dy}{dx})^3 = 0$ equal to $\frac {1}{f''^{-1}(x)}. \frac {1}{f^{-1}} + \frac {1}{f''(x)}. (f'(x))^3 = 0$ Besides rewriting it, I am still a little confused on how this was simplified in the next steps in the link. Last edited: Jan 10, 2014 7. Jan 10, 2014 ### MathewsMD I am also going through the expression: $f^{-1}(x) = \int \frac {1}{f'(f^{-1}(x))}dx + c$ By taking the derivative of both sides, what I simplify this to is: $f'^{-1}(x) = \frac {1}{f'(f^{-1}(x))}$ I just don't exactly see how this result was derived using Chain Rule in the link.
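As an editorial aside: the identity from the article, $(f^{-1})'(x) = 1/f'(f^{-1}(x))$, can at least be confirmed numerically for a concrete invertible function. The choice f(x) = x^3 + x below is arbitrary (strictly increasing, so the inverse exists), and the inverse is computed by bisection:

```python
def f(x):
    return x**3 + x

def f_prime(x):
    return 3 * x**2 + 1

def f_inverse(y, tol=1e-14):
    # f is strictly increasing, so bisection on a wide bracket recovers f^{-1}(y)
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = 2.5
h = 1e-6
lhs = (f_inverse(x + h) - f_inverse(x - h)) / (2 * h)  # finite-difference (f^{-1})'(x)
rhs = 1.0 / f_prime(f_inverse(x))                      # the chain-rule formula
print(abs(lhs - rhs) < 1e-6)  # True
```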
{}
# Abstract Plancherel (Trace) Formulas over Homogeneous Spaces of Compact Groups

Published: 2016-07-14. Printed: Mar 2017.

• Arash Ghaani Farashahi, Numerical Harmonic Analysis Group (NuHAG), Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Wien, Vienna, Austria

## Abstract

This paper introduces a unified operator theory approach to the abstract Plancherel (trace) formulas over homogeneous spaces of compact groups. Let $G$ be a compact group and $H$ be a closed subgroup of $G$. Let $G/H$ be the left coset space of $H$ in $G$ and $\mu$ be the normalized $G$-invariant measure on $G/H$ associated to Weil's formula. Then, we present a generalized abstract notion of a Plancherel (trace) formula for the Hilbert space $L^2(G/H,\mu)$.

Keywords: compact group, homogeneous space, dual space, Plancherel (trace) formula

MSC Classifications: 20G05 - Representation theory; 43A85 - Analysis on homogeneous spaces; 43A32 - Other transforms and operators of Fourier type; 43A40 - Character groups and dual objects
{}
6c) For ϵ > 0 we need δ > 0 such that |x³ − 8| = |x − 2||x² + 2x + 4| < ϵ for every 0 < |x − 2| < δ.
Let δ₁ = 1. Then 0 < |x − 2| < δ₁ = 1 gives 1 < x < 3, so |x² + 2x + 4| < 3² + 2(3) + 4 = 19.
Hence |x − 2||x² + 2x + 4| < 19|x − 2| < ϵ whenever δ = min(1, ϵ/19).

9a) By Theorem 5.1.10, it suffices to exhibit a sequence (sₙ) in D with each sₙ ≠ c such that sₙ converges to c but f(sₙ) is not convergent in ℝ. Let f(x) = 1/x and sₙ = 1/n. Then lim_{n→∞} sₙ = 0, but f(sₙ) = n → +∞, so f(sₙ) is divergent and the limit does not exist.

9b) By Theorem 5.1.10: sin(nπ/2) equals 0 if n = 0, 1 if n = 1, 0 if n = 2, and −1 if n = 3, repeating with period 4. Let f(x) = sin(1/x) and sₙ = 2/(nπ) for all n ∈ ℕ. Then lim_{n→∞} sₙ = 0, but f(sₙ) = sin(nπ/2) cycles through 1, 0, −1, 0, so it does not converge.

9c) lim_{x→0⁺} x sin(1/x). Let ϵ > 0 and take δ = ϵ. Then |x sin(1/x) − 0| ≤ |x|, since |sin(1/x)| ≤ 1 and x > 0. So |x sin(1/x)| ≤ x < ϵ whenever 0 < x < δ, and therefore lim_{x→0⁺} x sin(1/x) = 0.

13) From Theorems 4.1.13, 4.4.11, and 5.1.8 we can see that:
a) Let (sₙ) be a sequence in D that converges to c with sₙ ≠ c.
b) Knowing lim_{x→c} f(x) = L and lim_{x→c} h(x) = L, by Theorem 5.1.8, lim_{n→∞} f(sₙ) = L and lim_{n→∞} h(sₙ) = L.
c) Since f(x) ≤ g(x) ≤ h(x) for every x ∈ D, f(sₙ) ≤ g(sₙ) ≤ h(sₙ) for all n ∈ ℕ.
d) Given ϵ > 0, there is N₁ such that |h(sₙ) − L| < ϵ for all n ≥ N₁.
e) Given ϵ > 0, there is N₂ such that |f(sₙ) − L| < ϵ for all n ≥ N₂. Taking N to be the maximum of N₁ and N₂, both |f(sₙ) − L| < ϵ and |h(sₙ) − L| < ϵ for all n ≥ N.
f) Therefore −ϵ < f(sₙ) − L < ϵ and −ϵ < h(sₙ) − L < ϵ for all n ≥ N.
g) From line c, f(sₙ) − L ≤ g(sₙ) − L ≤ h(sₙ) − L for all n, so −ϵ < g(sₙ) − L < ϵ for all n ≥ N, and lim_{n→∞} g(sₙ) = L.

16) By Theorem 5.1.2, let f: D → ℝ and let c be an accumulation point of D. Then lim_{x→c} f(x) = L iff for each neighborhood V of L there exists a deleted neighborhood U of c such that f(U ∩ D) ⊆ V.
{}
# Find the inverse of a 2x2 matrix ### Problem Find the inverse of the matrix, $\left[\begin{array}{rr}3 & 6 \\ -10 & -10\end{array}\right]$.
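A quick way to check an answer, offered as an illustration rather than as part of the original problem: for a $2\times 2$ matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ the inverse is $\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix}$, computed here with exact fractions.

```python
from fractions import Fraction

# A = [[3, 6], [-10, -10]]; for [[a, b], [c, d]] the inverse is
# (1 / (a*d - b*c)) * [[d, -b], [-c, a]], provided det = a*d - b*c != 0.
a, b, c, d = 3, 6, -10, -10
det = a * d - b * c                      # 3*(-10) - 6*(-10) = 30
inv = [[Fraction(d, det), Fraction(-b, det)],
       [Fraction(-c, det), Fraction(a, det)]]

# Sanity check: A * inv should be the identity matrix.
A = [[a, b], [c, d]]
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```

Here the result is $\frac{1}{30}\begin{bmatrix}-10&-6\\10&3\end{bmatrix}$, i.e. entries $-1/3$, $-1/5$, $1/3$, $1/10$.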
{}
# Subordinator (mathematics) In the mathematics of probability, a subordinator is a concept related to stochastic processes. A subordinator is itself a stochastic process of the evolution of time within another stochastic process, the subordinated stochastic process. In other words, a subordinator will determine the random number of "time steps" that occur within the subordinated process for a given unit of chronological time. In order to be a subordinator a process must be a Lévy process.[1] It also must be increasing, almost surely.[1] ## Examples The variance gamma process can be described as a Brownian motion subject to a gamma subordinator.[1] If a Brownian motion, $W(t)$, with drift $\theta t$ is subjected to a random time change which follows a gamma process, $\Gamma(t; 1, \nu)$, the variance gamma process will follow: $X^{VG}(t; \sigma, \nu, \theta) \;:=\; \theta \,\Gamma(t; 1, \nu) + \sigma\,W(\Gamma(t; 1, \nu)).$ The Cauchy process can be described as a Brownian motion subject to a Lévy subordinator.[1]
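As an illustrative sketch (the parameter values are arbitrary, not taken from any source), the variance gamma construction above can be simulated by drawing gamma-distributed time increments and feeding them to a Brownian motion with drift. Over a step of length $dt$, the subordinator $\Gamma(t;1,\nu)$ has independent increments with mean $dt$ and variance $\nu\,dt$, i.e. gamma-distributed with shape $dt/\nu$ and scale $\nu$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, dt = 1000, 0.01
theta, sigma, nu = 0.1, 0.3, 0.2   # drift, volatility, subordinator variance rate (assumed)

# Gamma subordinator increments: shape dt/nu, scale nu -> mean dt, variance nu*dt.
dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)

# Variance gamma increments: the drifted Brownian motion evaluated at gamma time.
dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n_steps)

X = np.cumsum(dX)              # sample path of X^{VG} on the time grid
G = np.cumsum(dG)              # the subordinator itself: increasing, almost surely
```

Note that `G` is nondecreasing by construction, which is exactly the defining property of a subordinator.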
{}
# Math Help - Integer Corollary Proof 1. ## Integer Corollary Proof I would appreciate some help with this proof! Let n be an element of the integers. There exists no integer x such that n < x < n+1. Thanks! 2. Originally Posted by jstarks44444 I would appreciate some help with this proof! Let n be an element of the integers. There exists no integer x such that n < x < n+1. Thanks! Here is a proof that there is no integer between 0 and 1 as an example for you. $S=\{n\in\mathbb{Z} \mid 0<n<1\}$. Now, suppose $a\in (0,1)$ and a is an integer. Then $a\in S$, so S is non-empty. By the well-ordering principle, S has a least element, l, so 0 < l < 1. Multiply the inequality by l: $0\cdot l < l\cdot l < 1\cdot l$, i.e. $0 < l^2 < l$. We have reached a contradiction, since $l^2\in S$ and $l^2 < l$ contradicts the minimality of l. 3. How would I go about doing this for n and n+1 though? Using this method I arrive at ln < l^2 < ln + l. 4. Originally Posted by jstarks44444 I would appreciate some help with this proof! Let n be an element of the integers. There exists no integer x such that n < x < n+1. Originally Posted by jstarks44444 How would I go about doing this for n and n+1 though? Using this method I arrive at ln < l^2 < ln + l, but can any contradiction be made there? You have us at a clear disadvantage. We have no way to know what text material you are using. In the title of this post is the word 'corollary'. That means it is in addition to some theorem. What theorem? You have not let us in on what basis you are proving things. 5. Well, the theorems that come before this corollary are: *For all k in the Naturals, k >= 1 *There exists no integer x such that 0<x<1 6. Originally Posted by jstarks44444 Well, the theorems that come before this corollary are: *For all k in the Naturals, k >= 1 *There exists no integer x such that 0<x<1 I suspect that you also have the following definition: $m>n$ if and only if $m-n\in \mathbb{N}$.
That means that if $n\in \mathbb{N}$ then $n-0=n\in \mathbb{N}$, so that $n>0$. If the theorem to which the current question is a corollary is $n\ge1$ for all $n\in \mathbb{N}$, then how could there be an integer between $0~\&~1~?$ 7. But n is an element of the integers, not of the naturals... Where is the inductive step in this, or do you not need one? Is it a proof by contradiction? 8. Originally Posted by jstarks44444 But n is an element of the integers, not of the naturals... If you continue to refuse to post a complete list of axioms, definitions, and theorems, you will not receive help. 9. Sorry. Axioms for the natural numbers: If m,n are in the naturals then m+n is in the naturals If m,n are in the naturals then mn is in the naturals 0 is not in the naturals For every m in the integers, we have m in the naturals or m=0 or -m in the naturals 10. Originally Posted by jstarks44444 Sorry. Axioms for the natural numbers: If m,n are in the naturals then m+n is in the naturals If m,n are in the naturals then mn is in the naturals 0 is not in the naturals For every m in the integers, we have m in the naturals or m=0 or -m in the naturals Thank you for that. BUT what is the definition for $m>n$ where $\{m,n\}\subset\mathbb{Z}$? In case you do not know this, your text material is non-standard. We need to know the axiom set for the integers.
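For what it's worth, given the second theorem the thread lists (there is no integer strictly between 0 and 1), the corollary follows by a translation argument; here is a sketch, assuming only that $\mathbb{Z}$ is closed under subtraction:

```latex
\textbf{Claim.} For $n \in \mathbb{Z}$ there is no $x \in \mathbb{Z}$ with $n < x < n+1$.

\textbf{Sketch.} Suppose $x \in \mathbb{Z}$ satisfies $n < x < n+1$.
Subtracting $n$ throughout gives $0 < x - n < 1$, and $x - n \in \mathbb{Z}$
since the integers are closed under subtraction. This contradicts the
theorem that no integer lies strictly between $0$ and $1$.
```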
{}
Thread: Searching for text in files 1. No Profile Picture Contributing User Devshed Newbie (0 - 499 posts) Join Date Nov 2003 Location Edinburgh, UK Posts 84 Rep Power 13 Searching for text in files Hi All, I am having trouble putting some Ruby together to search through some lines of text in a specific file. I have one file, called siteowner.aspx, that contains a certain string that I would like to compare against a static value. So far, I can pick the correct line out, but I would like to filter it for a certain string. So far, my code is as follows: Code: File.open("\\path\\to\\site.aspx", 'r') do |infile| while (line2 = infile.gets) puts 'Via Regxp, Site owner is: '+ line2 if line2 =~ (/^= /) end end However, I am stuck with the regular expression. The above code gives me: 'Via Regxp, Site owner is: siteowner = "OWNER" But what I would really like is: 'Via Regxp, Site owner is: "Owner". I don't suppose someone could help me out a bit and show me where I have gone wrong? 2. No Profile Picture Contributing User Devshed Novice (500 - 999 posts) Join Date Jan 2004 Location Constant Limbo Posts 989 Rep Power 365 Code: irb(main):004:0> s = "siteowner=OWNER" => "siteowner=OWNER" irb(main):005:0> puts "Owner is: #{$1}" if s =~ /.+=(\S+)/ Owner is: OWNER You can use grouping to get a specific portion of a regular expression. irb(main):008:0> "OWNER".capitalize => "Owner"
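A sketch combining the two posts, for anyone landing here later. The sample line and regex below are my own illustration of the idea (a capture group around the value, then `capitalize`), using a temp file in place of the real `site.aspx`:

```ruby
require "tempfile"

# Write a stand-in line like the one the original poster described.
Tempfile.create("site") do |f|
  f.puts 'siteowner = "OWNER"'
  f.rewind
  while (line = f.gets)
    # Capture just the value after "siteowner =", with or without quotes.
    if line =~ /siteowner\s*=\s*"?(\w+)"?/
      puts "Via Regexp, Site owner is: #{$1.capitalize}"
    end
  end
end
```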
{}
# Polyprotic Acids - Expii Polyprotic acids contain more than one proton that will dissociate in solution. Each dissociation step has its own $K_a$ value.
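As a hedged numerical sketch (generic illustrative constants, not data for any particular acid): for a diprotic acid H2A with $K_{a1} \gg K_{a2}$, the first dissociation step fixes the pH, and the second step gives $[\mathrm{A^{2-}}] \approx K_{a2}$.

```python
import math

C = 0.10                      # formal concentration of H2A in mol/L (assumed)
Ka1, Ka2 = 1.0e-4, 1.0e-9     # illustrative dissociation constants, Ka1 >> Ka2

# Step 1: H2A <-> H+ + HA-.  With x = [H+] and x << C, Ka1 ~= x^2 / C.
h = math.sqrt(Ka1 * C)
pH = -math.log10(h)           # about 2.5 for these numbers

# Step 2: HA- <-> H+ + A2-.  Since [H+] ~= [HA-], Ka2 ~= [A2-].
a2 = Ka2
```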
{}
# Archive for March, 2010

## AT&T 3G in Las Vegas

While I was in Las Vegas for MIX10, I couldn't suppress my inexplicable urge to run as many speedtests as I could muster. Of course, I was packing the usual iPhone 3GS with AT&T. Sadly, for nearly the entire visit speeds were barely 250 kilobits/s down, 220 kilobits/s up, if I could even get the speedtest.net application to run. Take a look at the following:

| (kilobits/s) | Average | Min | Max |
|--------------|---------|------|-------|
| Downstream | 251.8 | 14.0 | 552.0 |
| Upstream | 220.8 | 0.0 | 357.0 |

This data is from 13 tests taken during my 3 day stay. They were run over 3G UMTS when it did work, and GSM EDGE when it didn't, and that was virtually the entire time. 3G was either slow, or didn't work at all; switching to EDGE was the only way to do anything.

## How is this possible?

Now, it's fair to say that some of this is sampling bias and the fact that I was at a conference, but even then, there's no excuse. This is a city used to a huge flux of visitors in a short time for trade conferences. Frankly, I can only begin to imagine how overloaded networks are during major conferences like E3. Take a look at the following plot of the average speeds for each day:

Average Downstream Speed

Can you spot which three days are the ones I'm talking about? Note that on the 16th, I couldn't even get a test to run to completion; it just didn't work. There's nothing more to really say about the issue than simply how bad this is. If this is the kind of performance AT&T users see and complain so vocally about in the San Francisco Bay Area and Manhattan, I can completely understand. Frankly, I can see no other reason for that kind of performance degradation than congestion.

## Stories from MIX10 – Impressions

Over spring break I spent an amazing – and busy – three days in Las Vegas at Microsoft's MIX10. I got to see a complete platform reboot with Windows Phone 7 Series, some interesting news about IE 9, and most importantly got to meet some awesome people.
MIX10 Keynote Stage I’ve been writing a lot over that time with AnandTech, which I’ll wrap up here: • First day MIX10 Windows Phone 7 Series Impressions – link • Internet Explorer 9 Platform Preview – link • Windows Phone 7 Series: The AnandTech Guide – link • If you had to read any one of these, this would be the one to be. It’s over 8000 words and comprehensively wraps up the platform in my opinion. AnandTech - Bing search hands on There were a couple hilarious quotes that I overheard at the conference, which I think I’ll just share briefly. Keep in mind this is at a development conference. 1. “…and we call this checkbox driven development. We can do everything we want just with checkboxes” 2. “…and we only had to write one line of code! Just one line, and we’re done!” 3. But my favorite: “Can I use the back button for fire? What if I really really want to use the back button?” – immediately after a presentation about how the back button is reserved for going back. ## My Notetaking Workflow I didn’t have much time this year to follow TED (In fact, when I first sat down to write this, it was still going on). To be honest, I usually watch the videos a few months afterward, once they’re all finally uploaded and the hype has died down. It’s easy to get caught up in how much certain talks are plugged compared to others, especially with how much live information leaks out over twitter. But I did break that trend this year a bit. I noticed an intriguing project by Robert Scoble on a blog post of his involving taking photos of notes by the attendees and posting them to flickr. Intrigued, I expected to be wowed by the different creative and thoughtful methods employed which I could use myself for note-taking. Imagine my disappointment, then, when what I saw that most attendees were either using their iPhones or BlackBerries, scraps of paper, nonstandard spiral bound notebooks, or just generally chaotic methods for taking notes. 
I mean, aside from the now-famous mind-mapping note girl (photo here; I can't look at it again because it makes my brain hurt and my teeth start gnashing), there really wasn't anything TED-level-inspiring.

## Numerical Breakdown

Let's just break it down for a second:

- 34 pictures in the set
- Mobile devices: 9 – 26.5%
  - iPhones: 7 – 20.6%
  - BlackBerry: 2 – 5.9%
- Paper: 25 – 73.5%
  - Notebooks (spiral or bound): 14 – 41.2%
  - Mini Notebooks (or similarly sized): 6 – 17.6%
  - Program/Scraps: 4 – 11.8%
  - PowerPoint Handouts (Bill Gates): 1 – 2.9%

Generally, I abhor Excel plots, but this does a good job communicating my point:

Notes Breakdown

But that's not all; of the iPhone note photos, virtually every single one used the built-in notes application. Yeah, the notes application that ships with the iPhone, which lacks just about everything imaginable.

A typical AT&T "Mark The Spot" report

No Evernote love? No Google Documents love? That's certainly surprising. Yet these attendees consider themselves shakers and movers? Definitely avant-garde? Perhaps ahead of the curve at adoption of new tech? Sorry, virtually every one of you was thoroughly beaten by mind-map girl entirely by default, entirely because of her uniqueness factor. Even more surprising, the journalists in the photo set aren't even using steno pads. With the exception of Bill Gates (who obviously is using PowerPoint handouts for his presentation), there's really no excuse. Granted, this could entirely just be bad sampling on Scoble's part. Whatever the case, it's a unique opportunity to segue into how much I love the way I take notes.

## OneNote – The best-kept secret for organizing everything

Ok, those words aren't entirely my own, but they're the truth. Microsoft OneNote 2007 (and its predecessor) aren't just about notes; they're about collecting, organizing, searching, and making accessible just about anything and everything. You don't need a tablet, and it isn't just about text.
I think it's pretty fair to say that OneNote is almost the best-kept secret and most undiscovered part of Office 2007.

Relatively Typical Notes View

My freshman year of college, I decided that I wanted to try using it for all of my notes. At the time, I was intrigued by the notion of using a Samsung Q1 Ultra V, a UMPC, due to its tiny form factor and long battery life. That worked, but I've since moved on to a Latitude XT in favor of its active digitizer and capacitive multitouch screen. Regardless, I've used OneNote for virtually all my notes since, and it has numerous advantages over paper:

Search Results

1. My notes are searchable, entirely. Not just text in its native form either, but handwritten text from the tablet, images (it searches the images), and audio.
2. I don't have to carry around spiral-bound notebooks that are heavy, or waste money on dead trees (hey, this is one aspect of my life that actually is green).
3. I can annotate and take notes directly atop PDFs, PowerPoints, or whatever materials are being studied without having to print them beforehand. This is extremely useful, as I can get anything into notes by printing it to OneNote.
4. My notes can be (and are) backed up regularly. That's something you can't really do with paper notes, short of making copies or scanning every day.
5. I can keep every year's worth of notes in one place. Obviously, that's a ton of stuff 3 years in. I think you'd be hard pressed to carry around your spiral-bound notebooks every day.
6. I can organize with sections, tabs, notebooks, and pages. The analogues to a notebook are obvious, but there are other things as well that make a lot more sense in the context of digital notes.
7. Something which always comes in handy is being able to instantly send my notes to other people; I can make PDFs of pages, sections, or entire notebooks.
8. Everything lives in one place: text notes, PowerPoints, images, clips of webpages, even files.
I honestly can't see how it'd be possible to take notes electronically without OneNote at this point. Granted, there are a lot of other alternatives, but I find that they either have game-stopping flaws or are otherwise unwieldy:

- Microsoft Word
  - I see this one a lot in classes, and don't even know where to start. Word is great as a word-processing tool, but that's about all. Sure, you can take notes, but they won't be searchable (which is a huge drawback for me), and ultimately you're constrained by this page-by-page model that lies at its core. Combining graphics with notes is possible, but hard. OneNote is almost like Word without pages.
  - How the heck are you supposed to take equations down quickly in Word? If you've used the equation editor, you know what a lesson in frustration it is.
- $\LaTeXe$/LyX
  - A while back on Slashdot I read a great article I could relate to about taking notes in class for science and engineering. It discussed/asked what the optimal computerized note-taking suite was, given an emphasis on entering equations. Of course, $\LaTeXe$ came up, along with its GUI-wrapped similar cousin LyX. I'm a big big fan of $\LaTeXe$, especially for documents and other things, but I can't see it being practical or fast enough for taking notes every day. Granted, there are people out there (like some of my crazier friends) that are faster at typing the equations than writing them, but I find myself being able to write faster.
{}
# American Institute of Mathematical Sciences 2013, 9(1): 117-129. doi: 10.3934/jimo.2013.9.117 ## Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints 1 Jiangxi Key Laboratory of Numerical Simulation Technology, School of Mathematics and Computer Sciences, Gannan Normal University, Ganzhou, 341000 2 School of Mathematics and Computer Sciences, Gannan Normal University, Ganzhou, 341000, China 3 School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China Received: February 2012. Revised: May 2012. Published: December 2012. In this paper, we present a multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints, which can be viewed as an extension of the multivariate spectral gradient method for solving unconstrained optimization problems. The proposed method requires neither the computation of derivatives nor the solution of any linear equations. Under some suitable conditions, we establish its global convergence results. Preliminary numerical results show that the proposed method is efficient and promising. Citation: Gaohang Yu, Shanzhou Niu, Jianhua Ma. Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints. Journal of Industrial & Management Optimization, 2013, 9 (1) : 117-129. doi: 10.3934/jimo.2013.9.117 ##### References: [1] J. Barzilai and J. M. Borwein, Two point step size gradient methods, IMA J. Numer. Anal., 8 (1988), 141. doi: 10.1093/imanum/8.1.141. [2] S. P. Dirkse and M. C. Ferris, MCPLIB: A collection of nonlinear mixed complementarity problems, Optim. Meth. Soft., 5 (1995), 319. doi: 10.1080/10556789508805619. [3] E. Dolan and J. Moré, Benchmarking optimization software with performance profiles, Math. Program. Ser. A, 91 (2002), 201. doi: 10.1007/s101070100263. [4] M. E.
El-Hawary, "Optimal Power Flow: Solution Techniques, Requirement and Challenges," IEEE Service Center, 1996. [5] L. Han, G. H. Yu and L. T. Guan, Multivariate spectral gradient method for unconstrained optimization, Appl. Math. and Comput., 201 (2008), 621. doi: 10.1016/j.amc.2007.12.054. [6] A. N. Iusem and M. V. Solodov, Newton-type methods with generalized distances for constrained optimization, Optim., 41 (1997), 257. doi: 10.1080/02331939708844339. [7] W. La Cruz, J. M. Martinez and M. Raydan, Spectral residual method without gradient information for solving large-scale nonlinear systems of equations, Math. Comp., 75 (2006), 1429. doi: 10.1090/S0025-5718-06-01840-0. [8] W. La Cruz and M. Raydan, Nonmonotone spectral methods for large-scale nonlinear systems, Optim. Meth. Soft., 18 (2003), 583. doi: 10.1080/10556780310001610493. [9] D. H. Li and X. L. Wang, A modified Fletcher-Reeves-type derivative-free method for symmetric nonlinear equations, Numer. Alge. Ctrl. Optim., 1 (2011), 71. [10] Q. N. Li and D. H. Li, A class of derivative-free methods for large-scale nonlinear monotone equations, IMA J. Numer. Anal., 31 (2011), 1625. doi: 10.1093/imanum/drq015. [11] F. M. Ma and C. W. Wang, Modified projection method for solving a system of monotone equations with convex constraints, Appl. Math. Comput., 34 (2010), 47. [12] K. Meintjes and A. P. Morgan, A methodology for solving chemical equilibrium systems, Appl. Math. Comput., 22 (1987), 333. doi: 10.1016/0096-3003(87)90076-2. [13] K. Meintjes and A. P. Morgan, Chemical equilibrium systems as numerical test problems, ACM Trans. Math. Soft., 16 (1990), 143. doi: 10.1145/78928.78930. [14] J. M. Ortega and W. C. Rheinboldt, "Iterative Solution of Nonlinear Equations in Several Variables," Academic Press, 1970. [15] M. V. Solodov and B. F. Svaiter, A globally convergent inexact Newton method for systems of monotone equations, 1998, 355. [16] C. W. Wang, Y. J. Wang and C. L.
Xu, A projection method for a system of nonlinear monotone equations with convex constraints, Math. Meth. Oper. Res., 66 (2007), 33. doi: 10.1007/s00186-006-0140-y. [17] A. J. Wood and B. F. Wollenberg, "Power Generations, Operations and Control," Wiley, 1996. [18] N. Yamashita and M. Fukushima, Modified Newton methods for solving a semismooth reformulation of monotone complementarity problems, Math. Program., 76 (1997), 469. [19] G. H. Yu, A derivative-free method for solving large-scale nonlinear systems of equations, J. Ind. Manag. Optim., 6 (2010), 149. doi: 10.3934/jimo.2010.6.149. [20] G. H. Yu, Nonmonotone spectral gradient-type methods for large-scale unconstrained optimization and nonlinear systems of equations, Pacific J. Optim., 7 (2011), 387. [21] Z. S. Yu, J. Lin, J. Sun, Y. H. Xiao, L. Y. Liu and Z. H. Li, Spectral gradient projection method for monotone nonlinear equations with convex constraints, Appl. Numer. Math., 59 (2009), 2416. doi: 10.1016/j.apnum.2009.04.004. [22] E. Zeidler, "Nonlinear Functional Analysis and Its Applications, II/B: Nonlinear Monotone Operators," Springer-Verlag, 1990. doi: 10.1007/978-1-4612-0985-0. [23] L. Zhang and W. J. Zhou, Spectral gradient projection method for solving nonlinear monotone equations, J. Comput. Appl. Math., 196 (2006), 478. doi: 10.1016/j.cam.2005.10.002. [24] W. J. Zhou and D. H. Li, Limited memory BFGS method for nonlinear monotone equations, J. Comp. Math., 25 (2007), 89. [25] W. J. Zhou and D. H. Li, A globally convergent BFGS method for nonlinear monotone equations without any merit functions, Math. Comp., 77 (2008), 2231. doi: 10.1090/S0025-5718-08-02121-2.
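The abstract above gives no pseudocode, so here is a generic sketch of the family of methods it builds on (my own construction in the Solodov–Svaiter hyperplane-projection style of references [15] and [21], not the authors' algorithm): solve $F(x)=0$ for monotone $F$ over a convex set $C$, using only function values. The test problem $F(x)=x+\sin x$ with $C=\{x \ge 0\}$ is an assumption for illustration.

```python
import numpy as np

def F(x):
    # A monotone map: the Jacobian I + diag(cos x) is positive semidefinite.
    return x + np.sin(x)

def solve(x0, tol=1e-8, sigma=1e-4, beta=0.5, max_iter=500):
    x = np.clip(x0, 0, None)          # start feasible in C = {x >= 0}
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                       # derivative-free search direction
        t = 1.0
        while True:                   # backtracking line search along d
            z = x + t * d
            if -F(z) @ d >= sigma * t * (d @ d) or t < 1e-12:
                break
            t *= beta
        Fz = F(z)
        # Project x onto the hyperplane separating it from the solution set,
        # then project back onto C (here, clipping to the nonnegative orthant).
        x = np.clip(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz, 0, None)
    return x
```

On this test problem the iterates converge to the unique feasible zero $x = 0$; a spectral (Barzilai–Borwein style) step length, as in the paper's title, would replace the fixed initial trial step `t = 1.0`.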
{}
# Potentiality classes and Borel reductions In a 1998 paper by Hjorth, Kechris, and Louveau, there was a definition given of a "potentiality class." That is, given an invariant equivalence relation $E$ on a standard Borel space $X$, we say $E$ is (for example) potentially $\Pi^0_3$ if, for some Polish topology on $X$ yielding the same Borel sets, $E$ is a $\Pi^0_3$ subset of $X\times X$. This condition is equivalent to being Borel reducible to some $F$ which is actually $\Pi^0_3$ on its space, so these potentiality classes are downward closed with respect to Borel reducibility. We say $E$ is $Pot(\Gamma)$ (for any such $\Gamma$) if it is potentially in $\Gamma$, but not potentially in the dual space (or, if $\Gamma$ is self-dual, if it is potentially in $\Gamma$ but not in any proper sub-class). The main theorem is that there are only a small number of potentiality classes. What they do not directly say, but perhaps assume is obvious, is that if two relations are in the same potentiality class, then they are Borel equivalent. Is this a correct reading of the condition? If not, I'm not sure why not, but if so, it has a lot of interesting model-theoretic consequences and I would think they would point it out explicitly. The paper in question is "Borel equivalence relations induced by actions of the symmetric group" by Hjorth, Kechris, Louveau, in the Annals of Pure and Applied Logic, 92 (1998) 63-112. • Maybe I'm misunderstanding something. As far as I can tell, the main theorem is not that there are only a small number of potentiality classes, but rather potentiality classes of relations induced by closed subgroups of $S_\infty$! This seems like quite a difference. I don't know about this relation, but in general, there are no more than $\omega_1$ potentiality classes, while there are $\mathfrak c$ many classes of Borel equivalence, so without CH at the very least, the two can't be the same (and even with CH it sounds dubious). 
– tomasz Mar 3 '15 at 19:03 • From a model theory perspective, those are the only potentiality classes of interest (that are Borel) - the isomorphism relation for the set of countable models of an $L_{\omega_1,\omega}$-sentence is always such a class. The really interesting thing here would be if this showed there are only $\omega_1$ classes of Borel equivalence for such situations. I agree it sounds a bit dubious. – Richard Rast Mar 4 '15 at 15:00 • Fair enough. However, I think saying that these are the only interesting potentiality classes from perspective of model theory is a bit too general, Borel equivalence relations in model theory are not limited to isomorphism relations of countable models. – tomasz Mar 5 '15 at 6:01 • @RichardRast: If by "Borel equivalent" you mean "Borel bireducible", then the answer should be no. I recently had to quote this theorem, which should be somewhere in the paper you linked (but I did not bother checking). Given this fact, every essentially countable Borel equivalence relation which is the orbit equivalence relation of a Borel action of $S_{\infty}$ is potentially $\Sigma^0_2$. You can easily find three non-equivalent such orbit equivalence relations. (For example, isomorphism of torsion-free abelian groups of rank 1,2,3) – Burak Sep 7 '15 at 6:50 It is not necessarily true that Borel equivalence relations that are potentially in the same pointclass are Borel bireducible. For example, consider the orbit equivalence relations of the logic action of $S_{\infty}$ on the standard Borel space of torsion-free abelian groups of rank $n$. Then these orbit equivalence relations are essentially countable and hence are potentially $\mathbf{\Sigma^0_2}$ (for example, see this theorem). However, Simon Thomas proved that the Borel complexity of isomorphism of torsion-free abelian groups of rank $n$ (strictly) increases with rank (in this paper).
# Strengthening an implication of the abc conjecture

Granville (p. 5) gives an implication of the abc conjecture: Assume the abc conjecture. Let $f(x,y)$ be a squarefree homogeneous polynomial with integer coefficients. For coprime integers $m,n$, if $q^2 \mid f(m,n)$, then $q \ll \max(|m|,|n|)^{2+\epsilon}$.

Can we strengthen this to $q \ll |mn|^{1+\epsilon}$? For constant $n$ this is consistent with the paper. Are there heuristic arguments for small roots modulo squares?

Yes. Without loss of generality, $x$ and $y$ divide $f(x,y)$. (If not, then multiply by one or the other, and $q$ will still divide it.) Without loss of generality, $m \geq n$. Then we know that the product of primes dividing $f(m,n)$ is at least $m^{\deg f - 2 - \epsilon}$ and that $f(m,n)$ is at most a constant times $m^{\deg f -1} n$, so $q$ is at most the ratio, which is $(mn)^{1+\epsilon}$.
# gcd calculation online

### Function : gcd

#### Summary :
GCD calculator using Euclid's algorithm, with the steps of the GCD calculation detailed.

#### Description :
The gcd function calculates online the greatest common divisor of two integers. To calculate the GCD online, the function uses the Euclidean algorithm. The steps of the GCD calculation are shown.

# Principle of the Euclidean algorithm

The Euclidean algorithm uses successive Euclidean divisions to determine the GCD. To calculate the greatest common divisor of two integers a and b, perform the Euclidean division of a by b, obtaining a = bq + r. If r is zero, b is the GCD; otherwise, repeat the operation by performing the Euclidean division of b by r. The algorithm uses the property gcd(a,b) = gcd(b,r). The GCD is the last non-zero remainder. The following example shows a detailed calculation using Euclid's algorithm to determine the GCD of two numbers.

# Calculating the GCD

Thus, to calculate online the GCD of the two integers 150 and 350, just type gcd(150;350); the calculator returns the result 50. The calculation of the GCD is particularly useful for simplifying a fraction and putting it in the form of an irreducible fraction.

#### Syntax :
gcd(a;b), where a and b are integers.

#### Examples :
gcd(15;25) returns 5
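The algorithm the calculator describes can be sketched in a few lines of Python (the function name and loop form are illustrative, not the calculator's actual implementation):

```python
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, r), where a = b*q + r.
    # The loop stops when the remainder is zero; the last non-zero
    # remainder (held in a) is the GCD.
    while b != 0:
        a, b = b, a % b
    return a
```

For the page's examples, `gcd(150, 350)` returns 50 and `gcd(15, 25)` returns 5.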
# Table of contents doesn't show a number for a line

I'm new to LaTeX and I'm writing my thesis with the "report" document class. These are my first lines:

...
\newcommand\blankpage{ \null \thispagestyle{empty} \newpage}
\begin{document}
\includepdf{cover.pdf}
\afterpage{\blankpage}
\chapter*{Abstract}
\pagenumbering{gobble}
text text text
\tableofcontents
\chapter*{Introduction}
\pagenumbering{Roman}
text text text
\chapter{Chapter 1}
\pagenumbering{arabic}
text text text

Essentially everything is as intended: "Abstract" and the table of contents are unnumbered, "Introduction" is numbered with Roman numerals and "Chapter 1" with arabic numerals. Unfortunately, in the table of contents the line "Introduction" has no number next to it. I'd like it to show the Roman numeral 'I' (which is the right number).

• you switch to Roman too late, after you have written the table of contents line. – David Carlisle Oct 25 '17 at 18:24
• Putting Roman after \tableofcontents I obtain what I wanted, but now the last page of the table of contents is numbered 'I', so "Introduction" starts with 'II'. That's curious, since in the table of contents the "Introduction" line shows the number 'I'... – Akinn Oct 25 '17 at 18:27
• a change of page numbering should always come after a forced page break, so normally \clearpage. – David Carlisle Oct 25 '17 at 19:02
• also it's not clear why you have \afterpage or \blankpage; the \pagenumbering resets the page counter to 1, so the subtracting of one in \blankpage isn't doing anything. – David Carlisle Oct 25 '17 at 19:04
• I solved it by putting \clearpage after \tableofcontents – Akinn Oct 25 '17 at 19:17

\clearpage
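Putting the accepted fix together, a minimal compilable sketch might look as follows (placeholder text only; the PDF cover and blank page are specific to the poster's setup and are omitted):

```latex
\documentclass{report}
\begin{document}

\chapter*{Abstract}
\pagenumbering{gobble}
text text text

\tableofcontents
\clearpage            % force the page break *before* switching numbering
\pagenumbering{Roman}

\chapter*{Introduction}
text text text

\clearpage
\pagenumbering{arabic}

\chapter{Chapter 1}
text text text

\end{document}
```

One caveat: `\chapter*` does not add an entry to the table of contents at all unless you also issue `\addcontentsline{toc}{chapter}{Introduction}`; the poster presumably does this somewhere in the omitted preamble or text.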
# How do you solve the rational equation 4/(x+2)-3/(x-5)=15/(x^2-3x-10)? Jan 27, 2016 x = 41 #### Explanation: Firstly: factor the denominator on the right side. ${x}^{2} - 3 x - 10 = \left(x - 5\right) \left(x + 2\right)$ Equation now becomes : $\frac{4}{x + 2} - \frac{3}{x - 5} = \frac{15}{\left(x - 5\right) \left(x + 2\right)}$ Multiply each term on both sides by (x-5)(x+2) $\frac{4 \left(x - 5\right) \cancel{x + 2}}{\cancel{x + 2}} - \frac{3 \cancel{x - 5} \left(x + 2\right)}{\cancel{x - 5}}$= $\frac{15 \cancel{x - 5} \cancel{x + 2}}{\cancel{x - 5} \cancel{x + 2}}$ which simplified becomes : 4(x-5) -3(x+2) = 15 hence 4x - 20 - 3x - 6 = 15 therefore x = 15+6+20 = 41
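The result can be verified exactly with Python's fractions module (a quick check, not part of the original solution):

```python
from fractions import Fraction

x = 41
# left side: 4/(x+2) - 3/(x-5)
lhs = Fraction(4, x + 2) - Fraction(3, x - 5)
# right side: 15/(x^2 - 3x - 10)
rhs = Fraction(15, x**2 - 3*x - 10)
print(lhs == rhs)  # True; both sides reduce to 5/516
```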
anonymous 3 years ago Question: A smartphone currently is worth \$175 and depreciates at 25% per year. What will the value be in 3 years? What equation do I use to solve this?

1. anonymous losing 25% of its value means it retains 75% of its value; use $$175\times .75^t$$

2. anonymous the equation is close to the compound-interest formula $$p(1+r/100)^n$$ with n = number of years, except that for depreciation the rate is subtracted: $$p(1-r/100)^n$$

3. anonymous or if you prefer $$175\times (\frac{3}{4})^t$$ either way you want $$175\times (.75)^3$$ and a calculator is needed for this one

4. anonymous well done satellite

5. anonymous Thanks guys.(:
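The formula from the thread can be evaluated with a short Python snippet (the function name is just for illustration):

```python
def depreciated_value(principal, rate, years):
    # each year the item keeps (1 - rate) of its value,
    # so after `years` years the value is principal * (1 - rate)^years
    return principal * (1 - rate) ** years

print(depreciated_value(175, 0.25, 3))  # 73.828125
```

So the phone is worth about \$73.83 after 3 years.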
Divergent Series/Examples/sin i n over n^2

Example of Divergent Series

The complex series defined as:

$\ds S = \sum_{n \mathop = 1}^\infty \dfrac {\sin i n} {n^2}$

is divergent.

Proof

$\ds \cmod {\dfrac {\sin i n} {n^2} }$ $=$ $\ds \cmod {\dfrac {\exp \paren {i \paren {i n} } - \exp \paren {-i \paren {i n} } } {2 i n^2} }$ Sine Exponential Formulation
$\ds$ $=$ $\ds \cmod {\dfrac {\exp \paren {- n} - \exp n} {2 n^2} }$
$\ds$ $>$ $\ds \dfrac {e^n - 1} {2 n^2}$
$\ds$ $\to$ $\ds \infty$ as $n \to \infty$

In particular, the terms of $S$ do not tend to $0$, so $S$ diverges by Terms in Convergent Series Converge to Zero.

Hence the result.
$\blacksquare$
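The growth of the term moduli is easy to see numerically; a short check (not part of the proof) using Python's cmath:

```python
import cmath
import math

def term_modulus(n):
    # |sin(i n) / n^2|; by the sine exponential formulation this
    # equals (e^n - e^{-n}) / (2 n^2), which grows without bound
    return abs(cmath.sin(1j * n) / n ** 2)

for n in (1, 5, 10):
    print(n, term_modulus(n))
```

Already at n = 5 the term exceeds 1, and the moduli keep increasing, so the terms cannot tend to 0.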
## Baker's Technique: Designing Approximation Schemes for Planar Graphs

In 1994, Brenda Baker described a method for designing approximation algorithms for planar graphs [^B. Baker, Approximation algorithms for NP-complete problems on planar graphs, JACM Volume 41 Issue 1, Jan. 1994.^]. The technique results in an algorithm with approximation ratio {$1\pm\epsilon$} and running time {$2^{O(1/\epsilon)}n$}. Such a trade-off between running time and approximation ratio is called an approximation scheme; when the running time is polynomial for fixed {$\epsilon$}, it is a polynomial-time approximation scheme or PTAS. The method applies to a host of problems. We describe the method using maximum-weight independent set as our example problem [^The technique can be viewed as an application of the layering technique as described in Section 2.2 in Approximation Algorithms by Vazirani.^].

# Layering technique

The layering technique for designing an approximation algorithm for a given problem has two key parts:

• break the input into layers such that the given problem can be solved optimally in each layer
• take the union of the layers' solutions to obtain a feasible solution to the original input

When applied to planar graphs, we partition the vertices of the graph into layers. For maximum independent set, no edge may run between different layers, so that the union of the layers' solutions is feasible for the original graph.

Attach:Baker.png | The layers of a planar graph.

Consider an embedding of the planar graph {$G = (V,E)$}. We label each vertex according to its depth from the boundary of the graph: the vertices on the boundary of the graph are given label 0; the vertices with label {$i$} are on the boundary of the graph obtained by deleting the vertices with labels {$0,1,\ldots,i-1$}. Let {$V_\ell$} be the set of vertices with label {$\ell \bmod k$} ({$k$} is a parameter that will be fixed later).
Let {$G^\ell_1, G^\ell_2, \ldots,$} be the components of {$G$} that are obtained by deleting {$V_\ell$}. These components are the layers. By construction, {$G^\ell_j$} is a planar graph with {$k-1$} labels; we call such graphs {$(k-1)$}-outerplanar graphs, more on that below. Problems such as maximum-weight independent set can be solved optimally in {$2^{O(k)} n$} time in such graphs. # The algorithm To motivate the algorithm, we start with some observations. Let {$S^*$} be the maximum-weight independent set of vertices in the graph {$G = (V,E)$}. {$\{V_0,V_1, \ldots, V_{k-1} \}$} is a partition of the vertices of the graph; it can be used to partition {$S^*$} as well, giving us {$\sum_{\ell = 0}^{k-1} w(S^* \cap V_\ell) = w(S^*)$}. It follows that {$\min_\ell w(S^* \cap V_\ell) \leq {1\over k} w(S^*)$}. Let {$V_{\ell^*}$} be the set in the partition that witnesses this minimum contribution to the optimal solution. Let {$S_i^{\ell^*}$} be the maximum-weight independent set of {$G_i^{\ell^*}$}. Since independent sets are independent sets of subgraphs too, {$S^* \cap G_i^{\ell^*}$} is an independent set of {$G_i^{\ell^*}$} and so {$w(S_i^{\ell^*}) \geq w(S^* \cap G_i^{\ell^*})$}. So, if we take the union of the solutions for each layer {$\cup_i S_i^{\ell^*}$}, we get a solution whose weight is at least {$\sum_i w(S^* \cap G_i^{\ell^*}) = w(S^*) - w(S^* \cap V_{\ell^*})$}. By the choice of {$\ell^*$}, this is at least {$(1-{1\over k})w(S^*)$}. Using {$k = {1\over \epsilon}$}, the solution {$\cup_i S_i^{\ell^*}$} approximates {$S^*$} within {$1-\epsilon$} as desired. However, in order to find {$\cup_i S_i^{\ell^*}$} we needed to know {$\ell^*$}, which in turn required knowing {$S^*$}, the optimal solution to the problem we were trying to compute. 
However, as {$\ell^*$} can only be one of {$k$} values we can simply try each value: INDEPENDENT-SET({$G$},{$w$},{$\epsilon$}) {$k = 1/\epsilon$} find the levels of outerplanarity {$\pmod k$}: {$\{V_0,V_1, \ldots, V_{k-1} \}$} for {$\ell = 0, \ldots, k-1$} find the components {$G^\ell_1, G^\ell_2, \ldots,$} of {$G$} after deleting {$V_\ell$} for {$i = 1, \ldots,$} compute {$S_i^{\ell}$}, the maximum-weight independent set of {$G_i^{\ell}$} {$S^{\ell} = \cup_i S_i^\ell$} let {$S^{\ell^*}$} be the solution of maximum weight among {$\{S^0,S^1, \ldots, S^{k-1} \}$} return {$S^{\ell^*}$} If one can find the maximum-weight independent sets in each {$G_i^{\ell}$} in {$2^{O(k)} |G_i^{\ell}|$} time, then this algorithm runs in {$2^{O(k)}k n$} time. It remains to show that we can compute the maximum-weight independent set in {$G_i^{\ell}$} in {$2^{O(k)} |G_i^{\ell}|$} time. By construction, we know {$G_i^{\ell}$} is a {$(k-1)$}-outerplanar graph. Usually one argues the solvability of such problems as maximum-weight independent set in {$(k-1)$}-outerplanar graphs by quoting two theorems: Theorem: {$k$}-outerplanar graphs have treewidth {$3k-1$} and the corresponding tree decomposition can be found in linear time. [^H. Bodlaender, A partial k-arboretum of graphs with bounded treewidth, Theoretical Computer Science, Volume 209, Issues 1-2, 6 December 1998, Pages 1-45.^] Theorem: Given a tree decomposition of a graph of treewidth {$tw$}, maximum-weight independent set can be solved optimally in {$2^{O(tw)} n$} time. [^For example, M. Bern, E. Lawler, A. Wong, Linear-time computation of optimal subgraphs of decomposable graphs, Journal of Algorithms 8 (2): 216–235, 1987.^] Rather than prove these theorems at the expense of several pages here, we show how to solve maximum-weight independent set in a {$(k-1)$}-innerplanar graph. 
Innerplanar graphs are a special case of outerplanar graphs, and the ingredients used below will (hopefully) impart the intuition and ingredients of the above two theorems.

# Solving maximum-weight independent set in a {$k$}-innerplanar graph

[Figure: A 3-innerplanar graph (left) and a 3-outerplanar graph (right); vertices are coloured according to their label (red = 1, blue = 2, green = 3).]

First, let's formally define inner- and outerplanarity. A planar graph that can be drawn so that all the vertices are on the boundary of the graph is an outerplanar graph (or 1-outerplanar graph). A single vertex is innerplanar (or 0-innerplanar). A {$k$}-innerplanar graph is a planar graph obtained from a {$(k-1)$}-innerplanar graph {$G$} by adding a simple cycle {$C$} and connecting vertices of {$C$} to vertices on the boundary of {$G$}. A planar graph is {$k$}-outerplanar if deleting the vertices on the boundary of the graph yields a {$(k-1)$}-outerplanar graph. Note that a {$(k+1)$}-innerplanar graph is {$k$}-outerplanar, but not vice versa.

Consider a {$k$}-innerplanar graph {$G$} with vertices labelled according to their level in the above definition, with the boundary vertices labelled {$k$}. We add edges between vertices with labels {$i$} and {$i+1$} to {$G$} while maintaining planarity (and so also maintaining {$k$}-innerplanarity) so that each vertex with label {$i > 0$} has a neighbour with label {$i-1$}. Call the resulting graph {$\bar G$}.

Claim: {$\bar G$} has a spanning tree {$T$} such that the longest simple path in {$T$} has at most {$2k+1$} nodes.

Proof: For each vertex {$x$} with label {$\ell(x) > 0$}, let {$n(x)$} be a neighbour of {$x$} labelled {$\ell(x)-1$}. The set of edges {$T = \cup_{x:\ell(x) > 0} (x,n(x))$} forms a spanning tree. Each node {$x$} is at most {$\ell(x)$} edges away from the unique node of label 0 in this tree, so the longest simple path in this tree has at most {$2k+1$} nodes.
Without dwelling too much on the details of planar graphs, we use the following:

[Figure: A planar graph (black/grey) and its dual (red) with primal and dual spanning trees (dark).]

Theorem: The set of non-tree edges forms a spanning tree of the dual graph of {$\bar G$}, whose vertices are the faces of {$\bar G$} with vertex adjacencies inherited from face adjacencies.

We use this dual spanning tree to guide a dynamic program that solves maximum-weight independent set optimally. To keep the dynamic program as simple as possible, we add further edges to triangulate {$\bar G$}, resulting in a dual graph and dual spanning tree, {$T^*$}, of degree 3. We add an artificial root node connected to the node of {$T^*$} corresponding to the outer face of {$\bar G$}.

## The dynamic program

For a non-tree edge {$e$}, let {$C_e$} be the elementary cycle formed by {$e$} and the path in {$T$} between the endpoints of {$e$}. Let {$G[C_e]$} be the subgraph of {$G$} that is enclosed by {$C_e$}. We create a dynamic programming table {$DP_e$} that is indexed by every independent set {$S$} of vertices of {$C_e$}. {$DP_e[S]$} is the weight of the maximum-weight independent set of {$G[C_e]$} that includes {$S$}. For subsets {$S' \subseteq V(C_e)$} that are not independent, we set {$DP_e[S'] = -\infty$} to mark them as infeasible.

The dynamic program fills in the tables in a leaf-to-root order. For a leaf edge {$e$}, {$C_e$} is a cycle of length 3, and an independent set of vertices of {$C_e$} can contain at most one vertex. {$DP_e[S]$} is trivial to compute for leaf edges.

[Figure: A non-leaf, dual-tree edge e and the child edges considered by the dynamic program.]

A non-leaf edge {$e$} has at most two child edges {$e_L$} and {$e_R$}. We only detail the case when {$e$} has exactly two child edges; the case of one child edge is simpler. {$C_{e_L}$} and {$C_{e_R}$} are enclosed by {$C_e$}.
In fact, it is easy to show that these cycles take the form {$C_e = e \cup P_L \cup P_R,\, C_{e_L} = e_L \cup P_L \cup P_C,\, C_{e_R} = e_R \cup P_C \cup P_R$}, where {$P_L,\, P_R,\, P_C$} are tree paths from a common node {$x$} to the endpoints of {$e_L$} and {$e_R$}. To populate the entries of the table {$DP_e$}, we consider combining every pair of entries in the child tables {$DP_{e_L}[S_L], DP_{e_R}[S_R]$}. Formally, we populate the entries of the table {$DP_e$} as follows:

{$DP_e[S] = -\infty\mbox{, for every }S \subseteq V(C_e)$} % initialize the table entries to indicate infeasibility
{$\mbox{for every } S_L \subseteq V(C_{e_L})$}
{$\mbox{for every } S_R \subseteq V(C_{e_R})$}
{$\mbox{if }S_L \cap V(P_C) = S_R \cap V(P_C)$} % if the two subsolutions are consistent with each other
% find the set of vertices that are in e's elementary cycle
{$\mbox{if }S = S_L \cup S_R \setminus V(P_C)\mbox{ is independent in the original graph}$} % original graph is without the edges added to triangulate, etc.
% note it is sufficient to check whether e's endpoints are both in S when e is in the original graph
% take this solution if it is better than any already computed:
{$DP_e[S] = \max\{DP_e[S], DP_{e_L}[S_L]+DP_{e_R}[S_R]-w(S_L \cap V(P_C))\}$}

The root edge {$e_0$} of the dual tree is a convenient representation of the entire graph; the value of the optimal solution is given by {$\max_{S \subseteq V} DP_{e_0}[S]$}. The usual techniques can be used to compute the independent set corresponding to this value.

## Running time

By the above claim, the longest path in {$T$} has at most {$2k+1$} nodes. So, the size of any vertex set that indexes the dynamic programming tables is also at most {$2k+1$}; it follows that the number of entries in {$DP_e$} is at most {$2^{2k+1}$}. One iteration of the innermost loop takes O(k) time. Since the dual tree has a linear number of nodes, the entire dynamic program takes time {$O(k4^kn)$}.

# Questions

1.
How would you modify the above algorithm to give a {$(1+\epsilon)$}-approximation for the minimum-cost vertex cover problem? You may assume that minimum-cost vertex cover can be solved in {$2^{O(tw)} n$} time in graphs of treewidth {$tw$}. Be careful: how do you choose the layers so that you are guaranteed to find a feasible solution? 2. Give a dynamic program for solving the minimum-cost vertex cover problem in {$k$}-innerplanar graphs. [^#^]
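The top-level structure of INDEPENDENT-SET is easy to sketch in Python. In this sketch the vertex labels (outerplanarity levels) are assumed to be given, and a brute-force search stands in for the {$2^{O(k)}n$} dynamic program on each layer; names and data layout are illustrative, not from Baker's paper:

```python
from itertools import combinations

def mwis_bruteforce(vertices, edges, w):
    # exponential-time stand-in for the treewidth dynamic program:
    # exhaustively search all subsets for the heaviest independent set
    best, best_w = set(), 0
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if all(not (u in s and v in s) for u, v in edges):
                cw = sum(w[v] for v in s)
                if cw > best_w:
                    best, best_w = s, cw
    return best, best_w

def baker_mwis(vertices, edges, w, label, k):
    # label[v] is the outerplanarity level of v (assumed precomputed);
    # try deleting each residue class V_ell and keep the best solution found
    best, best_w = set(), 0
    for ell in range(k):
        keep = [v for v in vertices if label[v] % k != ell]
        sub = [(u, v) for u, v in edges if u in keep and v in keep]
        s, sw = mwis_bruteforce(keep, sub, w)
        if sw > best_w:
            best, best_w = s, sw
    return best, best_w
```

Deleting one residue class and solving the remainder as a whole is equivalent to taking the union of optimal solutions over its components, since distinct components share no edges.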
# scipy.signal.freqz_zpk

scipy.signal.freqz_zpk(z, p, k, worN=None, whole=False)[source]

Compute the frequency response of a digital filter in ZPK form.

Given the Zeros, Poles and Gain of a digital filter, compute its frequency response:

$$H(z) = k \prod_i (z - Z[i]) / \prod_j (z - P[j])$$

where $$k$$ is the gain, $$Z$$ are the zeros and $$P$$ are the poles.

Parameters:

z : array_like
    Zeroes of a linear filter
p : array_like
    Poles of a linear filter
k : scalar
    Gain of a linear filter
worN : {None, int, array_like}, optional
    If None (default), then compute at 512 frequencies equally spaced around the unit circle. If a single integer, then compute at that many frequencies. If an array_like, compute the response at the frequencies given (in radians/sample).
whole : bool, optional
    Normally, frequencies are computed from 0 to the Nyquist frequency, pi radians/sample (upper half of the unit circle). If whole is True, compute frequencies from 0 to 2*pi radians/sample.

Returns:

w : ndarray
    The normalized frequencies at which h was computed, in radians/sample.
h : ndarray
    The frequency response.

See also

freqs : Compute the frequency response of an analog filter in TF form
freqs_zpk : Compute the frequency response of an analog filter in ZPK form
freqz : Compute the frequency response of a digital filter in TF form

Examples

>>> import numpy as np
>>> from scipy import signal
>>> z, p, k = signal.butter(4, 0.2, output='zpk')
>>> w, h = signal.freqz_zpk(z, p, k)
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax1 = fig.add_subplot(111)
>>> plt.title('Digital filter frequency response')
>>> plt.plot(w, 20 * np.log10(abs(h)), 'b')
>>> plt.ylabel('Amplitude [dB]', color='b')
>>> ax2 = ax1.twinx()
>>> angles = np.unwrap(np.angle(h))
>>> plt.plot(w, angles, 'g')
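The formula above is straightforward to evaluate directly. The following pure-Python sketch (independent of SciPy; function name is illustrative) computes H at given angular frequencies, which can be useful for sanity-checking a ZPK design without the library:

```python
import cmath

def freqz_zpk_naive(zeros, poles, gain, w):
    # evaluate H(e^{jw}) = gain * prod_i (e^{jw} - Z[i]) / prod_j (e^{jw} - P[j])
    # at each angular frequency in w (radians/sample)
    h = []
    for omega in w:
        zv = cmath.exp(1j * omega)
        num = complex(gain)
        for z0 in zeros:
            num *= zv - z0
        den = complex(1.0)
        for p0 in poles:
            den *= zv - p0
        h.append(num / den)
    return h
```

For example, with a single zero at 0, a single pole at 0.5, and gain 2, the DC response is 2·(1−0)/(1−0.5) = 4.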
# Two Bodies of Masses M1 and M2 and Specific Heat Capacities S1 and S2 Are Connected by a Rod of Length L, Cross-sectional Area A, Thermal Conductivity K and Negligible Heat - Physics

Sum

Two bodies of masses m1 and m2 and specific heat capacities s1 and s2 are connected by a rod of length l, cross-sectional area A, thermal conductivity K and negligible heat capacity. The whole system is thermally insulated. At time t = 0, the temperature of the first body is T1 and the temperature of the second body is T2 (T2 > T1). Find the temperature difference between the two bodies at time t.

#### Solution

The rate of heat transfer through the rod is given by

(DeltaQ)/(Deltat) = (KA(T_2 - T_1))/l

so the heat transferred through the rod in time Δt is

DeltaQ = (KA(T_2 - T_1))/l Deltat ............(1)

The heat lost by the body at temperature T2 equals the heat gained by the body at temperature T1. The heat lost by the body at temperature T2 in time Δt is

DeltaQ = m_2s_2(T_2 - T_2') ............(2)

From equations (1) and (2),

m_2s_2(T_2 - T_2') = (KA(T_2 - T_1))/l Deltat

⇒ T_2' = T_2 - (KA(T_2 - T_1))/(l m_2s_2) Deltat

This gives the fall in temperature of the body at temperature T2. Similarly, the rise in temperature of the body at temperature T1 is given by

T_1' = T_1 + (KA(T_2 - T_1))/(l m_1s_1) Deltat

The change in the temperature difference is therefore

(T_2' - T_1') - (T_2 - T_1) = - (KA(T_2 - T_1))/l [1/(m_1s_1) + 1/(m_2s_2)] Deltat

Writing ΔT for this change in the difference,

DeltaT = - (KA(T_2 - T_1))/l [(m_1s_1 + m_2s_2)/(m_1s_1m_2s_2)] Deltat

⇒ 1/(T_2 - T_1) DeltaT = - (KA)/l [(m_1s_1 + m_2s_2)/(m_1s_1m_2s_2)] Deltat

Taking the limit Δt → 0 and integrating both sides from 0 to t,

int 1/(T_2 - T_1) d(T_2 - T_1) = - int (KA)/l [(m_1s_1 + m_2s_2)/(m_1s_1m_2s_2)] dt

⇒ ln [(T_2 - T_1)_t/(T_2 - T_1)] = - (KA)/l [(m_1s_1 + m_2s_2)/(m_1s_1m_2s_2)] t

⇒ (T_2 - T_1)_t = (T_2 - T_1) e^(-lamda t)

Here, lamda = (KA)/l [(m_1s_1 + m_2s_2)/(m_1s_1m_2s_2)], and T1, T2 are the temperatures at t = 0.

Concept: Thermal Expansion of Solids

#### APPEARS IN

HC Verma Class 11, Class 12 Concepts of Physics Vol. 2 Chapter 6 Heat Transfer Q 37 | Page 101
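The final expression is simple to evaluate numerically; a sketch (function and argument names are illustrative) in Python:

```python
import math

def temperature_difference(T1, T2, K, A, l, m1, s1, m2, s2, t):
    # (T2 - T1)(t) = (T2 - T1)(0) * exp(-lambda * t), with
    # lambda = K*A*(m1*s1 + m2*s2) / (l * m1*s1 * m2*s2)
    lam = K * A * (m1 * s1 + m2 * s2) / (l * m1 * s1 * m2 * s2)
    return (T2 - T1) * math.exp(-lam * t)
```

At t = 0 the function returns the initial difference T2 − T1, and the difference decays exponentially thereafter, as expected physically.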
Q: If a body weighs 12N on the surface of the earth, how much will it weigh on the surface of the moon, where the acceleration due to gravity is only one-sixth of that on the earth's surface?

A) 12N B) 2N C) 10N D) 6N

Answer: B) 2N

Explanation: Weight is mass × g. Since mass is constant and g on the moon is only one-sixth of that on the earth, the weight on the moon is only one-sixth of that on the earth. Therefore 2N, i.e. (b), is the answer.

Subject: Physics Exam Prep: AIEEE

Q: Low voltage cables are meant for use up to

A) 3.3 kV B) 1.1 kV C) 11 kV D) 0.5 kV

Answer: B) 1.1 kV

Explanation: Low voltage cables are used to supply power up to 1.1 kV.

Q: A manometer is used to measure

A) earthquake B) pressure C) temperature D) density

Answer: B) pressure

Explanation: A manometer is used to measure a pressure difference between two points.

Q: Mirage is due to

A) Magnetic disturbances in the atmosphere B) Equal heating of different parts of the atmosphere C) Unequal heating of different parts of the atmosphere D) Depletion of ozone layer in the atmosphere

Answer: C) Unequal heating of different parts of the atmosphere

Explanation: A mirage is a naturally occurring optical phenomenon, caused by unequal heating of different parts of the atmosphere, in which light rays bend to produce a displaced image of distant objects or the sky.

Q: Speed regulation of synchronous motor is

A) 1% B) 25% C) Zero D) 0.5%

Answer: C) Zero

Explanation: A synchronous motor runs at a constant speed irrespective of the load attached to it. So the speed regulation of a synchronous motor is zero.

Q: Cryogenic engines find applications in

A) Frost-free refrigerators B) Submarine propulsion C) Researches in superconductivity D) Rocket technology

Answer: D) Rocket technology

Explanation: Cryogenic engines find applications in rocket technology. A cryogenic rocket engine is a rocket engine that uses a cryogenic fuel or oxidizer, that is, its fuel or oxidizer are gases liquefied and stored at very low temperatures. Notably, these engines were one of the main factors in NASA's success in reaching the Moon with the Saturn V rocket.

Q: Which sources produce alternating current (AC)?

A) Solar power plants B) Hydro-electric generators C) Batteries D) All of the above

Answer: B) Hydro-electric generators

Explanation: Alternating current (AC) is current whose direction of flow alternates periodically, whereas direct current (DC) flows in one direction only. Some of the sources of AC current are: 1. Hydro-electric generators 2. Thermal power generators 3. Nuclear power generators...

Q: If velocity is constant then acceleration is what?

A) constant B) increasing C) 0 D) infinite

Answer: C) 0

Explanation: Acceleration is the rate of change of velocity. If an object moves with constant velocity, neither the magnitude nor the direction of its velocity changes, so the acceleration is 0.

Q: A fixture is defined as a device which

A) is used to check the accuracy of workpiece B) holds and locates a workpiece and guides and controls one or more cutting tools C) holds and locates a workpiece during an inspection or for a manufacturing operation D) All of the above

Answer: B) holds and locates a workpiece and guides and controls one or more cutting tools

Explanation: A fixture is defined as a device which holds and locates a workpiece and guides and controls one or more cutting tools.
## Elementary Technical Mathematics

Published by Brooks Cole

# Chapter 6 - Section 6.5 - Translating Words into Algebraic Symbols - Exercises - Page 248: 20

#### Answer

$$2(x-6)=30$$

#### Work Step by Step

We can call our unknown number "$x$". The difference between a number and 6 can be written as $x-6$, and doubling that would give $2(x-6)$. (Remember to add the parentheses.) So, $2(x-6)=30$
Leaders use $Commander$ and $Scout$ subprocesses to run the Synod protocol. As we shall see, the following invariants hold in the Synod protocol: • L1: For any ballot $b$ and slot $s$, at most one command $c$ is selected and at most one commander for $\langle b, s, c \rangle$ is spawned. • L2: Suppose that for each $\alpha$ among a majority of acceptors $\langle b, s, c \rangle \in \alpha.accepted$. If $b' > b$ and a commander is spawned for $\langle b', s, c' \rangle$, then $c = c'$. Invariant L1 implies Invariant A4, because by L1 all acceptors that accept a pvalue for a particular ballot and slot number received the pvalue from the same commander. Similarly, Invariant L2 implies Invariant A5. ###### Commander A leader may work on multiple slots at the same time. For each such slot, the leader selects a command and spawns a new process that we call a commander. While we present it as a separate process, the commander is really just a thread running within the leader. The commander runs what is known as $phase$ $2$ of the Synod protocol. 
Below you can find the pseudo-code for a Commander: $\texttt{process} ~ \textit{Commander}(\lambda, \textit{acceptors}, \textit{replicas}, \langle b, s, c \rangle)$ $\texttt{var} ~ \textit{waitfor} := \textit{acceptors}$; $\forall \alpha \in \textit{acceptors}: \textit{send}(\alpha, \langle \textbf{p2a}, \textit{self}(), \langle b, s, c \rangle \rangle)$; $\texttt{for ever}$ $\texttt{switch} ~ \textit{receive}()$ $\texttt{case} ~ \langle \textbf{p2b}, \alpha, b' \rangle:$ $\texttt{if} ~ b' = b ~ \texttt{then}$ $\textit{waitfor} := \textit{waitfor} - \{ \alpha \}$; $\texttt{if} ~ |\textit{waitfor}| < |\textit{acceptors}| / 2 ~ \texttt{then}$ $\forall \rho \in \textit{replicas}:$ $\textit{send}(\rho, \langle \textbf{decision}, s, c \rangle$); $\textit{exit}()$; $\texttt{end if}$ $\texttt{else}$ $\textit{send}(\lambda, \langle \textbf{preempted}, b' \rangle)$; $\textit{exit}()$; $\texttt{end if}$ $\texttt{end case}$ $\texttt{end switch}$ $\texttt{end for}$ $\texttt{end process}$ A commander sends a $\langle \textbf{p2a}, \lambda, \langle b, s, c \rangle \rangle$ message to all acceptors, and waits for responses of the form $\langle \textbf{p2b}, \alpha, b' \rangle$. In each such response $b' \ge b$ will hold. There are two cases: • If a commander receives $\langle \textbf{p2b}, \alpha, b \rangle$ from all acceptors in a majority of acceptors, then the commander learns that command $c$ has been chosen for slot $s$. In this case, the commander notifies all replicas and exits. To satisfy Invariant R1, we need to enforce that if a commander learns that $c$ is chosen for slot $s$, and another commander learns that $c'$ is chosen for the same slot $s$, then $c = c'$. This is a consequence of Invariant A5: if a majority of acceptors accept $\langle b, s, c \rangle$, then for any later ballot $b'$ and the same slot number $s$, acceptors can only accept $\langle b', s, c \rangle$. 
Thus if the commander of $\langle b', s, c' \rangle$ learns that $c'$ has been chosen for $s$, it is guaranteed that $c = c'$ and no inconsistency occurs, assuming---of course---that Invariant L2 holds. • If a commander receives $\langle \textbf{p2b}, \alpha', b' \rangle$ from some acceptor $\alpha'$, with $b' \ne b$, then it learns that a ballot $b'$, which must be larger than $b$ as guaranteed by acceptors, is active. This means that ballot $b$ may no longer be able to make progress, as there may no longer exist a majority of acceptors that can accept $\langle b, s, c \rangle$. In this case, the commander notifies its leader about the existence of $b'$, and exits. Under the assumptions that at most a minority of acceptors can crash, that messages are delivered reliably, and that the commander does not crash, the commander will eventually do one or the other. The leader must enforce Invariants L1 and L2. Because there is only one leader per ballot, Invariant L1 is trivial to enforce by the leader not spawning more than one commander per ballot number and slot number. To enforce Invariant L2, the leader runs what is often called $phase$ $1$ of the Synod protocol or a $view$ $change$ protocol for some ballot before spawning commanders for that ballot. The leader spawns a $scout$ thread to run the view change protocol for some ballot $b$. A leader starts at most one of these for any ballot $b$, and only for its own ballots. ###### Scout Below you can find the pseudo-code for a scout. The code is similar to that of a commander, except that it sends and receives phase 1 messages instead of phase 2 messages. 
$\texttt{process} ~ \textit{Scout}(\lambda, \textit{acceptors}, b)$ $\texttt{var} ~ \textit{waitfor} := \textit{acceptors}, ~ pvalues := \emptyset$; $\forall \alpha \in \textit{acceptors}: \textit{send}(\alpha,\langle \textbf{p1a}, self(), b \rangle)$; $\texttt{for ever}$ $\texttt{switch} ~ \textit{receive}()$ $\texttt{case} ~ \langle \textbf{p1b}, \alpha, b', r \rangle:$ $\texttt{if} ~ b' = b ~ \texttt{then}$ $pvalues := pvalues \cup r$; $\textit{waitfor} := \textit{waitfor} - \{ \alpha \}$; $\texttt{if} ~ |\textit{waitfor}| < |\textit{acceptors}| / 2 ~ \texttt{then}$ $\textit{send}(\lambda, \langle \textbf{adopted}, b, pvalues \rangle)$; $\textit{exit}()$; $\texttt{end if}$ $\texttt{else}$ $\textit{send}(\lambda, \langle \textbf{preempted}, b' \rangle)$; $\textit{exit}()$; $\texttt{end if}$ $\texttt{end case}$ $\texttt{end switch}$ $\texttt{end for}$ $\texttt{end process}$ A scout completes successfully when it has collected $\langle \textbf{p1b}, \alpha, b, r_\alpha \rangle$ messages from all acceptors in a majority, and returns an $\langle \textbf{adopted}, b, \bigcup r_\alpha \rangle$ message to its leader $\lambda$. As we will see later, the leader uses $\bigcup r_\alpha$, the union of all pvalues accepted by this majority of acceptors, in order to enforce Invariant L2. Leader $\lambda$ maintains three state variables: • $\lambda.ballot\_num$: a monotonically increasing ballot number, initially $(0, \lambda)$. • $\lambda.\textit{active}$: a boolean flag, initially $\texttt{false}$. • $\lambda.\textit{proposals}$: a map of slot numbers to proposed commands in the form of a set of $\langle \textit{slot number}, \textit{command} \,\rangle$ pairs, initially empty. At any time, there is at most one entry per slot number in the set. 
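The function $\textit{pmax}$ and the update operator $\lhd$ used in the $\textbf{adopted}$ case below are not spelled out in this excerpt. A Python sketch of their usual definitions, treating pvalues as (ballot, slot, command) triples with ballots as comparable pairs (an illustration consistent with the pseudo-code, not the original source's code):

```python
def pmax(pvals):
    # for each slot, keep the command of the pvalue with the highest ballot;
    # this is the command that must be re-proposed to satisfy Invariant L2
    best = {}
    for b, s, c in pvals:
        if s not in best or b > best[s][0]:
            best[s] = (b, c)
    return {(s, c) for s, (b, c) in best.items()}

def update(x, y):
    # x <| y: entries of y take precedence over entries of x
    # for the same slot number
    slots_in_y = {s for s, c in y}
    return y | {(s, c) for s, c in x if s not in slots_in_y}
```

With these definitions, `proposals = update(proposals, pmax(pvals))` replaces the leader's own proposal for a slot whenever some acceptor has already accepted a pvalue for that slot.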
Below you can find the pseudo-code for a Leader:

$\texttt{process} ~ \textit{Leader}(\textit{acceptors}, \textit{replicas})$
$~~\texttt{var} ~ ballot\_num = (0, self()), \textit{active} = \texttt{false},\textit{proposals} = \emptyset$;
$~~\textit{spawn}(\textit{Scout}(self(), \textit{acceptors}, ballot\_num))$;
$~~\texttt{for ever}$
$~~~~\texttt{switch} ~ \textit{receive}()$
$~~~~~~\texttt{case} ~ \langle \textbf{propose}, s, c \rangle:$
$~~~~~~~~\texttt{if} ~\not\exists c' : \langle s, c' \rangle \in \textit{proposals} ~ \texttt{then}$
$~~~~~~~~~~\textit{proposals} := \textit{proposals} \cup \{ \langle s, c \rangle \}$;
$~~~~~~~~~~\texttt{if} ~ \textit{active} ~ \texttt{then}$
$~~~~~~~~~~~~\textit{spawn}(\textit{Commander}(self(), \textit{acceptors}, \textit{replicas}, \langle ballot\_num, s, c \rangle))$;
$~~~~~~~~~~\texttt{end if}$
$~~~~~~~~\texttt{end if}$
$~~~~~~\texttt{end case}$
$~~~~~~\texttt{case} ~ \langle \textbf{adopted}, ballot\_num, \textit{pvals} \rangle:$
$~~~~~~~~\textit{proposals} := \textit{proposals} \lhd \textit{pmax}(\textit{pvals})$;
$~~~~~~~~\forall \langle s, c \rangle \in \textit{proposals}:$
$~~~~~~~~~~\textit{spawn}(\textit{Commander}(self(), \textit{acceptors}, \textit{replicas}, \langle ballot\_num, s, c \rangle))$;
$~~~~~~~~\textit{active} := \texttt{true}$;
$~~~~~~\texttt{end case}$
$~~~~~~\texttt{case} ~ \langle \textbf{preempted}, \langle r', {\lambda'} \rangle \rangle:$
$~~~~~~~~\texttt{if} ~ (r', {\lambda'}) > ballot\_num ~ \texttt{then}$
$~~~~~~~~~~\textit{active} := \texttt{false}$;
$~~~~~~~~~~ballot\_num := (r' + 1, self())$;
$~~~~~~~~~~\textit{spawn}(\textit{Scout}(self(), \textit{acceptors}, ballot\_num))$;
$~~~~~~~~\texttt{end if}$
$~~~~~~\texttt{end case}$
$~~~~\texttt{end switch}$
$~~\texttt{end for}$
$\texttt{end process}$

The leader starts by spawning a scout for its initial ballot number, and then enters into a loop awaiting messages. There are three types of messages that cause transitions:

• $\langle \textbf{propose}, s, c \rangle$: A replica proposes command $c$ for slot number $s$.
• $\langle \textbf{adopted}, ballot\_num, \textit{pvals} \rangle$: Sent by a scout, this message signifies that the current ballot number $ballot\_num$ has been adopted by a majority of acceptors. If an $\textbf{adopted}$ message arrives for an old ballot number, it is ignored. The set $\textit{pvals}$ contains all pvalues accepted by these acceptors prior to $ballot\_num$.
• $\langle \textbf{preempted}, \langle r', {\lambda'} \rangle \rangle$: Sent by either a scout or a commander, it means that some acceptor has adopted $\langle r', {\lambda'} \rangle$. If $\langle r', {\lambda'} \rangle > ballot\_num$, it may no longer be possible to use ballot $ballot\_num$ to choose a command.

A leader alternates between $passive$ and $active$ modes. When passive, the leader is waiting for an $\langle \textbf{adopted}, ballot\_num, \textit{pvals} \rangle$ message from the last scout that it spawned. When this message arrives, the leader becomes active and spawns commanders for each of the slots for which it has a proposed command, but it must select commands that satisfy Invariant L2. We will now consider how the leader goes about this.

When active, the leader knows that a majority of acceptors, say $\cal A$, have adopted $ballot\_num$ and thus no longer accept pvalues for ballot numbers less than $ballot\_num$, because of Invariants A1 and A2. In addition, it has all pvalues accepted by the acceptors in $\cal A$ prior to $ballot\_num$. The leader uses these pvalues to update its own $\textit{proposals}$ variable. There are two cases to consider:

• If, for some slot $s$, there is no pvalue in $\textit{pvals}$, then, prior to $ballot\_num$, it is not possible that any pvalue has been chosen or will be chosen for slot $s$. After all, suppose that some pvalue $\langle b, s, c \rangle$ were chosen, with $b < ballot\_num$.
This would require a majority of acceptors ${\cal A}'$ to accept $\langle b, s, c \rangle$, but we have responses from a majority ${\cal A}$ that have adopted $ballot\_num$ and have not accepted, nor can accept, pvalues with a ballot number smaller than $ballot\_num$, by Invariants A1 and A2. Because both ${\cal A}$ and ${\cal A}'$ are majorities, ${\cal A} \cap {\cal A}'$ is non-empty---some acceptor in the intersection must have violated Invariant A1, A2, or A3, which we assume cannot happen. Because no pvalue has been or will be chosen for slot $s$ prior to $ballot\_num$, the leader can propose any command for that slot without causing a conflict on an earlier ballot, thus enforcing Invariant L2.

• Otherwise, let $\langle b, s, c \rangle$ be the pvalue with the maximum ballot number for slot $s$. Because of Invariant A4, this pvalue is unique---there cannot be two different commands for the same ballot number and slot number. Also note that $b < ballot\_num$, because acceptors only report pvalues they accepted before adopting $ballot\_num$. Like the leader of $ballot\_num$, the leader of $b$ must have picked $c$ carefully to ensure that Invariant L2 holds, and thus if a pvalue is chosen before or at $b$, the command it contains must be $c$. Since all acceptors in $\cal A$ have adopted $ballot\_num$, no pvalues between $b$ and $ballot\_num$ can be chosen, by Invariants A1 and A2. Thus, by using $c$ as a command, $\lambda$ enforces Invariant L2.

This inductive argument is the crux of the correctness of the Synod protocol. It demonstrates that Invariant L2 holds, which in turn implies Invariant A5, which in turn implies Invariant R1, which ensures that all replicas apply the same operations in the same order.

Back to the code: after the leader receives $\langle \textbf{adopted}, ballot\_num, \textit{pvals} \rangle$, it determines for each slot the command corresponding to the maximum ballot number in $\textit{pvals}$ by invoking the function $\textit{pmax}$.
Formally, the function $\textit{pmax}(\textit{pvals})$ is defined as follows:

$\textit{pmax}(\textit{pvals}) \equiv \{ \langle s, c \rangle ~|~ \exists b: \langle b, s, c \rangle \in \textit{pvals} ~\wedge \\ ~~~~~~~~~~\forall b', c': \langle b', s, c' \rangle \in \textit{pvals} \Rightarrow b' \le b ~\}$

The update operator $\lhd$ applies to two sets of proposals. $x \lhd y$ returns the elements of $y$, as well as the elements of $x$ whose slot number does not appear in $y$. Formally:

$x \lhd y \equiv \{ \langle s, c \rangle ~|~ \langle s, c \rangle \in y ~\vee \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~ (\langle s, c \rangle \in x ~ \wedge \not\exists c': \langle s, c' \rangle \in y) \}$

Thus the line

$\textit{proposals} := \textit{proposals} \lhd \textit{pmax}(\textit{pvals});$

updates the set of proposals, replacing for each slot number the command corresponding to the maximum pvalue in $\textit{pvals}$, if any. Now the leader can start commanders for each slot while satisfying Invariant L2.

If a new proposal arrives while the leader is active, the leader checks to see if it already has a proposal for the same slot (and has thus spawned a commander for that slot) in its set $\textit{proposals}$. If not, the new proposal will satisfy Invariant L2, and thus the leader adds the proposal to $\textit{proposals}$ and spawns a commander.

If either a scout or a commander detects that an acceptor has adopted a ballot number $b$, with $b > ballot\_num$, then it sends the leader a $\texttt{preempted}$ message. The leader becomes passive and spawns a new scout with a ballot number that is higher than $b$.

Below is an example of a leader $\lambda$ spawning a scout to become active, and a client $\kappa$ sending a request to two replicas $\rho_1$ and $\rho_2$, which in turn send proposals to $\lambda$.
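The two definitions of $\textit{pmax}$ and $\lhd$ translate almost line-for-line into executable code. The Python sketch below is illustrative: the name `update` stands in for the $\lhd$ operator, and ballots are represented as comparable $(round, leader)$ tuples, matching the $(0, \lambda)$ form used earlier.

```python
def pmax(pvals):
    """For each slot, keep only the command of the pvalue with the highest
    ballot number. pvals: set of (ballot, slot, command) triples.
    Returns a set of (slot, command) pairs."""
    best = {}  # slot -> (ballot, command) with the maximum ballot seen so far
    for ballot, slot, command in pvals:
        if slot not in best or ballot > best[slot][0]:
            best[slot] = (ballot, command)
    return {(slot, cmd) for slot, (_, cmd) in best.items()}

def update(x, y):
    """The update operator x <| y: take every proposal in y, plus those
    proposals in x whose slot number is not mentioned in y. Arguments and
    result are sets of (slot, command) pairs."""
    slots_in_y = {slot for slot, _ in y}
    return y | {(slot, cmd) for slot, cmd in x if slot not in slots_in_y}
```

For example, given pvalues $\{\langle (1,\lambda_1), 1, A \rangle, \langle (2,\lambda_2), 1, B \rangle, \langle (1,\lambda_1), 2, C \rangle\}$, `pmax` keeps $B$ for slot 1 and $C$ for slot 2; applying `update` to a proposals set containing entries for slots 1 and 3 then replaces the slot-1 proposal but leaves the slot-3 proposal intact, exactly as $\textit{proposals} \lhd \textit{pmax}(\textit{pvals})$ requires.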
# Question about the Smallest Grammar problem.

Is the problem to prove whether or not there exists an algorithm with running time polynomial in the length of the input string $|s|$, or polynomial both in $|s|$ and the size of the alphabet $|A|$? The papers I'm looking at assume that you know which one they mean.

Edit: Paper -

Well, the alphabet can trivially be taken to be the set of symbols found in the input string, or you could fix it as $\{0,1\}$. I assume that one would allow any number of nonterminals in the grammar; more than $|s|$ of them will never be optimal anyway. – Henning Makholm Apr 17 '12 at 19:11