content
stringlengths
86
994k
meta
stringlengths
288
619
About control of guaranteed estimation The control problem by parameters in the course of the guaranteed state estimation of linear non-stationary systems is considered. It is supposed that unknown disturbances in the system and the observation channel are limited by norm in the space of square integrable functions and the initial state of the system is also unknown. The process of guaranteed state estimation includes the solution of a matrix Riccati equation that contains some parameters, which may be chosen at any instant of time by the first player (an observer) and the second player (an opponent of the observer). The purposes of players are diametrically opposite: the observer aims to minimize diameter of information set at the end of observation process, and the second player on the contrary aims to maximize it. This problem is interpreted as a differential game with two players for the Riccati equation. All the choosing parameters are limited to compact sets in appropriate spaces of matrices. The payoff of the game is interpreted through the Euclidean norm of the inverse Riccati matrix at the end of the process. A specific case of the problem with constant matrices is considered. Methods of minimax optimization, the theory of optimal control, and the theory of differential games are used. Examples are also given. CYBERNETICS AND PHYSICS, Vol. 7, No. 1, 2018, pp. 18-25. https://doi.org/10.35470/2226-4116-2018-7-1-18-25
{"url":"http://lib.physcon.ru/doc?id=9a30c7ca32d9","timestamp":"2024-11-02T01:56:59Z","content_type":"text/html","content_length":"5248","record_id":"<urn:uuid:d1403217-a0f9-43ba-9e29-e8df4c71ab94>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00642.warc.gz"}
Strange values after using Initial Conditions So the List of my models is: • All y+ Wall treatment • Cell Quality Remediation • Constant Density • Gas (- Air) • Gradients • K-Omega Turbulence • Reynols-Averaged Navier-Stokes • Segregated Flow • SST (Menter) K-Omega • Steady • Thee Dimensional • Turbulent I saw my region as a windtunnel, so I do have a velocity inlet with a constant velocity. e.g.: my simulation was: on 10m/s. so I had the velocity inlet as 10m/s and in continua I had the initial value 10m/s for all cells. Now I extraced the solution of my last simulation. There the velocity wasn't exactly 10m/s everywhere anymore. The outlet is simply a pressure outlet. Should I change the boundary conditions when I use Initial conditions? I mean. I still want the air to flow in 10m/s. Going from steady to unsteady is a matter of changing your physics models from steady to unsteady (probably implicit unsteady in your case). Here you would run the solution out using steady model then when it converges switch over to unsteady. This gives you a flow field that is close to right for your first few time steps rather than have these first few steps take up many, many inner iterations to achieve convergence. If the computation is longer it's not a huge problem. Do you think this can solve the problem? How can I do it? If ramping up the solution does mean a better mesh it is not possible for me now, because I have just a limited RAM for post processing. If its the discritization scheme I gonna read in the documentation about it.
{"url":"https://www.cfd-online.com/Forums/star-ccm/166255-strange-values-after-using-initial-conditions.html","timestamp":"2024-11-08T05:13:03Z","content_type":"application/xhtml+xml","content_length":"105195","record_id":"<urn:uuid:2c7147d2-1ec6-4cec-95c3-554b6de096ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00146.warc.gz"}
Note for the P versus NP Problem Download PDFOpen PDF in browser Note for the P versus NP Problem EasyChair Preprint 11886, version 10 5 pages•Date: April 8, 2024 P versus NP is considered as one of the most fundamental open problems in computer science. This consists in knowing the answer of the following question: Is P equal to NP? It was essentially mentioned in 1955 from a letter written by John Nash to the United States National Security Agency. However, a precise statement of the P versus NP problem was introduced independently by Stephen Cook and Leonid Levin. Since that date, all efforts to find a proof for this problem have failed. Another major complexity class is NP-complete. It is well-known that P is equal to NP under the assumption of the existence of a polynomial time algorithm for some NP-complete. We show that the Monotone Weighted Xor 2-satisfiability problem (MWX2SAT) is NP-complete and P at the same time. Certainly, we make a polynomial time reduction from every directed graph and positive integer k in the K-CLOSURE problem to an instance of MWX2SAT. In this way, we show that MWX2SAT is also an NP-complete problem. Moreover, we create and implement a polynomial time algorithm which decides the instances of MWX2SAT. Consequently, we prove that P = NP. Keyphrases: completeness, complexity classes, computational algorithm, polynomial time, reduction Links: https://easychair.org/publications/preprint/WJ7r Download PDFOpen PDF in browser
{"url":"https://easychair-www.easychair.org/publications/preprint/WJ7r","timestamp":"2024-11-13T05:53:31Z","content_type":"text/html","content_length":"5949","record_id":"<urn:uuid:9a758b30-2a92-4418-87f7-87192aed2dfd>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00278.warc.gz"}
Services on Demand Related links Print version ISSN 0370-3908 Rev. acad. colomb. cienc. exact. fis. nat. vol.35 no.136 Bogotá July/Sept. 2011 STUDY OF SOME VOLUMETRIC PROPERTIES OF GLYCEROL FORMAL + ETHANOL MIXTURES AND CORRELATION WITH THE JOUYBAN-ACREE MODEL Andrés R. Holguín^1,Daniel R. Delgado^1, Fleming Martínez^1* Mehri Khoubnasabjafari^2 Abolghasem Jouyban^3 ^1 Grupo de Investigaciones Farmacéutico-Fisicoquímicas, Departamento de Farmacia, Facultad de Ciencias, Universidad Nacional de Colombia, A.A. 14490, Bogotá, D.C., Colombia. ^* Correspondence: E-mail: fmartinezr@unal.edu.co ^2 Tuberculosis and Lung Disease Research Center, Tabriz University of Medical Sciences, Tabriz, Iran. ^3 Drug Applied Research Center and Faculty of Pharmacy, Tabriz University of Medical Sciences, Tabriz, Iran. Molar volumes, excess molar volumes, and partial molar volumes were investigated for glycerol formal + ethanol mixtures by density measurements at several temperatures. Excess molar volumes are fitted by Redlich-Kister equation and compared with other systems. The system exhibits negative excess volumes probably due to increased H-bond interactions. Volume thermal expansion coefficients are also calculated. The Jouyban-Acree model was used for density and molar volume correlations at different temperatures. The mean relative deviations between experimental and calculated data were 0.03 ± 0.03% and 0.17 ± 0.13%, respectively for density and molar volume data. Also, using a minimum number of data points, the Jouyban-Acree model can predict density and molar volume with acceptable accuracies (0.03 ± 0.03% and 0.15 ± 0.12%, respectively). Key words: glycerol formal; ethanol; binary liquid mixtures; excess volumes; partial volumes; Jouyban-Acree model. En este trabajo se calculan los volúmenes molares, molares de exceso y molares parciales a partir de valores de densidad para el sistema glicerol formal + etanol en todo el intervalo de composición a temperaturas entre 278,15 y 313,15 K. Los volúmenes molares de exceso se modelaron de acuerdo a la ecuación de Redlich-Kister y se compararon con los reportados para otros sistemas. El sistema estudiado presenta volúmenes de exceso negativos probablemente debido a fuertes interacciones por unión de hidrógeno. También se analizó el efecto de la temperatura sobre las diferentes propiedades volumétricas estudiadas. Así mismo se calcularon los coeficientes térmicos de expansión volumétrica. Finalmente se usó el modelo Jouyban-Acree para correlacionar la densidad y el volumen molar de las diferentes mezclas encontrando desviaciones medias relativas de 0,03 ± 0,03% y 0,17 ± 0,13% para densidades y volúmenes molares respectivamente. Palabras clave: glicerol formal; etanol; mezclas líquidas binarias; volúmenes de exceso; modelo de Jouyban-Acree. Non-aqueous solvent mixtures have sometimes been used in human and veterinarian pharmacy in order to increase the solubility of drugs poorly soluble in water during the design of injectable homogeneous dosage forms (Rubino, J.T., 1988). Ethanol and propylene glycol are the cosolvents most used in design nowadays and sometimes have been employed blended (Yalkowsky, S.H., 1999). Glycerol formal is a non-toxic and environmentally-friendly organic solvent (Budavari, S. et al. 2001), miscible with water, ethanol and propylene glycol in all possible compositions and has been widely used as cosolvent for veterinarian formulations such as those containing the antinematodal drug, ivermectin (Lo, P.K.A. et al. 1985; DiPietro, J.A. et al. 1986; Reinemeyer, C.R. & Courtney, C.H., 2001). Glycerol formal is available as the mixture of 5- hydroxy-1,3-dioxane and 4-hydroxymethyl-1,3-dioxolane (60:40) and as the individual isomers (Budavari, S. et al. 2001; Pivnichny, J.V., 1984). The mixtures obtained using these cosolvents are nonideal due to increased interactions between unlike molecules and differences in molar volumes of pure components, which leads to non-additive volumes on mixing (Battino, R., 1971; Kapadi, U.R. et al. 2001). For this reason it is necessary to characterize the volumetric behavior of these binary mixtures as a function of composition and temperature in order to extend the physicochemical information available for liquid mixtures used in pharmacy. This information is useful to represent the intermolecular interactions present in liquid pharmaceutical systems and to facilitate the processes of medicines design at industrial level (Jiménez, J. et al. 2004). In this report, the excess molar volumes and the partial molar volumes of the binary system of glycerol formal + ethanol at various temperatures as well as other volumetric properties are reported. The physicochemical properties reported here were calculated according to several mathematical procedures widely exposed in the literature (Wahab, M.A. et al. 2002; Salas, J.A. et al. 2002; Peralta, R.D. et al. 2003; Resa, J.M. et al. 2004). This work is a continuation of those presented previously about some volumetric properties of glycerol formal + water mixtures (Delgado, D.R. et al. 2011) and glycerol formal + propylene glycol mixtures (Rodríguez, G.A. et al. 2011). In this investigation glycerol formal (5-hydroxy-1,3 - dioxane isomer) from Lambiotte & Cie S.A. was employed and is in agreement with the quality requirements indicated for veterinarian medicinal products. Density and refractive index of glycerol formal (ρ = 1.2214 g·cm^-3 and n[D] = 1.4535 at 298.15 K, respectively) were in good agreement with the values reported for the single 5-hydroxy-1,3 dioxane isomer (ρ[4]^25= 1.2200 g·cm^-3 and n[D]^25= 1.4527) (Budavari, S. et al. 2001). Figure 1 shows the molecular structure of 5-hydroxy- 1,3-dioxane isomer . In the same way, dehydrated ethanol A.R. (Merck, Germany) was also used and is in agreement with the quality requirements indicated for medicinal products indicated in the American Pharmacopeia USP (US Pharmacopeia, 1994). The dehydrated glycerol formal and ethanol employed were maintained over molecular sieve to obtain dry solvents prior to prepare the solvent mixtures. Cosolvent mixtures preparation All glycerol formal + ethanol mixtures were prepared in quantities of 40.00 g by mass using a Ohaus Pioneer TM PA214 analytical balance with sensitivity ± 0.1 mg, in concentrations from 0.05 to 0.95 in mass fraction (varying in 0.05) of glycerol formal, to study 19 mixtures and the two pure solvents. This procedure implies an uncertainty of ± 2 x 10^–5 in mole fraction. The mixtures were allowed to stand in Magni Whirl Blue M or Neslab RTE 10 Digital Plus (Thermo Electron Company) water baths at temperatures from 278.15 K to 313.15 K varying in 5.00 ± 0.05 K for at least 30 minutes prior to density determinations. Density determination This property was determined using a DMA 45 Anton Paar digital density meter connected to a Neslab RTE 10 Digital Plus (Thermo Electron Company) recirculating thermostatic water bath according to a procedure previously described (Martínez, F. et al. 2002). The equipment was calibrated according to Instruction Manual using air and water at the different temperatures studied (Kratky, O. et al. 1980). From density values, all thermodynamic properties were calculated as will be indicate in the next section. Results and discussion In order to define the solvents 1 and 2 in the binary mixtures according to polarity the Hildebrand solubility parameter (δ) of glycerol formal was calculated as 24.8 MPa^1/2 according to procedures described by Barton, A.F.M. (1991), which are presented in Table 1. Accordingly, glycerol formal is a solvent less polar compared with ethanol (δ value is 26.5 MPa^1/2, Barton, A.F.M., 1991). TABLA 1 In Table 2 the composition of glycerol formal + ethanol, in mass (µ[GF]) and mole (x[GF]) fraction, in addition to studied density values at several temperatures, is presented. Figure 2 plotted experimental density data against fraction of glycerol formal and temperature. TABLA 2 In the literature no values are available for this binary solvent system and therefore no direct comparison is possible. Nevertheless, it is important to remember that Pineda, L.M. et al. (2003) and Arias, L.J. et al. (2004) reported density values at 298.15 K for binary mixtures obtained employing material raw without any dehydration process, just as they are used in the pharmaceutical industries. Accordingly, the cosolvents studied by these authors had some low quantities of water, i.e. 0.31 % m/m and 6.52% m/m for glycerol formal and ethanol, respectively. Table 2 shows that in all cases the density increases as the glycerol formal proportion increases in the mixtures and it decreases linearly as the temperature increases. In the other hand, density values decrease as the ethanol proportion increases in the mixtures following concave parabolic trends Molar volumes and excess molar volumes In Table 3 the molar volumes (V^0) for binary mixtures at all studied temperatures are presented which were calculated from Equation (1). TABLA 3 where M[1] and M[2] are the molar masses, for both components respectively (104.10 g·mol^-1 for glycerol formal and 46.07 g·mol^-1 for ethanol, Budavari, S. et al. 2001), x[1] and x[2] are the respective mole fraction of components, and ρ is the mixture density. Figure 3 shows the molar volume as a function of mixtures composition and temperature. On the other hand, the excess molar volumes (V^0-E) calculated from Equation (2) (where, ρ1 and ρ2 are the densities of pure components) at all studied temperatures, are also presented in Table 3. This behavior is shown graphically in Figure 4 at all studied temperatures. In similar way to the behavior obtained in other similar investigations developed in our reseach group with other solvent systems (Jiménez, J. et al. 2004; Jiménez, J. & Martínez, F. 2005, 2006; Ruidiaz, M.A. & Martínez, F., 2009; Rodríguez, S.J. et al. 2010), in almost all cases the excess volumes are negative (especially around 0.60-0.70 in mole fraction of glycerol formal, where it is approximately equal to –0.60 cm^3·mol^-1 at 313.15 K) indicating contraction in volume, except at 278.15 and 283.15 K in the mixture with composition 0.05 in mass fraction of glycerol formal where positive values near to 0.03 cm^3·mol^-1 were obtained. It is interesting to note that glycerol formal + water mixtures exhibited negative excess volumes (Delgado, D.R. et al. 2011) whereas glycerol formal + propylene glycol exhibited positive excess volumes (Rodríguez, G.A. et al. 2011). As was already said (Jiménez, J & Martínez, F., 2005, 2006; Delgado, D.R. et al. 2011), according to Fort, R.T. & Moore, W.R. (1966), a negative excess volume is an indication of strong heteromolecular interactions in the liquid mixtures and is attributed to charge transfer, dipoledipole, dipole-induced dipole interactions, and hydrogen bonding between the unlike components, while a positive sign indicates a weak interaction and is attributed to dispersion forces (London interactions) which are likely to be operative in every cases. In the evaluated system, where the hydrogen bonding predominates, the contraction in volume has been interpreted basically in qualitative terms considering the following events: first: expansion due to depolymerization of glycerol formal and ethanol by one another, second: contraction due to free volume difference of unlike molecules, and third: contraction due to hydrogen bond formation between glycerol formal and ethanol through –OH---O< or –OH---OH bonding. Thus, the large negative values of V^0–E over the free volume contribution indicate the presence of strong specific interactions with predominance of formation of hydrogen bonds between glycerol formal and ethanol over the rupture of hydrogen bonding in ethanol-ethanol and water-water. The excess molar volumes become more positive as the temperature although this result is not clear at molecular level. Partial molar volumes The partial specific volumes of glycerol formal ^0[GF] and ethanol ^0[EtOH] were calculated using the classical Bakhuis-Roozeboom method by means of equations (3) and (4) applied to the variation of the respective specific volumes as a function of glycerol formal mass fraction (Reciprocal of densities reported in Table 3 and presented in Figure 5 at four temperatures) and adjusting them to second degree polynomials by least squares regression analyses (Kestin, J., 1979; Perrot, P., 1998). The first derivatives were taken out on the polynomials obtained and solved at each composition The partial molar volumes were calculated from the respective partial specific volumes multiplied by the molar masses. The ^0[GF] and ^0[EtOH] values are also presented in Table 3 in addition to the slopes obtained (dV/dµ[GF]) at each composition and temperature. In all cases the partial molar volumes of glycerol formal are lower than those obtained for the pure solvent at all temperatures. In the other hand, the partial molar volumes of ethanol are greater than those for the pure solvent in the mixtures where this cosolvent is in great proportion (0.00 < µ[GF] < 0.30) but they are lower in the other mixtures (0.30 < µ[GF] < 1.00). In cosolvent mixtures, the partial volumes for glycerol formal varied from 81.77 cm3·mol^-1 (for µ[GF] = 0.05 at 278.15 K) to 86.10 cm3·mol^-1 (for µ[GF] = 0.95 at 313.15 K), and for ethanol varied from 56.45 cm^3·mol^-1 (for µ[GF] = 0.95 at 278.15 K) to 59.68 cm3·mol^-1 (for µ[GF] = 0.10 at 313.15 K). The results obtained for ^0[GF] and ^0[EtOH] are in agreement with the negative excess volumes obtained. The variation of this property is presented in Figure 6 as a function of glycerol formal mole fraction at 298.15 K for glycerol formal and ethanol, respectively. These values were calculated as the difference between partial molar volumes and molar volumes presented in Table 3. For both solvents the partial molar volume diminishes as their respective proportion in the mixtures diminishes, except for ethanol in those mixtures where it is in great proportion. Redlich-Kister equation The Redlich-Kister equation has been used in recent decades for manipulating several kinds of physicochemical values of mixtures such as: excess volumes, excess viscosities, solubilities in cosolvent mixtures, among others (Redlich, O. & Kister, A.T., 1948). When applied to excess molar volumes is presented as Equation (5), where x[1] and x[2] are the respective mole fractions. In the analysis of our data about excess volumes, the Equation (5) was used in the form of third degree polynomial equations using least square analyses, obtaining four coefficients as presented in Equation (6). The Redlich-Kister parameters for glycerol formal + ethanol mixtures at all temperatures studied are presented in Table 4 beside related determination coefficients and standard deviations calculated according to Equation (7) (where D is the number of compositions studied and N is the number of terms used in the regression, that is 19 and 4 respectively). Figura 7 shows the Redlich-Kister equation applied to glycerol formal + ethanol data at several temperatures. The variation coefficients greater than 0.94 (except at 288.15 and 293.15 K) indicate that the obtained regular polynomials regressions describe adequately the excess volumes. In similar way, standard deviations are similar to those presented in the literature for other kind of mixtures (Kapadi, U.R. et al. 2001; Salas, J.A. et al. 2002; Wahab, M.A. et al. 2002; Peralta, R.D. et al. 2003; Resa, J.M. et al. 2004; Ruidiaz, M.A. & Martínez, F. 2009; Cristancho, D.M. et al. 2011). On the other hand, σ values obtained for glycerol formal + ethanol mixtures were in general similarity to those obtained for glycerol formal + propylene glycol (near to 0.030 cm^3·mol^-1, Rodríguez, G.A. et al. 2011), ethanol + propylene glycol (varying from 0.003 to 0.021 cm^3·mol^-1, Jiménez, J. & Martínez, F., 2006), and glycerol formal + water (near to 0.008 cm^3·mol^-1, Delgado, D.R. et al. 2011). Volume thermal expansion In pharmaceutical pre-formulation studies, it is too important to know the variation of physicochemical properties related to pharmaceutical dosage forms, with respect to temperature changes; especially the properties that affect the concentration of active ingredients. Thus, the volume thermal expansion coefficients (α) were calculated by means of Equation (8) (Ott, J.B. & Boerio- Goates, J., 2000) by using the variation of molar volumes with temperature (Table 2). Table 5 summarizes the (∂V^0/∂T) and α values for all mixtures and pure solvents. In all cases linear models were obtained with determination coefficients greater than 0.999. The α values varied from 7.28 x 10^–4 K^–1 in pure glycerol formal to 1.135 x 10^–3 K^–1 in pure ethanol at 298.15 K although the α variation is not linear with the mixtures Data correlation using the Jouyban-Acree model The Jouyban-Acree model was introduced to correlate the physicochemical properties of the solution in mixed solvents including the dielectric constants (Jouyban, A. et al. 2004), viscosity (Jouyban, A. et al. 2005a), solvatochromic parameter (Jouyban, A. et al. 2006), density (Jouyban, A. et al. 2005b), speed of sound (Hasan, M. et al. 2006; Kadam, U.B. et al. 2006) and more recently molar volumes (Cristancho, D.M. et al. 2011; Delgado, D.R. et al. 2011; Rodríguez, G.A. et al. 2011). The model uses the physicochemical properties of the mono-solvents as input data and a number of curve-fitting parameters representing the effects of solvent-solvent interactions in the solution. It is basically derived for representing the solvent effects on the solubility of non-polar solutes in nearly ideal binary solvent mixtures at isothermal conditions by Acree Jr., W.E. (1992); and then its applications were extended to the solubility of polar solutes in water + cosolvent mixtures at isothermal conditions (Jouyban-Gharamaleki, A. et al. 1998). Further extensions were made to represent the solvent composition and temperature effects on the solubility of drugs (Jouyban, A. et al. 1998); and also some other parameters such as acid dissociation constants (Jouyban, A. et al. 2005c), electrophoretic mobility in capillary electrophoresis (Jouyban-Gharamaleki, A. et al. 2000) and retention factors in high performance liquid chromatography (Jouyban, A. et al. 2005d) have been calculated perfectly. The model for representing the solvent composition and temperature effects on the density of solvent mixtures is: where ρ[m,T], ρ[1,T], ρ[2,T] are densities of mixed solvent, solvents 1 (glycerol formal) and 2 (ethanol) at different temperatures (T), respectively. The x[1], x[2] are mole fractions of glycerol formal and ethanol, respectively. The J[i] terms are coefficients of the model computed by using a no intercept regression analysis of: The following equation was obtained for density correlation of mixtures of glycerol formal and ethanol at different temperatures after excluding non-significant model constants: The calculated density values using Equation (11) are presented in Table 1. The mean relative deviation (MRD) between experimental and calculated data was calculated as an accuracy criterion using: and was 0.03 ± 0.03 % for Equation (11). The N in Equation (12) is the number of data points in the data set. An adapted version of Equation (11) was used to represent the effects of solvent composition and temperature on the molar volume of mixed solvents in recent works (Cristancho, D.M. et al. 2011; Delgado, D.R. et al. 2011; Rodríguez, G.A. et al. 2011). A similar model could be trained to represent the molar volume data of glycerol formal + ethanol data at various temperatures as: The calculated molar volume values are presented in Table 2. The model fits very well to the experimental data and the MRD was 0.17 ± 0.13 %. In addition to the fitness capability of the model, it could be used to predict the molar volume data using the trained version of the model employing the minimum number of experimental data points. For this purpose, a minimum number of experimental data (11 odd data points of set 278.15 K and 11 odd data points of set 313.15 K) have been used for density and molar volume data and the following equations obtained: The MRD values of Equation (14) and (15) for predicted densities and molar volumes were 0.03 ± 0.03 % and 0.15 ± 0.12 % (N = 150). Figures 8 and 9 show the predicted values versus experimental values of density and molar volume, respectively. High regression coefficients (R^2 = 1.0000 (i.e. > 0.9999) for density and R^2 = 0.9997 for molar volume) suggest the predictability and applicability of the Jouyban- Acree model to predict the density and molar volume data using a minimum number of experimental data. This work reports experimental information about the volumetric behavior of the glycerol formal + ethanol at eight temperatures commonly found in technological conditions. Thus, this work complements the information reported in the literature about volumetric properties of the possible binary mixtures conformed by glycerol formal, ethanol, propylene glycol, and water (Jiménez, J. et al. 2004; Jiménez, J. & Martínez, F., 2005, 2006; Delgado, D.R. et al. 2011; Rodríguez, G.A. et al. 2011). It can be concluded that this binary system shows non ideal behavior exhibiting negative deviations. These observations demonstrate that it is necessary to characterize systematically representative binary systems in order to have complete experimental information about the physical and chemical properties useful in the understanding of liquid pharmaceutical systems. Also, the Jouyban-Acree model can predict density and molar volume of solution in mixtures of solvents at different temperatures using minimum number of experimental data points with acceptable accuracy in comparison with experimental data. Furthermore, the reported experimental values could be used to challenge other theoretical methods developed for estimation of thermophysical properties in mixtures (Prausnitz, J.M. et al. 1986). We thank the DIB of the Universidad Nacional de Colombia (UNC) by the financial support in addition to the Department of Pharmacy of UNC for facilitating the equipment and laboratories used in this Acree Jr., W.E. 1992. Mathematical representation of thermodynamic properties: Part 2. Derivation of the combined nearly ideal binary solvent (NIBS)/Redlich-Kister mathematical representation from a two-body and three-body interactional mixing model. Thermochim Acta 198:71-79. [ Links ] Arias, L.J., Díaz, A.J., Martínez, F. 2004. Viscosidad cinemática de mezclas ternarias formadas por agua, alcohol, propilenoglicol y glicerin formal a 25.0°C. Rev Colomb Cienc Quím Farm 33:20-37. [ Links ] Barton, A.F.M. 1991. "Handbook of Solubility Parameters and Other Cohesion Parameters". 2nd edition, New York, CRC Press, p.157-193. [ Links ] Battino, R. 1971. Volume changes on mixing for binary mixtures of liquids. Chem Rev 7:5-45. [ Links ] Budavari, S., O'Neil, M.J., Smith, A., Heckelman, P.E., Obenchain Jr., J.R., Gallipeau, J.A.R., D'Arecea, M.A. 2001. "The Merck Index, An Encyclopedia of Chemicals, Drugs, and Biologicals", 13th edition, Merck & Co., Inc., Whitehouse Station, NJ, pp. 799-800. [ Links ] Cristancho, D.M., Delgado, D.R., Martínez, F., Fakhree, M.A.A., Jouyban, A. 2011. Volumetric properties of glycerol + water mixtures at several temperatures and correlation with the Jouyban-Acree model. Rev Colomb Cienc Quím Farm 40:92- 115. [ Links ] Delgado, D.R., Martínez, F., Fakhree, M.A.A., Jouyban, A. 2011. Volumetric properties of the glycerol formal + water cosolvent system and correlation with the Jouyban-Acree model. Phys Chem Liq. DOI: 10.1080/00319104.2011.584311. [ Links ] DiPietro J.A., Todd Jr., K.S., Reuter, V. 1986. Anti-strongyle activity of a propylene glycol-glycerol formal formulation of ivermectin in horses (mares). Am J Vet Res 47:874-875. [ Links ] Fort, R.T., Moore, W.R. 1966. Viscosities of binary liquid mixtures. Trans Faraday Soc 62:1112-1119. [ Links ] Hasan, M., Kadam, U.B. Hiray. A.P., Sawant, A.B. 2006. Densities, viscosities, and ultrasonic velocity studies of binary mixtures of chloroform with pentan-1-ol, hexan-1-ol, and heptan-1-ol at (303.15 and 313.15) K. J Chem Eng Data 51:671-675. [ Links ] Jiménez, J., Manrique, J., Martínez F. 2004. Effect of temperature on some volumetric properties for ethanol + water mixtures. Rev Colomb Cienc Quím Farm 33:145-155. [ Links ] _____., Martínez, F. 2005. Study of some volumetric properties of 1,2-propanediol + water mixtures at several temperatures. Rev Colomb Cienc Quím Farm 34:46-57. [ Links ] _____.,2006. Volumetric properties of ethanol + 1,2- propanediol mixtures at different temperatures. Phys Chem Liq 44:521-530. [ Links ] Jouyban, A., Soltanpour, Sh., Chan, H.K. 2004. A simple relationship between dielectric constant of mixed solvents with solvent composition and temperature. Int J Pharm 269:353-360. [ Links ] _____., Khoubnasabjafari, M., Vaez-Gharamaleki, Z., Fekari, Z., Acree Jr., W.E. 2005a. Calculation of the viscosity of binary liquids at various temperatures using Jouyban-Acree model. Chem Pharm Bull (Tokyo), 53:519-523. [ Links ] _____., Fathi-Azarbayjani, A., Khoubnasabjafari, M., Acree Jr., W.E. 2005b. Mathematical representation of the density of liquid mixtures at various temperatures using Jouyban- Acree model. Indian J Chem A 44:1553-1560. [ Links ] _____., Soltani, S., Chan, H.K., Acree Jr., W.E. 2005c. Modeling acid dissociation constant of analytes in binary solvents at various temperatures using Jouyban-Acree model. Thermochim Acta 428 :119-123. [ Links ] _____., Rashidi, M.R., Vaez-Gharamaleki, Z., Matin, A.A., Djozan, Dj. 2005d. Mathematical representation of solute solubility in binary mixture of Supercrititical fluids by using Jouyban-Acree Model. Pharmazie 60:527-529. [ Links ] _____., Khoubnasabjafari, M., Acree Jr., W.E. 2006. Modeling the solvatochromic parameter () of mixed solvents with respect to solvent composition and temperature using the Jouyban-Acree model. DARU, 14:22-25. [ Links ] Jouyban-Gharamaleki, A., Barzegar-Jalali, M., Acree Jr., W.E. 1998. Solubility correlation of structurally related drugs in binary solvent mixtures. Int J Pharm 166:205-209. [ Links ] _____., Valaee, L., Barzegar-Jalali, M., Clark, B.J., Acree Jr., W.E. 1999. Comparison of various cosolvency models for calculating solute solubility in water-cosolvent mixtures. Int J Pharm 177 :93-101. [ Links ] _____., Khaledi, M.G., Clark, B.J. 2000. Calculation of electrophoretic mobilities in water-organic modifier mixtures. J Chromatogr A 868:277-284. [ Links ] Kadam, U.B., Hiray, A.P., Sawant, A.B., Hasan, M. 2006. Densities, viscosities, and ultrasonic velocity studies of binary mixtures of trichloromethane with methanol, ethanol, propan-1-ol, and butan-1-ol at T = (298.15 and 308.15) K. J Chem Thermodyn 38:1675-1683. [ Links ] Kapadi, U.R., Hundiwale, D.G., Patil, N.B., Lande, M.K., Patil, P.R. 2001. Studies of viscosity and excess molar volume of binary mixtures of propane-1,2 diol with water at various temperatures. Fluid Phase Equilibr 192:63-70. [ Links ] Kestin, J. 1979. "A Course in Thermodynamics", McGraw-Hill, New York, pp. 331-332. [ Links ] Kratky, O., Leopold, H., Stabinger, H. 1980. "DMA45 Calculating Digital Density Meter, Instruction Manual", Anton Paar, K.G., Graz, Austria, pp. 1-12. [ Links ] Lo, P.K.A., Fink, D.W., Williams, J.B., Blodinger, J. 1985. Pharmacokinetic studies of ivermectin: Effects of formulation. Veter Res Comm 9:251-268. [ Links ] Martínez, F., Gómez, A. Ávila, C.M. 2002. Volúmenes molales parciales de transferencia de algunas sulfonamidas desde el agua hasta la mezcla agua-etanol (X = 0.5). Acta Farm Bonaerense 21:107-118. [ Links ] Ott, J.B., Boerio-Goates, J. 2000. "Chemical Thermodynamics: Advanced Applications", Academic Press, London, pp. 271-291. [ Links ] Peralta, R.D., Infante, R., Cortez, G. Ramírez, R.R., Wisniak, J. 2003. Densities and excess volumes of binary mixtures of 1,4-dioxane with either ethyl acrylate, or butyl acrylate, or methyl methacrylate, or styrene at T = 298.15 K. J Chem Thermodyn 35:239-250. [ Links ] Perrot, P. 1998. "A to Z of Thermodynamics", Oxford University Press, Inc., New York, pp. 221-225. [ Links ] Pineda, L.M., Teatino, R.E., Martínez, F. 2003. Propiedades fisicoquímicas de mezclas ternarias formadas por agua, alcohol, propilenoglicol y glicerin formal a 25°C. Rev Colomb Cienc Quím Farm 32 :13-22. [ Links ] Pivnichny, J.V. 1984. Separation and determination of the two components of glycerol formal by high-performance liquid chromatography. J Pharm Biomed Anal 2:491-500. [ Links ] Prausnitz, J.M., Lichtenthaler, R.N., Gomes de Acevedo, E. 1986. "Molecular Thermodynamics of Fluid-Phase Equilibria", 2nd edition, Prentice-Hall, Inc., Englewood Cliffs, NJ, pp. 181-186. [ Links ] Redlich, O., Kister, A.T. 1948. Algebraic representation of thermodynamic properties and the classification of solutions. Ind Eng Chem 40:345-348. [ Links ] Reinemeyer, C.R., Courtney, C.H. 2001. Antinematodal drugs, in: "Veterinary Pharmacology and Therapeutics", 8th edition, edited by Adams, H.R., Iowa State University Press & Blackwell Publishing Professional, Ames, Iowa, p. 965. [ Links ] Resa, J.M., González, C., Goenaga, J.M., Iglesias, M. 2004. Temperature dependence of excess molar volumes of ethanol + water + ethyl acetate. J Solution Chem 33:169-198. [ Links ] Rodríguez, S.J., Cristancho, D.M., Neita, P.C., Vargas, E.F., Martínez, F. 2010. Volumetric properties of the octyl methoxycinnamate + ethyl acetate solvent system at several temperatures. Phys Chem Liq 48:638-647. [ Links ] Rodríguez, G.A., Delgado, D.R., Martínez, F., Fakhree, M.A.A., Jouyban, A. 2011. Volumetric properties of glycerol formal + propylene glycol mixtures at several temperatures and correlation with the Jouyban-Acree model. J Solution Chem In press. [ Links ] Rubino, J.T. 1988. Cosolvents and cosolvency, in: "Encyclopedia of Pharmaceutical Technology", Vol. 3, edited by Swarbrick, J., Boylan, J.C., Marcel Dekker, Inc., New York, pp. 375-398. [ Links ] Ruidiaz, M.A., Martínez, F. 2009. Volumetric properties of the pharmaceutical model cosolvent system 1,4-dioxane + water at several temperatures. Vitae Rev Fac Quím Farm 16:327-337. [ Links ] Salas, J.A., Zurita, J.L., Katz, M. 2002. Excess molar volumes and excess viscosities of the 1-chlorobutane + pentane + dimethoxyethane ternary system at 298.15 K. J Argent Chem Soc 90:61-73. [ Links US Pharmacopeia. 1994. 23rd edition, The United States Pharmacopeial Convention, Rockville, MD, p. 43. [ Links ] Wahab, M.A., Azhar, M., Mottaleb, M.A. 2002. Volumetric behaviour of binary liquid mixtures at a temperature of 303.15K. Bull Kor Chem Soc 23:953-956. [ Links ] Yalkowsky, S.H. 1999. "Solubility and Solubilization in Aqueous Media", American Chemical Society and Oxford University Press, New York, pp. 180-235. [ Links ] Recibido: junio 22 de 2011. Aceptado para su publicación: agosto 30 de 2011.
{"url":"http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0370-39082011000300005&lng=en&nrm=iso","timestamp":"2024-11-02T17:29:47Z","content_type":"application/xhtml+xml","content_length":"71326","record_id":"<urn:uuid:43957d13-e75c-4e6f-82e9-7358bb8d2df9>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00010.warc.gz"}
Complex infinity: Introduction to the symbols (subsection Symbols/04) The symbol has the following values at some finite points : The symbol has the following values at some infinite points : The symbol has the following value at point : is a symbol. It represents an unknown or not exactly determined point (potentially with infinite magnitude) of the complex plane. Often it results from a double limit where two infinitesimal parameters approach zero at different speeds (e.g. ). is a symbol. On the Riemann sphere, it is the north pole approached from exactly east. In the projective complex plane, it is a point at the line at infinity. is a symbol. On the Riemann sphere, it is the north pole. In the projective complex plane, it is the line at infinity. is a symbol. On the Riemann sphere, it is the north pole together with the direction how to approach it. In the projective complex plane, it is a point at the line at infinity. The symbols , , , and have the following complex characteristics: Derivatives of the symbols , , , and satisfy the following relations: Simple indefinite integrals of the symbols , , and have the following representations: All Fourier integral transforms of the symbols , , , and can be evaluated using the following formal rules: Laplace direct and inverse integral transforms of the symbols , , , and can be evaluated using the following formal rules: The symbols , , and satisfy some obvious inequalities, for example:
{"url":"https://functions.wolfram.com/Constants/ComplexInfinity/introductions/Symbols/04/","timestamp":"2024-11-13T00:10:45Z","content_type":"text/html","content_length":"49324","record_id":"<urn:uuid:fba2efc3-d63f-4c2f-a872-e3a069819fdb>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00347.warc.gz"}
Value groups and residue fields of models of real exponentiation Let F be an archimedean field, G a divisible ordered abelian group and h a group exponential on G. A triple (F,G,h) is realised in a non-archimedean exponential field (K,exp) if the residue field of K under the natural valuation is F and the induced exponential group of (K,exp) is (G,h). We give a full characterisation of all triples (F,G,h) which can be realised in a model of real exponentiation in the following two cases: i) G is countable. ii) G is of cardinality kappa and kappa-saturated for an uncountable regular cardinal kappa with kappa^(<kappa) = kappa. is licensed under a Creative Commons Attribution 3.0 License Journal of Logic and Analysis ISSN: 1759-9008
{"url":"http://logicandanalysis.com/index.php/jla/article/view/351","timestamp":"2024-11-14T06:41:20Z","content_type":"application/xhtml+xml","content_length":"18866","record_id":"<urn:uuid:06ea5d23-d315-49f6-9104-ac3808c64969>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00371.warc.gz"}
Formations of inverse semigroups Gomes, Gracinda M. S.; Nobre, Isabel J. Semigroup Forum , 105 (2022), 217–243 This article explores a generalisation of the theory of formations of groups. Taking formations of groups as the starting point, formations of inverse semigroups are defined, as well as the wider classes of i-formations (i standing for idempotent-separating) and some classes of the kind named f-formations (f standing for fundamental). The relation between the nature of a class of groups and that of certain classes of inverse semigroups with associated groups in the first is discussed. The product of formations is considered, and a product like the Gaschutz’s product known for groups is presented for f-formations.
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=7&member_id=11&doc_id=3542","timestamp":"2024-11-14T02:06:14Z","content_type":"text/html","content_length":"8610","record_id":"<urn:uuid:691517d7-62d4-4fa3-9e07-464de0f2c0d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00762.warc.gz"}
International Scientific Journal A novel algorithm for solving the classic Stefan problem is proposed in the paper. Instead of front tracking, we preset the moving interface locations and use these location coordinates as the grid points to find out the arrival time of moving interface respectively. Through this approach, the difficulty in mesh generation can be avoided completely. The simulation shows the numerical result is well coincident with the exact solution, implying the new approach performs well in solving this problem. PAPER SUBMITTED: 2010-05-10 PAPER REVISED: 2010-08-14 PAPER ACCEPTED: 2010-11-11 , VOLUME , ISSUE Supplement 1 , PAGES [S39 - S44]
{"url":"https://thermalscience.vinca.rs/2011/supplement/6","timestamp":"2024-11-12T13:54:54Z","content_type":"text/html","content_length":"12116","record_id":"<urn:uuid:9407522c-dac7-48ff-81ca-9c673a4cd87b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00469.warc.gz"}
Compton scattering and photon momentum transfer in context of photon momentum 31 Aug 2024 Compton Scattering and Photon Momentum Transfer: A Theoretical Analysis Compton scattering, a phenomenon where a photon collides with a free electron, is a fundamental process in the interaction between matter and radiation. This article provides an in-depth analysis of Compton scattering and the associated photon momentum transfer, highlighting the underlying physics and theoretical framework. In 1923, Arthur Compton performed a series of experiments demonstrating that photons can behave as particles with momentum [1]. The Compton effect, as it came to be known, revealed that when a photon collides with a free electron, the photon’s energy is transferred to the electron, resulting in a change in both the photon’s wavelength and direction. This phenomenon has since been extensively studied and forms the basis of our understanding of photon-matter interactions. Compton Scattering The Compton scattering process can be described as follows: 1. A photon with energy E and momentum p collides with a free electron at rest. 2. The photon transfers some of its energy to the electron, resulting in a change in both the photon’s wavelength (λ) and direction. 3. The scattered photon has an energy E’ and momentum p’. The Compton scattering cross-section is given by: σ(E) = 8πr0^2 * (1 + cos(θ)) / (E/E0)^2 where r0 is the classical electron radius, θ is the scattering angle, and E0 is the initial photon energy. Photon Momentum Transfer The momentum transfer from the photon to the electron can be calculated using the following formula: Δp = p’ - p = 4πr0^2 * (1 + cos(θ)) / λ where Δp is the change in momentum, and λ is the initial wavelength of the photon. Compton scattering provides a fundamental understanding of the interaction between photons and matter. The associated photon momentum transfer highlights the particle-like behavior of photons and has significant implications for our understanding of quantum mechanics and radiation-matter interactions. In conclusion, this article has provided an in-depth analysis of Compton scattering and photon momentum transfer, highlighting the underlying physics and theoretical framework. Further research into these phenomena will continue to shed light on the intricate relationships between matter and radiation. [1] A. H. Compton, “A Quantum Theory of the Scattering of X-Rays by Light Elements,” Phys. Rev., vol. 21, no. 5, pp. 483-502, May 1923. Related articles for ‘photon momentum’ : • Reading: Compton scattering and photon momentum transfer in context of photon momentum Calculators for ‘photon momentum’
{"url":"https://blog.truegeometry.com/tutorials/education/4df74e2d861671633d570f8780940bcf/JSON_TO_ARTCL_Compton_scattering_and_photon_momentum_transfer_in_context_of_phot.html","timestamp":"2024-11-05T17:12:11Z","content_type":"text/html","content_length":"16025","record_id":"<urn:uuid:71c73a7a-faeb-4ca7-a869-c7113b1b3027>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00490.warc.gz"}
How to Convert from Radians to Degrees in Excel: Quick Guide Table of Contents : Converting between radians and degrees is a common requirement in various fields such as mathematics, physics, and engineering. If you're working in Excel, you'll find that the program offers a straightforward way to perform this conversion. In this guide, we'll explore how to convert radians to degrees in Excel, including formulas, functions, and tips for accurate calculations. Let's get started! 🌟 Understanding Radians and Degrees Before diving into Excel, it’s important to understand the two units of angular measurement. • Radians: A radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. There are approximately ( 6.283 ) radians in a full circle (or ( 2\pi )). • Degrees: Degrees divide a circle into 360 equal parts. Therefore, a full circle is ( 360 ) degrees. The relationship between the two is given by the formula: [ 1 \text{ radian} = \frac{180}{\pi} \text{ degrees} ] Why Convert Radians to Degrees? Knowing how to convert radians to degrees can be crucial in situations where: • You are interpreting trigonometric functions. • You are working with circular motion or oscillations. • You need to visualize data in degrees for presentations. The Conversion Formula The formula to convert radians to degrees is: [ \text{Degrees} = \text{Radians} \times \frac{180}{\pi} ] Step-by-Step Guide: Converting Radians to Degrees in Excel Let’s break down the steps to convert radians to degrees using Excel's built-in functions. Method 1: Using the DEGREES Function Excel provides a convenient function named DEGREES() to convert radians to degrees easily. 1. Open Excel and navigate to the cell where you want to display the degrees. 2. Type the formula: Here, A1 is the cell that contains the radians you want to convert. 3. Press Enter. The cell will now display the degrees corresponding to the radians in cell A1. Radians (A1) Degrees (B1) 1 =DEGREES(A1) 3.14 =DEGREES(A2) • After entering the values in column A, column B will automatically calculate degrees using the DEGREES() function. Method 2: Manual Conversion Using the Formula If you prefer to use a formula instead of the built-in function, you can do so by directly applying the conversion formula. 1. Select a cell for the degree output. 2. Enter the formula: Replace A1 with the cell reference containing radians. 3. Press Enter to get the result in degrees. Radians (A1) Degrees (B1) 1 =A1*(180/PI()) 3.14 =A2*(180/PI()) Important Note Always remember to check the input values for accuracy. Rounding errors can occur when dealing with irrational numbers like Ï€. Common Use Cases for Radians to Degrees Conversion in Excel • Trigonometric Calculations: Converting angles for calculations involving sine, cosine, and tangent. • Graphing: Plotting angles accurately in degrees for more intuitive understanding. • Engineering Applications: In scenarios where specifications require degrees over radians. Quick Tips for Working with Excel Functions • Check Your Cell References: Ensure you’re referencing the correct cells to avoid errors. • Use Named Ranges: For larger datasets, consider using named ranges for easier readability and maintenance. • Practice Excel Functions: Familiarize yourself with other related functions like RADIANS() to enhance your spreadsheet skills. Converting radians to degrees in Excel is a simple and straightforward process, whether you choose to use the built-in DEGREES function or apply the manual conversion formula. By mastering this technique, you can ensure accuracy in calculations and improve your productivity in data analysis. So next time you encounter radians in your work, you'll be ready to convert them easily into degrees. Happy computing! 📊✨
{"url":"https://tek-lin-pop.tekniq.com/projects/how-to-convert-from-radians-to-degrees-in-excel-quick-guide","timestamp":"2024-11-01T20:58:38Z","content_type":"text/html","content_length":"85080","record_id":"<urn:uuid:8fff63cd-353d-4efb-bf65-e7f88fff05ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00342.warc.gz"}
The price of anarchy in routing games as a function of the demand The price of anarchy has become a standard measure of the efficiency of equilibria in games. Most of the literature in this area has focused on establishing worst-case bounds for specific classes of games, such as routing games or more general congestion games. Recently, the price of anarchy in routing games has been studied as a function of the traffic demand, providing asymptotic results in light and heavy traffic. The aim of this paper is to study the price of anarchy in nonatomic routing games in the intermediate region of the demand. To achieve this goal, we begin by establishing some smoothness properties of Wardrop equilibria and social optima for general smooth costs. In the case of affine costs we show that the equilibrium is piecewise linear, with break points at the demand levels at which the set of active paths changes. We prove that the number of such break points is finite, although it can be exponential in the size of the network. Exploiting a scaling law between the equilibrium and the social optimum, we derive a similar behavior for the optimal flows. We then prove that in any interval between break points the price of anarchy is smooth and it is either monotone (decreasing or increasing) over the full interval, or it decreases up to a certain minimum point in the interior of the interval and increases afterwards. We deduce that for affine costs the maximum of the price of anarchy can only occur at the break points. For general costs we provide counterexamples showing that the set of break points is not always finite. • 90B06 • 90B10 • 90C25 • 90C33 • Affine cost functions • Nonatomic routing games • Price of anarchy • Primary 91A14 • Secondary 91A43 • Variable demand Dive into the research topics of 'The price of anarchy in routing games as a function of the demand'. Together they form a unique fingerprint.
{"url":"https://pure.uai.cl/en/publications/the-price-of-anarchy-in-routing-games-as-a-function-of-the-demand","timestamp":"2024-11-05T00:19:20Z","content_type":"text/html","content_length":"54164","record_id":"<urn:uuid:e66af45a-c3f8-4b3f-a529-3025e484122a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00204.warc.gz"}
Linear Algebra | NoSleepCreative Wiki We usually understand 3D objects such as a primitive cube as a single mass but the reality is that they are made of points. A cube has at least 4 points. Each point has it whole XYZ coordinates. How can we perform transformations across all these points. The answer is using matrices/linear transformation. A matrix, in a way, is like a rule, that can be apply to each point. Do one transformation at a time from right to left Span of 2 vectors is set of all their linear combination if dot product of 2 vector = 0, then vectors are perpendicular i-hat - unit vector of x j-hat - unit vector of y The basis of vector spaces is set of linearly independent vectors that span the full span. Multiplying any matrix by the identity matrix results in the same matrix.
{"url":"https://docs.nosleepcreative.com/dev/archive/mathematics/linear-algebra","timestamp":"2024-11-07T02:43:24Z","content_type":"text/html","content_length":"416919","record_id":"<urn:uuid:737750b2-e529-426c-b17f-ac8b54456fc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00734.warc.gz"}
What is SVM? | MLJAR SVM stands for Support Vector Machine, which is a supervised learning algorithm used for classification and regression tasks. The primary goal of SVM is to find the hyperplane that best separates the data points into different classes. In classification, SVM works by finding the optimal hyperplane that maximizes the margin, which is the distance between the hyperplane and the nearest data points from each class, known as support vectors. This hyperplane effectively acts as a decision boundary, allowing SVM to classify new data points into one of the predefined classes based on which side of the hyperplane they fall on. SVM can handle linear and nonlinear classification tasks by using different kernel functions, such as linear, polynomial, radial basis function (RBF), or sigmoid, to transform the input data into higher-dimensional spaces where it's easier to find a separating hyperplane. Overall, SVM is widely used in various fields such as pattern recognition, image classification, bioinformatics, and more, due to its effectiveness in handling both linearly and nonlinearly separable How SVM works: Support Vector Machine works by finding the optimal hyperplane that best separates different classes in a dataset. Here's a step-by-step explanation of how SVM works: 1. Data Preparation - SVM starts with a dataset consisting of labeled examples, where each example belongs to one of two classes (for binary classification). Each example is represented by a set of 2. Mapping Data to a Higher Dimension - SVM maps the input data points into a higher-dimensional feature space. This mapping is done using a kernel function, which implicitly transforms the input data into a higher-dimensional space where it may be easier to find a separating hyperplane. Common kernel functions include linear, polynomial, and radial basis function (RBF). 3. Finding the Optimal Hyperplane - In the higher-dimensional feature space, SVM aims to find the hyperplane that maximizes the margin between the classes. The margin is the distance between the hyperplane and the nearest data points from each class, also known as support vectors. The hyperplane that maximizes this margin is considered the optimal separating hyperplane. 4. Training the SVM - The process of training an SVM involves finding the parameters (weights and bias) that define the optimal hyperplane. This is typically formulated as an optimization problem, where the objective is to maximize the margin while minimizing classification errors. Regularization parameters, such as the cost parameter (C), can be used to control the trade-off between maximizing the margin and minimizing classification errors. 5. Classification - Once the optimal hyperplane is determined, SVM can classify new data points by examining which side of the hyperplane they fall on. If a data point lies on one side of the hyperplane, it is classified as belonging to one class, while if it lies on the other side, it is classified as belonging to the other class. 6. Handling Non-Linear Decision Boundaries - SVM is effective at handling non-linear decision boundaries through the use of kernel functions. These functions implicitly map the input data into a higher-dimensional space where a linear separation may be possible. This allows SVM to classify data that is not linearly separable in the original feature space. Overall, SVM is a powerful machine learning algorithm for classification tasks, particularly when dealing with high-dimensional data and cases where a clear margin of separation exists between SVM usages: Support Vector Machines find applications across various fields due to their versatility and effectiveness in classification and regression tasks. Here are some common areas where SVMs are used: • Text Classification: □ SVMs are widely used in natural language processing tasks such as text classification, sentiment analysis, spam detection, and document categorization. They can effectively classify text documents into different categories based on their content. • Image Recognition: □ SVMs are used for image classification, object detection, and image segmentation tasks in computer vision applications. They can classify images into different categories or detect specific objects within images. • Bioinformatics: □ SVMs are employed in bioinformatics for tasks such as protein classification, gene expression analysis, and biomarker detection. They can analyze biological data and classify samples based on various features. • Medical Diagnosis: □ SVMs are used in medical diagnosis and healthcare applications for tasks such as disease prediction, patient classification, and medical image analysis. They can assist in diagnosing diseases based on patient data or medical images. • Financial Forecasting: □ SVMs are utilized in financial forecasting and stock market analysis for tasks such as stock price prediction, trend identification, and risk assessment. They can analyze financial data and make predictions based on historical patterns. • Remote Sensing: □ SVMs are used in remote sensing applications for tasks such as land cover classification, vegetation mapping, and environmental monitoring. They can analyze satellite or aerial imagery to classify different land cover types or detect changes in the environment. • Handwritten Digit Recognition: □ SVMs are employed in optical character recognition (OCR) systems for tasks such as handwritten digit recognition. They can classify handwritten digits accurately, making them useful in applications such as postal automation and bank check processing. • Fault Diagnosis: □ SVMs are used in fault diagnosis and condition monitoring systems for tasks such as machinery fault detection and predictive maintenance. They can analyze sensor data from machines to detect abnormal patterns and diagnose potential faults early. Overall, SVMs find applications in diverse domains where there is a need for accurate classification, pattern recognition, and predictive modeling based on input data. Their ability to handle high-dimensional data and nonlinear relationships makes them suitable for a wide range of real-world problems. SVM explained: SVMs are like super-smart lines or boundaries that help us separate things into different groups. Imagine you have a bunch of points on a piece of paper, some marked with a red pen and others with a blue pen. SVMs are like drawing a line that tries to put as much space as possible between the red points and the blue points. But here's the cool part: SVMs don't just draw any line. They're very picky! They look for the best line that keeps the red points away from the blue points by as big a gap as possible. This line is called the "maximum margin" line because it's like the biggest gap you can get between the two groups. To find this perfect line, SVMs only focus on a few special points. These points are like the leaders of each group – the ones that are closest to the line. We call them "support vectors." SVMs don't worry about all the other points; they just care about these special ones because they help decide where the line goes. Now, sometimes the points are all jumbled up, and you can't draw a straight line to separate them. That's where the "kernel trick" comes in. It's like lifting the paper off the table into a higher dimension, where it's easier to find a line or boundary that separates the points neatly. Once SVMs find this perfect line or boundary, they can easily tell which group new points belong to. If a new point is on one side of the line, it belongs to one group, and if it's on the other side, it belongs to the other group. So, SVMs are like clever lines or boundaries that help us sort things into different groups, making them super useful in all sorts of tasks. 5 Pros and Cons of SVMs: • Advantages: □ Effective in High-Dimensional Spaces - SVM works well in high-dimensional spaces, making it suitable for problems with many features, such as text classification or image recognition. □ Versatility with Kernel Functions - SVM allows for the use of different kernel functions, such as linear, polynomial, and radial basis function (RBF), which can be chosen based on the problem at hand. This flexibility enables SVM to handle non-linear decision boundaries effectively. □ Robustness to Overfitting - SVM has regularization parameters that help prevent overfitting, making it less sensitive to noise in the training data compared to some other algorithms. □ Global Optimum - The objective function in SVM aims to find the hyperplane that maximizes the margin between classes, leading to a global optimum solution rather than getting stuck in local □ Effective for Small to Medium-Sized Datasets - SVM typically performs well on small to medium-sized datasets, where it can efficiently find the optimal separating hyperplane. • Disadvantages: □ Sensitivity to Parameter Tuning - SVM requires careful selection of parameters such as the regularization parameter (C) and the choice of kernel function. Poor parameter choices can lead to suboptimal performance or overfitting. □ Computationally Intensive - Training an SVM model can be computationally intensive, especially for large datasets. The time complexity of SVM algorithms can become prohibitive as the number of samples increases. □ Memory Intensive - SVM models can be memory intensive, particularly when dealing with large datasets or high-dimensional feature spaces. This can limit the scalability of SVM for certain □ Black Box Model - SVMs provide little insight into the relationship between the input features and the output, making them less interpretable compared to some other algorithms. Understanding the decision-making process of SVMs can be challenging. □ Limited Performance on Imbalanced Datasets - SVM may struggle with imbalanced datasets, where one class has significantly fewer samples than the others. In such cases, the model may prioritize the majority class and perform poorly on the minority class. Balancing techniques or alternative algorithms may be necessary. • "A Tutorial on Support Vector Machines for Pattern Recognition" by Christopher J.C. Burges - This paper provides a detailed introduction to SVMs, including their mathematical formulation, training algorithms, and practical considerations. • "Support Vector Machines" by Cristianini and Shawe-Taylor - This is a comprehensive textbook covering the theoretical foundations, algorithms, and applications of SVMs. • "Support Vector Machines" by Hsu, Chang, and Lin - This review article discusses the developments in SVM algorithms, optimization techniques, and kernel functions, as well as their applications in classification and regression tasks. Support Vector Machines find extensive usage across various domains due to their versatility, effectiveness, and robust performance in classification and regression tasks. Overall, SVMs serve as powerful tools in a wide range of applications where accurate classification, pattern recognition, and predictive modeling are essential. Their ability to handle high-dimensional data, nonlinear relationships, and complex decision boundaries makes them valuable in addressing real-world challenges across diverse domains.
{"url":"https://mljar.com/glossary/svm/","timestamp":"2024-11-03T16:21:56Z","content_type":"text/html","content_length":"82408","record_id":"<urn:uuid:ad051160-68c4-41f3-ba3d-f56dbdca3741>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00294.warc.gz"}
Mastering Multi-hued Color Scales with Chroma.js tl;dr: Use this tool or chroma.js’ bezier color interpolation and lightness correction. Probably one of the most useful things about Cynthia Brewers color advice for cartography are the multihue color schemes. This post explains how you can create your own, using two new features of chroma.js: Bezier interpolation and automatic lightness correction. Why multi-hue? While a (linear) variation in lightness is the most important quality of a sequential color scale, varying the hue can bring further significant improvements. Hue variation provides a better color contrast and thus makes the colors easier to differentiate. Also I feel that it makes them look a little more aesthetic. This is why the multihue color schemes were included in ColorBrewer, and in the related paper the authors also pointing out that ”they are more difficult to create than single-hue schemes because all three dimensions of colour are changing simultaneously“. Why is it difficult to create a (good) multi-hue scale? The straight forward approach to create a multi-hue color scheme is to just pick two colors of a different hue as start end end color for the gradient. Interpolating in CIE Lab* ensured that we end up with a linear lightness gradient, so the scheme is valid to use in visualizations and maps. But still the result is not 100% satisfying. The colors in the middle steps tend to look a little desaturated, especially if we compare this to the nice multi-hue schemes in ColorBrewer. In some cases the gradient doesn’t even go where we want it to go. For instance, in the following yellow–blue gradient we might want the colors to go through some nice green tones, but instead we get this kind of grayish purple tones. Introducing additional color-stops is dangerous So to get to better multi-hue schemes we most likely end up introducing additional color stops, and this is where the real trouble starts. The main problem of constructing such a multi-hue, multi-stop color scheme is how to pick the middle colors. To illustrate this problem let us look at a palette sometimes used to visualize temperatures: starting with black the gradient goes through red and yellow and finally ends in white. You might already see the problem with this, but for further illustration I visualized the lightness profile of the gradient. The plot shows the L* value for each color after converting to CIE Lab*. In this case the curve’s slope is varying radically at the color stops. After the gradient has passed yellow we see almost no increase of lightness anymore which makes it almost impossible to differentiate colors in the last quarter of the scale. This gets even more obvious when picking a set of 7 equidistant colors of that gradient. The third and fourth colors as well as the fifth and sixth colors are almost identical. Using these colors in a map is definitely not a good idea, as it would make it very hard to read the values. As you can imagine, it doesn’t matter if we interpolate in RGB or Lab*, as we still would have the hard breaks in the lightness curve. This is because the breaks are not the result of the color space, but of the linear interpolation between the color stops. One way to fix it is to use a non-linear interpolation instead. Smoothing multi-stop gradients using Bezier interpolation If the hard edges are causing the problem with the gradient, why not just smooth them by using a non-linear interpolation, such as quadratic or cubic Bezier curves. The first and last colors are the start and end point of the curve, while the other colors are just treated as control points for the curve. In the previous example we would have two control points (red & yellow) and would interpolate using a cubic Bezier curve.^[Aside: if you’re interested in learning more about how Bezier curves work I highly recommend looking at and playing with Jason Davis’ interactive Bezier curve Although the above illustration suggests that the colors lie in a two dimensional space, this is of course not the case. Instead the Bezier interpolation is applied for each of the three dimensions of the CIE Lab* space. As bezier curves usually don’t pass the control points our resulting gradient will not include the colors red and yellow. But still they will ‘guide’ the gradient on its way from black to white. Here’s how the resulting gradient would look like, along with the resulting lightness curve. As you can see, the resulting gradient has indeed a much smother lightness curve. Taking seven equidistant steps of the gradient we now end up with nicely differentiable colors. Of course you can achieve the same gradient with linear interpolation, just as you can approximate a cubic Bezier curve with linear segments. But using the control points is much easier than finding the actual stops. Here’s another example simulating the ColorBrewer schemes Yellow-Green-Blue and Red-Yellow-Blue. During the writing of the initial version of this post, more precisely right after I first visualized the lightness profile, another simple idea for improving the color gradients popped up. Auto-correcting the lightness Looking at the lightness curves shown above, we intuitively know what we are aiming for: a straight lightness transition between the start and the end of the gradient. If the gradient is defined as a function over a variable t ∈[0..1], we just need to fix the _t _in such a way that the lightness curve ends up being a straight line. In other words we just “move” the colors along the gradient so that they end up with a linear lightness curve. Applying this correction to the black–red–yellow–white gradient (top) we end up with this: In the corrected version (bottom), red has moved from the first third to about the center of the gradient, while yellow ended up almost at the end of it. This makes sense as yellow is indeed a much brighter color. Looking at equidistant samples from this gradient we now see nicely differentiable colors, safe to be used in maps and visualizations. As second example I applied the lightness correction to the Yellow-Green-Blue color scale example from above. If you compare the results to the Bezier interpolated version you see that this version is slightly more saturated. Combining Bezier interpolation and lightness correction Judging from these first experiments, I think both techniques are producing quite promising results. Finally, one can apply both the Bezier interpolation and the lightness correction. The following example shows a gradient of lightyellow–orangered–deeppink–darkred with (1) just linear Lab* interpolation, (2) cubic Bezier interpolation, (3) lightness correction, and (4) Bezier interpolation and lightness correction. Even though the Bezier interpolated scale already had an almost linear lightness profile (shown in red), the additional lightness correction slightly improved the differentiability of the resulting colors. Please click on the image above to experiment with the example yourself. How to use these features in chroma.js First of all, if you just need the hexadecimal values of a nice color scale, you can start playing with the Chroma.js Color Scale Helper right away (if you haven’t done already). All the examples in this post were created using this tool, and are linking back to it with the corresponding settings. To use the Bezier interpolation in chroma.js you just create the interpolator function by callig chroma.bezier() with an array of colors as first argument. The interpolator function can be called with a number between 0 and 1 as argument, and will return a chroma.color object. var bez = chroma.bezier(['white', 'yellow', 'red', 'black']); // compute some colors bez(0).hex(); // #ffffff bez(0.33).hex(); // #ffcc67 bez(0.66).hex(); // #b65f1a bez(1).hex(); // #000000 The interpolation method depends on the number of colors in that array: if you pass two colors linear interpolation is used, if you pass three colors a quadratic Bezier curve is used, and if you pass four colors a cubic Bezier curve is used. Five colors is a special case where two independent quadratic Beziers are used for the colors (1,2,3) and (3,4,5), which is ideal for diverging color scales. To use the Bezier interpolation with chroma.scale you just pass the interpolator function instead of the colors array. Since the Bezier interpolator uses Lab* by default, the scale mode is being chroma.bezier(['white', 'yellow', 'red', 'black']).scale().colors(5); To use the lightness correction you just call scale.correctLightness() of any chroma.scale object. It is important to note that the lightness correction only works for sequential color scales, where the input colors are ordered by lightness. So this won’t work for diverging color scales, yet. But I might fix this in the future. chroma.scale(['white', 'yellow', 'red', 'black']).correctLightness(); If you’re interested in the implementation of the features, here’s the CoffeeScript source code of the Bezier interpolation and the lightness correction. Hope you enjoyed this post, and as always, I’m curious to read what you think about it in the comments. Update: diverging multi-hue color palettes An updated version of my tool now allows to create multi-hue diverging color palettes that perform the color interpolation and lightness correction separately for both sides of the diverging scale. Feel free to fork the tool on Github.
{"url":"https://www.vis4.net/blog/mastering-multi-hued-color-scales/","timestamp":"2024-11-13T12:23:57Z","content_type":"text/html","content_length":"61406","record_id":"<urn:uuid:fabadd13-0a9d-4494-95fe-9d81e1674676>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00468.warc.gz"}
Higher spin AdSd+1/CFTd at one loop Following S. Giombi and I. R. Klebanov, [J. High Energy Phys. 12 (2013) 068], we carry out one-loop tests of higher spin AdSd+1/CFTd correspondences for d≥2. The Vasiliev theories in AdSd+1, which contain each integer spin once, are related to the U(N) singlet sector of the d-dimensional CFT of N free complex scalar fields; the minimal theories containing each even spin once - to the O(N) singlet sector of the CFT of N free real scalar fields. Using analytic continuation of higher spin zeta functions, which naturally regulate the spin sums, we calculate one-loop vacuum energies in Euclidean AdSd+1. In even d we compare the result with the O(N0) correction to the a coefficient of the Weyl anomaly; in odd d - with the O(N0) correction to the free energy F on the d-dimensional sphere. For the theories of integer spins, the correction vanishes in agreement with the CFT of N free complex scalars. For the minimal theories, the correction always equals the contribution of one real conformal scalar field in d dimensions. As explained in Giombi and Klebanov, this result may agree with the O(N) singlet sector of the theory of N real scalar fields, provided the coupling constant in the higher spin theory is identified as GN∼1/(N-1). Our calculations in even d are closely related to finding the regularized a anomalies of conformal higher spin theories. In each even d we identify two such theories with vanishing a anomaly: a theory of all integer spins, and a theory of all even spins coupled to a complex conformal scalar. We also discuss an interacting UV fixed point in d=5 obtained from the free scalar theory via an irrelevant double-trace quartic interaction. This interacting large N theory is dual to the Vasiliev theory in AdS6 where the bulk scalar is quantized with the alternate boundary condition. All Science Journal Classification (ASJC) codes • Nuclear and High Energy Physics • Physics and Astronomy (miscellaneous) Dive into the research topics of 'Higher spin AdSd+1/CFTd at one loop'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/higher-spin-adsd1cftd-at-one-loop","timestamp":"2024-11-06T12:47:26Z","content_type":"text/html","content_length":"53518","record_id":"<urn:uuid:e14d97a2-3b91-427f-88f7-1e673b53fb52>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00440.warc.gz"}
Chapter 4: Multiple Sequence Alignments, Molecular Evolution, and Phylogenetics 4.1 Multiple Sequence Alignment A multiple sequence alignment is an alignment of more than 2 sequences. It turns out that this makes the problem of alignment much more complicated, and much more computationally expensive. Dynamic programming algorithm such as Smith-Waterman can be extended to higher dimensions, but at a significant computing cost. Therefore, numerous methods have been developed to make this task faster. 4.1.1 MSA Methods There have been numerous methods developed for computing MSAs do make them more computationally feasible. Dynamic Programming Despite the computational cost of MSA by Dynamic Programming, there have been approaches to compute multiple sequence alignments using these approaches [24,25]. The programs MSA [25] and MULTALIN [26] use dynamic programming. This process takes [latex]O(L^N)[/latex] computations for aligning [latex]N[/latex] sequences of length [latex]L[/latex]. The Carrillo-Lipman Algorithm uses pairwise alignments to constrain the search space. By only considering regions of the multi-sequence alignment space that are within a score threshold for each pair of sequences, the [latex]L^N[/latex] search space can be reduced. Progressive alignments Progressive alignments begin by performing pairwise alignments, by aligning each pair of sequences. Then it combines each pair and integrates them into a multiple sequence alignment. The different methods differ in their strategy to combine them into an overall multiple sequence alignment. Most of these methods are “greedy”, in that they combine the most similar pairs first, and proceed by fitting the less similar pairs into the MSA. Some programs that could be considered progressive alignment include T-Coffee [27], ClustalW and its variants [28], and PSAlign [29]. Iterative Alignment Iterative Alignment is another approach that improves upon the progressive alignment because it starts with a progressive alignment and then iterates to incrementally improve the alignment with each iteration. Some programs that could be considered iterative alignment include CHAOS/DIALIGN [30], and MUSCLE [31]. Full Genome Alignments Specialized multiple sequence alignment approaches have been developed for aligning complete genomes, to overcome the challenges associated with aligning such long sequences. Some programs that align full genomes include MLAGAN (using LAGAN) [32], MULTIZ (using BLASTZ) [33], LASTZ [34], MUSCLE [31], and MUMmer [35, 36]. 4.1.2 MSA File Formats There are several file formats that are specifically designed for multiple sequence alignment. These approaches can differ in their readability by a human, or are designed for storing large sequence Multi-FASTA Format Probably the simplest multiple sequence alignment format is the Multi-FASTA format (MFA), which is essentially like a FASTA file, such that each sequence provides the alignment sequence (with gaps) for a given species. The deflines can in some cases only contain information about the species, and the file name, for example, could contain information about what sequence is being described by the file. For short sequences the mfa can be human readable, but for very long sequences it can become difficult to read. Here is an example [latex]\texttt{.mfa}[/latex] file that shows the alignment of a small (28aa) Drosophila melanogaster peptide called Sarcolamban (isoform C) with its best hits to [latex]\texttt{nr}[/latex]. The Clustal format was developed for the program [latex]\texttt{clustal}[/latex] [37], but has been widely used by many other programs [28, 38]. This file format is intended to be fairly human readable in that it expresses only a fixed length of the alignment in each section, or block. Here is what the Clustal format looks like for the same Sarcolamban example: CLUSTAL W (1.83) multiple sequence alignment D.melanogaster ------------------------------------------------------------ D.sechellia ------------------------------------------------------------ D.busckii ------------------------------------------------------------ D.melanogaster ----------------------MSEARNLFTTFGILAILLFFLYLIYA------------ D.sechellia ----------------------MSEARNLFTTFGILAILLFFLYLIYAPAAKSESIKMNE D.pseudoobscura EQCPNKKYPPKQPTTTTTKPIKMNEARSLFTTFLILAFLLFLLYAFYEA----------- D.busckii ----------------------MNEAKSLVTTFLILAFLLFLLYAFYEA----------- *.**:.*.*** ***:***:** :* D.melanogaster ---------VL--------------- D.sechellia AKSLFTTFLILAFLLFLLYAFYEAAF D.pseudoobscura ---------AF--------------- D.busckii ---------AF--------------- The clustal format has the following requirements, which can make it difficult to create one manually. First, the first line in the file must start with the words “[latex]\texttt{CLUSTAL W}[/latex]” or “[latex]\texttt{CLUSTALW}[/latex]“. Other information in the first line is ignored, but can contain information about the version of [latex]\texttt{CLUSTAL W}[/latex] that was used to create it. Next, there must be one or more empty lines before the actual sequence data begins. The rest of the file consists of one or more blocks of sequence data. Each block consists of one line for each sequence in the alignment. Each line consists of the sequence name, defline, or identifier, some amount white space, then up to 60 sequence symbols such as characters or gaps. Optionally, the line can be followed by white space followed by a cumulative count of residues for the sequences. The amount of white space between the identifier and the sequences is usually chosen so that the sequence data is aligned within the sequence block. After the sequence lines, there can be a line showing the degree of conservation for the columns of the alignment in this block. Finally, all this can be followed by some amount of empty lines. The Multiple Alignment Format (MAF) can be a useful format for storing multiple sequence alignment information. It is often used to store full-genome alignments at the UCSC Genome Bioinformatics site. The file begins with a header beginning with [latex]\texttt{##maf}[/latex] and information about the version and scoring system. The rest of the file consists of alignment blocks. Alignment blocks start with a line that begins with the letter [latex]\texttt{a}[/latex] and a score for the alignment block. Each subsequent line begins with either an [latex]\texttt{s}[/latex], a [latex]\ texttt{i}[/latex], or an [latex]\texttt{e}[/latex] indicating what kind of line it is. The lines beginning with [latex]\texttt{s}[/latex] contain sequence information. Lines that begin with [latex]\ texttt{i}[/latex] typically follow each [latex]\texttt{s}[/latex]-line, and contain information about what is occurring before and after the sequences in this alignment block for the species considered in the line. Lines beginning with [latex]\texttt{e}[/latex] contain information about empty parts of the alignment block, for species that do not have sequences aligning to this block. For example, the following is a portion of the alignment of the Human Genome (GRCh38/hg38) [latex]\texttt{chr22}[/latex] with 99 vertebrates. ##maf version=1 scoring=roast.v3.3 a score=49441.000000 s hg38.chr22 10514742 28 + 50818468 acagaatggattattggaacagaataga s panTro4.chrUn_GL393523 96163 28 + 405060 agacaatggattagtggaacagaagaga i panTro4.chrUn_GL393523 C 0 C 0 s ponAbe2.chrUn 66608224 28 - 72422247 aaagaatggattagtggaacagaataga i ponAbe2.chrUn C 0 C 0 s nomLeu3.chr6 67506008 28 - 121039945 acagaatagattagtggaacagaataga i nomLeu3.chr6 C 0 C 0 s rheMac3.chr7 24251349 14 + 170124641 --------------tggaacagaataga i rheMac3.chr7 C 0 C 0 s macFas5.chr7 24018429 14 + 171882078 --------------tggaacagaataga i macFas5.chr7 C 0 C 0 s chlSab2.chr26 21952261 14 - 58131712 --------------tggaacagaataga i chlSab2.chr26 C 0 C 0 s calJac3.chr10 24187336 28 + 132174527 acagaatagaccagtggatcagaataga i calJac3.chr10 C 0 C 0 s saiBol1.JH378136 10582894 28 - 21366645 acataatagactagtggatcagaataga i saiBol1.JH378136 C 0 C 0 s eptFus1.JH977629 13032669 12 + 23049436 ----------------gaacaaagcaga i eptFus1.JH977629 C 0 C 0 e odoRosDiv1.KB229735 169922 2861 + 556676 I e felCat8.chrB3 91175386 3552 - 148068395 I e otoGar3.GL873530 132194 0 + 36342412 C e speTri2.JH393281 9424515 97 + 41493964 I e myoLuc2.GL429790 1333875 0 - 11218282 C e myoDav1.KB110799 133834 0 + 1195772 C e pteAle1.KB031042 11269154 1770 - 35143243 I e musFur1.GL896926 13230044 2877 + 15480060 I e canFam3.chr30 13413941 3281 + 40214260 I e cerSim1.JH767728 28819459 183 + 61284144 I e equCab2.chr1 43185635 316 - 185838109 I e orcOrc1.KB316861 20719851 245 - 22150888 I e camFer1.KB017752 865624 507 + 1978457 I The [latex]\texttt{s}[/latex] lines contain 5 fields after the [latex]\texttt{s}[/latex] at the beginning of the line. First, the source of the column usually consists of a genome assembly version, and chromosome name separated by a dot “.”. Next is the start position of the sequence in that assembly/chromosome. This is followed by the size of the sequence from the species, which may of course vary from species to species. The next field is a strand, with “+” or “-“, indicating what strand from the species’ chromosome the sequence was taken from. The next field is the size of the source, which is typically the length of the chromosome in basepairs from which the sequence was extracted. Lastly, the sequence itself is included in the alignment block. 4.2 Phylogenetic Trees A phylogenetic tree is a representation of the evolutionary history of a character or sequence. Branching points on the tree typically represent gene duplication events or speciation events. We try to infer the evolutionary history of a sequence by computing an optimal phylogenetic tree that is consistent with the extant sequences or species that we observe. 4.2.1 Representing a Phylogenetic Tree A phylogenetic tree is often a “binary tree” where each branch point goes from one to two branches. The junction points where the branching takes place are called “internal nodes”. One way of representing a tree is with nested parentheses corresponding to branching. Consider the following example where the two characters [latex]\texttt{A}[/latex] and [latex]\texttt{B}[/latex] are grouped together, and the characters [latex]\texttt{C}[/latex] and [latex]\texttt{D}[/latex] are grouped. The semi-colon at the end is needed to make this tree in proper “newick” tree format. One of the fastest ways to draw a tree on the command line is the “ASCII tree”, which we can draw by using the function [latex]\texttt{Phylo.draw_ascii()}[/latex]. To use this, we’ll need to save our tree to a text file that we can read in. We could read the tree as just text into the python terminal (creating a string), but that would require loading an additional module [latex]\texttt{cStringIO}[/latex] to use the function [latex]\texttt{StringIO}[/latex]. Therefore, it might be just as easy to save it to a text file called [latex]\texttt{tree.txt}[/latex] that we can read in. Putting this together, we can draw the tree with the following commands: >>> from Bio import Phylo >>> tree = Phylo.read("tree.txt","newick") >>> Phylo.draw_ascii(tree) ___________________________________ A | |___________________________________ B | ___________________________________ C |___________________________________ D This particular tree has all the characters at the same level, and does not include any distance or “branch length” information. Using real biological sequences, we can compute the distances along each brach to get a more informative tree. For example, we can download 18S rRNA sequences from NCBI Nucleotide. Using clustalw, we can compute a multiple sequence alignment, and produce a phylogenetic tree. In this case, the command $ clustalw -infile=18S_rRNA.fa -type=DNA -outfile=18S_rRNA.aln will produce the output file [latex]\texttt{18S_rRNA.dnd}[/latex], which is a tree in newick tree format. The file contains the following information You’ll note, this is the same format as the simple example above, but rather than a simple label for each character/sequence, there is a label and a numerical value, corresponding to the brach length, separated by a colon. In addition, each of the parentheses are followed by a colon and a numerical value. In each case, the value of the branch length corresponds to the substitutions per site required to change one sequence to another, a common unit of distance used in phylogenetic trees. This value also corresponds to the length of the branch when drawing the tree. The command to draw a tree image is simply [latex]\texttt{Phylo.draw}[/latex], which will allow the user to save the image. >>> from Bio import Phylo >>> tree = Phylo.read('18S_rRNA.dnd','newick') >>> Phylo.draw(tree) The resulting image can be seen in Figure 4.1, and visually demonstrates the branch lengths corresponding to the distance between individual sequences. The x-axis in the representation corresponds to this distance, but the y-axis only separates taxa, and the distance along the y-axis does not add to the evolutionary distance between sequences. Figure 4.1: A phylogenetic tree computed with Clustal for 18S rRNA sequences for Drosophila melanogaster, Homo sapiens, Mus musculus, and Gallus gallus. 4.2.2 Pairwise Distances Phylogenetic trees are often computed as binary trees with branch lengths that optimally match the pair-wise distances between species. In order to compute a phylogenetic tree, we need a way of defining this distance. One strategy developed by Feng and Doolittle is to compute a distance from the alignment scores computed from pair-wise alignments. The distance is defined in terms of an “effective”, or normalized, alignment score for each pair of species. This distance is defined as [latex]D_{ij} = - \ln S_{eff}(i,j)[/latex] so that pairs of sequences [latex](i,j)[/latex] that have high scores will have a small distance between them. The effective score [latex]S_{eff}(i,j)[/latex] is defined as [latex]S_{eff}(i,j) = \frac{S_{real}(i,j) - S_{rand}(i,j)}{S_{iden}(i,j) - S_{rand}(i,j)} \times 100[/latex] Where in this expression, [latex]S_{real}(i,j)[/latex] is the observed pairwise similarity between sequences from species [latex]i[/latex] and [latex]j[/latex]. The value [latex]S_{iden}(i,j)[/latex] is the average of the two scores when you align species [latex]i[/latex] and [latex]j[/latex] to themselves, which represents the score corresponding to aligning “identical” sequences, the maximum possible score one could get. [latex]S_{rand}(i,j)[/latex] is the average pairwise similarity between randomized, or shuffled, versions of the sequences from species [latex]i[/latex] and [latex]j[/ latex]. After this normalization, the score [latex]S_{eff}(i,j)[/latex] ranges from 0 to 100. 4.3 Models of mutations Evolution is a multi-faceted process. There are many forces involved in molecular evolution. The process of mutation is a major force in evolution. Mutation can happen when mistakes are made in DNA replication, just because the DNA replication machinery isn’t 100% perfect. Other sources of mutation are exposure to radiation, such as UV radiation, certain chemicals can induce mutations, and viruses can induce mutations of the DNA (as well as insert genetic material). Recombination is a genetic exchange between chromosomes or regions within a chromosome. During meiosis, genes and genomic DNA are shuffled between parent chromosomes. Genetic drift is a stochastic process of changing allele frequencies over time due to random sampling of organisms. Finally, natural selection is the process where differences in phenotype can affect survival and reproduction rates of different individuals. 4.3.1 Genetic Drift Because genetic drift is a stochastic process, it can be modeled as a “rate”. The rate of nucleotide substitutions [latex]r[/latex] can be expressed as [latex]r = \frac{\mu}{2T}[/latex] where [latex]\mu[/latex] is the substitutions per site across the genome, and [latex]T[/latex] is the time of divergence of the two (extant) species to their common ancestor. The factor of two can be understood by the fact that it takes a total of [latex]2T[/latex] to go from [latex]x_1[/latex] to [latex]x_2[/latex], stopping at [latex]x_a[/latex] along the way. The mutations that occur over time separating [latex]x_1[/latex] and [latex]x_2[/latex] can be viewed as distributed over a time [latex]2T[/latex]. Figure 4.2: For two extant species [latex]x_1[/latex] and [latex]x_2[/latex] diverged for a time [latex]T[/latex] from a common ancestor [latex]x_a[/latex], the mutation rate can be expressed as a time [latex]2T[/latex] separating [latex]x_1[/latex] and [latex]x_2[/latex]. In most organisms, the rate is observed to be about [latex]10^{-9}[/latex] to [latex]10^{-8}[/latex] mutations per generation. Some viruses have higher mutation rates [latex]10^{-6}[/latex] mutations per generation. The generation times of different species can also affect the nucleotide substitution rate [latex]r[/latex]. Organisms with shorter generation times have more opportunities for meiosis per unit time. When mutation rates are evaluated within a gene, positional dependence in nucleotide evolution is observed. Because of the degeneracy in the genetic code, the third position of most codons has a higher substitution rate. Some regions of proteins are conserved domains, hence the corresponding regions of the gene have lower mutation rates compared to other parts of the gene. Other genes such as immunoglobulins have very high mutation rates and are considered to be “hypervariable”. Noncoding RNAs have functional constraints to preserver hairpins, and may have sequence evolution that preserves base pairing through compensatory changes on the paired nucleotide. The result is that many noncoding RNAs such as tRNAs have very conserved structure, but vary at the sequence level. 4.3.2 Substitution Models The branches of phylogenetic trees can often represent the the expected number of substitutions per site. That is, the distance along the branches of the phylogenetic tree from the ancestor to the extant species correspond to the expected number of substitutions per site in the time it takes to evolve from the ancestor to the extant species. A substitution model describes the process of substitution from one set of characters to another through mutation. These models are often neutral, in the sense that selection is not considered, and the characters mutate in an unconstrained way. Furthermore, these models are typically considered to be independent from position to position. Substitution models typically are described by a rate matrix [latex]Q[/latex] with terms [latex]Q_{ab}[/latex] that describe mutating from character [latex]a[/latex] to [latex]b[/latex] for terms where [latex]a \ne b[/latex]. The diagonal terms of the matrix are defined so that the sum of the rows are zero, so that [latex]Q_{aa} = -\sum_{b \ne a} Q_{ab}[/latex] In general, the matrix [latex]Q[/latex] can be defined as: [latex]Q = \begin{pmatrix} * & Q_{AC} & Q_{AG} & Q_{AT} \\ Q_{CA} & * & Q_{CG} & Q_{CT} \\ Q_{GA} & Q_{GC} & * & Q_{GT} \\ Q_{TA} & Q_{TC} & Q_{TG} & * \end{pmatrix}[/latex] where the diagonal terms are defined such that it is consistent with Equation 4.1. The rate matrix is associated with a probability matrix [latex]P(t)[/latex], which describes the probability of observing the mutation from [latex]a[/latex] to [latex]b[/latex] in a time [latex]t[/latex] by the terms [latex]P_{ab}(t)[/latex]. We want these probabilities to be multiplicative, meaning that [latex]P(t_1)P(t_2) = P(t_1+t_2)[/latex]. The mutations associated with amounts of time [latex]t_1[/latex] and [latex]t_2[/latex] applied successively can be understood as the same as the mutations associated with [latex]t_1 + t_2[/latex]. Furthermore, the derivative of the equation can be expressed as [latex]P'(t) = P(t)Q[/latex] The solution to this equation is the exponential function. The rate matrix itself can be exponentiated to compute the probability of a particular mutation in an amount of time [latex]t[/latex], which can be computed using the Taylor series for the exponential function. [latex]P(t) = e^{Qt} = \sum_{n=0}^{\infty} Q^n \frac{t^n}{n!}[/latex] Each such model also assumes an equilibrium frequencies [latex]\pi[/latex], which describe the probability of each nucleotide after the system has reached equilibrium. 4.3.3 Jukes-Cantor 1969 (JC69) The simplest substitution model was proposed by Jukes and Cantor (JC69), and describes equal rates of evolution between all nucleotides [39]. The JC69 model defines a constant mutation rate [latex]\ mu[/latex], and equilibrium frequencies such that [latex]\pi_A = \pi_C = \pi_G = \pi_T = \frac{1}{4}[/latex]. The equilibrium frequencies describe the frequencies of each nucleotide that result after the system has evolved under this model for a “very long time”. The rate matrix for the Jukes-Cantor model is then given by: [latex]{Q = \begin{pmatrix} -\frac{3}{4}\mu & \frac{\mu}{4} & \frac{\mu}{4} & \frac{\mu}{4} \\ \frac{\mu}{4} & -\frac{3}{4}\mu & \frac{\mu}{4} & \frac{\mu}{4} \\ \frac{\mu}{4} & \frac{\mu}{4} & -\ frac{3}{4}\mu & \frac{\mu}{4} \\ \frac{\mu}{4} & \frac{\mu}{4} & \frac{\mu}{4} & -\frac{3}{4}\mu \end{pmatrix}}[/latex] It can be shown that the full expression for computing [latex]P(t) = e^{Qt}[/latex] is: [latex]P(t) = \begin{pmatrix} \frac{1}{4}(1+3e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) \\ \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4} (1+3e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) \\ \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1+3e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) \\ \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1+3e^{- \mu t}) \end{pmatrix}00[/latex] Therefore, we can solve this system to get the probabilities [latex]P_{ab}(t)[/latex], which can be expressed as [latex]\label{JKSimple} \begin{aligned} P_{ab}(t) = \begin{cases} \frac{1}{4} (1 + 3 e^{-\mu t}) & \mbox{if } a = b\\ \frac{1}{4} (1 - e^{-\mu t }) & \mbox{if } a \ne b\\ \end{cases} \end{aligned}[/ Finally, we note that the sum of the terms of a row of the matrix [latex]Q[/latex] that correspond to mutation changes give us the expected value of the distance [latex]\hat{d}[/latex] in substitutions per site. For the Jukes-Cantor model, this corresponds to [latex]\hat{d} = \frac{3}{4} \mu t[/latex]. Substituting this into Equation 4.7 for the terms involving change ([latex]a \ne b [/latex]), gives [latex]p = \frac{1}{4} (1 - e^{-\frac{4}{3}\hat{d}})[/latex], which can be solved for [latex]\hat{d}[/latex] to give: [latex]\hat{d} = -\frac{3}{4} \ln (1 - \frac{4}{3} p)[/latex]} This formula is often called the Jukes-Cantor distance formula, and it gives a way to relate the proportion of sites that differ [latex]p[/latex] to the evolutionary distance [latex]\hat{d}[/latex], which stands for the expected number of substitutions in the time [latex]t[/latex] for a mutation rate [latex]\mu[/latex]. This formula corrects for the fact that the proportion of sites that differ, [latex]p[/latex], does not take into account sites that mutate, and then mutate back to the original character. 4.3.4 Kimura 1980 model (K80) The Jukes-Cantor model considers all mutations to be equally likely. The Kimura model (K80) considers the fact that transversions, mutations involving purine to pyrimidines or vice versa, are less likely than transitions, which are from purines to purines or pyrimidines to pyrimidines [40]. Therefore, this model has two parameters: a rate [latex]\alpha[/latex] for transitions, and a rate [latex]\beta[/latex] for transversions. [latex]Q = \begin{pmatrix} -(2\beta+\alpha) & \beta & \alpha & \beta \\ \beta & -(2\beta+\alpha) & \beta & \alpha \\ \alpha & \beta & -(2\beta+\alpha) & \beta \\ \beta & \alpha & \beta & -(2\beta+\ alpha) \end{pmatrix}[/latex] Applying a similar derivation as was done for the JC69 model, we get the following distance formula for K80: [latex]{d = -\frac{1}{2} \ln(1 - 2p - q) - \frac{1}{4} \ln(1 - 2q)}[/latex] where [latex]p[/latex] is the proportion of sites that show a transition, and [latex]q[/latex] is the proportion of sites that show transversions. 4.3.5 Felsenstein 1981 model (F81) The Felsenstein model essentially makes the assumption that the rate of mutation to a given nucleotide has a specific value equal to its equilibrium frequency [latex]\pi_b[/latex], but these value vary from nucleotide to nucleotide [41]. The rate matrix is then defined as: [latex]Q = \begin{pmatrix} * & \pi_{C} & \pi_{G} & \pi_{T} \\ \pi_{A} & * & \pi_{G} & \pi_{T} \\ \pi_{A} & \pi_{C} & * & \pi_{T} \\ \pi_{A} & \pi_{C} & \pi_{G} & * \end{pmatrix}[/latex] 4.3.6 The Hasegawa, Kishino and Yano model (HKY85) The Hasegawa, Kishino and Yano model takes the K80 and F81 models a step further and distinguishes between transversions and transitions [42]. In this expression, the transitions are weighted by an additional term [latex]\kappa[/latex]. [latex]Q = \begin{pmatrix} * & \pi_{C} & \kappa \pi_{G} & \pi_{T} \\ \pi_{A} & * & \pi_{G} & \kappa \pi_{T} \\ \kappa \pi_{A} & \pi_{C} & * & \pi_{T} \\ \pi_{A} & \kappa \pi_{C} & \pi_{G} & * \end 4.3.7 Generalized Time-Reversible Model This model has 6 rate parameters, and 4 frequencies hence, 9 free parameters (frequencies sum to 1) [43]. This method is the most detailed, but also requires the most number of parameters to 4.3.8 Building Phylogenetic Trees Now that we have a probabilistic framework with which to describe phylogenetic distances, we need some methods to build a tree from a set of pair-wise distances. Here are two basic approaches to building Phylogenetic Trees. Unweighted Pair Group Method with Arithmetics Mean (UPGMA) Algorithm UPGMA is a phylogenetic tree building algorithm that uses a type of hierarchical clustering [44]. This algorithm builds a rooted tree by creating internal nodes for each pair of taxa (or internal nodes), starting with the most similar and proceeding to the least similar. This approach starts with a distance matrix [latex]d_{ij}[/latex] for each taxa [latex]i[/latex] and [latex]j[/latex]. When branches are built connecting [latex]i[/latex] and [latex]j[/latex], an internal node [latex]k[/latex] is created, which corresponds to a cluster [latex]C_k[/latex] containing [latex]i[/latex] and [latex]j[/latex]. Distances are updated such that the distance between a cluster (internal node) and a leaf node is the average distance between all members of the cluster and the leaf node. Similarly, the distance between clusters is the average distance between members of the cluster. Neighbor Joining Algorithm One of the issues with UPGMA is the fact that it is a greedy algorithm, and joins the closest taxa first. There are tree structures where this fails. To get around this, the Neighbor joining algorithm normalizes the distances and computes a set of new distances that avoid this issue [45]. Using the original distance matrix [latex]d_{i,j}[/latex], a new distance matrix [latex]D_{i,j}[/ latex] is computed using the following formula: [latex]D_{i,j} = d_{i,j} - \frac{1}{n-2} \left( \sum_{k=1}^n d_{i,k} + \sum_{k=1}^n d_{j,k} \right)[/latex] Begin with a star tree, and a matrix of pair-wise distances d(i,j) between each pair of sequences/taxa, an updated distance matrix that normalizes compared to all distances is created. The updated distances [latex]D_{i,j}[/latex] are computed. The closest taxa using this distance are identified, and an internal node is created such that the distance along the branch connecting these two nearest taxa is the distance [latex]D_{i,j}[/latex]. This process s repeated using the new (internal) node as a taxa and the distances are updated. 4.3.9 Evaluating the Quality of a Phylogenetic Tree Maximum Parsimony Maximum Parsimony makes the assumption that the best phylogenetic tree is that with the shortest branch lengths possible, which corresponds to the fewest mutations to explain the observable characters [46 47]. This method begins by identifying the phylogenetically informative sites. These sites would have to have a character present (no gaps) for all taxa under consideration, and not be the same character for all taxa. Then trees are constructed, and characters (or sets of characters) are defined for each internal node all the way up to the root. Then each tree has a cost defined, corresponding to the total number of mutations that need to be assumed to explain that tree. The tree with the shortest total branch length is typically chosen. The length of the tree is defined as the sum of the lengths of each individual character (or column of the alignment) [latex]L_j[/latex], and possibly using a weight [latex]w_j[/latex] for different characters (often just [latex]1[/latex] for all columns, but could weight certain positions more). [latex]{L = \sum_{j=1}^C w_j L_j}[/latex] In this expression, the length of a character [latex]L_j[/latex] can be computed as the total number of mutations needed to explain this distribution of characters given the topology of the tree. Maximum Likelihood Maximum likelihood is an approach that computes a likelihood for a tree, using a probabilistic model for each tree [48 49]. The probabilistic model is applied to each branch of the tree, such as the Jukes-Cantor model, and the likelihood of a tree is the product of the probabilities of each branch. Generally speaking, we seek to find the tree [latex]\mathcal{T}[/latex] that has the greatest likelihood given the [latex]D[/latex]. [latex]P(\mathcal{T}|D) = \frac{P(D|\mathcal{T})P(\mathcal{T})}{P(D)}[/latex] These probabilities could be computed from an evolutionary model, such as Jukes-Cantor model. For example, if we have observed characters [latex]_1[/latex] and [latex]x_2[/latex], and an unknown ancestral character [latex]x_a[/latex], and lengths of the branches from [latex]x_a[/latex] to be [latex]t_1[/latex] and [latex]t_2[/latex] respectively, we could compute the likelihood of this simple tree as [latex]P(x_1,x_2|\mathcal{T},t_1,t_2) = \sum_a p_a P_{a,x_1}(t_1) P_{a,x_2}(t_2)[/latex] where we are summing over all possible ancestral characters [latex]a[/latex], and computing the probability of mutating along the branches using a probabilistic model. This probabilistic model can be the same terms of the probability matrices discussed in Equation 4.6. 4.3.10 Tree Searching In addition to computing the quality of a specific tree, we also want to find ways of searching the space of trees. Because the space of all possible trees is so large, we can’t exhaustively enumerate them all in a practical amount of time. Therefore, we need to sample different trees stochastically. Two such methods are Nearest Neighbor Interchange (NNI) and Subtree Pruning and Re-grafting (SPR). Nearest Neighbor Interchange Figure 4.3: Nearest Neighbor Interchange operates by swapping out the locations of subgraphs within a tree. For a tree with four sub-trees, there are only 2 possible interchanges. For a tree toplogy that contains four or more taxa, the Nearest Neighbor Interchange (NNI) exchanges subtrees within the larger tree. It can be shown that for a tree with four subtrees there are only 3 distinct ways to exchange subtrees to create a new tree, including the original tree. Therefore, each such application produces two new trees that are different from the input tree [50 51]. Subtree Pruning and Regrafting Figure 4.4: A depiction of SPR Tree searching method. A. One subtree of the larger tree structure is selected. B. An attachment point is selected. C. The subtree is then “grafted” or attached to the attachment point. The Subtree Pruning and Regrafting (SPR) method takes a subtree from a larger tree, removes it and reattaches it to another part of the tree [52 53]. 4.4 Lab 5: Phylogenetics In this lab, we will learn some basic commands for computing phylogenetic trees, and some python commands that will draw the tree. Let’s create a new directory called [latex]\texttt{Lab5}[/latex] to work in. 4.4.1 Download Sequences from NCBI Download sequences in FASTA format for a gene of interest from NCBI nucleotide (https://www.ncbi.nlm.nih.gov/nuccore). Build a FASTA file containing each sequence and a defline. Shorten the defline for each species to make it easier to read later. As an example, I downloaded 18S rRNA for Human (Homo sapiens), Mouse (Mus musculus), Rat (Rattus norvegicus), Frog (Xenopus laevis), Chicken (Gallus gallus), Fly (Drosophila melanogaster) and Arabidopsis (Arabidopsis thalian). You can download these 18S rRNA sequences with the following command: $ wget http://hendrixlab.cgrb.oregonstate.edu/teaching/18S_rRNAs.fasta 4.4.2 Create a Multiple Sequence Alignment and Phylogenetic Tree with Clustal First, use [latex]\texttt{clustalw2}[/latex] to align the sequences, and output a multiple sequence alignment and dendrogram file. $ clustalw2 -infile=18S_rRNAs.fasta -type=DNA -outfile=18S_rRNAs.aln The dendogram file, indicated by the “[latex]\texttt{.dnd}[/latex]” suffix, can be used to create an image of a phylogenetic tree using Biopython: First, enter the python terminal by typing “[latex]\ texttt{.python}[/latex]” and then the following: >>> import matplotlib as mpl >>> mpl.use('Agg') >>> import matplotlib.pyplot as pyplot >>> from Bio import Phylo >>> tree = Phylo.read('18S_rRNAs.dnd','newick') >>> Phylo.draw(tree) >>> pyplot.savefig("myTree.png") >>> quit() How does the resulting tree compare to what you expect given these species? 4.4.3 Create a Multiple Sequence Alignment and Phylogenetic Tree with phyML The program phyML provides much more flexibility in what sort of trees it can compute. To use it, we’ll need to convert our alignment file to phylip. We can do this with the Biopython module AlignIO >>> from Bio import AlignIO >>> alignment = AlignIO.parse("18S_rRNAs.aln","clustal") >>> AlignIO.write(alignment,open("18S_rRNAs.phy","w"),"phylip") >>> quit() You can run phyml in the simplest way by simply typing “phyml” and then typing your alignment file: $ phyml Enter the sequence file name > 18S_rRNAs.phy The program will give you a set of options, and you can optionally change them. To change the model, type “+” and then “M” to toggle through the models. Finally, hit “enter” to run the program. >>> import matplotlib as mpl >>> mpl.use('Agg') >>> import matplotlib.pyplot as pyplot >>> from Bio import Phylo >>> tree = Phylo.read('18S_rRNAs.phy_phyml_tree.txt','newick') >>> Phylo.draw(tree) >>> pyplot.savefig("myTreeML.png") >>> quit() How does the tree created from [latex]\texttt{clustalw2}[/latex] compare to the tree created using [latex]\texttt{phyml}[/latex]?
{"url":"https://open.oregonstate.education/appliedbioinformatics/chapter/chapter-4/","timestamp":"2024-11-03T13:08:42Z","content_type":"text/html","content_length":"118599","record_id":"<urn:uuid:9338fe5c-4255-495d-82f9-805201f57533>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00782.warc.gz"}
A method of arranging a capacitor array of a successive approximation register analog-to-digital converter in a successive approximation process, the method including: splitting a binary capacitor array into unit capacitors, then sorting, grouping, and rotating the original binary capacitive array involved in successive approximation conversion. Pursuant to 35 U.S.C.§ 119 and the Paris Convention Treaty, this application claims foreign priority to Chinese Patent Application No. 201711039954.3 filed Oct. 31, 2017, the contents of which and any intervening amendments thereto are incorporated herein by reference. Inquiries from the public to applicants or assignees concerning this document or the related applications should be directed to: Matthias Scholl P. C., Attn.: Dr. Matthias Scholl Esq., 245 First Street, 18th Floor, and Cambridge, Mass. 02142. BACKGROUND OF THE INVENTION Field of the Invention This disclosure relates to a successive approximation register analog-to-digital converters (SAR ADC), and more particularly to a method of arranging a capacitor array of a successive approximation register analog-to-digital converter in a successive approximation process. Description of the Related Art Smart sensors are devices that contain integrated transducers, signal conditioning modules, and processing modules. Smart sensors are applied in such fields as precision instruments, medical instruments, communication, radar, aerospace, electronic countermeasures, security screening systems, fault detection, and earthquake detection. In recent years, with the rapid development of smart sensors, research on embedded modules such as sensors, amplifiers, and analog-to-digital converters (ADCs) for smart sensors has drawn much attention. FIG. 1 shows a block diagram of a smart sensor node: the sensor detects a physical, chemical, or biological quantity, then small signal at the output of the sensor is amplified and filtered. Thereafter, an analog-to-digital converter (ADC) converts the analog sensing signal into digital codes. Since the ADC is an important block in a smart sensor node, optimizing the performance of the ADC, specifically optimizing the resolution of the ADC, is important for meeting the demands of multi-functional smart sensor nodes for low power consumption and small silicon area. The architectures of mainstream Nyquist-Rate ADCs include Flash ADC, successive approximation register (SAR) ADC, pipeline ADC, and Sigma-Delta ADC. Spurious-free dynamic range (SFDR), signal-to-noise and distortion ratio (SNDR), and signal-to-noise ratio (SNR) are dynamic parameters that evaluate the linearity of the ADC. Higher dynamic parameters mean higher linearity. Flash ADC can only be used for low resolution and high sampling rate application. Pipeline and Sigma-Delta ADCs are not appropriate for low power consumption design as they require using op-amps. SAR ADC uses a binary algorithm to convert the input analog signal into the output digital signal. As shown in FIG. 2, it consists of a sample-and-hold (S/H) stage, a digital-to-analog converter (DAC), a voltage comparator, and a successive approximation register. The high-resolution SAR ADC mainly adopts a combined capacitor-resistor network, as shown in FIG. 3. In the combined capacitor-resistor network, resistors and capacitors are both used. The most significant bits (MSBs) of DAC are formed by the binary capacitive array, while the least significant bits (LSBs) of DAC are formed by the resistor string. Therefore, the total capacitance of the combined capacitor-resistor structure becomes smaller than that of a binary structure, which effectively reduces the area of the capacitive array. In particular, the smaller the area, the faster the As shown in FIG. 3, the capacitor mismatch limits the performance of the converter. The calibration method can improve SFDR by swapping a MSB capacitor and the rest of the capacitors, but the performance improvement is limited. Traditional foreground calibration and background calibration techniques increase the chip area. The capacitor re-configuring technique can significantly improve the SFDR, however, additional 64 unit capacitors are needed, which greatly increases the chip area. In view of the above-described problems, it is an objective of the invention to a method of arranging a capacitor array of a successive approximation register analog-to-digital converter that can reduce unnecessary losses such as design complexity, chip area, power consumption, and speed of a SAR ADC. Another objective of the invention is to reduce the capacitor mismatch of a SAR ADC without using additional capacitors. To achieve the above objective, according to one aspect of the invention, there is provided a method of arranging a capacitor array of a successive approximation register analog-to-digital converter in a successive approximation process, the method comprising: □ Step 1: separating a 6-bit binary capacitive DAC of 64 unit capacitors (64C) into 64 independent capacitors; □ Step 2: sorting the 64 independent capacitors from highest to lowest by capacitance, then using 64 digital codes from C1 to C64 to numbering the 64 independent capacitors sorted from highest to lowest by capacitance, and then recording the 64 digital codes and corresponding capacitance values of the 64 independent capacitors in a register; □ Step 3: dividing the 64 independent capacitors into four groups as follows: ☆ Group 1 comprising C1, C64, C3, C62, C5, C60, C7, C58, C9, C56, C11, C54, C13, C52, C15, and C50; ☆ Group 2 comprising C17, C48, C19, C46, C21, C44, C23, C42, C25, C40, C27, C38, C29, C36, C31, and C34; ☆ Group 3 comprising C32, C33, C30, C35, C28, C37, C26, C39, C24, C41, C22, C43, C20, C45, C18, and C47; and ☆ Group 4 comprising C16, C49, C14, C51, C12, C53, C10, C55, C8, C57, C6, C59, C4, C61, C2, and C63; and □ Step 4: in a successive approximation conversion process, selecting two groups to constitute a largest capacitor 32C, select one group from the remaining two groups to constitute a second largest capacitor 16C, selecting first 8 capacitors in the last group to constitute a capacitor 8C, selecting the next 9th-12th capacitors in the last group to constitute a capacitor 4C, selecting the 13th and 14th capacitors in the last group to constitute a capacitor 2C, and selecting the last two capacitors to constitute two capacitors C and C, respectively. In a class of this embodiment, step 4 is carried out as follows: □ in a first successive approximation conversion process, Group 1 and Group 2 are selected to constitute the largest capacitor 32C, Group 3 is selected to constitute the second largest capacitor 16C, Group 4 is selected to constitute capacitors 8C, 4C, 2C, C, and C; □ in a second successive approximation conversion process, Group 4 and Group 1 are selected to replace the largest capacitor 32C, Group 2 is selected to replace the second largest capacitor 16 C, Group 3 is selected to replace capacitors 8C, 4C, 2C, C, and C; □ in a third successive approximation conversion process, Group 3 and Group 4 are selected to replace the largest capacitor 32C, Group 1 is selected to replace the second largest capacitor 16C, Group 2 is selected to replace capacitors 8C, 4C, 2C, C, and C; □ in a fourth successive approximation conversion process, Group 2 and Group 3 are selected to replace the largest capacitor 32C, Group 4 is selected to replace the second largest capacitor 16 C, Group 1 is selected to replace capacitors 8C, 4C, 2C, C, and C; □ in a fifth successive approximation conversion process, Group 3 and Group 4 are selected to replace the largest capacitor 32C, Group 1 is selected to replace the second largest capacitor 16C, Group 2 is selected to replace capacitors 8C, 4C, 2C, C, and C; and in a sixth successive approximation conversion process, Group 4 and Group 1 are selected to replace the largest capacitor 32C, Group 2 is selected to replace the second largest capacitor 16C, Group 3 is selected to replace capacitors 8C, 4C, 2C, C, and C. In a class of this embodiment, the above-mentioned six successive approximation conversion processes are repeated in loops. Advantages of the method according to embodiments of the disclosure are summarized as follows: by sorting, combining, and adjusting the capacitive array, the capacitor mismatch can be reduced. Compared with the conventional methods, the ADC mismatch error is reduced and the accuracy is improved. Compared with the conventional technology, this invention does not require additional FIG. 1 is basic architectural components of a traditional smart sensor node; FIG. 2 is a typical architecture of SAR ADC; FIG. 3 is a capacitor-resistor combined 14-bit SAR ADC architecture; FIG. 4A is a conventional binary capacitive array of the capacitor-resistor combined 14-bit SAR ADC architecture in FIG. 3; FIG. 4B is a scheme of splitting binary capacitive array into unary architecture; FIG. 4C is a scheme of sorting 64 independent capacitors; FIG. 4D is a scheme of dividing the 64 independent capacitors into 4 groups; FIG. 5 is a scheme of rotating the 4 groups of capacitors in successive approximation conversion process; FIG. 6 is a chart of comparison of probability density function between capacitors with and without sorting; FIG. 7 is 500 Monte Carlo SFDR simulation results for 14-bit SAR ADC in the conventional method, the capacitor re-configuring method, and the method of the invention with σ[u]=0.1% (left) and σ[u]= 0.2% (right); and FIG. 8 is 500 Monte Carlo SNDR simulation results for 14-bit SAR ADC in the conventional method, the capacitor re-configuring method, and the method of the invention with σ[u]=0.1% (left) and σ[u]= 0.2% (right). The capacitor optimization method of the invention is for enhancing the linearity of capacitor-resistor combined SAR ADC for smart sensor applications. The capacitor optimization method of the invention includes splitting a binary capacitor array into unit capacitors and then sorting and grouping, and finally, according to certain rule, rotating the original binary capacitive array involved in successive approximation conversion. The method of the invention applied to a traditional 14-bit resistor-capacitor successive approximation ADC as shown in FIG. 3 which consists of a 6-bit capacitor DAC and an 8-bit resistor string DAC is described in detail below. The optimization method proposed in this invention is shown in FIGS. 4A-4B. First, the binary capacitive array is split into unary capacitive array. After powered on, all the unit capacitors are measured and sorted, and digital codes of all capacitors are obtained. Then, grouping is performed according to the rules shown in step 3, and finally, the capacitors are replaced according to the rules during each successive approximation conversion according to the strategy shown in FIG. 5. For example, for the i-th input Vin (i), the largest capacitor 32C in the original binary capacitive array is replaced by the first group and the second group, the second largest capacitor 16C is replaced by the third group, the remaining 8C, 4C, 2C, C, and C capacitors are replaced by the capacitors in the fourth group. For the next input Vin (i+1), a different rule is taken to replace the capacitors. A complete period includes six rounds in total. The reason why the linearity can be improved by this invention lies in the following two aspects. Firstly, according to statistical principles, the standard deviation of the distribution function after sorting is reduced so that the equivalent capacitor mismatch error is reduced, according to the distribution function shown in FIG. 6. In FIG. 6, the distribution function of the sorted capacitor is narrower and higher relative to the unit capacitance (black curve in FIG. 6), which means smaller standard deviation and smaller capacitor mismatch error. Secondly, the capacitor mismatch error accumulates continuously in a traditional SAR ADC. In order to eliminate the accumulation, the capacitive array optimization technique proposed in this invention sorts the unit capacitors firstly, then divides the unit capacitors into 4 groups, alternates the 4 groups of capacitors in sequence according to six different arrangements. This invention does not need to introduce an extra operational amplifier to conduct noise shaping, does not require any calibration algorithms, and does not require extra capacitors. The accumulated mismatch error is quantified by the variance σ[INL]^2 of INL: $σ INL 2 = n ( N T - n ) N T 3 σ u 2 , ( 1 )$ in which N[T ]is the total number of capacitors; for N-bit SAR ADC, N[T]=2^N, n is the number of used components; for the traditional capacitive array, when n is equal to N[T]/2, there is the formula as follows: $σ INL , max = σ u 2 N + 2 , ( 2 )$ which demonstrates that the maximum error of the traditional SAR ADC occurs at the midpoint, and the maximum integral nonlinearity error is $σ INL , max = σ u 2 N + 2 .$ According to this invention, four groups of capacitors rotates in turn, it is assumed that the digital code n[1], n[2], n[3 ], and n[4 ]represent conversion results for the first, the second, the third, and the fourth conversion, respectively, and the variance for the four times of conversion is: σ[n1234]^2=(n[1]+n[2]n[3]+n[4])σ[u]^2. When n[1234]=n[1]+n[2]+n[3]+n[4], the INL variance is calculated as follows: $σ INL _ group 2 = ( n 1234 ) [ N T - ( n 1234 ) ] 16 N T 3 σ u 2 , ( 3 )$ in which N[T ]is the total number of capacitors, σ[u ]is the mismatch error of unit capacitor. When n[1234]=N[T]/2, σ[INL][groud][,max ]is calculated as follows: $σ INL group , max = σ u 2 N + 6 . ( 4 ) .$ Comparing (2) and (4), it demonstrates that the rotation of the four groups of capacitors reduces the integrated nonlinear error to one quarter of that of the traditional SAR ADC, and as well known, the reduction of integrated nonlinear error corresponds to increase of SFDR. In conclusion, grouping and sorting results in a reduction of the capacitor equivalent mismatch error. The capacitor replacement rule avoids the error accumulation, thus improving the linearity. Therefore, this invention combines the advantages of two methods to achieve a substantial increase in linearity. FIG. 7 and FIG. 8 show the SFDR and SNDR results based on the conventional method, the capacitor re-configuring method proposed in Fan, and the capacitive array optimization technique proposed in this invention for 500 Monte Carlo runs based on 14-bit resistor-capacitor successive approximation ADC. In the simulation, the unit capacitance is 100 f, and the unit capacitance mismatch error σ[u ]is 0.1% and 0.2%, respectively. Table 1 summarizes the performance comparison among the traditional method, the capacitor re-configuring method proposed in Fan, and the capacitive array optimization technique of this invention. For capacitor re-configuring technique, extra 64 capacitors were added to the capacitive array, and the difference between the maximum value and minimum value of SFDR in the set of values obtained by the Monte Carlo simulation reaches 26.6 dB with σ[u]=0.2%, the capacitive array optimization technique of this invention makes the SFDR more concentrated in the center, and reduces the difference between maximum value and minimum value of SFDR to only 6 dB with σ[u]=0.2%, which means more stable performance enhancement. It is worth to mention that the concentration becomes more obvious for the SNDR and SNR results. In a word, the capacitive array optimization technique of this invention achieves excellent performance enhancement without extra capacitors and without sacrificing the sampling rate of conventional SAR ADC. TABLE 1 Comparison of SFDR between conventional method, the re-configuring techniques of Fan, and the method of the invention in 14-bit ADC Averaged SFDR (dB) Averaged SNDR (dB) σ = 0.1% σ = 0.2% σ = 0.1% σ = 0.2% Conventional 85 79.4 78.7 73.7 14-bit SAR ADC Re-configuring 102.1 96.9 85.5 84.9 technique in 14-bit SAR ADC of Fan The method of 102.9 96.6 84.8 82.3 the invention in 14-bit SAR ADC Compared with the conventional resistor-capacitor SAR ADC, this invention improves the average SFDR by about 17.2 dB and the average SNDR by about 8.6 dB with σ[u]=0.2%. Although the capacitor re-configuring proposed in Fan can also improve SFDR, but an additional 64 extra capacitors are needed. This invention avoids the addition of 64 extra unit capacitors, further reduces the power consumption and silicon area. In this invention, a novel capacitor array optimization scheme is proposed based on conventional capacitor-resistor SAR ADC. By sorting, grouping, and rotating the capacitive array, the mismatch errors of the ADC can be counteracted. Compared with the traditional noise shaping technology or the Least-Mean-Square (LMS) calibration algorithm, the control logic of this invention is much easier, and the hardware cost is much smaller, reducing the power consumption and the area at the same time. Compared with the capacitor re-configuring method of Fan, this invention avoids the introduction of additional capacitors but achieves the dynamic parameters nearly similar to the capacitor re-configuring method. Unless otherwise indicated, the numerical ranges involved in the invention include the end values. While particular embodiments of the invention have been shown and described, it will be obvious to those skilled in the art that changes and modifications may be made without departing from the invention in its broader aspects, and therefore, the aim in the appended claims is to cover all such changes and modifications as fall within the true spirit and scope of the invention. 2. A method of arranging a capacitor array of a successive approximation register analog-to-digital converter, the method comprising: wherein 4) comprises six successive approximation conversion processes as follows: 1) separating a 6-bit binary capacitive digital-to-analog converter (DAC) of 64 unit capacitors into 64 independent capacitors; 2) sorting the 64 independent capacitors from highest to lowest by capacitance, then using 64 digital codes from C1 to C64 to number the 64 independent capacitors sorted from highest to lowest by capacitance, and then recording the 64 digital codes and corresponding capacitance values of the 64 independent capacitors in a register; 3) dividing the 64 independent capacitors into four groups as follows: i) Group 1 comprising C1, C64, C3, C62, C5, C60, C7, C58, C9, C56, C11, C54, C13, C52, C15, and C50; ii) Group 2 comprising C17, C48, C19, C46, C21, C44, C23, C42, C25, C40, C27, C38, C29, C36, C31, and C34; iii) Group 3 comprising C32, C33, C30, C35, C28, C37, C26, C39, C24, C41, C22, C43, C20, C45, C18, and C47; and iv) Group 4 comprising C16, C49, C14, C51, C12, C53, C10, C55, C8, C57, C6, C59, C4, C61, C2, and C63; and 4) in a successive approximation conversion process, selecting two of the four groups to constitute a largest capacitor 32C, selecting a first of the remaining two of the four groups to constitute a second largest capacitor 16C, selecting 1th-8th capacitors in a second of the remaining two of the four groups to constitute a capacitor 8C, selecting 9th-12th capacitors in the second of the remaining two of the four groups to constitute a capacitor 4C, selecting 13th and 14th capacitors in the second of the remaining two of the four groups to constitute a capacitor 2C, and selecting 15th and 16th capacitors in the second of the remaining two of the four groups to constitute two capacitors C and C, respectively; in a first successive approximation conversion process, Group 1 and Group 2 are selected to constitute the largest capacitor 32C, Group 3 is selected to constitute the second largest capacitor 16C, Group 4 is selected to constitute the capacitors 8C, 4C, 2C, C, and C; in a second successive approximation conversion process, Group 4 and Group 1 are selected to replace the largest capacitor 32C, Group 2 is selected to replace the second largest capacitor 16C, Group 3 is selected to replace the capacitors 8C, 4C, 2C, C, and C; in a third successive approximation conversion process, Group 3 and Group 4 are selected to replace the largest capacitor 32C, Group 1 is selected to replace the second largest capacitor 16C, Group 2 is selected to replace the capacitors 8C, 4C, 2C, C, and C; in a fourth successive approximation conversion process, Group 2 and Group 3 are selected to replace the largest capacitor 32C, Group 4 is selected to replace the second largest capacitor 16C, Group 1 is selected to replace the capacitors 8C, 4C, 2C, C, and C; in a fifth successive approximation conversion process, Group 3 and Group 4 are selected to replace the largest capacitor 32C, Group 1 is selected to replace the second largest capacitor 16C, Group 2 is selected to replace the capacitors 8C, 4C, 2C, C, and C; and in a sixth successive approximation conversion process, Group 4 and Group 1 are selected to replace the largest capacitor 32C, Group 2 is selected to replace the second largest capacitor 16C, Group 3 is selected to replace the capacitors 8C, 4C, 2C, C, and C; and the six successive approximation conversion processes are repeated in loops. Patent History Publication number : 20190131998 : Aug 28, 2018 Publication Date : May 2, 2019 Hua FAN Jingxuan YANG Quanyuan FENG Dagang LI Daqian HU Yuanjun CEN Hadi HEIDARI Franco MALOBERTI Jingtao LI Huaying SU Application Number : 16/114,300 International Classification: H03M 1/46 (20060101);
{"url":"https://patents.justia.com/patent/20190131998","timestamp":"2024-11-03T11:02:20Z","content_type":"text/html","content_length":"84284","record_id":"<urn:uuid:aa1d831c-b9fc-4272-84ca-421b9d68071e>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00521.warc.gz"}
According to a survey of American households, the probability that the residents own two cars if... According to a survey of American households, the probability that the residents own two cars if... According to a survey of American households, the probability that the residents own two cars if annual household income is over $75,000 is 73%. Of the households surveyed, 60% had incomes over $75,000 and 72% had two cars. The probability that the residents of a household own two cars and have an income over $75,000 a year is..... round 3 decimal places Suppose that Event A : residents own two cars Event B : had incomes over $75,000 Now , from given P(A) = 72% = 0.72 P(B) = 60% = 0.60 Also given , P(A | B) = 73% = 0.73 But from the definition of the conditional probability , P(A | B) = P(A Answer : The probability that the residents of a household own two cars and have an income over $75,000 a year is 0.438
{"url":"https://justaaa.com/statistics-and-probability/381658-according-to-a-survey-of-american-households-the","timestamp":"2024-11-01T19:04:20Z","content_type":"text/html","content_length":"42624","record_id":"<urn:uuid:198ad7fc-115d-45a1-ba0a-41a9fb3085ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00400.warc.gz"}
BJu Tijdschriften □ 1. Introduction In contemporary society, digitalisation is proceeding at the speed of light. In this rapidly changing environment, it is important to keep working and reflecting on the use of digitalisation in (academic) education. After all, digitalisation can be a means to improve the quality of education.^1xIn this way, see the letter of the Dutch Minister of Education, Culture and Science on digitalisation of 16 October 2018. Enhanced by the financial impulse by the Executive Board to the Educate-it program,^2xwww.uu.nl/nieuws/ educate-it-krijgt-extra-miljoenen-voor-verdere-digitalisering-onderwijs (last accessed on 19 May 2019). Utrecht University, therefore, focuses on blended learning as an integral part of its education.^3xUtrecht University, Strategic Plan 2016-2020, available at www.uu.nl (last accessed on 19 May 2019). This phenomenon is not limited to Utrecht University, but can also be seen elsewhere in the Netherlands and abroad. Blended learning can be defined as “a formal education program in which a student learns at least in part through online delivery of content and instruction with some element of student control over time, place, path, and/or pace and at least in part at a supervised brick-and-mortar location away from home.”^4xStaker & Horn 2012, p. 3. Critical on the term blended learning are Oliver & Trigwell 2005, who defend subverting the term and using it to describe an approach that focuses on the learner and its learning (instead of on the teacher). These authors suggest that an in-depth analysis of the variation in experience of learning of students in a blended learning context is needed in the future. Blended learning has received quite some attention in the Netherlands over the past years in the area of study of law. De Vries, Director of Education of the Department of Law at the Faculty of Law, Economics and Governance (LEG) of Utrecht University, recently emphasized the added value of digital resources in (legal) education, at least as long as they serve the study of law. “Blended learning, as a structural application in legal education, allows students to master the law at a higher level. In this way, students can get into a study rhythm that allows them to connect the scarce contact moments with each other”, according to De Vries.^5xDe Vries 2019 (our translation). However, one can question whether blended learning actually contributes to the students learning process and the quality of education?^6xFurthermore, Schutgens 2019 stated that old-fashioned live-teaching and, above all, having offline students have their advantages. First, a discussion of the educational context, i.e. the importance of the focus on student’ learning and the effects and possibilities of blended learning will follow. Second, the teaching background and context as well as the pilot including the Scalable Learning environment will be described. Third, the methods will be discussed, to be followed by the results and evaluation. Finally, in the final discussion (‘conclusion’) some recommendations will be provided as to where to find (further) possibilities to stimulate students towards a deep approach to their □ 2. Educational Context: Blended Learning and the Effects on Student’ Learning 2.1. Focus on Student’ Learning All education, whether online or offline, should be aimed at supporting the learning process of students. As Biggs and Tang state, the focus should be on what students do, not primarily on what teachers do; what teachers do should serve student’ learning.^7xBiggs & Tang 2011. In educational literature, a common distinction is made between deep and surface approaches to learning.^8xBiggs 1987b; Biggs & Tang 2011, esp. p. 24 et seqq. In a surface approach to learning the students’ intention will be to get the task done with minimum effort in order to meet the course requirements, i.e. by routinely only memorising facts and procedures (rote learning). On the other side of the continuum is the deep approach to learning, meaning that a student is actively engaged in the search for underlying meanings, i.e. by relating ideas to previous knowledge and experience. Deep learning is a way of learning aimed at understanding the meaning behind (legal) texts, critically examining new facts and ideas, tying them into existing cognitive structures and discovering links between ideas. A deep learning approach is of key importance for the engagement of students with their subject material, and results in an improved quality of learning outcomes.^9xPostareff, Parpala & Lindblom-Ylänne 2015, p. 316 with references. There are various encouraging and discouraging factors that can stimulate the adoption of deep approaches to learning, which may be situated in the context of a learning environment, in students’ perceptions of that context, and in individual characteristics of the students themselves (e.g. study skills, level of interest etc.). 2.2. Blended Learning, Student’ Preparation and Face-to-Face Education The use of information (and communication) technology (I(C)T), combined with (various types of) in-class learning activities can support the student’s (higher levels of) learning.^10xMcCray 2000. Furthermore, according to Yildirim 2017, p. 86, blended learning offers ‘various educational options to learners, minimizes the inequality of opportunity, provides individualized solutions pertinent to learning differences and eliminates hindrances related to space and time.’ One way in which blended learning has the potential to do so is when it is implemented such that students get the opportunity to prepare themselves for class by being enrolled in an online learning environment. These online learning environments provide students with the opportunity to, independent of time, place or pace, prepare themselves for class in a learning setting that was designed such that it optimizes learning. Other than paper-based materials, a learning environment that uses IT can implement a number of design principles that have been shown to facilitate learning: content matter can be presented in various forms (e.g., text, video, audio),^ 11xMayer & Moreno 1998. hypertext make it easy to navigate through the information,^12xJacobson & Spiro 1995. (and immediate feedback can be added to formative assessment.^13xDihoff et al. 2004; Epstein et al. 2002 Students’ preparation by using an IT based online learning environment, i.e. a blended learning environment, could support (deep) learning. In literature, however, success of e-learning results are often considered from an institutional or technological point of view, or are based on the question whether e-learning initiatives are continued or not. According to us, this should not be the decisive criterion. Our point of view is, as already mentioned, that e-learning initiatives should aim to improve the quality of teaching and learning. One could also say that face-to-face education can become more focused on deep learning when using an online learning environment that encourages students to prepare themselves for class. A recent study on flipping the classroom shows in average a small positive effect on learning outcomes. Van Alten and others call flipping the classroom a promising pedagogical approach when appropriately designed.^14xVan Asten et al. 2019. In this article we will describe our findings as to the question whether such positive effect has been found in our situation in which we used Scalable Learning, which is a way of flipping the classroom.^15xSee on this topic, e.g., Brame 2013; - It is not the first course in the law curriculum at Utrecht University in which this way of flipping the classroom has been used, a flipping the classroom concept was used already in the first course of the law curriculum (‘Foundations of Law’). 2.3. The Role of the Teacher The form of blended learning just described provides the possibility to improve face-to-face classroom interaction among students and between students and teachers. The latter interaction is very important because it is one of the factors that encourage or discourage the adoption of deep(er) approaches to learning – i.e. the approach students take to the learning materials is influenced by the role the teacher takes upon him/her.^16xSee, e.g., Campbell et al. 2001. If teachers practise an approach that is more student oriented and focus more on changing their concepts, and are more involved, students are more inclined to go into deep approaches to learning.^17xBaeten et al. 2010. If teachers instead focus (only) on transmitting knowledge, students are less inclined to go into deep approaches to learning. This fits into the two ways of teaching distinguished by Trigwell and others: one that focuses on transmitting knowledge and one that focuses on students and on achieving a change in their conceptions. The first way of teaching more likely leads to a surface approach, the second way of teaching to a deep approach.^18x Trigwell, Prosser & Waterhouse 1999. Because the approach students adopt is not a personality trait, but is also related to their perception of the task to be accomplished,^19xSee already Marton & Säljo 1976. teachers’ conceptions of teaching and their beliefs as to the purpose of legal education will have consequences for their teaching approach and on the perceptions, students have of their tasks.^20xChesterman 2016, p. 77. In our study we will also consider the effect of teaching approaches on the adoption of (surface and/or deep) approaches to learning and see whether approaches focussed on information transmission rather lead to a surface approach (and lower quality of learning outcomes), and an approach focussed on changing conceptions rather lead to a deep approach to learning (and higher quality of learning outcomes). □ 3. A New Online Learning Environment: Scalable Learning 3.1. Introduction: Background of the Teaching Environment The course ‘Introduction to Private law - Property Law’ is a first-year course in the law curriculum. It is the second course of their curriculum in private law. Approximately 700 students take part in this course every year. The course lasts ten weeks, including an exam week. Each education week consists of a lecture of two hours and two small tutorials of two hours each. The lectures take place in large groups (approx. 700 students), tutorials in smaller groups (approx. 25 students). Lectures are used for transferring knowledge, although interactive elements are nowadays increasingly incorporated in lectures. In smaller groups active learning, and active participation of students, is crucial. Tutorials are given by various teachers, who all have their own style and methods. Students prepare by studying the study materials (literature and case law) on the most important property law doctrines and the pertaining conceptual framework, i.e. doctrines such as possession, ownership, transfer of ownership, prescription, etc. They also have to prepare the assignments carefully, elaborating their solutions in writing. Self-study assignments at knowledge level as part of an e-learning environment have to be completed by students, who have to respond to questions to which they received (automatically generated, pre-programmed) online feedback. Students have both an intermediate and a final exam, consisting of open-ended questions, solutions to (hypothetical) cases, and discussion of (theoretical) The Scalable Learning environment^21xIn the academic year 2018-2019 a project, financed by the Utrecht Education Incentive Fund (Faculty LEG), made it possible to create interactive materials (interactive knowledge clips in Scalable Learning) and to experiment with blended learning. consisted of nine knowledge clips aimed to impart basic knowledge in an appealing way to students, at their own pace. The clips lasted between 2,5 and 7 minutes. The environment is intended to activate ‘prior knowledge’. We have added basic questions to the knowledge clips on some of the most important topics of the course to allow students to test whether they have understood the material and to alert them to important concepts. Different types of questions were used in the e-learning environment (as part of the knowledge clips) in order to contribute to a more varied range of forms of and a more challenging education. The topics were as follows: the system of property law, possession, looking up important manuals in the digital library, delivery of movable property, causal system, commingling and specification, accession and specification, accessoriness and droit de suite and Roman right of retention. During the tutorials, assignments are used to practice applying the acquired knowledge and to discuss more difficult matters. The focus is on the skills to solve cases, analyse judgments and to analyse and apply legislative provisions. Then, during tutorials the teacher could try to make students gain a deeper insight through in-depth questions. In addition, we used Learning Analytics, i.e. the ‘measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.’ Learning Analytics in Scalable Learning made it possible for teachers to see students’ weekly preparation completion, to see lecture and quiz completion percentages, to monitor when students pause a clip, when they get confused, when and what questions they have and ultimately, when they return to an earlier moment in the clip. Learning Analytics made it possible for teachers to register data and/or scores, view them prior to teaching, and incorporate results into the teaching material, allowing to address issues that students perceive as difficult. In the module review, the teacher could view the answers to the questions and use these in class review. We used Learning Analytics for two purposes: 1. To track students’ activity (as a minimal preparation for in-class education); 2. As a starting point for our in-class discussion. □ 4. Methods 4.1. Starting Point: (Previous) Evaluations After the course a (formalised) discussion between a group of students who took the course, organised, monitored and chaired by student members of the Education Committee, and the course coordinator, took place that showed that our knowledge clips were generally well received. As the clips were made by students, and they were left with some freedom as to how to give form to the knowledge clips, its quality somewhat varied. The official evaluation confirmed this picture. Student satisfaction is however not sufficient for the conclusion that a learning environment contributed to student’ learning. In this respect, another use of Scalable Learning at another Department of the LEG Faculty, namely Governance, at Utrecht University, has been evaluated by a focus group, an interview and a questionnaire at midterm and at the end of the course (N = 78 resp. 53). In this course too, students were generally positive about the knowledge clips. They said it helped them to acquire a better understanding of the material. According to teachers, students had a better basic knowledge when entering the classroom, and so were better prepared. Teachers said that more in-depth questions were asked during their lessons. The question remains whether the use of Scalable Learning automatically leads to a deeper level of learning or not, and what (crucial) role teachers play (the teacher might be a mediating variable). 4.2. Research Question and Expected Outcomes The remainder of this article will present the outcomes of a quantitative study on the (possible) change in surface vs. deep learning of law students in their first-year ‘Introduction to Private Law’ course as a result of the introduction of the new blended learning environment.^22xIn conducting this study the approach of Bishop-Clark and Dietz-Uhler 2012 has been followed. This study has a similar but slightly different structure, in which a different course has been studied, by Van Dongen & Meijerman 2019. The purpose of this study is to measure the learning effect of a blended course design, which focusses on acquiring basic knowledge and keeping the learning continuum of students, and of teaching approaches during face-to-face meetings, on the preparation, learning approaches and learning outcomes of first-year law students. The research questions were as follows: What are the effects of the new (blended) course design on the preparation, the learning approaches and the learning outcomes of first-year law students? What effect does the teachers’ approach on teaching have on students’ learning? We expected that students would be more involved and better prepared using the online learning environment, considering the modern and digital way it was presented and the semi-obligatory nature of using it, and that it would indirectly lead to deeper learning as more time could be devoted to promoting such approach during class by the teacher, and also to improved learning outcomes. Therefore, we expected the approach taken by the teachers to be of importance. It has to be noted that no explicit assignment was given to the teachers as to which approach to teaching they had to take (although, of course, more experienced teachers are (often) familiar with different kind of teaching, and of the difference between surface and deep approaches to 4.3. Data Collection Before the actual research underlying this article was conducted, approval for the research design was obtained from the Faculty’s Ethical Review Committee of LEG.^23xThis (optional) review by the Faculty’s Ethical Review Committee has been conducted in order to safeguard the ethical quality of the research. The Ethics Committee if LEG aims to stimulate and facilitate ethical conduct by the faculty with regard to the rights, safety and well-being of the participants in scientific research, i.e. of the students in ours study. As a result of their suggestions, slight adjustments in the text of the questionnaire, and the introductory text accompanying the questionnaire were made. Concerning the preparation data was collected in week six of the course. The preparation of students as to basic knowledge has been measured by looking at the completion of the Scalable Learning environment (i.e., did they not complete, partially complete or totally complete the learning environment). For three reasons data was only collected in week six. First, uncertainties about the use of this digital environment are expected to be balanced by that time. Second, the topic of the question (i.e., question 4A) that students had to answer during the exam corresponded with the topic presented during week six, so it seems best fit for comparing preparation with actual results. Third, as Learning Analytics had to be entered manually for each student of each tutorial, it was impossible to measure more weeks. In the recording, the entire student cohort was noted except for students who took this course before but had failed the exams, i.e. approx. 600 students. The primarily aim was to check the correctness of the premise, namely whether there is more basic knowledge, and the secondary aim was to find out whether there is a correlation with the mark on question 4A of the exam. Concerning the learning approaches two questionnaires on learning approaches were set out. During the first week of the Introduction of Private Law the approx. 600 students filled in their first questionnaires during the tutorials, and during the last week of the course, the second questionnaires were filled in during the tutorials in 26 student groups. The questionnaires were filled in prior to the start of the course, and again during the last tutorial, i.e. the second-last teaching moment - after all, experience shows that students often skip the last teaching moment. For the questionnaires, two different versions of the so-called Study Process Questionnaire (R-SPQ-2F).^24xBiggs 1987a; Biggs, Kember & Leung 2001, p. 133 et seqq. See also the Dutch version, received from the Centre of Expertise for Higher Education, University of Antwerp, see Stes, De Maeyer & Van Petegem 2013. The questionnaire was adapted and made applicable for Introduction to Private Law by aligning it to the content accordingly. The questionnaires contained 20 questions on students perception of their study process that could be answered using a 5-point Likert scale (1= totally not agree; 5= totally agree), i.e. measuring their deep and surface approaches to learning (each with a motive and strategy subscale). Three extra questions on the use of knowledge clips, digital environments and/or Scalable Learning were added. With these questionnaires we intend to obtain a more profound understanding of how students learn (with the method of learning at the start of the course as starting point), and whether this changes during the course Introduction to Private Law. Concerning the teachers approach to learning between the final week and one week after the course the 10 teachers involved handed in their teachers’ questionnaires. These questionnaires were meant to gain insight into the activities and the role workgroup teachers take on. This questionnaire was a modified version of the Approaches to Teaching Inventory (ATI),^25xStes, De Maeyer & Van Petegem 2008. consisting of 22 questions with a 5-point Likert scale (ranging from ‘this item was only rarely/never true of me’ through to ‘(almost) always true of me’). Examples of questions asked are: ‘During the seminars I thought it was important to present as much as possible factual knowledge to students so that they know what they have to learn for the course Introduction to Private Law’, ‘My aim was to help students develop new insights.’ The ATI contains two scales, representing the two (fundamentally different) approaches to teaching, namely information transmission/teacher-focused approach, and conceptual change/student-focused approach (see para. 4.4). The two scales contain two sub-scales: intention and strategy sub-scales.^26 xSee Trigwell, Prosser & Waterhouse 1999, p. 62. See also Prosser & Trigwell 2006. Two additional questions were added on the use of Scalable Learning and its Learning Analytics by teachers for their teaching.^27xThe Dutch questionnaires mentioned in the previous footnotes as well as the questionnaires made in the context of a previous study (Van Dongen & Meijerman 2019) were the basis for the current questionnaires. The questionnaires were compared with the original English version, adapted to the specific field Introduction to Private Law and supplemented with a few questions. Some colleagues proofread and then we finalised the questionnaires. Concerning the learning outcomes the results of the final exam were collected and the results of question 4A were collected separately since it tested a higher level of learning (make an analysis of a statement about property law)with regard to the subject discussed in the Scalable Learning environment in the sixth week. After collecting all this information, the results from the ATI were collected and entered into an Excel sheet. The modified version of R-SPQ-2F was edited by the Test and Evaluation Service of Education and Learning, FSW, who also imported the results in an Excel sheet. The exam results were inputted in Excel, checked, corrected and supplemented where needed. At the end, all Excel files were merged into one and subsequently imported in SPSS. □ 5. Results 5.1. Validation and General Results In the pre-course survey there were 502 responses, and in the post-course survey 452 responses. 612 students (of the 696 enrolled in the course) participated in the final exam. The scale reliability, in other words, the homogeneity of the items of the two questionnaires, was calculated by means of Cronbach’s Alpha (α). Cronbach’s α is a measure of internal consistency, i.e. how closely a set of items is related as a group. The Cronbach’s α for the 10-item part on the deep approach of the Revised version of the Study Process Questionnaire was .710.^28xThe scale is from 0 to 1, from totally not to perfectly homogeneous. The Cronbach’s α for the 10-item part on the surface approach of the same questionnaire was .745.^29xThese measurements were taken from the pre-course surveys. The Cronbach’s α of the post-course surveys was .777 (deep approach) and .767 (surface approach). The Cronbach’s α for the 22 items of the Approaches to Teaching Inventory had to be measured for the two distinct constructs: the test of one of them, information transmission/teacher-focused approach to teaching (ITTF), received a values of α of .888, while the conceptual change/student-focused approach to teaching (CCSF), received a value of α of .529. These Cronbach’s α values indicate that both the Study Process Questionnaire and the Approaches to Teaching Inventory was as far as it concerns the constructs ITTF, are reliable and are valid to use in this context. The CCSF is Cronbach’s alpha is insufficiently reliable, and therefore, conclusions about the influence and/or role of the latter must be taken with caution. We asked students to compare the usefulness of the knowledge clips in the Scalable Learning environment with knowledge slips used in previous courses in the first-year of the curriculum. They had to answer these questions on a 5-point Likert scale (ranging from ‘this item was only rarely/never true of me’ through to ‘(almost) always true of me’). Students believed that (in comparison) our knowledge clips helped them less in their preparation for the face-to-face meetings and the exams (mean difference between our course and previous courses (M) = -1.15, standard deviation (SD) = 1.49, n = 381). Furthermore, in average they indicate a similar contribution to their comprehension of learning material, in comparison with previous courses (mean difference (M) = -0.053, SD = 1.39, n = 385). Finally, compared to previous courses in which online environments were used, in average their motivation to study the material slightly decreased (mean difference (M) = -0.133, SD = 1.32, n = 385). 5.2 Changes in Students’ Learning Approaches During the Course Based on the pre-course and post-course questionnaires, high scores on one approach to learning (deep or surface) are moderately negatively correlated with low scores on the other approach to learning (deep or surface). A numerical summary of the strength and direction of the relationship between two variables has been calculated by means of the Pearson correlation coefficient (r ).^30xNumber 1 means a perfect correlation, 0 means no correlation at all. The sign in front of the number (- or +) indicates whether there is a negative correlation (if one variable increases, the other decreases) or a positive correlation (if one variable goes up, so does the other). As to the (statistically) significant correlation, the deep approaches of students at the pre- and post-course measurement moments were positively correlated, Pearson’s r (370) = .545, p < .001; while the surface approach at the beginning and at the end of the course were (even more) positively correlated, Pearson’s r (369) = .637, p < .001. A positive relationship corresponds to an increasing relationship between the two variables. This shows that the deep and surface approaches are rather steady. Furthermore, between the pre-course deep approach and the pre-course surface approach exist a medium negative correlation, Pearson’s r (474)=-.350, p < .001;the same applies to the post-course deep and surface approaches, Pearson’s r (436)= -.411, p < .001. Keeping in mind that there was a considerable variance of possible approaches, a negative relationship corresponds to a decreasing relationship between the two variables.^31xThe correlations were significant at the 0.001 level (2-tailed); - In order to test the hypothesis that students who did (partly or fully) use and the students who did not use Scalable learning were associated with statistically different exam results, degree of self-regulated learning and (differences in) deep and surface approaches, an equal random sample of the first group was taken and compared to the second group by means of an independent samples t-test. Additionally, the assumption of homogeneity of variances was tested via Levene’s test. With regard to self-regulated learning (both at the beginning of the course, at the end of the course as well as the difference) equal variances can be assumed but no significant differences existed. As to the exam result on question 4A no equal variance could be assumed, and no significant differences existed between the two groups. Also with regard to differences between pre- and post-course measurements of deep and surface approaches no equal variance could be assumed, but no significant differences between two groups existed. How, on average, did the students gain in their approach during the course? Compared to the pre-course surveys, the post-course surveys did not show a significant increase or decrease of both surface and deep approaches to learning.^32xAlthough factor analysis showed five factors explaining more or less 52% of the variance, for this study we have chosen to stick to Biggs’s division of two factors. Starting from a fixed number of two in the factor analysis (Kaiser-Meyer-Olkin test), questions are arranged quite well, in accordance with the questions arranged by Biggs under the two approaches to learning. Nevertheless only 33% of the variance can be explained by the distinction between deep and surface approaches to learning (KMO of .836 showed that the size of number was very satisfying for the factor analysis). Apparently there are a lot of other factors present. The post-course approaches to learning were compared with the pre-course situation approaches to learning by using a one-sample paired t-test (i.e. a statistical method used to compare the mean difference between two sets of observations). There was no significant difference in the scores for the deep approach at the beginning (M = 3,12, SD = 0,50) and the deep approach at the end of the course (M = 3,10, SD = 0,54) conditions; t(369)=.910, p = .363). Neither was there a significant difference in the scores for the surface approach at the beginning (M = 2,50, SD = 0,59) and the surface approach at the end of the course (M = 2,50, SD = 0,59) conditions; t(368) = .186, p = .852. This outcome is remarkable, when comparing it with the previous study in which the deep approach results decreased.^33xSee Van Dongen & Meijerman 5.3 Differences in Groups and the Teacher No significant difference occurred as to the degree in which the Scalable Learning environment was or was not used by the various groups of students. As described in the last section, both deep and surface approaches remained at the same level throughout the course. However, when comparing individual teachers and looking into the differences of in- or decrease of deep approaches to learning, apparently two teachers acquired remarkably better results (teacher 2 and 7) in comparison with the other teachers (see table 1). Tabel 1 Difference between post- and pre- measurements in deep and surface approach (subdivided per teacher); amounts per group (n), mean ( ) with standard deviation ( n M SD 1 56 -0.0509 0.48929 2 50 0.2120 0.42852 3 26 0.0019 0.38613 4 42 -0.1298 0.58862 DA21 5 37 -0.0757 0.36999 6 47 -0.0489 0.58491 7 14 0.3143 0.51119 8 25 0.0920 0.51065 9 46 -0.1978 0.41925 10 27 -0.1315 0.46722 1 51 0.0000 0.47791 2 51 -0.1118 0.52034 3 30 -0.1200 0.40612 4 42 -0.0310 0.55281 SA21 5 36 0.0278 0.41601 6 48 -0.0208 0.49667 7 14 -0.3643 0.72495 8 25 0.0400 0.51962 9 46 0.1652 0.47992 10 26 0.2038 0.40815 With regard to the decrease of surface approach, also teacher 7 and in lesser degree teachers 2 and 3 stand out above the rest. It must be noted that the group of respondents (remark: not the group size of students) is by far the smallest for teacher number 7. This might have an influence on the outcome, although this cannot be ascertained. We also found that face-to-face teaching can make quite a difference. Based on the differences in deep and surface approaches during the course, the group was divided into three (unequal) groups. These groups were made based on the differences between post- and pre-measurements in deep and surface approach, for the ‘high achieving’ group of teachers: highest on difference in deep, lowest in change in surface; the ‘low achieving’ groups has the opposite characteristics; the middle group has results that fell in between. As we were interested in the general difference in increase and/or decrease in deep/surface approaches, an analysis of variance (ANOVA) was used, i.e. analysis where the means between groups is calculated and it is determined whether any of those means are statistically significantly different from each other. The ‘high achieving’ teachers taught groups in which the mean approach of students at the beginning of the course was lower (M = 2.97, SD = .426) compared to the other groups (M = 3.10, SD = .501 resp. M = 3.20, SD = .483). Of course, in such group an increase is more probable and to be expected.^34xNo significant difference in exam marks could be noted. A significant difference in the results in information transmission/teacher-focused approach to teaching (ITTF) and conceptual change/student-focused approach to teaching (CCSF)^35xA negative correlation was found between both approaches to teaching, Pearson’s r (639) = -.136, p = .001. exists between the three groups of teachers: strangely the highest result in information transmission/teacher-focused approach was found in the worst group, the second highest in the best group and the worst in the medium group of teachers. The groups of ‘high achieving’ and ‘low achieving’ teachers have a mean score though that is very close to each other (M = 3.24, SD = .111 vs. M = 3.14 SD = .668). In order to better understand what teachers did, and not only to base conclusion on the perception of teachers, a study of their actual behaviour is needed. Unfortunately, this was not possible in this study, but would be a fruitful addition for further studies. 5.4 Predictors of Deep Learning Approaches Our first part of the research question was what the effects of the new (blended) course design are on the preparation, the learning approaches and the learning outcomes of first-year law students. One conclusion we can draw based on our study is that only watching and/or completing the Scalable Learning environment in itself did not have any effect on the surface or deep approaches to learning (nor on the exam mark). Our next question concerned the effect of teaching approaches by teachers on the approaches to learning and/or the exam results: how are teacher approaches (information transmission teacher-focused (ITTF) or conceptual change student-focused (CCSF)) related to their deep/surface approaches (possibly in combination with the degree of preparation, based on their efforts in the Scalable Learning environment)? Unfortunately, in their answers to the questionnaire some questions were left unanswered by some teachers. Therefore, we replaced the missing values with the series’ mean values. The student’ approach to learning at the end of the course is the result of the pre-course level of deep learning and the influence of the teacher.^36xDA2 = c + β1 x DA1 + β2 x CCFA + β3 x ITTF. In this regression model C is the constant, β1, β2 and β3 are the regression coefficients. The influence of ITTF appeared not significant. Therefore, the regression analyses have been recalculated and the data without the (not significant) influence of ITTF are reported. The linear regression was calculated to predict the deep approach of students at the end of the course based on both the level of deep learning at the start of the course and the teaching approaches taken by teachers. A significant regression equation was found (F(2,367) = 81.734, p < .000) with an R² of .308). The predicted deep approach at the end of the course (DA2) is equal to 1.867 + (0.596 x DA1) – (0.175 x CCSF) . It appeared that the pre-course level of the deep approach was very dominating.^37xThis is in line with Van Dongen & Meijerman 2019, p. 562. The linear regression was calculated to predict the changes in deep approach of students that occurred during the course based on both the level of deep learning at the start of the course and the teaching approaches taken by teachers, A significant regression equation was found (F(2,367) = 42.940, p < .000) with an R² of .190). The predicted deep approach at the end of the course (DA2) is equal to 1867 – (0.404 x DA1) – (0.175 x CCSF).^38xOnly 3,6% of the variance in change of deep approaches can be explained by looking only to teachers’ approaches to teaching. The linear regression was calculated to predict the changes in deep approach of students during the course only based on the teaching approaches taken by teachers. A significant regression equation was found (F(2,367) = 6.889, p = .001) with an R² of .036). The predicted deep approach at the end of the course (DA2) is equal to 1.185 – (0.092 x ITTF) – (0.252 x CSSF). Interestingly, ITTF is significant here. Only 3,6% of the variance in change of deep approaches can be explained by looking only to teachers’ approaches to teaching. The linear regression was calculated to predict the changes in deep approach of students during the course only based on the teaching approaches taken by teachers. A significant regression equation was found (F(2,367) = 6.889, p = .001) with an R² of .036). The predicted deep approach at the end of the course (DA2) is equal to 1.185 – (0.092 x ITTF) – (0.252 x CSSF). Interestingly, ITTF is significant here.^39xAs the CCSF measure scored poor on its reliability not a lot of value can be given to the final part of this formula.^40xAnother strange result is the very weak but significant negative correlation existed between CSSF and SRL at the end of the course; Pearson’s r (450)= -0.104, p = .027. The linear regression was calculated to predict the surface approach of students at the end of the course based on both the level of surface learning at the start of the course and the teaching approaches taken by teachers, A significant regression equation was found (F(2,366) = 27.058, p < .000) with an R² of .418). The predicted surface approach at the end of the course (SA2) is equal to 0.148 + (0.649 x SA1) + (0.202 x CCSF). The linear regression was calculated to predict the changes in surface approach of students during the course based on both the level of surface learning at the start of the course and the teaching approaches taken by teachers. A significant regression equation was found (F(2,366) = 43.313, p < .000) with an R² of .191). The predicted surface approach at the end of the course (SA21) is equal to 0.148 – (0.351 x SA1) + (0.202 x CCSF). The linear regression was calculated to predict the changes in surface approach of students during the course only based on the teaching approaches taken by teachers. The level of ITTFA was not significant and therefore deleted from the model. Only 2,4% of the variance in change of surface approaches can be explained by looking only to concept changing approach of teachers. A significant regression equation was found (F(1,367) = 8.922, p = .003) with an R² of .024). The predicted deep approach at the end of the course (SA21) is equal to -.868 + (0.241 x CSSF). There was no significant correlation between the efforts in Scalable Learning and deep approach to learning at the end of the course. In their response to the question whether teachers used Learning analytics from Scalable learning for the construction/composition of their lessons, in average students gave a quite neutral answer (M = 3.0). In their response to the proposition that they tried to use Learning analytics from Scalable learning to connect their teaching with the questions of the students, teachers were also neutral but a little more positive (M = 3.3). In the final section it will argued that this, unfortunately, is a missed opportunity and some ideas for improvement will be presented. 5.5. Predictors of Exam Results^41xWhen ‘exam grades’ are mentioned here, both the final grade as well as their performance on the specific question about week 6 of the learning environment are aimed at. As there were no significant differences, no distinction is made in our discussion of the results here. Watching knowledge clips did not have a significant correlation with higher grades. Furthermore, no statistically relevant correlation (i.e. a mutual relationship) could be established between the exam results and the level of students’ surface or deep approaches to learning at the end of the course. Neither is there a significant correlation between self-regulated learning at the beginning or the end of the course and exam results. However, a significant (but only slight) negative correlation was found between the decrease or increase of surface approach over the course, and the decrease or increase of deep approach over the course. Because of this very weak significant negative correlation between the change in surface approach to learning during the course and the exam results, this points to the direction we hoped for: an increase in surface approach during the course is related to a lower mark at the exam, and a decrease of surface approach during the course to a higher mark. Looking at the exam results, the higher mean mark values of one teacher (‘teacher 7’) were reconfirmed. Although CSSF and DA21 a negatively correlated, Pearson’s r (370)= -0.156, p = .003 (-0.156) and CCSF and SA21 are positively correlated, Pearson’s r (369)=.154, p = .003,^42xBoth correlations are significant at the 0.01 level (2-tailed). no statistically significant correlation between teachers’ approaches and the exam results emerged. The absence of a statistically significant correlation between the teachers’ approaches to teaching and the exam results surprised, as our previous study indicated a positive correlation between the conceptual change student focused approach and exam results.^43xVan Dongen & Meijerman 2019, p. 562-563. An ANOVA test showed no significant difference in exam results between the groups or between teachers. Teachers’ approaches were not found to be a significant model to predict deep approaches to learning at the end of the course and the same applies to teachers’ approach as a predictor for the exam results. Of course we have to keep in mind that these conclusions are based on the perception teachers have of their own way of teaching. □ 6. Conclusions The purpose of this study was to measure the effect of a new, flipped, course design with integrated blended learning on learning approaches and learning outcomes of first-year law students in the area of private law (property law). Our point of view is that e-learning initiatives should aim at quality improvement of the teaching and learning experience. Therefore, the research question where we started with was: What are the effects of my new (blended) course design on the preparation, the learning approaches and the learning outcomes of first-year law students? What effect does the teachers’ approach on teaching have on students’ learning? We expected that students would be more involved and better prepared using the online learning environment, considering the modern and digital way it was presented and the semi-obligatory nature of using it (we did not expect a difference between honours and non-honours students), and that it would indirectly lead to higher/deeper learning as more time could be devoted to promoting such approach during class, and also to improved learning outcomes. Therefore, we expected the approach taken by the teachers to be of importance. In coming back to these issues, and answering the questions, three final observations have to be made for further improvement on the following issues: 1. Digital environment (course design) and students’ approaches to learning; 2. Connection between online- and offline activities (role of the teacher); 3. Alignment between exam and learning activities: 1. Digital environment. The added value of our environment for a better understanding of concept and a better preparation to class, was expected to be similar to that of other digital environments used at an earlier stage of the curriculum. It is remarkable that no significant increase or decrease regarding both surface and deep approaches to learning was measured. Furthermore, no significant difference as to the degree in which the Scalable Learning environment was or was not used by various groups of students was established. Clips and questions were made by students (under guidance of a teacher), allowing to take a next quality improvement step of our online learning environments and of the way these foster deep learning. Generally, according to literature, interaction and active learner engagement are important. In online environments learners require quality feedback to help them understand topics at a deeper level. Common practice in online learning environments includes reflective practice, learning-by-doing, active discussions and decision making.^44xCzerkawski 2014, p. 32, 35. Czerkawski argues that in order to foster deeper learning strong support systems, effective pedagogical methods and online community building activities are necessary. Furthermore, in online learning environments creative and meta-cognitive activities should be strongly emphasised.^45xDu, Yu & Olinzock 2011, p. 37. In our opinion, further reflection as to how these elements could be integrated in the course is needed. 2. Connection between online and offline activities (role of the teacher). When comparing individual teachers and studying the differences of in- or decrease between deep and surface approaches to learning, some teachers seem to acquire a remarkably better result in comparison with others. It so seems that face-to-face teaching makes a significant difference. As the mere supervision and/or completion of the Scalable Learning environment did not show any effect on the surface or deep approaches to learning, the added value for the increase of deep approaches to learning might be found in the feedback on offline activities during online activities (‘bridge the gap’) and possibly in the handout of assignments within the digital environment. Teachers could obtain (even) better profits from the value information they receive from Learning analytics. We have also found that teaching approaches (combined with students’ initial approach to learning) may explain part of the final approach to learning (although the initial deep approach results were quite dominant for the end level of deep 3. Alignment between exam and learning activities. In a previous Dutch study, it was stated that knowledge clips had a significant correlation with higher grades.^46xSee Steenman 2016. On the contrary, the outcome of the present study points at a different direction. This could be interpreted in the sense that blended learning has no value. We believe quite the opposite. If blended learning, in this case Scalable Learning, leads to a decrease of teachers’ time spent in class for the transfer of basic knowledge, time could be used more effectively (namely for more in-depth questions and/or more difficult cases). This choice therefore is efficient. Another aspect is the statement that assessment drives learning. At first sight it does not seem a good sign that regardless of the approach, taken it does not make a difference for the grade, i.e. for the degree in which the learning outcomes are fulfilled. Why actively engaging in an online environment if not related in any way to assessments and/or if there is no need for it or even of any use to the final assessment? Ideally, the information given online is needed for a fruitful development of deeper learning, necessary prior to final assessment. However, as the learning goals of the course under review mainly concern the lower orders of thinking (like recall of knowledge and application of knowledge), both approaches might be adequate to achieving the desired outcome. One remark made by a student in the margin of our questionnaire hits the spot: ‘the exams did not reach the scientific level of the in-depth articles that we have to read, and that is unfortunate. This does not motivate understanding and deepening of the materials, but motivate learning by rote [our translation]’. Nevertheless, if higher learning outcomes would be achievable, which we believe is possible in subsequent study years, deep approach to learning should be striven for. Therefore rote learning should be avoided, and exams should also aim at deeper levels of learning. Alexander, S. (2001). E-learning Developments and Experiences. Education + training, 43(4/5), 240-248. Van Asten, D.C.D., et al. (2019). Effects of Flipping the Classroom on Learning Outcomes and Satisfaction: A Meta-Analysis. Educational Research Review, 28, 1-18. Baeten, M. et al. (2010). Using Student-centred Learning Environments to Stimulate Deep Approaches to Learning: Factors Encouraging or Discouraging their Effectiveness. Educational Research Review, 5(3), 243-260. Biggs, J.B. (1987a). The Study Process Questionnaire (SPQ): User’s Manual, Melbourne: Australian Council for Educational Research. Biggs, J.B. (1987b). Student Approaches to Learning and Studying. Hawthorn, Victoria: Australian Council for Educational Research. Biggs, J., & Tang, C. (2011). Teaching for Quality Learning at University. What the Student Does (4th edn.), Maidenhead: Open University Press/McGraw Hill 2011. Biggs, J., Kember D., & Leung, D.Y. (2001). The Revised Two-Factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133–149. Bishop-Clark, C., & Dietz-Uhler, B. (2012). Engaging in the Scholarship of Teaching and Learning. Sterling, Virginia: Stylus Publishing. Brame, C.J. (2013). Flipping the classroom. Center for Teaching and Learning, Vanderbilt University. Retrieved from http://cft.vanderbilt.edu/guides-sub-pages/flipping-the-classroom/ (last accessed on 25 July 2019). Campbell, J., et al. (2001). Students’ Perceptions of Teaching and Learning: the Influence of Students’ Approaches to Learning and Teachers’ Approaches to Teaching. Teachers and Teaching: Theory and Practice, 7(2), 173-187. Chesterman, S. (2016). Chapter 5. Doctrine, Perspectives, and Skills for Global practice. In C. Gane & R. Hui Huang (Eds.). Legal education in the Global Context. Opportunities and Challenges (pp. 77-85), London-New York: Routledge. Czerkawski, B.C. (2014). Designing Deeper Learning Experiences for Online Instruction. Journal of Interactive Online Learning, 13(2), 29-40. Dietz-Uhler, B. & Hurn, J.E. (2013). Using Learning Analytics to Predict (and Improve) Student Success: A Faculty Perspective. Journal of Interactive Online Learning, 12(1), 17-26. Dihoff, R.E., et al. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. The Psychological Record, 54(2), Dyckhoff, A.L., et al. (2012). Design and Implementation of a Learning Analytics Toolkit for Teachers. Educational Technology & Society, 15(3), 58-76. Van Dongen, E.G.D., & Meijerman, I. (2019). Teaching a Historical Context in a First-Year ‘Introduction to Private Law’ Course. The Effects of Teaching Approaches and a Learning Environment on Students’ Learning. In V. Amorosi & V.M. Minale (Eds.), History of Law and Other Humanities: Views of the Legal World Across the Time (pp. 551-569), Madrid: Universidad Carlos III. Du, J., Yu, C., & Olinzock, A.A. (2011). Enhancing collaborative learning: Impact of Question Prompts design for online discussion. Delta Pi Epsilon Journal, 53(1), 28-41. Epstein, M.L., et al. (2002). Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. The Psychological Record, 52(2), 187-201. Jacobson, M. J., & Spiro, R. J. (1995). Hypertext learning environments, cognitive flexibility, and the transfer of complex knowledge: An empirical investigation. Journal of educational computing research, 12(4), 301-333. De Jong, B., & Heres, L. (2018). Evaluatierapport verrijkte kennisclips USBO, Utrecht: UU/USBO. Long, P., & Siemens, G. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 31-40. Marton, F., & Säljo, R. (1976). On Qualitative Differences in Learning: I – Outcome and process. British Journal of Educational Psychology, 46, 4-11. Mayer, R. E., & Moreno, R. (1998). A cognitive theory of multimedia learning: Implications for design principles. Journal of Educational Psychology, 91(2), 358-368. McCray, G.E. (2000). The Hybrid Course: Merging On-line Instruction and the Traditional Classroom. Information Technology and Management, 1(4), 307-327. McGill, T.J., Klobas, J.E. & Renzi, S. (2014). Critical Success Factors for the Continuation of E-learning Initiatives. Internet and Higher Education, 22, 24-36. Oliver, M., & Trigwell, K. (2005). Can ‘Blended Learning’ Be Redeemed?’, E-learning 2(1), p. 17-26. Postareff, L., Parpala, A. & Lindblom-Ylänne, S. (2015). Factors Contributing to Changes in a Deep Approach to Learning in Different Learning Environments. Learning Environments Research, 18 (3), 315–333. Prosser, M., & Trigwell, K. (2006). Confirmatory factor analysis of the approaches to teaching inventory. British Journal of Educational Psychology, 76, 405-419. Schutgens, R. (2019). Blended learning: mixed feelings. Enkele kanttekeningen bij de digitalisering van het rechtenonderwijs. Ars Aequi, (3), 237-240. Staker, H., & Horn, M.B. (2012). Classifying K–12 Blended Learning, San Mateo: Innosight Institute. Retrieved from https://files.eric.ed.gov/fulltext/ED535180.pdf (last accessed on 25 July Steenman, S. (2016). Evaluatie gebruik kennisclips. Staats- en Bestuursrecht. Utrecht: Educate-it. Stes, A., De Maeyer, S. & Van Petegem, P. (2008). Een Nederlandstalige versie van de ATI: een valide instrument om onderwijsaanpak van docenten in het hoger onderwijs te meten? Pedagogische studiën, 85, 95-106. Stes, A., De Maeyer, S. & Van Petegem, P. (2013). Examining the Cross-Cultural Sensitivity of the Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) and Validation of a Dutch Version. PLOS ONE, 8(1), 1-7. Trigwell, K., Prosser, M., & Waterhouse, F. (1999). Relations between Teachers’ Approaches to Teaching and Students’ Approach to Learning. Higher Education, 37, 57-70. Vermunt, J.D., & Donche, V. (2017). A Learning Patterns Perspective on Student Learning in Higher Education: State of the Art and Moving Forward. Educ. Psych. Rev., 29, 269-299. De Vries, U.R.M.T. (2019). ‘Blended learning’ in de studie Rechten. Hoe digitale middelen het juridisch onderwijs versterken. Ars Aequi, (3), 233-236. Yildirim, I. (2017). The Effects of Gamification-based Teaching Practices on Student Achievement and Students’ Attitudes towards Lessons. Internet and Higher Education, 33, 86-92. □ 1 In this way, see the letter of the Dutch Minister of Education, Culture and Science on digitalisation of 16 October 2018. □ 2 www.uu.nl/nieuws/educate-it-krijgt-extra-miljoenen-voor-verdere-digitalisering-onderwijs (last accessed on 19 May 2019). □ 3 Utrecht University, Strategic Plan 2016-2020, available at www.uu.nl (last accessed on 19 May 2019). □ 4 Staker & Horn 2012, p. 3. Critical on the term blended learning are Oliver & Trigwell 2005, who defend subverting the term and using it to describe an approach that focuses on the learner and its learning (instead of on the teacher). These authors suggest that an in-depth analysis of the variation in experience of learning of students in a blended learning context is needed in the future. □ 5 De Vries 2019 (our translation). □ 6 Furthermore, Schutgens 2019 stated that old-fashioned live-teaching and, above all, having offline students have their advantages. □ 7 Biggs & Tang 2011. □ 8 Biggs 1987b; Biggs & Tang 2011, esp. p. 24 et seqq. □ 9 Postareff, Parpala & Lindblom-Ylänne 2015, p. 316 with references. □ 10 McCray 2000. Furthermore, according to Yildirim 2017, p. 86, blended learning offers ‘various educational options to learners, minimizes the inequality of opportunity, provides individualized solutions pertinent to learning differences and eliminates hindrances related to space and time.’ □ 11 Mayer & Moreno 1998. □ 12 Jacobson & Spiro 1995. □ 13 Dihoff et al. 2004; Epstein et al. 2002 □ 14 Van Asten et al. 2019. □ 15 See on this topic, e.g., Brame 2013; - It is not the first course in the law curriculum at Utrecht University in which this way of flipping the classroom has been used, a flipping the classroom concept was used already in the first course of the law curriculum (‘Foundations of Law’). □ 16 See, e.g., Campbell et al. 2001. □ 17 Baeten et al. 2010. □ 18 Trigwell, Prosser & Waterhouse 1999. □ 19 See already Marton & Säljo 1976. □ 20 Chesterman 2016, p. 77. □ 21 In the academic year 2018-2019 a project, financed by the Utrecht Education Incentive Fund (Faculty LEG), made it possible to create interactive materials (interactive knowledge clips in Scalable Learning) and to experiment with blended learning. □ 22 In conducting this study the approach of Bishop-Clark and Dietz-Uhler 2012 has been followed. This study has a similar but slightly different structure, in which a different course has been studied, by Van Dongen & Meijerman 2019. □ 23 This (optional) review by the Faculty’s Ethical Review Committee has been conducted in order to safeguard the ethical quality of the research. The Ethics Committee if LEG aims to stimulate and facilitate ethical conduct by the faculty with regard to the rights, safety and well-being of the participants in scientific research, i.e. of the students in ours study. □ 24 Biggs 1987a; Biggs, Kember & Leung 2001, p. 133 et seqq. See also the Dutch version, received from the Centre of Expertise for Higher Education, University of Antwerp, see Stes, De Maeyer & Van Petegem 2013. □ 25 Stes, De Maeyer & Van Petegem 2008. □ 26 See Trigwell, Prosser & Waterhouse 1999, p. 62. See also Prosser & Trigwell 2006. □ 27 The Dutch questionnaires mentioned in the previous footnotes as well as the questionnaires made in the context of a previous study (Van Dongen & Meijerman 2019) were the basis for the current questionnaires. The questionnaires were compared with the original English version, adapted to the specific field Introduction to Private Law and supplemented with a few questions. Some colleagues proofread and then we finalised the questionnaires. □ 28 The scale is from 0 to 1, from totally not to perfectly homogeneous. □ 29 These measurements were taken from the pre-course surveys. The Cronbach’s α of the post-course surveys was .777 (deep approach) and .767 (surface approach). □ 30 Number 1 means a perfect correlation, 0 means no correlation at all. The sign in front of the number (- or +) indicates whether there is a negative correlation (if one variable increases, the other decreases) or a positive correlation (if one variable goes up, so does the other). □ 31 The correlations were significant at the 0.001 level (2-tailed); - In order to test the hypothesis that students who did (partly or fully) use and the students who did not use Scalable learning were associated with statistically different exam results, degree of self-regulated learning and (differences in) deep and surface approaches, an equal random sample of the first group was taken and compared to the second group by means of an independent samples t-test. Additionally, the assumption of homogeneity of variances was tested via Levene’s test. With regard to self-regulated learning (both at the beginning of the course, at the end of the course as well as the difference) equal variances can be assumed but no significant differences existed. As to the exam result on question 4A no equal variance could be assumed, and no significant differences existed between the two groups. Also with regard to differences between pre- and post-course measurements of deep and surface approaches no equal variance could be assumed, but no significant differences between two groups existed. □ 32 Although factor analysis showed five factors explaining more or less 52% of the variance, for this study we have chosen to stick to Biggs’s division of two factors. Starting from a fixed number of two in the factor analysis (Kaiser-Meyer-Olkin test), questions are arranged quite well, in accordance with the questions arranged by Biggs under the two approaches to learning. Nevertheless only 33% of the variance can be explained by the distinction between deep and surface approaches to learning (KMO of .836 showed that the size of number was very satisfying for the factor analysis). Apparently there are a lot of other factors present. □ 33 See Van Dongen & Meijerman 2019. □ 34 No significant difference in exam marks could be noted. □ 35 A negative correlation was found between both approaches to teaching, Pearson’s r (639) = -.136, p = .001. □ 36 DA2 = c + β1 x DA1 + β2 x CCFA + β3 x ITTF. In this regression model C is the constant, β1, β2 and β3 are the regression coefficients. □ 37 This is in line with Van Dongen & Meijerman 2019, p. 562. □ 38 Only 3,6% of the variance in change of deep approaches can be explained by looking only to teachers’ approaches to teaching. The linear regression was calculated to predict the changes in deep approach of students during the course only based on the teaching approaches taken by teachers. A significant regression equation was found (F(2,367) = 6.889, p = .001) with an R² of .036). The predicted deep approach at the end of the course (DA2) is equal to 1.185 – (0.092 x ITTF) – (0.252 x CSSF). Interestingly, ITTF is significant here. □ 39 As the CCSF measure scored poor on its reliability not a lot of value can be given to the final part of this formula. □ 40 Another strange result is the very weak but significant negative correlation existed between CSSF and SRL at the end of the course; Pearson’s r (450)= -0.104, p = .027. □ 41 When ‘exam grades’ are mentioned here, both the final grade as well as their performance on the specific question about week 6 of the learning environment are aimed at. As there were no significant differences, no distinction is made in our discussion of the results here. □ 42 Both correlations are significant at the 0.01 level (2-tailed). □ 43 Van Dongen & Meijerman 2019, p. 562-563. □ 44 Czerkawski 2014, p. 32, 35. □ 45 Du, Yu & Olinzock 2011, p. 37. □ 46 See Steenman 2016.
{"url":"https://www.bjutijdschriften.nl/tijdschrift/lawandmethod/2020/05/lawandmethod-D-19-00007","timestamp":"2024-11-12T22:48:52Z","content_type":"application/xhtml+xml","content_length":"111131","record_id":"<urn:uuid:3375fee5-e148-41a0-9431-ab1cb197a130>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00307.warc.gz"}
Reverse-engineering the division microcode in the Intel 8086 processor While programmers today take division for granted, most microprocessors in the 1970s could only add and subtract — division required a slow and tedious loop implemented in assembly code. One of the nice features of the Intel 8086 processor (1978) was that it provided machine instructions for integer multiplication and division. Internally, the 8086 still performed a loop, but the loop was implemented in microcode: faster and transparent to the programmer. Even so, division was a slow operation, about 50 times slower than addition. I recently examined multiplication in the 8086, and now it's time to look at the division microcode.1 (There's a lot of overlap with the multiplication post so apologies for any deja vu.) The die photo below shows the chip under a microscope. I've labeled the key functional blocks; the ones that are important to this post are darker. At the left, the ALU (Arithmetic/Logic Unit) performs the arithmetic operations at the heart of division: subtraction and shifts. Division also uses a few special hardware features: the X register, the F1 flag, and a loop counter. The microcode ROM at the lower right controls the process. The 8086 die under a microscope, with main functional blocks labeled. This photo shows the chip with the metal and polysilicon removed, revealing the silicon underneath. Click on this image (or any other) for a larger version. Like most instructions, the division routines in the 8086 are implemented in microcode. Most people think of machine instructions as the basic steps that a computer performs. However, many processors have another layer of software underneath: microcode. With microcode, instead of building the CPU's control circuitry from complex logic gates, the control logic is largely replaced with code. To execute a machine instruction, the computer internally executes several simpler micro-instructions, specified by the microcode. This is especially useful for a machine instruction such as division, which performs many steps in a loop. Each micro-instruction in the 8086 is encoded into 21 bits as shown below. Every micro-instruction moves data from a source register to a destination register, each specified with 5 bits. The meaning of the remaining bits depends on the type field and can be anything from an ALU operation to a memory read or write to a change of microcode control flow. Thus, an 8086 micro-instruction typically does two things in parallel: the move and the action. For more about 8086 microcode, see my microcode blog post. A few details of the ALU (Arithmetic/Logic Unit) operations are necessary to understand the division microcode. The ALU has three temporary registers that are invisible to the programmer: tmpA, tmpB, and tmpC. An ALU operation takes its first argument from the specified temporary register, while the second argument always comes from tmpB. An ALU operation requires two micro-instructions. The first micro-instruction specifies the ALU operation and source register, configuring the ALU. For instance, ADD tmpA to add tmpA to the default tmpB. In the next micro-instruction (or a later one), the ALU result can be accessed through a register called Σ (Sigma) and moved to another register. The carry flag plays a key role in division. The carry flag is one of the programmer-visible status flags that is set by arithmetic operations, but it is also used by the microcode. For unsigned addition, the carry flag is set if there is a carry out of the word (or byte). For subtraction, the carry flag indicates a borrow, and is set if the subtraction requires a borrow. Since a borrow results if you subtract a larger number from a smaller number, the carry flag also indicates the "less than" condition. The carry flag (and other status flags) are only updated if micro-instruction contains the F bit. The RCL (Rotate through Carry, Left) micro-instruction is heavily used in the division microcode.2 This operation shifts the bits in a 16-bit word, similar to the << bit-shift operation in high-level languages, but with an additional feature. Instead of discarding the bit on the end, that bit is moved into the carry flag. Meanwhile, the bit formerly in the carry flag moves into the word. You can think of this as rotating the bits while treating the carry flag as a 17th bit of the word. (The RCL operation can also act on a byte.) The rotate through carry left micro-instruction. These shifts perform an important part of the division process since shifting can be viewed as multiplying or dividing by two. RCL also provides a convenient way to move the most-significant bit to the carry flag, where it can be tested for a conditional jump. (This is important because the top bit is used as the sign bit.) Another important property is that performing RCL on a lower word and then RCL on an upper word will perform a 32-bit shift, since the high bit of the lower word will be moved into the low bit of the upper word via the carry bit. Finally, the shift moves the quotient bit from the carry into the register. Binary division The division process in the 8086 is similar to grade-school long division, except in binary instead of decimal. The diagram below shows the process: dividing 67 (the dividend) by 9 (the divisor) yields the quotient 7 at the top and the remainder 4 at the bottom. Long division is easier in binary than decimal because you don't need to guess the right quotient digit. Instead, at each step you either subtract the divisor (appropriately shifted) or subtract nothing. Note that although the divisor is 4 bits in this example, the subtractions use 5-bit values. The need for an "extra" bit in division will be important in the discussion below; 16-bit division needs a 17-bit value. - 0 0 0 0 - 1 0 0 1 - 1 0 0 1 - 1 0 0 1 Instead of shifting the divisor to the right each step, the 8086's algorithm shifts the quotient and the current dividend to the left each step. This trick reduces the register space required. Dividing a 32-bit number (the dividend) by a 16-bit number yields a 16-bit result, so it seems like you'd need four 16-bit registers in total. The trick is that after each step, the 32-bit dividend gets one bit smaller, while the result gets one bit larger. Thus, the dividend and the result can be packed together into 32 bits. At the end, what's left of the dividend is the 16-bit remainder. The table below illustrates this process for a sample dividend (blue) and quotient (green).3 At the end, the 16-bit blue value is the remainder. dividend quotient The division microcode The 8086 has four division instructions to handle signed and unsigned division of byte and word operands. I'll start by describing the microcode for the unsigned word division instruction DIV, which divides a 32-bit dividend by a 16-bit divisor. The dividend is supplied in the AX and DX registers while the divisor is specified by the source operand. The 16-bit quotient is returned in AX and the 16-bit remainder in DX. For a divide-by-zero, or if the quotient is larger than 16 bits, a type 0 "divide error" interrupt is generated. CORD: The core division routine The CORD microcode subroutine below implements the long-division algorithm for all division instructions; I think CORD stands for Core Divide. At entry, the arguments are in the ALU temporary registers: tmpA/tmpC hold the 32-bit dividend, while tmpB holds the 16-bit divisor. (I'll explain the configuration for byte division later.) Each cycle of the loop shifts the values and then potentially subtracts the divisor. One bit is appended to the quotient to indicate whether the divisor was subtracted or not. At the end of the loop, whatever is left of the dividend is the Each micro-instruction specifies a register move on the left, and an action on the right. The moves transfer words between the visible registers and the ALU's temporary registers, while the actions are mostly ALU operations or control flow. As is usually the case with microcode, the details are tricky. The first three lines below check if the division will overflow or divide by zero. The code compares tmpA and tmpB by subtracting tmpB, discarding the result, but setting the status flags (F). If the upper word of the divisor is greater or equal to the dividend, the division will overflow, so execution jumps to INT0 to generate a divide-by-zero interrupt.4 (This handles both the case where the dividend is "too large" and the divide-by-0 case.) The number of loops in the division algorithm is controlled by a special-purpose loop counter. The MAXC micro-instruction initializes the counter to 7 or 15, for a byte or word divide instruction respectively. move action SUBT tmpA CORD: set up compare Σ → no dest MAXC F compare, set up counter, update flags JMP NCY INT0 generate interrupt if overflow RCL tmpC 3: main loop: left shift tmpA/tmpC Σ → tmpC RCL tmpA Σ → tmpA SUBT tmpA set up compare/subtract JMPS CY 13 jump if top bit of tmpA was set Σ → no dest F compare, update flags JMPS NCY 14 jump for subtract JMPS NCZ 3 test counter, loop back to 3 RCL tmpC 10: done: Σ → tmpC shift last bit into tmpC Σ → no dest RTN done: get top bit, return RCY 13: reset carry Σ → tmpA JMPS NCZ 3 14: subtract, loop JMPS 10 done, goto 10 The main loop starts at 3. The tmpC and tmpA registers are shifted left. This has two important side effects. First, the old carry bit (which holds the latest quotient bit) is shifted into the bottom of tmpC. Second, the top bit of tmpA is shifted into the carry bit; this provides the necessary "extra" bit for the subtraction below. Specifically, if the carry (the "extra" tmpA bit) is set, tmpB can be subtracted, which is accomplished by jumping to 13. Otherwise, the code compares tmpA and tmpB by subtracting tmpB, discarding the result, and updating the flags (F). If there was no borrow/ carry (tmpA ≥ tmpB), execution jumps to 14 to subtract. Otherwise, the internal loop counter is decremented and control flow goes back to the top of the loop if not done (NCZ, Not Counter Zero). If the loop is done, tmpC is rotated left to pick up the last quotient bit from the carry flag. Then a second rotate of tmpC is performed but the result is discarded; this puts the top bit of tmpC into the carry flag for use later in POSTIDIV. Finally, the subroutine returns. The subtraction path is 13 and 14, which subtract tmpB from tmpA by storing the result (Σ) in tmpA. This path resets the carry flag for use as the quotient bit. As in the other path, the loop counter is decremented and tested (NCZ) and execution either continues back at 3 or finishes at 10. One complication is that the carry bit is the opposite of the desired quotient bit. Specifically, if tmpA < tmpB, the comparison generates a borrow so the carry flag is set to 1. In this case, the desired quotient bit is 0 and no subtraction takes place. But if tmpA ≥ tmpB, the comparison does not generate a borrow (so the carry flag is set to 0), the code subtracts tmpB, and the desired quotient bit is 1. Thus, tmpC ends up holding the complement of the desired result; this is fixed later. The microcode is carefully designed to pack the divide loop into a small number of micro-instructions. It uses the registers and the carry flag in tricky ways, using the carry flag to hold the top bit of tmpA, the comparison result, and the generated quotient bit. This makes the code impressively dense but tricky to understand. The top-level division microcode Now I'll pop up a level and take a look at the top-level microcode (below) that implements the DIV and IDIV machine instructions. The first three instructions load tmpA, tmpC, and tmpB from the specified registers. (The M register refers to the source specified in the instruction, either a register or a memory location.) Next, the X0 condition tests bit 3 of the instruction, which in this case distinguishes DIV from IDIV. For signed division (IDIV), the microcode calls PREIDIV, which I'll discuss below. Next, the CORD micro-subroutine discussed above is called to perform the division. DX → tmpA iDIV rmw: load tmpA, tmpC, tmpB AX → tmpC RCL tmpA set up RCL left shift operation M → tmpB CALL X0 PREIDIV set up integer division if IDIV CALL CORD call CORD to perform division COM1 tmpC set up to complement the quotient DX → tmpB CALL X0 POSTIDIV get original dividend, handle IDIV Σ → AX NXT store updated quotient tmpA → DX RNI store remainder, run next instruction As discussed above, the quotient in tmpC needs to be 1's-complemented, which is done with COM1. For IDIV, the micro-subroutine POSTIDIV sets the signs of the results appropriately. The results are stored in the AX and DX registers. The NXT micro-operation indicates the next micro-instruction is the last one, directing the microcode engine to start the next machine instruction. Finally, RNI directs the microcode engine to run the next machine instruction. 8-bit division The 8086 has separate opcodes for 8-bit division. The 8086 supports many instructions with byte and word versions, using 8-bit or 16-bit arguments respectively. In most cases, the byte and word instructions use the same microcode, with the ALU and register hardware using bytes or words based on the instruction. In the case of division, the shift micro-operations act on tmpA and tmpC as 8-bit registers rather than 16-bit registers. Moreover, the MAXC micro-operation initializes the internal counter to 7 rather than 15. Thus, the same CORD microcode is used for word and byte division, but the number of loops and the specific operations are changed by the hardware. The diagram below shows the tmpA and tmpC registers during each step of dividing 0x2345 by 0x34. Note that the registers are treated as 8-bit registers. The divisor (blue) steadily shrinks with the quotient (green) taking its place. At the end, the remainder is 0x41 (blue) and the quotient is 0xad, complement of the green value. tmpA tmpC Although the CORD routine is shared for byte and word division, the top-level microcode is different. In particular, the byte and word division instructions use different registers, requiring microcode changes. The microcode below is the top-level code for byte division. It is almost the same as the microcode above, except it uses the top and bottom bytes of the accumulator (AH and AL) rather than the AX and DX registers. AH → tmpA iDIV rmb: get arguments AL → tmpC RCL tmpA set up RCL left shift operation M → tmpB CALL X0 PREIDIV handle signed division if IDIV CALL CORD call CORD to perform division COM1 tmpC complement the quotient AH → tmpB CALL X0 POSTIDIV handle signed division if IDIV Σ → AL NXT store quotient tmpA → AH RNI store remainder, run next instruction Signed division The 8086 (like most computers) represents signed numbers using a format called two's complement. While a regular byte holds a number from 0 to 255, a signed byte holds a number from -128 to 127. A negative number is formed by flipping all the bits (known as the one's complement) and then adding 1, yielding the two's complement value. For instance, +5 is 0x05 while -5 is 0xfb. (Note that the top bit of a number is set for a negative number; this is the sign bit.) The nice thing about two's complement numbers is that the same addition and subtraction operations work on both signed and unsigned values. Unfortunately, this is not the case for signed multiplication and division. The 8086 has separate IDIV (Integer Divide) instructions to perform signed division. The 8086 performs signed division by converting the arguments to positive values, performing unsigned division, and then negating the results if necessary. As shown earlier, signed and unsigned division both use the same top-level microcode and the microcode conditionally calls some subroutines for signed division. These additional subroutines cause a significant performance penalty, making signed division over 20 cycles slower than unsigned division. I will discuss those micro-subroutines below. The first subroutine for signed division is PREIDIV, performing preliminary operations for integer division. It converts the two arguments, stored in tmpA/tmpC and tmpB, to positive values. It keeps track of the signs using an internal flag called F1, toggling this flag for each negative argument. This conveniently handles the rule that two negatives make a positive since complementing the F1 flag twice will clear it. The point of this is that the main division code (CORD) only needs to handle unsigned division. The microcode below implements PREIDIV. First it tests if tmpA is negative, but the 8086 does not have a microcode condition to directly test the sign of a value. Instead, the microcode determines if a value is negative by shifting the value left, which moves the top (sign) bit into the carry flag. The conditional jump (NCY) then tests if the carry is clear, jumping if the value is non-negative. If tmpA is negative, execution falls through to negate the first argument. This is tricky because the argument is split between the tmpA and tmpC registers. The two's complement operation (NEG) is applied to the low word, while either 2's complement or one's complement (COM1) is applied to the upper word, depending on the carry for mathematical reasons.5 The F1 flag is complemented to keep track of the sign. (The multiplication process reuses most of this code, starting at the NEGATE entry point.) Σ → no dest PREIDIV: shift tmpA left JMPS NCY 7 jump if non-negative NEG tmpC NEGATE: negate tmpC Σ → tmpC COM1 tmpA F maybe complement tmpA JMPS CY 6 NEG tmpA negate tmpA if there's no carry Σ → tmpA CF1 6: toggle F1 (sign) RCL tmpB 7: test sign of tmpB Σ → no dest NEG tmpB maybe negate tmpB JMPS NCY 11 skip if tmpB positive Σ → tmpB CF1 RTN else negate tmpB, toggle F1 (sign) RTN 11: return The next part of the code, starting at 7, negates tmpB (the divisor) if it is negative. Since the divisor is a single word, this code is simpler. As before, the F1 flag is toggled if tmpB is negative. At the end, both arguments (tmpA/tmpC and tmpB) are positive, and F1 indicates the sign of the result. After computing the result, the POSTIDIV routine is called for signed division. The routine first checks for a signed overflow and raises a divide-by-zero interrupt if so. Next, the routine negates the quotient and remainder if necessary.6 In more detail, the CORD routine left the top bit of tmpC (the complemented quotient) in the carry flag. Now, that bit is tested. If the carry bit is 0 (NCY), then the top bit of the quotient is 1 so the quotient is too big to fit in a signed value.7 In this case, the INT0 routine is executed to trigger a type 0 interrupt, indicating a divide overflow. (This is a rather roundabout way of testing the quotient, relying on a carry bit that was set in a previous subroutine.) JMP NCY INT0 POSTIDIV: if overflow, trigger interrupt RCL tmpB set up rotate of tmpB Σ → no dest NEG tmpA get sign of tmpB, set up negate of tmpA JMPS NCY 5 skip if tmpB non-negative Σ → tmpA otherwise negate tmpA (remainder) INC tmpC 5: set up increment JMPS F1 8 test sign flag, skip if set COM1 tmpC otherwise set up complement CCOF RTN 8: clear carry and overflow flags, return Next, tmpB (the divisor) is rotated to see if it is negative. (The caller loaded tmpB with the original divisor, replacing the dividend that was in tmpB previously.) If the divisor is negative, tmpA (the remainder) is negated. This implements the 8086 rule that the sign of the remainder matches the sign of the divisor. The quotient handling is a bit tricky. Recall that tmpC holds the complemented quotient. the F1 flag is set if the result should be negative. In that case, the complemented quotient needs to be incremented by 1 (INC) to convert from 1's complement to 2's complement. On the other hand, if the quotient should be positive, 1's-complementing tmpC (COM1) will yield the desired positive quotient. In either case, the ALU is configured in POSTIDIV, but the result will be stored back in the main routine. Finally, the CCOF micro-operation clears the carry and overflow flags. Curiously, the 8086 documentation declares that the status flags are undefined following IDIV, but the microcode explicitly clears the carry and overflow flags. I assume that the flags were cleared in analogy with MUL, but then Intel decided that this wasn't useful so they didn't document it. (Documenting this feature would obligate them to provide the same functionality in later x86 chips.) The hardware for division For the most part, the 8086 uses the regular ALU addition and shifts for the division algorithm. Some special hardware features provide assistance. In this section, I'll look at this hardware. Loop counter The 8086 has a 4-bit loop counter for multiplication and division. This counter starts at 7 for byte division and 15 for word division, based on the low bit of the opcode. This loop counter allows the microcode to decrement the counter, test for the end, and perform a conditional branch in one micro-operation. The counter is implemented with four flip-flops, along with logic to compute the value after decrementing by one. The MAXC (Maximum Count) micro-instruction sets the counter to 7 or 15 for byte or word operations respectively. The NCZ (Not Counter Zero) micro-instruction has two actions. First, it performs a conditional jump if the counter is nonzero. Second, it decrements the counter. The F1 flag Signed multiplication and division use an internal flag called F18 to keep track of the sign. The F1 flag is toggled by microcode through the CF1 (Complement F1) micro-instruction. The F1 flag is implemented with a flip-flop, along with a multiplexer to select the value. It is cleared when a new instruction starts, set by a REP prefix, and toggled by the CF1 micro-instruction. The diagram below shows how the F1 latch and the loop counter appear on the die. In this image, the metal layer has been removed, showing the silicon and the polysilicon wiring underneath. The counter and F1 latch as they appear on the die. The latch for the REP state is also here. X register The division microcode uses an internal register called the X register to distinguish between the DIV and IDIV instructions. The X register is a 3-bit register that holds the ALU opcode, indicated by bits 5–3 of the instruction.9 Since the instruction is held in the Instruction Register, you might wonder why a separate register is required. The motivation is that some opcodes specify the type of ALU operation in the second byte of the instruction, the ModR/M byte, bits 5–3.10 Since the ALU operation is sometimes specified in the first byte and sometimes in the second byte, the X register was added to handle both these cases. For the most part, the X register indicates which of the eight standard ALU operations is selected (ADD, OR, ADC, SBB, AND, SUB, XOR, CMP). However, a few instructions use bit 0 of the X register to distinguish between other pairs of instructions. For instance, it distinguishes between MUL and IMUL, DIV and IDIV, CMPS and SCAS, MOVS and LODS, or AAA and AAS. While these instruction pairs may appear to have arbitrary opcodes, they have been carefully assigned so the microcode can distinguish them. The implementation of the X register is straightforward, consisting of three flip-flops to hold the three bits of the instruction. The flip-flops are loaded from the prefetch queue bus during First Clock and during Second Clock for appropriate instructions, as the instruction bytes travel over the bus. Testing bit 0 of the X register with the X0 condition is supported by the microcode condition evaluation circuitry, so it can be used for conditional jumps in the microcode. Algorithmic and historical context As you can see from the microcode, division is a complicated and relatively slow process. On the 8086, division takes up to 184 clock cycles to perform all the microcode steps. (In comparison, two registers can be added in 3 clock cycles.) Multiplication and division both loop over the bits, performing repeated addition or subtraction respectively. But division requires a decision (subtract or not?) at each step, making it even slower, about half the speed of multiplication.11 Various algorithms have been developed to speed up division. Rather than performing long division one bit at a time, you can do long division in, say, base 4, producing two quotient bits in each step. As with decimal long division, the tricky part is figuring out what digit to select. The SRT algorithm (1957) uses a small lookup table to estimate the quotient digit from a few bits of the divisor and dividend. The clever part is that the selected digit doesn't need to be exactly right at each step; the algorithm will self-correct after a wrong "guess". The Pentium processor (1993) famously had a floating point division bug due to a few missing values in the SRT table. This bug cost Intel $475 million to replace the faulty processors. Intel's x86 processors steadily improved divide performance. The 80286 (1982) performed a word divide in 22 clocks, about 6 times as fast as the 8086. In the Penryn architecture (2007), Intel upgraded from Radix-4 to Radix-16 division. Rather than having separate integer and floating-point hardware, integer divides were handled through the floating point divider. Although modern Intel processors have greatly improved multiplication and division compared to the 8086, division is still a relatively slow operation. While a Tiger Lake (2020) processor can perform an integer multiplication every clock cycle (with a latency of 3 cycles), division is much slower and can only be done once every 6-10 clock cycles (details). I've written numerous posts on the 8086 so far and plan to continue reverse-engineering the 8086 die so follow me on Twitter @kenshirriff or RSS for updates. I've also started experimenting with Mastodon recently as @[email protected]. Notes and references 1. My microcode analysis is based on Andrew Jenner's 8086 microcode disassembly. ↩ 2. The 8086 patent and Andrew Jenner's microcode use the name LRCY (Left Rotate through Carry) instead of RCL. I figure that RCL will be more familiar to people because of the corresponding machine instruction. ↩ 3. In the dividend/quotient table, the tmpA register is on the left and the tmpC register is on the right. 0x0f00ff00 divided by 0x0ffc yielding the remainder 0x0030 (blue) and quotient 0xf04c (green). (The green bits are the complement of the quotient due to implementation in the 8086.) ↩ 4. I described the 8086's interrupt circuitry in detail in this post. ↩ 5. The negation code is a bit tricky because the result is split across two words. In most cases, the upper word is bitwise complemented. However, if the lower word is zero, then the upper word is negated (two's complement). I'll demonstrate with 16-bit values to keep the examples small. The number 257 (0x0101) is negated to form -257 (0xfeff). Note that the upper byte is the one's complement (0x01 vs 0xfe) while the lower byte is two's complement (0x01 vs 0xff). On the other hand, the number 256 (0x0100) is negated to form -256 (0xff00). In this case, the upper byte is the two's complement (0x01 vs 0xff) and the lower byte is also the two's complement (0x00 vs 0x00). (Mathematical explanation: the two's complement is formed by taking the one's complement and adding 1. In most cases, there won't be a carry from the low byte to the upper byte, so the upper byte will remain the one's complement. However, if the low byte is 0, the complement is 0xff and adding 1 will form a carry. Adding this carry to the upper byte yields the two's complement of that To support multi-word negation, the 8086's NEG instruction clears the carry flag if the operand is 0, and otherwise sets the carry flag. (This is the opposite of the above because subtractions (including NEG) treat the carry flag as a borrow flag, with the opposite meaning.) The microcode NEG operation has identical behavior to the machine instruction, since it is used to implement the machine instruction. Thus to perform a two-word negation, the microcode negates the low word (tmpC) and updates the flags (F). If the carry is set, the one's complement is applied to the upper word (tmpA). But if the carry is cleared, the two's complement is applied to tmpA. ↩ 6. There is a bit of ambiguity with the quotient and remainder of negative numbers. For instance, consider -27 ÷ 7. -27 = 7 × -3 - 6 = 7 * -4 + 1. So you could consider the result to be a quotient of -3 and remainder of -6, or a quotient of -4 and a remainder of 1. The 8086 uses the rule that the remainder will have the same sign as the dividend, so the first result would be used. The advantage of this rule is that you can perform unsigned division and adjust the signs afterward: 27 ÷ 7 = quotient 3, remainder 6. -27 ÷ 7 = quotient -3, remainder -6. 27 ÷ -7 = quotient -3, remainder 6. -27 ÷ -7 = quotient 3, remainder -6. This rule is known as truncating division, but some languages use different approaches such as floored division, rounded division, or Euclidean division. Wikipedia has details. ↩ 7. The signed overflow condition is slightly stricter than necessary. For a word division, the 16-bit quotient is restricted to the range -32767 to 32767. However, a 16-bit signed value can take on the values -32768 to 32767. Thus, a quotient of -32768 fits in a 16-bit signed value even though the 8086 considers it an error. This is a consequence of the 8086 performing unsigned division and then updating the sign if necessary. ↩ 8. The internal F1 flag is also used to keep track of a REP prefix for use with a string operation. I discussed string operations and the F1 flag in this post. ↩ 9. Curiously, the 8086 patent states that the X register is a 4-bit register holding bits 3–6 of the byte (col. 9, line 20). But looking at the die, it is a 3-bit register holding bits 3–5 of the byte. ↩ 10. Some instructions are specified by bits 5–3 in the ModR/M byte rather than in the first opcode byte. The motivation is to avoid wasting bits for instructions that use a ModR/M byte but don't need a register specification. For instance, consider the instruction ADD [BX],0x1234. This instruction uses a ModR/M byte to specify the memory address. However, because it uses an immediate operand, it does not need the register specification normally provided by bits 5–3 of the ModR/M byte. This frees up the bits to specify the instruction. From one perspective, this is an ugly hack, while from another perspective it is a clever optimization. ↩ 11. Even the earliest computers such as ENIAC (1945) usually supported multiplication and division. However, early microprocessors did not provide multiplication and division instructions due to the complexity of these instructions. Instead, the programmer would need to write an assembly code loop, which was very slow. Early microprocessors often had binary-coded decimal instructions that could perform additions and subtractions in decimal. One motivation for these instructions was that converting between binary and decimal was extremely slow due to the need for multiplication and division. Instead, it was easier and faster to keep the values as decimal if that was how they were displayed. The Texas Instruments TMS9900 (1976) was one of the first microprocessors with multiplication and division instructions. Multiply and divide instructions remained somewhat controversial on RISC (Reduced Instruction-Set Computer) processors due to the complexity of these instructions. The early ARM processors, for instance, did not support multiplication and division. Multiplication was added to ARMv2 (1986) but most ARM processors still don't have integer division. The popular open-source RISC-V architecture (2015) doesn't include integer multiply and divide by default, but provides them as an optional "M" extension. The 8086's algorithm is designed for simplicity rather than speed. It is a "restoring" algorithm that checks before subtracting to ensure that the current term is always positive. This can require two ALU operations (comparison and subtraction) per cycle. A slightly more complex approach is a "nonrestoring" algorithm that subtracts even if it yields a negative term, and then adds during a later loop iteration. ↩ 4 comments: CORD could be short for CORDIC: https://en.wikipedia.org/wiki/CORDIC Although not the first application, it is mentioned that it can be used for division. This comment has been removed by the author. Thanks for the article, I've been greatly enjoying this series! One nitpick on one of the notes. Most of the ARM microarchitectures have had integer division instructions as standard for a pretty long time now. It appeared as optional for the ARMv7-A and ARMv7-R architectures while mandatory for the embedded ARMv7-M architectures. So it's been around even before ARM introduced the 64-bit AArch64 in ARMv8, which also made integer division Not sure about non-ARM microarchitectures but it first started showing up in A7/A15 cores which have been in products for over a decade and M3 cores which are even older. Getting these non trivial instructions executed in minimal time is very interesting subject. Especially crazy, since new Cannon Lake architecture apparently can do it in between 11-17 cycles. 128bit RDX:RAX divided by 64bit register/memory in 17 cycles is impressive, still curious, how they managed to do that.
{"url":"https://www.righto.com/2023/04/reverse-engineering-8086-divide-microcode.html","timestamp":"2024-11-10T12:37:59Z","content_type":"application/xhtml+xml","content_length":"180003","record_id":"<urn:uuid:c7a765ee-34a6-4fbd-b990-b090b5bde9d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00651.warc.gz"}
[GAP Forum] p-group Vivek Jain jaijinenedra at yahoo.co.in Wed Feb 10 09:13:09 GMT 2010 Dear Prof. Bettina Eick, While doing following Method as mentioned in your email "A" is not a group. The command IsGroup(A), AsGroup(A) fails. I want to determine weather "A" is abelian group or not. Can we get more information about the structure of A? G := function(p) local F, f, r, a, b, c; F := FreeGroup(3); f := GeneratorsOfGroup(F); a := f[1]; b := f[2]; c := f[3]; r := [a^(p^5), b^(p^3), c^(p^2), Comm(b,c)/b^(p^2) ]; (example for p=3): gap> H := G(3); <fp group on the generators [ f1, f2, f3 ]> gap> K := NilpotentQuotient(H); Pcp-group with orders [ 27, 9, 3, 9, 3, 3 ] gap> Length(LowerCentralSeries(K)); gap> A := AutomorphismGroupPGroup(K);; gap> A.size; with regards Vivek Kumar Jain Vivek Kumar Jain Post-Doctoral Fellow Harish-Chandra Research Institute Allahabad (India) --- On Thu, 28/1/10, Bettina Eick <beick at tu-bs.de> wrote: From: Bettina Eick <beick at tu-bs.de> Subject: Re: [GAP Forum] p-group To: "Vivek Jain" <jaijinenedra at yahoo.co.in> Cc: "GAP Forum" <forum at gap-system.org> Date: Thursday, 28 January, 2010, 4:18 PM Dear Vivek kumar jain, you can use GAP to investigate your question for any fixed prime p. For example, the nilpotent quotient algorithm of the NQ package or the NQL package of GAP allows you to determine the largest class-c quotient of a finitely presented groups for any positive integer c or even the largest nilpotent quotient (if this exists). Further, there are methods available in GAP to determine the automorphism group of a finite p-group. Check the AutPGrp package for this purpose. In your given example, you can implement your considered group G in GAP as function in p: G := function(p) local F, f, r, a, b, c; F := FreeGroup(3); f := GeneratorsOfGroup(F); a := f[1]; b := f[2]; c := f[3]; r := [a^(p^5), b^(p^3), c^(p^2), Comm(b,c)/b^(p^2) ]; return F/r; Then you load the relevant packages And then you can do the following (for example for p=3): gap> H := G(3); <fp group on the generators [ f1, f2, f3 ]> gap> K := NilpotentQuotient(H); Pcp-group with orders [ 27, 9, 3, 9, 3, 3 ] gap> Length(LowerCentralSeries(K)); gap> A := AutomorphismGroupPGroup(K);; gap> A.size; Hence for p=3 your group has class 2 and you can see the size of its automorphism group. Generators and further information on the automorphisms is also stored in A, but is perhaps too long to be displayed here. Hope this helps, > "Is it possible using GAP to check that given presentation is a nilpotent group of class 2 or not?" > For example $G=\langle a,b,c| a^{p^5}, b^{p^3}, c^{p^2}, [a,b]=a^{p^3}, [a,c]=c^p, [b,c]=b^{p^2} \rangle $ where $p$ is a prime. > Also how can we determine its automorphism group using GAP? > with regards > Vivek kumar jain > Your Mail works best with the New Yahoo Optimized IE8. Get it NOW! http://downloads.yahoo.com/in/internetexplorer/ > _______________________________________________ > Forum mailing list > Forum at mail.gap-system.org > http://mail.gap-system.org/mailman/listinfo/forum The INTERNET now has a personality. YOURS! See your Yahoo! Homepage. http://in.yahoo.com/ More information about the Forum mailing list
{"url":"https://www.gap-system.org/ForumArchive2/2010/002686.html","timestamp":"2024-11-04T10:42:10Z","content_type":"text/html","content_length":"7386","record_id":"<urn:uuid:cb99a62e-f8fc-4592-8736-f8446a4b9908>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00455.warc.gz"}
True Displacement - Ocean Navigator True Displacement Sometimes when voyagers are evaluating boats they compare data for similar boats. This data may include the ballast ratio, the displacement/length (D-L) ratio and the sail area/displacement (SA-D) ratio. All of these numbers are commonly taken as measures of performance - the higher the ballast ratio, the lower the D-L ratio, and the higher the SA-D ratio, the better the performance is likely to be. The displacement number used in these formulas should approximate the weight of the boat with all normal equipment and crew on board, the tanks half-filled and the lockers filled with half a full load of food and beverages (in other words, what you would expect to find about midway through a voyage). This is known as half load displacement. In reality, boat builders will use everything from this number to a “lightly loaded” formula (a couple people with no account for non-standard gear and equipment, minimal stores on board and tanks half full) to the construction weight of the boat with nothing on board (the dry weight or light ship weight), or even the designed weight (which is almost always exceeded during the building process, often by a substantial amount). The use of artificially low displacement numbers skews the ballast ratio upward, the D-L ratio downward and the SA-D ratio upwards. The lighter the boat, the more the numbers are likely to be distorted by leaving a realistic payload out of the calculations. The net result is that typical published ratios are, at best, of marginal utility when comparing boats and, at worst, are downright Our Pacific Seacraft 40 Let’s look, for example, at the displacement of our Pacific Seacraft 40. The published displacement of 24,000 lbs is based on light local sailing. This includes the tanks half full and some gear and supplies on board - i.e., a nominal half load displacement of a stock factory boat. I performed a rough weight audit on our boat and discovered that our heavy-duty DC system (total battery capacity of 700 amp-hours, two alternators, cabling, DC to AC inverter, wind generator and solar panels) weighs in at around 800 lbs, probably less than half of which is included in the manufacturer’s published displacement. We also have between 400 and 500 lbs in ground tackle. The water tanks, when half full, weigh 700 lbs, the diesel 200 lbs. The dinghy and outboard, plus fuel, weigh 250 lbs, the wind vane and autopilot 60 lbs. This is around 2,500 lbs, much of which will be found on any offshore voyaging boat, all of which will be added before the first crewmember steps on board and less than half of which is included in the published displacement figure. Then there are all kinds of tools and spares. People average out at 160 lbs. Clothes, books and gear (including such things as crockery, cutlery and galley utensils) weigh in at around 100 lbs per person. Food and beverages are around 6 lbs per person per day, which is to say 200 lbs per person for a moderate voyaging range. When these kinds of numbers are added to the weight of non-standard equipment, the total added weight for a crew of four is soon up around 4,000 lbs, whereas, when I checked with Bill Crealock, the designer of the Pacific Seacraft 40, I found that the total added weight in his light local sailing condition is 1,200 lbs.?? To be fair to Pacific Seacraft, most designers and manufacturers don’t even attempt to approximate half load displacement. The majority of published displacement figures are based on some form of light ship weight, which includes little more than the boat, and does not always include even such basic items as sails, minimum ground tackle and legally-required safety equipment. All too often, these numbers are derived from calculations made by the designer rather than the actual build weight of the boat, which almost always exceeds the calculated weight. The longer a boat has been in production, the more the weight tends to go up as upgrades and changes are made. In going beyond light ship weight, manufacturers such as Pacific Seacraft are, to some extent, stacking the deck against themselves in terms of making comparisons with other boats. Recently, I saw a set of figures for a popular racer/voyager that showed a listed displacement of 28,500 lbs, an actual weight from the builder with a full set of voyaging options of 34,500 lbs, and a fully-loaded weight of 39,500 lbs! Clearly, for coastal voyaging, these numbers are likely to be lower; nevertheless, the point remains the same - accurate displacement figures are dependent on a realistic assessment of weight. With this is mind, a minimum of 2,500 lbs should be added to most published displacement figures to get a ballpark half load displacement for coastal voyaging. For offshore voyaging, 3,750 lbs should be added. Most long-term voyagers and liveaboards will add considerably more weight than this (it would not be unreasonable to up the coastal add-on to 3,750 lbs and the offshore number to 5,000 lbs), while weight-conscious racer/voyagers may add less. Though these numbers will give a reasonable displacement approximation, a better solution is to define your personal voyaging style and thus the weight that you will introduce to a boat, and derive from this what I will call a personal increment number (PIN). This process is described in detail in the sidebar. Armed with your PIN and the published displacement numbers for boats of interest to you, you can determine realistic displacement numbers in voyaging trim. Now let’s determine a ballast ratio, a D-L ratio and an SA-D ratio that is somewhat closer to reality than most published numbers. You may be a little shocked at how the ballast ratio goes down, the D-L ratio up and the SA-D ratio down! Ballast ratio The higher the ballast ratio, the heavier the keel in relation to the rest of a boat and, all other things being equal, the stiffer the boat. This translates into a greater ability to carry sail to windward. However, all other things are not equal! Given two boats with the same ballast ratio, one might have lead ballast in a bulb on a deep fin keel and the other might have internal iron ballast with a shoal draft. The former will be substantially stiffer. As such, the ballast ratio can only be used as a very broad indicator of a boat’s stability, and in fact its utility is significantly limited to comparing boats with a similar hull form. With these issues in mind, we can say that the lower limit for the ballast ratio of a 35-foot to 45-foot voyaging boat at a realistic half load displacement should be around 0.30, with ratios up to 0.40 possible on modern boats where the hull weight has been minimized by the use of high-tech construction techniques and materials. It should be noted, however, that at these higher ballast ratios the trade-off for a stiffer boat is a less comfortable motion. Let’s look at our Pacific Seacraft 40. It has a shoal draft option (5’ 2”) with a ballast weight of 8,880 lbs and a published displacement of 24,280 lbs (light local sailing weight according to the manufacturer’s literature) for a nominal ballast ratio of 0.37. The light ship weight is 23,080 lbs. If we arbitrarily add 2,500 lbs to the light ship weight to approximate the half load displacement in coastal voyaging trim, we get a ballast ratio of 0.35. With a 3,750-lb payload (half load displacement in offshore voyaging trim) this drops to 0.33. With a 5,000-lb payload (full load displacement in offshore voyaging trim) we get 0.32. These figures are still above my 0.30 threshold but significantly lower than the published number. Using published displacement figures (as opposed to a realistic half load displacement), most modern boats have a ballast ratio higher than 0.30. However, when I looked at a sample of modern boats and tried to assess numbers that I felt approximated coastal voyaging trim, a number of the lighter boats dropped below this threshold; while in offshore, voyaging trim fell below it even more. D-L ratio The addition of a substantial amount of weight to a boat with lightweight construction will have a disproportionately greater impact on the ratios and on the boat’s performance, than the addition of the same weight to a heavier boat. The farther it is intended to voyage offshore and the longer the voyage, the greater the load the boat is likely to carry. In general, a moderate to heavy displacement boat will be able to absorb the load better than a light displacement boat. The heavier boat will also have a more comfortable motion in a seaway. Another formula - the D-L ratio - can be used to quantify a boat’s heaviness. The formula is: Displacement in long tons/([0.01 x waterline length]3) A long ton = 2,240 lbs To make realistic comparisons between boats as you will use them, you need to calculate your PIN for the displacement figure before calculating D-L ratios. Looking at our Pacific Seacraft 40: Half load displacement (light ship + 3,750 lbs) = 26,830 lbs 26,830/2,240 = 11.98 long tons LWL = 31.25 feet (0.01 x 31.25)3 = 0.0305 D-L ratio = 11.98/0.0305 = 393 long tons/foot (as opposed to the published figure of 355) This is very much at the heavy end of things for a contemporary boat, although it should be noted that this is mostly because of the boat’s relatively short waterline and long overhangs, which tend to skew the numbers upward. This is also the case for many older designs. A number of modern boats, especially those influenced by the IMS rule, have almost no overhangs, so the LWL is close to the overall length. It is interesting to note that if the same design philosophy were to be used on the Pacific Seacraft 40, with the displacement kept constant, the D-L ratio would go from 393 down to In point of fact, with a PIN of 3,750 lbs the boat sinks almost three inches, which, because of its long overhangs, increases its LWL by more than nine inches, bringing the D-L ratio back down to 366. For the rough and ready purposes of comparing boats, I believe this sinkage factor can be ignored, particularly since an accurate lines plan, which is not likely to be available, is needed to calculate it. However, it is important to bear in mind that the longer the overhangs and the shallower the angle between the overhangs and the surface of the water, the more the D-L ratio will be distorted upward by adding weight without accounting for its impact on the waterline. Extremely lightweight boats may have a D-L ratio in voyaging trim below 100. Older voyaging boats commonly exceed 400. The higher the ratio, the greater the volume below the waterline, which translates into comfortable interiors with plenty of fuel and water capacity but often with a significant performance penalty. A good range for contemporary voyaging boats would be between 250 and 400 (using a realistic half load displacement number). The longer the boat in relation to a given payload, the lower the D-L ratio should be. (Once you get into the realm of 50-foot and longer boats designed for a single voyaging couple, it should be below 200.) SA-D ratio It is commonly assumed that a moderately heavy displacement boat, especially one loaded with stores, will be a dog to sail. This is often the case but need not be. So long as the boat is given adequate sail area to compensate for the weight and so long as it is stiff enough to stand up under this sail area, there is no reason for it not to have excellent performance. The key parameter here is the SA-D ratio, which is calculated for a sloop or cutter by determining the nominal area of the mainsail and the foretriangle in square feet, and dividing their sum by the boat’s displacement in cubic feet taken to the two-thirds power. The equation is: SA-D ratio = [(I x J)/2 + (P x E)/2]/(Displacement in cubic feet)0.67 I = the height of the foretriangle J = the horizontal distance from the forward side of the mast to the bottom of the headstay P = the hoist of the mainsail E = the foot of the mainsail [Editor’s Note: The figures for this section reflect a mathematical correction of the figures published in Nigel Calder’s Cruising Handbook.] Let’s look at our Pacific Seacraft 40 with a half load displacement of 26,830 lbs. Sea water weighs 64 lbs per cubic foot, so the boat displaces 419 cubic feet (26,830/64). The displacement of 419 to the two-thirds power (or the power of 0.67) is 57.13. According to some of Pacific Seacraft’s literature, the boat’s sail area is 1,032 square feet. Using these two figures, divide the sail area by the displacement (1,032/57.13) to get an SA-D ratio of However, in this case, the staysail was used to calculate the sail area, which is improper use of the formula (in other literature the staysail is excluded). Without the staysail, the sail area drops to 846 square feet, and the SA-D ratio to 14.81. A ratio of 15 to 16 is considered acceptable for a traditional voyaging boat; 17 to 19 is typical for performance cruisers; 20 to 22 is on the high My own inclination would be to aim for something around 18, which is on the performance end of things, because in many ways it is easier to reef down in a blow than it is to increase the sail area in a calm.?? In conditions where both the staysail and headsail are flown, the Pacific Seacraft has a fair amount of sail power, so long as the boat is stiff enough to carry this sail area (it is of no use at all if the boat can’t carry it without excessive heeling). The British boating magazine, Yachting Monthly, conducted a boat test in relatively heavy weather and commented that the “Pacific Seacraft 40 gives a sprightly performance, mainly due to its large sail area.” If the staysail is taken out of the picture (which it will be quite often, especially when off the wind), in light airs (which are encountered by voyagers far more than heavy weather), the basic sailplan is likely to leave the boat underpowered, especially when loaded for voyaging. Decent light air sails will be needed. Once again, when comparing boats it is important to use a realistic SA-D ratio. Not only do boat builders commonly exaggerate the sail area of their boats, but they also use some form of light ship displacement. Taken together, these result in completely unrealistic SA-D ratios. If the numbers are reworked using 100 percent of the sail area and a realistic half load displacement, it is not uncommon for the SA-D ratio to drop by three full points (e.g., from 17 to 14). This article is an excerpt from contributing editor Nigel Calder’s current book, Nigel Calder’s Cruising Handbook, recently published by International Marine/McGraw Hill.
{"url":"https://oceannavigator.com/true-displacement/","timestamp":"2024-11-08T12:36:42Z","content_type":"text/html","content_length":"114182","record_id":"<urn:uuid:83ef850b-2b8b-4882-b686-5cc9733984f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00028.warc.gz"}
What determines the torque of a brushless motor? Since torque is a function of current in a brushless motor, and current is equal to the voltage drop in the circuit divided by the circuit resistance, when the back EMF is equal to the forward applied voltage minus the circuit losses the functional torque of the motor will reach a point where it runs out of the torque … How do you calculate torque on a brushless DC motor? 1. T = 4 * N * B * L * R * i. 2. This is the Equations to Know for simple estimation of BLDC torque. 3. � number of turns per phase. 4. � strength of the permanent magnetic field � length of the stator / core (or the magnet too, if they are equal) � radius of the stator. 5. � current in the motor windings. Do brushless motors have constant torque? The motor torque constant for brushed DC motors and brushless motors tells you how current relates to torque. In this equation, T is torque, Kt is the torque constant, and I is current. Kt has units of Newton-meters per Amp. How much torque does a brushless motor have? Brushless motors are capable of producing more torque and have a faster peak rotational speed compared to nitro- or gasoline-powered engines. Nitro engines peak at around 46,800 r/min and 2.2 kilowatts (3.0 hp), while a smaller brushless motor can reach 50,000 r/min and 3.7 kilowatts (5.0 hp). How do you calculate the torque of a DC motor? 1. Tg = armature or gross torque (N-m) = Force × radius. 2. r = radius of the armature in m. 3. N = speed of the armature in rpm = N/60 rps. What is the efficiency of a brushless DC motor? Brushless motor efficiency is very high in comparison to any combustion engine with values averaging between 70% and 90%. Does DC motor have constant torque? DC motors can develop a constant torque over a wide speed range. For a DC motor : Torque is proportional to armature current – this means controlling torque requires simply controlling the motor DC current – easily achieved with a simple DC drive. What does Kv stand for in motor? Kv describes the RPM (revolutions per minute) a motor does per Volt that is put into it. Generally speaking the more Kv a motor has, the more RPM and more power. For example, a 9000Kv motor would be faster than a 2200Kv motor. If Kv is like horsepower, then Turns is the physical attribute of a motor. What is the torque of DC motor? Armature Torque of DC Motor In a DC motor, a circumferential force (F) at a distance r which is the radius of the armature is acted on each conductor, tending to rotate the armature. The sum of the torques due to all the armature conductors is known as armature torque (τa). Which is the type of torque in DC motor? When the current-carrying current is placed in the magnetic field, a force is exerted or it which exerts turning moment or torque F x r. This torque is produced due to the electromagnetic effect, hence is called Electromagnetic torque. How can you increase the efficiency of a DC motor? To achieve maximum motor efficiency, experts from the Department of Energy (DOE) Industrial Technologies Program, Washington, D.C., recommend the following: 1. Eliminate voltage unbalance. 2. Replace V-belts with cogged or synchronous belt drives. 3. Avoid nuisance tripping. 4. Maintain motor shaft alignment. What is DC motor torque? The DC motor torque is the torque required to overcome the magnetic torque of the motor to start rotation. (OR) The force that it takes to spin at a certain speed we call torque and usually, for a dc motor high speed means low torque and high torque means low speed. How does a brushless electric motor work? How brushless motors work. The windings are energized based on the combination of the generated signals. To maintain the motion and keep the motor running, the magnetic field induced in the windings should shift position as the motor moves in order to keep up with the stator field. What is a 12 volt DC motor? A MET 12 Volt DC Motor is a low-voltage motor that is prepared custom to your company’s specific needs. We manufacture DC motors with speed control compatibility, which provide you with variable speeds. All of our Electric Motors have bi-directional and reversible capabilities. They are all of ball bearing construction and have heavy duty finishes. How do brushless hub motors work? Hub motors are typically brushless motors (sometimes called brushless direct current motors or BLDCs), which replace the commutator and brushes with half-a-dozen or more separate coils and an electronic circuit. The circuit switches the power on and off in the coils in turn creating forces in each one that make the motor spin.
{"url":"https://hostforstudent.com/what-determines-the-torque-of-a-brushless-motor/","timestamp":"2024-11-09T19:08:59Z","content_type":"text/html","content_length":"43690","record_id":"<urn:uuid:7cf0d39f-c4b6-4786-90d1-9131b1cb6f89>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00021.warc.gz"}
Am I going too hard? I’m at the end of the Beginner program (coming back from a shoulder injury) and I’m routinely nearing (or once egregious exceeding) RPE 10 for my sessions based on the calories burned according to my apple watch. My total calories burned usually had me around RPE 9 for my week 8s, but ever since I moved to week 9 and up, my RPE has shot up. Is it a sign I should lighten my loads rather than going off of the estimated weight based on percentages? Based on calories burned? RPE is a scale of perceived exertion, so I’m not sure why calories would be considered exceeding estimates. RPE 10 is a max effort lift and anything beyond is simply failed. 1 Like Yes, but in the template it has RPE for the session, and my sessions are technically hitting 10 and above If you’re not literally hitting your absolute maxes on every set and if each session doesn’t feel like the most difficult workout you’ve ever done, you’re not hitting RPE 10 “and above”. “And above” would be failing sets every session or not completing your workouts due to fatigue. Calories are not a relevant metric in RPE estimation. What is your understanding of RPE? 1 Like I mean they are brutal workouts. I understand that RPE for each set is about how many I estimate I have left in the tank, but on the template it has you input Session Time and Session RPE to calculate the calories you burned during the workout. For example, I went 70 minutes, estimating that I worked out at an RPE of 9 for those 70 minutes would give me an estimated calories burned of 630 since it is calculating RPE multiplied by minutes. I had one workout where I burned 740 calories in 65 minutes (which would be 11.5*65) another where I’m buring 723 in 70 minutes, which to get the session calculator to reflect that requires me to put 11 or 10 in that part of the spreadsheet sRPE isn’t a direct mathematical calculation of minutes and calories that can be reverse calculated based on what an external device is telling you. sRPE is how your overall session feels, and how hard it was relative to other workouts you’ve done. You can’t have an sRPE of “11.5”, because that means you failed your workout. 11.5 is not on the RPE scale. Your caloric calculation estimate can’t reverse-engineer your sRPE, since RPE is a subjective report; that’s a needless complication potentially skewing your data. I would enter sRPE first: how hard did the workout feel on a scale of 1-10, with 10 being the absolute most difficult workout you’ve completed? After that, write down your session time. Worry about potential calories burned only after you’ve done that. If you’re actually hitting RPE 9’s for workouts consistently I would expect you’re overshooting your set RPE on a regular basis, possibly depending on what template you’re running. 2 Likes
{"url":"https://forum.barbellmedicine.com/t/am-i-going-too-hard/13789","timestamp":"2024-11-06T09:10:26Z","content_type":"text/html","content_length":"35671","record_id":"<urn:uuid:77a843e5-0e89-4431-bcf4-51651c0ce434>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00163.warc.gz"}
Notation and Modeling Subtraction of Integers Learning Outcomes • Model subtraction of integers You learn as a child how to subtract numbers through everyday experiences. For example, if you have 10 cheese crackers and eat 6 of them, you will have 4 cheese crackers left. Real-life experiences serve as models for subtracting positive numbers, but it is difficult to relate subtracting negative numbers to common life experiences. Most people do not have an intuitive understanding of subtraction when negative numbers are involved. Math teachers use several different models to explain subtracting negative numbers, but we will continue to use counters to model subtraction. Remember, the blue counters represent positive numbers and the red counters represent negative numbers. Perhaps when you were younger, you read [latex]5 - 3[/latex] as five take away three. When we use counters, we can think of subtraction the same way. Through the next few examples, we will model four subtraction scenarios involving both positive and negative integers. • [latex]5 - 3[/latex] , positive [latex]-[/latex] positive • [latex]- 5-\left(-3\right)[/latex] , negative [latex]-[/latex] negative • [latex]-5 - 3[/latex] , negative [latex]-[/latex] positive • [latex]5-\left(-3\right)[/latex] , positive [latex]-[/latex] negative Model: [latex]5 - 3[/latex]. Interpret the expression. [latex]5 - 3[/latex] means [latex]5[/latex] take away [latex]3[/latex] . Model the first number. Start with [latex]5[/latex] positives. Take away the second number. So take away [latex]3[/latex] positives. Find the counters that are left. [latex]5 - 3=2[/latex] The difference between [latex]5[/latex] and [latex]3[/latex] is [latex]2[/latex] . Now you can try a similar problem. try it Model the expression: [latex]6 - 4[/latex] Show Solution Model the expression: [latex]7 - 4[/latex] Show Solution In the previous example we subtracted positive [latex]3[/latex] from positive [latex]5[/latex]. Now we will subtract negative [latex]3[/latex] from negative [latex]5[/latex]. Compare the results of this example to the previous example after you read through it. Model: [latex]-5-\left(-3\right)[/latex] Show Solution You can try a similar problem. try it Model the expression: Show Solution Model the expression: Show Solution Notice that the previous two examples are very much alike. • First, we subtracted [latex]3[/latex] positives from [latex]5[/latex] positives to get [latex]2[/latex] positives. • Then we subtracted [latex]3[/latex] negatives from [latex]5[/latex] negatives to get [latex]2[/latex] negatives. Each example used counters of only one color, and the “take away” model of subtraction was easy to apply. Now let’s see what happens when we subtract one positive and one negative number. We will need to use both positive and negative counters and sometimes some neutral pairs, too. Remember that adding a neutral pair does not change the value (it’s like adding zero to any number). Model: [latex]-5 - 3[/latex] Interpret the expression. [latex]-5 - 3[/latex] means [latex]-5[/latex] take away [latex]3[/latex] . Model the first number. Start with [latex]5[/latex] negatives. Take away the second number. So we need to take away [latex]3[/latex] positives. But there are no positives to take away. Add neutral pairs until you have [latex]3[/latex] positives. Now take away [latex]3[/latex] positives. Count the number of counters that are left. [latex]-5 - 3=-8[/latex] The difference of [latex]-5[/latex] and [latex]3[/latex] is [latex]-8[/latex] . When modeling these types of problems, you will always need to add neutral pairs to the initial value until you have enough of the correct type of counters to remove. Now you can try a similar problem. try it Model the expression: [latex]-6 - 4[/latex] Show Solution Model the expression: [latex]-7 - 4[/latex] Show Solution Now we will subtract a negative number from a positive number. Think of this as taking away the negative. Model: [latex]5-\left(-3\right)[/latex] Show Solution Now you can try a similar problem. try it Model the expression: Show Solution Model the expression: Show Solution Now we will do an example that summarizes the situations above, with different numbers. Recall the different scenarios: • subtracting a positive number from a positive number • subtracting a negative number from a negative number • subtracting a positive number from a negative number • subtracting a negative number from a positive number Model each subtraction. 1. [latex]8 − 2[/latex] 2. [latex]−8 − (−3)[/latex] 3. [latex]−5 − 4[/latex] 4. [latex]6 − (−6)[/latex] Show Solution Now you can try a similar problem. try it Model each subtraction. 1. [latex]7 - (-8)[/latex] 2. [latex]-7 - (-2)[/latex] 3. [latex]4 - 1[/latex] 4. [latex]-6 - 8[/latex] Show Solution Model each subtraction. 1. [latex]4 - (-6)[/latex] 2. [latex]-8 - (-1)[/latex] 3. [latex]7 - 3[/latex] 4. [latex]-4 - 2[/latex] Show Solution Each of the examples so far have been carefully constructed so that the sign of the answer matched the sign of the first number in the expression. For example, in [latex]−5 − 4[/latex], the result is [latex]-9[/latex], which is the same sign as [latex]-5[/latex]. Now we will see subtraction where the sign of the result is different from the starting number. Model each subtraction expression: 1. [latex]2 - 8[/latex] 2. [latex]-3-\left(-8\right)[/latex] Show Solution Now you can try a similar problem. try it Model each subtraction expression. 1. [latex]7 - 9[/latex] 2. [latex]-5-9[/latex] Show Solution Model each subtraction expression. 1. [latex]5 - 8[/latex] 2. [latex]-7-\left(-10\right)[/latex] Show Solution When you subtract two integers, there are two possibilities, either the result will have a different sign from the starting number, or it will have the same sign. Watch the video below to see more examples of modeling integer subtraction with color counters.
{"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/notation-and-modeling-subtraction-of-integers/","timestamp":"2024-11-07T23:44:53Z","content_type":"text/html","content_length":"135403","record_id":"<urn:uuid:14943683-61cf-496f-97a5-61f9d07ab65b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00853.warc.gz"}
Mathematical Investigation Template A template to help students at the University of Bristol with their Mathematical Investigations projects \documentclass[a4paper,11pt]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{graphicx} \usepackage{enumerate} %you can add more packages using the same code above %------------------ %\setlength{\topmargin}{0.0in} %\setlength{\textheight}{10in} %\setlength{\oddsidemargin}{0.0in} %\setlength{\evensidemargin}{0.0in} %\setlength{\textwidth}{6.5in} %------------------- \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem {conjecture}[theorem]{Conjecture} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem*{example}{Example} %------------------ %Everything before begin document is called the pre-amble and sets out how the document will look %It is recommended you don't touch the pre-amble until you are familiar with LateX \begin{document} \title{A template and example of Latex} \ author{authors in alphabetical order} \date{} \maketitle \begin{abstract} Some sorts of documents need abstracts. Others do not. \end{abstract} %The following code is not run because of the percentage sign, but you might find it useful for future work % \tableofcontents \section{Introduction} Start your document with words, written in full sentences and paragraphs. %Using the percentage symbol, you can include comments in your code that do not appear in the output. It is a good idea to break your document into sections and subsections \section{Formating} We can \emph{emphasis} some words, i.e., make them \emph{italic}, and we can make some words \textbf{bold}. Note how using a new line in the code does not correspond to a new line in the output file. Same if we have a large white space. Instead, if we want a new line/new paragraph, you need to press enter twice, or use \\ which starts a new line but not a new paragraph. \subsection{lists} Lists can be numbered or ununmbered, and you can have sub-list inside a list. \begin{enumerate} \item This is the first item in a numbered list. \item And the second \item \begin{enumerate} \item Here the third item is in fact a numbered sub-list. \item item 2 of the numbered sub-list \end{enumerate} \item \begin{itemize} \item Here the fourth item is an unnumbered sub-list. \item item 2 of the unnumbered sub-list \ end{itemize} \end{enumerate} \subsection{Definitions and theorems} Definitions, theorems, lemmas and so on, are 'enviroments' (like documents and lists). They need to begin and end. \begin {definition}\label{my_def} A \emph{label} allows the user to tell Latex 'remember the numbering of that definition/theorem/equation' \end{definition} \begin{lemma} \label{my_lem} If something has a label, then we can refer to it, without knowing what number it is \end{lemma} \begin{proof} For example, by calling up Definition \ref{my_def}. This works even if the ordering of things move. Note that the end of proof square box is already there \end{proof} \begin{theorem} And a final theorem \end{theorem} \begin{proof} Combining Definition \ref{my_def} with Lemman \ref{my_lem} we get Equation \ref{my_eqn} below. \end{proof} \section{Including maths} Some maths, like $\varepsilon>0$ or $a_{23}=\alpha^3$, is written in-line. More important or complex maths is displayed on its own line. For example, $$ \lim_{x\to\infty}f(x)=\frac{\pi}{4}.$$ Sometimes you need multiple lines of maths to line up nicely: \begin{align*} f(x+y)&=(x+y,-2(x+y))\\ &=(x,-2x)+(y,-2y)\\ &=f(x)+f(y), \end {align*} and sometimes you want to number lines in an equation A^{T} & =\begin{pmatrix}1 & 2\\ 3 & 4 \end{pmatrix}^{T}\\ \label{my_eqn} & =\begin{pmatrix}1 & 3\\ 2 & 4 \end{pmatrix} \section {References and Figures} \LaTeX{} \cite{lamport94} also allows you to cite your sources. For more details on how this can be done, we refer the reader to \cite[sec:~Embedded System]{referencing}. But once you have a bibliography, you can use the cite command easily. Finally we add Figure \ref{fig:logo} to show how to add graphics. Note that we first need to make sure to have the graphic uploaded to Overleaf or saved in the same folder as your tex file (whichever is relevant to your case). Notice how the picture was resized using the scale command and that \LaTeX{} determine that the picture looks better above. \begin{figure} \centering \includegraphics[scale=0.3]{logo-full-colour.png} \caption{The logo for the University of Bristol} \label{fig:logo} \end{figure} \begin{thebibliography} {99} \bibitem{lamport94} Leslie Lamport, \textit{\LaTeX: a document preparation system}, Addison Wesley, Massachusetts, 2nd edition, 1994. \bibitem{referencing} Wikibooks, \textit{LaTeX/Bibliography Management}, [0nline], Accessed at https://en.wikibooks.org/wiki/LaTeX/Bibliography\_Management, (DATE ACCESSED). \end{thebibliography} \end{document}
{"url":"https://ko.overleaf.com/latex/templates/mathematical-investigation-template/gjwpnbjxdbmh","timestamp":"2024-11-12T13:59:57Z","content_type":"text/html","content_length":"41482","record_id":"<urn:uuid:d3aed094-7ebe-43d0-b2d1-81348d76fb35>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00795.warc.gz"}
Interpreting the Sign of a Derivative (A new question of the week) Sometimes when we are learning a new subject, in this case calculus, a superficially simple question can be confusing. And that can be a good thing! Let’s look at a question about a derivative that, while not using very advanced concepts, gives a challenge to the learner that forces deeper thought about the concept, requiring distinctions that a routine question would not require the student to The question came from Akhtar (father of a student), last August: Dear Sir, Hi, I have a question related to derivative (rate of change). Suppose you are manager of a trucking firm and one of your drivers reports that, according to her calculations, her truck burns fuel at the rate of G(x) = (1/200)(800/x + x), G(x) = (1/200)((-800/x^2) + 1) gallons per mile when traveling at x miles per hour on a smooth dry road. 1. If the driver tells you that she wants to travel 20 miles per hour, what should you tell her? 2. If the driver wants to go 40 miles per hour, what should you say? I need guidance about the solution of this question. In first part should we take x = 20 miles per hour in G'(x) to get G'(x) = -1/200 ? Book answer is 1/200, go faster. One thing is what does the minus sign tell us here. Second part we get G'(40) = 1/400. Book answer is G'(40) = 1/400, go slower. Thanks in anticipation. There are some points in the question to be clarified or corrected; and the problem itself, assuming it was reported fully, is a little unclear as to what kind of answer is expected. It was very helpful to be told what the book’s answers are, as that both helps us to understand the problem, and shows where Akhtar’s difficulty is. (Sometimes it turns out that a student’s only issue is that the book was wrong, and we don’t find out until the end of a confusing discussion!) Doctor Rick answered, starting with a correction to the question: Hi, Akhtar. I was confused at first because you gave two different functions both called G(x). But when I started work on the problem, I realized that the second G(x) is actually G'(x), the derivative of the first G(x). I presume you accidentally omitted the prime. So really, we are told that the fuel usage at x miles/hour is $$G(x) = \frac{1}{200}\left(\frac{800}{x} + x\right)\text{ gal/mi},$$ and the derivative of this is $$G'(x) = \frac{1}{200}\left(\frac {-800}{x^2} + 1\right).$$ This is the correct derivative, whether it was provided as part of the problem, or supplied by Akhtar as part of his work. (An interesting side question would be, what are the units of the derivative? It’s “gallons per mile, per mile per hour”, or \(\frac{\text{gal/mi}}{\text{mi/hr}}\) – a rate of a rate with respect to a rate, which will be part of the difficulty in this problem!) We are not told the details of the student’s level of knowledge, but can make a guess: It appears that this problem is presented in the lead-up to minimization/maximization problems, and maybe even before the student has become proficient at differentiation, since the problem gives the derivative. All that’s required of the student is to evaluate G'(x) for a particular value, and then to understand how the sign of G'(x) relates to the real world. It isn’t quite clear what is actually being asked for, and what the first answer refers to; the “1/200” can’t be G(20), which is 0.3, so we have to interpret it as “G'(20) = 1/200”, which is not quite right. It’s -1/200, as Akhtar said. (Possibly, they really said that was the absolute value of the derivative.) The second answer gives us a clue about the goal: to decide whether it will be better to go faster or slower. If the book’s answer to the first part said G'(20) = 1/200, that is incorrect; you got that part right. It’s the second task that is confusing you. Notice what G(x) means: it is the rate of fuel consumption in gallons per mile at speed x miles/hour. What do you suppose the manager would like to achieve? I’d say he or she will want to save money by using less fuel on a given trip (of a fixed distance). In light of this goal, can you see why in the first case you would tell the driver to go faster? If it still isn’t clear, tell me your reasoning and we can think further about it. So we need to decide whether increasing or decreasing speed will reduce fuel usage. Akhtar replied: Thank you sir. If G(x) represents the rate of fuel consumption then what is G'(x)? Is it rate of change of fuel consumption? If so then G'(20) = -1/200 =-0.005 it is very low rate of change. So low consumption, then why we suggest to go faster. In second part, G'(40) = 1/400 = 0.0025 so it is high as compare to first part so it is clear to go slower to save money by low consumption ….. but sir explain the first part. Akhtar has evaluated the derivative and found that G'(20) = -0.005, while G'(40) = 0.0025. But he is interpreting these as if they were the rate of consumption, rather than the rate of change of the rate of consumption. This is not easy to interpret at first, particularly since the word “rate” is being used not as a rate with respect to time (as in miles per hour, as we are used to) but with respect to speed. It probably also doesn’t help that x here represents not position but speed! There are many things here that are unfamiliar, and require us to slow down and reevaluate what things Doctor Rick replied: Yes, G'(x) represents the rate of change of fuel consumption (with respect to speed). If the rate of change were constant, G'(x) = -0.005 would mean that for every increase of 1 mile/hour in the truck’s speed, the rate of fuel consumption would decrease by 0.005 gallon/mile. So G’ tells us not how much fuel we are using, but how much (and in what direction) that amount will change if we change our speed. Negative means the consumption will decrease (get better) if we increase our speed. Likewise in the second part, for every increase of 1 mile/hour in speed, the rate of fuel consumption would increase by 0.0025 gallon/mile. That’s a smaller rate of change than in the first part (in terms of distance from zero, that is, the absolute value of G'(x)). So if the decision were to be made on the basis of how much the fuel consumption changes, there would be even less reason to change the speed in the second part than in the first. Akhtar’s focus on the size (absolute value) of G’ is misplaced: But the important part is the direction of the change, that is, the sign of the derivative. In the first part, going faster causes the rate of fuel consumption to decrease, which is what the manager wants. In the second part, going faster causes fuel consumption to increase; going slower causes fuel consumption to decrease. So in the second part the manager says to slow down. This is where the answers come from. The best speed is greater than 20 mph, and less than 40 mph. That’s sufficient to answer the question, but let’s consider your thought that G'(x) = -1/200 is “a very low rate of change.” We may not have a good feeling for the size of a rate of change of fuel consumption — that isn’t something we talk about often. To help us think about it, I’ve graphed G(x): At x = 20 miles/hour (and also at x = 40 miles/hour), the fuel consumption rate is 0.3 gallons/mile. We could also say that the “mileage” of the truck at these speeds is 1/0.3 = 10/3 = 3.33 miles per gallon; that is, the truck will only go 3.33 miles on each gallon of fuel. (The mileage is a more familiar number to me, in the USA, and this is a very low number compared to that for automobiles, where we’d like to go 30 miles or more on a gallon of gasoline. I can believe, however, that 3.33 miles per gallon is reasonable for a large truck.) So at both 20 mph and 40 mph, the fuel consumption itself is the same, and that is a relatively high number. (At 30 miles per gallon, the consumption would be 1/30 = 0.0333 gallons per mile, much less than in our story.) Now compare with the minimum rate of fuel consumption, which I have found and labeled to be about 0.283 gallons/mile at a speed of about 28.284 miles/hour. By increasing the speed by 8.284 miles/ hour, we have reduced the fuel consumption rate by 0.3 – 0.283 = 0.017 gallons per mile. If the truck goes 100 miles, we save 1.7 gallons of fuel. On that 100-mile journey, we use 28.3 gallons rather than 30 gallons, a saving of 5.7%, which the manager probably doesn’t consider to be negligible. A small change in the fuel consumption can be very significant. The way to find the minimum, you may see from the graph, is to find when G'(x) = 0: $$\frac{1}{200}\left(\frac{-800}{x^2} + 1\right) = 0$$ $$\frac{-800}{x^2} + 1 = 0$$ $$x^2 = 800$$ $$x = \sqrt{800} = 28.284$$ The student at this point hasn’t gotten to the point of finding that minimum, and therefore being able to tell how much faster or slower to go, only the direction in which to change the speed to reduce fuel consumption. Yes, increasing speed by just 1 mile/hour won’t do much, but the saving adds up. I hope this helps. So what have we learned? When a problem is confusing, we need to decide what part to focus on, in this case the sign. We have to think carefully about what it means, and apply that to the goal. And that careful thought will hopefully lead to better understanding in the future. Leave a Comment This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://www.themathdoctors.org/interpreting-the-sign-of-a-derivative/","timestamp":"2024-11-04T09:01:52Z","content_type":"text/html","content_length":"118556","record_id":"<urn:uuid:9d57f9da-001b-41a3-8c9a-65ac9c601eea>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00713.warc.gz"}
EQ() Formula in Google Sheets Returns `TRUE` if two specified values are equal and `FALSE` otherwise. Equivalent to the `=` operator. Common questions about the EQ formula include: - What does the EQ formula do? - How does the EQ formula work? - How is the EQ formula different from other formulas? The EQ formula can be used appropriately to calculate the equation of a line given two points for the line. The EQ formula is commonly mistyped when forgetting the commas between data points. Common ways the EQ formula is used inappropriately include using it in inappropriate contexts, such as calculating the equation of a curve instead of a line, or using the wrong data between points. Common pitfalls when using the EQ formula include not completely understanding the context in which the formula is used, and not specifying which data points to use when calculating the equation of a Common mistakes when using the EQ Formula include omitting valid points, not using the correct order of points, and using the wrong set of points. Common misconceptions people might have with the EQ Formula include thinking that the formula can be applied to any type of line or curve, and not understanding that the formula only works in a two-dimensional context.
{"url":"https://www.bettersheets.co/formulas/eq","timestamp":"2024-11-14T09:04:15Z","content_type":"text/html","content_length":"32181","record_id":"<urn:uuid:9b4e7554-0f0b-47f9-9749-80614e27674d>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00318.warc.gz"}
15 Comparing quantitative data between individuals | Scientific Research and Methodology 15 Comparing quantitative data between individuals So far, you have learnt to ask a RQ, design a study, collect the data, and describe the data. In this chapter, you will learn to compare quantitative data in different groups. You will learn to: • compare quantitative data between individuals using the appropriate graphs. • compare quantitative between within individuals in summary tables. 15.1 Introduction Relational RQs compare groups. This chapter considers how to compare quantitative variables in different groups. Graphs are useful this purpose, and a table of the numerical summaries usually are also produced. 15.2 Summarising the comparison: difference between means The best way to compare the two groups is to summarise the difference between the two means. A numerical summary table can be constructed summarising both group, and reporting the difference between the two means. Example 15.1 (Numerical summary table) Wright et al. (2021) recorded the number of chest-beats by gorillas (Table 15.1), for gorillas under \(20\) years old ('younger') and \(20\) years and over ('older'). A summary of the data can be tabulated as in Table 15.2. Notice that no standard deviation or sample size is provided for the difference; these make no sense. TABLE 15.1: The chest-beating rate of gorillas (in beats per \(10\)). \(0.7\) \(1.5\) \(1.7\) \(2.6\) \(4.4\) \(0.0\) \(0.3\) \(0.8\) \(1.6\) \(0.9\) \(1.5\) \(1.7\) \(3.0\) \(4.4\) \(0.1\) \(0.4\) \(0.9\) \(4.0\) \(1.3\) \(1.5\) \(1.8\) \(4.1\) \(0.2\) \(0.6\) \(1.1\) TABLE 15.2: A numerical summary of the gorillas data. (in beats per 10 h) (in beats per 10 h) size Younger \(2.22\) \(1.270\) \(14\) Older \(0.91\) \(1.131\) \(11\) Difference \(1.31\) 15.3 Graphs When a quantitative variable is measured or observed in different groups (i.e., between individuals), the distribution of each variable can be graphed separately. However, to comparing the quantitative variable in the groups, appropriate graphs include: • Back-to-back stemplots: best for small amounts of data; only possible for comparing two groups; • 2-D dot charts: best choice for small to moderate amounts of data; • Boxplots: best choice, except for small amounts of data. These situations have one quantitative variable being compared in different groups (defined by one qualitative variable). 15.3.1 Back-to-back stemplot Back-to-back stemplots are two stemplots (Sect. 11.3.2) sharing the same stems; one group has the leaves emerging left-to-right from the stem, and the second group has the leaves emerging right-to-left from the stem. Back-to-back stemplots can only be used when two groups are being compared. Again, one advantage of using stemplots over other plots is that the original data are retained. Disadvantage are that only two groups can be compared, and not all data work well with stemplots. Example 15.2 (Back-to-back stemplots) A back-to-back stemplot for comparing the chest-beating rate of gorillas (Fig. 15.1) has the leaves for younger gorillas right-to-left, and the leaves for older gorillas left-to-right, sharing the same stems. The younger gorillas have a faster chest-beating rate in general. One older gorilla has a much faster rate that the other older gorillas (a potential 15.3.2 2-D dot charts A 2-dimensional (2-D) dot chart places a dot for each observation, separated for each level of the qualitative variable (also see Sect. 12.3.1). Any number of groups can be compared. The axis displaying the counts (or percentages) need not start from zero, since the distance from the axis to the these numbers do not visually imply any quantity of interest. Rather, how the the dots compare in the groups is the main feature of interest. Example 15.3 (Boxplots) For the chest-beating data seen in Example 15.2, a dot chart is shown in Fig. 15.2. Many observations are the same, so some points would be overplotted if points were not stacked (left panel), or jittered (right panel). 15.3.3 Boxplots A boxplot is a picture of the quantiles (Sect. 11.7.3) for each group, drawn together on the same plot (and so are sometimes called parallel boxplots or side-by-side boxplots). Any number of groups can be compared using a boxplot. The distribution for each group is summarised by five numbers: the minimum value; the first quartieel (\(Q_1\)); the median (\(Q_2\)); the third quartile (\(Q_3\)); and the maximum value. Outliers, identified using the IQR rule (Sect. 11.8.2), are usually shown too. The values of \(Q_1\), the medians, and \(Q_3\) can be used to compare the distributions. Different software may use different rules for computing quartiles, and hence may produce slightly different boxplots. The axis displaying these five numbers need not start from zero, since the distance from the axis to the these numbers do not visually imply any quantity of interest. Rather, the boxes display the values of these five numbers for each group relative to each other, which is of interest. Boxplots summarise data with only five numbers, so detail of the distributions are lost. For this reason, boxplots are excellent for comparing distributions, but histograms are better for displaying the distribution of a single quantitative variable. Example 15.4 (Boxplots) The boxplot for the chest-beating data (Example 15.2) is shown in Fig. 15.3. No outliers are identified for younger gorillas; one large outlier is identified for the older gorillas. The boxplot shows a distinct difference between the chest-beating rates of older and younger gorillas. The boxplots are explained in Fig. 15.4. Firstly, focus on just the boxplot for the younger gorillas (i.e., the left box). Boxplots have five horizontal lines; from the top to the bottom of the plot: 1. Top line: The fastest chest-beating rate (largest value) is \(4.4\) per \(10\). 2. Second line from top: \(75\)% of observations are smaller than about \(3\), represented by the line at the top of the central box. This is the third quartile (\(Q_3\)). 3. Middle line: \(50\)% of observations are smaller than about \(1.7\), represented by the line inside the central box. This is an 'average' value, the second quartile (\(Q_2\)). 4. Second line from bottom: \(25\)% of observations are smaller than about \(1.5\), represented by the line at the bottom of the central box. This is the first quartile (\(Q_1\)). 5. Bottom line: The slowest chest-beating rate (smallest value) is \(0.7\) per \(10\). The box for the older gorillas is slightly different (Fig. 15.3, right box): one observation is identified with a point, above the top line. Computer software has identified this observation as a potential (large) extreme outlier using the IQR rule (Sect. 11.8.2), and has plotted this point separately. The values of \(Q_1\), the median and \(Q_3\) are all substantially larger for the younger gorillas, suggesting that younger gorillas have, in general, faster chest-beating rates. Example 15.5 (Boxplots) Boxplots can be plotted horizontally too, which leaves space for the labels of the qualitative variable. In Fig. 15.5 (based on Silva et al. (2016)), the three dental cements are very different regarding their push-out forces. 15.4 Example: water access López-Serrano et al. (2022) recorded data about access to water in three rural communities in Cameroon (Sect. 11.10). The study could be used to determine contributors to the incidence of diarrhoea in young children (\(85\) households had children under \(5\) years of age). The graphs (Fig. 15.6) and summary (Table 15.3) show that households in which diarrhoea was found in the last two weeks in children had older household coordinators, more people in the household, and more children under \(5\) years of age in the household. These may be expected: older female coordinators probably have more children, hence have more children in the household under \(5\) years of age, and so more children (and hence people) are in the household in general. TABLE 15.3: A summary of the quantitative variables in the water-access study, according to whether diarrhoea had been observed in the last two weeks in children under \(5\) years of age, for those household with children under \(5\) years of age. \(n\) Mean Median Std. dev. IQR Woman's age All households with children \(85\) \(40.2\) \(37.0\) \(13.90\) \(28.00\) Incidents of diarrhoea \(26\) \(45.0\) \(46.5\) \(14.04\) \(28.50\) No incidents of diarrhoea \(59\) \(38.1\) \(35.0\) \(13.44\) \(22.50\) Household size Difference \(\phantom{0}6.8\) All households with children \(85\) \(\phantom{0}8.4\) \(\phantom{0}7.0\) \(\phantom{0}4.93\) \(\phantom{0}6.00\) Incidents of diarrhoea \(26\) \(10.5\) \(\phantom{0}8.5\) \(\phantom{0}6.51\) \(\phantom{0}7.75\) Children under 5 in household No incidents of diarrhoea \(59\) \(\phantom{0}7.5\) \(\phantom{0}6.0\) \(\phantom{0}3.78\) \(\phantom{0}4.50\) Difference \(\phantom{0}2.9\) All households with children \(85\) \(\phantom{0}2.2\) \(\phantom{0}2.0\) \(\phantom{0}1.56\) \(\phantom{0}2.00\) Incidents of diarrhoea \(26\) \(\phantom{0}2.8\) \(\phantom{0}2.0\) \(\phantom{0}2.01\) \(\phantom{0}1.75\) No incidents of diarrhoea \(59\) \(\phantom{0}1.9\) \(\phantom{0}2.0\) \(\phantom{0}1.26\) \(\phantom{0}1.00\) Difference \(\phantom{0}0.8\) 15.5 Chapter summary Quantitative data can be compared between different groups (between individuals comparisons) using a back-to-back stemplot, boxplot or \(2\)-D dot chart. A summary table should show the numerical summaries for the levels of the quantitative variable, and the between-group differences. 15.6 Quick review questions Are the following statements true or false? 1. A boxplot is an appropriate graph for comparing a quantitative variable in two or more groups. 2. A back-to-back stemplot is an appropriate graph for comparing a quantitative variable in two or more groups. 3. A case-profile plot is an appropriate graph for comparing a quantitative variable in two or more groups. 4. When comparing a quantitative variable in two or more groups, the sample size for the difference should be included 15.7 Exercises Answers to odd-numbered exercises are available in App. E. Exercise 15.1 Hale et al. (2009) studied two different engineering project delivery methods (Fig. 15.7, left panel): Design/Build and Design/Bid/Build. The grey, horizontal line is where the projected costs are the same as the actual cost. 1. What does the plot reveal about the two methods? 2. What is the median for each method (approximately)? 3. What is the IQR for each method (approximately)? Exercise 15.2 [Dataset: AISsub] Telford and Cunningham (1991) studied athletes at the Australian Institute of Sport (AIS). Numerous physical and blood measurements were taken from high performance athletes. Figure 15.7 (right panel) compares the heights of females in two similar sports: basketball and netball. (Netball was derived from basketball.) 1. What does the plot reveal about the heights of the females in each sport? 2. What is the median for each sport (approximately)? 3. What is the IQR for each sport (approximately)? Exercise 15.3 Match the histograms with the corresponding boxplots in the activity below. Exercise 15.4 Lunn and McNeil (1991; Hand et al. 1996) compared the dimensions of jellyfish at two sites at Hawkesbury River, NSW (Dangar Island; Salamander Bay) to determine the difference between the jellyfish at each site. A histogram of the breadth of jellyfish at Dangar Island Bay is shown in Fig. 15.8 (left panel). 1. Two students are arguing about the median breadth. Who is correct? Student 1 says: The bars in the histogram have heights of \(10\), \(2\), \(4\), \(2\) and \(4\). When these numbers are put in order, they are: \(2\), \(2\), \(4\), \(4\), \(10\). The median breadth is the median of these numbers, so the median breadth is the middle one: \(4\)is the median. Student 2 responds: You have the correct answer, but for the wrong reason! There are five bars, and the middle bar is the third bar. Since the third bar has a height of \(4\), the median breadth is \(4\). 2. Describe the histogram. 3. A boxplot comparing the breadths of jellyfish at Dangar Island and Salamander Bay is shown in Fig. 15.8 (right panel). Describe and compare the breadths of the jellyfish. 4. Which box in the boxplot represents the Dangar Island jellyfish (shown in Fig. 15.8, left panel)? Exercise 15.5 Gatti et al. (2013) studied the productivity of construction workers, recording (among other things) the rate at which concrete panels could be installed by workers. Data for three different female workers in the study are shown in Table 15.4. 1. Compute the IQR for each worker. 2. Construct the boxplot for comparing the three workers. 3. Draw the approximate histograms for each worker. 4. What do you learn about the workers? TABLE 15.4: The productivity of three workers installing concrete panels (in panels per minute). Worker 1 Worker 2 Worker 3 Mean \(1.24\) \(1.73\) \(1.36\) Minimum \(0.59\) \(1.13\) \(0.86\) 1st quartile \(0.88\) \(1.51\) \(1.16\) Median \(1.35\) \(1.70\) \(1.38\) 3rd quartile \(1.49\) \(1.91\) \(1.58\) Maximum \(1.88\) \(3.00\) \(2.17\) Exercise 15.6 In a study of the temperature in offices, Paul and Taylor (2008) compared the temperature in three offices (during working hours) at Charles Sturt University (Australia); the data are summarised in Table 15.5. 1. Compute the IQR for each office. 2. Construct the boxplot for comparing the three offices. 3. Draw the approximate histograms for each office. 4. What do you learn about the offices? TABLE 15.5: A summary of the temperature (in degrees C) in three offices at CSU during working hours according to current smoking status. Office A Office B Office C Mean \(24.1\) \(25.3\) \(25.7\) Minimum \(16.4\) \(15.9\) \(20.1\) \(Q_1\) \(22.8\) \(23.8\) \(24.6\) Median \(24.4\) \(25.5\) \(26.1\) \(Q_3\) \(25.5\) \(26.9\) \(27.2\) Maximum \(27.4\) \(31.0\) \(30.3\) Exercise 15.7 [Dataset: NHANES] Consider this RQ: Among Americans, is the mean direct HDL cholesterol different for current smokers and non-smokers? Data to answer this RQ is available from the American National Health and Nutrition Examination Survey (NHANES) (Pruim 2015). 1. What would be an appropriate graph to display the comparison? 2. Use the software output (Fig. 15.9) to construct an appropriate table showing the numerical summary relevant to the RQ. Exercise 15.8 [Dataset: ForwardFall] Wojcik et al. (1999) compared the lean-forward angle in younger and older women. An elaborate set-up was constructed to measure this angle, using a harnesses. Consider the RQ: Among healthy women, what is difference between the mean lean-forward angle for younger women compared to older women? The data are shown in Table 15.6. 1. What is an appropriate graph to display the comparison? 2. Construct an appropriate numerical summary from the software output (Fig. 15.10). TABLE 15.6: Lean-forward angles for older women (\(n = 10\)) and younger women (\(n = 5\)). \(29\) \(34\) \(33\) \(27\) \(28\) \(18\) \(15\) \(23\) \(13\) \(12\) \(32\) \(31\) \(34\) \(32\) \(27\) Exercise 15.9 [Dataset: Speed] Ma et al. (2019) studied adding additional signage to reduce vehicle speeds on freeway exit ramps. At one site (Ningxuan Freeway), speeds were recorded for \(38\) vehicles before the extra signage was added, and then for \(41\) different vehicles after the extra signage was added (data below). The researchers are hoping that the addition of extra signage will reduce the mean speed of the vehicles. The RQ is: At this freeway exit, how much is the mean vehicle speed reduced after extra signage is added? 1. Using the software output in Fig. 15.11, summarise the data numerically, then construct a suitable summary table. 2. Produce a boxplot of the data (use a computer if necessary). Exercise 15.10 [Dataset: Deceleration] Ma et al. (2019) studied adding additional signage to reduce vehicle speeds on freeway exit ramps. At one site (Ningxuan Freeway), speeds were recorded at various points on the freeway exit for \(38\) vehicles before the extra signage was added, and then for \(41\) vehicles after the extra signage was added. From this data, the deceleration of each vehicle was determined (data below) as the vehicle left the \(120\).h^\(-1\) speed zone and approached the \(80\).h^\(-1\) speed zone. The RQ is: At this freeway exit, what is the difference between the mean vehicle deceleration, comparing the times before the extra signage is added and after extra signage is added? In this context, the researchers are hoping that the extra signage might cause cars to slow down faster (i.e., they will decelerate more, on average, after adding the extra signage). 1. Using the software output in Fig. 15.12, summarise the data numerically, then construct a suitable summary table. 2. Produce a boxplot of the data (use a computer if necessary). Exercise 15.11 [Dataset: Typing] The Typing dataset contains information about the typing speed and accuracy for students, from an online typing test (Pinet et al. 2022). The four variables include are: typing speed (mTS), typing accuracy (mAcc), age (Age), and sex (Sex) for \(1\ 301\) students. 1. Produce appropriate numerical summaries for the quantitative variables. 2. Produce appropriate numerical summaries for comparing the quantitative variables for different values of the qualitative variable. 3. What do you learn from these numerical summaries? Exercise 15.12 [Dataset: Dental] Woodward and Walker (1994) recorded the sugar consumption and the number of decayed, missing or filled teeth (DMFT) in \(29\) industrialised countries and \(61\) non-industrialised countries. 1. Produce appropriate numerical summaries for the two quantitative variables. 2. Produce appropriate numerical summaries for comparing the two quantitative variables for industrialised countries and non-industrialised countries. 3. What do you learn from these numerical summaries?
{"url":"https://bookdown.org/pkaldunn/SRM-Textbook/BetweenQuantData.html","timestamp":"2024-11-12T22:04:49Z","content_type":"text/html","content_length":"84988","record_id":"<urn:uuid:77b5dd21-e1ce-488e-a992-1021afc32987>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00168.warc.gz"}
How to Sum Every nth Row in Excel November 12, 2024 - Excel Office How to Sum Every nth Row in Excel This example shows you how to create an array formula that sums every nth row in Excel. We will show it for n = 3, but you can do this for any number. 1. The ROW function returns the row number of a cell. 2. The MOD function gives the remainder of a division. For example, for the first row, MOD(1,3) equals 1. 1 is divided by 3 (0 times) to give a remainder of 1. For the third row, MOD(3,3) equals 0. 3 is divided by 3 (exactly 1 time) to give a remainder of 0. As a result, the formula returns 0 for every 3th row. Note: change the 3 to 4 to sum every 4th row, to 5 to sum every 5th row, etc. 3. Slightly change the formula as shown below. 4. To get the sum of the product of these two ranges (FALSE=0, TRUE=1), use the SUM function and finish by pressing CTRL + SHIFT + ENTER. Note: The formula bar indicates that this is an array formula by enclosing it in curly braces {}. Do not type these yourself. They will disappear when you edit the formula. Explanation: The product of these two ranges (array constant) is stored in Excel’s memory, not in a range. The array constant looks as follows. This array constant is used as an argument for the SUM function, giving a result of 92
{"url":"https://www.xlsoffice.com/excel-functions/how-to-sum-every-nth-row-in-excel/","timestamp":"2024-11-12T05:13:30Z","content_type":"text/html","content_length":"65118","record_id":"<urn:uuid:1853c9a5-2496-47ff-a526-da8f7d8d2740>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00787.warc.gz"}
Look Up Over 150 Math Definitions With This Handy Glossary (2024) Understanding math terms is important because mathematics is often referred to as the language of science and the universe, and it's not just about numbers. It encapsulates a vast array of concepts, principles, and terminology—from the foundational basics of counting to the complexities of calculus and beyond. In this A to Z glossary, you'll find fundamental math concepts ranging from absolute value to zero slope. There's also a bit of history, with terms named after famous mathematicians. A to Z Glossary of Math Terms Abacus:An early counting tool used for basic arithmetic. Absolute Value:Always a positive number, absolute value refers to the distance of a number from 0. Acute Angle:An angle whose measure is between zero degrees and 90 degrees, or with less than 90-degree radians. Addend:A number involved in an addition problem; numbers being added are called addends. Algebra: The branch of mathematics that substitutes letters for numbers to solve for unknown values. Algorithm: A procedure or set of steps used to solve a mathematical computation. Angle: Two rays sharing the same endpoint (called the angle vertex). Angle Bisector: The line dividing an angle into two equal angles. Area: The two-dimensional space taken up by an object or shape, given in square units. Array: A set of numbers or objects that follow a specific pattern. Attribute: A characteristic or feature of an object—such as size, shape, color, etc.—that allows it to be grouped. Average: The average is the same as the mean. Add up a series of numbers and divide the sum by the total number of values to find the average. Base: The bottom of a shape or three-dimensional object, what an object rests on. Base 10: Number system that assigns place value to numbers. Bar Graph: A graph that represents data visually using bars of different heights or lengths. BEDMAS or PEMDAS Definition: An acronym used to help people remember the correct order of operations for solving algebraic equations. BEDMAS stands for "Brackets, Exponents, Division, Multiplication, Addition, and Subtraction" and PEMDAS stands for "Parentheses, Exponents, Multiplication, Division, Addition, and Subtraction". Bell Curve: The bell shape created when a line is plotted using data points for an item that meets the criteria of normal distribution. The center of a bell curve contains the highest value points. Binomial: A polynomial equation with two terms usually joined by a plus or minus sign. Box and Whisker Plot/Chart:A graphical representation of data that shows differences in distributions and plots data set ranges. Calculus:The branch of mathematics involving derivatives and integrals, Calculus is the study of motion in which changing values are studied. Capacity:The volume of substance that a container will hold. Centimeter:A metric unit of measurement for length, abbreviated as cm. 2.5 cm is approximately equal to an inch. Circumference:The complete distance around a circle or a square. Chord:A segment joining two points on a circle. Coefficient:A letter or number representing a numerical quantity attached to a term (usually at the beginning). For example, x is the coefficient in the expression x(a + b) and 3 is the coefficient in the term 3y. Common Factors:A factor shared by two or more numbers, common factors are numbers that divide exactly into two different numbers. Complementary Angles: Two angles that together equal 90 degrees. Composite Number:A positive integer with at least one factor aside from its own. Composite numbers cannot be prime because they can be divided exactly. Cone:A three-dimensional shape with only one vertex and a circular base. Conic Section:The section formed by the intersection of a plane and cone. Constant:A value that does not change. Coordinate:The ordered pair that gives a precise location or position on a coordinate plane. Congruent:Objects and figures that have the same size and shape. Congruent shapes can be turned into one another with a flip, rotation, or turn. Cosine:In a right triangle, cosine is a ratio that represents the length of a side adjacent to an acute angle to the length of the hypotenuse. Cylinder:A three-dimensional shape featuring two circle bases connected by a curved tube. Decagon:A polygon or shape with ten angles and ten straight lines. Decimal:A real number on the base ten standard numbering system. Denominator:The bottom number of a fraction. The denominator is the total number of equal parts into which the numerator is being divided. Degree:The unit of an angle's measure represented with the symbol °. Diagonal:A line segment that connects two vertices in a polygon. Diameter:A line that passes through the center of a circle and divides it in half. Difference:The difference is the answer to a subtraction problem, in which one number is taken away from another. Digit:Digits are the numerals 0-9 found in all numbers. 176 is a 3-digit number featuring the digits 1, 7, and 6. Dividend:A number divided into equal parts (inside the bracket in long division). Divisor:A number that divides another number into equal parts (outside of the bracket in long division). Edge:A line is where two faces meet in a three-dimensional structure. Ellipse:An ellipse looks like a slightly flattened circle and is also known as a plane curve. Planetary orbits take the form of ellipses. End Point:The "point" at which a line or curve ends. Equilateral:A term used to describe a shape whose sides are all of equal length. Equation:A statement that shows the equality of two expressions by joining them with an equals sign. Even Number:A number that can be divided or is divisible by 2. Event:This term often refers to an outcome of probability; it may answer questions about the probability of one scenario happening over another. Evaluate:This word means "to calculate the numerical value". Exponent:The number that denotes repeated multiplication of a term, shown as a superscript above that term. The exponent of 3^4 is 4. Expressions:Symbols that represent numbers or operations between numbers. Face:The flat surfaces on a three-dimensional object. Factor:A number that divides into another number exactly. The factors of 10 are 1, 2, 5, and 10 (1 x 10, 2 x 5, 5 x 2, 10 x 1). Factoring:The process of breaking numbers down into all of their factors. Factorial Notation:Often used in combinatorics, factorial notations require that you multiply a number by every number smaller than it. The symbol used in factorial notation is ! When you see x!, the factorial of x is needed. Factor Tree:A graphical representation showing the factors of a specific number. Fibonacci Sequence:Named after Italian number theorist Leonardo Pisano Fibonacci, it's a sequence beginning with a 0 and 1 whereby each number is the sum of the two numbers preceding it. For example, "0, 1, 1, 2, 3, 5, 8, 13, 21, 34..." is a Fibonacci sequence. Figure:Two-dimensional shapes. Finite:Not infinite; has an end. Flip:A reflection or mirror image of a two-dimensional shape. Formula:A rule that numerically describes the relationship between two or more variables. Fraction:A quantity that is not whole that contains a numerator and denominator. The fraction representing half of 1 is written as 1/2. Frequency:The number of times an event can happen in a given period of time; often used in probability calculations. Furlong:A unit of measurement representing the side length of one square acre. One furlong is approximately 1/8 of a mile, 201.17 meters, or 220 yards. Geometry:The study of lines, angles, shapes, and their properties. Geometry studies physical shapes and object dimensions. Graphing Calculator:A calculator with an advanced screen capable of showing and drawing graphs and other functions. Graph Theory:A branch of mathematics focused on the properties of graphs. Greatest Common Factor:The largest number common to each set of factors that divides both numbers exactly. The greatest common factor of 10 and 20 is 10. Hexagon:A six-sided and six-angled polygon. Histogram:A graph that uses bars that equal ranges of values. Hyperbola:A type of conic section or symmetrical open curve. The hyperbola is the set of all points in a plane, the difference of whose distance from two fixed points in the plane is a positive Hypotenuse:The longest side of a right-angled triangle, always opposite to the right angle itself. Identity:An equation that is true for variables of any value. Improper Fraction:A fraction whose numerator is equal to or greater than the denominator, such as 6/4. Inequality:A mathematical equation expressing inequality and containing a greater than (>), less than (<), or not equal to (≠) symbol. Integers:All whole numbers, positive or negative, including zero. Irrational:A number that cannot be represented as a decimal or fraction. A number like pi is irrational because it contains an infinite number of digits that keep repeating. Many square roots are also irrational numbers. Isosceles:A polygon with two sides of equal length. Kilometer:A unit of measure equal to 1000 meters. Knot:A closed three-dimensional circle that is embedded and cannot be untangled. Like Terms:Terms with the same variable and same exponents/powers. Like Fractions:Fractions with the same denominator. Line:A straight infinite path joining an infinite number of points in both directions. Line Segment:A straight path that has two endpoints, a beginning, and an end. Linear Equation:An equation that contains two variables and can be plotted on a graph as a straight line. Line of Symmetry:A line that divides a figure into two equal shapes. Logic:Sound reasoning and the formal laws of reasoning. Logarithm:The power to which a base must be raised to produce a given number. If nx = a, the logarithm of a, with n as the base, is x. Logarithm is the opposite of exponentiation. Mean:The mean is the same as the average. Add up a series of numbers and divide the sum by the total number of values to find the mean. Median:The median is the middle value in a series of numbers ordered from least to greatest. When the total number of values in a list is odd, the median is the middle entry. When the total number of values in a list is even, the median is equal to the sum of the two middle numbers divided by two. Midpoint:A point that is exactly halfway between two locations. Mixed Numbers:Mixed numbers refer to whole numbers combined with fractions or decimals. Example 3 ^1/[2] or 3.5. Mode:The mode in a list of numbers are the values that occur most frequently. Modular Arithmetic:A system of arithmetic for integers where numbers "wrap around" upon reaching a certain value of the modulus. Monomial:An algebraic expression made up of one term. Multiple:The multiple of a number is the product of that number and any other whole number. 2, 4, 6, and 8 are multiples of 2. Multiplication:Multiplication is the repeated addition of the same number denoted with the symbol x. 4 x 3 is equal to 3 + 3 + 3 + 3. Multiplicand:A quantity multiplied by another. A product is obtained by multiplying two or more multiplicands. Natural Numbers:Regular counting numbers. Negative Number:A number less than zero denoted with the symbol -. Negative 3 = -3. Net:A two-dimensional shape that can be turned into a two-dimensional object by gluing/taping and folding. Nth Root:The nth root of a number is how many times a number needs to be multiplied by itself to achieve the value specified. Example: the 4th root of 3 is 81 because 3 x 3 x 3 x 3 = 81. Norm:The mean or average; an established pattern or form. Normal Distribution: Also known as Gaussian distribution, normal distribution refers to a probability distribution that is reflected across the mean or center of a bell curve. Numerator:The top number in a fraction. The numerator is divided into equal parts by the denominator. Number Line:A line whose points correspond to numbers. Numeral:A written symbol denoting a number value. Obtuse Angle:An angle measuring between 90° and 180°. Obtuse Triangle:A triangle with at least one obtuse angle. Octagon:A polygon with eight sides. Odds:The ratio or likelihood of a probability event happening. The odds of flipping a coin and having it land on heads are one in two. Odd Number:A whole number that is not divisible by 2. Operation:Refers to addition, subtraction, multiplication, or division. Ordinal:Ordinal numbers give relative positions in a set: first, second, third, etc. Order of Operations:A set of rules used to solve mathematical problems in the correct order. This is often remembered with acronyms BEDMAS and PEMDAS. Outcome:Used in probability to refer to the result of an event. Parallelogram:A quadrilateral with two sets of opposite sides that are parallel. Parabola:An open curve whose points are equidistant from a fixed point called the focus and a fixed straight line called the directrix. Pentagon:A five-sided polygon. Regular pentagons have five equal sides and five equal angles. Percent:A ratio or fraction with the denominator 100. Perimeter:The total distance around the outside of a polygon. This distance is obtained by adding together the units of measure from each side. Perpendicular:Two lines or line segments intersecting to form a right angle. Pi:Pi is used to represent the ratio of the circumference of a circle to its diameter, denoted with the Greek symbol π. Plane:When a set of points join together to form a flat surface that extends in all directions, this is called a plane. Polynomial:The sum of two or more monomials. Polygon:Line segments joined together to form a closed figure. Rectangles, squares, and pentagons are just a few examples of polygons. Prime Numbers:Prime numbers are integers greater than one that are only divisible by themselves and 1. Probability:The likelihood of an event happening. Product:The sum obtained through the multiplication of two or more numbers. Proper Fraction:A fraction whose denominator is greater than its numerator. Protractor:A semi-circle device used for measuring angles. The edge of a protractor is subdivided into degrees. Quadrant:One quarter (qua) of the plane on the Cartesian coordinate system. The plane is divided into 4 sections, each called a quadrant. Quadratic Equation:An equation that can be written with one side equal to 0. Quadratic equations ask you to find the quadratic polynomial that is equal to zero. Quadrilateral:A four-sided polygon. Quadruple:To multiply or to be multiplied by 4. Qualitative:Properties that must be described using qualities rather than numbers. Quartic:A polynomial having a degree of 4. Quintic:A polynomial having a degree of 5. Quotient:The solution to a division problem. Radius:A distance found by measuring a line segment extending from the center of a circle to any point on the circle; the line extending from the center of a sphere to any point on the outside edge of the sphere. Ratio:The relationship between two quantities. Ratios can be expressed in words, fractions, decimals, or percentages. Example: the ratio given when a team wins 4 out of 6 games is 4/6, 4:6, four out of six, or ~67%. Ray:A straight line with only one endpoint that extends infinitely. Range:The difference between the maximum and minimum in a set of data. Rectangle:A parallelogram with four right angles. Repeating Decimal:A decimal with endlessly repeating digits. Example: 88 divided by 33 equals 2.6666666666666... ("2.6 repeating"). Reflection:The mirror image of a shape or object, obtained from flipping the shape on an axis. Remainder:The number left over when a quantity cannot be divided evenly. A remainder can be expressed as an integer, fraction, or decimal. Right Angle:An angle equal to 90 degrees. Right Triangle:A triangle with one right angle. Rhombus:A parallelogram with four sides of equal length and no right angles. Scalene Triangle:A triangle with three unequal sides. Sector:The area between an arc and two radii of a circle, sometimes referred to as a wedge. Slope:Slope shows the steepness or incline of a line and is determined by comparing the positions of two points on the line (usually on a graph). Square Root:A number squared is multiplied by itself; the square root of a number is whatever integer gives the original number when multiplied by itself. For instance, 12 x 12 or 12 squared is 144, so the square root of 144 is 12. Stem and Leaf:A graphic organizer used to organize and compare data. Similar to a histogram, stem and leaf graphs organize intervals or groups of data. Subtraction:The operation of finding the difference between two numbers or quantities by "taking away" one from the other. Supplementary Angles:Two angles are supplementary if their sum is equal to 180°. Symmetry:Two halves that match perfectly and are identical across an axis. Tangent:A straight line touching a curve from only one point. Term:Piece of an algebraic equation; a number in a sequence or series; a product of real numbers and/or variables. Tessellation:Congruent plane figures/shapes that cover a plane completely without overlapping. Translation:A translation, also called a slide, is a geometrical movement in which a figure or shape is moved from each of its points the same distance and in the same direction. Transversal:A line that crosses/intersects two or more lines. Trapezoid:A quadrilateral with exactly two parallel sides. Tree Diagram:Used in probability to show all possible outcomes or combinations of an event. Triangle:A three-sided polygon. Trinomial:A polynomial with three terms. Unit:A standard quantity used in measurement. Inches and centimeters are units of length, pounds, and kilograms are units of weight, and square meters and acres are units of area. Uniform:Term meaning "all the same". It can be used to describe size, texture, color, design, and more. Variable:A letter used to represent a numerical value in equations and expressions. Example: in the expression 3x + y, both y and x are the variables. Venn Diagram:A Venn diagram is usually shown as two overlapping circles and is used to compare two sets. The overlapping section contains information that is true of both sides or sets and the non-overlapping portions each represent a set and contain information that is only true of their set. Volume:A unit of measure describing how much space a substance occupies or the capacity of a container, provided in cubic units. Vertex: The point of intersection between two or more rays, often called a corner. A vertex is where two-dimensional sides or three-dimensional edges meet. Weight:The measure of how heavy something is. Whole Number:A whole number is a positive integer. X-Axis:The horizontal axis in a coordinate plane. X-Intercept:The value of x where a line or curve intersects the x-axis. X:The Roman numeral for 10. x:A symbol used to represent an unknown quantity in an equation or expression. Y-Axis:The vertical axis in a coordinate plane. Y-Intercept:The value of y where a line or curve intersects the y-axis. Yard:A unit of measure that is equal to approximately 91.5 centimeters or 3 feet. Zero Slope: The slope of a horizontal line. Its slope is zero because a horizontal line has no incline.
{"url":"https://tp0610.com/article/look-up-over-150-math-definitions-with-this-handy-glossary","timestamp":"2024-11-13T22:07:33Z","content_type":"text/html","content_length":"127594","record_id":"<urn:uuid:5601f047-f9e8-435a-ac79-6439a75c3a01>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00373.warc.gz"}
Re: DOE augmentation help with an already blocked designRe: DOE augmentation help with an already blocked design Hi there, I was hoping I could get some help with an experiment I am currently running? It is a custom design with 3 factors in total. 2 continuous variables and 1, 2-level categorical variable. This is ran on JMP 15.2 Lets call them Continuous factor 1 = A (Range 25-75), Continuous factor 2 =B (Range 10-180) and Categorical factor= C( Acid, Neutral) I was interested in collecting information in the middle of the range of the continuous variables as well as any 2nd order interactions so the model contains the main effects, 2nd order interactions and the quadratic terms for the continuous variables. The default design suggested a 18 run design which was too many runs for one day so I selected to group the design into two blocks of 9 runs. Block 1 of testing has now been completed and it is clear from eyeballing the data that Factor A appears to be the most important factor by far and that optimal results are only going to be found around the midpoint of this factor. Block two of testing will proceed as planned but I am already thinking of how best to fix Factor A to its midpoint while seeing what impact varying factor B and C will have. Would it be possible to augment the design while locking Factor A to its mid point or would this be difficult as the design already contains a blocking factor and augmentation would add a third blocking factor to the mix?
{"url":"https://community.jmp.com/t5/Discussions/DOE-augmentation-help-with-an-already-blocked-design/m-p/801870","timestamp":"2024-11-04T19:57:21Z","content_type":"text/html","content_length":"755745","record_id":"<urn:uuid:68849d9b-7cfa-4569-a421-46bee7fccb60>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00017.warc.gz"}
All solutions here are SUGGESTED. Mr. Teng will hold no liability for any errors. Comments are entirely personal opinions. (i) Let X, Y be the output daily from mines A and B, respectively. (ii) CONTACT US We would love to hear from you. Contact us, or simply hit our personal page for more contact information
{"url":"https://theculture.sg/tag/year-2014/","timestamp":"2024-11-02T18:47:25Z","content_type":"text/html","content_length":"185670","record_id":"<urn:uuid:6c1e48c7-afd4-448b-bb38-047d4a526385>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00409.warc.gz"}
Simplifying Expressions - Definition, With Exponents, Examples - Grade Potential Washington DC, DC Simplifying Expressions - Definition, With Exponents, Examples Algebraic expressions can be challenging for beginner pupils in their primary years of college or even in high school. Still, learning how to process these equations is essential because it is primary information that will help them move on to higher mathematics and advanced problems across multiple industries. This article will go over everything you must have to learn simplifying expressions. We’ll cover the laws of simplifying expressions and then verify our skills via some practice problems. How Do I Simplify an Expression? Before you can learn how to simplify them, you must learn what expressions are at their core. In arithmetics, expressions are descriptions that have at least two terms. These terms can include numbers, variables, or both and can be connected through subtraction or addition. As an example, let’s review the following expression. 8x + 2y - 3 This expression contains three terms; 8x, 2y, and 3. The first two include both numbers (8 and 2) and variables (x and y). Expressions that include coefficients, variables, and occasionally constants, are also known as polynomials. Simplifying expressions is important because it paves the way for learning how to solve them. Expressions can be expressed in complicated ways, and without simplifying them, everyone will have a hard time attempting to solve them, with more opportunity for a mistake. Obviously, every expression differ concerning how they're simplified depending on what terms they include, but there are general steps that apply to all rational expressions of real numbers, regardless of whether they are square roots, logarithms, or otherwise. These steps are refered to as the PEMDAS rule, or parenthesis, exponents, multiplication, division, addition, and subtraction. The PEMDAS rule states that the order of operations for expressions. 1. Parentheses. Resolve equations within the parentheses first by applying addition or applying subtraction. If there are terms right outside the parentheses, use the distributive property to apply multiplication the term outside with the one on the inside. 2. Exponents. Where workable, use the exponent principles to simplify the terms that include exponents. 3. Multiplication and Division. If the equation calls for it, use multiplication or division rules to simplify like terms that apply. 4. Addition and subtraction. Finally, use addition or subtraction the simplified terms in the equation. 5. Rewrite. Make sure that there are no more like terms that need to be simplified, and then rewrite the simplified equation. The Rules For Simplifying Algebraic Expressions Along with the PEMDAS sequence, there are a few more principles you must be informed of when simplifying algebraic expressions. • You can only simplify terms with common variables. When adding these terms, add the coefficient numbers and leave the variables as [[is|they are]-70. For example, the expression 8x + 2x can be simplified to 10x by adding coefficients 8 and 2 and retaining the variable x as it is. • Parentheses containing another expression outside of them need to apply the distributive property. The distributive property gives you the ability to to simplify terms on the outside of parentheses by distributing them to the terms on the inside, for example: a(b+c) = ab + ac. • An extension of the distributive property is known as the property of multiplication. When two stand-alone expressions within parentheses are multiplied, the distributive principle applies, and all individual term will have to be multiplied by the other terms, resulting in each set of equations, common factors of each other. For example: (a + b)(c + d) = a(c + d) + b(c + d). • A negative sign right outside of an expression in parentheses denotes that the negative expression will also need to be distributed, changing the signs of the terms on the inside of the parentheses. As is the case in this example: -(8x + 2) will turn into -8x - 2. • Likewise, a plus sign on the outside of the parentheses will mean that it will be distributed to the terms on the inside. However, this means that you are able to eliminate the parentheses and write the expression as is due to the fact that the plus sign doesn’t change anything when distributed. How to Simplify Expressions with Exponents The previous rules were easy enough to implement as they only applied to properties that impact simple terms with variables and numbers. However, there are a few other rules that you have to implement when dealing with exponents and expressions. Next, we will discuss the principles of exponents. 8 properties impact how we utilize exponents, that includes the following: • Zero Exponent Rule. This property states that any term with a 0 exponent is equivalent to 1. Or a0 = 1. • Identity Exponent Rule. Any term with a 1 exponent will not change in value. Or a1 = a. • Product Rule. When two terms with equivalent variables are multiplied by each other, their product will add their exponents. This is written as am × an = am+n • Quotient Rule. When two terms with matching variables are divided, their quotient will subtract their two respective exponents. This is expressed in the formula am/an = am-n. • Negative Exponents Rule. Any term with a negative exponent equals the inverse of that term over 1. This is expressed with the formula a-m = 1/am; (a/b)-m = (b/a)m. • Power of a Power Rule. If an exponent is applied to a term already with an exponent, the term will end up having a product of the two exponents applied to it, or (am)n = amn. • Power of a Product Rule. An exponent applied to two terms that possess differing variables will be applied to the respective variables, or (ab)m = am * bm. • Power of a Quotient Rule. In fractional exponents, both the numerator and denominator will acquire the exponent given, (a/b)m = am/bm. How to Simplify Expressions with the Distributive Property The distributive property is the principle that denotes that any term multiplied by an expression within parentheses needs be multiplied by all of the expressions on the inside. Let’s witness the distributive property in action below. Let’s simplify the equation 2(3x + 5). The distributive property states that a(b + c) = ab + ac. Thus, the equation becomes: 2(3x + 5) = 2(3x) + 2(5) The expression then becomes 6x + 10. How to Simplify Expressions with Fractions Certain expressions can consist of fractions, and just as with exponents, expressions with fractions also have multiple rules that you need to follow. When an expression consist of fractions, here is what to keep in mind. • Distributive property. The distributive property a(b+c) = ab + ac, when applied to fractions, will multiply fractions one at a time by their numerators and denominators. • Laws of exponents. This shows us that fractions will usually be the power of the quotient rule, which will subtract the exponents of the numerators and denominators. • Simplification. Only fractions at their lowest form should be included in the expression. Apply the PEMDAS rule and make sure that no two terms contain matching variables. These are the exact rules that you can apply when simplifying any real numbers, whether they are decimals, square roots, binomials, logarithms, linear equations, or quadratic equations. Practice Questions for Simplifying Expressions Example 1 Simplify the equation 4(2x + 5x + 7) - 3y. In this example, the principles that should be noted first are the distributive property and the PEMDAS rule. The distributive property will distribute 4 to all other expressions inside the parentheses, while PEMDAS will decide on the order of simplification. Because of the distributive property, the term outside of the parentheses will be multiplied by the individual terms inside. The expression then becomes: 4(2x) + 4(5x) + 4(7) - 3y 8x + 20x + 28 - 3y When simplifying equations, be sure to add the terms with matching variables, and each term should be in its lowest form. 28x + 28 - 3y Rearrange the equation this way: 28x - 3y + 28 Example 2 Simplify the expression 1/3x + y/4(5x + 2) The PEMDAS rule expresses that the first in order should be expressions on the inside of parentheses, and in this scenario, that expression also needs the distributive property. In this scenario, the term y/4 will need to be distributed within the two terms on the inside of the parentheses, as seen in this example. 1/3x + y/4(5x) + y/4(2) Here, let’s put aside the first term for now and simplify the terms with factors associated with them. Because we know from PEMDAS that fractions will need to multiply their numerators and denominators individually, we will then have: y/4 * 5x/1 The expression 5x/1 is used for simplicity since any number divided by 1 is that same number or x/1 = x. Thus, The expression y/4(2) then becomes: y/4 * 2/1 Thus, the overall expression is: 1/3x + 5xy/4 + 2y/4 Its final simplified version is: 1/3x + 5/4xy + 1/2y Example 3 Simplify the expression: (4x2 + 3y)(6x + 1) In exponential expressions, multiplication of algebraic expressions will be used to distribute all terms to each other, which gives us the equation: 4x2(6x + 1) + 3y(6x + 1) 4x2(6x) + 4x2(1) + 3y(6x) + 3y(1) For the first expression, the power of a power rule is applied, which means that we’ll have to add the exponents of two exponential expressions with the same variables multiplied together and multiply their coefficients. This gives us: 24x3 + 4x2 + 18xy + 3y Since there are no other like terms to apply simplification to, this becomes our final answer. Simplifying Expressions FAQs What should I remember when simplifying expressions? When simplifying algebraic expressions, remember that you are required to obey the distributive property, PEMDAS, and the exponential rule rules and the principle of multiplication of algebraic expressions. Ultimately, ensure that every term on your expression is in its lowest form. What is the difference between solving an equation and simplifying an expression? Solving equations and simplifying expressions are quite different, although, they can be combined the same process since you must first simplify expressions before you begin solving them. Let Grade Potential Help You Hone Your Math Skills Simplifying algebraic equations is one of the most fundamental precalculus skills you need to study. Mastering simplification strategies and properties will pay dividends when you’re practicing advanced mathematics! But these ideas and laws can get challenging fast. Grade Potential is here to assist you, so have no fear! Grade Potential Washington DC gives professional tutors that will get you up to speed at your convenience. Our expert tutors will guide you through mathematical concepts in a straight-forward way to Contact us now
{"url":"https://www.washingtondcinhometutors.com/blog/simplifying-expressions-definition-with-exponents-examples","timestamp":"2024-11-08T02:05:36Z","content_type":"text/html","content_length":"89244","record_id":"<urn:uuid:19556335-e9fd-4d47-9063-dc3a663dc64f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00417.warc.gz"}
Roman Numerals Chart to 20 Archives - Multiplication Table Chart Roman Numerals 1-20: Mathematics as a subject is known to be difficult and children often struggle to master this subject. There are various problems, multiplication table charts, fractions and Roman Numerals chart that require its respective understanding, learning and problem-solving skills. Roman Numerals 1-20 Roman Numerals 1-20 Printable In the primary classes, students have to start learning and knowing various different table charts, methods and numbering system that makes the basics of the students. Mathematics as a subject requires students to acquire skills from the primary classes that will help them in not only scoring good marks but also taking all these skills in life after schooling as well. Roman Numerals Chart 1 to 20 Along with numbering systems, multiplication table charts there are Roman Numerals Charts that introduces a different language in mathematics to students. The Roman Numeral language was derived from ancient romans time and has a different representation of numbering as compared to the usual numbering system. However, the meaning of the number remains the same only the representation gets different. So, the method of solving the problem and the original problems becomes different. Roman numerals charts also starts from the number 1 and ends at 100, 1000 and so on. The only difference is the representation of the number is different. Roman Numbers 1 to 20 However, a student has to learn the Roman Numerals chart in its own language and solve the problems as well. The first step is to understand all about the language and know its significance in mathematics subject. The next step is to start reading, learning and remembering the Roman Numeral tables staring from the number 1. The table charts are later used in solving the Roman Numeral However in the primary classes learning a new language and remembering all the tables is a difficult task. So, regular practice and constant efforts require to learn these tables by a child. Regular practice require by parents, teachers and children to remember these Roman Numerals table charts. There are Free Printable Roman Numerals Chart 1-20 Template available online. These charts are colorful and represented in quite a colorful manner along with tables divided into different roman Roman Numerals 1 to 20 pdf Each table has its own significance and use in solving various types of mathematical problems. The beginning of learning these tables is from roman numerals tables 1 to 20. The 1 to 20 tables will available in the Roman Numerals signs, which is to learn by the students. So, get your child to start to learn these tables. Roman Numbers 1 to 20 for Kids
{"url":"https://multiplicationchart.net/tag/roman-numerals-chart-to-20/","timestamp":"2024-11-04T20:05:53Z","content_type":"text/html","content_length":"89298","record_id":"<urn:uuid:431c8af1-4559-463e-84e7-21b37f39ad48>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00769.warc.gz"}
Centering Matrix Given a vector $v$, with mean value of its elements $m$, we can center the vector by subtracting the mean $m$ from each element, import numpy as np n = 10 v = np.random.randn(n) v_c = v - v.mean() This operation is easy and obvious. However, the formalism is not elegant. In some cases, we would like to formulate the process of centering the elements as operators, $$ v_c = \operatorname{\hat H}v. $$ In this case, the operator $\operatorname{\hat H}$ is simply a matrix $$ \operatorname{\hat H} \to I_n - \frac{1}{n} J_n, $$ where $n$ is the dimension of the vector $v$, $I_n$ is a identity matrix, $J_n$ is a matrix of all $1$s. cm = np.identity(n) - np.ones((n, n)) / n np.matmul(cm, v) Planted: by L Ma; L Ma (2021). 'Centering Matrix', Datumorphism, 11 April. Available at: https://datumorphism.leima.is/cards/math/statistics-centering-matrix/.
{"url":"https://datumorphism.leima.is/cards/math/statistics-centering-matrix/?ref=footer","timestamp":"2024-11-02T14:46:42Z","content_type":"text/html","content_length":"110940","record_id":"<urn:uuid:aabf4bd0-d083-4ab5-bec8-9680475d62ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00323.warc.gz"}
Playing to retain the advantage Let P be a monotone increasing graph property, let G = (V, E) be a graph, and let q be a positive integer. In this paper, we study the (1: q) Maker-Breaker game, played on the edges of G, in which Maker's goal is to build a graph that satisfies the property P. It is clear that in order for Maker to have a chance of winning, G itself must satisfy P. We prove that if G satisfies P in some strong sense, that is, if one has to delete sufficiently many edges from G in order to obtain a graph that does not satisfy P, then Maker has a winning strategy for this game. We also consider a different notion of satisfying some property in a strong sense, which is motivated by a problem of Duffus, Łuczak and Rödl [6]. Dive into the research topics of 'Playing to retain the advantage'. Together they form a unique fingerprint.
{"url":"https://cris.ariel.ac.il/en/publications/playing-to-retain-the-advantage-6","timestamp":"2024-11-07T19:26:55Z","content_type":"text/html","content_length":"50088","record_id":"<urn:uuid:358f0f44-af90-49f8-be40-385adebe0ec4>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00889.warc.gz"}
3.2 - Properties Of Electron Beams Electron beams can be described in several ways. Energy Spectra The energy spectra of an electron beam refers to the distribution of kinetic energies possessed by electrons in a beam. • When a beam hits the flattening filter, it is essentially monoenergetic with most electrons possessing a single beam energy • By passing through the treatment head and applicator, the distribution of electron energies begins to spread out. • At the phantom surface, there is a distribution of electron energies, with a maximum energy of (E[max])[0]. The mean energy, $\bar{E_0}$, is the average energy of electrons at the surface. The most probable energy, (E[p])[0], is the position of the spectral peak - different to the mean energy. • As electrons penetrate the phantom, they will lose energy in a stochastic way, meaning that at a particular depth there will be a much broader spectrum of energies than at the surface. The most probable energy at a depth of z cm, (E[p])[z], is related to the probable range of the electrons R[p]. Range, as discussed in electron-interactions, is the distance traveled by an individual electron. Range is used to describe several properties of electron beams: • The R[x] value refers to the depth in centimetres that x% of electrons travel. • The R[p] value is the practical range, the point on the depth axis that is crossed by a line that continues the linear descent seen in electron depth dose curves.
{"url":"http://ozradonc.wikidot.com/properties-of-electron-beams","timestamp":"2024-11-02T20:54:27Z","content_type":"application/xhtml+xml","content_length":"26850","record_id":"<urn:uuid:8a846164-ce35-4441-8e20-465a763ac976>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00867.warc.gz"}
Excel Formula for Clickable Link to Tab In this tutorial, we will learn how to create a clickable link in Excel using a Python formula that takes you to the tab with the same name. This can be useful when you have multiple tabs in your Excel workbook and you want to quickly navigate to a specific tab by clicking on a cell. We will achieve this using the HYPERLINK function in Excel. To create the clickable link, we will use the formula =HYPERLINK("#'"&A1&"'!A1", A1). This formula uses the HYPERLINK function to create a link that takes you to the tab with the same name as the value in cell A1. Here's a step-by-step explanation of the formula: 1. The HYPERLINK function is used to create a clickable link. 2. The first argument of the HYPERLINK function is the link location. We use the "#" symbol to indicate that the link is within the same workbook. 3. The second argument of the HYPERLINK function is the display text. We use the value in cell A1 as the display text. 4. To create the link location, we concatenate the "#" symbol, the single quotation mark, the value in cell A1 (which represents the tab name), another single quotation mark, the exclamation mark, and the cell reference "A1". This creates a link that takes you to cell A1 in the tab with the same name as the value in cell A1. You can use this formula in any cell where you want the clickable link to appear. When you click on the link, it will take you to the corresponding tab. Let's consider an example to understand how this formula works. Suppose we have a workbook with three tabs named "Sheet1", "Sheet2", and "Sheet3". In cell A1 of Sheet1, we have the value "Sheet2". If we enter the formula =HYPERLINK("#'"&A1&"'!A1", A1) in cell B1 of Sheet1, it will create a clickable link with the text "Sheet2". When we click on the link, it will take us to cell A1 in the "Sheet2" Similarly, if we enter the formula =HYPERLINK("#'"&A1&"'!A1", A1) in cell B1 of Sheet2 with the value "Sheet3" in cell A1, it will create a clickable link with the text "Sheet3". When we click on the link, it will take us to cell A1 in the "Sheet3" tab. By using this formula, you can create dynamic links that take you to different tabs based on the value in a cell. This can be helpful in navigating large Excel workbooks efficiently. An Excel formula =HYPERLINK("#'"&A1&"'!A1", A1) Formula Explanation This formula uses the HYPERLINK function to create a clickable link that takes you to the tab with the same name as the value in cell A1. Step-by-step explanation 1. The HYPERLINK function is used to create a clickable link. 2. The first argument of the HYPERLINK function is the link location. In this case, we use the "#" symbol to indicate that the link is within the same workbook. 3. The second argument of the HYPERLINK function is the display text. In this case, we use the value in cell A1 as the display text. 4. To create the link location, we concatenate the "#" symbol, the single quotation mark, the value in cell A1 (which represents the tab name), another single quotation mark, the exclamation mark, and the cell reference "A1". This creates a link that takes you to cell A1 in the tab with the same name as the value in cell A1. 5. The formula is entered in the cell where you want the clickable link to appear. For example, let's say we have a workbook with three tabs named "Sheet1", "Sheet2", and "Sheet3". In cell A1 of Sheet1, we have the value "Sheet2". If we enter the formula =HYPERLINK("#'"&A1&"'!A1", A1) in cell B1 of Sheet1, it will create a clickable link with the text "Sheet2". When we click on the link, it will take us to cell A1 in the "Sheet2" tab. Similarly, if we enter the formula =HYPERLINK("#'"&A1&"'!A1", A1) in cell B1 of Sheet2 with the value "Sheet3" in cell A1, it will create a clickable link with the text "Sheet3". When we click on the link, it will take us to cell A1 in the "Sheet3" tab. This formula can be used to create dynamic links that take you to different tabs based on the value in a cell.
{"url":"https://codepal.ai/excel-formula-generator/query/GjRjKzuy/excel-formula-clickable-link-to-tab","timestamp":"2024-11-06T20:34:29Z","content_type":"text/html","content_length":"96057","record_id":"<urn:uuid:279002b0-4d9f-4ea0-9cd5-791992249808>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00396.warc.gz"}
Entropy and Information In the modern scientific culture, the statement that the entropy and information are the same things is ubiquitous. For example the famous paper of Edwin T. Jaynes [1] according to Google Scholar [2] has been cited more than 5000 times. The list of books from Prof Gordon [3] about the deep relationship between entropy and information in the biology is about 30 titles and I am pretty sure that one can find even more book along this line. On the other hand, if I consider my personal area of expertise: thermodynamics and experimental thermodynamics, information as such is not there. A simple example. Below is the Fe-C phase diagram as computed according to the CALPHAD approach (see for example [4], picture is used to be on www.calphad.com). The thermodynamic entropy has been employed, the information concept not. Hence we have an interesting situation. Everybody is convinced that the entropy is information but when we look at the thermodynamic research, the information is not there. How it could happen? As usual, a shoemaker has no shoes? I had several discussions about this issue, first on the biotaconv list [5], then on the embryophysics list [6], and finally on the everything list [7, 8, 9 (now deleted)]. Below there is a summary. I will start with a short historical overview in order to show how the intimate relationship between the entropy and information has been developed. Next I will briefly overview experimental thermodynamics, as in my view it is important to understand that when we talk about the thermodynamic entropy, it has been measured and tabulated the same way as other thermodynamic values. After that, I will list my examples to clarify the relationship between thermodynamic entropy and information entropy and finally I present my personal opinion on this issue. Historical Perspective on Entropy and Information The thermodynamics and entropy have been developed in order to describe heat engines. After that chemists have found many creative uses of thermodynamics and entropy in chemistry. Most often chemists employ thermodynamics in order to compute equilibrium composition, what should happen at the end when some species are mixed with each other, see for example figure with the phase diagram above. The development of classical thermodynamics was not an easy born (see for example Truesdell [10]) and many people find classical thermodynamics difficult to understand. No doubt, the Second Law here is the reason; people find it at least as non intuitive. Please note that the classical thermodynamics is a phenomenological science and it does not even require an assumption of the atomic hypothesis. Hence when the existence of atoms has been proved, statistical thermodynamics based on the atomic theory has been developed and there was the hope to find a good and intuitive way to introduce entropy. Unfortunately, it did not happen. In order to explain you this, let us consider a simple experiment. We bring a glass of hot water in the room and leave it there. Eventually the temperature of the water will be equal to the ambient temperature. In classical thermodynamics, this process is considered as irreversible, that is, the Second Law forbids that the temperature in the glass will be hot again spontaneously. It is in complete agreement with our experience, so one would expect the same from statistical mechanics. However there the entropy has statistical meaning and there is a nonzero chance that the water will be hot again. Moreover, there is a theorem (Poincaré recurrence) that states that if we wait long enough then the temperature of the glass must be hot again. No doubt, the chances are very small and time to wait is very long, in a way this is negligible. Some people are happy with such statistical explanation, some not. In any case, statistical mechanics has changed nothing in practical applications of thermodynamics, rather it helps to derive missing information from atomic properties. We will not find information as such in the classical works of Boltzmann and Gibbs on statistical thermodynamics, that means that statistical thermodynamics without information has existed for quite awhile. Shannon has introduced the information entropy in his famous paper [11] where he writes “The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics where pi is the probability of a system being in cell i of its phase space. H is then, for example, the H in Boltzmann s famous H theorem.” Please note that Shannon has just shown that the equation employed to the problems of information transfer is similar to that in statistical thermodynamics. He has not made a statement about the meaning of such a similarity, that is, he did not identify his entropy as the thermodynamic entropy. He has just used the same term, nothing more. Yet, some confusion was already there as since then we have a similar equations and two similar terms, the thermodynamic entropy and information entropy. In any case, such a state with a similar equation describing two different phenomena took again Edwin T. Jaynes has made the final step in [1] p. 622 after eq (2-3) (this is the Shannon equation) “Since this is just the expression for entropy as found in statistical mechanics, it will be called the entropy of the probability distribution p_i; henceforth we will consider the terms “entropy” and “uncertainty” as synonymous.” This is exactly the logic that brought the deep relationship between Shannon’s information and the thermodynamic entropy. As the two equations are the same, they should describe the same phenomenon, that is, information is the thermodynamic entropy. Russell Standish [12] has listed other authors contributed to binding the thermodynamic entropy and information: “Because I tend to think of “negentropy”, which is really another term for information, I tend to give priority to Schroedinger who wrote about the topic in the early 40s. But Jaynes was certainly instrumental in establishing the information based foundations to statistical physics, even before information was properly defined (it wasn’t really until the likes of Kolmogorov, Chaitin and Solomonoff in the 60s that information was really understood). But Landauer in the late 60s was probably the first to make physicists really wake up to the concept of physical information.” Anyway in all discussions, I have seen a single logic: if the equation for the entropy in statistical mechanics is the same as for the information entropy in Shannon’s paper, then the entropy is information. I will give my opinion on this later in the section discussion. Right now it is enough to say that at present we have three terms, information in IT, information in physics and the thermodynamic entropy. Some people consider these three terms as synonyms, and some not. Experimental Thermodynamics I have already mentioned that thermodynamics is employed extensively to solve practical problems in engineering and chemistry. Let me quote a couple of paragraphs from the Preface to the JANAF Tables [13] (ca. 230 Mb): “Beginning in the mid-1950s, when elements other than the conventional carbon, hydrogen, oxygen, nitrogen, chlorine, and fluorine came into consideration as rocket propellant ingredients, formidable difficulties were encountered in conducting rigorous theoretical performance calculations for these new propellants. The first major problem was the calculational technique. The second was the lack of accurate thermodynamic data.” “By the end of 1959, the calculation technique problem had been substantially resolved by applying the method of minimization of free energy to large, high speed digital computers. At this point the calculations become as accurate as the thermodynamic data upon which they were based. However, serious gaps were present in the available data: For propellant ingredients, only the standard heat of formation is required to conduct a performance calculation. For combustion products, the enthalpy and entropy must be known, as a function of temperature, in addition to the standard heat of formation.“ In order to solve the second problem there was extensive development in experimental thermodynamics and the results are presented in thermodynamics tables, most famous being the JANAF Thermochemical Tables (Joint Army-Naval-Air Force Thermochemical Tables) [13] (ca. 230 Mb). As the name says, the JANAF Tables have been originally developed for military. I guess that the very first edition was classified. Yet, there are many peaceful applications as well and chemists all over the world use these tables nowadays to predict equilibrium composition of a system in question. Among other properties, the JANAF Tables contains the entropy. I believe that this is a very good starting point for everybody that would like to talk about the entropy, just to take the JANAF Tables and see that chemists have successfully measured the entropy for a lot of compounds. As the JANAF Tables are pretty big (almost 2000 pages), a simpler starting point is the CODATA Tables [14]. In essence, the entropy in chemical thermodynamics is a quantitative property that has been measured and tabulated for many substances. You may want to think it this way: chemists have been using thermodynamics and entropy for a long time to create reliable processes to obtain substances needed and they have been successful. I highly recommend you at this point to download the JANAF Tables and browse them. If we talk about the thermodynamic entropy, this is a pretty good starting point. Practical Examples to Think Over In my view, the best way to discuss a theory is to try to use it on simple practical examples. To this end, below you will find examples to the thermodynamic entropy, information entropy and the number of states in physics. 1. The Thermodynamic Entropy: What is information in these examples? 1.1) From CODATA [14] tables S ° (298.15 K) J K-1 mol-1 Ag cr 42.55 ± 0.20 Al cr 28.30 ± 0.10 What these values tell us about information? 1.2) At constant volume dS = (Cv/T) dT and dU = CvdT. Does Cv is related to information? Does the internal energy is related to information? 1.3) In the JANAF Tables there is a column for the entropy as well as for the enthalpy (H = U + pV). The latter could be safely considered as energy. How people obtain the entropy in the JANAF Tables? The answer is that they measure the heat capacity and then take the integral at constant pressure of one atmosphere S_T = Integral_from_0_to_T Cp/T dT If there are phase transitions then it is necessary to add Del H_ph_tr/T_ph_tr. At the same time the change in the enthalpy H_T – H_0 = Integral_from_0_to_T Cp dT Here there is a question to think over. What is the difference between Integral Cp/T dT and Integral Cp dT? Should both have something to do with information or only the first one? 1.4) Problem. Given temperature, pressure, and initial number of moles of NH3, N2 and H2, compute the equilibrium composition. The thermodynamic entropy in this example is present. What is the meaning of information? To solve the problem one should find thermodynamic properties of NH3, N2 and H2 for example in the JANAF Tables and then compute the equilibrium constant. From thermodynamics tables (all values are molar values for the standard pressure 1 bar, I have omitted the symbol o for simplicity but it is very important not to forget it): Del_f_H_298(NH3), S_298(NH3), Cp(NH3), Del_f_H_298(N2), S_298(N2), Cp(N2), Del_f_H_298(H2), S_298(H2), Cp(H2) 2NH3 = N2 + 3H2 Del_H_r_298 = Del_f_H_298(N2) + 3 Del_f_H_298(H2) – 2 Del_f_H_298(NH3) Del_S_r_298 = S_298(N2) + 3 S_298(H2) – 2 S_298(NH3) Del_Cp_r = Cp(N2) + 3 Cp(H2) – 2 Cp(NH3) To make life simple, I will assume below that Del_Cp_r = 0, but it is not a big deal to extend the equations to include heat capacities as well. Del_G_r_T = Del_H_r_298 – T Del_S_r_298 Del_G_r_T = – R T ln Kp When Kp, total pressure and the initial number of moles are given, it is rather straightforward to compute equilibrium composition. If you need help, please just let me know. 1.5) The phase diagram of Fe-C shown in Introduction. It has been computed from the Thermocalc database [4] that contains entropies found by experimental thermodynamics. What is the meaning of 2. The Information Entropy: What is the thermodynamic system in these examples? 2.1) The example of the entropy used by engineers in informatics has been given by Jason [15] and I will quote him below. Could you please tell me, the thermodynamic entropy of what is discussed in his example? On 03.02.2012 00:14 Jason Resch said the following: > Sure, I could give a few examples as this somewhat intersects with my > line of work. > The NIST 800-90 recommendation > for random number generators is a document for engineers implementing > secure pseudo-random number generators. An example of where it is > important is when considering entropy sources for seeding a random > number generator. If you use something completely random, like a > fair coin toss, each toss provides 1 bit of entropy. The formula is > -log2(predictability). With a coin flip, you have at best a .5 > chance of correctly guessing it, and -log2(.5) = 1. If you used a > die roll, then each die roll would provide -log2(1/6) = 2.58 bits of > entropy. The ability to measure unpredictability is necessary to > ensure, for example, that a cryptographic key is at least as > difficult to predict the random inputs that went into generating it > as it would be to brute force the key. > In addition to security, entropy is also an important concept in the > field of data compression. The amount of entropy in a given bit > string represents the theoretical minimum number of bits it takes to > represent the information. If 100 bits contain 100 bits of entropy, > then there is no compression algorithm that can represent those 100 > bits with fewer than 100 bits. However, if a 100 bit string contains > only 50 bits of entropy, you could compress it to 50 bits. For > example, let’s say you had 100 coin flips from an unfair coin. This > unfair coin comes up heads 90% of the time. Each flip represents > -log2(.9) = 0.152 bits of entropy. Thus, a sequence of 100 coin > flips with this biased coin could be represent with 16 bits. There > is only 15.2 bits of information / entropy contained in that 100 bit > long sequence. 2.2) This is a paper from control J.C. Willems and H.L. Trentelman, H_inf control in a behavioral context: The full information case IEEE Transactions on Automatic Control, Volume 44, pages 521-536, 1999 The term information is there. What is the related thermodynamic entropy? 2.3) In my understanding, when we consider an algorithm, this is a pure IT construct, that does not depend whether I will implement it with an abacus or some Turing machine, with Intel or PowerPC processor. From this follows that the algorithm and hence its information entropy does not depend on temperature or pressure of a physical system that does the computation. In my view it makes sense. Let us consider consciousness now. Our brains produces it and our brain has some thermodynamic entropy. If we assume that the same effect could be achieved with some robot, does it mean that the thermodynamic entropy of the robot must be the same as that of the brain? 2.4) Information as a representation on a physical object. Let us consider a string “10″ for simplicity. Let us consider the next cases. I will cite first the thermodynamic properties of Ag and Al from CODATA tables (we will need them) S ° (298.15 K) J K-1 mol-1 Ag cr 42.55 ± 0.20 Al cr 28.30 ± 0.10 In J K-1 cm-3 it will be Ag cr 42.55/107.87*10.49 = 4.14 Al cr 28.30/26.98*2.7 = 2.83 A) An abstract string “10″. B) Let us make now an aluminum plate (a page) with “10″ hammered on it (as on a coin) of the total volume 10 cm^3. The thermodynamic entropy is then 28.3 J/K. C) Let us make now a silver plate (a page) with “10″ hammered on it (as on a coin) of the total volume 10 cm^3. The thermodynamic entropy is then 41.4 J/K. D) We can easily make another aluminum plate (scaling all dimensions from 2) to the total volume of 100 cm^3. Then the thermodynamic entropy is 283 J/K. Now we have four different combinations to represent a string “10″ and the thermodynamic entropy is different. Any comment? 3. Information in Physics: the number of states The thermodynamic entropy could be considered a measure of number of states and one could say that this is the information in physics. I will quote Brent’s comment to problem 2.4 [16]: “The thermodynamic entropy is a measure of the information required to locate the possible states of the plates in the phase space of atomic configurations constituting them“. This is formally correct but then the question is the relationship of a number of states with information in IT. 3.1) It would certainly interesting to consider what happens when we increase or decrease the temperature (in the limit to zero Kelvin, according to the Third Law the entropy will be zero at zero Kelvin). What do you think, can we save less information on a copper plate at low temperatures as compared with higher temperatures? Or more? If engineers would take the statement “the maximum possible value for information increases with temperature” literally, they should operate a hard disk at higher temperatures (the higher the better according to such a statement). Yet this does not happens. Do you know why? If I operate my memory stick in some reasonable range of temperatures, the information it contains does not change. Yet, the entropy in my view changes. Why it happens this way? 3.2) My example would be Millipede I am pretty sure that when IBM engineers develop it, they do not employ the thermodynamic entropy to estimate its information capabilities. Also, the increase of temperature would be destroy saved information there. 3.3) In general we are surrounded devices that store information (hard discs, memory sticks, DVD, etc.). The information that these devices can store, I believe, is known with accuracy to one bit. Can you suggest a thermodynamic state which entropy gives us exactly that amount of information? 3.4) Let us consider a coin and let us imagine that the temperature is going to zero Kelvin. What happens with the text imprinted on the coin in this case? The examples presented above betray my personal opinion. Yes, I believe that the thermodynamic entropy and information entropy do not relate to each other. Personally I find the reasoning clumsy. In my view, the same mathematical structure of equations does not say that the phenomena are related. For example the Poisson equation for electrostatics is mathematically equivalent to the stationary heat conduction equation. What does it mean? Well, this allows a creative way to solve an electrostatic problem for people who have a thermal FEM solver and do not have an electrostatic solver. They can solve an electrostatic problem by using a thermal FEM solver by means of mathematical analogy. This does happen but I doubt that we could state that the stationary heat conduction is equivalent to electrostatics. On the everything list, Brent has recommended me the paper [17]. In the paper, the information entropy and the thermodynamic entropy are considered and the conclusion was as follows. p. 28(142) “First, all notions of entropy discussed in this essay, except the thermodynamic and the topological entropy, can be understood as variants of some information-theoretic notion of I understand it this way. When I am working with gas, liquid or solid at the level of experimental thermodynamics, the information according to the authors is not there (at this point I am in agreement with them). Yet, as soon as theoretical physicists start thinking about these objects, they happen to be fully filled with information. Alternatively one could say that the viewpoint “information and the entropy are the same” brings us useful conjectures. However, no one told me exactly what useful conjectures follow. In discussion on the biotaconv [5], I have suggested to consider two statements • The thermodynamic and information entropies are equivalent. • The thermodynamic and information entropies are completely different. and have asked what the difference it makes in the artificial life research. I still do not know the answer. I would like to conclude by a quote from Arnheim [18]. In my view, it nicely characterizes the current status of relationship between the entropy and information. “The absurd consequences of neglecting structure but using the concept of order just the same are evident if one examines the present terminology of information theory. Here order is described as the carrier of information, because information is defined as the opposite of entropy, and entropy is a measure of disorder. To transmit information means to induce order. This sounds reasonable enough. Next, since entropy grows with the probability of a state of affairs, information does the opposite: it increases with its improbability. The less likely an event is to happen, the more information does its occurrence represent. This again seems reasonable. Now what sort of sequence of events will be least predictable and therefore carry a maximum of information? Obviously a totally disordered one, since when we are confronted with chaos we can never predict what will happen next. The conclusion is that total disorder provides a maximum of information; and since information is measured by order, a maximum of order is conveyed by a maximum of disorder. Obviously, this is a Babylonian muddle. Somebody or something has confounded our language.” 1. E. T. Jaynes, Information theory and statistical mechanics, Phys. Rev. Part I: 106, 620–630 (1957), Part II: 108, 171–190 (1957) 2. Google Scholar, http://scholar.google.com/scholar?q=E.+T.+Jaynes 3. Books on entropy and information, /2012/02/books-on-entropy-and-information.html 4. J-O Andersson, Thomas Helander,Lars Hdghmd, Pingfang Shi, Bo Sundman, THERMO-CALC & DICTRA, Computational Tools For Materials Science, Calphad, Vol. 26, No. 2, pp. 273-312, 2002 5. Entropy and Artificial Life, see secion below. 6. Entropy and Information, http://groups.google.com/group/embryophysics/t/a14b0a6b9294cf3 7. deleted. 8. deleted. 9. deleted. 10. C. Truesdell, The Tragicomical History of Thermodynamics, 1822-1854 (Studies in the History of Mathematics and Physical Sciences), 1980. 11. C. E. Shannon, (1948). A mathematical theory of communication [corrected version]. Bell System Technical Journal 27, 379–423, 623–656. 12. Russell Standish, http://groups.google.com/group/everything-list/msg/fc0272531101a9e6 13. NIST-JANAF Thermochemical Tables, Fourth Edition, J. Phys. Chem. Ref. Data, Monograph No. 9, 1998. 14. J. D. Cox, D. D. Wagman, V. A. Medvedev, CODATA Key Values for Thermodynamics, Hemisphere Publishing Corp., New York, 1989. 15. Jason Resch, http://groups.google.com/group/everything-list/msg/c2da04dbd4ce2f8d 16. Brent, http://groups.google.com/group/everything-list/msg/bd727f700d0d58c0 17. Roman Frigg, Charlotte Werndl, Entropy: Guide for perplexed, in Claus Beisbart and Stephan Hartmann (eds.): Probability in Physics, Oxford University Press, 2011, 115-42. 18. Rudolf Arnheim, Entropy and Art: An Essay on Disorder and Order, 1971 25.12.2010 Entropy and Artificial Life I used to work in chemical thermodynamics for quite a while. No doubt, I have heard of the informational entropy but I have always thought that it has nothing to do with the entropy in chemical thermodynamics. Recently I have started reading papers on artificial life and it came to me as a complete surprise that so many people there consider that the thermodynamic entropy and the informational entropy are the same. Let me cite for example a few sentences from Christoph Adami “Introduction to Artificial Life”: p. 94 “Entropy is a measure of the disorder present in a system, or alternatively, a measure of our lack of knowledge about this system.” p. 96 “If an observer gains knowledge about the system and thus determines that a number of states that were previously deemed probable are in fact unlikely, the entropy of the system (which now has turned into a conditional entropy), is lowered, simply because the number of different possible states in the lower. (Note that such a change in uncertainty is usually due to a measurement). p. 97 “Clearly, the entropy can also depend on what we consider “different”. For example, one may count states as different that differ by, at most, del_x in some observable x (for example, the color of a ball drawn from an ensemble of differently shaded balls in an urn). Such entropies are then called fine-grained (if del_x is small), or course-grained (if del_x is large) entropies.” This is completely different entropy as compared with that used in chemical thermodynamics. The goal of this document is hence to demonstrate this fact. I have had a discussion about these matters on biotaconv that helped me to understand better the point of view accepted in the artificial life community. I am thankful to the members of the list for a nice discussion. I will start with a short description of how the entropy is employed in chemical thermodynamics, then I will briefly review the reasons for an opinion that the thermodynamic and information entropies are the same, and finally I will try to show that information and subjectivity have nothing to do with the entropy in chemical thermodynamics. Entropy in Chemical Thermodynamics The thermodynamics and entropy have been developed in order to describe heat engines. After that chemists have found many creative uses of thermodynamics and entropy in chemistry. Most often chemists employ thermodynamics in order to compute equilibrium composition, what should happen at the end when some species are mixed with each other. To this end there are thermodynamics tables, most famous being the JANAF Thermochemical Tables (Joint Army-Naval-Air Force Thermochemical Tables). As the name says, the JANAF Tables have been originally developed for military. I guess that the very first edition was classified. Yet, there are many peaceful applications as well and chemists all over the world use these tables nowadays to predict equilibrium composition of a system in question. Among other properties, the JANAF Tables contains the entropy. I believe that this is a very good starting point for everybody that would like to talk about the entropy, just to take the JANAF Tables and see that chemists have successfully measured the entropy for a lot of compounds. As the JANAF Tables are pretty big (almost 2000 pages), a simpler starting point is the CODATA Tables. In essence, the entropy in chemical thermodynamics is a quantitative property that does not depend on an observer. One can close eyes or open them, one can know about the JANAF Tables or not: this does not influence the entropy of substances at all. You may want to think it this way: chemists have been using thermodynamics and entropy for a long time to create reliable processes to obtain substances needed and they have been successful. Entropy and Information The members of biotaconv brought my attention to works of Edwin T. Jaynes who was presumably the first to show the equivalence between the thermodynamics and information entropies. His papers Information theory and statistical mechanics (Part I and II) are available in Internet (the links are in Wikipedia) In the papers, the author considers statistical mechanics only, so let me first describe the relationship between classical and statistical thermodynamics. As the name says it, classical thermodynamics was created first. It was not an easy born (see for example, Truesdell’s The Tragicomical History of Thermodynamics) and many people find classical thermodynamics difficult to understand. No doubt, the Second Law here is the reason; people find it at least as non intuitive. On the other hand, statistical thermodynamics was based on the atomic theory and here was the hope to find a good and intuitive way to introduce entropy. Well, it actually did not happen. In order to explain you this, let us consider a simple experiment. We bring a glass of hot water in the room and leave it there. Eventually the temperature of the water will be equal to the ambient temperature. In classical thermodynamics, this process is considered as irreversible, that is, the Second Law forbids that the temperature in the glass will be hot again spontaneously. It is in complete agreement with our experience, so one would expect the same from statistical mechanics. However there the entropy has some statistical meaning and there is a nonzero chance that the water will be hot again. Moreover, there is a theorem (Poincaré recurrence) that states that if we wait long enough then the temperature of the glass must be hot again. No doubt, the chances are very small and time to wait is very long, in a way this is negligible. Some people are happy with such statistical explanation, some not. Therefore the goal of Edwin T. Jaynes was to bring a new explanation of above. The author uses the formal equivalence between the Shannon’s and thermodynamic entropies and based on this suggests the entropy inference or subjective statistical thermodynamics. I should say that I enjoyed the part I. Here the assumption is that as we do not have complete information about the system, we use what is available and then just maximize the entropy to get the most plausible description. In a way it is some data fitting problem and it seems to be similar to the maximum likelihood in statistics. Yet, the term information here is more like available experimental data about the system and the entropy is not the same as in classical thermodynamics. Unfortunately I was not able to follow the logic in the second paper. I will make just a couple of citations. “With such an interpretation the expression “irreversible process” represents a semantic confusion; it is not the physical process that is irreversible, but rather our ability to follow it. The second law of thermodynamics then becomes merely the statement that although our information as to the state of a system may be lost in a variety of ways, the only way in which it can be gained is by carrying out further measurements.” “It is important to realize that the tendency of entropy to increase is not a consequence of the laws of physics as such, … . An entropy increase may occur unavoidably, due to our incomplete knowledge of the forces acting on a system, or it may be entirely voluntary act on our part.” This is somewhat similar to what Christoph Adami says (see Introduction). What I am going to do next is to be back to chemical thermodynamics. Information and Chemical Thermodynamics Let us assume that the entropy is subjective and that it describes the information. If this is true, then this must be also applied in chemical thermodynamics. At this point I will cite a couple of paragraphs from the Preface to the JANAF Tables: Beginning in the mid-1950s, when elements other than the conventional carbon, hydrogen, oxygen, nitrogen, chlorine, and fluorine came into consideration as rocket propellant ingredients, formidable difficulties were encountered in conducting rigorous theoretical performance calculations for these new propellants. The first major problem was the calculational technique. The second was the lack of accurate thermodynamic data. By the end of 1959, the calculation technique problem had been substantially resolved by applying the method of minimization of free energy to large, high speed digital computers. At this point the calculations become as accurate as the thermodynamic data upon which they were based. However, serious gaps were present in the available data: For propellant ingredients, only the standard heat of formation is required to conduct a performance calculation. For combustion products, the enthalpy and entropy must be known, as a function of temperature, in addition to the standard heat of formation. One could imagine that Edwin T. Jaynes knew nothing about this, as it his times this could be even classified. Well, chemical thermodynamics was already a developed science in 1950s, the paragraphs above concern just a particular application of chemical thermodynamics. In any case, it hard to understand why Christoph Adami does not know how chemists employ entropy in their work. Thus, what is subjective in the JANAF Tables? What the entropy in the JANAF Tables has to do with the information? I have found no answers to these questions so far. Another point. People attribute information and subjectivity to the entropy. At the same time they do not see any problem with the energy. In the JANAF Tables there is a column for the entropy as well as for the enthalpy (H = U + pV). The latter could be safely considered as energy. How people obtain the entropy in the JANAF Tables? The answer is that they measure the heat capacity and then take the integral at constant pressure of one atmosphere S_T = Integral_from_0_to_T Cp/T dT If there are phase transitions then it is necessary to add Del H_ph_tr/T_ph_tr. At the same time the change in the enthalpy H_T - H_0 = Integral_from_0_to_T Cp dT Here there is another question. What is the difference between Integral Cp/T dT and Integral Cp dT? Why the first integral has something to do with information and the second not? Why the first integral has something to do with subjectivity and the second not? I have tried to show that subjectivity and information do not belong to chemical thermodynamics. In my view, this means that the thermodynamic entropy has nothing to do with the informational entropy. Let me repeat my logic once more. Chemical thermodynamics makes extensive use of the entropy. The entropy is tabulated in thermodynamic tables and then among other thermodynamic properties it is used to compute equilibrium composition or a complete phase diagram (see for example CALPHAD). I would expect that people stating that the informational entropy is the same as the thermodynamic entropy must show how such a statement is working in chemical thermodynamics. Personally I see no way to apply the information entropy and subjectivity in chemical thermodynamics. To this end, there is a good video from MIT, Teaching the Second Law, where a panel of scientist discuss how the Second Law should be taught. The video shows that the entropy is a difficult concept indeed. The scientists do not agree with each other on how to teach the Second Law. Yet, the concept of the entropy as information is not there at all. I will conclude by yet another question. Let us consider two statements • The thermodynamic and information entropies are equivalent. • The thermodynamic and information entropies are completely different. What the difference it makes in the artificial life research? So far I have not seen a good answer, why it is so important in artificial life to state that the information entropy has something to do with the entropy in chemical thermodynamics.
{"url":"http://blog.rudnyi.ru/2012/02/entropy-and-information.html","timestamp":"2024-11-07T10:06:33Z","content_type":"text/html","content_length":"97087","record_id":"<urn:uuid:c13fa8d8-42d9-4fad-ad51-2dcc486da45c>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00786.warc.gz"}
Java Program to Swap Two Numbers | Vultr Docs Swapping two numbers in programming involves exchanging the values of two variables. In Java, this task can be accomplished in several ways, each with its own unique approach and use case. Understanding these methods is crucial for efficient data manipulation and forms a fundamental part of algorithm design. In this article, you will learn how to swap two numbers in Java using different techniques. Explore examples that demonstrate the use of temporary variables, arithmetic operations, and bitwise XOR operation to achieve this task effectively. Swapping Using a Temporary Variable Basic Swap with Temporary Storage 1. Declare and initialize two integer variables. 2. Use a third temporary variable to facilitate the swap. int a = 5; int b = 10; int temp; temp = a; // Store the value of 'a' in 'temp' a = b; // Assign the value of 'b' to 'a' b = temp; // Assign the value stored in 'temp' to 'b' This code block initializes variables a and b with values 5 and 10, respectively. The variable temp is used to hold the value of a temporarily while a is set to the value of b, and finally, b is set to the original value of a stored in temp. Swapping Without Using a Temporary Variable Arithmetic Operations 1. Use addition and subtraction to swap values without a temporary variable. int a = 5; int b = 10; a = a + b; // a becomes 15 b = a - b; // b becomes 5 (15 - 10) a = a - b; // a becomes 10 (15 - 5) Here, the numbers are swapped using addition and subtraction. Initially, a is updated to be the sum of both numbers. Then, b is recalculated by subtracting the new value of a minus the old b, and finally, a is updated by subtracting the new b. Bitwise XOR Operation 1. Apply the Bitwise XOR operation for an efficient swap without any extra space. int a = 5; int b = 10; a = a ^ b; // XOR a and b, and store it in a b = a ^ b; // Now b is the value of original a a = a ^ b; // Now a is the value of original b The XOR operation is an efficient bitwise technique to swap variables without a third variable. The process involves transforming the values into a series of bits and using XOR to switch these bits between the two variables without any additional storage. Swapping two numbers in Java can be achieved through multiple methods depending on the scenario and performance needs. Whether using a temporary variable, arithmetic manipulations, or a bitwise XOR, each approach offers unique advantages. Mastering these techniques not only aids in fundamental programming tasks but also enhances understanding of how data can be manipulated efficiently within memory. Use these methods appropriately to keep your code clean and optimal.
{"url":"https://docs.vultr.com/java/examples/swap-two-numbers","timestamp":"2024-11-10T15:52:18Z","content_type":"text/html","content_length":"204273","record_id":"<urn:uuid:8a7ddd91-ca9f-45a3-975f-0908ea442aca>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00016.warc.gz"}
What is Voltage? Voltage (V), also known as electric potential difference or electromotive force, measures the potential energy difference between two points in an electric circuit. In simple words, it is the “pressure or force” that pushes electricity (electrons) between two points. Voltage is measured in volts. It is named after the Italian scientist Alessandro Volta, who invented the first electrical battery. What is Current? Current is the rate of the flow of electrons in a circuit. It's measured in amperes and denoted by using the letter "I." What is power? The power used in a circuit is measured in watts. Watts are calculated by multiplying the voltage by the current. Power is a measure of the rate at which energy is transferred or converted in a circuit. It can be calculated using different formulas depending on the known quantities. The most common formula for calculating electrical power (P) is based on Ohm's Law: • P stands for power (in watts, W). • V stands for voltage (in volts, V). • I stands for current (in amperes, A). What is Ohm's Law? One of the fundamental laws in electronics, Ohm's Law states that the electric current is proportional to voltage and inversely proportional to resistance. • V stands for voltage (in volts). • I stands for current (in amperes). • R stands for resistance (in ohms). [Resistance is a measure of the opposition to current flow in an electrical circuit.] How to Convert Volts to millivolts 1 V = 10^3 mV = 1000 mV 1 mV = 10^-3 V = 1/1000 V Formula of Volts(V) to millivolts(mV) To convert volts to millivolts, you can use the fact that one volt (V) equals 1000 millivolts (mV). This means that you multiply the number by 1000 to convert volts to millivolts or you can divide 1V by 1000 to convert 1mV to V Check our video tutorial on volts to millivolts for easy understanding V[(mV)] = V[(V)] × 1000 Convert 7 Volts to millivolts: V(mV) = 7V × 1000 = 7000 mV Volts to millivolts(mV) conversion table Volts Microvolts 0.001 V 1 mV 0.002 V 2 mV 0.003 V 3 mV 0.004 V 4 mV 0.005 V 5 mV 0.001 V 1 mV 0.01 V 10 mV 0.1 V 100 mV 1 V 1,000 mV
{"url":"https://dailytechbyte.co.in/tools/convert/voltage/volt-to-mv.php","timestamp":"2024-11-11T00:38:17Z","content_type":"text/html","content_length":"30364","record_id":"<urn:uuid:f3515ec0-ee00-4612-b2a2-35cb4e0beb54>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00765.warc.gz"}
Understanding Exponential Changes (3.4.1) | Edexcel GCSE Maths (Higher) Notes | TutorChase Exponential growth and decay are fundamental concepts in mathematics, describing processes where quantities increase or decrease at rates proportional to their current value. These phenomena are observed in various real-world situations, including population dynamics, radioactive decay, and financial applications such as interest calculation and depreciation. This section aims to introduce the principles of exponential growth and decay, providing examples from depreciation and population change scenarios to enhance understanding. Introduction to Exponential Growth and Decay Exponential growth occurs when a quantity increases over time at a rate proportional to its current value, leading to a rapid rise as time progresses. Conversely, exponential decay describes a process where a quantity diminishes over time at a rate proportional to its current value, resulting in a rapid decrease. The general formula for exponential growth and decay is expressed as: $P(t) = P_0 \cdot e^{rt}$ • $P(t)$ is the quantity at time $t$, • $P_0$ is the initial quantity, • (r) is the rate of growth (if positive) or decay (if negative), • (e) is the base of the natural logarithm, approximately equal to 2.71828. Image courtesy of Online Math Learning Exponential Growth: Population Change Scenario Example 1: Population Growth Consider a population of 1,000 bacteria that doubles every hour. Calculate the population after 5 hours. Here, $P_0 = 1000$, $r = \ln(2)$ because the population doubles (growth rate of 100%), and $t = 5$. $P(t) = 1000 \cdot e^{\ln(2) \cdot 5}$$P(5) = 1000 \cdot 2^5$$P(5) = 1000 \cdot 32 = 32,000$ Thus, the population of bacteria will be 32,000 after 5 hours. Exponential Decay: Depreciation Scenario Example 2: Depreciation of a Car A car worth £20,000 depreciates at a rate of 10% per year. Calculate its value after 3 years. For depreciation, the rate $r$ is negative because the value decreases over time. Here, $P_0 = 20,000$, $r = -0.10$, and $t = 3$. $P(t) = 20,000 \cdot e^{-0.10 \cdot 3}$$P(3) = 20,000 \cdot e^{-0.30}$$P(3) ≈ 20,000 \cdot 0.74082 ≈ £14,816.40$ Therefore, the value of the car after 3 years will be approximately £14,816.40. Understanding the Formula The exponential growth and decay formula, $P(t) = P_0 \cdot e^{rt}$, is versatile, allowing for the calculation of quantities over any time period. The key components to remember are: • Initial Quantity $(P_0)$: The starting point from which growth or decay is measured. • Growth/Decay Rate $(r)$: Expressed as a decimal, this rate determines the speed of the change. A positive rate indicates growth, while a negative rate signifies decay. • Time $(t)$: The period over which the growth or decay occurs. • Natural Base $(e)$: A constant (~2.71828) that ensures the formula accurately models exponential changes. Application in Real Life Exponential functions are not just theoretical concepts but have practical applications in everyday life, including: • Environmental Studies: Understanding population dynamics of species. • Finance: Calculating compound interest and depreciation. • Medicine: Modelling the spread of diseases or the decay of drug concentration in the body. Image courtesy of Number Dyslexia
{"url":"https://www.tutorchase.com/notes/edexcel-gcse/maths-higher/3-4-1-understanding-exponential-changes","timestamp":"2024-11-09T23:34:48Z","content_type":"text/html","content_length":"1049132","record_id":"<urn:uuid:3af0dddb-d78a-4dbc-9935-56185b248f9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00790.warc.gz"}
Implementing Your Own Version of F#’s List.Filter As I’ve been thinking more about F#, I began to wonder how certain methods in the F# stack work, so I decided to implement F#’s List.filter method. For those of you who aren’t familiar, List.Filter takes a function that returns true or false and a list of values. The result of the call is all values that fulfill the fuction. For example, if we wanted to keep just the even numbers in our list, then the following would accomplish that goal. 1 let values = [1;2;3;4] 2 let isItEven x = x % 2 = 0 5 let evenValues = List.filter isItEven values 6 // val it : int list = [2; 4] Now that we know the problem, how would we begin to implement? First, we need to define a function called filter: However, to match the signature for List.filter, it needs to take a function that maps integers to bools and the list of values to work on 1 let filter (func:int->bool) (values:int List) = Now that we have the signature, let’s add some logic to match on the list of values. When working with lists, there are two possibilities, an empty list and a non-empty list. Let’s first explore the empty list option. In the case of an empty list of values, then it doesn’t matter what the func parameter does, there are no possible results, so we should return an empty list for the result. 1 let filter (func:int->bool) (values:int List) = 2 match values with 3 | [] -> [] Now that we’ve handled the empty list, let’s explore the non-empty list scenario. In this branch, the list must have a head and a tail, so we can deconstruct the list to follow that pattern. 1 let filter (func:int->bool) (values:int List) = 2 match values with 3 | [] -> [] 4 | head::tail -> // what goes here? Now that we’ve deconstructed the list, we can now use the func parameter with the head element. If the value satisfies the func parameter, then we want to add the head element to the list of results and continue processing the rest of the list. To do that, we can use recursion to call back into filter with the same func parameter and the rest of the list: 1 let rec filter (func:int->bool) (values:int List) = 2 match values with 3 | [] -> [] 4 | head::tail -> 5 if func head then head :: filter func tail At this point, we need to handle the case where the head element does not satisfy the func parameter. In this case, we should not add the element to the list of results and we should let filter continue the work 1 let rec filter (func:int->bool) (values:int List) = 2 match values with 3 | [] -> [] 4 | head::tail -> 5 if func head then head :: filter func tail 6 else filter func tail By handling the base case first (an empty list), filter can focus on the current element in the list (head) and then recurse to process the rest of the list. This solution works, but we can make this better by removing the type annotations. Interestingly enough, we don’t care if we’re working with integers, strings, or whatever. Just as long as the function takes some type and returns bool and the list of values matches the same type as the func parameter, it works. So then we end up with the following: 1 let rec filter func values = 2 match values with 3 | [] -> [] 4 | head::tail -> if func head then head :: filter func tail else filter func tail In general, when working with lists, I tend to start by matching the list with either an empty list or non-empty. From there, I’ve got my base case, so I can focus on the implementation for the first element. After performing the work for the first element, I can then recurse to the next element.
{"url":"https://blog.thesoftwarementor.com/articles/2015/10/13/implementing-your-own-version-of-fs-listfilter/","timestamp":"2024-11-04T21:41:31Z","content_type":"text/html","content_length":"47780","record_id":"<urn:uuid:7bf8c4d7-65d0-406c-a50f-50a60bb24d5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00016.warc.gz"}
High School Math Finland - Pythagorean theorem The Pythagorean theorem states: When squares are drawn on the legs of a right triangle, the sum of these areas is equal to the area of the square drawn on the hypotenuse of the same triangle. We use Pythagorean theorem Because it is the length of the hypotenuse, only a positive answer is valid. Example 2 Find x We use Pythagorean theorem Because it is the length of the leg, only a positive answer is valid. Example 3 Liisa-Petter was thinking of building a slide from her balcony to the parking lot next to her car. She was a pretty lazy person. Liisa-Petter's balcony was 15 meters high and the parking space 45 meters from her house. How long should the slide be? Give the answer to the nearest tenth of a metre. First we draw the situation The slide is the hypotenuse of the formed right triangle. We use Pythagorean theorem to solve this. Turn on the subtitles if needed
{"url":"https://x.eiramath.com/home/maa3/pythagorean-theorem","timestamp":"2024-11-09T17:09:03Z","content_type":"text/html","content_length":"316803","record_id":"<urn:uuid:b80eac33-575a-493a-b121-e6ba42e1a44a>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00423.warc.gz"}
Worksheet Graphing Quadratics From Standard Form Worksheet Answers - Graphworksheets.com Graphing Quadratics In Standard Form Worksheet 1 Answer Key – Reading graphs is a skill that is useful in many fields. They allow people to quickly compare and contrast large quantities of information. A graph of temperature data might show, for example, the time at which the temperature reached a certain temperature. Good graphs have … Read more Worksheet Graphing Quadratic Equations From Standard Form Worksheet Graphing Quadratic Equations From Standard Form – Graphing equations is an essential part of learning mathematics. This involves graphing lines and points and evaluating their slopes. Graphing equations of this type requires that you know the x and y-coordinates of each point. You need to know the slope of a line. This is the … Read more Worksheet Graphing Quadratics From Standard Form Worksheet Graphing Quadratics From Standard Form – The 7th Grade Graph Worksheets can be a valuable resource for students who are studying graphs at school. They are available for download in PDF format and include worksheets for every type of graph that a student will come across. They are an excellent way to introduce a … Read more
{"url":"https://www.graphworksheets.com/tag/worksheet-graphing-quadratics-from-standard-form-worksheet-answers/","timestamp":"2024-11-02T20:48:02Z","content_type":"text/html","content_length":"61051","record_id":"<urn:uuid:06d4f092-b504-497c-b888-5cee6aae61c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00400.warc.gz"}
From liveness to promptness Liveness temporal properties state that something "good" eventually happens, e.g., every request is eventually granted. In Linear Temporal Logic (LTL), there is no a priori bound on the "wait time" for an eventuality to be fulfilled. That is, Fθ asserts that θ holds eventually, but there is no bound on the time when θ will hold. This is troubling, as designers tend to interpret an eventuality Fθ as an abstraction of a bounded eventuality F^≤kθ, for an unknown k, and satisfaction of a liveness property is often not acceptable unless we can bound its wait time. We introduce here PROMPT-LTL, an extension of LTL with the prompt-eventually operator F[p]. A system S satisfies a PROMPT-LTL formula φ if there is some bound k on the wait time for all prompt-eventually subformulas of φ in all computations of S. We study various problems related to PROMPT-LTL, including realizability, model checking, and assume-guarantee model checking, and show that they can be solved by techniques that are quite close to the standard techniques for LTL. Publication series Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Volume 4590 LNCS ISSN (Print) 0302-9743 ISSN (Electronic) 1611-3349 Conference 19th International Conference on Computer Aided Verification, CAV 2007 Country/Territory Germany City Berlin Period 3/07/07 → 7/07/07 Dive into the research topics of 'From liveness to promptness'. Together they form a unique fingerprint.
{"url":"https://cris.huji.ac.il/en/publications/from-liveness-to-promptness-13","timestamp":"2024-11-12T18:53:20Z","content_type":"text/html","content_length":"50341","record_id":"<urn:uuid:93cfcd2d-971f-4291-8f8a-7e44e00a9074>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00404.warc.gz"}
Actual vs Budget or Target Chart in Excel - Variance on Clustered Column or Bar Chart Charts & Dashboards Actual vs Budget or Target Chart in Excel – Variance on Clustered Column or Bar Chart Learn how to display the variance between two columns or bars in a clustered chart or graph. Includes a step-by-step tutorial and free workbook to download. Great for displaying budget vs actual This post will explain how to create a clustered column or bar chart that displays the variance between two series. Actual vs Budget or Target. Clustered Column Chart with Variance Clustered Bar Chart with Variance The clustered bar or column chart is a great choice when comparing two series across multiple categories. In the example above, we are looking at the Actual versus Budget (series) across multiple Regions (categories). The basic clustered chart displays the totals for each series by category, but it does NOT display the variance. This requires the reader to calculate the variance manually for each category. However, the variance can be added to the chart with some advanced charting techniques. A sample workbook is available for download below so you can follow along. This chart works when comparing any two numbers. It can be Actual versus Target, Forecast, Goal, Milestone, etc. The file below uses a slightly different technique by using a clustered column chart to display the variance, and then uses the Value from Cells option to display the data labels. This only works in Excel 2013. The advantage is that you can automatically display the variance label above the bar, and you don't have to move it manually as the numbers change. Data Requirements With any chart, it is critical that the data is in the right structure before the chart can be created. The following image shows an example of how the data should be organized on your sheet. It is a simple report style with a column for the category names (regions) and two columns for the series data (budget & actual data). This technique only works when comparing two different series of data. This can include a comparison of any data type: budget vs. actual, last year vs. this year, sale price vs. full price, women vs. men, etc. The number of categories is only limited to the size of the chart, but typically you want to have five or less for simplicity. Chart Requirements The chart utilizes two different chart types: clustered column/bar chart and stacked column/bar chart. The two data series we are comparing (budget & actual) are plotted on the clustered chart, and the variance is plotted on the stacked chart. The chart also utilizes two different axes: the comparison series is plotted on the secondary axis, and the variance is plotted on the primary axis. This puts the stacked chart (variance) behind the clustered chart (budget & actual). How-to Guide Data Calculations The first step is to add three calculation columns next to your data table. • Variance Base – The base variance is calculated as the minimum of the two series in each row. This gives you the value for plotting the base column/bar of the stacked chart. The bar in the chart is actually hidden behind the clustered chart. • Positive Variance – The variance is calculated as the variance between series 1 and series 2 (actual and budget). This is displayed as a positive result. An IF statement is used to return a blank value if the variance is negative. The blank value will not be plotted on the chart, and no data label will be created for it. • Negative Variance – This is the same basic calculation as the positive variance, but we use the absolute function (ABS) to return a positive value for the negative variance. The negative variance needs to be plotted as a positive value to bridge the gap between the two series. Calculating this in a separate column allows us to assign the negative series a different color, so the reader can easily differentiate it from the positive variance. How to Create the Chart The example file (free download below) contains step-by-step instructions on how to create the column version of this chart. Creating the bar chart is the exact same process with stacked and clustered bars instead of columns. The chart is not too difficult to create, and provides an opportunity to learn some advanced techniques. 1. Series 1 (Actual) and Series 2 (Budget) need to be plotted on the secondary axis. Right-click on the Actual series column in the chart, and click “Format Data Series…” Select the “Secondary Axis” radio button from the Series Options tab. Repeat this for the Budget Series (series 2). 2. Change the chart type for series 1 & 2 to a Clustered Column Chart. Select the Actual series in the chart, or in the Chart Elements drop-down on the Layout tab of the Ribbon (chart must be selected to see the Chart Tools contextual tab). Click the Change Chart Type button on the design tab and change the chart type to a Clustered Column chart. We can now start to see the chart take shape. The Acutal and Budget data are displayed in side-by-side columns for comparison. The Variance series are displayed in the background as a stacked 3. Adjust the Gap Width property for both charts. The gap width can be changed in the Series Options tab of the Format Data Series window. This controls the width of the columns. A smaller number will create a larger column, or smaller gap between categories. 4. Format the chart. The chart is just plain ugly with its default formatting options. We can make a few adjustments to make it more presentable. – Move the legend to the top and delete the 3 variance series. – Add a Chart Title. – Delete the Axis Labels. – Change the border and fill colors for the columns. – Delete the horizontal guidelines. 5. Add the data labels. The variance columns in the data table contain a custom formatting type to display a blank for any zeros: _(* #,##0_);_(* (#,##0);_(* “”_);_(@_) These blanks also display as blanks in the data labels to give the chart a clean look. Otherwise, the variance columns that are not displayed in the chart would still have data labels that display zeros. The data labels for a stacked column chart do not have an option to display the label above the chart. So you will have to manually move the variance label above, and to the left or right of the Additional Resources Checkout my series of posts and videos on the column chart that displays percentage change. I take you through a series of iterations to improve on the chart based on feedback from members of the Excel Campus community. This chart is a great way to display the series data and the variance amount in one chart. The guide is meant to help you understand how to create and edit these charts to tell your story. The source data table is simple in structure, and the chart can be re-used with different data so you do not have to go through this process every time. Please click here to subscribe to my free email newsletter to receive more great tips like this. You will also receive a free gift. It's a win-win! 🙂 What do you think? Do you use another type of chart to display variances? Please leave a comment. 🙂 76 comments • Haven’t tried it yet but this is exactly something my boss would be interested in. Can it be done in 2007? Based on your screenshots you’re using 2010 or higher. Thanks Jon! □ Hi Diana, Yes, this technique will work in Excel 2007 and even 2003. I think your boss will be impressed. 🙂 Thanks! • Is the download missing? □ Hi Ken, It is available now. Thanks for letting me know! • Hi john, i want to show % variance instead of Absolute number. followed this method but did not find similar result as primary and secondary axis are different units. can u please suggest…how to show % variance in same graph. □ Hi Prabin, Great question. You can show the % variance by linking each data label in the variance series of the chart to a cell that calculates the variance. I updated the file (the one available for download on this page) with an additional chart on the ‘Examples’ tab that displays the variances. Here are some instructions on how to do this. 1. In column I of the ‘Examples’ tab I added the % variance calculation. 2. Click on the first variance label in the chart. This will select all the variance labels for that particular series (positive/negative). 3. Click on the same label again. This will select only the single label, and all other labels in the series will be deselected. 4. In the formula bar, type the equals sign (=), then select the cell that contains the % variance (cell I6 in the example). 5. Click Enter. This will set the data label value equal to the cell value and keep chart label linked to the cell value through the formula. So if the numbers change in your source data the chart will automatically be updated. 6. Repeat steps 2-5 for each variance label in the chart. If you are displaying the positive and negative variances in separate series to display in different colors, then you can add an additional column to calculate both the positive and negative variances. Then repeat the steps above for all value labels in both variance series. Excel 2013 has a built-in feature that makes this process much easier. 1. If you are using 2013, right click on any one of the data labels and select “Format Data Labels” from the menu. 2. In the Label Options menu on the right side you will see an option named “Values From Cells”. 3. Click this selection and you will be prompted to select the range that contains the labels you want to display. 4. Select the cells that contain your calculated percentage variances (column I in the example) and click OK. This is much faster than selecting each label and creating a formula to point to the variance calculation cell. Please let me know if you have any questions. • It works…Thanks John for your prompt response. I like to visit other posts as well to find some new tricks. Thanks Again! • Thanks for the great explanation so far but I have a further question. Is there a way to do these bar graphs if the targets are negative numbers? For example, Budget was -100, actual was -50 so we be the budget by 50 positive. When I tried it the variance was on the positive side of the graph and the budget and actual bars were on the negative side. Since my base variance was negative, the actual variance did “add on” to it. Any suggestions? Thanks. Lizz □ Hi Lizz, Great question! You will need to change a few formulas to get this to work. On the ‘Examples’ sheet of the file, make the following changes: – Cell F6: =MAX(C6:D6) – Cell G6: =IF(E6>0,-E6,””) – Cell H6: =IF(E6<0,E6,"") Use the formulas above when both numbers being compared are negative. If your chart contains both positive and negative sets then you will need to wrap the formulas in another IF statement to first check if the numbers are positive or negative. Here are the formulas to use when you have some rows that contain positives and some rows contain negatives: - Cell F6: =IF(MIN(C6:D6)>0,MIN(C6:D6),MAX(C6:D6)) – Cell G6: =IF(MIN(C6:D6)>0,IF(E6>0,E6,””),IF(E6>0,-E6,””)) – Cell H6: =IF(MIN(C6:D6)>0,IF(E6<0,ABS(E6),""),IF(E6<0,E6,"")) You could also use the CHOOSE function instead of nesting the formulas in the IF statement. This would calculate faster if your workbook is very large and slow. I can provide more explanation on that if needed. Please let me know if you have any questions. ☆ Thanks, that solved my problem! My data isn’t too complex so the IF statements were fine. • I think the above is excellent I am looking to graphically represent the % movement from a baseline target on a graph. I have 10 departments. Each has a split of Permanent and Non Permanent staff. This gives a total of say 1000 people per department. Each department has a separate unique target of the % of non permanent staff they want to hit. I am looking to show in a graph the movement from the base to their position now. Ideally i would like to show the movement in overall numbers but also in the % of non permanent staff change in that time period. So the graph you have displayed is great, i just need to understand how to use it to best represent my problem Any help would be gratefully appreciated!! □ Hi Paul, Great question on how to add the percentage variance to the data labels. If you are using Excel 2013 there is a new feature that allows you to display data labels based on a range of cells that you select. It is the “Value From Cells” option in the Label Options To display the percentage variance in the data label you will first need to calculate the percentage variance in a row/column of your data set. In the example file on this post, the percent variance is calculated in cells I6:I9 of the ‘Examples’ tab. Next you will right click on any of the data labels in the Variance series on the chart (the labels that are currently displaying the variance as a number), and select “Format Data Labels” from the menu. On the right side of the screen you should see the Label Options menu and the first option is “Value From Cells”. Click the check box and it will prompt you to select a range. Select the cells that contain the percentage variance, and click OK. You should now see both the percentage and number variance displayed in the data label. Checkout the screenshot on the link below. Display Percentage Variance on Excel 2013 Chart Screenshot You can then uncheck the Value option if you only want to display the percentage variance. If you have an older version of Excel this process is still possible, but a bit more time consuming. You can set each data label to a cell value. 1. Click on a data label in the chart. Make sure only one data label is selected, NOT all labels in the series. You typically have to click the data label twice for the single selection. 2. In the formula bar, type the equals symbol = then select the cell that contains the percentage variance value. 3. Click Enter. The variance will be displayed in the label. 4. Repeat steps 1-3 for each data label in your chart. Check out this screenshot for details. Set Data Labels to Cell Values Screenshot Excel 2003-2010 The nice part about either of these methods is that the data labels are linked to the values in the cells. If your numbers change or you update the data, the labels will automatically be refreshed and display the correct results. Please let me know if you have any questions. • Thanks, this is great! Is there a way to do a stacked chart for the plan and actual along with the variance? For example, the plan consists of 3 groups and we want to show the breakout for each group for the overall plan and same for actual. □ Hi P, Great question! I would actually advise against a stacked chart if possible. I’m not a fan of stacked charts because it is hard to identify the trends in each group. Check out this article I wrote about stacked charts and some alternatives. I understand that you might still be required to create a stacked chart. If that is the case, and my article doesn’t convince you otherwise, please let me know and I’ll help you with a solution. 🙂 This post by Jon Peltier will explain how to create the clustered stacked chart, and we can probably add a total variance bar to the stacks. Thanks for stopping by! • This was great information! I was able to graph exactly what I needed. Another question for you….I have 3 different groups of data that I would like to show on one graph. The first is a whole number between 500-600, the second is a whole number that is between 1-10, and the third is a percentage. Would it be possible to show all this information on a line graph? Thank you for your □ Hi Andrea, My apologies for not getting back to you sooner! I am glad you found the information useful. In regards to your question, I would recommend creating a separate chart for each of these metrics. You could stack the charts on top of each other in what is called a panel chart. This will allow you to keep the vertical axis relevant to the data being plotted and display the changes in the trend. If you have them all on one chart, the smaller numbers will just look like a flat line. The following article shows an example of a panel chart. Please let me know if you have any questions. • Hi John, I have used this graph to show actual v budget data for hours. I want to now add in another column to compare attandence time v actual v budget hours, and also i want to show 3 line graphs, productivity, efficiency and booking efficicency, which are all in %. I assume i would need a third axis and this would not be possible? Any help greatly appreciated □ Hi Sarah, For your first question about comparing the three different series (attendance v actual v budget), you will have to create a “clustered and stacked” column/bar chart on the primary axis. Jon Peltier has an article that explains this technique, and it also shows you how to add a line to the chart. For your second question, the three line charts that are based on percentages would be difficult to add to this chart. It is also going to complicate the chart quite a bit, which is my biggest concern with it. I understand that the metrics are related, but it is probably best to think about what story you want to tell with the data. You might need to break the analysis into multiple charts, with each chart telling a story that leads to the overall conclusion. It is not always easy to do, but it will help you communicate your findings to the reader. Please feel free to email me your chart. I would be happy to take a look and possibly do a case study on it. • The download is missing, can someone help me? Thankss! Great job Jon, awesome! □ Hi Vitor, Thanks for the comment! The download should be available now. I switched hosting providers over the weekend and there was a little down time. Please let me know if you are able to download the file now. ☆ Jon, thanks, its working perfectly! See you • This is great thanks. I do have one additional problem I’d love to solve with this – my actuals are budget numbers are made up of say sales of apples and oranges as well as north/south/east/west. I’d love to still show one column each for the actuals and budgets but each column shows the breakout of apples and oranges (a stacked column I think). thanks. □ Hi Senead, Jon Peltier has written a great article on how to accomplish the stacked & clustered charts. I tend to advise against this technique, or at least be cautious when you’re using it. If you only have to different series in your stack (apples and oranges), then this technique might work. The stacked charts tend to get cluttered with too many data points and it is hard for the reader to quickly draw conclusions from it. I would recommend creating separate charts for each series. This will give each series a common baseline, and allow the reader to quickly see the trends. Checkout this article I wrote about stacked charts and let me know what you think. Thanks again! • Hi, thank you for providing this! The formulas listed for looking for negative and positive numbers, don’t seem to work. The second one specifically. Any thoughts? Thank you so much! • Never mind! I figured it out — it had do with the quotes when I cut and paste. Thanks so much! □ Hi Teresa, I am glad you got it working. 🙂 Let me know if you have any other questions. Thanks! • Jon, Thank you very much for the info! I do have one question. Here you are comparing just two different series of data. If I need to compare year over year data for 6 years but don’t want to show each year twice on the graph. How can I do it? Thanks a lot for your help! □ Hi Amelia, Are you looking for something like the following image? This chart doesn’t have labels yet, but it can be done. Let me know what you think. ☆ Hi Jon, Hope all is well with you. Can I request for your example spreadsheet for this type of chart. I am currently working on a very similar chart that needs to show variance for 3 series i.e., 2017 actual vs 2018 actual vs 2018 budget. Hope I’ll get to hear from you the soonest. Thanks! • Hi, this excel is extremely great! I have some issue to arrange the variance position..for example..In North, the variance is 300. If let say I increased the budget to 2000, the budget bar is overlapping the variance..I am trying to fix the position so that the variance will always on top of the budget or actual bar…but I couldn’t find this kind of setting…anyway to fix it? □ Thanks Nelson! This is a great question. Are you using Excel 2013 by chance? If so, this is actually possible using Clustered column charts and the Value from Cells label option. Let me know if you are interested and I’ll explain how to do it. □ I added a file in the download section above for the technique using Excel 2013. ☆ Thanks Jon! It is really helps! Yes i am using excel 2013, but at the label option > Label position, i couldn’t see the “Outside End” option..mine only have center, inside end and inside base..any idea? • Brilliant guide, thank you! (Was a bit trickier to “translate” this into Excel 2013 language, but I figured it out eventually – they actually make it easier to do this with 2013). Now my graphs look as impressive as the data is 🙂 □ Thanks Emma! I should probably update for Excel 2013/2016. Glad you were able to figure it out. 🙂 • Hi Jon, this is great thank you! I just have one question: I’m trying to compare consumption data across 3 years (2014 to 2016) – Is it possible to change the data table so that the graph includes 3 columns with consumption figures, and shows the variance between 2014 and 2015, and also 2015 and 2016? Let me know if this isn’t clear, thank you for your help! □ Hi Marie, Great question. That is going to be pretty tough to do. I don’t want to say it’s impossible, but will require quite a few more data series and logic. I will give it some thought. That might actually be a good example/use for a post I recently did on dynamic chart data labels. In that post I explain how to create data labels with different metrics. One of those metrics is variance to prior period. I’m probably going to update that post with examples for different chart types. I used the stacked bar, but it is a bit cluttered and I’m not a big fan of stacked bar charts. The technique could be used for bar, column or line charts too. I hope that helps. Thanks! • Thanks for this Jon, An excellent guide- and I’m now the proud owner of my own cluster chart with variances! Love it! □ Awesome! Thanks Mark! 🙂 • Hi Jon, thanks for this excel, but i am really having trouble when the change is a small percentage. for example if i haev only one row with the following entries budget =1000 and actual = 1050, the graphs representation doesnt really show the right picture. the actual and budget looks like its in a different axis. Appreciate your help in fixing this □ Hi Ameer, The axis max values might have changed to automatic. You will need to manually change the max and min axis values for both the Primary and Secondary axis of your chart. That should get everything lined up. I hope that helps. Thanks! ☆ What’s up,I read your blogs named “Variance on Clustered Column or Bar Chart – Budget vs Actual – Excel Campus” daily.Your humoristic style is awesome, keep it up! And you can look our website about مهرجانات 2017. • Hello Jon, First thanks for the explanation ! 🙂 I want to know, how can i put another indicator like PY (previous year) in the same series of Budget and Actual? now to see the GAP vs. PY thank you very much have anice day □ Hi Andrea, It’s pretty challenging to add a third bar to this. There is a lot more complexity with the positive and negative bars lining up. Thanks for the question though. 🙂 • How can I show a % variance based off of this model? □ Hi Hunter, You can use the “Values from Cells” option for the Data Labels to populate the percentages from cells instead of dollar values. I mention value from cells in this article on dynamic data labels. I hope that helps. • Asking this to see if it might be possible, didn’t see a comment that was trying to quite do this. I would like to stack another column on top of the Actual bar to represent forecasted expenditures. Then show the variance between Actual + Forecast vs. Budget. The only way I can think to do that is to have some type of third axis, which I have no idea how to do and doesn’t sound possible (or at least manageable). □ Hi Nathan, I don’t believe it’s possible with this technique. I never like to say things are impossible, but getting the width of the variance bars to line up is going to be really challenging. It might be possible to overlay two charts with a clear background, but I have not tried that. It would still require a lot of maintenance. Sorry, I wish I had better news. □ Hi Nathan, I recently built a similar chart i.e.( Actual Type-A + Actual Type-B) v/s Budget. Sharing the same. This is the method that I used: Use Jon Acampora’s above mentioned method to calculate Base Variance, Positive Variance and Negative Variance.(Thanks Jon!) Plot Actual & Forecast as Stacked Bar Chart on Primary Axis. Plot Budget as Clustered Bar Chart on Secondary Axis. To eliminate overlapping of Primary and Secondary Bars, add required no. of ‘Dummy Series’ with zero values on Secondary Axis. Then use ‘Error Bars’ to show the Variances. Steps to get the Error Bars in place: Plot ‘Positive Variances’ on the ‘Budget’ Bar: Click on Layout>Error Bars>More Error Bar Options> Select your’Budget’series After you select the series, the ‘Format Error Bars’ window opens up. In the Format Error Bars Window: Under Error Amount section, select ‘Custom’ option and click on ‘Specify Value’. Then, select the cell range having ‘Positive Variance’ values as ‘Positive Error Value’. Enter 0 as ‘Negative Error Value’ Plot ‘Negative Variances’ on the stacked ‘Actual+Forecast’ Bar: Click on Layout>Error Bars>More Error Bar Options> Select ‘Actual’ or ‘Forecast’, whichever is on topside of the stacked bars After you select the series, the ‘Format Error Bars’ window opens up. In the Format Error Bars Window: Under Error Amount section, select ‘Custom’ option and click on ‘Specify Value’. Then, select the cell range having ‘Negative Variance’ values as ‘Positive Error Value’. Enter 0 as ‘Negative Error Value’ Now you should have the error bars in place. You can then format the Error Bars to have width equal to the Bars in the chart & to have different colors for Positive and Negative Variances. Hope this helps. • Hi, i added new data column in the same chart but the axis is not aligning good it goes left and right somewhere. can somebody help me with that • Hi, i added new data column in the same chart but the axis is not aligning good it goes left and right somewhere. can somebody help me with that. • Hello Jon, will it be possible to show in same excel graph the cumulated positive variance which will be consumed by negative variance during future time periods (calendar weeks, months). I used your template to visualize weekly demands from customer against our weekly capacity. I wanted to see until which week our positive variance will be enough to cover negative variances. thank you, • Jon, I tried to revised the file to include 12 rows of data instead of only 4 as I am trying to represent a yearly comparison. For some reason it did not work. Do you have any TIPS? □ Harley, I adjusted the file for the same reason as you did and the only thing I had to do to update the graphs was adjusting the ranges of the graph so now it include the new rows I added. Right click on the graph -> select data – > edit Legend entries and Horizontal axis labels □ I’m using the technique based on a Pivot Table, with slicers, that filter by the fiscal year. I get all 12 months based on the year i choose from the slicer. It works great. My only issue now (that I’m currently working to resolve) is that my “Base Variation shows up on top of the Posative and Negative, so those bars can’t be seen, only the $ amounts in the labels are visible. I’m working on it… • Hi Jon, Great chart but one question as it is always useful to present the variance in % on top of the variance value. Is there a way where % can be shown on top of the variance value? Kind regards, • Thanks for sharing this technique and was useful to represent the variance chart □ Thanks A K! 🙂 • Hi Jon, Thanks for this guide, it is very useful. I’ve tried them from scratch on my own, but for some reason the scale of the whole chart is wrong. Proportions for the positive/negative variance and for the actual are not accurate. Let’s say the positive variance is 3 and the actual is 1000, the positive variance bar still looks larger than the actual, which of course makes no sense. I’ve checked the axis for both of them, and they are the same, so I don’t know what the mistake could be. Do you have any idea? I don’t see this happening in your charts. • Hi again Jon, I figured out what is causing my bars to be out of scale. If I take your excel file and I erase categories South, East, and West, leaving North alone, the whole scale of the positive variance goes wrong and it starts looking larger than the actual. Can you please help me figure out a way to only show one category without this happening? • This is very helpful. I wonder if we can also include forecast on the chart? • I combined this with one of the techniques explained by Jon Peltier in something recent you shared to get the data label to always be above so you dont have to move them. And then they are dynamic. I have a Max Positive, and Max Negative column. I do a line graph for the two data sets and just hide the line and the data points and have the label above. • Jon, I follow the instructions above and it all works great with my 12 month fiscal year budget vs actual dashboard using the slicers to choose the fiscal year. The only problem is…. The Stacked Columns have the “Base Variance” on top and the Negative in the middle, and the positive variance on the bottom of the stack. how do I flip these so I can see the right one at the top (already have them color filled correctly, just can’t see the colors the way they’re stacked). • Hi Jon- How do I adjust the chart data if my budget/target is positive but my actuals are negative (e.g. negative margin)? Is there a solution for that? For example: Budget = $40 Actual = -$6 Variance = -$46 If I use the standard formulae, the negative variance column “crosses over” into the budget column. The graph makes the actual column also “cross over” into the budget column below zero on the Any help would be much appreciated! Thank you! • I’m fascinated with your innovative strategy, we should connect some time. • Very helpful. 🙂 • Hi Jon, Here’s my two cents Hope you like it. • Thank you this is very helpful! • Really appreciate this, Jon! I wish I had seen this 5 years ago! = ) • I was looking for something for a personal budget and it does not seem to show low variances well. That said this is great for work. Fantastic great looking charts – thank you for taking the time to put this together. • Hi – not sure what I am doing wrong but I have attempted it a few times now and you lose me at ‘3’ because my resulting Clustered Column Chart looks very different. What am I doing wrong? □ I have the same problem. Anyone got the solution? • Excellent Instructions; The Best I have seen, and have been easily replicate it for Engineering Team Dept Plan vs Act Hrs – However Only problem I have been unable to put data labels above the Variance Sections – Any tips? – that bit (to me) I don’t understand, KR, Trevor • Hi John, I’m trying to create the chart but I have a problem in the part where I have to change series 1 and 2 to a clustered column chart, I don’t know for what reason excel transform my chart into a clustered column chart but it separates base variance and negative variance to the left side of each region, and actual and budget to the right side of each region, this isn’t allowed me to create the final chart. • Hi Jon, Under Overview, the download link for the Sample Workbook does not work (I used Chrome and Edge). Could you please have someone check that link? • Is it possible to recreate this in Google Sheets?
{"url":"https://www.excelcampus.com/charts/variance-clustered-column-bar-chart/","timestamp":"2024-11-04T04:26:07Z","content_type":"text/html","content_length":"334723","record_id":"<urn:uuid:e1354630-d675-4304-8266-7a4dbaba463e>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00167.warc.gz"}
Speaking | Math = Love June 2023 Engaging Activities for Practicing Any Math Topic Oklahoma Council of Teachers of Mathematics Summer Conference, Collinsville High School June 2023 Puzzles for Building Math Skills Co-Presented with Shaun Carter Oklahoma Council of Teachers of Mathematics Summer Conference, Collinsville High School April 2023 Panda Squares Co-Presented with Shaun Carter Tulsa Math Teachers’ Circle, University of Tulsa April 2023 Engaging Students Through Hands-On Data Collection Activities Southeast Oklahoma Math Teachers’ Circle, East Central University June 2022 Embracing JOY in Math Class (Keynote Presentation) Oklahoma Council of Teachers of Mathematics Summer Conference, Norman High School June 2022 Engaging Students Through Hands-On Data Collection Activities Oklahoma Council of Teachers of Mathematics Summer Conference, Norman High School August 2021 Engaging Algebra Activities NEOK Math Teacher Gathering, Collinsville High School August 2021 Notable Numbers: An Exploration of Friedman Numbers and Happy Numbers NEOK Math Teacher Gathering, Collinsville High School May 2021 Notable Numbers: An Exploration of Friedman Numbers and Happy Numbers Southeast Oklahoma Math Teachers’ Circle March 2021 Notable Numbers: An Exploration of Friedman Numbers and Happy Numbers Central Oklahoma Math Teachers’ Circle August 2020 Virtual Activities Sampler: A Technology Taste Test Professional Day, Coweta High School A Puzzling Classroom NEOK Math Teacher Gathering, Jenks High School Interactive Notebooks 101 OK-MAP Workshop Making Note-Taking Fun & Interactive OEQA Action Academy Insights from Great Teachers Panel (Alongside NPR) Making Note-Taking Fun & Interactive Oklahoma Council of Teachers of Mathematics Summer Workshop Inspiring Mathematical Curiosity Through Hands-On Activities Twitter Math Camp, Jenks High School Teaching Math Through Motion Global Math Department Webinar Hands-On Learning Professional Day, Drumright High School
{"url":"https://mathequalslove.net/speaking-presentations/","timestamp":"2024-11-08T23:36:48Z","content_type":"text/html","content_length":"160685","record_id":"<urn:uuid:1dcaa6f0-35e6-4a80-8f8e-7680a9839e6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00784.warc.gz"}
How do you convert a state model to a transfer function? How do you convert a state model to a transfer function? 3.12 Converting State Space Models to Transfer Functions 1. Take the Laplace transform of each term, assuming zero initial conditions. 2. Solving for x(s), then y(s) (it should be noted that often D = 0) 3. where G(s) is a transfer function matrix. 4. or in matrix form (with m inputs and r outputs) 5. Example 3.9: Isothermal CSTR. How do you make a transfer function in MatLab? Create the transfer function G ( s ) = s s 2 + 3 s + 2 : num = [1 0]; den = [1 3 2]; G = tf(num,den); num and den are the numerator and denominator polynomial coefficients in descending powers of s. For example, den = [1 3 2] represents the denominator polynomial s2 + 3s + 2. How do you write a transfer function to state space in MatLab? Transfer Function 1. For discrete-time systems, the state-space matrices relate the state vector x, the input u, and the output y: x ( k + 1 ) = A x ( k ) + B u ( k ) , y ( k ) = C x ( k ) + D u ( k ) . 2. For continuous-time systems, the state-space matrices relate the state vector x, the input u, and the output y: How do you convert to transfer function? To find the transfer function, first take the Laplace Transform of the differential equation (with zero initial conditions). Recall that differentiation in the time domain is equivalent to multiplication by “s” in the Laplace domain. The transfer function is then the ratio of output to input and is often called H(s). What is the MatLab command we use for converting state space to transfer unction? To make this task easier, MatLab has a command (ss2tf) for converting from state space to transfer function. What is transfer function model? Transfer function models describe the relationship between the inputs and outputs of a system using a ratio of polynomials. The model order is equal to the order of the denominator polynomial. The roots of the denominator polynomial are referred to as the model poles. What is the MatLab command we use for converting state-space to transfer unction? What is state model equation? Advertisements. The state space model of Linear Time-Invariant (LTI) system can be represented as, ˙X=AX+BU. Y=CX+DU. The first and the second equations are known as state equation and output equation respectively. What is state model in oops? State model describes those aspects of objects concerned with time and the sequencing of operations – events that mark changes, states that define the context for events, and the organization of events and states. Actions and events in a state diagram become operations on objects in the class model. What is transfer function and its properties? The transfer function of a system is the mathematical model expressing the differential equation that relates the output to input of the system. • The transfer function is the property of a system independent of magnitude and the nature of the input. What is transfer function DSP? In the realms of statistical time series analysis, of signal processing and of control engineering, a transfer function is a mathematical relationship between the numerical input to a dynamic system and the resulting output. The theory concerning the transfer functions of linear time-invariant sys- Who invented state space model? The term “state space” originated in 1960s in the area of control engineering (Kalman, 1960). What is a state space statistics? Statistical Glossary State space is an abstract space representing possible states of a system. A point in the state space is a vector of the values of all relevant parameters of the system. What are the advantages of state model over transfer function model? The major benefit of state space control over transfer function methods is its applicability to a wide range of systems: linear and non-linear; time-varying and time-invariant; single-input, single-output (SISO) and multiple-input, multiple-output (MIMO).
{"url":"https://bigsurspiritgarden.com/2022/11/13/how-do-you-convert-a-state-model-to-a-transfer-function/","timestamp":"2024-11-09T12:24:22Z","content_type":"text/html","content_length":"51796","record_id":"<urn:uuid:be74f58e-c154-406f-afca-f9551a187df3>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00743.warc.gz"}
Chirikov criterion From Scholarpedia Dima Shepelyansky (2009), Scholarpedia, 4(9):8567. doi:10.4249/scholarpedia.8567 revision #199625 [link to/cite this article] Chirikov criterion or Chirikov resonance-overlap criterion was introduced in 1959 by Boris Chirikov and successfully applied by him to explain the confinement border for plasma in open mirror traps observed in experiments at the Kurchatov Institute. This was the very first physical and analytical criterion for the onset of chaotic motion in deterministic Hamiltonian systems. According to the Chirikov criterion [1], a deterministic trajectory will begin to move between two nonlinear resonances in a chaotic and unpredictable manner as soon as these unperturbed resonances overlap. This occurs when the "perturbation" or "chaos" parameter becomes larger than unity: \[\tag{1} K \sim S^2 > 1 \;\; , \; S = \Delta\omega_r/\Omega_d \] Here \( S \) is the resonance-overlap parameter. \(S\) is the ratio of the sum of the unperturbed resonance half-widths in frequency \( \Delta \omega_r \) and the frequency distance \( \Omega_d \) between two unperturbed resonances (see Fig.1). The width \( \Delta \omega_r \) is often computed in the pendulum approximation and is typically proportional to the square-root of the perturbation amplitude. Since its introduction, the Chirikov criterion has become an important analytical tool for the determination of the chaos border in Hamiltonian systems (see [2,3]). This criterion, while supplying important insight to the onset of chaos, is not rigorous. A more rigorous approach was supplied by John Greene [4,5], who identified the chaos boarder with the destabilization of higher order fixed points in a vicinity of an invariant curve with a chosen fixed rotation number [4,5]. Although it is more precise, the approach of Greene requires significant numerical computation and is not often suitable for a derivation of analytical dependencies on system parameters. The accuracy of the Chirikov criterion can also be improved using a renormalization approach [6] that takes into account resonances on smaller and smaller scales, provided the overlapping resonances are of similar size. For unequal size overlapping resonances the criterion becomes less accurate. Also, it should be noted that the criterion uses the pendulum approximation for a nonlinear resonance which is valid in a regime of moderate nonlinearity [2,3]. Thus, the criterion is not directly applicable when the unperturbed Hamiltonian is linear in actions and its linear frequencies are degenerate. For a example, a three-dimensional oscillator with three equal frequencies has approximately half of phase space being chaotic at an arbitrarily weak quartic nonlinear coupling between modes [7]. But such linearly degenerate systems are special; moreover, Kolmogorov-Arnold-Moser theory also does not apply naively in these cases. To determine the chaos border for the Chirikov standard map using the Chirikov criterion we write the system Hamiltonian \[ H(I,\theta,t)= I^2/2 +K \cos \theta \delta_{1}(t) , \] where \( \delta_{1} (t) \) is a periodic \( \delta- \)function with period \( 1 \) in time \( t \) and \( I, \theta \) are action-angle variables. After the expansion of the \( \delta- \)function in a Fourier series this Hamiltonian takes the form \[ H(I,\theta,t)= I^2/2 + K \sum_{m} \cos (\theta - 2\pi mt) , \] where summation is over all integers \( m \ .\) Assuming the perturbation to be small we obtain the positions of the "principal" or "forced" resonances at \( I=2\pi J_r= 2\pi m \) (see Fig.1). These resonances are identical and the distance between two neighboring resonances is \( \Omega_d =2\pi\). Taken by itself, dynamics of an individual resonance is described by the pendulum Hamiltonian \( H(I,\theta)= I^2/2 + K \cos \theta \) with the separatrix width at energy \( H(I,\theta)=E=K \) being \( \omega_r = \Delta I =4 \sqrt{K} \). Consequently, the overlap of these unperturbed resonances occurs when \( S = \Delta\omega_r/\Omega_d = 2 \sqrt{K}/\pi > 1\). Due to higher order resonances and chaotic separatrix layers, overlap, leading to global chaos, actually occurs when \( K \approx 2.5 S^2 > 1 \). The transition to global chaos in the vicinity of critical parameter \( K \approx 1\) is clearly seen in Figs.2,3. Using this numerical adjustment factor, the overlap criterion gives the global chaos border in various dynamical systems with a few percent accuracy. The Chirikov criterion determines the chaos border in generic nonlinear systems. A certain class of nonlinear systems can be completely integrable and in this case the criterion does not work. A well known example is the Toda lattice [8] which remains integrable even at very strong nonlinearity. Apart of these specific completely integrable systems, the Chirikov criterion usually works well for generic systems. Quantum resonance overlap Example of quantum resonance overlap in the Chirikov standard map is shown in Fig.5 of Ehrenfest time and chaos (see discussion of quantum properties of chaos there). History and applications In 1954 G.I.Budker proposed a mirror magnetic trap for plasma confinement. Particle confinement in such an open trap is provided by the conservation of an adiabatic invariant, the magnetic moment of a particle, which is known, however, to be an approximate motion integral only. The experimental studies of such a system have been done by S.N.Rodionov [9]. The confinement border observed in experiments was explained by Boris Chirikov on the basis of invented by him resonance-overlap criterion [1]. More details about chaos border for particle confinement in magnetic traps can be find in [10,11]. The Chirikov criterion finds applications for the dynamics of solar system, particle dynamics in accelerators, magnetic plasma traps, microwave ionization of Rydberg atoms and various other Chaotic stories • Boris Chirikov first presented his results on the stochastic instability of magnetically confined plasma at the Kurchatov seminar in Moscow in 1958, when the plasma research was classified secret. Only after the London plasma conference of 1958 did the results become public, and Kurchatov ordered the plasma results to be published quickly. This led to Chirikov's celebrated 1959 theoretical paper [1] in a special issue of the journal Atomic Energy. Boris Chirikov had started his career as an experimenter, but the world would now know him as the theorist who invented the resonance overlap criterion. What is less known is the story of the writing of the paper in the same journal issue that describes the related plasma experiments of S. Rodionov [9]. Though Rodionov's name appears as the sole author, the paper was written by Chirikov. Why? The story goes that Rodionov had broken his right hand (in an overcrowded public bus) and was in the hospital. Chirikov was ordered by the KGB to take his secret notes, go to the hospital, and write the paper from the words of Rodionov. The KGB orders included that Chirikov take a weapon, a revolver, to ensure the security of the secret documents, but Chirikov refused, arguing that it would be too dangerous to take a revolver on the public buses that, in those days, were always very overcrowded with people. Finally, the KGB agreed that Chirikov would not have to carry the revolver, but he was obliged to return all his notes, including the "Rodionov manuscript" back to the secure place. What we now know as the Chirikov criterion came as a result of Chirikov's generalizing the theoretical analysis he had first performed for the stochastic instability of confined plasma. (from Reminiscences of Boris Chirikov). (see more at http://www.quantware.ups-tlse.fr/chirikov/publications.html) 1. B.V.Chirikov, "Resonance processes in magnetic traps", At. Energ. 6: 630 (1959) (in Russian; Engl. Transl., J. Nucl. Energy Part C: Plasma Phys. 1: 253 (1960)) 2. B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity",Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969), (Engl. Trans., CERN Trans. 71-40 (1971)) 3. B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979) 4. J.M.Greene, "2-Dimensional measure-preserving mappings", J.Math.Phys. 9: 760 (1968) 5. J.M.Greene, "Method for determining a stochastic transition", J.Math.Phys. 20: 1183 (1979) 6. D.F.Escande, "Stochasticity in classical Hamiltonian systems: universal aspects" Phys. Rep. 121: 165 (1985) 7. B.V.Chirikov and D.L.Shepelyanskii, "Dynamics of some homogeneous models of classical Yang-Mills fields", Sov. J. Nucl. Phys. 36(6): 908 (1982) (Yad. Fiz. 36: 1563 (1982)) 8. M.Toda, "Studies of a non-linear lattice", Phys. Rep. 18: 1 (1975) 9. S.N.Rodionov, "Experimental test of the behavior of charged particles in an adiabatic trap", At. Energ. 6: 623 (1959) (in Russian; Engl. Transl. J. Nucl. Energy Part C: Plasma Phys. 1: 247 (1960) 10. B.V.Chirikov, "Particle dynamics in magnetic traps", Reviews of Plasma Physics, Ed. Acad. B.B.Kadomtsev, 13: 1 (1987), Consultants Bureau, Plenum, New York, Translated from Russian by J.G.Adashko 11. B.V.Chirikov, "Particle confinement and adiabatic invariance", Proc. Royal Soc. London A 413: 145 (1987) Internal references • Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838. • David H. Terman and Eugene M. Izhikevich (2008) State space. Scholarpedia, 3(3):1924. Recommended reading A.J.Lichtenberg, M.A.Lieberman, "Regular and chaotic dynamics", Springer, Berlin (1992). L.E.Reichl, "The Transition to chaos in conservative classical systems and quantum manifestations", Springer, Berlin (2004). External links See also internal links Chirikov standard map, Boris Valerianovich Chirikov , Hamiltonian systems, Mapping, Chaos, Kolmogorov-Arnold-Moser Theory, Kolmogorov-Sinai entropy, Aubry-Mather theory, Quantum chaos, Ehrenfest time and chaos
{"url":"http://www.scholarpedia.org/article/Chirikov_criterion","timestamp":"2024-11-12T09:31:23Z","content_type":"text/html","content_length":"47637","record_id":"<urn:uuid:47dd6a4b-9992-4dc9-abe0-88d7082ad2be>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00502.warc.gz"}
Re: Linear and logarithmic fit • To: mathgroup at smc.vnet.net • Subject: [mg39387] Re: Linear and logarithmic fit • From: Bill Rowe <listuser at earthlink.net> • Date: Thu, 13 Feb 2003 04:54:10 -0500 (EST) • Sender: owner-wri-mathgroup at wolfram.com On 2/12/03 at 3:54 AM, joh_nson at yahoo.com (jay Johnson) wrote: >If I have 9 points in a 2 dimensional space how do I decide if they >fit better a linear function or a logarithmic function? One choice would be to use the sum of the errors squared. For example: First create some data. do a linear fit 0.573502\[InvisibleSpace]-0.0848516 x do a logarithmic fit g=Fit[data,{1 Log[x]},x] -0.455209 Log[x] Sum the squared errors for the linear fit Tr[((#1[[2]] - f /. x -> #1[[1]])^2 & ) /@ data] sum the squared errors for the logarithmic fit Tr[((#1[[2]] - g /. x -> #1[[1]])^2 & ) /@ data] Since sum of the squared errors for the logarithmic fit is greater than for the linear fit, conclude the linear fit is better. Note, using the sum of the squared errors is probably the most commonly used measure of goodness of fit, but it is not the only measure that could be used. I've used this measure since that is what Fit minimizes to compute the fit coefficients. Also, if you want to compare several different models using a least squares fit, it would probably be better to use the Regress function in the package Statistics`LinearRegression` . There are options for this function to include in the output the sum of the error squared as well as other statistics to judge the fit of the model to the data. Finally, if you are going to be fitting models to data often it would be a good idea to read a good text on fitting models to data. There are several pitfalls that can lead to very misleading results. A couple of possilbe texts are Fitting Equations to Data by Daniel and Wood Applied Linear Regression by Weisberg
{"url":"https://forums.wolfram.com/mathgroup/archive/2003/Feb/msg00214.html","timestamp":"2024-11-14T14:12:18Z","content_type":"text/html","content_length":"31846","record_id":"<urn:uuid:c6c6df3d-157e-4366-9de7-c2ce4a2ccbbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00852.warc.gz"}
Sz_i - Sz_j Correlator at Different Times I am working with a Heisenberg spin-1 AF chain, trying to get the out-of-time correlators for z-components of the spins at different sites so that I can construct the spectral function. More explicitly, I am trying to calculate objects of the form e^{i E_0 t} <0|S^z_i e^{-i H t} S^z_j |0> In the correlators example http://itensor.org/docs.cgi?page=tutorials/correlations the MPS are turned into ITensor objects so that indices can be contracted. However, we can only time evolve MPS (I am using FitApplyMPO). I need to apply Sz_j before time evolving.. How can I accomplish this? I keep on getting errors. Thanks so much for your help! Hi jamarks, So you have the right idea, namely that you should apply @@S^z_j@@ to your MPS, then time evolve it, then measure @@S^z_i@@ after time evolving (and by measure, I mean compute @@S^z_i@@ "sandwiched" between your original state "0" on one side and your time-evolved state on the other side). To apply @@S^z_j@@ to an MPS, all you need to do is this: psi.Aref(j) *= sites.op("Sz",j); The first line gets the @@S^z_j@@ operator from the site set "sites" and contracts it with the jth "A" tensor or MPS tensor of the MPS psi. The second line removes the prime on the site index that results from the first line, since site operators have one unprimed index and one primed index by our library's convention. Hope that helps!
{"url":"https://www.itensor.org/support/1030/sz_i-sz_j-correlator-at-different-times","timestamp":"2024-11-08T22:17:19Z","content_type":"text/html","content_length":"23677","record_id":"<urn:uuid:28b670ea-a811-4294-a676-f16485bc93de>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00340.warc.gz"}
Place Value (find digit value in a number) - Quiz - a-quiz.com Place Value (find digit value in a number) – Quiz Grade 5 /Whole numbers and place value Place Value (find digit value in a number) - Quiz Challenge yourself with this quiz to find the value of specific digits in large numbers based on their place value 1 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 1. Which number's underlined digit is worth 4,000,000? 2 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 2. Which number's underlined digit is worth 4,000,000? 3 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 3. Which number's underlined digit is worth 3,000? 4 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 4. Which number's underlined digit is worth 70,000? 5 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 5. Which number's underlined digit is worth 300,000? 6 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 6. Which number's underlined digit is worth 3,000,000? 7 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 7. Which number's underlined digit is worth 500,000? 8 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 8. Which number's underlined digit is worth 4,000,000? 9 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 9. Which number's underlined digit is worth 60,000? 10 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 10. Which number's underlined digit is worth 3,000? 11 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 11. Which number's underlined digit is worth 70,000? 12 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 12. Which number's underlined digit is worth 100,000? 13 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 13. Which number's underlined digit is worth 3,000,000? 14 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 14. Which number's underlined digit is worth 3,000,000? 15 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 15. Which number's underlined digit is worth 800,000? 16 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 16. Which number's underlined digit is worth 2,000,000? 17 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 17. Which number's underlined digit is worth 50,000,000? 18 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 18. Which number's underlined digit is worth 4,000? 19 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 19. Which number's underlined digit is worth 100,000? 20 / 20 Category: Grade 5/Whole Numbers and Place Value/Place Value 20. Which number's underlined digit is worth 800,000? Which Number’s Underlined Digit is Worth 9,000,000? (Place Value Explained) Place value is one of the most fundamental concepts in math. It helps us determine the value of a digit based on its position in a number. Understanding place value allows us to read and understand large numbers by recognizing the worth of each digit. In this post, we’ll focus on a specific place value example: identifying the number where the underlined digit is worth 9,000,000. What is Place Value? Place value is a system where the position of a digit in a number tells us its value. Each position in a number represents a specific power of ten. The further to the left a digit is, the higher its place value. Here are the most common place values for large numbers: • Ones Place (1): The first digit from the right. • Tens Place (10): The second digit. • Hundreds Place (100): The third digit. • Thousands Place (1,000): The fourth digit. • Ten Thousands Place (10,000): The fifth digit. • Hundred Thousands Place (100,000): The sixth digit. • Millions Place (1,000,000): The seventh digit. • Ten Millions Place (10,000,000): The eighth digit. • Hundred Millions Place (100,000,000): The ninth digit. • Billions Place (1,000,000,000): The tenth digit. Understanding this system helps us figure out the value of each digit in large numbers. Example of Finding a Digit’s Value: Worth 9,000,000 Let’s take an example where we need to find a number in which the underlined digit is worth exactly 9,000,000. Consider the number 49,572,381. In this number, the digit 9 is underlined. To determine its value, we look at its position, which is in the millions place (the seventh position from the right). The value of a digit in the millions place is found by multiplying the digit by 1,000,000. Therefore, the value of the underlined 9 is: • 9 × 1,000,000 = 9,000,000 Thus, in the number 49,572,381, the underlined digit 9 is worth 9,000,000 because it is located in the millions place. Explanation of Place Value Calculation To fully understand how we arrived at this answer, let’s break down the number by its place values: • 4 is in the ten millions place, so it represents 40,000,000. • 9 is in the millions place, so it represents 9,000,000. • 5 is in the hundred thousands place, so it represents 500,000. • 7 is in the ten thousands place, so it represents 70,000. • 2 is in the thousands place, so it represents 2,000. • 3 is in the hundreds place, so it represents 300. • 8 is in the tens place, so it represents 80. • 1 is in the ones place, so it represents 1. When we add all of these values together, we get the full number: 40,000,000 + 9,000,000 + 500,000 + 70,000 + 2,000 + 300 + 80 + 1 = 49,572,381 As you can see, the underlined digit 9 holds the value of 9,000,000 because it is in the millions place. Why is Place Value Important? Place value is crucial for understanding how numbers work, especially when dealing with large numbers like millions or billions. It helps us: • Read and write large numbers: Knowing place value allows you to correctly identify and understand numbers with many digits. • Compare numbers: You can quickly compare two numbers by looking at the digit in the highest place value. • Perform calculations: Whether adding, subtracting, multiplying, or dividing, understanding place value is key to performing accurate calculations with large numbers. In this example, identifying that the digit 9 is worth 9,000,000 in the number 49,572,381 is possible because we understand the importance of place value.
{"url":"https://a-quiz.com/place-value-find-digit-value-in-a-number-quiz/","timestamp":"2024-11-05T04:21:19Z","content_type":"text/html","content_length":"343171","record_id":"<urn:uuid:91afe60e-f9ec-421d-bb87-8d85fa2ec76f>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00587.warc.gz"}
language Archives | R-BAR R’s Inf keyword – Have you’ve ever wondered what to do with it? If so, this is the second in series of posts that explore how we can exploit the keyword’s interesting properties to get the answers we need and improve code robustness. If you want to catch up on the first post where we look at Inf and the cut() function, please see Infamous Inf – Part I For those unfamiliar with R’s Inf keyword, it is defined as a positive or negative number divided by zero yielding positive or negative infinity, respectively. c(plus_inf = 1/0, minus_inf = -1/0) # plus_inf minus_inf # Inf -Inf Sounds very theoretical. So… R’s Inf keyword – Have you’ve ever wondered what to do with it? If so, this is the first in series of posts that explore how we can exploit the keyword’s interesting properties to get the answers we need and improve code robustness. For those unfamiliar with R’s Inf keyword, it is defined as a positive or negative number divided by zero yielding positive or negative infinity, respectively. c (plus_inf = 1/0, minus_inf = -1/0) # plus_inf minus_inf # Inf -Inf Sounds very theoretical. So how we can make practical use of infinity in R? In this first post, we’ll be discussing how Inf can make binning data with cut() a… Infamous Inf – Part I
{"url":"https://r-bar.net/tag/language/","timestamp":"2024-11-03T00:05:10Z","content_type":"text/html","content_length":"50026","record_id":"<urn:uuid:2b15712b-2d99-47eb-b3ba-0ea2e9036c21>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00514.warc.gz"}
Flow Coefficient and Work Coefficient A blog on what's new, notable, and next in turbomachinery Two often used quantities to characterize turbomachinery are flow coefficient and work coefficient. The two are generally represented as Φ for flow coefficient and φ for work coefficient. The mathematical definition for the two quantities are as follows: Cm is the meridional velocity (meridional velocity is the component of velocity in the radial and axial plane). U is the local rotational speed (radius*rotational velocity) and Δh0 is the change in total enthalpy or energy of the fluid. Although not immediately obvious, the two quantities are directly related to the shape of the velocity triangle. This becomes more clear when we consider the Euler turbomachinery equation: Let's take a simple axial compressor example to help visualize the equations. The figure on the left is a simple compressor stage with the rotor (in blue) moving to the right and the stator (in red) which is not rotating. The absolute velocity (C) is in red. The relative velocity (W) is blue and represents what the rotor actually “sees” of the fluid. The absolute velocity is the vector sum of the relative velocity and the local wheel speed (U) shown in green. Since this is a purely axial example, there is no difference between the outflow and inflow radius and therefore the wheel speed U is also uniform. The Euler turbomachinery equation reduces to: Plugging this into the equation for work coefficient we get: If we normalize the fluid velocities by U and plot them together, we get a handy graphical representation of the coefficients: We can see immediately that the shape of the triangle and the resulting angles are determined by the coefficients. There is, actually, a third variable at play, which is the reaction of the stage (see blog: Reaction Verses Impulse). The reaction of the stage is the overall pressure split between the rotor and stator. Sometimes the reaction is defined in terms of energy, but the principle is the same. In this example, the reaction of the stage is 50% and the velocities that the rotor sees (relative) and the velocities that the stator sees (absolute) are identical. This gives two symmetrical triangles balanced about the center of the plot. The situation becomes a bit more complex with a radial machine. Since the wheel speeds are no longer equal, the coefficients can be defined differently. The typical convention is to take the largest radius (exit radius for a compressor or pump and inlet radius for a radial turbine) as the basis for the U value. The equation can be recast as: Where m is the mass flow, ρ is the density, and A is the area. A radial turbine with zero exit swirl (a reasonable design target) makes no contribution to the Euler work, thus the work coefficient is a function of the inlet condition alone. Plotting this in a similar basis as the axial compressor gives us the triangle distribution below. Flow and work coefficients are two of the most common means of classifying turbomachinery stages. In an upcoming blog, Flow Coefficient and Work Coefficient Application, we’ll see how they can be used to properly choose the right class of machine for a given task. Be sure to read my blog on Specific Speed Demystified! By Daniel V. Hinch, Corporate VP Sales and Marketing, Concepts NRECOct 29, 2021 In the world of aerodynamics, there are several branches and sub-branches of different types of aerodynamics. In the big picture the field of aerodynamics can be broken down into external and... By Oleg Dubitsky, Distinguished Corporate Fellow, Director of Corporate TechnologyAug 27, 2021 Low reaction stages are often used as control stages of steam turbines, ORC turbines, drive and rocket turbopump turbines. Some of the benefits of low reaction stages vs higher reaction stages are: By Daniel V. Hinch, Corporate VP Sales and Marketing, Concepts NRECMay 14, 2021 In my last blog I wrote about visiting a local middle school to give a talk on ‘What is Turbomachinery, and How Does It Work?’ The quiz at the end of the talk was for the students to list all the...
{"url":"https://www.conceptsnrec.com/blog/flow-coefficient-and-work-coefficient","timestamp":"2024-11-01T22:11:27Z","content_type":"text/html","content_length":"132657","record_id":"<urn:uuid:8c479357-3894-4eca-9cef-37b02e7e0dbd>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00781.warc.gz"}
[Solved] A curve is such that dy/dx = (2 - x) 2 /x | SolutionInn A curve is such that dy/dx = (2 - x) 2 /x. Given that the curve passes A curve is such that dy/dx = (2 - √x)^2/√x. Given that the curve passes through the point (9, 14) find the equation of the curve. Fantastic news! We've Found the answer you've been seeking! Step by Step Answer: Answer rating: 61% (13 reviews) To find the equation of the curve we can use the method of separation of variables First we can se...View the full answer Answered By BillClinton Muguai I have been a tutor for the past 5 years. I have experience working with students in a variety of subject areas, including computer science, math, science, English, and history. I have also worked with students of all ages, from elementary school to college. In addition to my tutoring experience, I have a degree in education from a top university. This has given me a strong foundation in child development and learning theories, which I use to inform my tutoring practices. I am patient and adaptable, and I work to create a positive and supportive learning environment for my students. I believe that all students have the ability to succeed, and it is my job to help them find and develop their strengths. I am confident in my ability to tutor students and help them achieve their academic goals. 0.00 0 Reviews 10+ Question Solved Students also viewed these Business questions Study smarter with the SolutionInn App
{"url":"https://www.solutioninn.com/study-help/financial-accounting-information-for-decisions/a-curve-is-such-that-dydx-2-x2xgiven-that-the-881666","timestamp":"2024-11-08T07:21:39Z","content_type":"text/html","content_length":"80060","record_id":"<urn:uuid:0bd7149d-52d8-4710-a21d-204e55a7b890>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00045.warc.gz"}
Audchf pip value The HotForex pip calculator will help you determine the value per pip in your base currency so that you can monitor your risk per trade with more accuracy. Use the Smart Calculator to simulate different trade scenarios by changing the You will also be able to look up the pip value and the risk/reward ratio, both Symbol, Description, Value of 1 Lot, Pip Value (1 Lot), Swap Long (Points), Swap Short AUDCHF, Australian Dollar vs Swiss Franc, 100,000 AUD, 10 CHF Orbex offers numerous trading tools like Forex Margin, Pivot Point Calculator and Pip Trading Calculator to plan your trades. Try Orbex Demo Account now. Note that 1 InstaForex lot is 10000 units of base currency. Please find below a formula to calculate the value of one pip for currency pairs and CFDs: Calculation of FOREX PIP CALCULATOR. FOREX PIP CALCULATORS - FOREX MARGIN CALCULATORS. FOREX CALCULATOR - DUKASCOPY. MARGIN LEVEL the definition of the pip, which is not always the same depending on the pair selected (e.g. the pip for the EUR/USD = 0.0001, the pip for the EUR/JPY = 0.001) The exact formula is the following: z pip XXX/YYY =z* S * dPIP expressed in currency YYY Where . z = number of pips as a gain or loss ; S = size of the contract = no. of units of pair Pip Value. AUD/CAD, 0.8373, 6.92, 0.69, 0.07. AUD/CHF, 0.5605, 10.34, 1.03, 0.10. AUD/JPY, 62.58, 9.26, 0.93, 0.09. AUD/NZD, 1.0075, 5.75, 0.58, 0.06. PIP Calculator. Margin · CFD Financing Cost · Pip · Swap · Currency-Converter. Symbol. AUDCAD, AUDCHF, AUDHUF, AUDJPY, AUDNZD, AUDUSD FX Leaders AUD/CHF live charts will fill you in on everything you need to know to About the AUD/CHF (Australian dollar & Swiss Franc) Pip Value: $10.01. Pip Calculator. Market research tools from ZuluTrade including currency converter, pip value calculator, margin calculator and profit / loss calculator! This tool is designed to calculate required margin, pip price, long and short swap for a specific position. Calculate pip value forex, Learn how to calculate pip value. I like the position size calculator on Currency pair: audcad, audchf, audjpy, Forex lot size pip value, With a similar contract, the Pip don't have the same value on every currency pairs . the dPIP: pip definition (0.0001, 0.001) AUDCHF, 10.33, 9.47, 8.98. Note that 1 InstaForex lot is 10000 units of base currency. Please find below a formula to calculate the value of one pip for currency pairs and CFDs: Calculation of FOREX PIP CALCULATOR. FOREX PIP CALCULATORS - FOREX MARGIN CALCULATORS. FOREX CALCULATOR - DUKASCOPY. MARGIN LEVEL the definition of the pip, which is not always the same depending on the pair selected (e.g. the pip for the EUR/ USD = 0.0001, the pip for the EUR/JPY = 0.001) The exact formula is the following: z pip XXX/YYY =z* S * dPIP expressed in currency YYY Where . z = number of pips as a gain or loss ; S = size of the contract = no. of units of pair Figure 2: Change in Pip Value Affects Risk Management. Let's look at an example based on our original assumption that the pip value of the EUR/AUD was at $0.75 when you were trading a mini lot. Let's assume you wanted to buy the EUR/AUD at 1.5770 and wanted to risk 50 pips, so the stop loss was set at 1.5720. Symbol, Description, Value of 1 Lot, Pip Value (1 Lot), Swap Long (Points), Swap Short AUDCHF, Australian Dollar vs Swiss Franc, 100,000 AUD, 10 CHF PIP Calculator. Margin · CFD Financing Cost · Pip · Swap · Currency-Converter. Symbol. AUDCAD, AUDCHF, AUDHUF, AUDJPY, AUDNZD, AUDUSD All you need is your base currency, the currency pair you are trading on, the exchange rate and your position size in order to calculate the value of a pip. Currency In keeping with OANDA's value of transparency, access OANDA's historical spread data in an easy to understand graphical format. haven currencies (strong currencies) like usd,gbp,eur,cad,aud,chf,nzd is called pip A pip is a standardized unit and is the smallest amount by which a currency It means you are entering in the market with a micro lot, and your pip value is Pip Calculator. Determines the pip value of a trade and therefore your risk management strategy. Account currency. EUR, GBP, USD, PLN. Instrument. FOREX PIP CALCULATOR. FOREX PIP CALCULATORS - FOREX MARGIN CALCULATORS. FOREX CALCULATOR - DUKASCOPY. MARGIN LEVEL the definition of the pip, which is not always the same depending on the pair selected (e.g. the pip for the EUR/USD = 0.0001, the pip for the EUR/JPY = 0.001) The exact formula is the following: z pip XXX/YYY =z* S * dPIP expressed in currency YYY Where . z = number of pips as a gain or loss ; S = size of the contract = no. of units of pair Figure 2: Change in Pip Value Affects Risk Management. Let's look at an example based on our original assumption that the pip value of the EUR/AUD was at $0.75 when you were trading a mini lot. Let's assume you wanted to buy the EUR/AUD at 1.5770 and wanted to risk 50 pips, so the stop loss was set at 1.5720. The Pip Calculator will help you calculate the pip value in different account types (standard, mini, micro) based on your trade size. Dear User, We noticed that you're using an ad blocker. Myfxbook is a free website and is supported by ads. Hi there, AUDCHF is at the bottom time to watch price action for bounce off. It is still bearish and can push lower for new low. We are expecting some intervention on CHF and safe heaven to lose its value in longer term. Pip Value Table. This is an old page and no longer available. Risk Disclaimer and Disclosure Statement. No offer or solicitation to buy or sell securities, securities derivative, futures products or off-exchange foreign currency (Forex) transactions of any kind, or any type of trading or investment advice, recommendation or strategy, is made
{"url":"https://topbitxrbfefh.netlify.app/bieber53621nej/audchf-pip-value-bod","timestamp":"2024-11-12T15:57:46Z","content_type":"text/html","content_length":"31612","record_id":"<urn:uuid:0b98e853-060d-4a82-827c-9302060a0e1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00027.warc.gz"}
Rational $Q$-systems, Higgsing and Mirr SciPost Submission Page Rational $Q$-systems, Higgsing and Mirror Symmetry by Jie Gu, Yunfeng Jiang, Marcus Sperling This Submission thread is now published as Submission summary Authors (as registered SciPost users): Jie Gu · Marcus Sperling Submission information Preprint Link: https://arxiv.org/abs/2208.10047v2 (pdf) Date accepted: 2022-11-11 Date submitted: 2022-10-21 03:44 Submitted by: Sperling, Marcus Submitted to: SciPost Physics Ontological classification Academic field: Physics Specialties: • High-Energy Physics - Theory Approach: Theoretical The rational $Q$-system is an efficient method to solve Bethe ansatz equations for quantum integrable spin chains. We construct the rational $Q$-systems for generic Bethe ansatz equations described by an $A_{\ell-1}$ quiver, which include models with multiple momentum carrying nodes, generic inhomogeneities, generic diagonal twists and $q$-deformation. The rational $Q$-system thus constructed is specified by two partitions. Under Bethe/Gauge correspondence, the rational $Q$-system is in a one-to-one correspondence with a 3d $\mathcal{N}=4$ quiver gauge theory of the type ${T}_{\boldsymbol {\rho}}^{\boldsymbol{\sigma}}[SU(n)]$, which is also specified by the same partitions. This shows that the rational $Q$-system is a natural language for the Bethe/Gauge correspondence, because known features of the ${T}_{\boldsymbol{\rho}}^{\boldsymbol{\sigma}}[SU(n)]$ theories readily translate. For instance, we show that the Higgs and Coulomb branch Higgsing correspond to modifying one of the partitions in the rational $Q$-system while keeping the other untouched. Similarly, mirror symmetry is realized in terms of the rational $Q$-system by simply swapping the two partitions - exactly as for ${T}_{\boldsymbol{\rho}}^{\boldsymbol{\sigma}}[SU(n)]$. We exemplify the computational efficiency of the rational $Q$-system by evaluating topologically twisted indices for 3d $\mathcal{N}=4$ $U (n)$ SQCD theories with $n=1,\ldots,5$. List of changes 1) Added clarification on efficiency of Q-system in Section 3.4; e.g. add reference for XXX spin chain case. 2) Added clarification on assumption that (2.23) does not vanish. 3) Added clarification on general mirror symmetry for 3d N=4 linear quiver theories labeled by two partitions in Section 7.1. 4) Added clarification on mirror map (6.9) of the parameters. 5) Added explanations on the role of parameters during Higgsing transitions in the rational Q-system and the BAE. Specifically, Sections 5.2.1 and 5.4.1 as well as Appendix A.3 provide further 6) Rephrased and clarified the statement on the role of balance for Coulomb branch Higgsing in Section 5.3. 7) Some typos corrected. Published as SciPost Phys. 14, 034 (2023)
{"url":"https://www.scipost.org/submissions/2208.10047v2/","timestamp":"2024-11-12T19:47:26Z","content_type":"text/html","content_length":"31356","record_id":"<urn:uuid:1f5cfe33-4f9f-41df-9035-201d1a1d4175>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00162.warc.gz"}
3D Geometry Revision Video - Class 12, JEE Download now India's Best Exam Prepration App Class 8-9-10, JEE & NEET Video Lectures Live Sessions Study Material Tests Previous Year Paper Revision Watch the Video to Quickly Revise 3 D Coordinate Geometry in Mathematics Class 12 with N.K. Gupta Sir. Revision of Complete 3D Geometry in 2 Parts given below: PART - 1 First part of 3D coordinate Geometry Revision covering following topics: -Point: Coordinates/Distance/Section Formula -Direction Cosines/Ratios -Equation of Plane in various forms -Perpendicular Distance of point from a Plane -Angle between the Planes -Angle bisector of the Planes -Family of Planes -Two sides of a Plane PART - 2 This is the Second part of 3D Geometry Revision covering following topics: -Eq. of Straight Line - Symmetrical form: Vector/ Cartesian -Eq. of Straight Line - General form -Angle Between a Line and a Plane -Condition for a Line to lie in a Plane -General Eq. of the Plane containing a Line -Shortest Distance Between Two Skew Lines -Unsymmetrical Form of Straight Line -Line of Greatest Slope in a Prepare for JEE & NEET with the Kota's Top IITian & Doctor Faculties ! Physics Revision Series by Saransh Gupta Sir (AIR-41, IIT-BOMBAY) Chemistry Revision Series by Prateek Gupta Sir (IIT-BOMBAY, Metallurgy) Maths Revision Series by N.K. Gupta Sir (eSaral Co-founder)
{"url":"https://www.esaral.com/3d-geometry-revision-video-class-12-jee/","timestamp":"2024-11-04T17:57:04Z","content_type":"text/html","content_length":"79346","record_id":"<urn:uuid:07da7591-2c64-49dc-84bc-e6cd26235681>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00414.warc.gz"}
Improving Diffusion Models as an Alternative To GANs, Part 2 | NVIDIA Technical Blog This is part of a series on how researchers at NVIDIA have developed methods to improve and accelerate sampling from diffusion models, a novel and powerful class of generative models. Part 1 introduced diffusion models as a powerful class for deep generative models and examined their trade-offs in addressing the generative learning trilemma. While diffusion models satisfy both the first and second requirements of the generative learning trilemma, namely high sample quality aand diversity, they lack the sampling speed of traditional GANs. In this post, we review three recent techniques developed at NVIDIA for overcoming the slow sampling challenge in diffusion models. Latent space diffusion models One of the main reasons why sampling from diffusion models is slow is that mapping from a simple Gaussian noise distribution to a challenging multimodal data distribution is complex. Recently, NVIDIA introduced the Latent Score-based Generative Model (LSGM), a new framework that trains diffusion models in a latent space rather than the data space directly. In LSGM, we leverage a variational autoencoder (VAE) framework to map the input data to a latent space and apply the diffusion model there. The diffusion model is then tasked with modeling the distribution over the latent embeddings of the data set, which is intrinsically simpler than the data distribution. Novel data synthesis is achieved by first generating embeddings through drawing from a simple base distribution followed by iterative denoising, and then transforming this embedding using a decoder to data space (Figure 1). Figure 1 shows that in the latent score-based generative model (LSGM): • Data $\bf{x}$ is mapped to latent space through an encoder $q(\bf{z}_0|\bf{x})$. • A diffusion process is applied in the latent space $(\bf{z}_0 \rightarrow \bf{z}_1)$. • Synthesis starts from the base distribution $p(\bf{z}_1)$. • It generates samples $\bf{z}_0$ in latent space through denoising $(\bf{z}_0 \leftarrow \bf{z}_1)$. • The samples are mapped from latent to data space using a decoder $p(\bf{x}|\bf{z}_0)$. LSGM has several key advantages: synthesis speed, expressivity, and tailored encoders and decoders. Synthesis speed By pretraining the VAE with a Gaussian prior first, you can bring the latent encodings of the data distribution close to the Gaussian prior distribution, which is also the diffusion model’s base distribution. The diffusion model only has to model the remaining mismatch, resulting in a much less complex model from which sampling becomes easier and faster. The latent space can be tailored accordingly. For example, we can use hierarchical latent variables and apply the diffusion model only over a subset of them or at a small resolution, further improving synthesis speed. Training a regular diffusion model can be considered as training a neural ODE directly on the data. However, previous works found that augmenting neural ODEs, as well as other types of generative models, with latent variables often improves their expressivity. We expect similar expressivity gains from combining diffusion models with a latent variable framework. Tailored encoders and decoders As you use the diffusion model in latent space, you can use carefully designed encoders and decoders mapping between latent and data space, further improving synthesis quality. The LSGM method can therefore be naturally applied to noncontinuous data. In principle, LSGM can easily model data such as text, graphs, and similar discrete or categorical data types by using encoder and decoder networks that transform this data into continuous latent representations and back. Regular diffusion models that operate on the data directly could not easily model such data types. The standard diffusion framework is only well defined for continuous data, which can be gradually perturbed and generated in a meaningful manner. Experimentally, LSGM achieves state-of-the-art Fréchet inception distance (FID), a standard metric to quantify visual image quality, on CIFAR-10 and CelebA-HQ-256, two widely used image generation benchmark data sets. On those data sets, it outperforms prior generative models, including GANs. On CelebA-HQ-256, LSGM achieves a synthesis speed that is faster than previous diffusion models by two orders of magnitude. LSGM requires only 23 neural network calls when modeling the CelebA-HQ-256 data, compared to previous diffusion models trained on the data space that often rely on hundreds or thousands of network calls. Video 1. Sequence generated by randomly traversing the latent space of LSGM Critically damped Langevin diffusion A crucial ingredient in diffusion models is the fixed forward diffusion process to gradually perturb the data. Together with the data itself, it uniquely determines the difficulty of learning the denoising model. Hence, can we design a forward diffusion that is particularly easy to denoise and therefore leads to faster and higher-quality synthesis? Diffusion processes like the ones employed in diffusion models are well studied in areas such as statistics and physics, where they are important in various sampling applications. Taking inspiration from these fields, we recently proposed critically damped Langevin diffusion (CLD). In CLD, the data that must be perturbed are coupled to auxiliary variables that can be considered velocities, similar to velocities in physics in that they essentially describe how fast the data moves towards the diffusion model’s base distribution. Like a ball that is dropped on top of a hill and quickly rolls into a valley on a relatively direct path accumulating a certain velocity, this physics-inspired technique helps the data to diffuse quickly and smoothly. The forward diffusion SDE that describes CLD is as follows: $\dbinom{d \bf{x}_t}{d \bf{v}_t} = \underbrace{ \dbinom{M^{1} \bf{v}_t}{\bf{x} t} \beta dt}_{\textrm{Hamiltonian~component} =: \it{H}} + \underbrace{ \dbinom{ \bf{0} d}{-\Gamma M^{-1}\bf{v}_t} \beta dt + \dbinom{0}{\sqrt{2 \Gamma \beta}} d \bf{w}_t}_{\textrm{Ornstein-Uhlenbeck~process} =: \it{O}}$ Here, $\bf{x}_t$ denotes the data and $\bf{v}_t$ the velocities. $M$, $\Gamma$, and $\beta$ are parameters that determine the diffusion as well as the coupling between velocities and data. $d\bf{w} _t$ is a Gaussian white noise process, responsible for noise injection, as seen in the formula. CLD can be interpreted as a combination of two different terms. First is an Ornstein-Uhlenback process, the particular kind of noise injection process used here, which acts on the velocity variables Second, the data and velocities are coupled to each other as in Hamiltonian dynamics, such that the noise injected into the velocities also affects the data $\bf{x}_t$. Hamiltonian dynamics provides a fundamental description of the mechanics of physical systems, like the ball rolling down a hill from the example mentioned earlier. Figure 2 shows how data and velocity diffuse in CLD for a simple one-dimensional toy problem: Figure 2. In critically-damped Langevin diffusion, the data x[t] is augmented with a velocity v[t]. A diffusion coupling x[t] and v[t ]is run in the joint data-velocity space (probabilities in red). Noise is injected only into v[t]. This leads to smooth diffusion trajectories (green) for the data x[t]. At the beginning of the diffusion, we draw a random velocity from a simple Gaussian distribution and the full diffusion then takes place in the joint data-velocity space. When looking at the evolution of the data (lower right in the figure), the model diffuses in a significantly smoother manner than for previous diffusions. Intuitively, this should also make it easier to denoise and invert the process for generation. We obtain this behavior only for a particular choice of the diffusion parameters $M$ and $\Gamma$, specifically for $\Gamma^2 = 4M$. This configuration is known as critical damping in physics and corresponds to a special case of a broader class of stochastic dynamical systems known as Langevin dynamics—hence the name critically damped Langevin diffusion. We can also visualize how images evolve in the high-dimensional joint data-velocity space, both during forward diffusion and generation: Figure 3. CLD’s forward diffusion and the reverse-time synthesis processes At the top of Figure 3, we visualize how a one-dimensional data distribution together with the velocity diffuses in the joint data-velocity space and how generation proceeds in the reverse direction. We sample three different diffusion trajectories and also show the projections into data and velocity space on the right. At the bottom, we visualize a corresponding diffusion and synthesis process for image generation. We see that the velocities “encode” the data at intermediate times $t$. Using CLD when training generative diffusion models leads to two key advantages: • Simpler score function and training objective • Accelerated sampling with tailored SDE solvers Simpler score function and training objective In regular diffusion models, the neural network is tasked with learning the score function $abla_{\bf {x}t} log ~p_t (\bf{x}_t)$ of the diffused data distribution. In CLD-based models, in contrast, we are tasked with learning $abla{\bf {v}_t} log ~p_t (\bf{v}_t|\bf{x}_t)$, the conditional score function of the velocity given the data. This is a consequence of injecting noise only into the velocity variables. However, as the velocity always follows a smoother distribution than the data itself, this is an easier learning problem. The neural networks used in CLD-based diffusion models can be simpler, while still achieving high generative performance. Related to that, we can also formulate an improved and more stable training objective tailored to CLD-based diffusion models. Accelerated sampling with tailored SDE solvers To integrate CLD’s reverse-time synthesis SDE, you can derive tailored SDE solvers for more efficient denoising of the smoother forward diffusion arising in CLD. This results in accelerated Experimentally, for the widely used CIFAR-10 image modeling benchmark, CLD outperforms previous diffusion models in synthesis quality for similar neural network architectures and sampling compute budgets. Furthermore, CLD’s tailored SDE solver for the generative SDE significantly outperforms solvers such as Euler–Maruyama, a popular method to solve the SDEs arising in diffusion models, in generation speed. For more information, see Score-Based Generative Modeling with Critically-Damped Langevin Diffusion. Figure 4. Synthesized CIFAR-10 images generated by a diffusion model based on critically damped Langevin diffusion. We’ve shown that you can improve diffusion models by merely designing their fixed forward diffusion process in a careful manner. Denoising diffusion GANs So far, we’ve discussed how to accelerate sampling from diffusion models by moving the training data to a smooth latent space as in LSGM or by augmenting the data with auxiliary velocity variables and designing an improved forward diffusion process as in CLD-based diffusion models. However, one of the most intuitive ways to accelerate sampling from diffusion models is to directly reduce the number of denoising steps in the reverse process. In this part, we go back to discrete-time diffusion models, trained in the data space and analyze how the denoising process behaves as you reduce the number of denoising steps and perform large steps. In a recent study, we observed that diffusion models commonly assume that the learned denoising distributions $p_{ \theta} (\bf{x}_{t-1}|\bf{x}_t)$ in the reverse synthesis process can be approximated by Gaussian distributions. However, it is known that the Gaussian assumption holds only in the infinitesimal limit of many small denoising steps, which ultimately leads to the slow synthesis of diffusion models. When the reverse generative process uses larger step sizes (has fewer denoising steps), we need a non-Gaussian, multimodal distribution for modeling the denoising distribution $p_{ \theta} (\bf{x}_ Intuitively, in image synthesis, the multimodal distribution arises from the fact that multiple plausible and clean images may correspond to the same noisy image. Because of this multimodality, simply reducing the number of denoising steps, while keeping the Gaussian assumption in the denoising distributions, hurts generation quality. Figure 5. (top) Evolution of a 1D data distribution q(x[0]) according to the forward diffusion process. (bottom) Visualizations of the true denoising distribution when conditioning on a fixed x[5] with varying step sizes shown in different colors. In Figure 5, the true denoising distribution for a small step size (shown in yellow) is close to a Gaussian distribution. However, it becomes more complex and multimodal as the step size increases. Inspired by the preceding observation, we propose to parametrize the denoising distribution with an expressive multimodal distribution to enable denoising with large steps. In particular, we introduce a novel generative model, Denoising Diffusion GAN, in which the denoising distributions are modeled with conditional GANs (Figure 6). Figure 6. Denoising diffusion process Generative denoising diffusion models typically assume that the denoising distribution can be modeled by a Gaussian distribution. This assumption holds only for small denoising steps, which in practice translates to thousands of denoising steps in the synthesis process. In our Denoising Diffusion GANs, we represent the denoising model using multimodal and complex conditional GANs, enabling us to efficiently generate data in as few as two steps. Denoising Diffusion GANs are trained using an adversarial training setup (Figure 7). Given a training image $\bf{x}_0$, we use the forward Gaussian diffusion process to sample from both $\bf{x}_{t-1} $ and $\bf{x}_t$, the diffused samples at two successive steps. Given $\bf{x}_t$, our conditional denoising GAN first stochastically generates $\bf{x}'_0$ and then uses the tractable posterior distribution $q(\bf{x}'_{t-1}|\bf{x}_t, \bf{x}'_0)$ to generate $\bf {x}'_{t-1}$ by adding back noise. A discriminator is trained to distinguish between the real $(\bf{x}_{t-1}, \bf{x}_t)$ and the generated $(\bf{x}'_{t-1}, \bf{x}_t)$ pairs and provides feedback to learn the conditional denoising GAN. After training, we generate novel instances by sampling from noise and iteratively denoising it in a few steps using our Denoising Diffusion GAN generator. Figure 7. Training process of Denoising Diffusion GANs We train a conditional GAN generator to denoise inputs $x_t$ using an adversarial loss for different steps in the diffusion process. Advantages over traditional GANs Why not just train a GAN that can generate samples in one shot using a traditional setup, in contrast to our model that iteratively generates samples by denoising? Our model has several advantages over traditional GANs. GANs are known to suffer from training instabilities and mode collapse. Some possible reasons include the difficulty of directly generating samples from a complex distribution in one shot, as well as overfitting problems when the discriminator only looks at clean samples. In contrast, our model breaks the generation process into several conditional denoising diffusion steps in which each step is relatively simple to model, due to the strong conditioning on $x_t$. The diffusion process smoothens the data distribution, making the discriminator less likely to overfit. We observe that our model exhibits better training stability and mode coverage. In image generation, we observe that our model achieves sample quality and mode coverage competitive with diffusion models while requiring only as few as two denoising steps. It achieves up to 2,000x speed-up in sampling compared to regular diffusion models. We also find that our model significantly outperforms state-of-the-art traditional GANs in sample diversity, while being competitive in sample fidelity. Figure 8. Sample quality vs. sampling time for different diffusion-based generative models Figure 8 shows sample quality (as measured by Fréchet inception distance; lower is better) compared to sampling time for different diffusion-based generative models for the CIFAR-10 image modeling benchmark. Denoising Diffusion GANs achieve a speedup of several orders of magnitude compared to other diffusion models while maintaining similar synthesis quality. Diffusion models are a promising class of deep generative models due to their combination of high-quality synthesis and strong diversity and mode coverage. This is in contrast to methods such as regular GANs, which are popular but often suffer from limited sample diversity. The main drawback of diffusion models is their slow synthesis speed. In this post, we presented three recent techniques developed at NVIDIA that successfully address this challenge. Interestingly, they each approach the problem from different perspectives, analyzing the different components of diffusion models: • Latent space diffusion models essentially simplify the data itself, by first embedding it into a smooth latent space, where a more efficient diffusion model can be trained. • Critically damped Langevin diffusion is an improved forward diffusion process that is particularly well suited for easier and faster denoising and generation. • Denoising diffusion GANs directly learn a significantly accelerated reverse denoising process through expressive multimodal denoising distributions. We believe that diffusion models are uniquely well-suited for overcoming the generative learning trilemma, in particular when using techniques like the ones highlighted in this post. These techniques can also be combined, in principle. In fact, diffusion models have already led to significant progress in deep generative learning. We anticipate that they will likely find practical use in areas such as image and video processing, 3D content generation and digital artistry, and speech and language modeling. They will also find use in fields such as drug discovery and material design, as well as various other important applications. We think that diffusion-based approaches have the potential to power the next generation of leading generative models. Last but not least, we are part of the organizing committee for a tutorial on diffusion models, their foundations, and applications, held in conjunction with the Computer Vision and Pattern Recognition (CVPR) conference, on June 19, 2022, in New Orleans, Louisiana, USA. If you are interested in this topic, we invite you to see our Denoising Diffusion-based Generative Modeling: Foundations and Applications tutorial. To learn more about the research that NVIDIA is advancing, see NVIDIA Research. For more information about diffusion models, see the following resources:
{"url":"https://developer.nvidia.com/blog/improving-diffusion-models-as-an-alternative-to-gans-part-2/","timestamp":"2024-11-12T03:55:25Z","content_type":"text/html","content_length":"230549","record_id":"<urn:uuid:810d8e50-7cf4-4df7-b6eb-e2deac52d843>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00195.warc.gz"}
Find the starting node in a directed graph which covers the maximum number of nodes Given a directed graph with N number of nodes and exactly N number of edges. Each node has exactly one outgoing edge from it. Find a path such that the maximum number of nodes can be covered from the starting point, and return the starting point. NOTE: A node can have multiple edges which are pointing towards him, but only one outgoing edge Input: N = 5 1->2, 2->1, 3->1, 4->2, 5->3 Output: 5 If we start from node 1 as beginning then the path is: 1 -> 2 If we start from node 2 as beginning then the path is: 2 -> 1 If we start from node 3 as beginning then the path is: 3 -> 1 -> 2 If we start from node 4 as beginning then the path is: 4 -> 2 -> 1 If we start from node 5 as beginning then path is: 5 -> 3 -> 1 -> 2 Hence, we can clearly see that if we start for 5, we cover the maximum number of nodes in the graph i.e. 4. To find the number of nodes reachable from a particular node, one thing to observe is that we can reach from a node X to a node Y only when they are in the same connected component. Since the graph is directional, any node in a connected component to any other node in the same connected components. Hence for a particular node X number of nodes that will be reachable will be the number of nodes in that particular component. Use a depth-first search to find the answer. Following is a Java implementation of the code: import java.util.HashSet; import java.util.Set; public class Main{ static int[] graph; // Driver Function public static void main(String[] args) { // Taking the number of nodes from the user int n = 5; // Array to store the nodes direction graph = new int[n]; // Initializing graph // 1->2, 2->1, 3->1, 4->2, 5->3 graph[0] = 1; graph[1] = 0; graph[2] = 0; graph[3] = 1; graph[4] = 2; static int find(int n) { int max = 0; // Holds the maximum count of nodes visited int node = 0; // Starting index of the node with maximum max count // Considering each node one-by-one as beginning node // Using DFS to fully explore that node for (int i = 0; i < n; i++) { // Finding the total number of node that can be covered by // considering the ith node as beginning int visits = canVisit(i); if (visits > max) { // If ith node covers more node then max = visits; // Store the number of nodes covered node = i; // Store the node index // As we are considering the indices from 0 just add 1 into the index return node + 1; // and return it // Function to perform the DFS calculating the // count of the elements in a connected component static int canVisit(int n) { // This set contains the indices of the visited elements // This will help use to make sure that we are not running // inside a cycle in the graph Set<Integer> set = new HashSet<>(); set.add(n); // Add the current node into the graph as it is visited // Go to the next node in the graph towards with the current directs int visit = graph[n]; // Hold the total number of nodes visited // Since we at least visit the beginning node hence assign count to 1 int count = 1; // Explore until the node repeats or we reach at the dead-end while (!set.contains(visit)) { set.add(visit); // Add the next visit into the set visit = graph[visit]; // Jump to next directed node count++; // Increment the count as one more has been explored return count; // Return the total number of nodes visited We will have the worst case if the graph is linearly connected with no internal cycles, which will give us O(N²). With an Auxiliary space of O(N). Top comments (6) Jaskirat Grewal • Nice Post! Use #beginner to make sure that it reaches more people in the learning domain. Jaskirat Grewal • Also #tutorial should be used as you have provided step by step implementation. Sumit Singh • Thanks mate... Mohammad Ubaid • • Edited on • Edited arrayoutofbounds error for node 3 3 1 Deepak Rawte • • Edited on • Edited Just because indices must be changed to 2 2 0, just copying code won't work :) in challenges. if using static -----> do arr[i] = B[i]-1; Sumit Singh • On my machine, the test case you have given is working fine. It would be very beneficial if you can just attach a screenshot of what you have tried. For further actions, you may consider blocking this person and/or reporting abuse
{"url":"https://dev.to/sumit/find-the-starting-node-in-a-directed-graph-which-covers-the-maximum-number-of-nodes-dcg","timestamp":"2024-11-10T05:27:20Z","content_type":"text/html","content_length":"146187","record_id":"<urn:uuid:b94c9017-15de-4f04-98f2-c6e0abf025e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00406.warc.gz"}
Start Math Class With a Chuckle! How will we ever use negative numbers in the real world? You haven't seen my bank account. Hey, what's your sign? What kind of roots does a "geom-e-tree" have? Square roots Why are the parentheses wearing blue ribbons? Because they always come first Why was the math teacher upset with cupid? He kept changing "like terms" to "love terms." Why was the math teacher upset with one of her students? He kept asking, "What's the point?" Parent: Why do you have that sheet of paper in a bowl of water? Student: It's my homework. I am trying to dissolve an equation. Teacher: Why don't you have your homework today? Student: I divided by zero and the paper vanished into thin air. How do equations get in shape? They do multi-step aerobics. Why did the variable add its opposite? To get to the other side. If you give 15 cents to one friend and 10 cents to another friend, what time is it? A quarter to two. What did the circle see when sailing on the ocean? Pi rates. Why did the variable break up with the constant? The constant was incapable of change. Son: Dad, what does it mean when someone tells me to give 110% Dad: It means they didn't take Algebra. Banker: Do you have any interest in taking out a loan? Customer: If there's interest, I'm not interested. Why did the shopper think the store was selling everything wholesale? Because the store had two "half off" signs. Why did the Moore family name their son Lester? So he could be called "Moore" or "Less". Why did the parents think their little variable was sick? The nurse said he had to be isolated. What did the math teacher do to prepare for class? She made a “less-than” plan. What did the doctor say to the multi-step inequality? I can solve your problem with a few operations. How does a math teacher get a compound fracture? She breaks her (h)AND What does an absolute- value expression work on when it goes to the gym. Its “abs”! What did Miss Manners say to the inequality symbol? It's not polite to point. What do a Math teacher and an English teacher have in common? They both can make a "pair-a-graph". Why did the y-variable leave the city? He was more at home on the range. Why did the x-variable move home? She was more comfortable in her own domain. Psychology Teacher: Can anyone use the word "dysfunction" in a sentence? Math Student: I can! "Dysfunction" is really hard to graph. What should you title a graph showing the relative diameters and weights of a batch of pancakes? The Batter Plot!
{"url":"https://mathforthemiddle.com/mathjokes.aspx","timestamp":"2024-11-06T23:55:09Z","content_type":"text/html","content_length":"34162","record_id":"<urn:uuid:796b408b-f10c-42dd-8f79-99f47ba0568a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00875.warc.gz"}
How To Do Algebra On A Ti-30Xa Calculators help people to do complicated and not-so-complicated mathematical problems every day. Texas Instruments is one of the leading calculator manufacturers in the United States. Its TI-30Xa is a scientific calculator that can be used for algebraic calculations. The TI-30Xa calculator is programed to follow the basic order of operations. Basic Arithmetic Step 1 Enter the first number then press the "+", "-", "x" or "/" (division), depending on the operation. Enter the next number and then the "=" to complete the operation. Because the TI-30Xa performs mathematical operations in the order of parenthesis, exponents, multiplication/division and addition/subtraction, you can enter an entire expression with multiple operations at once. Step 2 Enter the number then press the "+/-" button to change the sign of the number from positive to negative. The "+/-" button is actually pictured as a "+" and "-" with two arrows creating a circle between them. Step 3 Enter a "(" prior to entering an operation set – meaning a set of numbers and the operation(s) being performed – and a ")" after the operation set to indicate that the calculator should perform the enclosed operation(s) prior to performing any operations that follow. Again, this follows the order of operations. Powers and Roots Step 1 Enter the base number, then the "x^2" (x-squared) button to square the number entered. For a cubed number, enter the base number, then "2nd" and "x^3" (x-cubed). Step 2 Enter the base number, then "y^x" (y-to-the-x-power) and the exponent number for any exponent other than 2 or 3. Step 3 Enter the number inside a radical (the square root symbol) and then the square root button. The square root button shows the square root of x. The cubed root of a number is found by entering the number inside the radical, then "2nd" and the cubed root button. The cubed root button looks like a square root symbol with a 3 on the outside and an x inside. Step 4 Enter the number inside the radical, then "2nd" and the x-root button for any root other than the square (2) or cube (3). The x-root button looks like a square root symbol with an x on the outside and a y on the inside. Logarithmic Functions Step 1 Enter the log number then "LOG" to get the logarithm of the number. Step 2 Enter the number and then "LN" for the natural log of a number. The TI-30Xa does not allow for logarithms with bases other than 10 or the natural number e. Step 3 Enter the exponent, "2nd" and "10^x" to calculate an exponential multiple of 10. Step 4 Enter the exponent, "2nd" and "e^x" to calculate an exponential multiple of the natural number e. • "Texas Instruments, TI-30Xa/30Xa Solar Owners Manual," Texas Instruments Incorporated, 1997. Cite This Article Dorr, Pamela. "How To Do Algebra On A Ti-30Xa" sciencing.com, https://www.sciencing.com/do-algebra-ti30xa-8737532/. 24 April 2017. Dorr, Pamela. (2017, April 24). How To Do Algebra On A Ti-30Xa. sciencing.com. Retrieved from https://www.sciencing.com/do-algebra-ti30xa-8737532/ Dorr, Pamela. How To Do Algebra On A Ti-30Xa last modified August 30, 2022. https://www.sciencing.com/do-algebra-ti30xa-8737532/
{"url":"https://www.sciencing.com:443/do-algebra-ti30xa-8737532/","timestamp":"2024-11-10T14:24:02Z","content_type":"application/xhtml+xml","content_length":"72182","record_id":"<urn:uuid:53702eeb-f494-43e9-aea4-f2553223f062>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00542.warc.gz"}
CAT October 11 Quantitative Aptitude Practice Questions 2024 From Arithmetic - Getmyuni CAT October 11 Quantitative Aptitude Practice Questions 2024 From Arithmetic CAT October 11 Quantitative Aptitude Practice Questions 2024 From Arithmetic are given here. Students can elevate their scores by practicing these. CAT October 11 Quantitative Aptitude Practice Questions 2024 From Arithmetic given here (Image Credits: pexels.com). CAT October 11 Quantitative Aptitude Practice Questions 2024 From Arithmetic: The Indian Institutes of Management (IIMs) will be conducting the CAT exam on November 24, 2024. Students can find the CAT October 11 Quantitative Aptitude Practice Questions 2024 From Arithmetic here to score better in their exams. By practising these questions before the CAT examination, students can learn time management better. Also Check | CAT October 10 Quantitative Aptitude Practice Questions 2024 From Arithmetic The CAT October 10 Quantitative Aptitude Practice Questions 2024 From Arithmetic has been provided below for their reference. Q1. What will be the first term in this series if, in any decreasing arithmetic progression, the total of all its terms—aside from the first term—equals –36, the sum of all its terms—aside from the last term—is equal to 0, and the difference between the tenth and sixth terms is equal to –16? 1. 16 2. 20 3. –16 4. –20 Q2. With the exception of the first term, the arithmetic progression's terms total 99, and with the exception of the sixth term, 89. If the total of the first and fifth terms equals 10, find the third term in the progression. 1. 15 2. 5 3. 8 4. 10 Q3. An arithmetic progression's product of its fourth and fifth terms is 456. When the ninth term in the progression is divided by the fourth term, the residual is equal to 10, and the quotient is 11. Identify the progression's initial phrase. 1. – 52 2. – 42 3. – 56 4. – 66 Q4. The arithmetic progression's first and third terms are equivalent to the geometric progression's first and third terms, respectively. The arithmetic progression's second term is 0.25 greater than the geometric progression's second term. If the arithmetic progression's first term equals 2, then find the sum of the first five terms. 1. 2.25 or 25 2. 2.5 or 27.5 3. 1.5 4. 3.25 It is advisable for the candidates to visit the CAT 2024 official website regularly for updates on the examination process. Candidates must remember that the given CAT October 11 Data Interpretation Practice Questions have been taken from multiple references including Arun Sharma. Also Read |
{"url":"https://news.getmyuni.com/cat-october-11-quantitative-aptitude-practice-questions-2024-from-arithmetic","timestamp":"2024-11-02T18:14:18Z","content_type":"text/html","content_length":"68132","record_id":"<urn:uuid:b80174fa-0232-4d4f-a787-cd537f5d5c06>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00203.warc.gz"}
MAP with AND and OR logic In this example, the goal is to apply AND and OR logic to an array using the AND function and the OR function. The challenge is that the AND function and the OR function both aggregate values to a single result. This means you can't use them in an array operation where the goal is to return more than one result. One workaround to this limitation is to use the MAP function, as explained below. All data is in an Excel Table named data in the range B5:C15. AND and OR limitations In this example, we want to test each row in the table with the following logic: Color is "Red" OR "Blue" AND Qty > 10. For each row in the table, we want a TRUE or FALSE result. If we try to use a formula like this: The formula will fail because the AND function and the OR function both aggregate values to a single result. MAP function One solution to implementing the logic above is to use the MAP function. The MAP function "maps" a custom LAMBDA function to each value in a supplied array. The LAMBDA is applied to each value, and the result from MAP is an array of results with the same dimensions as the original array. The MAP function is useful when you want to process each item in an array individually, but as an array operation that yields an array result. In this example, we supply the MAP function with two arrays: data[Color], and data[Qty]: Next, we need supply a LAMBDA function that implements the logic we need: Notice that inside the LAMBDA function, data[Color] becomes "a", and data[Qty] becomes "b". These names are arbitrary and you can use any valid name you like . The arrays provided to the MAP function are named by the parameters in the LAMBDA in the order that they appear. The MAP function works through each value in data[Color] and data[Qty] and implements the logic created by the AND and OR functions. Since there are 11 rows in the table, the result is an array of 11 TRUE and FALSE values like this: These values are returned to cell E5 and spill into the range E5:E15. Counting results The formula in cell G5 shows a practical application of the MAP formula explained above: Here, the goal is to count all TRUE results from MAP. To do that, we add a double negative (--) before the AND function to convert TRUE and FALSE values to 1s and 0s, then we nest the entire formula inside the SUM function. The MAP function returns the numeric array to SUM: =SUM({0;1;0;1;0;0;0;1;0;0;0}) // returns 3 The SUM function then returns a final result of 3.
{"url":"https://exceljet.net/formulas/map-with-and-and-or-logic","timestamp":"2024-11-11T13:56:26Z","content_type":"text/html","content_length":"50072","record_id":"<urn:uuid:c0dde3d0-21dc-404c-9746-42bc8dd3e185>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00726.warc.gz"}
1. Find an equation for the line λ which is orthogonal to the Finalisterna för den globala konsttävlingen - Desmos "A linear transformation which in the plane maps lines into points and lines into points and in space "An operator in relational algebra, used in database management. msgid "mirror" msgstr "spegel" #. Math.matVecMult(c,e))}}return e},getCSSTransformMatrix:function(e){var COORDS_BY_USER,[u,a],r)},reflection:function(e,t,n){var r=t.coords. isPoint(t[1]))throw new Error("JSXGraph: Can't create mirror point with parent types '"+typeof av AW Abelpristagare — att tenta grundkursen i linjär algebra och såg då att Lars Nystedt hade en rep- etitionskurs inför restriction s|K. We define Gal(K/Q) to be the image of this restriction map and it is known How mirror symmetry could be used. In addition Column Space Linear Algebra. مهمي جملي. function 105. med 80. matrix 74. FeatureWorks is PropertyManager för Linear Pattern (Linjärt mönster) visas. Vi minns Tord Ganelius : Ulf Persson, Arne Söderqvist, Jaap Get an answer for 'What is the equation of the mirror image of the line 3x + 2y = 8 about the x-axis.' and find homework help for other Math questions at eNotes. How do I determine if this equation is a linear function or a no (Image by author) Mirror, mirror, on the wall, who is the least square of all? Y is ||Y-Ŷ_θ|| [⁴ and ⁵]: Consult your favorite Linear Algebra textbook for this fact 20 Sep 2018 This matrix transpose can be thought of a mirror image across the main diagonal. Magnus Bjurström bjurstrm1137 – Profil Pinterest What Is a Mirror Image Across a Line of Symmetry? : Geometry, Algebra & More. If playback doesn't begin shortly, try restarting your device. Videos you watch may be added to the TV's watch history – The mirror image across a diagonal line • Called the main diagonal , running down to the right starting from upper left corner 10 A= A 1,1 A 1,2 A 1,3 A 2,1 A 2,2 A 2,3 A 3,1 A 3,2 A 3,3 ⎡ ⎣ ⎢ ⎢ ⎢ ⎢ ⎤ ⎦ ⎥ ⎥ ⎥ ⎥ ⇒AT= A 1,1 A 2,1 A 3,1 A 1,2 A 2,2 A 3,2 A 1,3 A 2,3 A 3,3 ⎡ ⎣ ⎢ ⎢ ⎢ ⎢ ⎤ ⎦ ⎥ ⎥ ⎥ ⎥ A= A 1,1 A 1,2 A 2,1 A 2, A 3,12 ⎡ ⎣ ⎢ ⎢ ⎢ ⎢ ⎤ ⎦ ⎥ ⎥ ⎥ Se hela listan på machinelearning-blog.com 2016-08-06 · Image orientation can be computed by transforming the "up" and "right" axes in object space using the mirror matrix M to find the orientation and parity in image space. Each direction of the coordinate system is transformed by the mirror matrix An image can be represented as a matrix and linear operations like matrix addition, subtraction, multiplication, etc., can be performed on them, these are called Image Filters. Images are represented as 3 dimensional(2 for height and width and 1 for channel) array/matrix of pixels, and we all know whenever matrix is coined linear algebra appears automatically. An illustration of a heart shape Donate An linear_algebra_in_4_pages.pdf (PDFy mirror) Item Preview remove-circle Share or Embed This Item. 2020-04-05 · There are many common uses of linear algebra that we encounter in our everyday lives without noticing, one of which you are using right this second. The letters you are reading are being generated by a series of linear equations that determine the placement of points and lines to form shapes, or in this case… The Bulletin of the International Linear Algebra Society IMAGE Serving the International Linear Algebra Community Issue Number 46, pp. 1-48, Spring 2011 Editor-in-Chief: Jane M. Day, Dept. of Mathematics, San Jose State University, San Jose, CA, USA 95192-0103; day@math.sjsu.edu The Linear Algebra Chapter in Goodfellow et al is a nice and concise introduction, but it may require some previous exposure to linear algebra concepts. Svenska invånare 2021 [CrossRef]. 4. H. Wagner, Optik 8, 456–472 (1951). 5. This is for a linear algebra class so I am not allowed to use flipud(). I must find a transformation matrix (T) which can be multiplied by an image (X) to give the In Algebra, Division of matrices is done by Inverse-Multiplication Divisor Matrix with Dividend Matrix. Also resembled a sin graph, so I decided to recreate the whole picture on Desmos! matematiken vilket inkluderar geometri, linjär algebra, calculus, kvantmekanik, related by mirror symmetry exactly as the beta turns I - I ' and II - II ' . The inverse turn is According to a well known formula in linear algebra: Ev ery pla n e Ô in 2nd Grade Math Worksheets, School Worksheets, Math Fractions Worksheets, Science Worksheets, Number 17 Best Images of Timed Multiplication Worksheets - Printable Multiplication Worksheets 100 Problems, “Mirror” on Mirror. 5. Graphing Lines & Zombies ~ Graphing Linear Equations in Standard Form Activity. '20; '10; '00; '90; '80; coauthors; bottom. mirror. Resebyra american express Example. The image of f(x) = ex consists of all positive numbers. 2016-02-13 linear algebra images. 1,685 linear algebra stock photos, vectors, and illustrations are available royalty-free. See linear algebra stock video clips. of 17. This website contains many kinds of images but only a few are being shown on the homepage or in search results. In addition Column Space Linear Algebra. مهمي جملي. function 105. med 80. Lolita 1980 2020-11-01 · An Application of Linear Algebra to Image Compression 51 Ta bl e 2 Compression results for Desert.jpg, 1024 × 768, 826Ko, by using: Matlab’s SVD function [ 14 ] Proposed method Math · Linear algebra then we had another vector that was popping out of the plane like that and we were transforming things by taking the mirror image across Image Processing through Linear Algebra Dany Joy Department of Mathematics, College of Engineering Trivandrum danyjoy4@gmail.com ABSTRACT The purpose of this project is to develop various advanced linear algebra techniques that apply to image processing. With the increasing use of computers and digital photography, being able to manipulate digital Introductory course in Computational Physics, including linear algebra, eigenvalue problems, differential equations, Monte Carlo methods and more. Node Sylvester ⭐ 144 🐱 Sylvester is a vector, matrix, and geometry library for JavaScript, that runs in the browser and on the server. I liked how linear algebra is applied. Nice Jerzy George May 12, 2011 at 4:40 AM. It is interesting how you can create a mirror image in three-dimensional space The dimension of a vector space V is the size for that vector space written: dim V. Linear Algebra - Rank Articles Related Dimension Lemma If U is a subspace of W then D1: (or ) and D2: if then Example: Se hela listan på rubikscode.net Se hela listan på machinelearningmastery.com 3.1 Image and Kernal of a Linear Trans-formation Definition. Image The image of a function consists of all the values the function takes in its codomain. If f is a function from X to Y , then image(f) = ff(x): x 2 Xg = fy 2 Y: y = f(x), for some x 2 Xg Example. Vem uppfann matematik Correlation Functions in Integrable Theories - CERN 1-32, Fall 2006 Editor-in-Chief: Hans Joachim Werner hjw.de@uni-bonn.de Department of Statistics Faculty of Economics, University of Bonn Adenauerallee 24-42, D-53113 Bonn, Germany Editor-in-Chief: Jane M. Day Keywords: Graphical Linear Algebra Calculational Proofs Diagram-matic Language Galois Connections Relational Mathematics 1 Introduction This article is an introduction to Graphical Linear Algebra (GLA), a rigorous diagrammatic language for linear algebra. Its equational theory is known as the theory of Interacting Hopf Algebras [10,25], and it In Section 2.3, we encountered the basics of linear algebra and saw how it could be used to express common operations for transforming our data.Linear algebra is one of the key mathematical pillars underlying much of the work that we do in deep learning and in machine learning more broadly. 2020-07-08 The Bulletin of the International Linear Algebra Society IMAGE Serving the International Linear Algebra Community Issue Number 46, pp. 1-48, Spring 2011 Editor-in-Chief: Jane M. Day, Dept.
{"url":"https://londzmoixp.netlify.app/10780/32763","timestamp":"2024-11-11T08:21:06Z","content_type":"text/html","content_length":"18409","record_id":"<urn:uuid:9fce9653-c8b1-4c1e-9d1d-66ab0d486da5>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00807.warc.gz"}
Investigation of insertion tableau evolution in the Robinson-Schensted-Knuth correspondence Robinson-Schensted-Knuth (RSK) correspondence occurs in different contexts of algebra and combinatorics. Recently, this topic has been actively investigated by many researchers. At the same time, many investigations require conducting the computer experiments involving very large Young tableaux. The article is devoted to such experiments. RSK algorithm establishes a bijection between sequences of elements of linearly ordered set and the pairs of Young tableaux of the same shape called insertion tableau and recording tableau . In this paper we study the dynamics of tableau and the dynamics of different concrete values in tableau during the iterations of RSK algorithm. Particularly, we examine the paths within tableaux called bumping routes along which the elements of an input sequence pass. The results of computer experiments with Young tableaux of sizes up to 108 were presented. These experiments were made using the software package for dealing with 2D and 3D Young diagrams and tableaux. Full Text Robinson-Schensted-Knuth (RSK) algorithm also known as Robinson- Schensted-Knuth correspondence which maps permutations to the pairs of Young tableaux, plays an important role into various combinatorial problems. The combinatorics of Young diagrams and Young tableaux including RSK algorithm, finds numerous applications in physics, mathematics and informatics [1]-[3]. RSK correspondence can be easily generalized from the case of permutations to the case of infinite sequences of linearly ordered set. In such instance, an insertion tableau is a semi-standard Young tableau filled by elements of this ordered set. This implies that the RSK algorithm is applicable to a sequence of random independent values uniformly distributed over the interval [0, 1], i.e. to the Bernoulli scheme. A correspondence between two dynamical systems such as Bernoulli shift and iterations of Schützenberger transformation was built in [4]. Later [5] it was proved that this correspondence is isomorphism. It was also proved there that the first element of an infinite sequence of uniformly distributed random values can be unambiguously restored only by the limit angle of inclination of Schützenberger path of a recording tableau. In practice, we are interested in the restoration of the first element of a finite sequence. Unlike the case of infinite sequences, we also need an insertion tableau in addition to a recording tableau to restore the first element. Since tableau changes during every iteration, the investigation of tableau evolution properties is also important for studying the algorithms of restoration of an entire sequence. The results of computer experiments related to the estimation of the first element value in a finite segment of an infinite sequence using tableau are given in [6]. The subject of this article is to examine how tableau changes during RSK insertions. 2. Definitions Young diagrams are popular combinatorial structures which correspond to integer partitions. There are many ways to present a Young diagram. Particularly, in this paper we define it by so-called French notation as leftjustified and bottom-justified finite set of square boxes (see Figure 1 (a)). y 12 v y x x 0 12 -2 1. French notation u 0 2 2. Russian notation Figure 1. An example of a Young diagram Another way of presenting Young diagrams called Russian notation is shown in Figure 1 (b). The Russian notation was proposed by Vershik and Kerov [7] and is derived from the French notation by rotating the axes 45 degrees counterclockwise. Note that the diagram in Figure 1 (b) is normalized in such a way that the total area of boxes is 1. This notation is used in many papers because it makes studying the Plancherel measure much easier. It is convenient to consider Young diagrams as vertices of infinite oriented graded graph called the Young graph. In this graph, edges connect diagrams which differ in one box. If the edge connects a diagram of the size with a diagram +1 of the size + 1, then +1 can be obtained from by adding a single box. If we assign to each edge a certain transition probability, a Markov process will be defined on the graph. The most important class of such processes is the class of central processes for which the probabilities of different paths between a fixed pair of diagrams are equal. A complete description of all central processes on the 2D Young graph was obtained by Vershik in [8]. The only central process on the Young graph with o() speed of growth along the axes is called the Plancherel process. This process and explicit formulas of its transition probabilities are described in [8]. The limit shape of the Plancherel process called the Vershik- Kerov-Logan-Schepp (VKLS) limit shape [7] is given by the formula: ⎧{ 2 ( arcsin() + √ 1 - 2), || ⩽ 1, (1) { = ⎨ ⎩||, || ⩾ 1, where is a coordinate in the Russian notation. A Young tableau is a Young diagram filled by values increasing in rows and columns. These values can be elements of an arbitrary linearly ordered set. Wherein we say that is a shape of . A standard Young tableau (SYT) is a Young diagram filled by integers [1, ], > 0 which grow strictly in rows and columns. It is easy to see that a Young tableau corresponds to a path on the Young graph. The numbers in tableau set the order of adding the boxes when walking from the root of the graph. A semistandard Young tableau (SSYT) is a Young tableau with values strictly increasing in columns and weakly increasing in rows. In addition to the finite Young tableaux consisting of boxes, infinite tableaux can be considered as well. By infinite Young tableau we mean + a mapping ∶ ℤ2 ⇒ ℕ such that for the fixed , ∈ ℕ the values , and ,, ∈ ℕ grow strictly. These infinite tableaux are also called enumerations ℤ of the integer lattice 2 . For the case of SYT or SSYT, some integers may be + ℤ missing, i.e. the corresponding mapping 2 + → ℕ is not necessary bijective. In this research we consider SYT filled by integers and SSYT filled by real numbers belonging to the interval [0, 1]. 3. Robinson-Schensted-Knuth algorithm RSK algorithm establishes a bijection between a set of permutations of distinct integers and a set of pairs of standard Young tableaux of size of the same shape. These tableaux are called insertion tableau and recording tableau . At the beginning, the first value of a permutation is put into the empty tableau and 1 is inserted in the tableau . In each step of the algorithm, the next value of permutation is being compared with values of the first column of . If exceeds all these values, it is being put on the top of the first column. Otherwise, it replaces the closest larger value of the first column. The replaced value is being bumped in the second column and being processed in the same way. This process continues until a certain value is put on the top of a column at position (, ). Finally, the index of a processed value is being put into tableau at (, ). So, and tableaux are supported to have the same shape. The algorithm finishes when all the values of a permutation are processed. Note that above steps can be performed in reverse order, i.e. a permutation can be constructed from a pair of Young tableaux of the same shape. Such a procedure is called reverse RSK algorithm. Also, RSK algorithm is applicable to any ordered sequences such as sequences of integer or real values. RSK algorithm defines two equivalence relations on a set of permutations. The permutations are called Knuth-equivalent if they correspond to the same tableau and dual Knuth-equivalent if they correspond to the same tableau . Another Donald Knuth’s definition of these equivalence classes directly in terms of permutations is given in [9]. Some interesting properties of Knuth-equivalent and dual Knuth-equivalent permutations were investigated in [6]. 4. Visualization of Plancherel tableaux In order to study the properties of the RSK algorithm, it is of interest to examine how the shape of tableau changes in time. The evolution of tableau has a simple description: it is proved by Donald Knuth that RSK transforms an uniformly distributed random sequence in a pair of Plancherel-distributed Young tableaux. Therefore, tableau grows as a tableau in a Markov process which generates the Plancherel measure. There exists an interesting way to visualize a Young tableau in the 3D space proposed by A. M. Vershik. Consider a function on the set of boxes of the corresponding Young diagram. The values of this function are the numbers within the corresponding boxes. A Young tableau can be represented as a 3D graph of this function. For the Plancherel tableaux, with a rise of their sizes this graph tends to a surface which can be described as follows. Consider a set of positively directed rays on the 2D plane emanating from the origin. Each ray intersects with the VKLS limit shape (1) at some point (′, ′). For a point ( ⋅ ′, ⋅ ′) which lies on this ray, = 2. So, this surface intersects with the planes containing axis along parabolas which touches the coordinate plane , in the origin. This property completely characterizes the surface. An example of such a visualization for the tableau of size 105 is shown in Figure 2 (a). Note that the coordinates (, ) are divided by √. A very similar picture can be produced by generating a Markov chain of Plancherel distributed Young diagrams. Figure 2 (b) demonstrates how the number of added box divided by 105 depends on normalized coordinates of this box in a Young tableau. 1 0.9 0.8 Number 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.5 00 0.25 1.25 0.75 1 x/Ö`n 1.5 1.75 2 0 0.5 0.25 1 0.75 1.5 1.25 y/Ö`n 2 1.75 1 0.9 Adding No./Ö`n 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.5 00 0.25 1.25 0.75 1 x/Ö`n 1.5 1.75 2 0 0.5 0.25 1 0.75 1.5 1.25 y/Ö`n 2 1.75 (a) (b) Figure 2. (a) Values of boxes in tableau after 105 RSK iterations; (b) Positions of added boxes in Plancherel process after adding 105 boxes 5. Bumping forest Each time a new value comes to the input of the RSK algorithm, it bumps a certain element in the first column of tableau and takes its position. Then, the bumped element bumps another element in the second column and so on. A bumping route is a sequence of all boxes bumped in a single RSK iteration. A bumping route is defined for each position in the first column. Bumping routes were presented in [10]. Also a problem of hydrodynamic description of bumping routes was raised there. The limit behaviour of bumping routes including explicit formula of their limit shape was described in [11]. The possible use of bumping routes to speed up the RSK algorithm was discussed in [12]. In the current research we have constructed all the bumping routes for tableau of size 108. Some of them are shown in Figure 3. 20000 18000 16000 14000 12000 10000 8000 6000 4000 2000 0 0 2000 4000 6000 8000 10000 12000 14000 16000 18000 20000 Figure 3. Some bumping routes of tableau A bumping tree is a set of bumping routes converging into a single box. A bumping forest is a union of all bumping routes. Figure 4 (a) demonstrates an example of a Young tableau and its bumping forest. The bumping forest itself is illustrated in Figure 4 (b). 100 98 97 94 93 90 89 74 96 72 86 59 82 58 67 79 52 65 78 39 43 66 76 95 44 64 71 75 84 87 99 (a) (b) Figure 4. (a) A Young tableau and its bumping forest; (b) A bumping forest 6. Dynamics of insertion tableau Along with studying the dynamics of the entire tableau, we are also interested in investigating the dynamics of concrete values in tableau . Here we discuss the results of our computer experiment dedicated to analysis of motion of these values within a semi-standard Young table filled by random real numbers from the interval [0, 1]. The idea of this algorithm is as follows. Firstly, we construct tableau of size . Next, the observed value is fed to the input of RSK. Then, we observe how the position of changes while RSK processes next - values. Each trajectory is close to a Vershik-Kerov-Logan-Schepp limit shape (1). The results of this experiment is illustrated in Figure 5. We examined trajectories of 9 different numbers: = [0.1, 0.2, … , 0.9]. The horizontal curves are trajectories of different . Black points are the final positions of for / = [0.1, 0.3, 0.5, 0.7, 0.9], = 107. It is easily seen from the Fig. 5 that the dynamics of motion of different values in RSK looks very similar. The average dynamics of a certain can be obtained by rescaling the unique average motion dynamics of = 1. Note that, with a rise of , the motion of these value continues until they eventually reach the coordinate plane. Unfortunately, this process often takes a really huge amount of RSK iterations what makes it hard to simulate it using available computation power. 0.9 0.8 0.7 0.6 0.5 z 0.4 0.3 0.2 z=0.9 z=0.8 z=0.7 z=0.6 z=0.5 z=0.4 z=0.3 z=0.2 z=0.1 2 1.8 1.6 1.4 0.1 0 0 0.2 0.4 0.6 0.8 1 x 1.2 1.4 1.6 0 1 0.8 0.6 0.4 0.2 1.2 y Figure 5. Evolution of random values in RSK algorithm 7. Conclusions The results of numerical experiments presented in this article demonstrate two types of dynamics in an insertion tableau of the Robinson- Schensted-Knuth algorithm. The first investigated dynamics is a modification of tableau after a single RSK iteration when a new value moves along a certain path called a bumping route. The second dynamics is related to the motion of the concrete value during many RSK iterations. Numerical experiments for studying these dynamics are presented in this paper. About the authors Saint Petersburg Electrotechnical University “LETI” Author for correspondence. Email: vsduzhin@etu.ru assistant of Department of Algorithmic Mathematics 5, Professora Popova St., St. Petersburg 197376, Russian Federation 1. N. O’Connell, “A path-transformation for random walks and the Robinson - Schensted correspondence,” Transactions of the American Mathematical Society, vol. 355, no. 9, pp. 3669-3697, 2003. eprint: www.jstor.org/stable/1194859. 2. D. Dauvergne, “The Archimedean limit of random sorting networks,” 2018. arXiv: arXiv:abs/1802.08934 [math.PR]. 3. O. Angel, A. E. Holroyd, D. Romik, and B. Virág, “Random sorting networks,” Advances in Mathematics, vol. 215, no. 2, pp. 839-868, 2007. doi: 10.1016/j.aim.2007.05.019. 4. S. V. Kerov and A. M. Vershik, “The characters of the infinite symmetric group and probability properties of the Robinson-Schensted-Knuth algorithm,” SIAM J. Algebraic Discrete Methods, vol. 7, no. 1, pp. 116- 124, 1986. doi: 10.1137/0607014. 5. D. Romik and P. Śniady, “Jeu de taquin dynamics on infinite Young tableaux and second class particles,” Annals of Probability: An Official Journal of the Institute of Mathematical Statistics, vol. 43, no. 2, pp. 682- 737, 2015. doi: 10.1214/13-AOP873. 6. N. N. Vassiliev, V. S. Duzhin, and A. D. Kuzmin, “Investigation of properties of equivalence classes of permutations by inverse Robinson - Schensted - Knuth transformation [Issledovaniye svoystv klassov ekvivalentnosti perestanovok s pomoshch’yu obratnogo preobrazovaniya Robinsona],” Informatsionno-upravliaiushchie sistemy [Information and Control Systems], no. 1, pp. 11-22, 2019, in Russian. DOI: 10.31799/ 1684-8853-2019-1-11-22. eprint: https://elibrary.ru/item.asp? id=36930159. 7. A. M. Vershik and S. V. Kerov, “Asymptotic of the largest and the typical dimensions of irreducible representations of a symmetric group,” Functional Analysis and Its Applications, vol. 19, no. 1, pp. 21-31, 1985. doi: 10.1007/BF01086021. 8. A. M. Vershik and S. V. Kerov, “Asymptotic theory of characters of the symmetric group,” Functional Analysis and Its Applications, vol. 15, no. 4, pp. 246-255, 1981. doi: 10.1007/BF01106153. 9. G. E. Andrews, The Theory of Partitions, ser. Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge University Press, 1984. doi: 10.1017/CBO9780511608650. 10. C. Moore. (2006). Flows in young diagrams. online resource, [Online]. Available: http://tuvalu.santafe.edu/~moore/gallery.html. 11. D. Romik and P. Śniady, “Limit shapes of bumping routes in the Robinson-Schensted correspondence,” Random Structures & Algorithms, vol. 48, no. 1, pp. 171-182, Sep. 2014. doi: 10.1002/rsa.20570. 12. V. Duzhin, A. Kuzmin, and N. Vassiliev, “RSK bumping trees and a fast RSK algorithm,” in International Conference Polynomial Computer Algebra ‘2019; St. Petersburg, April 15-20, 2019 / Euler International Mathematical Institute, Ed. by N. N. Vassiliev, VVM Pubishing, 2019. eprint: https://elibrary.ru/item.asp?id=41320890.
{"url":"https://journals.rudn.ru/miph/article/view/22914/en_US","timestamp":"2024-11-06T16:41:06Z","content_type":"application/xhtml+xml","content_length":"98509","record_id":"<urn:uuid:5269ca28-650d-49c7-bf90-ebdf6cb903de>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00582.warc.gz"}
Balancing Abstract Chemical Equations with One Kind of Atom This Demonstration solves chemical-like equations with one kind of atom, represented by the letter . An expression like can be thought of as a "molecule" of atoms of . For the equation to be balanced , the counts of the atoms on both sides must be the same. Balancing the equation is equivalent to solving the Diophantine equation , where parameters , , and are positive integers, and the solution s hould be in non-negative integers , and positive . The Diophantine equation , where , are positive and with a solution in non-negative integers, is a Frobenius equation. The largest for which the equation has no solution is called the Frobenius numbe r. So if is the Frobenius number of the equation, then the Frobenius equation has solutions for all . The problem of balancing the chemical equation is reduced to solving the Frobenius equations for , where is the smallest number for which and is as small as possible.
{"url":"https://www.wolframcloud.com/objects/demonstrations/BalancingAbstractChemicalEquationsWithOneKindOfAtom-source.nb","timestamp":"2024-11-12T02:13:11Z","content_type":"text/html","content_length":"244937","record_id":"<urn:uuid:06a69697-1252-457d-879b-44a309376f29>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00232.warc.gz"}
DOUBLE[(M,D)] [SIGNED | UNSIGNED | ZEROFILL] DOUBLE PRECISION[(M,D)] [SIGNED | UNSIGNED | ZEROFILL] REAL[(M,D)] [SIGNED | UNSIGNED | ZEROFILL] A normal-size (double-precision) floating-point number (see FLOAT for a single-precision floating-point number). Allowable values are: • -1.7976931348623157E+308 to -2.2250738585072014E-308 • 0 • 2.2250738585072014E-308 to 1.7976931348623157E+308 These are the theoretical limits, based on the IEEE standard. The actual range might be slightly smaller depending on your hardware or operating system. M is the total number of digits and D is the number of digits following the decimal point. If M and D are omitted, values are stored to the limits allowed by the hardware. A double-precision floating-point number is accurate to approximately 15 decimal places. UNSIGNED, if specified, disallows negative values. ZEROFILL, if specified, pads the number with zeros, up to the total number of digits specified by M. REAL and DOUBLE PRECISION are synonyms, unless the REAL_AS_FLOAT SQL mode is enabled, in which case REAL is a synonym for FLOAT rather than DOUBLE. See Floating Point Accuracy for issues when using floating-point numbers. For more details on the attributes, see Numeric Data Type Overview. CREATE TABLE t1 (d DOUBLE(5,0) zerofill); INSERT INTO t1 VALUES (1),(2),(3),(4); SELECT * FROM t1; | d | | 00001 | | 00002 | | 00003 | | 00004 |
{"url":"https://docs.w3cub.com/mariadb/double/index","timestamp":"2024-11-12T19:33:47Z","content_type":"text/html","content_length":"7932","record_id":"<urn:uuid:c15fc929-ea53-4593-b7ef-96591b02e43a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00242.warc.gz"}
An explicit representation for disappointment aversion and other betweenness preferences Theoretical Economics 15 (2020), 1509–1546 An explicit representation for disappointment aversion and other betweenness preferences Simone Cerreia-Vioglio, David Dillenberger, Pietro Ortoleva One of the most well-known models of non-expected utility is Gul (1991)'s model of Disappointment Aversion. This model, however, is defined implicitly, as the solution to a functional equation; its explicit utility representation is unknown, which may limit its applicability. We show that an explicit representation can be easily constructed, using solely the components of the implicit one. We also provide a more general result: an explicit representation for preferences in the Betweenness class that also satisfy Negative Certainty Independence (Dillenberger, 2010) or its counterpart. We show how our approach gives a simple way to behaviorally identify the parameters of the representation and to study the consequences of disappointment aversion in a variety of applications. Keywords: Disappointment Aversion, Betweenness, Cautious Expected Utility, utility representation JEL classification: D80, D81 Full Text:
{"url":"https://econtheory.org/ojs/index.php/te/article/viewArticle/20201509/0","timestamp":"2024-11-02T14:27:24Z","content_type":"text/html","content_length":"4384","record_id":"<urn:uuid:12bcb11c-5b12-4087-9302-5ff84019db3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00498.warc.gz"}
NeurIPS 2019 Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center Reviewer 1 The work is amazingly well-written and easy to follow. The work does a pretty good job of introducing the required logic background for understanding the rest of the work (I was not introduced to the notions and terminology before and I found it very easy to follow). As a researcher in both fields, I find this new direction very much and appreciate the work's significance. However, from a practitioner's point of view, the main question is the applicability of the introduced algorithm. The algorithm requires computations up to the order of the feature space size (which is intractably large even for the simplest problems). I am leaning towards an acceptance score due to its novelty but the actual contribution and its usefulness in practice should become more clear. Reviewer 2 This paper address an intuition that has been present in the literature for some time, but has not been formalized or published that i am aware of. Many papers hint at the duality between adversarial examples and explanations, but by formalizing these notions and proving the duality between them, the authors make an important contribution to the literature, the paper is quite dense, and spends most of its time on theoretical proofs and justifications for the relationship between adversarial examples. It may find a wider readership if it allocates some more space to introducing the concepts it employs, even such simple things as "subset-minimal" may not be widely known in the adversarial community. The final section focusing on the experimental result making use of a subset of a binarized version of MNIST is compelling, and makes the theoretical work significantly easier to grasp. The running example of the restaurant running problem, also helps illustrate the theorems presented, but the paragraph describing the specifics of the problem is probably not needed. Overall i think this paper is an important contribution to the research literature, but could be made more approachable and subsequently reach a broader audience with some light rewriting with a focus on making more accessible. Reviewer 3 Overall Comments First I should state upfront that my expertise is not in first order logic, so it is somewhat difficult to assess this paper. I am familiar with the literature on adversarial examples and explanations though. My main high level point in this work is that I am missing a so what, i.e., what is the significance of demonstrating the duality between explanations and adversarial examples. In addition, there has been intense empirical work showing that explanations can be used to craft adversarial examples and adversarial examples might be good instances for producing explanations. Originality The main point of this paper has been demonstrated empirically in prior work. The authors present a set of theorems based on formal logic demonstrating a duality between adversarial examples and explanations. In terms of the theorems, I am not familiar with the formal logic literature , though in looking at citations [21-23], it does seem that these theorems are new. Clarity The work is reasonably well written and free of typos. Several of the key formal logic terms are also defined and clarified. Theorem 1 was clearly stated and the proof clarified in the text. A proof sketch was also provided for theorem 2. Significance In looking at the prior work, citations [21-23], it seems the theorems here are new. In general, there have been previous connections made between explanations and adversarial examples: https://www.aclweb.org/anthology/P18-1176, and https://homes.cs.washington.edu/~marcotcr/acl18.pdf, however these were not formal. In fairness, it is hard for me to assess the significance of this work since it seems like the key insight is bringing the FOL point of view to clarify the relationship between adversarial examples and explanations. Some Issues - The MNIST Example. I sense this is probably the wrong example to use to show the power of your analysis. It seems like MNIST is high-dimensional for the logic based models or decision set type framework. It would've been more powerful for me, if the paper had shown their results on lower dimensional dataset with enough categorical variables to show the power of this work. The current MNIST example feels toyish. - Can the authors further clarify the implications of the duality that they motivate in this work? UPDATE I have read the author rebuttal and feel the authors provided justification for their approach and why this work is important. I support that this work be accepted.
{"url":"https://papers.nips.cc/paper_files/paper/2019/file/7392ea4ca76ad2fb4c9c3b6a5c6e31e3-Reviews.html","timestamp":"2024-11-05T06:39:16Z","content_type":"text/html","content_length":"6115","record_id":"<urn:uuid:df7bbd77-3598-452e-ad72-ffad7a90ba6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00338.warc.gz"}
Aging Tokens One of the distinguishing features of APN is its implementation of the aging token concept. Originally introduced in 2004, aging tokens allow additional flexibility in providing tokens with some memories about their past. As always, there is an issue of a balance between modeling power and simplicity. Token's age is simply a value \(\eta\in \left[0,1\right]\) that represents this past in a simplified fashion. This value affect the firing time of a transition. Let us consider a situation when there are two tasks needed to be accomplished sequentually within a given amount of time. We will refer collectively to the instance of the problem as a "case" alluding to a workflow problem for specificity. We would like to know whether this could be achieved successfully or not. First, let us look at a model that does not utilize aging tokens. Figure 1: APN model for two sequential tasks without aging Transition T5 represents a global "clock": when the alotted time expires, the case token in the Start place moves to the End place, which triggers immediate transitions T3 and T4 and forces the case token to move into the Failure place. The source file for this APN model can be found here. While this "global" solution is workable for this simple scenario, there are difficulties in extending its applicability to realistic applications. For example, we might be interested in several cases occuring in parallel, where each instance represented by a case token would need to accomplish two tasks, so the considered network is a fragment of a larger network. In this situation, we would have to enforce some synchronization ensuring that tokens in the Task 1 and Start places arrive at the same time, and would only be able to treat one token at a time in this subnet. To treat multiple cases, we would need multiple subnets each dedicated to a single case at a time. Let us contrast this with the model that uses aging tokens. Here the T3 transition is selected to be aging for the Task 1 place (in APN software one needs to bring up the place property dialog, and click on aging transition button that enables the selection of the aging transtion). The main idea is that when the first task is completed and the token moves to the Task 2 place, its age is used to ensure that the delay for the T4 transition is adjusted to account for the time spent in the Task 1 place. For deterministic (fixed) delays the interpretation is more straightforward, so we will consider it first (the source file can be found here). Figure 2: APN model for two sequential tasks with aging If the aging transition (T3 in this case) has a fixed delay \(\tau\), the age \(\eta\) is simply the fraction of the time it was enabled \(t_e\)for this token as compared to that aging transition delay: \(\eta=t_e/\tau\). For the T4 transition we can select an age-dependent fixed type of a transition with the value the same as T3. There are situations where it is beneficial to have a fixed delay that is independent of the age of the token, and then fixed delay can be selected. Let us consider specific values for the transitions: \(\tau_1=1\) and \(\tau_2=\tau_3=\tau_4=2\). At time \(t= 0\), the age of the token is zero. At \(t_1=1\), T1 fires and \(\eta=1/2=0.5\). At this point both T2 and T4 are enabled for this token. T2 is scheduled to fire at time \(t_2=t_1+\tau_2=1+2=3\), while T4 is scheduled to fire at time \(t_3=t_1+(1-\eta)\tau_2=1+2\cdot 0.5=2\), so T4 fires the token first at \(t_3=2\) and the token moves to the Failure place. Next, let us consider transitions with probabilistic delays that follow a cumulative distribution functions \(F_1(t),F_2(t),F_3(t) \) for the T1, T2, and T3 transitions respectively (T4 will have the same distribution as T3). Let us consider the situation when T1 fires the token at \(t_1\). Since the token was aging by the T3 transition, its age upon firing is \(F_3(t_1)\). This ensures that the T4 transition (that also have \(F_3(t)\) distribution) fires at the same time as the T3 transition would fire if the token stayed in the Task 1 place. Since aging is an intrinsic "local" property of a token, no additional synchronization is required, and multiple tokens can populate the same net and age in accordance with their own schedule. Figure 3: APN model for warm spares In addition to simply continuing the same process for the same token in a new place wheere it was left off in the old place, the aging allows to account for different pace of aging in a consistent fashion. Let us consider a warm spare scenario that is quite common in reliability: the spare is aging, but at a slower rate than an active component. The corresponding APN model is shown in Figure 3. The source file for this APN model can be found here Here T1 is the aging transition for the Warm Spare place, and T2 is an immediate transition (implemented in APN by selecting a fixed transition with a negligible delay, e.g., \(\epsilon=1\times 10^{-6}\)). An inhibitor of multiplicity one disables the T2 transition as long as the token representing the active component is in the Active place. When the active component fails (let us say at time \(t_1 \)), the T2 transition becomes enabled and fires the spare component token into the Active place. The calculation of the corresponding age \(\eta=F_1(t_1)\) and equivalent time \(t^*=F_3^{-1}(\eta)=F_3^{-1}(F_1(t_1))\) are depicted graphically in Figure 4. Figure 4: Conversion of the equivalent time \(t^*\) In Figure 4 both \(F_1 \) and \(F_3 \) are Weibull distributions \(F(t)=1-\exp{\left[-(t/\theta)^\beta \right]}\) with the scales \(\theta_1=2, \theta_3=1 \) and shapes \(\beta_1=2, \beta_3=3 \), respectively. The process of using the aging variable as the current value of the cumulative distribution function is applicable to any distribution and allows muliple jumps from one distribution to One of the attributes of a transition in APN is an age adjustment factor \(a\in \left[0,1\right]\), which multiples the age of a fired token (the transition does not need to be aging). Setting this adjustment factor to zero allows for a renewal of the fired token's age. Aging tokens provide a convenient means to enforce different queueing policies. Let us demonstrate how First-in-First-out (FIFO) policy can be implemented. Let us consider a simple queue with abandonment: if a customer does not receive a service after a specified threshold time \(\tau_{max}\), she abandons the queue. The example is selected to emphasize the sensitivity to the selection policy (aggregate statistics are often insensitive to queuing policies). The model is shown in Figure 5. Figure 5: APN model of a queue with abandonment Here the T1 transition controls the customer's arrival. The number of servers is controlled by the inhibitor for the T3 transition, let us consider a single server, so the inhibitor has multiplicity one. The T2 transition has a fixed delay \(\tau_{max}\). Both arrival (T1) and service (T4) follow exponential distributions. Transitions T5 and T6 have fixed delay \(\epsilon=1\times 10^{-6}\). As a result, customer tokens pause momentarily in the Abandonded or Completed place, respectively. Both places have sensors (denoted in APN with the solid square in the lower right corner) that allow to register each "hit", thus measuring the rates of abandonining or completing the service, respectively. We will also measure the mean number of customers in service and in the queue. In order to implement the FIFO policy we will make the T2 transition aging for the Queue place, and select the age-dependent fixed delay \(\epsilon=1\times 10^{-6}\) for the T3 transition. This will ensure that when the server is free a token that stayed in the Queue place the longest (and therefore has the largest age) is fired first. Individial tokens have their unique IDs displayed (this is an option in APN), so during the animation one can visually track whether FIFO is observed. Figure 6: Abandonment rate for FIFO and Random policies, \(\rho=0.8\) For comparison we also consider a random order for selecting the customers. To implement this policy we can switch off the aging for the Queue place, and select a very fast exponential delay, say \(1 /\epsilon=1\times 10^{6}\) for the T3 transition (since T3 is an exponential transition, whether the tokens age or don't does not matter). You can download the source files for the FIFO and random policies, and also the Excel spreadsheet that can control the model parameters for both models (Simply link the XLS file from the parameters menu in APN, and after you linked it once, you can make the changes to the XLS spreadsheet, save it, and then click on the "Update from Linked File" from the parameter menu in APN to automatically update all the model parameters). Figure 7: Queue size for FIFO and Random policies, \(\rho=0.8\) First let us consider a "subcritical" case: \(\lambda=0.8, \mu=1\), so that \(\rho=\lambda/\mu=0.8\) and \(\tau_{max}=10\). Figure 6 shows the benefits of FIFO policy in terms reducing the number of abandoments by more than a half as compared to random. This is expected, since being a "fair" policy, FIFO ensures a more uniform waiting times, while for random policy there will be a larger portion of customers that reach the \(\tau_{max}\) of waiting and leave. There is a penalty, however as observed in Figure 7, as the average time in the queue for FIFO is higher than for random (the \(\tau_ {max}\) limit is used more efficiently by FIFO). Here the steady state value of expected queue is 2.27 vs. 1.787 customers for FIFO vs. random policies, respectively. The results reported here are based on ten million replications, with the steady-state value evaluated as the time-average value for the last 10% of the simulation time (simulation time is 100). This might be an acceptable trade-off if the leaving customers are costly. Figure 8: Abandonment rate for FIFO and Random policies, \(\rho=1.5\) Figure 9: Queue size for FIFO and Random policies, \(\rho=1.5\) It is instructive to consider the same problem in "super-critical" regime where \(\rho>1\) and the system is stable only due to the abandonment effect. Let us consider \(\lambda=\rho=1.5\), while keeping the rest of the parameters the same as before. The simulation time is also the same, but we will use the results from 1 million replications that provide sufficient accuracy here (as compared to the previous example, the number of abandonment is larger, so fewer replications are needed to estimate it). Figure 8 shows the abandonment rates, which are much closer to each other after the warm-up period. In this regime the benefit of FIFO is very small in the long run, while, as seen in Figure 9, the penalty on waiting remains significant (the steady state values are of expected queue is 13.05 vs. 8.911 customers for FIFO vs. random policies, respectively). Effectively, in this scenario the customers wait longer in vain.
{"url":"https://volovoi.com/fullapn/aging.html","timestamp":"2024-11-11T15:58:07Z","content_type":"text/html","content_length":"29679","record_id":"<urn:uuid:66d22327-bf8d-4140-821f-5314022ad59a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00116.warc.gz"}
Rules Blockly - Persistence # Persistence # Introduction Persistence blocks enable access to and manipulation of historical data stored by the default persistence service. For more information on persistence, the default service, and its configuration see the persistence documentation(opens new window). The date-blocks shown in this section are described previously in Date handling blocks(opens new window). # Overview of the Persistence blocks # Persistence Blocks Persistence blocks enable access of historical data stored by the default persistence service. For more information on persistence, the default service, and its configuration see the persistence documentation(opens new window). The date-blocks shown in this section are described in Date handling blocks(opens new window). More about that topic can be viewed at Using Persistence data(opens new window) # Get statistical value of an item Function: computes any of the below functions for the given item since the time provided by ZonedDateTime-Block openHAB supports history and future values. A typical example for future values is a weather forecast. Due to adding future values in openHAB the amount of attributes has been vastly increased and the names had to be renamed to become more specific in terms of historic and future states. Important: Due to a breaking change of the internal methods in openHAB 4.2, Blockly rules that use persistence methods need to be migrated once. This does not happen automatically but needs to be done opening the blockly rule once and re-save it. Blockly then automatically rewrites the rule to be compatible. The following values are available as historic and future representations. • persisted state: gets the persisted state at a certain point in time • average: gets the average value of the State of a persisted Item since a certain point in time. This method uses a time-weighted average calculation • delta: gets the difference in value of the State of a given Item since a certain point in time • deviation: gets the standard deviation of the state of the given Item since a certain point in time • variance: gets the variance of the state of the given item since a certain point in time • evolution rate: gets the evolution rate of the state of the given Item in percent since a certain point in time (may be positive or negative) • minimum: gets the minimum value of the State of the given Item since a certain point in time • maximum: gets the maximum value of the State of the given Item since a certain point in time • sum: gets the sum of the State of the given Item since a certain point in time In the case of the following two functions the block changes its appearance by replacing the time with an option to chose if the equal value should be skipped or not: • previous state value: Gets the previous state with option to skip to different value as current • next state value: Gets the next state with option to skip to different value as current • previous state numeric value: same as above but directly returns a number without a unit • previous state value time: Gets the time when previous state last occurred with option to skip to different value as current • next state value time: Gets the time for which the next state is available with option to skip to different value as current The persistence dropdown allows to select the persistence storage from which the value should be retrieved. It automatically shows only the storage types that are currently installed on your openHAB Note that not all persistence storage types (i.e. the default rrd4j) support all statistical methods. The skip option set to true allows to search for first state that is different from the current state. Important: This option is not supported by all persistence databases and may throw an error in that case (for example the standard rrd4j does not support it while influxdb does support it). Notes: in case no or 0 values are retrieved, make sure that the item in question is actually persisted. Previous State Example # Check item change / update since a point in time Function: checks if an item was updated or changed since a certain point in time Type: boolean true or false # Provide last updated date of an Item Function: Provides the last updated date (including time) of an Item Type: ZonedDateTime # Return to Blockly Reference
{"url":"https://next.openhab.org/docs/configuration/blockly/rules-blockly-persistence","timestamp":"2024-11-07T23:45:33Z","content_type":"text/html","content_length":"88015","record_id":"<urn:uuid:189752ce-8207-497c-b60a-a80d23b94284>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00749.warc.gz"}
Ideals, Varieties, and Algorithms Fourth Edition, 2015 Contents of Web Page: This book is an introduction to computational algebraic geometry and commutative algebra at the undergraduate level. It discusses systems of polynomial equations ("ideals"), their solutions ("varieties"), and how these objects can be manipulated ("algorithms"). In 2016, Ideals, Varieties, and Algorithms was awarded the Leroy P. Steele Prize for Mathematical Exposition by the American Mathematical Society. The article The Story of Ideals, Varieties, and Algorithms tells how the book came to be written. Fourth Edition The fourth edition was originally published in 2015. A corrected publication appeared in 2018. • Typos known as of August 2016 are available in pdf . These typos were corrected in the 2018 publication. • Typos in the 2018 corrected publication are available in pdf . These typos also appear in the version published in 2015. You can determine which version of the fourth edition you have by looking for "corrected publication 2018" on the copyright page. Third Edition Lists of typographical errors are available for the third edition. There are two lists, depending on which printing you have. Because of the fourth edition, the typo list for the third edition is no longer being updated. • Typos in the first printing corrected in the second printing: pdf or postscript . • Typos present in the first and second printings: pdf or postscript . To find out which printing you have, check the second line from the bottom on the copyright page; the last digit displayed is the printing number. If you have the first printing, you will need to download both lists; if you have the second printing, you only need the second. Second Edition Lists of typographical errors are available for the second edition. Because of the fourth edition, the typo list for the second edition is no longer being updated. There is a separate list for each printing. To find out which printing you have, check the second line from the bottom on the copyright page; the last digit displayed is the printing number. First Edition Lists of typographical errors are also available for the first edition. Because of the fourth edition, the typo list for the first edition is no longer being updated. There is a separate list for each printing. To find out which printing you have, check the third line from the bottom on the copyright page; the last digit displayed is the printing number. A complete solutions manual for Ideals, Varieties, and Algorithms has been written up by David Cox and Ying Li of St. Francis University. The solutions are not posted here because having open access to the solutions would diminish the value of the text. Ideals, Varieties, and Algorithms is a book where you learn by doing. If you are teaching from Ideals, Varieties, and Algorithms or are studying the book on your own, you may obtain a pdf copy of the solutions by sending email to jlittle@holycross.edu. The book describes the computer algebra systems Maple, Mathematica and Sage in some detail. Maple and Mathematica are commercial products, while Sage is freely available. at In addition, here are some other computer algebra programs which can do Gröbner basis calculations: Of these, all are free except for Magma. • Appendix C of first edition of IVA describes the obsolete grobner package for Maple, while subsequent editions describe the Groebner package now used in Maple. • In the 5th printing of the second edition of IVA, a production error caused plus signs to appear as minus signs on many pages. Hence the 5th printing of the second edition is defective. • The first three editions of IVA mention the existence of computer packages for Maple and Mathematica. These packages are no longer supported and are not available. • Our earlier practice of paying $1 US for each new typographical error has been discontinued, though we are always grateful when readers notify us about errors they find in the book. Click here for the web page for our book Using Algebraic Geometry. This book is an introduction to Gröbner bases and resultants, which are two of the main tools used in computational algebraic geometry and commutative algebra. It also discusses local methods and syzygies, and gives applications to integer programming, polynomial splines and algebraic coding theory. The second edition was published by Springer in the summer of 2005. It is available in both hardcover and paperback. The catalog entry for Ideals, Varieties, and Algorithms in the Springer-Verlag on-line catalog contains a brief description of the book and also includes ordering information. You can contact the authors at the following email addresses: dacox@amherst.edu jlittle@holycross.edu doshea@ncf.edu
{"url":"https://dacox.people.amherst.edu/iva.html","timestamp":"2024-11-10T17:44:03Z","content_type":"text/html","content_length":"14751","record_id":"<urn:uuid:3108eb43-f4ad-40e2-b04e-fdae2b42a4a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00278.warc.gz"}
Math-Linux Insights In my previous blog post, Factorials For Fun, I wrote about factorial notation and its amenability to being calculated by iteration and recursion. In this post, I want to relate the use of factorials in permutations, named by John H Conway as arrangement numbers. While there is a factorial-based formula for counting how many permutations, it takes much more discipline to enumerate them all correctly (boring too, if done by hand). This has become such an extensive review, so I will relate in a separate future post factorials with combinations (choice numbers, also named by John H. Conway). A Fundamental Counting Principle The Product Rule: Suppose that a procedure can be split into a sequence of k tasks. If there are n[1] ways to do the first task and for each of these n[1] ways, there are n[2] ways of doing the second task, and generally, for all of these n[1], n[2], …, n[k-1] ways, there are n[k] ways of doing the k^th task, then there are `bb n_1*n_2*…*n_k` ways to do the entire procedure. This principle is applied whenever you are choosing each object from its own group. Suppose your camping wardrobe is made up of 2 pairs of hiking boots, 3 vests, 6 shirts, 1 hat and 3 jackets. How many different outfits are possible? Since each item belongs to its own group, we have: `bb 2*3*6*1*3 = 108` different outfits. A permutation of n objects is an ordered arrangement of n distinct items (objects) in a row. Consider 3 coins, such as a Nickel, Dime, and Quarter. The Nickel (N), Dime (D) and Quarter (Q), in that order, is a single permutation of the three coins. The entire list of all possible permutations of the three coins is here: {N, D, Q}, {N, Q, D}, {Q, N, D}, {D, N, Q}, {D, Q, N}, {Q, D, N} [1] This list of permutations has a count of 6. This is equivalent to 3⋅2⋅1 arrangements. This is because the first coin can be any one of 3 denominations (3 ways). Once chosen, the second coin can be any one of the remaining 2 denominations (2 ways). So for the first two positions, there are 3⋅2 ways. Once the second coin is chosen, the third coin is whatever denomination remains and is chosen (1 way). So for the first three positions, there are 3⋅2⋅1 ways. Because each position is filled from what is sequentially available (remaining) among the 3 coins, this is called selection without replacement. Question: How many permutations (arrangements) of n distinct objects are possible (without replacement)? Value of k: 1 2 3 k (n-1) n Ways to choose k: n (n-1) (n-2) . . . (n – k + 1) . . . 2 1 [Table 1] Row position: — ——- —— ————- — — In the first position, we can choose any of the n objects. Choose one. In the second position, since one object was already chosen, we can choose any of the (n-1) objects. Choose one. In the third position, since two objects have already been chosen, we can choose any of the (n-2) objects. Choose one. In this same way, in the k^th position, we can choose (n – k + 1) objects since k objects have already been selected. Choose one. Finally, for the last position, since all but one, namely (n-1) objects have been chosen, there is only 1 object left to choose and put in the last position. Choose it and complete a single arrangement (ordering) in a row. Therefore the number of ways to choose (arrange) n objects out of a total of n objects is: n⋅(n-1)⋅(n-2)⋅…⋅2⋅1 = n! [2] And the number of ways (permutations) to choose k (unique) objects out of a total of n (unique) objects is: n⋅(n-1)⋅(n-2)⋅…⋅(n-k+1) = `bb (n!)/((n-k)!)` = `bb _nP_k` [3] The fraction in the center in [3], is the product of all the numbers from 1 to n, (i.e. n!) except for those being a product from 1 to n – k ( which is (n – k)! ). Because n! is a multiple of (n – k)! , this artificial fraction evaluates to a product of consecutive integers, below, the arrangement numbers: This “fraction” in [3] has a variety of notational labels and all mean the same thing: Symbols Spoken or Meaning When k = n P(n,k) a k-permutation P(n,n) = n!/0! = n! p[nk ]a k-permutation arranged in a row p[nn ]= n!/0! = n! [n]P[k] the permutations of n things taken k at a time [n]P[n ]= n!/0! = n! (n)[k] a k-list (n)[n ]= n!/0! = n! n^k n to the k falling (falling power) n^n = n!/0! = n! n factorial is also recursively written as: n! = n⋅(n-1)! [4] which is valid for all positive integers n ≥ 1. It is also true that, by solving [4] for (n-1)! , we have: (n – 1)! = `bb (n!)/n` (n – 1)! also represents the number of circular arrangements (or necklaces) of n distinct objects. This is because two permutations (i.e. complete placements of people around a circular table) can be considered identical if one becomes the other by rotating the circle (or table). There are n ways to rotate the circle (or table). So n divides the n! In summary, n! is the count of the number of permutations (or linear arrangements) of n distinct objects in a row and (n-1)! is the count of the number of circular permutations (or arrangements). Computation Aids: Online Calculators The website calculatorsoup.com has these special permutation calculators available for those dealing with larger values of n. They are: This calculator has examples to show how to use and Calculate [n]P[k] where 0 ≤ n, k ≤ 9999. See: Permutations Calculator. Circular Permutations Calculator: Calculates (n-1)! where 0 ≤ n ≤ 9999. See: Circular Permutations Calculator Other specialized calculators are interspersed further below. 5 Examples (1) How many different ways can the letters in the word numerals be arranged? Since different ways is a synonym for arrangements, and the 8 letters are distinct, the permutation formula [2] (or [3] with k=n] , expressed as n! , is to be applied: 8! = 8⋅7⋅6⋅5⋅4⋅3⋅2⋅1 = 40320 ways (2) How many ways are there to select a first-prize winner, a second-prize winner and a third-prize winner from 50 different people who have entered a contest? In this question, it is important to know which person wins which prize. So the number of ways to choose the three prize-winners is the number of ordered selections of 3 people (out of 50). So we consider that the first prize can be won by any of the 50 people; then the second prize can be won by any of the remaining 49 people; and the third prize can be won by any of the remaining 48 people. This is reflected in formula [3], where n = 50 and k = 3. [ 50]P[3] = `bb (50!)/((50-3)!) = ( 50*49*48*…*1)/(47*46*45*…*1)` = 50⋅49⋅48 = 142100 ways. (3) How many permutations of the letters STUVWXYZ contain the consecutive lettered string XYZ? Because the consecutive lettered (sub)string XYZ must all appear as an ordered group (first X, then Y, then Z) in the larger 8 lettered arrangement, we can think of the string XYZ as a single entity. Then the number of arrangements of {S, T, U, V, W, XYZ} is based on 6 items rather than 8. So: 6! = 6⋅5⋅4⋅3⋅2⋅1 = 720 ways (4) In how many ways can 9 people be seated around a circular table? Because this fits the conditions for a circular permutation (with arrangements that can be rotated being considered indistinguishable), we use the formula in [5] with n = 9: (9 – 1)! = 8! = 8⋅7⋅6⋅5⋅4⋅3⋅2⋅1 = 40320 ways. (5) Find the number of ways in which 5 people Al, Betty, Cam, Dana and Ed have place settings (seats) at a circular table, such that: (i) Dana and Ed must always sit together, and (ii) Al and Betty must not sit together. Because this fits the conditions for a circular permutation (with rotations of any arrangement being considered equivalent), we will start with formula [5]. Consider the first restriction. Dana and Ed sitting together represent a couple, a single entity. So n = 4 {Al, Betty, Cam, Couple} or {a, b, c, de) and: 2⋅(4 – 1)! = 2⋅3! = 2⋅3⋅2⋅1 = 2⋅6 = 12 In the expression evaluation, as there are two positions the couple can assume, the 6 is multiplied by 2 to get 12. To show each circular arrangement, I visualized a clock with de at the Noon position, a at the 3:00, b at the 6:00 and c at the 9:00 position, respectively. By keeping de anchored at the Noon position, the rest of the people (n – 1) can be permuted (arranged) normally for counting purposes and rotations do not occur. Using initials, the Part (i) count of arrangements enumerates as follows: (abcde), (acbde), (cabde), (cbade), (bacde), (bcade), (abced), (acbed), (cabed), (cbaed), (baced), (bcaed) Now consider the second restriction: Al and Betty must not be neighbors. This means that Al and Betty (a and b) must be separated on either side of a or b by both c and de. Notice that the four uncolorful arrangements are just the ones that are needed. We can determine this more analytically by thinking about creating four quartets: adeb, bdea, aedb, beda, which separate a and b and which get permuted with c in only 1 way each (this is because cade b and adebc are equivalent rotations). Since c must be on the outside of the quartet, c is also between b and a (or a and b). Alternatively, we can delete from the part (i) enumeration of 12, the 8 arrangements where a and b are next to each other. So we must subtract from the list of 12, these 8 arrangements: (abcde), (cab de), (cbade), (bacde), (abced), (cabed), (cbaed) and (baced). This leaves 4 circular arrangements (acbde), (bcade), (acbed) and (bcaed) that simultaneously fulfill the two restrictions. Permutation Relaxations So far, this discussion focused on Permutations that were distinct and had no repetitions and were counted without replacement. How does the counting change if we allow repetitions or if we allow replacement or both? • Permutations with repetitions The rule is that if there are zero or more repetitions of an object, because they are indistinguishable objects, their multiplicity factorial must divide the overall arrangement factorial to get the proper count. (with zero repetitions, we have a multiplicity of 1; i.e. the object is unique.) `bb (n!)/(a!*b!*c!*…*k!)` , where a+b+c+…+k = n [6] As an example, suppose we want to know how many different ways the distinct letters in the word bananas can be arranged? We calculate: `bb (7!)/(3!*2!) = (7*6*5*4*3*2*1)/((3⋅2⋅1)⋅(2⋅1)) = 7*6*5*2` = 420. This is because ‘a‘ is repeated three times and ‘n‘ is repeated twice; for each of these letters, the repetitions are indistinguishable, so each multiplicity factorial (3! and 2!) must divide the factorial of the 7 letter word to get the correct count. • Permutations with replacement If we are counting Permutations with replacement, then Table 1 changes and becomes: Value of k: 1 2 3 k (n-1) n Ways to choose k: n n n . . . n . . . n n [Table 2] Row position: —- —- —- —- —- —- In the first position, we can choose any of the n objects. Choose one. In the second position, we can choose any of the n objects. Choose one. In the third position, we can choose any of the n objects. Choose one. In this same way, in the k^th position, we can choose any of the n objects. Choose one. Finally, for the last position, we can choose any of the n objects. Choose one. In this way, we complete a single arrangement (ordering) in a row. Therefore the number of ways to choose (arrange) n objects out of a total of n objects (with replacement) is: n of these ` <———>` n⋅n⋅n⋅…⋅n⋅n = n^n [7] And the number of ways (permutations) to choose k (unique) objects out of a total of n (unique) objects (with replacement) is: k of these ` <———>` n⋅n⋅n⋅…⋅n⋅n = n^k [8] There is a Permutations With Replacement Calculator. It offers examples of how to use and Calculate n^r where 0 ≤ n, r ≤ 9999. See: Permutations With Replacement Calculator An Example: U.S. zip (postal) codes consist of an ordering of five digits, hyphen, followed by four digits chosen from 0-9 with replacement (i.e. digits may be reused). How many zip codes are in the set of all possible zip codes? Since we are looking for arrangements with replacement, we use n, the number of digits to be 0-9, which is 10 digits and the number of positions of the zip code, which is 9. Using the formula [8], we calculate: 10^9 = 1 Billion. Since the zip code 00000-0000 is not used, we subtract it from 1 Billion to get: 999,999,999 different zip codes. If 00000-nnnn are also to be unused, we have instead: 10^9 – 10^4 = 9.9999 × 10^8 = 999,990,000 different zip codes. • Derangements (and subfactorials) A Derangement is a permutation of the elements of n objects or elements, such that no element appears in its original position. If an original permutation is {1, 2, 3, 4, 5}, one derangement would be {2, 3, 4, 5, 1}.The symbol !n (sometimes called a subfactorial), represents the number of derangements of n objects and its formula is: !n = n!`bb sum_(k=0)^n ((-1)^k)/(k!)`[9] In this formula, n ≥ 2. Here are the listings of !n up to n=4. !0 = 1(the empty permutation) !1 = None !2 = 1(21) !3 = 2(231) (312) !4 = 9(2143), (2341), (2413), (3142), (3412), (3421), (4123), (4312), (4321) Derangements with a single fixed point are another special permutation calculation. The number of permutations having at least one fixed point, (e.g.: {2, 4, 3, 5, 1} thus not being (a complete) derangement, is given by: n! – !n = n! – n!`bb sum_(k=0)^k ((-1)^n)/(k!) = n!sum_(k=1)^n ((-1)^k)/(k!)` [10] Another Example: You have 6 balls in 6 different colors, and for every ball you have a box of the same color. How many derangements do you have, if no ball is in a box of the same color? If at least one ball is in a box of the same color? To apply [9] to this problem, we calculate: !6 = 6!`bb ((1 – 1 + 1/2 – 1/6 + 1/24 – 1/120 + 1/720)) =` 6*5*4*3 – 5! + 6*5 – 6 + 1 = 360 – 120 + 30 – 6 + 1 = 265 For the second part of the question, keeping in mind that what is not a derangement is a “partial” derangement, we subtract the total number of derangements from the total number of permutations. 6! – !6 = 720 – 265 = 455 or evaluating the right side of [10] explicitly, we have 6!`bb ((1 – 1/2 + 1/6 – 1/24 + 1/120 – 1/720)) =` 720 – 6*5*4*3 + 5! – 6*5 + 6 – 1 = 720 – 360 + 120 – 30 + 6 – 1 = 455 • Permutation Cycles A permutation cycle is a subset of a permutation whose elements trade places with one another. For example, in the permutation group {4, 2, 1, 3}, (143) is a 3-cycle and (2) is a 1-cycle. Here, the notation (143) means that starting from the original ordering {1, 2, 3, 4}, the first element is replaced by the fourth, the fourth by the third, and the third by the first, i.e., `bb 1 rarr 4 rarr 3 rarr 1`. ( `1, 2, 3, 4` )So `1 rarr 4 , 4 rarr 3, 3 rarr 1` creates the 3-cycle (143) and `darr darr darr darr``2 rarr 2` creates the fixed point or 1-cycle (2). ( `4, 2, 1, 3` )So we have, (143)(2) making up the two cycles in the permutation group {4, 2, 1, 3}. ( See: Weisstein, Eric W. “Permutation Cycle.” From MathWorld–A Wolfram Web Resource. <mathworld.wolfram.com/PermutationCycle.html> ) • Odd And Even Permutations So is {4, 2, 1, 3} an odd or even permutation? By definition, an odd permutation is a permutation obtainable from an odd number of two-element swaps, i.e., a permutation with permutation symbol (Levi-Civita signature symbol) equal to -1. For the initial set (1,2,3,4) the twelve odd permutations are those with one swap (1,2,4,3, 1,3,2,4, 1,4,3,2, 2,1,3,4, 3,2,1,4, 4,2,3,1) and those with three swaps (2,3,4,1, 2,4,1,3, 3,1,4,2, 3,4,2,1, 4,1,2,3, 4,3,1,2). For a set of n elements and n ≥ 2, there are `bb (n!)/2` odd permutations *, which is the same as the number of even permutations. For n =1, 2, …, the total odd permutations are given by 0, 1, 3, 12, 60, 360, 2520, 20160, 181440, … (OEIS A001710). (* D’Angelo, J. P. and West, D. B. Mathematical Thinking: Problem-Solving and Proofs, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2000. (p.111) and Weisstein, Eric W. “Odd Permutation” From MathWorld–A Wolfram Web Resource. <mathworld.wolfram.com/OddPermutation.html> ) By definition, an even permutation is a permutation obtainable from an even number of two-element swaps, i.e., a permutation with permutation symbol (Levi-Civita signature symbol) equal to +1. For the same initial (1,2,3,4), the twelve even permutations are those with zero swaps: (1,2,3,4); and those with two swaps: (1,3,4,2, 1,4,2,3, 2,1,4,3, 2,3,1,4, 2,4,3,1, 3,1,2,4, 3,2,4,1, 3,4,1,2, 4,1,3,2, 4,2,1,3, 4,3,2,1). For a set of n elements and n ≥ 2, there are `bb (n!)/2` even permutations, which is the same as the number of odd permutations. For n=1, 2, …, the numbers are given by 0, 1, 3, 12, 60, 360, 2520, 20160, 181440, … as above. (OEIS A001710). ( See: Weisstein, Eric W. “Even Permutation” From MathWorld–A Wolfram Web Resource. <mathworld.wolfram.com/EvenPermutation.html> ) An Odd Permutations Calculator may be used to Calculate `bb (n!)/2` where 2 ≤ n ≤ 100. See: Odd Permutations Calculator Also, An Even Permutations Calculator may be used to Calculate `bb (n!)/2` where 2 < n ≤ 9999. See: Even Permutations Calculator So {4, 2, 1, 3} is an even permutation and its 3-cycle (143) and 1-cycle (2) represent two even cycles. A rule of thumb is that the if a permutation cycle has an odd number of elements within, it is considered even, otherwise if an even number of elements within, it is considered odd. An analogy exists with odd and even numbers which is also true of permutation cycles: If two even numbers (cycles) are added together, the result is even. If two odd numbers (cycles) are added together, the result is also even. If one even and one odd number (cycles) are added together, the result is odd. Thus {4, 2, 1, 3} is an even permutation based on two even cycles (143) and (2). Finally, the rosettacode.org site offers over 70 different computer language codes and scripts (although not any of the Unix/Linux Shells as yet). These programs deal with permutations with or without replacement (as well as that for combinations). See Enumerating Permutations Programs. The site also has programmatic algorithms for • permutations with repetitions, • Finding the missing permutation: finding the missing enumerated permutation from the full set of permutations (47 programs and an interesting discussion tab) See: Find_the_missing_permutation and • permutations and derangements: See: Derangement programs (26 programs) I should mention that the most popular applications for permutations are for people that love Scrabble (at least two or more play), Anagrams and Jumble word and cartoon puzzles. Other applications are when a host is planning a formal a sit down, catered meal for at least 6 people that has seating preferences and constraints. As always, if you are unable to successfully seat everyone agreeably, make the problem bigger and invite more guests. Book References: Conway, John H., Guy, Richard K., (1995). The Book of Numbers. Springer-Verlag, New York, NY. (p. 66-67) Knuth, Donald E. (1997). Art of Computer Programming, Volume 1: Fundamental Algorithms, 3rd Ed. Addison-Wesley, Boston. MA.(p. 45-50, 164-167) Rosen, Kenneth H., Editor-In-Chief (2000). Handbook of Discrete and Combinatorial Mathematics. CRC Press, Boca Raton, FL (p. 84-91, 96-107) Rosen, Kenneth H. (2012). Discrete Mathematics and its Applications, 7th Ed. McGraw-Hill, New York, NY (p. 385-443) π GPS (Greater Precision Solutions) In my previous blog post π Places, I reviewed special fractions that were very good approximations to the value of π to 2 and 6 decimal places: i.e. `22/7` and `355/113` respectively. In this post, I would like to explore the more modern (since 1593) approximations based on infinite series. Note: If the mathematics shown creates immediate “eye-glaze”, please skip to the Activities section, which is much more entertaining in comparison. A Series sums up a finite or infinite collection of terms of a Sequence. If finite, there are a definite, bounded number of values that add up to a specific, measurable value. If infinite, there are an endless, unbounded number of values that add up to either a specific, measurable value (converge), or add up to an indefinite, unmeasurable value (diverge), which is denoted as infinity (∞). Here are two examples of each: Finite Series First 10 Natural Numbers sum `1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = sum_(n=1)^10 n=(10(10+1))/2=55`[1] π expansion to 7 decimal places: `3 + 1/10 + 4/10^2 + 1/10^3 + 5/10^4 + 9/10^5 + 2/10^6 + 6/10^7 = 3.1415926`[2] Infinite Series (Convergent) π decimal expansion: `3 + 1/10 + 4/10^2 + 1/10^3 + 5/10^4 + 9/10^5 + 2/10^6 + 6/10^7 + … = ` π[3] Geometric Series: `1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + … + 1/2^(n-1) + … = sum_(n=1)^oo 1/2^(n-1)=2`[4] Infinite Series (Divergent) Natural Number Series: `1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + … = sum_(n=1)^oo n=oo`[5] Harmonic Series: `1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + … + 1/n + … = sum_(n=1)^oo 1/n=oo`[6] For those interested in mathematically (in)finite series, look at the following List of Mathematical Series. Summed general expressions are given for series organized by sums of powers, power series, binomial coefficients, trigonometric and rational functions. At the top, there are subscripted letters and Greek letter functions (such a necessary annoyance) that are separately named and linked. Greater Decimal Places Approximators Over the years, many mathematicians have tried to evaluate a finite number of terms in an infinite series to approximate π. • Francois Viete in 1593, using an infinite product, calculated π to 9 decimal digits, by applying the Archimedes method of exhaustion to a polygon with `bb 6 × 2^16 = 393216` sides and calculating the circumference (perimeter) when the diameter is 1. π = `2/[sqrt(1/2)*(sqrt[1/2+(1/2)*sqrt(1/2)])*(sqrt[1/2+(1/2)*sqrt{1/2+(1/2)*sqrt{1/2}]))*…` [7] Quite the eye-roller! Alternatively, this looks slightly less volatile, but is equivalent: π `~~ 2^k*sqrt(2–sqrt(2+sqrt(2+sqrt(2+sqrt(2+sqrt(2+sqrt(2+…))))))) ~~ 3.14157294`[8] • James Gregory in 1671 published the arctangent (arctan) infinite series expansion and Gottfried W. Leibniz in 1682 published the specific case with x = 1 and based on arctangent`(1) = pi/4` [9] Expanding this via the infinite series for arctangent`(x)`, where `bb | x | ≤ 1`, we get: arctangent`(x)= x – x^3/3 + x^5/5 – x^7/7 + … = sum_(n=0)^oo (-1)^n*(x^(2n+1))/(2n+1)` [10] • John Machin in 1706, using arctangent and the Gregory-Leibniz Infinite Series expansion for arctangent, as shown below, calculated it to 100 decimal digits. π = 4∙[4∙arctangent`(1/5)` + arctangent`(1/239)`][11] π = [12] `4*{ 4*[ 1/5 – 1/(3*5^3) + 1/(5*5^5) – 1/(7*5^7)+… ] + [ 1/239 – 1/(3*239^3) + 1/(5*239^5) – 1/(7*239^7)+… ] }` • Srinivasa Ramanujan, was a renowned Indian mathematician who made novel contributions to mathematics and to determining π during his short life (1887-1920). The first formula below gives π to 11 decimal places. π = `root(4)[81+(19^2/22)] = 3.14159265262`[13] The following iterative formula was due to Jonathan and Peter Borwein based on Ramanujan’s formulas: π = `[9801/(2*sqrt(2))]*[1/(sum_(n=0)^oo {([(4n)!]/(n!)^4)*[(1103 + 26390n)/(4*99)^(4n)]}]]`[14] Ramanujan’s integration formula for π is also quite innovative: `(pi/2)^3 = int_0^oo [(log x)^2/(1+x^2)]dx`[15] which can easily be solved for π. • David and Gregory Chudnovsky in 1989 in New York City, created their own supercomputer (named m zero) to compute 1 billion decimal places of π from their home. It operated at 100 billion calculations per minute for almost a week. In 1997, they moved to Polytechnic Institute of Brooklyn (my Alma Mater and now part of New York University), creating the Institute for Mathematics and Advanced Supercomputing. By then, their computer worked for a week (with a better algorithm) to compute up to 8 billion decimal places for π. Their π calculation is based on: `1/pi = 12*sum_(n=0)^oo (-1)^n*{([(6n)!]/[(n!)^3*(3n)!])*[(13591409 + 545140134n)/(640320^(3n+3/2))]}`[16] • Yasumasa Kanada holds the 2002 record for `bb 1.2411 × 10^12` decimal places for π. His computer programs (written in Fortran and C) were based on K. Takano’s 1982 arctangent formula [17] below and ran on a Hitachi SR8000/MPP with 144 nodes. The computations were carried out in hexadecimal (base 16) arithmetic and converted at the end to decimal for maximal efficiency. Further details about the computation are found on Yasumasa Kanada 2002 π Summary Page. π `~~` [17] `48*arctan(1/49) +128*arctan(1/57) – 20*arctan (1/239) + 48*arctan(1/110443)` This took 400 hours (hexadecimal computing) + `bb 23 1/3` hours (conversion). It was verified with F.C.M. Stoemer’s 1896 arctangent formula: π `~~` [18] `176*arctan(1/57) +28*arctan(1/239) – 48*arctan (1/682) + 96*arctan(1/12943)` The verification formula [18] took 157.067 hours (hexadecimal computing) + 21.53 hours (conversion) to evaluate and compare. • As of October 8, 2014, an anonymous mathematician named “houkouonchi” completed computing and verifying π to a total of `bb 1.33 × 10^13` decimal places using the Chudnovsky formula [16]. It took his program 208 days to compute and 182 hours to verify on an 2 x Xeon E5-4650L @ 2.6 GHz. See: Validation of π computation. π Activities To Try: Entertaining Websites: Within the mathisfun.com website, there is an activity to guide you to find the approximate value of π. Look at π approximation Many people are emotionally attached to the numbers in their life. Their social security number (e.g. 423456789), their birthday (02022015), their 7-digit telephone number (3234777). There is a great website, called the Pi Search Page that lets you enter your special number and it reports where, in the 200 million decimal places of π that it is located, how many times it shows up (including not at all) and how long it took to search (typically in `bb 1/5` of a second). Go to π Search Page and interact with it. [Updated to add:] To see 10,000, 100,000 or 1 million digits of `bb pi` in all their glory in a downloaded file, a related site called digits of Pi; lets you view 10,000 places or access the download Note that the probability that your birthday number string (8 digits long) will be embedded in the first 200 Million decimal places of π is equal to `bb (1 – 1/e^2)` which is just below 86.47%. There is a World Ranking List of people who have memorized some number of digits of π and recited them. Please view π World Ranking List Books worth reading: • Blatner, David (1997). The Joy of π . London, UK. Walker/Bloomsbury Books.The website Joy of π has a set of links to many π oriented pages including: π mysteries, music and π, memorizing π digits, having fun and enjoying weird aspects of π. David Blatner wrote about the statistical distribution of digits in the first million decimal places of π which are comprised of: 100026 2's 100229 3's 100230 4's 100359 5's 100106 9's For these data, I computed the following statistics: the mean (average) is 100000; the median (middle) is 100005.5; the standard deviation is 247.41 and the range of the data is 811, with the digit 5 being most frequent and the digit 6 being least frequent. • Beckmann, Petr (1971). A History of π (pi). New York, NY. St. Martin’s PressThis impressive source book traces the history of the constant and of the mathematicians that sought greater and greater π precision. There are many illustrations of artifacts and notations used to demonstrate the mathematics involved. This completes the technical and non-technical tour of π. Please let me know if I’ve missed anything that could be further explored.
{"url":"http://www.arkaye.com/blog/2015/02/","timestamp":"2024-11-15T04:31:51Z","content_type":"text/html","content_length":"62324","record_id":"<urn:uuid:169cb805-e64c-4d46-ae1c-79e9ea2cf17d>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00280.warc.gz"}
Discount Calculator | Calculate percentage off price easily Percent Discount Calculator | How to calculate discount? Category: Finance Calculators Sale items are worth less than the original price. To make it easier for you to calculate how much you need to pay, we have developed this discount calculator. Using the percentage discount calculator, you can calculate not only the final cost after applying discounts, but also how much you save on your purchase. How to calculate discount & discounted price? Calculation of the discounted price will be the same as percentage off price calculator does: Final price = Original price – (Original price × Discount percentage / 100) To calculate an amount of money you may save, it is necessary to use a little bit other formula: Discount benefit = Original price × Discount percentage / 100 The origin price means the old cost before applying discounts. Discount percentage is just a discount in percent (for example, -5%, -75%, etc.). The final price is the discounted price, which is reduced by the amount equal to the discount. Just plug your values into the formulas and you’ll get the correct result. However, it is much easier to do this with our percent discount calculator. To calculate the result, you only need to enter the old price and the percentage discount. Enter your numbers and click on the button to get an answer. Examples how to calculate discount percentage Let’s calculate discount percentage together: Calculation of 10% off In this case, you can simply divide the starting price by 10. This will be your discount in monetary terms. This is a simplified calculation scheme that speeds up the calculation, in which the price is divided by 100, and then multiplied by 10. It’s the same thing. To calculate the final price after applying the discount, this amount is subtracted from the original cost. For example, if the initial cost of the product was $500, discount benefit would be $50 ($500/10 or $500/100×10), while the final cost would be equal to $450 ($500 – $50). What is 20% off? 20 percent off (discount –20%) is calculated in such way: 1. Divide the original cost by 100 and multiply it by 20. This is the amount saved. 2. Subtract the result from the original price. Well done, it is the discounted value you should pay. Let’s take $1000 as the initial value. To take 20% off, we divide $1000 by 100 (=$10) and multiply it by 20 (=$200). The result will be $200 which is our discount benefit. The final price in this example is calculated by subtracting $200 from original cost $1000. It means 800% after discounting. How to calculate discount rate 30%? To find out the discount amount of money saved and the final cost you should follow the same instruction using the discount percentage 30%. For instance, if the sales price was $150, you would calculate discount step by step: 1. Divide the original cost by 100. Now it is $1.5 (as 1% of the initial amount). 2. Multiply it by 30. $45 is the amount you saved. 3. Subtract $45 from the original price and you would get $105. How much is 50% off of the price? To calculate discount 50%, it is possible just to divide the starting price in half. The result will be equal to both the amount of money saved and the final price. Let’s take $700 as an example of original price: 1. Divide $700 in half and get $350 as discount benefit. 2. Subtract $350 from $700 and get $350 as a final price. What if the discount is 75% off? If you’re shopping and the discount of your purchase is 75%, it means that only a quarter of the amount will remain of the original price. How the price discount calculator determines it: 1. It divides the sum by 100 and multiplies by 75 (or just multiplies by 0.75). That is the result of saved amount. 2. Then sales discount calculator subtracts the result from initial price you’ve entered. For example, if you enter the price $60, the discount will be equal to $45, while the final price you need to pay will be $15 (quarter of $60). Calculate 90 percent-off Calculation for 90% discount is so: 1. You divide the cost by 100 and multiply by 90 (it is allowed just to multiply price by 0.9) to get the amount you can save. 2. Then it is necessary to subtract the result from starting price. If the starting price is $200, you will save $180, while after-discount price will be $20. Calculate percent off price if the discount is % of initial cost $ =
{"url":"https://calculator-online.info/finance/discount-calculator/","timestamp":"2024-11-06T04:45:01Z","content_type":"text/html","content_length":"147352","record_id":"<urn:uuid:c1a17c07-df3d-4383-9ebe-73f208be73cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00115.warc.gz"}
Real Analysis - (Volume 3) This book titled “Real Analysis (Volue-3)” is a continuation of Real Analysis (Volume-1) and (Volume-2). It is designed for UG and PG students of mathematics to understand the underlying principles and essence of real analysis. To achieve them authors have given detailed proofs for theorems and detailed solutions of a large number of problems on each topic. The main aim of this book is a student can learn its content through his self-study. This book consisting of six chapters Mean value theorems, Taylor’s and Maclaurin’s expansion, indeterminate forms, Riemann integration, Riemann-Stiljes integration and improper integrals. Dr. K. Sambaiah, Formerly Professor of Mathematics, Kakatiya University, Warangal, Telangana, India. Dr. E. Rama Associate Professor of Mathematics, Osmania University, Hyderabad, Telangana, India. Dr. A. Chandulal Assistant professor of Mathematics, National Sanskrit University, Tirupati, Andhra Pradesh, India • Paperback: 562 pages • Publisher: White Falcon Publishing; 1 edition (2023) • Author: K. Sambaiah, E. Rama, A. Chandulal • ISBN-13: 9788119510214 • Product Dimensions: 7 x 1 x 10 Inches Indian Edition available on: We Also Recommend
{"url":"https://store.whitefalconpublishing.com/en-gh/products/real-analysis-volume-3","timestamp":"2024-11-11T14:53:39Z","content_type":"text/html","content_length":"96729","record_id":"<urn:uuid:02ae24de-934c-4f73-afc3-41bbf5a34a43>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00555.warc.gz"}
Regional Real Estate Statistical Analysis: Hypothesis Testing – Two-Tailed Test 1. Regional Real Estate Statistical Analysis: Hypothesis Testing – Two-Tailed Test Problem Description: This Statistical Analysis homework focuses on the central question in real estate, whether property prices are primarily determined by the size of the property in square feet. While several factors influence property prices, the geographical region is one of the critical variables to consider. This study aims to investigate whether the average cost per square foot of properties is the same or significantly different between the Mid-Atlantic and South Atlantic regions. To address this question, we employed a two-tailed t-test to examine our hypothesis. We took 375 observations randomly from mid-Atlantic and South Atlantic regions. Hence, the total number of data points were 750. We used the two-sample two-tailed t-test. The population parameter is average cost per square feet in the region. The hypotheses is specified as: • H0: There is no significant difference between average cost per square feet in Mid-Atlantic and South Atlantic, μ_1= μ_2. • H1: There is significant difference between average cost per square feet in Mid-Atlantic and South Atlantic, μ_1≠μ_2. A two-sample two tailed t-test was used to test the said hypothesis. Data Analysis Preparations The sample contains 750 data points – 350 from mid-Atlantic region where average cost per square foot was 135.64 (SD = 134.4) and 350 from south Atlantic where average cost per square foot was 132.56 (SD = 62.15) The histogram for both mid-Atlantic and south Atlantic cost per square foot indicates right skewed. Hence, the normal distribution assumption might not be met. The assumptions for the two-sample t-test are: • Normality of the data: This assumption is not satisfied based on Shapiro Test for normality for both the groups, p <.001 for both the groups. • Independence: The samples can be assumed to be independence by construction • Equality of variance: This assumption is also not met based on Levene’s Test, p=.04; hence, the variance in two groups is not same. The t-test statistic is calculated as: The test-statistic has t-distribution with 748 degrees of freedom. The two-tailed p-value is P(|t(748)|>0.4)=0.69 Figure 1: Representation of test on normal curve. As degrees of freedom is high, t is nicely approximated by normal distribution. Test Decision The p-value is the probability of type 1 error if null hypothesis is true. The two tailed p-value for this test is 0.69 which is larger than the level of significance which is 5%. The shaded area in below plot indicates the probability of null hypothesis being true. Since, p-value is larger than the level of significance, we do not reject the null hypothesis. It was concluded that the average cost per square foot was same for both mid-Atlantic and south Atlantic regions as there was no evidence of significant difference in the data. The test assumptions were not fully satisfied; hence, the test result may be less reliable. A non-parametric test can be done to further analyze the difference. However, based on t-test and descriptive statistics, we conclude that there is no significant difference in cost per square foot in two regions.
{"url":"https://www.statisticshomeworkhelper.com/hypothesis-testing-for-regional-real-estate-statistical-analysis/","timestamp":"2024-11-05T00:22:54Z","content_type":"text/html","content_length":"48921","record_id":"<urn:uuid:d492302d-164d-49c2-a53c-d3730ba8dad5>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00559.warc.gz"}
Best colormap for Matlab/Matplotlib plots Much has been written on selecting best colormaps from among: sequential, divergingm and qualitative. Sequential colormaps are good for representing magnitude of data. How much flow, how much precipitation, how much weight, temperature, etc. Having a monotonic lightness factor is important for perceptual consistency. Non-linear lightness is used to emphasize certain ranges of data, perhaps where snow changes to ice or rain. Non-monotonic lightness can be used to emphasize different types of precipitation or phase changes, etc. Example sparse data plots with reversed sequential colormaps: colormap_white_min.py, colormap_white_min.m Reversed sequential colormaps are useful for sparse data such as astronomical images or precipitation data where a lot of data is at or near zero relative to other data. The reversal leads to near-zero areas being white and higher intensities being darker. While any colormap can be reversed, typically sequential colormaps are used with/without reversal. Matplotlib colormaps are reversed by appending _r to the colormap name. For example: Matlab and GNU Octave colormaps are reversed by flipud() the colormap. Colormaps in .m code are represented as an (N,3) array, where N is the number of steps in the colormap (typically 64 or 256). Matlab cubehelix.m is like Matplotlib. Diverging colormaps are useful for positive or negative data where the sign is as important as the magnitude. For example, in/out flows, positive/negative charge. These colormaps are white near the zero point (which can be offset) and intensify as their absolute magnitude increases. Qualitative colormaps emphasize difference between values, but without a particular sense of ordering. This can be useful for categories, say a histogram of salary vs. employee type.
{"url":"https://www.scivision.dev/colormap-selection-reversal-matlab-matplotlib/","timestamp":"2024-11-11T22:46:31Z","content_type":"text/html","content_length":"7914","record_id":"<urn:uuid:2e71b12e-41fe-405d-a92a-2b6930f7b979>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00251.warc.gz"}
How to find the root of an equation in Matlab? @eloise If the equation is a polynomial (or polynomial), then of course it is most convenient to find the roots through the roots() function, setting the values of the coefficients: 1 % 2*x^2-16*x+14=0; 2 roots([2 -16 14]); we get 7 and 1 in the answer. If the equation is given in a general form, then the easiest way is to use symbolic calculations and the vpasolve() solver: 1 syms x 2 f1(x) = 2*x^2-1/x+exp(x); 3 vpasolve(f1) 4 fplot(f1); grid on; 5 saveas(gca, 'out1.png'); In the answer, we get a high-precision root: ans = 0.48082057254304785525398632869782 Graph of this function: You can also find the zeros of a function using the fzero() function, but this solution has some limitations - the function should not go to infinity and gives a maximum of 1 root.
{"url":"https://devhubby.com/thread/how-to-find-the-root-of-an-equation-in-matlab","timestamp":"2024-11-11T16:38:09Z","content_type":"text/html","content_length":"135762","record_id":"<urn:uuid:caa22753-65e4-4b56-8d02-71d38e181e8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00606.warc.gz"}
Checking direction fields I was recently asked about how to spot which direction field corresponds to which differential equation. I hope that by working through a few examples here we will get a reasonable intuition as to how to do this. Remember that a direction field is a method for getting the general behaviour of a first order differential equation. Given an equation of the form: For any function of x and y, the solution to this differential equation must be some function (or indeed family of functions) where the gradient of the function satisfies the above relationship. The first such equation that we looked at was the equation: We are trying to find some function, or indeed family of functions y(x) which satisfy this equation. We need to find a function whose derivative (y'(x)) at each point x is equal to the value of the function (ie. y(x)), plus that value of x. ie. at the point x=5, we must have that the gradient of the function (y'(5) is equal to 5 plus the value of the function (y(5)). Because we have not specified the initial, or boundary condition for the equation, there will be an infinite number of solutions which satisfy the differential equation alone, and we can imagine that so long as f(x,y) is not singular at some point (x,y), there will be a solution to the equation which passes through that point. The only constraint on the solution is that the gradient of the function, as it passes through that point is equal to x+y(x). Indeed we can even put in a direction field when f(x,y) is singular, but really we know that the function is not defined at that point. The direction field will simply correspond to a vertical line. Rather than finding the solution in its entirety we can simply ask, for a sample of points, what, roughly, will the lines passing through the points look like? From the equation, we only have a constraint on the first derivative, and so we will simply put a short, straight line through all of our sample points in the (x,y) plane, such the gradient of that short straight line satisfies the differential equation above. In the following two plots we use two different sets of sample points. In the plot on the left, we have chosen to sample at half integer positions in x and y. In the right plot we have quarter integer samples. Note importantly that each line has a very special property: Its gradient is equal to the x value plus the y value of the middle of the line. ie. the line passing through the point (1,1) has gradient 2, and that passing through (1,-1) has gradient 0. This is all we ever have to check when seeing if indeed a given direction field plot satisfies our differential equation. Let’s look at another example. This time we are looking at the differential Can we see that this is correct? Well, we see a couple of clear features in the plot. The first is that the gradient along the line y=x always seems to be the same: 1. Indeed at the points along the line y=x, we expect that the value of x/y will be 1 (ie. at the point (2,2), the value of x/y=2/2=1). Along the line y=-x, we see another set of lines, each of which have the same gradient: -1. Again, this makes sense because when y=-x, y/x=-1. Another feature that we see is that close to the x-axis (ie. the points for which y=0), the gradient of the lines seem to be getting larger and larger, as we would expect for x/y as y gets small (unless $x\le y$). And close to the y-axis, (ie. the points for which x=0), the gradient of the lines seem to be getting smaller and smaller (ie. they are flatter and flatter). Again, this is in line with these lines corresponding to the differential equation above. Let’s look at a less trivial example: The direction field is as follows: Again, let’s look for the most obvious features: The first is that all of the lines for which x=-2 seem to be horizontal. See the lines in the red box here: Indeed this makes sense because $\frac{\sqrt{-2+2}}{\tan(y)}=0$ and so we expect these to be flat – ie. their gradients to be 0. Along the y-axis we have vertical lines. See the lines in the red box here: This also makes sense because when $y=0, tan(y)=0$, and so the gradient at these points will be infinite (in fact this really means that the solution is ill defined at this point). We also see that there are a set of lines around points with y values of 1.6 an -1.6 for which the lines seem to be close to flat. See the lines in the red boxes here: This is because 1.6 is close to $\frac{\pi}{2}$ and at values close to this, tan(y) becomes very large (large and positive, or large and negative, depending on which side of $\frac{\pi}{2}$), so $\ frac{\sqrt{x+2}}{\tan(y)}$ will get small when tan(y) is large. We can also see that the lines below and above the y=1.6 lines change in gradient from being positive gradient to being negative. This is because tan(y) is changing sign either side of y=1.6, so we expect that the gradients will change sign. Anyway, I hope that with these examples it gives a few things to look out for when checking that a direction field does indeed satisfy a given differential equation. Simply make sure that the lines in the direction field satisfy the relationship between the gradient and the x and y values of the equation. One Comment 1. […] Differential equations – checking direction fields […]
{"url":"http://www.mathemafrica.org/?p=13193","timestamp":"2024-11-06T06:04:35Z","content_type":"text/html","content_length":"209937","record_id":"<urn:uuid:d33713b4-cf83-417f-98c0-12844c0352c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00396.warc.gz"}
Function Anatomy | Microsoft Excel 2019 - Basic & Advanced About this lesson Use to understand the anatomy of Excel functions, and what their components mean. Lesson versions Multiple versions of this lesson are available, choose the appropriate version for you: 2013, 2016, 2019/365. Quick reference Function Anatomy Understanding Excel Function Anatomy. When to use Use to understand the anatomy of Excel functions, and what their components mean. What is a Function? • A pre-packaged algorithm which accepts parameters to return a result. Key points to remember • Functions must be used inside formulas. • The function name is always followed by ( ) • The parameters for the current function show in Excel’s “Intellisense”. • The current parameter is always listed in bold. • Optional parameters are surrounded by [ ] • Parameters not enclosed in [ ] are required. • Ranges can be used as parameters. • Other functions can be used as parameters. Login to download • 00:04 In this video, we're gonna talk about Excel functions. • 00:08 It's important to understand that Excel functions and formulas, • 00:12 although often referred to as the same thing, are actually very different. • 00:17 What a function is is it's a pre-packaged algorithm • 00:20 that allows us to pass certain parameters into it and get a result back out. • 00:26 Now the interesting part is that they work very well with formulas. • 00:29 As a matter of fact, in order to use the function, it has to be inside a formula. • 00:34 And we can use multiple functions or only one function or • 00:37 none at all inside a formula to get the results that we need. • 00:41 Every function has some key parts to it. • 00:44 It has a function name, it will have required parameters and • 00:47 potentially optional parameters as well. • 00:50 And this is taking from the IntelliSense help that we see that pops up • 00:54 inside of Excel. • 00:55 You'll notice that the function name is always followed by an open bracket and • 00:59 then the list of any parameters followed by a closing bracket. • 01:03 The current parameter is always listed in bold, so • 01:06 when you look at the sum function here, you can assume that we actually snapped • 01:11 a picture of this as we were working on number1, which is the first parameter. • 01:16 Optional parameters are surrounded by these square brackets. • 01:19 So in this case, number2 and anything after it, the comma dot, dot, dot, • 01:24 allows us to know that we do not have to provide a number two if we don't want to. • 01:29 But we also know by default then that anything that doesn't have square brackets • 01:33 around it, as is the case of number one here: is not optional, it's required. • 01:40 Something that's also important to understand about functions is that we can • 01:43 use ranges as these parameters. • 01:45 We can use regular numbers, we can use text, or • 01:47 we can use ranges depending on what the function needs. • 01:50 So let's go take a look in Excel and see how we can actually use • 01:54 a function in order to build a good formula to work with. • 01:59 So for this example, we have a little income statement for • 02:02 a shoe store here, with has a couple of values in it that I want to summarize. • 02:06 Now, I could use just hard coded numbers but • 02:10 I don't wanna do that because if somebody changes the January, February numbers, • 02:13 I want it to update and flow all the way through the statement. • 02:16 I could also say =B7 + C7, but if someone inserted a new column between January and • 02:23 February and then renames the column, it wouldn't pick that data up. • 02:26 So the best way to do this is to actually leverage the sum function in a formula. • 02:31 And to do that, I'm gonna start by typing =. • 02:34 And, I'm gonna start typing the function name, su, and • 02:37 you will notice that the IntelliSense list shortens down to all of • 02:41 the functions that actually start with su. • 02:43 So we've got substitute, we've got subtotal. • 02:46 I'm now gonna go and I'm gonna type in the m. • 02:48 And that will get me to sum. • 02:50 And at this point that's the function that I want, and • 02:53 I have a way I can actually accept this I could double-click on it. • 02:56 But the way I always do this is by pressing the Tab key. • 02:59 And what you'll watch is that as soon I do this two things will happen. • 03:03 Number one, it will convert it to upper case and • 03:05 number two it puts in the opening parenthesis that's needed. • 03:09 I always go and lock in to every function by pressing the Tab key. • 03:13 I'll type as little as I possibly can to make that happen. • 03:17 And now to sum the date up, we'll just select January and • 03:21 February so we get B7 up to C7. • 03:23 Close the parenthesis, and we'll hit Enter, and I get $2500. • 03:29 Now, I can obviously copy this guy here. • 03:32 And I've got a little bit of formatting on this one. • 03:34 I'm gonna select these two cells here and I'm gonna right-click. • 03:37 And I'm gonna choose to paste formulas. • 03:40 And that way, it doesn't change the borders on these particular cells. • 03:44 Now let's do another one. • 03:45 We'll go up and this time again, I'm gonna go =su. • 03:49 And I'll show you that if I got a lot of functions in list, • 03:52 I can use my drop down arrow key to arrow down a couple of times to sum. • 03:56 And now I can Tab the sum as well. • 03:59 And what's interesting here is I don't even need to reach for the mouse. • 04:02 I can arrow up and then I can hold down my Shift key and arrow down one, • 04:08 close the parentheses, and hit Enter and my sum is now working there as well. • 04:12 So I can do everything keyboard driven. • 04:15 I can then copy this guy across as well. • 04:18 Even better, I can grab these cells. • 04:21 Control + C, Control + V, and • 04:24 I can grab these cells here if I want to and pace them as well. • 04:29 And you'll notice that because everything is relative, it's working quite nicely. • 04:35 Now there's another way that we can use the sum function as well. • 04:38 So I'm gonna do this for gross profit. • 04:39 We're gonna say =su, arrow down a couple times, m. • 04:43 And you'll notice that right now I'm on number one. • 04:46 So I could grab this one and then I could say, and I could grab number two. • 04:52 And I could put these two things in individually so • 04:54 I can use different ranges individual or multiple here and I can Enter, of course, • 04:59 that's not the right calculation. • 05:01 I actually need to go back and say let's go and subtract the cost in • 05:08 other to get the proper gross profit, at which point I can now fill this across. • 05:13 Working with the sum function this way is similar to saying this plus this, • 05:18 it's just a little bit different style. • 05:20 And remember, you've got lots of functions here that you can work with, sums, and • 05:23 averages, and counts. • 05:24 They're all built in exactly the same fashion but • 05:28 return the results as named by the specific function. Lesson notes are only available for subscribers.
{"url":"https://www.goskills.com/Course/Excel-2019/Lesson/541/Function-Anatomy","timestamp":"2024-11-10T01:44:47Z","content_type":"text/html","content_length":"90518","record_id":"<urn:uuid:e3ae336a-2dae-4405-8365-b1314310650d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00012.warc.gz"}
Explanation of the many names for types of phylogenetic networks Two types of phylogenetic network are commonly recognized, although there can be gradations between the two extremes. These go by many different names, which inevitably leads to some confusion on the part of users. Some of the names are listed here, along with an explanation of what the terminology is intended to convey. The terms are arranged in pairs, indicating the two different types of network. The "network" part of the name is assumed in each case unless indicated otherwise. Type 1 Type 2 1. Affinity Genealogical 2. Data-display Reticulogeny 3. Implicit Explicit 4. Directed Undirected 5. Rooted Unrooted 6. Splits graph Augmented tree, Reconciliation, Recombination, This reflects the biologists' perspective, describing the different purposes for which networks have been used. Affinity networks display overall similarity relationships among the organisms, whereas genealogical networks display only historical relationships of ancestry. This reflects the assumptions used for the data analysis. Data-display networks are interpreted solely as visualizations of the patterns of variation in the data, while the reticulogenies are based on some inferences about those data patterns (such as their possible cause). Some network types, such as Reduced Median Networks and Median-Joining Networks, are based on algorithms that make partial inferences from the data. Data-display networks have mainly been used as affinity networks and reticulogenies as genealogical networks. This reflects the computational perspective, describing the goal of the algorithm used to analyze the data. Explicit networks are intended to provide a phylogeny in the traditional sense used for phylogenetic trees, displaying both vertical and horizontal patterns of descent with modification. Implicit networks provide information that can be used to explore phylogenetic patterns in a dataset without any direct interpretation as necessarily showing a phylogeny. Implicit networks have mainly been used as data-display networks and explicit networks as reticulogenies. This reflects the mathematical interpretation of networks as line graphs. In a directed graph the edges have a direction, usually indicated by an arrow, in which case the edges are more correctly referred to as arcs. Undirected graphs do not have directed edges. This reflects the tree-thinking view of phylogenetic networks, in which directed graphs are called rooted trees and undirected graphs are called unrooted trees. Rooted networks are usually treated as explicit networks and are thus used as genealogical networks, although there is no reason why they could not be used simply as a convenient form of data display. This reflects the modelling approach to network analysis based on mathematical structures. Splits graphs model phylogenetic patterns as bipartitions of the data, and build the network from those partitions (the result will be a tree if there are no incompatible bipartitions). Augmented trees are essentially trees with a few added reticulation edges / arcs, while reconciliation networks are based on reconciling the differences between trees. Recombination networks are based on analyzing data patterns in terms of a simple model of genetic cross-over, while hybridization networks model the data in terms of patterns in conflicting trees. So, there are reasons why so many different terms have appeared in the literature. Unfortunately, they are not always used consistently with the meaning that was originally intended.
{"url":"https://phylonetworks.blogspot.com/2012/11/explanation-of-names-for-phylogenetic.html","timestamp":"2024-11-12T15:34:48Z","content_type":"text/html","content_length":"130779","record_id":"<urn:uuid:44e299ea-84c7-4633-ab9a-bb4423c11ec2>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00307.warc.gz"}
Commutative Property of Multiplication Commutative Property of Multiplication and Addition/Preservation 7th Grade Properties of Addition and Multiplication 6th Grade Milestone Review - Properties Commutative and Associative Property Properties of Addition and Multiplication Math Properties Vocabulary Properties of Addition and Multiplication Expressions and Equations Vocabulary Identifying Math Properties Explore Commutative Property of Multiplication Worksheets by Grades Explore Other Subject Worksheets for grade 6 Explore printable Commutative Property of Multiplication worksheets for 6th Grade Commutative Property of Multiplication worksheets for Grade 6 are essential resources for teachers looking to enhance their students' understanding of this fundamental math concept. These worksheets provide a variety of problems and exercises that challenge students to apply the Commutative Property of Multiplication in different contexts, helping them develop a strong foundation in math. By incorporating these worksheets into their lesson plans, teachers can ensure that their Grade 6 students grasp the importance of this property and its application in real-life situations. Furthermore, these worksheets also cover other Properties of Multiplication, such as the Associative and Distributive properties, providing a comprehensive learning experience for students. In conclusion, Commutative Property of Multiplication worksheets for Grade 6 are invaluable tools for teachers who aim to foster a deep understanding of multiplication and its properties in their students. Quizizz is an excellent platform for teachers to access a wide range of resources, including Commutative Property of Multiplication worksheets for Grade 6, as well as other math-related materials. This platform offers interactive quizzes, engaging games, and customizable worksheets that cater to different learning styles and abilities, making it an ideal resource for teachers to supplement their lesson plans. In addition to multiplication worksheets, Quizizz also provides resources for other math topics, such as fractions, decimals, and geometry, ensuring that teachers have access to a comprehensive collection of materials to support their students' learning. By incorporating Quizizz into their teaching strategies, educators can create a dynamic and engaging learning environment that not only reinforces key math concepts but also fosters a love for learning in their Grade 6 students.
{"url":"https://quizizz.com/en-us/commutative-property-of-multiplication-worksheets-grade-6","timestamp":"2024-11-04T23:35:58Z","content_type":"text/html","content_length":"150935","record_id":"<urn:uuid:aae037a9-dfb1-4578-9c4b-d63aadf55ba7>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00893.warc.gz"}
The phenomenon of phase noise generation in oscillator/VCOs has been the main focus of important research efforts and is still an open issue, despite significant gains in practical experience and modern CAD tools for design. In the design of oscillator/VCOs, minimization of the phase noise is the prime task and this objective has been accomplished using empirical rules. Therefore, the predictive power of the model is limited.^1–5 The phase noise is a critical figure-of-merit because it affects the dynamic range, selectivity and sensitivity of a receiver.^2–4 The ability to achieve minimum phase noise performance is paramount in most RF and MW designs, and the continued minimization of phase noise in oscillator/ VCOs is required for the efficient use of the frequency spectrum. This article presents an analytical approach for noise minimization techniques in terms of the oscillator circuit component parameters, leading to minimum phase noise for a given class of VCO Oscillator Theory Fig. 1 Colpitts oscillator with base lead inductance and package capacitance using a three-terminal active device. Figure 1 shows a simplified Y matrix approach to describe a typical oscillator circuit and the flow chart on how to convert two-port S-parameters to a three-port configuration, using a three-terminal active device. The expression for the input impedance of the oscillator circuit shown is given by Y[ij] (i,j=1,2) = Y-parameters L[p] = base-lead inductance C[p] = base-emitter package capacitance of the BJT From Equations 1 and 2, the base-lead inductance makes the input capacitance appear larger and the negative resistance appears smaller. The equivalent negative resistance R[NEQ] and capacitance C[EQ] can be defined as^4 f[0] = oscillator resonance frequency The performance of an oscillator can be evaluated by the figure-of-merit (FOM) and can be described by Fig. 2 Frequency spectrum of ideal and real oscillators (a) and jitter in time domain relating to phase noise in the frequency domain (b). The first and third terms of Equation 11 represent the contributions of phase noise and power consumption (P[DC]) to FOM, respectively. From Equation 11, the phase noise for a given offset has a greater impact on FOM than the power consumption does for a given oscillator frequency f[0]. From Equation 3, the degree to which an oscillator generates a constant frequency f[0] throughout a specified period of time is defined as the frequency stability of the signal source. The frequency instability, due to the presence of the noise in the oscillator circuit, modulates the signal, causing a change in frequency spectrum commonly known as phase noise. Figure 2 illustrates the frequency spectra of ideal and real oscillators and the frequency fluctuation corresponding to jitter in the time domain, which is a random perturbation of the zero crossing of a periodic signal. Phase noise and timing jitter are both measures of uncertainty in the output of an oscillator. Phase noise defines the frequency domain uncertainty of an oscillator, whereas timing jitter is a measure of oscillator uncertainty in the time domain. The equation for an ideal and real oscillator in the time domain is given by^4 where A[0], A(t), φ[0], φ(t) and f[0] are the fixed amplitude, time variable-amplitude, fixed phase, time variable-phase and free running frequency of the oscillator. From Equations 5 and 6, the fluctuations introduced by A(t) and φ(t) are functions of time and lead to sidebands around the center frequency f[0], giving a direct relationship between phase noise and the spectral output of the oscillator. The phase noise is defined in terms of the noise spectral density, in unit of decibels below the carrier per Hertz and is given by From Equation 7, the expression for the phase noise is given by^16 where [m]), f[m], f[0], f[c], Q[L], Q[0], F, k, T, P[o], R and K[0] are the ratio of the sideband power in a 1 Hz bandwidth at f[m] to total power in dB, offset frequency, flicker corner frequency, loaded Q, unloaded Q, noise factor, Boltzman’s constant, temperature in Kelvins, average output power, equivalent noise resistance of tuning diode and voltage gain. Fig. 3 Equivalent circuit of a Colpitts oscillator with noise sources. From Equation 8, the phase noise performance depends on the noise factor F of the oscillator circuit for a given resonator network and oscillator/VCO topology; therefore, optimization of the noise factor will lead to the minimization of the phase noise. Figure 3 shows the equivalent circuit of a Colpitts oscillator for the purpose of the noise factor analysis.^4,16 The predictive power of Equation 8 is limited due to the parameter noise factor F, which is not known a priori. The approximate expression of the noise factor F in terms of the oscillator feedback components (C[1] and C[2]) for the circuit shown is given by^4 From Equations 3 and 4, the free running frequency f[0] of the oscillator circuit is given by^4 Fig. 4 Noise figure vs. frequency as a function of C[1] with C[2] = 2.2 pF. With the transistor (Q) NEC68830, C[P] = 1.1 pF, C[1] = C[P] = 3.3 pF, C[2] = 2.2 pF, C[c] = 0.4 pF, R[PR] = 18000, C[PR] = 4.7 pF, L[PR] = 5 nH; the free running frequency is calculated from Equation 10 as f[0] With Y[e] = 0.9 Ω at 28 mA, ß = 100, f = 1 GHz, f[T] = 10 GHz; the noise factor F is calculated from Equation 6, as F = 104.7 [10](F) = 20.18 dB. Fig. 5 Noise figure vs. frequency as a function of C[2] with C[1] = 3.3 pF. Figures 4 and 5 illustrate the dependency of the noise figure F (dB) on feedback capacitors C[1] and C[1]. From Equation 8, the phase noise of the oscillator circuit can be optimized by optimizing the noise factor terms as given in Equation 9, with respect to the feedback capacitors C[1] and C[2]. For the example circuit shown, the output power = 13 dBm, C[1] = 3.3 pF, C[2] = 2.2 pF, Y With Y[0] = 1000, Q[L] = 380, F = 20 dB (calculated from Equation 9). From Equations 8, 9 and 10, the calculated phase noise plot for the circuit is shown in Figure 6, which closely agrees with the simulated (Ansoft Designer) phase noise plot within the variation of 3 dB, as shown in Figure 7. From Equation 8, Fig. 6 Calculated phase noise for the Colpitts oscillator. m = ratio between the loaded and unloaded Qs From Equations 8 and 11, the minimum phase noise can be found by differentiating Equation 11 with respect to m, and equating to zero for maxima and minima as^4 Fig. 7 Simulated phase noise of the Colpitts oscillator. Figure 8 shows the typical phase noise plot at 10 kHz offset with respect to m for the 1 GHz oscillator circuit. For different values of the noise figure F (F[3] > F[2] > F[1]), the phase noise is minimum at m[opt], and the plot is typically like a bathtub curve, which is shifted symmetrically about m[opt]. Fig. 8 Phase noise vs. m at 10 kHz offset for different values of F. This implies that for low noise wideband application, the value of m should be dynamically controlled over the tuning range and should lie in the vicinity of m[opt] for minimum phase noise From Equation 9, the circuit topology and the resonator are selected in such a way that the feedback parameters (C[1] and C[2]) are dynamically tuned for minimum noise figure (F), and m = 0.5 over the desired tuning range. From Equation 8, the phase noise of the oscillator circuit can be described exclusively in terms of the prior known circuit parameters as^16 Fig. 9 Schematic of a high Q resonator-based 1 GHz Colpitts VCO.14 where y+[21], y+[11] are the large-signal [Y] parameters of the active device, K[f] is the flicker noise coefficient, AF is the flicker noise exponent, R[L] is the equivalent loss resistance of the tuned resonator circuit, I[c] is the RF collector current, I[b] is the RF base current, V[cc] is the RF collector voltage, C[1], C[2] are the feedback capacitors, and p and q are constants depending upon the drive level across the base-emitter of the device.^16 To test the validity of the noise models, a 1 GHz Colpitts VCO was built, which is shown in Figure 9. Figure 10 shows the measured plot of the phase noise, which is in good agreement within 2 to 3 dB with the calculated and simulated results shown previously. Fig. 10 Measured phase noise of the fabricated VCO. Noise Impedance Matching Minimizing of the noise level can be done by a noise impedance matching analog to power matching by means of a transformer. For high frequency oscillation, noise impedance matching using a transformer winding is practically limited. The other alternative is to match the impedance by incorporating a capacitive tapping factor n of the resonator network for optimum noise impedance level. Here, the tapping factor n is analogous with the conventional transformer-winding ratio. Figure 11 shows the equivalent representation of a capacitively tapped series resonator network. As shown, capacitive tapping increases the impedance level at the terminals of the resonator network, which is required for the impedance matching for minimum noise factor, thereby improving the phase noise performance. However, the tapping mechanism introduces an additional parallel capacitance, C[resonator] and C[tap], which yields an unwanted mode of oscillation. Fig. 11 Equivalent representation of a capacitively tapped series resonator. Care must be taken to avoid the unwanted parasitic mode of oscillations, which otherwise degrades the loaded quality factor of the resonator when the parallel parasitic resonance is relatively close to the fundamental series resonance. The fundamental resonance frequency and transformed resonator impedance can be described by From Equation 15, the transformed resonator impedance [Z(ω[0])] depends upon the tapping factor n, and can be optimized for maximum signal-to-noise ratio to minimize the phase noise. Due to the From Equation 19, as the tapping factor n increases, the parallel unwanted parasitic resonance mode tends to shift towards the fundamental series resonance mode. The effective quality factor Q of the resonator decreases due to the tapping for noise impedance matching,^4 and can be described by From Equation 20, when the tapping factor n is small, the degradation of the quality factor is negligible. The parasitic mode of the frequency can be given by To prevent the unwanted parasitic mode of resonance, the tapped resonator should be compensated by the negative resistance and the negative capacitance. The negative resistance will compensate the loss resistance n^2R[loss], and the negative capacitance cancels the effect of the positive the C[resonator] and C[tap]. By proper selection of an optimum tapping factor n[opt], which depends on the loss resistance of the resonator and the active device (BJT/FET) parameters (especially the base resistance of the bipolar transistor), the noise impedance matching can be done for improved phase noise performance. An oscillator circuit can support more than one resonant mode (unwanted parasitic oscillations due to the bonding wire inductance L[p]), which can be described by the admittance equation as From Equation 22, the fundamental parallel mode of oscillation is given by the parallel combination of L and [1/jω (C[in]+C)], but there is a second parasitic mode associated with [1/jωC[in]] in parallel with [jωL[p]+ jωL] and [1/jωC], which is due to the parasitic bonding wire inductance L[p]. The parasitic oscillation mode can be overcome by incorporating a resistor R[s] to L[b], which will damp the spurious parasitic oscillation mode and has negligible effect on the fundamental resonance mode. However, care needs to be taken in the design, since a large value of Rs increases the noise factor, thereby degrading the overall noise performance of the Colpitts oscillator circuits. Fig. 12 Typical Colpitts oscillator circuit with a noise filtering network. Noise Filtering Figure 12 shows the noise-filtering network at the emitter bias current (I[e]) in a typical Colpitts oscillator circuit. The feedback capacitor C[2] should remain unaffected by the insertion of the filter, which means that an additional capacitance C[f] may be required to cancel the inductor reactance L[f] at the fundamental oscillation frequency. The single-ended bipolar transistor circuit in which filter inductor L[f] tunes the parasitic capacitance to the oscillation frequency can serve this purpose. Simulation CAD and measured data confirm the improvement by 3 to 6 dB of the phase Optimum Transconductance (gm) There are mainly two noise sources that mostly contribute to the phase noise: thermal noise (broadband noise) and flicker noise (low frequency noise). Flicker noise up-conversion is related to the symmetry of a signal waveform and can be reduced by designing the signal swings symmetrically. Fig. 13 Typical plot of phase noise vs. transconductance. The active device in the oscillator circuit generates the negative conductance to compensate for the loss in the resonator network in order to sustain a steady-state oscillation, thereby generating a broadband thermal noise proportional to the negative transconductance of the device. If the negative transconductance is very small, then it does not support steady-state oscillation, whereas, if it is very large, it generates an excess thermal noise that may increase the oscillator phase noise drastically. Therefore, the transconductance of the device should be optimized in order to maintain stable oscillation without introducing excessive noise. Figure 13 shows a typical plot of the phase noise versus device transconductance. The oscillator starts oscillating when the transconductance reaches g[m(min)] and it is just sufficient enough to compensate the loss in the resonator tank. As the transconductance increases from g[m(min)], the phase noise decreases till it reaches g[m(opt)]. Any further increase in transconductance creates a counter effect and the thermal noise in the active device increases and follows the transconductance curve. Therefore, corresponding to g[m(opt)], the phase noise reaches the minimum point and, after that, the increase in oscillation amplitude is completely nullified by the increase in thermal noise. After crossing the minimum point, the phase noise increases as the signal amplitude is limited to the supply voltage, while the thermal noise continuously increases with the increase in conductance. Therefore, for a given oscillator topology, there exists an optimum transconductance for the minimum phase noise. Fig. 14 Two zones: Voltage limited and inductance limited. Optimum Inductance (L) As shown in Figure 14, two modes of operation exist for an LC oscillator, namely current and voltage regimes. Considering the bias current as an independent variable, the voltage across the resonator network can be described by In the current-limited zone, the resonator tank amplitude V[resonator] linearly increases the bias current according to the relationship, until the oscillator enters the voltage-limited zone, whereas, in the voltage-limited zone, the amplitude is limited to the V[threshold], which can be determined by the available supply voltage. The equivalence of the current and inductance-limited zone can be combined to determine the relation between E[resonator] and I[bias] in the inductance-limited zone. The noise-to-carrier ratio can be given as From Equation 31, the noise-to-carrier ratio remains constant in the L-limited zone and does not depend on the value of the inductor. However, once the oscillator enters the voltage limited zone, the noise-to-carrier ratio increases with L. Therefore, selecting an L, which transfers the oscillator in the voltage-limited zone, yields a waste of L and increases the noise. For a given energy E [resonator], a larger V[resonator] obtained by increasing the L does not offer a better noise performance since the oscillator has a similar response to both the E[resonator] and the thermal energy. Self-injection Mechanism Fig. 15 Self-injected coupled oscillator. Minimizing of the noise can be done by employing a self-injection locking mechanism in a coupled oscillator, which is a cost-effective and power-efficient alternative and has recently emerged as a strong contender for low noise signal sources in modern wireless communication systems.^2–4 Figure 15 shows the second-harmonic self-injected coupled oscillator topology, which consists of a cross-coupled pair (Q[2]–Q[3]), a current source (Q[1]), a power splitter, and a tunable delay path containing a delay-line cable and a tunable phase shifter.^27 Figure 16 shows the simplified oscillator model consisting of an LC tank, a conductance (G[t]) representing the tank loss, a feedback signal V[f](t) and the mildly nonlinear transconductance (g[m1] to g[m3]). For the self-injected coupled oscillator, part of the output signal feeds back to the current source. The current source with the mildly nonlinear transconductance (g[m1]) transforms the feedback signal V[f](t) to a larger current format [I[f](t) = g[m1]V[f](t)]. With the equivalent parallel resistance of the tank R[eq] for the second-harmonic (2f[0]), the feedback signal amplitude crossing the tank (V[inj]) is produced [V[inj](t) = I[f](t).R[eq]]. The expression of the phase fluctuation (phase noise) of the self-injected coupled oscillator is^27 Fig. 16 Simplified model of the self-injected coupled oscillator. For (θ[f]–θ[1]) ⇒ 2nπ Equation 32 can be given by From Equation 33, the noise can be further minimized by increasing the transconductance (g[m1]) of the current source, increasing the equivalent parallel-resistance of the tank (R[L]) for the fundamental signal (f[0]), reducing the amplitude imbalance (Δ Fig. 17 Measured phase noise plots of 2488 MHz oscillators using a CPR configuration (two resonators) and uncoupled resonator oscillator (one resonator). Examples: Low Noise Oscillators Coupled Planar Resonator-based 2488 MHz Oscillator The following example describes the use of coupled planar resonator (CPR) and noise minimization techniques as discussed in the previous section for a high performance, low noise, high quality microwave source. A CPR-based 2488 MHz VCO was designed and fabricated on a 0.35" x 0.35" x 0.16" substrate and experimental results have validated the novel techniques proposed in this work. Figure 17 shows the phase noise plot of the 2488 MHz VCO using CPR resonators in a hybrid medium (transverse coupling between stripline and microstrip line coupled resonators, PCB: six-layer board with Rogers substrate) and the operating bias conditions are V[cc] = 5 V, I[c] = 15 mA. The measured phase noise plot of the oscillator minimizes the noise and shows a 7 dB reduction in phase noise, with respect to the uncoupled planar resonator-based oscillator with a typical power output of 5 dBm (minimum) and 30 dB harmonic rejections. Fig. 18 Measured phase noise of 622/2488/4200 MHz VCOs using CPR. For the validation of the approach, 622, 2488 and 4200 MHz VCOs were designed and fabricated, where the resonator is self-injection locked and tuned to their respective fundamental frequencies (without frequency multiplication). Figure 18 shows the phase noise plot for comparative analysis of 622, 2488 and 4200 MHz VCOs using CPR and noise reduction techniques. Power Efficient and Low Microphonics VCOs An object of this research work is to provide a cost-effective solution to solve the problem of microphonics and conversion efficiency by using stubs tuned planar-coupled resonators (STPCR) in a stripline medium (since they are self-shielding due to their dual ground plane) for low noise signal sources, which can replace a low power oscillator followed by an amplifier, in order to reduce the size and cost of the wireless communication systems.^3,14,15,19 DC-to-RF conversion efficiency is related to the fundamental signal RF output power and DC power consumption, which can be described by Fig. 19 Schematic of a self-coupled, shorted-stubs resonator oscillator (patented). η[efficiency] = DC-to-RF conversion efficiency P(ω[0]) = RF output power of the fundamental signal P[DC] = DC power consumption For higher conversion efficiency, the oscillator circuit topology should be such that it operates at low DC power and at the same time produces high RF output power at the desired fundamental frequency. The RF output power for a typical oscillator circuit can be described in terms of the higher order harmonics as Fig. 20 Simulated phase noise of the self-coupled, shorted-stubs resonator oscillator. where V[1], I[1], V[2] I[2] and V[n], I[n] are the amplitudes of the voltage and currents of the fundamental, second and nth harmonic components, respectively; the angles θ[1], θ[2] and θ[n] are the phase angles between the voltage and the current of the respective harmonic components present at the output node of the oscillator circuit. For a high value of η[efficiency], other higher order harmonics must be suppressed; otherwise, they will degrade the conversion efficiency of the generated fundamental signal tone (ω[0]) from the given input DC power (V[DC] x I[DC]). Fig. 21 Measured phase noise of the STPCR VCO. Figure 19 shows the schematic of a typical 2488 MHz STPCR oscillator circuit, where the RF output is extracted from three different nodes (Nodes 1, 2 and 3) for comparative analysis of the DC-to-RF conversion efficiency and phase noise performances. Figure 20 shows the simulated (CAD: Ansoft Designer Nexxim V3) phase noise plot for the oscillator circuit, which shows that the RF output extracted through Node 3 ultimately offers the best phase noise performance. As depicted, Node 3 gives a higher level of second-order harmonic rejections (45 dB) in comparison to Node 2 (30 dB) and Node 1 (15 dB). However, Node 2 offers higher efficiency (40 percent) in comparison to Node 1 (10 percent) and Node 3 (20 percent); therefore, there is trade-off between phase noise and harmonic rejection based on the applications. Figure 21 shows the phase noise plot of the STPCR-based, high spectral pure signal source at 2488 MHz in accordance with the present novel techniques (patent-pending), which can be tuned (user-defined frequency) and where the frequency can be extended without changing the dimensions of the stub-tuned resonators (stripline domain PCB: six-layer). The design is based on an innovative topology, which supports the fast convergence by dynamically tuning the noise impedance transfer function of the resonating network and the negative resistance generating device for optimum noise performance over the tuning range. The measured phase noise for a 2488 MHz carrier frequency is typically –128 dBc/Hz at 10 kHz offset from the carrier with 40 percent DC-to-RF conversion efficiency. The measured RF output power at the fundamental frequency f[0] is typically 15 dBm for a given operating DC bias condition (V[DC] = 5 V, I[DC] = 15 mA). Fig. 22 Schematic of a coupled oscillator self-injection locked VCO. Coupled Oscillators Self-injection Locked Wideband VCO Figure 22 shows the schematic of the configurable signal source by using the coupled oscillator self-injection locked mechanism and noise reduction techniques discussed previously. The circuit works at 5 V and 32 mA and the tuning voltage is 0 to 28 V. The typical RF output power is 5 dBm over the tuning range and sub-harmonic rejection is better than 20 dB. Figure 23 shows the measured phase noise plot of the configurable signal source, which is better than –105 dBc/Hz at 10 kHz offset from the carrier for the frequency band. Fig. 23 Phase noise of the dual-band coupled oscillator self-injection locked VCO. Active Resonator (AR)-based Low Noise Oscillator Normally, in the AR topology, the CPR is coupled to the negative resistance generating device network so that, in principle, an AR element similar to the general oscillator is being created. A general oscillator needs both the amplitude and phase conditions to be satisfied for oscillation build up at f[0]. In the case of the AR, only the phase condition for oscillation build up at f[0] is required for stable and sustained oscillations and no amplitude condition is required to compensate for the loss of the AR from the active device network.^28 As shown in Figure 24, the oscillations will not build up in AR and growth is restricted; therefore, an active amplifier can work in the small-signal linear regime. The gain and power of the amplifier added to the circuit will compensate the inner losses of the AR circuits, and full compensation (–|G[n]| + G = 0) of W (energy losses) will result in infinite unloaded Q and improved loaded Q when coupled to a transmission line or equivalent oscillator circuit. AR based on a negative resistance approach offers improved Q factors, but they have drawbacks: the schematic is complex and must have a feedback element and matching networks to produce the negative conductance |–G[n]|, sensitive to spurious oscillation (if the oscillation start-up condition is satisfied). Fig. 24 Active resonator with feedback arrangement. A normal oscillator requires the amplitude and phase condition to be satisfied for guaranteed and sustained oscillation build up at the desired frequency, whereas, for an active resonator element, only the phase condition needs to be satisfied. Hence, the oscillation will not build up across the active resonator and, therefore, the active resonator module can work in the small-signal regime (instead of the large-signal regime condition required for sustained and guaranteed oscillations). Moreover, the negative resistance, added to the active resonator circuit, will reduce the intrinsic losses of the passive resonators used as active resonators. This approach yields high Q resonators; however, active resonator elements are sensitive to spurious oscillations that may cause an unwanted oscillation mode in the event of satisfying the start-up oscillation condition. Since the conventional planar microstripline resonator itself is a lossy element, the unloaded Q factor is low and finite. Moreover, coupling the planar resonator to the external circuits (oscillator, filter, diplexer, etc.) results in loosing a finite amount of energy due to the coupling and other mechanism, thereby resulting in further degradation in the loaded Q factor. In addition, the excitation of other higher order oscillation modes across the resonators increases the resistive loss, which has to be compensated by the active resonator topology for low phase noise performance. Fig. 25 Measured Q of planar resonators. Figure 25 illustrates the measured Q of the typical planar-coupled resonators (uncoupled, coupled, ACPR) for the purpose of comparative analysis. Figure 26 shows the block diagram of an APCR (active planar-coupled resonator) VCO, which is based on a novel topology that supports minimum phase hits and broadband tunability, to compensating for the frequency drift due to temperature and aging, in a compact size and also amenable for integration in current IC technology. To overcome these problems, the active resonator is realized by incorporating an injection mechanism based on a feedback approach that can be dynamically controlled over the desired frequency band. By adjusting the feedback factor of the negative resistance generating circuit, the optimum value of the negative resistance to compensate for the loss of the CPR can be achieved. In this way, the conduction angle, the injection level and the group delay can be optimized towards the steepest phase characteristic curve for a given resonance condition across the active resonators. This condition leads to the operation of the APCR oscillator circuit in the vicinity of the evanescent domain. Hence, an improved group delay and phase characteristic curve are obtained, thereby increasing the effective dynamic loaded Q by many folds, resulting to low phase noise. Fig. 26 Block diagram of the APCR VCO (patent pending). The layout of the APCR VCO is a six-layer board, fabricated on a 64 mil thick Rogers substrate of dielectric constant 3.38 and loss tangent 2.7(10^–4). The choice of substrate depends on size, higher order modes, surface wave effects, implementations (couplings, line length, width, spacing and spacing tolerances), dielectric loss, temperature stability and power handling (dielectric strength and thermal conductivity). The APCR circuit works at 5 V and 25 mA, with an output power of 2 dBm, and second-harmonic rejection is better than -20 dBc. Unfortunately, each development design of a VCO, using APCR technology, has its price, since they occupy larger PCB area and, for the same space, exhibit much lower Qs compared to a high Q, CRO/SAW resonator. For the most part, these disadvantages have been overcome by means of a mode coupling approach, which acts as a Q-multiplier, and minimization of noise over the band is achieved by incorporating a noise-filtering network, a noise cancellation network, a phase compensating network and a noise feedback bias circuit.^16 Figure 27 shows the measured phase noise plot of the APCR VCOs at 2560 MHz with a 1 percent tuning range. Figure 28 shows the temperature and frequency drift profile of commercially available CROs and the new APCR VCOs (this work) for the purpose of the comparative analysis about the thermal drift profile and frequency tuning range (Δf). As depicted, the APCR VCO offers broadband tunability, extended operating temperature range and overcome the limiting performance of the frequency drift due to temperature and component tolerances. Fig. 27 Measured phase noise of an APCR VCO at 2560 MHz. Fig. 28 Measured thermal drift and frequency tuning range of commercially available CROs and the new APCR VCO. With regard to the state-of-the-art of the configurable signal source, this novel approach provides a general concept of reducing the noise over the frequency bands; it can help to avoid pitfalls that can increase the time required to achieve minimum phase noise over the band, and offers a promising alternative for high Q planar resonators in the context of a planar fabrication process, compatible with existing IC and MMIC processes.
{"url":"https://synergymwave.com/articles/2007/09/","timestamp":"2024-11-06T14:43:18Z","content_type":"text/html","content_length":"68959","record_id":"<urn:uuid:c145ede5-3455-46c1-8b59-a7e3ff2ed06b>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00785.warc.gz"}
Operators: - Minus operator. Unary form: Binary form: In its first, unary version the - operator expects one signed integer, unsigned integer, floating-point or point operand and results in the value of this operand with its sign inverted. Precisely, an operand with a positive value will become negative and a negative operand will become positive. Applied on a signed int8, int16, int32 or unsigned uint8, uint16 integer operand, the operation always results in an int32 value. If the operand is int64, the operation results consequently in int64. Exceptions occur in context of uint32 or uint64 operands. In this case the resulting data type will remain unchanged unsigned uint32 or uint64 and an adequate warning is reported. Applied on a floating-point operand, the operation always results in a float value. Finally, when applied on a point operand, the operation results in a point value. Please note, performing the operation on a point operand, negates the x and y point elements individually. For example: var float a = -1369.1251; var uint32 b = 4032; var uint64 c = 15746432301; var point d = <100,-200>; var float result = -a; // result = 1369.1251 var int32 result = -b; // result = -4032 var int64 result = -c; // result = -15746432301 var point result = -d; // result = <-100,200> In its second version, the - operator calculates an arithmetic difference between the left and the right operand. If used in combination with 8-, 16- or 32-bit signed integer operands, the data type resulting from the operation is consequently int32. When mixing signed and unsigned integer operands, the operation results in the unsigned uint32 value. In the case, one of the operands is 64-bit large, the resulting data type is int64 or uint64 according to whether the 64-bit operand is signed or unsigned. For example: var int32 a = 1369; var int32 b = -1496; var uint32 c = 1369; var uint32 d = 1251; var uint64 e = 897546641189; var int64 f = -149613691251; var int32 result = a - b; // result = 2865 var uint32 result = c - d; // result = 118 var uint32 result = b - d; // result = 0xFFFFF545 var int64 result = f - c; // result = -149613692620 var uint64 result = e - a; // result = 897546639820 Floating point subtraction In its third version, the - operator calculates an arithmetic difference between the left and the right floating-point operand. If used in combination with a signed or unsigned integer operand, the integer operand is automatically converted to float before performing the subtraction. The resulting data type of the operation is always a float. For example: var float a = 1369.1251; var float b = -1496.158; var int32 c = 260; var float result = a - b; // result = 2865.283203 var float result = b - c; // result = -1756.157958 In its fourth version, the - operator calculates an arithmetic difference between the left and the right point operand. The operation is performed individually for the x and y point elements. The subtraction of two point operands can be considered as a translation of the point in the left operand by a negative offset specified in the right operand. The resulting data type of the operation is always a point. For example: var point a = <100,200>; var point b = <50,70>; var point result = a - b; // result = <50,130> Rectangle negative displacement In its fifth version, the - operator translates the left rectangle operand by an offset specified in the right point operand. During the operation, the x and y point elements are subtracted from the corresponding coordinates of the rectangle's top-left and bottom-right corners. The resulting data type of the operation is always a rect. For example: var rect a = <100,200,110,220>; var point b = <50,70>; var rect result = a - b; // result = <50,130,60,150> var rect result = b - a; // This operand combination is not allowed. Chora error Color subtraction with saturation In its sixth version, the - operator calculates an arithmetic difference between the left and the right color operand. The operation is performed individually for every red, green, blue and alpha color components by respecting the lower limit of the value range 0 .. 255. The resulting data type of the operation is always a color. For example: var color a = #10C050FF; var color b = #225F2200; var color result = a - b; // result = #00612EFF In its seventh version, the - operator calculates an arithmetic difference between the codes of the left and the right character operand. Please note, in Chora all characters are handled as 16-bit UNICODE entities - they are represented as 16-bit UNICODE numbers. The resulting data type of the operation is always a int32. For example: var char a = 'D'; var char b = 'A'; var int32 result = a - b; // result = 3 In its eighth version, the - operator performs an arithmetic difference between a signed or unsigned integer value and the character code stored in a char operand. The result of the operation is again char value. In this way, it is possible to calculate new character codes from given character code and an offset. For example: var char c = 'a'; var char result = c - 32; // result = 'A' In its ninth version, the - operator determines the difference between the left and the right styles operand. Please note, styles operands can be considered as collections containing multiple elements you can individually include or exclude. Thus, the - operation results in a new styles value including all elements available in the left but not in the right operand. The resulting data type of the operation is always styles. For example: var styles a = [ Style1, Style3, Style16 ]; var styles b = [ Style3, Style16 ]; var styles c = [ Style1, Style8 ]; var styles result = a - b; // result = [ Style1 ] var styles result = b - c; // result = [ Style3, Style16 ] In its tenth version, the - operator determines the difference between the left and the right user-defined set operand. Please note, set operands can be considered as collections containing multiple elements you can individually include or exclude. Thus, the - operation results in a new set value including all elements available in the left but not in the right operand. The resulting data type of the operation corresponds to the data type of the operands. For example: var Core::Layout a = Core::Layout[ AlignToTop, ResizeHorz ]; var Core::Layout b = Core::Layout[ ResizeHorz ]; var Core::Layout c = Core::Layout[ AlignToTop, ResizeHorz ]; var Core::Layout result = a - b; // result = Core::Layout[ AlignToTop ] var Core::Layout result = b - c; // result = Core::Layout[]
{"url":"https://doc.embedded-wizard.de/operator-minus?v=12.00","timestamp":"2024-11-07T12:12:34Z","content_type":"text/html","content_length":"35310","record_id":"<urn:uuid:53d0d013-75ad-4237-a9d5-b328991c2db6>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00162.warc.gz"}
Ordinary differential equations and operatorsLecture notes in mathematics (Springer-Verlag) a tribute to F.V. Atkinson : proceedings of a symposium held at Dundee, Scotland, March-July, 1982 Atkinson, F. V Everitt, W. N 1924- Lewis, Roger T Science and Engineering Research Council (Great Britain) Mathematics Committee Berlin New York Springer 1983 xv, 521 p 24 cm edited by W.N. Everitt and R.T. Lewis "Proceedings of the Symposium on Ordinary Differential Equations and Operators"--P. [iii] Sponsored by the Mathematics Committee of the Science and Engineering Research Council One paper in German Contine bibliografie Differential equations Congresses Differential equations, Partial Congresses Operator theory Congresses 510 s 515.3/52 038712702X (U.S. : pbk.) 83020217
{"url":"http://library.imar.ro/cgi-bin/koha/opac-export.pl?op=export&bib=11177&format=mods","timestamp":"2024-11-06T20:27:58Z","content_type":"application/xml","content_length":"2627","record_id":"<urn:uuid:a70d0283-6718-4b31-95b1-20ff9e45822c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00520.warc.gz"}
EViews Help: Specification and Hypothesis Tests Specification and Hypothesis Tests We can use the estimated equation to perform hypothesis tests on the coefficients of the model. For example, to test the hypothesis that the coefficient on the price term is equal to 2, we will perform a Wald test. First, determine the coefficient of interest by selecting from the equation toolbar: Note that the coefficients are assigned in the order that the variables appear in the specification so that the coefficient for the PR term is labeled C(4). To test the restriction on C(4) you should select , and enter the restriction “c(4)=2”. EViews will report the results of the Wald test: The low probability values indicate that the null hypothesis that C(4)=2 is strongly rejected. We should, however, be somewhat cautious of accepting this result without additional analysis. The low value of the Durbin-Watson statistic reported above is indicative of the presence of serial correlation in the residuals of the estimated equation. If uncorrected, serial correlation in the residuals will lead to incorrect estimates of the standard errors, and invalid statistical inference for the coefficients of the equation. The Durbin-Watson statistic can be difficult to interpret. To perform a more general Breusch-Godfrey test for serial correlation in the residuals, select from the equation toolbar, and specify an order of serial correlation to test against. Entering “1” yields a test against first-order serial correlation: The top part of the output presents the test statistics and associated probability values. The test regression used to carry out the test is reported below the statistics. The statistic labeled “Obs*R-squared” is the LM test statistic for the null hypothesis of no serial correlation. The (effectively) zero probability value strongly indicates the presence of serial correlation in the residuals.
{"url":"https://help.eviews.com/content/demo-Specification_and_Hypothesis_Tests.html","timestamp":"2024-11-05T04:17:30Z","content_type":"application/xhtml+xml","content_length":"8779","record_id":"<urn:uuid:5260fa74-0009-4918-8b03-227929ae83ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00193.warc.gz"}
double negation Add spatiality condition to the characterization of $L=L_{egeg}$. diff, v58, current Yes, spatiality is certainly necessary here. Maybe it should be “If $L$ is spatial, then $L=L_{egeg}$ if and only if …”? Yes, it seems that statement had been there since Day 1. I suspect Toby was thinking along roughly the following lines: a topological space $X$ in which every open is regular open is (classically) discrete. If you allow the luxury of $T_1$ spaces, and if $x$ is any non-isolated point, then its set-theoretic complement $U$ has $U \cup \{x\} = X$ as its closure, so $eg eg U = X$. So maybe with a sufficiently generous interpretation of “classical”, the statement is defensible – but I agree the statement is confusing as it stands. The article claims Classically, we have L=L¬¬ if and only if L is the discrete locale on some set S of points. In constructive mathematics, S must also have decidable equality. But any complete Boolean algebra is a frame for which the corresponding locale satisfies L=L_¬¬ because ¬¬=id. There are plenty of nonatomic complete Boolean algebras. That is a nice post, but I have to take issue with classical logic is happy with lack of negative evidence. I would say that what classical logic is happy with is the impossibility of negative evidence, which is rather stronger than the present lack thereof. I have added a sentence mentioning forcing to the Idea-section at double negation, I have added to the References at double negation pointer to Andrej’s exposition: which is really good. I have also added this to double negation transformation, but clearly that entry needs some real references, too. Corrected reference to Caramello ’09. Added reference to Sketches of an Elephant which removes the need for the ad hoc proof of the subsequent Proposition (moved up) that the double negation subtopos is the unique subtopos which is both dense and Boolean. Rearranged the other propositions and made the Sierpinski topos remark into an example. In my opinion, the page contained insufficient background for the proof to be reader-comprehensible anyway, so pointing to a detailed reference is more useful. Here is the removed proof, in case someone wishes to restore it later: Proof It remains to show that (1) and (2) imply that $j=otot$. First note that the dense monos corresponding to $j$ are classified by the subobject classifier $\Omega_j$ of $\mathcal{E}_j$. Since (2) implies that $\Omega_j$ is an internal Boolean algebra, it follows that the dense subobjects of any object $X$ form a Boolean algebra. This Boolean algebra is a reflective sub-poset of the Heyting algebra of all subobjects of $X$, whose reflector is lex, i.e. preserves finite meets. Thus, it will suffice to show that if $B$ is a Boolean algebra that is a lex-reflective sub-poset of a Heyting algebra $H$ and if $0\in B$, then $B = \{ U | U = egeg U \}$. To show this, first note that the Boolean negation in $B$ is the restriction of the Heyting negation in $H$. Thus, Booleanness of $B$ implies $U=egeg U$ for all $U\in B$. Thus, it remains to show that if $U=egeg U$ then $U\in B$. But since $0\in B$ and $B$ is an exponential ideal, by the definition $eg U = (U\Rightarrow 0)$ it follows that $egeg U\in B$ for any $U$. Thus, if $U=egeg U$ then $U\in B$ as well. diff, v60, current Please could someone more familiar with the markdown used on the nLab fix the formatting of my recent edit to make the example less prominent? Have looked at your latest edits, but it’s not easy to see what’s going on. (Am on the last day of a vacation – still just on my phone.) Probably the paragraph on the Sierpinski topos wants to be in an Example-environment. For that just enclose it inside And then not to forget to delete the spurious section header #### Example If you still get stuck, I can try to look into it later this week. Formatting of examples fix diff, v61, current Am still not sure what’s going on, hence what was really intended: The new example (currently still here) 1. sits in between two propositions which look like they want to be directly subsequent, 2. refers to “the above” topos which is probably meant to be $\mathcal{E}_{ot ot}$ but remains ambiguous, 3. refers to a proposition with label “negdense”, which has not been declared. I suggest to edit further such as to • either clarify why this example is in the Properties-subsection right after the proposition that currently precedes it, • or else move it to a separate Examples-subsection (to be created, which may have been the original intention) after the Properties-subsection. diff, v62, current So I have moved the example of the Sierpisnki topos out of the Properties-section into a new Examples-section (now here). In doing so I have replaced the words “the above” with “the double-negation subtopos”, which is probably what was meant (but check). And the broken reference to a Proposition labeled “negdense” I have replaced with “Prop. xy”, so that it’s visible that the reference needs fixing and hoping that this makes somebody go and fix it. diff, v63, current
{"url":"https://nforum.ncatlab.org/discussion/7748/double-negation/","timestamp":"2024-11-13T06:30:48Z","content_type":"application/xhtml+xml","content_length":"44132","record_id":"<urn:uuid:185026a8-0f79-4ffa-942e-b69c2c5d1cbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00425.warc.gz"}
Proposed in [29]. Other individuals include the sparse PCA and PCA that’s | Ack1 Inhibitor Proposed in [29]. Other individuals include the sparse PCA and PCA that’s Proposed in [29]. Others include the sparse PCA and PCA that’s constrained to specific subsets. We adopt the typical PCA for the reason that of its simplicity, representativeness, substantial applications and satisfactory empirical functionality. Partial least squares Partial least squares (PLS) can also be a dimension-reduction technique. Unlike PCA, when constructing linear combinations from the original measurements, it utilizes information in the get JWH-133 survival outcome for the weight at the same time. The common PLS approach can be carried out by constructing orthogonal directions Zm’s using X’s weighted by the strength of SART.S23503 their effects on the outcome and then orthogonalized with respect to the former directions. A lot more detailed discussions and also the algorithm are offered in [28]. In the context of high-dimensional genomic information, Nguyen and Rocke [30] proposed to apply PLS within a two-stage manner. They utilized linear regression for survival data to establish the PLS components after which applied Cox regression on the resulted elements. Bastien [31] later replaced the linear regression step by Cox regression. The comparison of distinctive strategies is usually found in Lambert-Lacroix S and Letue F, unpublished data. Thinking about the computational burden, we choose the strategy that replaces the survival times by the deviance residuals in extracting the PLS directions, which has been shown to have a superb approximation overall performance [32]. We implement it working with R package plsRcox. Least absolute shrinkage and selection operator Least absolute shrinkage and selection operator (Lasso) is actually a penalized `variable selection’ approach. As described in [33], Lasso applies model choice to choose a small quantity of `important’ covariates and achieves parsimony by producing coefficientsthat are exactly zero. The penalized estimate below the Cox proportional hazard model [34, 35] is often written as^ b ?argmaxb ` ? topic to X b s?P Pn ? exactly where ` ??n di bT Xi ?log i? j? Tj ! Ti ‘! T exp Xj ?denotes the log-partial-likelihood ands > 0 is really a tuning parameter. The technique is implemented utilizing R package glmnet in this post. The tuning parameter is selected by cross validation. We take a few (say P) essential covariates with nonzero effects and use them in survival model fitting. You can find a large number of variable selection techniques. We select penalization, since it has been attracting a great deal of attention inside the statistics and bioinformatics literature. Extensive critiques can be identified in [36, 37]. Among all the obtainable penalization approaches, Lasso is maybe probably the most extensively studied and adopted. We note that other penalties such as adaptive Lasso, bridge, SCAD, MCP and others are potentially applicable right here. It can be not our intention to apply and examine a number of penalization solutions. Below the Cox model, the hazard function h jZ?together with the selected characteristics Z ? 1 , . . . ,ZP ?is from the form h jZ??h0 xp T Z? where h0 ?is an unspecified baseline-hazard function, and b ? 1 , . . . ,bP ?may be the unknown IT1t vector of regression coefficients. The chosen characteristics Z ? 1 , . . . ,ZP ?is often the initial handful of PCs from PCA, the first couple of directions from PLS, or the couple of covariates with nonzero effects from Lasso.Model evaluationIn the location of clinical medicine, it can be of terrific interest to evaluate the journal.pone.0169185 predictive power of an individual or composite marker. We concentrate on evaluating the prediction accuracy inside the idea of discrimination, which can be commonly referred to as the `C-statistic’. For binary outcome, popular measu.Proposed in [29]. Other individuals consist of the sparse PCA and PCA that is certainly constrained to certain subsets. We adopt the regular PCA for the reason that of its simplicity, representativeness, substantial applications and satisfactory empirical efficiency. Partial least squares Partial least squares (PLS) is also a dimension-reduction strategy. As opposed to PCA, when constructing linear combinations from the original measurements, it utilizes data from the survival outcome for the weight as well. The standard PLS technique could be carried out by constructing orthogonal directions Zm’s working with X’s weighted by the strength of SART.S23503 their effects on the outcome and then orthogonalized with respect to the former directions. A lot more detailed discussions and also the algorithm are offered in [28]. In the context of high-dimensional genomic information, Nguyen and Rocke [30] proposed to apply PLS in a two-stage manner. They employed linear regression for survival data to determine the PLS elements then applied Cox regression around the resulted elements. Bastien [31] later replaced the linear regression step by Cox regression. The comparison of distinctive approaches could be identified in Lambert-Lacroix S and Letue F, unpublished data. Thinking about the computational burden, we pick the system that replaces the survival instances by the deviance residuals in extracting the PLS directions, which has been shown to possess a fantastic approximation performance [32]. We implement it making use of R package plsRcox. Least absolute shrinkage and selection operator Least absolute shrinkage and selection operator (Lasso) is often a penalized `variable selection’ system. As described in [33], Lasso applies model selection to opt for a smaller number of `important’ covariates and achieves parsimony by generating coefficientsthat are precisely zero. The penalized estimate beneath the Cox proportional hazard model [34, 35] might be written as^ b ?argmaxb ` ? subject to X b s?P Pn ? exactly where ` ??n di bT Xi ?log i? j? Tj ! Ti ‘! T exp Xj ?denotes the log-partial-likelihood ands > 0 is actually a tuning parameter. The technique is implemented applying R package glmnet within this short article. The tuning parameter is selected by cross validation. We take a number of (say P) important covariates with nonzero effects and use them in survival model fitting. You can find a big variety of variable selection methods. We pick penalization, considering the fact that it has been attracting plenty of attention in the statistics and bioinformatics literature. Complete evaluations is usually discovered in [36, 37]. Among all of the readily available penalization procedures, Lasso is maybe probably the most extensively studied and adopted. We note that other penalties which include adaptive Lasso, bridge, SCAD, MCP and other people are potentially applicable here. It truly is not our intention to apply and evaluate various penalization methods. Under the Cox model, the hazard function h jZ?with the selected capabilities Z ? 1 , . . . ,ZP ?is with the kind h jZ??h0 xp T Z? exactly where h0 ?is definitely an unspecified baseline-hazard function, and b ? 1 , . . . ,bP ?could be the unknown vector of regression coefficients. The selected features Z ? 1 , . . . ,ZP ?may be the first couple of PCs from PCA, the initial handful of directions from PLS, or the couple of covariates with nonzero effects from Lasso.Model evaluationIn the area of clinical medicine, it is of excellent interest to evaluate the journal.pone.0169185 predictive power of an individual or composite marker. We focus on evaluating the prediction accuracy in the notion of discrimination, which is normally known as the `C-statistic’. For binary outcome, well-liked measu.
{"url":"https://www.ack1inhibitor.com/2017/12/01/proposed-in-29-other-individuals-include-the-sparse-pca-and-pca-thats/","timestamp":"2024-11-14T10:43:21Z","content_type":"text/html","content_length":"84952","record_id":"<urn:uuid:8271e527-3161-445f-b383-097ed011c013>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00428.warc.gz"}
Approximate Trace Reconstruction: Algorithms We introduce approximate trace reconstruction, a relaxed version of the trace reconstruction problem. Here, instead of learning a binary string perfectly from noisy samples, as in the original trace reconstruction problem, the goal is to output a string that is close in edit distance to the original string using few traces. We present several algorithms that can approximately reconstruct strings that belong to certain classes, where the estimate is within n / polylog (n) edit distance and where we only use polylog (n) traces (or sometimes just a single trace). These classes contain strings that require a linear number of traces for exact reconstruction and that are quite different from a typical random string. From a technical point of view, our algorithms approximately reconstruct consecutive substrings of the unknown string by aligning dense regions of traces and using a run of a suitable length to approximate each region. A full version of this paper is accessible at: https: Publication series Name IEEE International Symposium on Information Theory - Proceedings Volume 2021-July ISSN (Print) 2157-8095 Conference 2021 IEEE International Symposium on Information Theory, ISIT 2021 Country/Territory Australia City Virtual, Melbourne Period 7/12/21 → 7/20/21 All Science Journal Classification (ASJC) codes • Theoretical Computer Science • Information Systems • Modeling and Simulation • Applied Mathematics Dive into the research topics of 'Approximate Trace Reconstruction: Algorithms'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/approximate-trace-reconstruction-algorithms","timestamp":"2024-11-15T00:33:04Z","content_type":"text/html","content_length":"47412","record_id":"<urn:uuid:ce8c8d2e-6fe1-4b94-bc04-42d0ca28e01b>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00345.warc.gz"}
Why 2 bars? 1. many people like to use 2.4, and sometimes if u can't get low angle, and if u use 2.4 u can't get the real angle. 2 bars may be good for this. 2. Again, if u use method like ban pao, bjsl. 2.4's power if u slip 0.1 u gonna miss, and sometimes the shooting power is quite embigious, like 2.4>2.5 ban pao's 2.8 2.95 3.2 , etc. however 2 bars if u slip 0.1 u won't mss so many, why? Becuase 2 bars, every 1 angle is very close, so if u slip a bit u won't miss all (maybe miss 1/4 of your dual) 3. Again, every distance of +1 and -1 angle is very close, so if u miss a bit u won't miss a lot. 4. 2 bars is easy to drag. why? 2 bars = lower power than 2.4 , 3. etc. then u need less time to drag => the drag won't shake and slip + faster delay. 5. it is especially good for armor SS ing. it saves your delay, and sometimes in some very close situation u also can ss (e.g. the middle of the meta mine) 6. good for showing off. Haha, when strong wind with you and u want to shoot -> then u shoot <- and it flies back, what a wonderful sight. 7. Overall-saves delay, real angle, easy to drag, fast! Ok lets stop crap and go into it. here is the picture. Thanks for owwies for giving me a slot to play a game to take screenshot (i don't have any ss in my computer -.- i formatted harddisk) ok i think many people knows 80 2 = 1/4 sd and 70 2.1 is 1/2 sd 3/4 sd = 55 2 bars But if u want to be more detailed, u should use the power bar to measure. The distance of the power bar = half sd = 400 pixel (lets don't talk about the difinition of 1 sd first, lets make it 800 px instead of 780 px , it is easier to measure) Each 1 bar distance = 5 angle. so to be precise, every 0.2 distnace in your power bar = 1 angle (every 20 pixel). Then u get the distance. then subsititude into the formula shooting angle = 90 - distance -/+ wind x wind strength (perhaps everyone knows this, haha sorry for crapping) Note: 2 bars is not really all 2 bars, why? Because angle/distance is not proportional if u use a same power, u need some adjustment. for 80-90 angle use 2 bars. 75-80 use 2.05 70-75 use 2.1 P.S. use this method lowest to 70 angle only ,becuase after half sd angle so low then the angle won't be very proportional to your distance. So after angle 70, don't use. the distance won't be very proportional to the angle. Wind adjustment : zZzz i m so sleepy it is 4:55 Am now so i gonna type very fast and briefly. Angle 80-90 use ban pao 1/2 sd wind chart 70-80 use 1sd ban pao Why use 1/2 sd and 1 sd wind chart? Becuase wind affection to angle is not base on distnace, it is base on the angle, ban pao half sd = 80-90 1 sd = 70-80 angle, that is what i discovered. Angle 70-80 use ban pao 1 sd Some example : if the enemy is260 pixel far away from you 0 wind. 260 pixel = 2.6/4 in your power bar (1 power bar has 400 pixel, every1 bar = 100 pixel) =13 angle distance. then 90 - 13 = 77, so u use 77 2.05 will hit your enemy. example 2 : wind 10 exactly left/right (wind with you)enemy 120 pixel (1.2 bars) away from you then distance = 6 angles due to the banpao wind chart, we know that 10 wind affect 6 angles on that distance then 90 - distance + wind x wind strength = 90 - 6 + 6 = 90 use 90 2 bars will hit your enemy. example 3. Wind 25, at bearing 270 wind with you. enemy is 100 pixel (1 bar far away) 100 pixel = 5 angles distance. due to the ban pao wind chart, 25 wind need to x 0.65, then 25 x 0.65 = 16.25 ~16 then 90 - distance + wind x wind strength = 90 - 5 + 16 = 101 omg, 101??? haha, use your brain do other side (101-90) = 79 2 bars, will hit your enemy with very beautiful curve. You will get backshot bonus and "wow" and "vns" for your teammate hahahaha. some useless discovery : from the left of "all" to the end of the power bar = 60 2 bars (i don't know what is the use of this lolx, i discovered it someday but i found it useless) 2 bars can be worked on bots like mage,a sate, roan too. for mage u approximately - 0.05, asate + approximately 0.05 roan i think approximately +0.1 i m so sleepy and i must sleep now This is my first guide to post in public so give me some face, if u found errors don't reply and challange just tell me in gunbound and i will edit it lol.=P If u copy it to some other website then do it but at least give me a credit. Thanks to owwiez for giving me a slot to take screenshot. P.S. SEE HIS RANK IS THE PICTURE!!! rank8888 HAHAHAHAHAHAA
{"url":"http://creedo.gbgl-hq.com/stony_armor_2_bar.php","timestamp":"2024-11-11T14:28:12Z","content_type":"text/html","content_length":"5718","record_id":"<urn:uuid:03481c6a-b140-409f-b76c-b3dca9493ea2>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00045.warc.gz"}
Separating subadditive Euclidean functionals If we are given n random points in the hypercube [0,1]d, then the minimum length of a Traveling Salesperson Tour through the points, the minimum length of a spanning tree, and the minimum length of a matching, etc., are known to be asymptotically βnd−1d a.s., where β is an absolute constant in each case. We prove separation results for these constants. In particular, concerning the constants βdTSP, βdMST, βdMM, and βdTF from the asymptotic formulas for the minimum length TSP, spanning tree, matching, and 2-factor, respectively, we prove that βdMST<βdTSP, 2βdMM<βdTSP, and βdTF<βdTSP for all d≥2. We also asymptotically separate the TSP from its linear programming relaxation in this setting. Our results have some computational relevance, showing that a certain natural class of simple algorithms cannot solve the random Euclidean TSP efficiently.
{"url":"https://kilthub.cmu.edu/articles/journal_contribution/Separating_subadditive_Euclidean_functionals/6479474","timestamp":"2024-11-02T12:31:10Z","content_type":"text/html","content_length":"120086","record_id":"<urn:uuid:f8238004-e9af-4f61-9c1d-1cd0c48f3b94>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00319.warc.gz"}
NumPy-Indexing and Selection Indexing and Slicing are the important operations that you need to be familiar with when working with Numpy arrays. You can use them when you would like to work with a subset of the array. This tutorial will take you through Indexing and Slicing on multi-dimensional arrays. Please refer to following .ipynb file for numpy implementation through python.
{"url":"http://www.datasciencelovers.com/python-for-data-science/numpy-indexing-and-selection/","timestamp":"2024-11-10T21:27:59Z","content_type":"text/html","content_length":"52242","record_id":"<urn:uuid:617f0cd8-1ecf-4caa-bbdb-662ba1336672>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00792.warc.gz"}
Galaxy cluster gas mass fractions from Sunyaev-Zeldovich effect measurements: Constraints on Ω<sub>M</sub> Using sensitive centimeter-wave receivers mounted on the Owens Valley Radio Observatory and Berkeley-Illinois-Maryland-Association millimeter arrays, we have obtained interferometric measurements of the Sunyaev-Zeldovich (SZ) effect toward massive galaxy clusters. We use the SZ data to determine the pressure distribution of the cluster gas and, in combination with published X-ray temperatures, to infer the gas mass and total gravitational mass of 18 clusters. The gas mass fraction, f[g], is calculated for each cluster and is extrapolated to the fiducial radius r[500] using the results of numerical simulations. The mean f[g] within r[500] is 0.081 ^+0.009[-0.011] h^-1[100] (statistical uncertainty at 68% confidence level, assuming Ω[M] = 0.3, Ω[Λ] = 0.7). We discuss possible sources of systematic errors in the mean f[g] measurement. We derive an upper limit for Ω[M] from this sample under the assumption that the mass composition of clusters within r[500] reflects the universal mass composition: Ω[M] h ≤ Ω[B]/f[g]. The gas mass fractions depend on cosmology through the angular diameter distance and the r[500] correction factors. For a flat universe (Ω[Λ] = 1 - Ω[M]) and h = 0.7, we find the measured gas mass fractions are consistent with Ω[M] < 0.40, at 68% confidence. Including estimates of the baryons contained in galaxies and the baryons which failed to become bound during the cluster formation process, we find Ω[M] ∼ 0.25. • Cosmic microwave background • Cosmology: observations • Galaxies: clusters: general • Techniques: interferometric ASJC Scopus subject areas • Astronomy and Astrophysics • Space and Planetary Science Dive into the research topics of 'Galaxy cluster gas mass fractions from Sunyaev-Zeldovich effect measurements: Constraints on Ω[M]'. Together they form a unique fingerprint.
{"url":"https://experts.illinois.edu/en/publications/galaxy-cluster-gas-mass-fractions-from-sunyaev-zeldovich-effect-m","timestamp":"2024-11-10T15:38:11Z","content_type":"text/html","content_length":"60739","record_id":"<urn:uuid:72ecc1df-e25a-41ed-b10a-e0f14681c362>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00655.warc.gz"}
Dosing Date Imputation for New Patients f_dose_new_cpp {drugDemand} R Documentation Dosing Date Imputation for New Patients Imputes the dosing dates for new patients and ongoing patients with no dosing records. usubjid The unique subject ID. V Initialized to 0 and corresponds to the randomization visit. C The cutoff date relative to randomization. D The discontinuation date relative to randomization. model_k0 The model for the number of skipped visits between randomization and the first drug dispensing visit. Options include "constant", "poisson", "zero-inflated poisson", and "negative binomial". theta_k0 The model parameters for the number of skipped visits between randomization and the first drug dispensing visit. The model for the gap time between randomization and the first drug dispensing visit when there is no visit skipping. Options include "constant", "exponential", "weibull", "log-logistic", model_t0 and "log-normal". theta_t0 The model parameters for the gap time between randomization and the first drug dispensing visit when there is no visit skipping. model_t1 The model for the gap time between randomization and the first drug dispensing visit when there is visit skipping. Options include "least squares", and "least absolute deviations". theta_t1 The model parameters for the gap time between randomization and the first drug dispensing visit when there is visit skipping. model_ki The model for the number of skipped visits between two consecutive drug dispensing visits. Options include "constant", "poisson", "zero-inflated poisson", and "negative binomial". theta_ki The model parameters for the number of skipped visits between two consecutive drug dispensing visits. model_ti The model for the gap time between two consecutive drug dispensing visits. Options include "least squares" and "least absolute deviations". theta_ti The model parameters for the gap time between two consecutive drug dispensing visits. A data frame with two variables: • usubjid: The unique subject ID. • day: The dosing visit date relative to randomization. Kaifeng Lu, kaifenglu@gmail.com usubjid = "Z001", V = 0, C = 87, D = 985, model_k0 = "zero-inflated poisson", theta_k0 = c(0.6, 1.1), model_t0 = "log-logistic", theta_t0 = c(-1.0, 0.7), model_t1 = "least squares", theta_t1 = c(21.5, 1.9), model_ki = "zero-inflated poisson", theta_ki = c(0.1, 0.4), model_ti = "least squares", theta_ti = c(21, 2.3)) version 0.1.3
{"url":"https://search.r-project.org/CRAN/refmans/drugDemand/html/f_dose_new_cpp.html","timestamp":"2024-11-02T20:34:00Z","content_type":"text/html","content_length":"5454","record_id":"<urn:uuid:16a9071e-bb14-42e9-b2d0-546aded4ef8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00573.warc.gz"}
Conditional tests for elliptical symmetry using robust estimators Bianco, Ana M.; Boente, Graciela; Rodrigues, Isabel M. Communications in Statistics - Theory and Methods, 46 (2017), 1744-1765 This paper presents a procedure for testing the hypothesis that the underlying distribution of the data is elliptical when using robust location and scatter estimators instead of the sample mean and covariance matrix. We derive the asymptotic behaviour of the test statistic under the null hypothesis and under contiguous alternatives without any moment requirements on the elliptical distribution. Numerical experiments allow to compare the behaviour of the tests based on the sample mean and covariance matrix with that based on robust estimators, under various elliptical distributions and different alternatives. We also provide a numerical comparison with other competing tests.
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=5&member_id=88&doc_id=1919","timestamp":"2024-11-07T15:49:28Z","content_type":"text/html","content_length":"8633","record_id":"<urn:uuid:f5d1e478-811d-41ac-88f0-40d5e21b0093>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00644.warc.gz"}
BOI FD Calculator 2024 – Bank of India Fixed Deposit Calculator Online About BOI Fixed Deposit Bank of India is a leading public sector bank, offering a host of financial services to its clients. A fixed deposit account is one of the facilities on offer from BOI. Fixed deposits are one of the safest forms of investment options, which offer investors guaranteed returns. Investing in FDs does not expose depositors to the risk of market fluctuations. This makes them a popular choice for most individuals with low-risk appetite. Furthermore, investors can easily gauge the interest they can earn on the amount deposited right at the onset of their investment. This effectively aids them to calculate their ROI before making the deposit, enabling them to make sound financial decisions. In this regard, online calculators offer the fastest and most accurate way to assess the exact profitability of such deposits. Thus, one can effectively use the BOI FD calculator to understand profitability from fixed deposit investments before parking their excess funds with the bank. To use a Bank of India fixed deposit rates calculator, one needs to input certain details regarding their investments. The invested sum, rate of interest, term, and interest payout frequency are some of these parameters. After doing so, this useful tool highlights the returns that the investor can expect from his/her FD. Also Read, Bank of India FD Interest Rates Benefits of BOI FD Calculator Even though calculating FD returns manually is possible, the process can be cumbersome and leave room for inaccuracies. In this regard, opting to use FD calculators has several benefits, such as – • These calculators are easy to use and time-efficient, with FD calculations conducted in mere seconds. • Manual calculations are subject to errors or mistakes, which can be costly for the investor in question. Investors can eliminate such risks with the use of a Bank of India FD calculator, as the tool always relays accurate results. • Such calculators allow free adjustment of the various parameters to achieve desired results. However, in manual calculations, even a small change in the figures forces the lender to start the calculations afresh. Also Read, Bank of India FD Interest Rates FD Calculation Formula and Procedure The online calculator tool uses the same formula that individuals use when calculating FD earnings manually. This is given by – A = P(1+r/n)^n x t • A stands for the total maturity amount, • P refers to the starting investment, • ‘r’ is the rate of interest divided by 100, • ‘t’ signifies the tenure for investment, and • ‘n’ is the frequency of interest payout in a year. Example of BOI FD Calculation 63-year-old Mr Kapoor decides to invest Rs.10 lakh in a BOI FD for 5 years. He chooses yearly interest payments for this FD. Now, referring to the above table, one can understand that he is eligible for a 6.4% interest on his funds. Here, P = Rs.10 lakh r = 6.4 t = 5 n = 1 year Therefore, according to the FD calculation formula, his maturity amount would be – A = 10,00,000(1+0.064/1)^1 x 5 A = Rs.13,63,666 Return on investment for Mr Kapoor would be equal to I = A – P i.e, Rs. (13,63,666 – 10,00,000). Thus, the interest earned by Mr. Kapoor on his FD deposit is Rs.3,63,666. How do Various Factors Affect FD Interest Earnings? As stated previously, the Bank of India fixed deposit calculator relies on four factors to reach the ROI outcome. These are – • Sum invested – This is the principal amount on which the entire calculation relies. High-valued investments tend to draw higher returns when compared to smaller sums. On the calculator, an investor can change the principal investment sum to higher or lower returns effectively. • Investment term – FDs are beneficial, both for long-term and short-term gains. Nevertheless, those opting for a significantly longer tenure can expect increased interest earnings. Bank of India also offers the best interest rates to long-term investors, as is evident from the chart above. • Interest payout frequency – When picking fixed deposits, beneficiaries need to choose an interest calculation frequency. This can be monthly, quarterly, half-yearly, or annually. Choosing frequent interest compounding methods can lead to slightly increased gains. Undoubtedly, interest rates play the most significant role in FD calculations on a BOI fixed deposit calculator. One should always be on the lookout for higher rates, which can significantly boost ROI on FDs.
{"url":"https://groww.in/calculators/boi-fd-calculator","timestamp":"2024-11-08T23:40:23Z","content_type":"text/html","content_length":"71573","record_id":"<urn:uuid:d6dfef29-7f41-4ecf-87f5-a6353b57d8d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00162.warc.gz"}
The mean length of a human pregnancy is 270 days, with a standard deviation of 99... The mean length of a human pregnancy is 270 days, with a standard deviation of 99... The mean length of a human pregnancy is 270 days, with a standard deviation of 99 days. Use the empirical rule to determine the percentage of women whose pregnancies are between 252 and 288 days. (Assume the data set has a bell-shaped distribution.) A. 50% B. 68% C. 95% D. 99.7% Corrrect answer: C) 95% Note: There is on mistake in question Standard deviation should be = 9 (maybe misprinting, so i solved the problem with Standard deviation=9) By assuming the distrbution is normally ditributed, the required probability is obtained by first converting normal distribution to standard normal distribution then the probability is obtained using the standard normal distribution table, The required probability is, Now the normally distributed random variable X is converted to standard normal as shown below, The equation can be written as, This probability is obtained using the standard normal distribution table for z = -2 and z = 2, Hence the are 95% women are have pregnency days between 252 and 288.
{"url":"https://justaaa.com/statistics-and-probability/1242743-the-mean-length-of-a-human-pregnancy-is-270-days","timestamp":"2024-11-14T08:50:34Z","content_type":"text/html","content_length":"42338","record_id":"<urn:uuid:7b2a27ad-0405-4c7f-b16d-bbe175c3dd44>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00650.warc.gz"}