content
stringlengths
86
994k
meta
stringlengths
288
619
Ratio/Fraction/Decimal/Percent Relationships Shodor > Interactivate > Textbooks > Mathematics in Context Grade 6 > Ratio/Fraction/Decimal/Percent Relationships Mathematics in Context Grade 6 Ratios and Rates Ratio/Fraction/Decimal/Percent Relationships Lesson • Activity • Discussion • Worksheet • Show All Lesson (...) Fraction Conversion Lesson: Students learn how to convert from fractions to decimals. Fraction Conversion II Lesson: Students learn how to convert from fractions to percentages. Activity (...) Activity: Students work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in sequences and geometric properties of fractals. Cantor's Comb Activity: Learn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, and also learn about properties of fractal objects. Parameter: fraction of the segment to be deleted each time. Hilbert Curve Generator Activity: Step through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals. Image Tool Activity: Measure angles, distances, and areas in several different images (choices include maps, aerial photos, and others). A scale feature allows the user to set the scale used for measuring distances and areas. Koch's Snowflake Activity: Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals. Sierpinski's Carpet Activity: Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals. Sierpinski's Triangle Activity: Step through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals. Activity: Step through the tortoise and hare race, based on Zeno's paradox, to learn about the multiplication of fractions and about convergence of an infinite sequence of numbers. Discussion (...) Worksheet (...) No Results Found ©1994-2024 Shodor Website Feedback Mathematics in Context Grade 6 Ratios and Rates Ratio/Fraction/Decimal/Percent Relationships Lesson • Activity • Discussion • Worksheet • Show All Lesson (...) Fraction Conversion Lesson: Students learn how to convert from fractions to decimals. Fraction Conversion II Lesson: Students learn how to convert from fractions to percentages. Activity (...) Activity: Students work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in sequences and geometric properties of fractals. Cantor's Comb Activity: Learn about fractions between 0 and 1 by repeatedly deleting portions of a line segment, and also learn about properties of fractal objects. Parameter: fraction of the segment to be deleted each time. Hilbert Curve Generator Activity: Step through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals. Image Tool Activity: Measure angles, distances, and areas in several different images (choices include maps, aerial photos, and others). A scale feature allows the user to set the scale used for measuring distances and areas. Koch's Snowflake Activity: Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals. Sierpinski's Carpet Activity: Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals. Sierpinski's Triangle Activity: Step through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals. Activity: Step through the tortoise and hare race, based on Zeno's paradox, to learn about the multiplication of fractions and about convergence of an infinite sequence of numbers. Discussion (...) Worksheet (...) No Results Found
{"url":"http://www.shodor.org/interactivate/textbooks/section/410/","timestamp":"2024-11-11T17:36:53Z","content_type":"application/xhtml+xml","content_length":"17582","record_id":"<urn:uuid:20512578-3523-4de3-8892-a55c185a34c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00172.warc.gz"}
Flexular Crack in Concrete Beam For the simply supported beam of Figure 1, determine the uniformly distributed load, q, at which the flexural crack forms. The beam has a length L=7m, and f’[c]=25Mpa. Figure 1: Cross-section of Concrete Beam First crack occurs when the maximum tensile stress in concrete, f[t], reaches the rupture modulus of concrete, f[r]. The tensile stress, calculated from elastic theory, is: According to ACI, the modulus of rupture of concrete is: The uniformly distributed load, q, at which the flexural crack forms is thus calculated as:
{"url":"https://www.thestructuralengineer.info/education/professional-examinations-preparation/calculation-examples/flexular-crack-in-concrete-beam","timestamp":"2024-11-04T01:17:43Z","content_type":"text/html","content_length":"71943","record_id":"<urn:uuid:db80fe8a-41eb-4f41-a249-578c0915e6eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00449.warc.gz"}
Condensed Matter Physics According to Lenz's law, any current induced by a magnetic field gives rise to a magnetic field opposing the original inducing field. This follows from the application of the right-hand rule both on induction of a current by a field and vice versa. For this reason, diamagnetic susceptibility is always negative, i.e. the density of B-field lines is reduced by diamagnetism. In a semi-classical view of an atom (left figure), an electron can be regarded as orbiting the nucleus at a fixed distance. If placed in a magnetic field, a precession of the vector linking the electron with the nucleus around the field axis is observed. The frequency $\omega$ of that precession is given by the Larmor theorem: $$\omega=\frac{eB}{2m_e}$$ where $e$ and $m_e$ are the electron's charge and mass, respectively. Given that a current is the number of charges flowing through a point per unit time, the current $I$ flowing in an atom due to the Larmor precession is $$I= -Ze\frac{\omega}{2\pi}=-\frac{Ze^2B}{4\pi m_e}$$ where the factor $-Z$ adds up all the electrons in the atom and acknowledges the fact that electrons have a negative charge. Given a current, we we can determine the magnetic moment induced by it. For this purpose, it helps comparing the Larmor current in the atom with a current flowing around a single wire loop (right figure). The magnetic moment $p_m$ induced by a current $I$ is proportional to the current and the area enclosed by it, $p_m=AI$ (and pointing in the direction normal to that area). For a circular loop, $A=\pi r^2$, and $$p_m=\pi r^2I=-\frac{Ze^2B}{4m_e}\langle r^2\rangle$$ where $\langle r^2\rangle$ stands for the median distance of all the electrons from the field axis in the case of the This fairly crude classical interpretation gives us a good estimate of the induced magnetic moment in an atom due to the interaction of the material's electrons with the magnetic field. Given the chemical composition and density (and hence electron density) of a material, we can calculate its diamagnetic susceptibility. This simple model produces adequate results except in the case of conduction electrons in metals. Since they are delocalised over many atoms within the lattice, the idea of a Larmor current doesn't do them justice. For paramagnetism, a quantum-mechanical analysis is needed. The magnetic moment, $\vec{p}_m$, of an atom depends on its total angular momentum, $\vec{J}$. The two are linked by the gyromagnetic ratio , $\gamma$, (in Hz/T), an element-specific material constant: $$\vec{p}_m=\gamma\hbar\vec{J}$$ Alternatively, we can express the link between $\vec{p}_m$ and $\vec{J}$ in terms of multiples of the Bohr magneton $$\mu_B:=\frac{e\hbar}{2m_e}$$ which is essentially the "quantum of magnetic moment" (measured in J/T). Using this formalism, the magnetic moment is $$\vec{p}_m=-g\mu_B\vec{J}$$ The g-factor (dimensionless) depends on how the various spins and orbital angular momenta of all the electrons in the atom couple. Its value can be calculated using the Landé equation: $$g=1+\frac{J(J+1) +S(S+1)-L(L+1)}{2J(J+1)}$$ where $S$, $L$ and $J$ are the combined spin, orbital and total angular momentum quantum numbers of the electrons in the atom. If the angular momenta combine in such a way that the g-factor is zero, the material cannot be paramagnetic. (More about how these angular momenta combine to follow in the Atomic Physics lecture.) When an atom with a permanent magnetic moment is placed in a magnetic field, the previously degenerate magnetic states split into sub-states with distinct energy, separated by the energy of the magnetic interaction, $2p_mB$ (left and right margins of the figure). This is analogous to the Zeeman effect separating the energy levels of spin states of electrons, except that we consider the total angular momentum (including spin and orbital components) of the atom as a whole. The magnitude of the Zeeman splitting is: $$E_{mgn}=-\vec{p}_m\vec{B}=m_{J}g\mu_BB$$ where the magnetic quantum number, $m_{J}=-J,-J+1,\cdots,+J-1,+J$, relates to the total angular momentum. The relative population of the individual levels is given by the Boltzmann distribution, which compares the magnetic interaction energy $p_mB$ with the thermal energy $k_BT$, i.e. for a two-level system ($J=\frac{1}{2}$) $$\frac{N_i}{N}=\frac{\exp{\left(\frac{p_mB}{k_BT}\right)}}{\exp{\left(-\frac{p_mB}{k_BT}\right)}+\exp{\left(\frac{p_mB}{k_BT}\right)}}=\frac{{\rm e}^x}{{\rm e}^{-x}+{\rm e}^x}\qquad\qquad x=\frac In a two-level system, the magnetisation is proportional to the excess population in the lower level, since only those excess moments aren't balanced by moments in the upper level, which point in the opposite direction: $$M=(N_1-N_2)p_m=Np_m\left(\frac{{\rm e}^x-{\rm e}^{-x}}{{\rm e}^x+{\rm e}^{-x}}\right)$$ The bracket is equal to $\tanh(x)$, and as long as $x\ll 1$ (i.e. if the magnetic interaction is much smaller than the thermal energy, $p_mB\ll k_BT$) $\tanh(x)\approx x$. This leaves: $$M=Np_mx=\frac{Np_m^2B}{k_BT}$$ $$\textbf{Curie law:}\qquad\chi_{para}=\frac{C}{T}$$ This shows that the paramagnetic susceptibility $\chi_{para}$ scales inversely with temperature. This observation is known as the Curie law and the constant of proportionality is the Curie constant. If $J\gt\frac{1}{2}$, there will be more than two Zeeman levels. Since their energies are always distributed symmetrically about the degenerate energy level at $B=0$, the analysis remains valid and there are simply more Boltzmann terms to sum in the denominator when the relative populations are calculated. There are different scenarios by which spins and orbital angular momenta can combine to a total angular momentum (the LS and JJ schemes), depending on the type of atom. Naturally, this will affect the number of Zeeman states for the magnetic moment of the atom and thus the Curie constant of a material containing this atom. To complicate matters, if some electrons are in excited electronic states, this will also change J and thus affect the material's Curie constant.
{"url":"https://support.imaps.aber.ac.uk/ruw/teach/334/diapara.php","timestamp":"2024-11-02T18:11:27Z","content_type":"text/html","content_length":"20036","record_id":"<urn:uuid:4a9ae23d-fe29-48bc-a221-782de54452d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00106.warc.gz"}
Chapter Three To characterize runtimes we discard the low-order terms and the coefficients of the leading term as done with insertion sort in Chapter Two . The remaining factor is put into the -notation e.g . Other asymptotic notations are designed to characterize functions in general. Asymptotic notation can apply to functions that characterize some other aspect of algorithms e.g. space or other functions not relating to algorithms. The O-notation This notation expresses a function that doesnโ t grow faster than a certain rate based on the highest-order term. Considering a function with the highest order term as , the rate of growth of this function is . This function can be expressed as , considering it doesnโ t grow faster than . We could also say the function is , , , because grow slower than them, hence we could say: is for any constant . The -notation is used to express that a function grows at least as fast as a certain rate, based on the highest order term. This can be seen as the inverse of the O-notation. Similarly, it can grow as fast as , , hence we can say for any constant . The -notation This expresses that a function grows exactly at a certain rate based on the highest-order term. It expresses that a function does not grow above or below a certain factor, and these factors need not be equal. It can also be expressed as the result of showing that a function is both O(f(n)) and \Omega(f(n)). For example, the above, showing that is both and , it is also . An Example: Insertion sort From the insertion sort procedure in Chapter 2 INSERTION-SORT (A, n) for i = 2 to n key = A[i] j = i - 1 while j > 0 AND A[j] > key A[j+1] = A[j] j = j - 1 A[j + 1] = key The procedure operates using nested loops, the outer loop runs times regardless of the values being sorted. The inner while loop iterates based on the number of values to be sorted. For a value , the while loop might iterate times, times or anywhere in between in constant time. We can say Insertion sort is since each iteration of the inner loop takes constant time . We could also say the worst case is because for every input size n, there is at least one input that makes the algorithm take at least time, for some constant c. This does not mean the algorithm takes at least time for all inputs. To understand why the worst case time of INSERTION-SORT is , we would assume the size of an array is a multiple of . Divide into groups of positions. If the largest values occupy the first array positions , once the array is sorted, each of these array values ends up somewhere in the last positions . And for this to happen, each of these values must pass through each of the middle n/3 position one at a time. At least executions of line 6 must have happened. Because values have to pass through positions, the worst case of INSERTION-SORT is at least proportional to = , which is . Now that we have shown that that INSERTION-SORT runs in in all cases and there is an input that makes it take , hence, the worst case running time of INSERTION-SORT is . The constant factors for upper bound or lower bound might differ, what matters is that we have characterized the worst-case running time to within constant factors(ignoring the lower-order-terms). This does not mean runs in all cases, in fact, the best case running time for INSERTION-SORT is as shown in chapter 2 Asymptotic notation: formal definitions
{"url":"https://clio.limistah.dev/chapter-three","timestamp":"2024-11-07T06:31:05Z","content_type":"text/html","content_length":"73078","record_id":"<urn:uuid:abc8a808-df96-4677-99e3-98de44d50c37>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00682.warc.gz"}
A new, streamlined version of Intervention Central is coming in December 2023. The new site will eliminate user login accounts. If you have a login account, be sure to download and save any documents of importance from that account, as they will be erased when the website is revised.
{"url":"https://www.interventioncentral.org/academic-interventions/math-facts/how-master-math-facts-cover-copy-compare","timestamp":"2024-11-14T17:23:14Z","content_type":"application/xhtml+xml","content_length":"21895","record_id":"<urn:uuid:91cac22c-579e-4c66-980c-72d3550ed3e0>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00870.warc.gz"}
1,820 research outputs found A point particle of small mass m moves in free fall through a background vacuum spacetime metric g_ab and creates a first-order metric perturbation h^1ret_ab that diverges at the particle. Elementary expressions are known for the singular m/r part of h^1ret_ab and for its tidal distortion determined by the Riemann tensor in a neighborhood of m. Subtracting this singular part h^1S_ab from h^ 1ret_ab leaves a regular remainder h^1R_ab. The self-force on the particle from its own gravitational field adjusts the world line at O(m) to be a geodesic of g_ab+h^1R_ab. The generalization of this description to second-order perturbations is developed and results in a wave equation governing the second-order h^2ret_ab with a source that has an O(m^2) contribution from the stress-energy tensor of m added to a term quadratic in h^1ret_ab. Second-order self-force analysis is similar to that at first order: The second-order singular field h^2S_ab subtracted from h^2ret_ab yields the regular remainder h^2R_ab, and the second-order self-force is then revealed as geodesic motion of m in the metric g_ab+h^1R+h^2R.Comment: 7 pages, conforms to the version submitted to PR The geometrical meaning of the Eddington-Finkelstein coordinates of Schwarzschild spacetime is well understood: (i) the advanced-time coordinate v is constant on incoming light cones that converge toward r=0, (ii) the angles theta and phi are constant on the null generators of each light cone, (iii) the radial coordinate r is an affine-parameter distance along each generator, and (iv) r is an areal radius, in the sense that 4 pi r^2 is the area of each two-surface (v,r) = constant. The light-cone gauge of black-hole perturbation theory, which is formulated in this paper, places conditions on a perturbation of the Schwarzschild metric that ensure that properties (i)--(iii) of the coordinates are preserved in the perturbed spacetime. Property (iv) is lost in general, but it is retained in exceptional situations that are identified in this paper. Unlike other popular choices of gauge, the light-cone gauge produces a perturbed metric that is expressed in a meaningful coordinate system; this is a considerable asset that greatly facilitates the task of extracting physical consequences. We illustrate the use of the light-cone gauge by calculating the metric of a black hole immersed in a uniform magnetic field. We construct a three-parameter family of solutions to the perturbative Einstein-Maxwell equations and argue that it is applicable to a broader range of physical situations than the exact, two-parameter Schwarzschild-Melvin family.Comment: 12 page The singular field of a point charge has recently been described in terms of a new Green's function of curved spacetime. This singular field plays an important role in the calculation of the self-force acting upon the particle. We provide a method for calculating the singular field and a catalog of expansions of the singular field associated with the geodesic motion of monopole and dipole sources for scalar, electromagnetic and gravitational fields. These results can be used, for example, to calculate the effects of the self-force acting on a particle as it moves through spacetime.Comment: 14 pages; addressed referee's comments; published in PhysRev It is a known result by Jacobson that the flux of energy-matter through a local Rindler horizon is related with the expansion of the null generators in a way that mirrors the first law of thermodynamics. We extend such a result to a timelike screen of observers with finite acceleration. Since timelike curves have more freedom than null geodesics, the construction is more involved than Jacobson's and few geometrical constraints need to be imposed: the observers' acceleration has to be constant in time and everywhere orthogonal to the screen. Moreover, at any given time, the extrinsic curvature of the screen has to be flat. The latter requirement can be weakened by asking that the extrinsic curvature, if present at the beginning, evolves in time like on a cone and just rescales proportionally to the expansion.Comment: 8+1 pages, final versio Various regularization methods have been used to compute the self-force acting on a static particle in a static, curved spacetime. Many of these are based on Hadamard's two-point function in three dimensions. On the other hand, the regularization method that enjoys the best justification is that of Detweiler and Whiting, which is based on a four-dimensional Green's function. We establish the connection between these methods and find that they are all equivalent, in the sense that they all lead to the same static self-force. For general static spacetimes, we compute local expansions of the Green's functions on which the various regularization methods are based. We find that these agree up to a certain high order, and conjecture that they might be equal to all orders. We show that this equivalence is exact in the case of ultrastatic spacetimes. Finally, our computations are exploited to provide regularization parameters for a static particle in a general static and spherically-symmetric spacetime.Comment: 23 pages, no figure The second-order gravitational self-force on a small body is an important problem for gravitational-wave astronomy of extreme mass-ratio inspirals. We give a first-principles derivation of a prescription for computing the first and second perturbed metric and motion of a small body moving through a vacuum background spacetime. The procedure involves solving for a "regular field" with a specified (sufficiently smooth) "effective source", and may be applied in any gauge that produces a sufficiently smooth regular field We study static spherically symmetric solutions of high derivative gravity theories, with 4, 6, 8 and even 10 derivatives. Except for isolated points in the space of theories with more than 4 derivatives, only solutions that are nonsingular near the origin are found. But these solutions cannot smooth out the Schwarzschild singularity without the appearance of a second horizon. This conundrum, and the possibility of singularities at finite r, leads us to study numerical solutions of theories truncated at four derivatives. Rather than two horizons we are led to the suggestion that the original horizon is replaced by a rapid nonsingular transition from weak to strong gravity. We also consider this possibility for the de Sitter horizon.Comment: 15 pages, 3 figures, improvements and references added, to appear in PR We study the horizon absorption of gravitational waves in coalescing, circularized, nonspinning black hole binaries. The horizon absorbed fluxes of a binary with a large mass ratio (q=1000) obtained by numerical perturbative simulations are compared with an analytical, effective-one-body (EOB) resummed expression recently proposed. The perturbative method employs an analytical, linear in the mass ratio, effective-one-body (EOB) resummed radiation reaction, and the Regge-Wheeler-Zerilli (RWZ) formalism for wave extraction. Hyperboloidal (transmitting) layers are employed for the numerical solution of the RWZ equations to accurately compute horizon fluxes up to the late plunge phase. The horizon fluxes from perturbative simulations and the EOB-resummed expression agree at the level of a few percent down to the late plunge. An upgrade of the EOB model for nonspinning binaries that includes horizon absorption of angular momentum as an additional term in the resummed radiation reaction is then discussed. The effect of this term on the waveform phasing for binaries with mass ratios spanning 1 to 1000 is investigated. We confirm that for comparable and intermediate-mass-ratio binaries horizon absorbtion is practically negligible for detection with advanced LIGO and the Einstein Telescope (faithfulness greater than or equal to 0.997)
{"url":"https://core.ac.uk/search/?q=author%3A(Poisson%20E)","timestamp":"2024-11-08T00:50:19Z","content_type":"text/html","content_length":"129573","record_id":"<urn:uuid:920db6a2-f061-44bc-8d8e-4e026ca01905>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00174.warc.gz"}
Free Group Study Rooms with Timer & Music | FiveableAP Stats Unit 5 Notes: Why Is My Sample Not Like Yours? | Fiveable A sampling distribution is a distribution of all possible samples of a given size drawn from a population. It represents the distribution of statistics (such as the mean or proportion) calculated from these samples. 💠 For example, suppose you're interested in estimating the mean income of a population of workers. You can take a sample of workers and calculate the mean income of the sample. However, this sample mean is likely to be different from the population mean due to sampling error. To account for this, you can take multiple samples of the same size and calculate the mean income for each sample. The distribution of these sample means is called the sampling distribution of the mean. In the previous units, every distribution consisted of one sample, such as a class of students grade in a class. With a sampling distribution, you take the average of all means (quantitative) or proportions (categorical) of each possible sample size (n) and use these averages as your data points. The normal model now also represents the distribution of all possible samples of a given sample To find the sampling distribution for differences in a sample proportion or mean, remember that variances always add to find the new variance. If one needs the standard deviation, you should take the square root of the variance. However, for means you can just subtract. ➖ There are two major types of random variables in AP Statistics: discrete and continuous. Discrete random variables are variables that have a certain and definite set of values that the variable could be. Usually, these are whole numbers in real world situations (1, 2, 3, 4, 5…, 100, etc.). For discrete random variables, to calculate the mean, you use the expected variable formula: For discrete random variables, to calculate the standard deviation, you use a formula similar (in a way) to the expected value formula, but with a square root:. The other type of random variable, continuous random variables, can take on any value at any point along an interval. Generally, continuous random variables can be measured while discrete random variables are counted. A histogram is used to display continuous data, while a bar graph displays discrete data! 📊 A population parameter is a measure of a characteristic of a population, such as the mean or proportion of a certain attribute. It is a fixed value that represents the true value of the population. 🌎 A sample statistic is a measure of a characteristic of a sample, such as the mean or proportion of a certain attribute. It is a calculated value that estimates the value of the population parameter. In AP Statistics, you will be asked to compare Statistics from a Sample to the Parameters of a Population. Here is a chart to help you remember which symbols are from sample statistics and from population parameters: Measurement Population Parameter Sample Statistic Mean 𝝁 x̅ Standard Deviation σ s Proportions 𝝆 p̂ (1) A study is conducted to estimate the proportion of students in a school district who have access to the internet at home. A sample of 1000 students is selected from the school district, and it is found that 750 students have access to the internet at home. Is the proportion of students with internet access at home a parameter or a statistic? (2) The mean height of all adult males in the United States is 70 inches. Is the mean height of adult males a parameter or a statistic? (3) A survey is conducted to estimate the proportion of adults in a city who have a college degree. A sample of 500 adults is selected from the city, and it is found that 300 adults have a college degree. Is the proportion of adults with a college degree a parameter or a statistic? (4) The mean income of all households in the United States is $50,000 per year. Is the mean income of households a parameter or a statistic? (5) A study is conducted to estimate the proportion of employees in a company who are satisfied with their job. A sample of 200 employees is selected from the company, and it is found that 150 employees are satisfied with their job. Is the proportion of employees who are satisfied with their job a parameter or a statistic? (1) The proportion of students with internet access at home is a statistic, because it is calculated from a sample of students and is used to estimate the value of the population parameter (the proportion of all students in the school district with internet access at home). (2) The mean height of adult males is a parameter, because it is a fixed value that represents the true value of the population (all adult males in the United States). (3) The proportion of adults with a college degree is a statistic, because it is calculated from a sample of adults and is used to estimate the value of the population parameter (the proportion of all adults in the city with a college degree). (4) The mean income of households is a parameter, because it is a fixed value that represents the true value of the population (all households in the United States). (5) The proportion of employees who are satisfied with their job is a statistic, because it is calculated from a sample of employees and is used to estimate the value of the population parameter (the proportion of all employees in the company who are satisfied with their job).
{"url":"https://hours-zltil9zhf-thinkfiveable.vercel.app/ap-stats/unit-5/statistics-why-is-sample-not-like-yours/study-guide/Mrybsi6gfieJDqF2LNju","timestamp":"2024-11-02T22:11:32Z","content_type":"text/html","content_length":"320739","record_id":"<urn:uuid:686adb1b-fb93-47fe-88ef-a1a6a44d3aae>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00173.warc.gz"}
3dB Rule Calculator - Online Calculators Explore this 3db Rule Calculator. It is both in basic and advanced mode. Kindly put original power (watts) value to calculate power after 3db reduction. Welcome to 3db Rule Calculator, a powerful online calculator used in telecommunication and audio engineering to understand the signal attenuation. The formula is: $P2 = \frac{P1}{10^{\left(\frac{3}{10}\right)}}$ • $P2$ is the new power level after a 3dB reduction. • $P1$ is the initial power level. • $10^{\left(\frac{3}{10}\right)}$ represents the factor by which the power is reduced for a 3dB decrease. 2. Variables in the Formula: Variable Meaning $P2$ New Power Level after 3dB reduction $P1$ Initial Power Level $10^{\left(\frac{3}{10}\right)}$ Power reduction factor for a 3dB decrease What is 3dB Rule Calculator ? The 3dB Rule Calculator is used to determine how much the power level changes when it’s reduced by 3 decibels (dB). In the world of sound and signal processing, a 3dB decrease means that the power is cut in half. This rule is commonly applied in audio systems, radio transmissions, and various electronic applications. The calculator uses the formula $P2 = \frac{P1}{10^{\left(\frac{3}{10}\right)}}$ to easily calculate the new power level after a 3dB reduction. This tool simplifies the process, ensuring that you get accurate results without manual calculations. Whether you’re adjusting speaker volumes or managing signal strength, the 3dB Rule Calculator helps you find the new power level quickly and efficiently. Solved Examples Example 1: • Initial Power Level (P1) = 100 Watts Calculation Instructions Step 1: P2 = $\frac{P1}{10^{\left(\frac{3}{10}\right)}}$ Start with the formula. Step 2: P2 = $\frac{100}{10^{\left(\frac{3}{10}\right)}}$ Replace $P1$ with 100 Watts. Step 3: P2 ≈ 50 Watts Calculate the value to find the new power level. Answer: The new power level is approximately 50 Watts. Example 2: • Initial Power Level (P1) = 200 Watts Calculation Instructions Step 1: P2 = $\frac{P1}{10^{\left(\frac{3}{10}\right)}}$ Start with the formula. Step 2: P2 = $\frac{200}{10^{\left(\frac{3}{10}\right)}}$ Replace $P1$ with 200 Watts. Step 3: P2 ≈ 100 Watts Calculate the value to find the new power level. Answer: The new power level is approximately 100 Watts. In conclusion, the 3dB Rule Calculator is a valuable tool for engineers and technicians working in telecommunications, audio engineering, and RF engineering. By understanding the effects of a 3dB reduction on signal power, professionals can ensure optimal performance and reliability in their systems. 1. What does a 3dB reduction mean? A 3dB reduction corresponds to a halving of the signal power. In practical terms, it represents a significant decrease in signal strength, which can impact signal quality and performance. 2. How is the 3dB Rule used in audio engineering? In audio engineering, the 3dB Rule is used to adjust signal levels during mixing and mastering. By reducing signal levels by 3dB, engineers can achieve balance and clarity in audio recordings. 3. Can the 3dB Rule Calculator be used for different units of power? Yes, the calculator can be used for different units of power, such as watts or decibels. However, it’s important to ensure that consistent units are used for accurate calculations.
{"url":"https://areacalculators.com/3db-rule-calculator/","timestamp":"2024-11-03T04:20:50Z","content_type":"text/html","content_length":"111538","record_id":"<urn:uuid:70b5149f-954e-4ba8-af38-02ffab4ef1df>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00696.warc.gz"}
The Stacks project Lemma 72.10.7. Let $k$ be a field with algebraic closure $\overline{k}$. Let $X$ be an algebraic space over $k$ such that 1. $X$ is decent and locally of finite type over $k$, 2. $X_{\overline{k}}$ is a scheme, and 3. any finite set of $\overline{k}$-rational points of $X_{\overline{k}}$ is contained in an affine. Then $X$ is a scheme. Comments (2) Comment #6994 by Laurent Moret-Bailly on Condition (3): "are contained" should be "is contained". Comment #7223 by Johan on Thanks and fixed here. Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0B88. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 0B88, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/0B88","timestamp":"2024-11-08T23:43:26Z","content_type":"text/html","content_length":"18242","record_id":"<urn:uuid:c51e1b0e-faf5-4589-a53f-f0ac8976fa77>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00441.warc.gz"}
Formula Evaluator Many EDS modules have processes available to extract data from external sources (such as a spreadsheet), or output data to a deliverable (such as a loop diagram). Often, there is a difference between the input and/or desired output format(s), and the EDS storage format. Formulae allow the user to control how this data is converted from one form to another, and this help topic explains how these formulae must be constructed, and how they are evaluated to produce a result. Formula Syntax text_a #field_or_expression_a# text_b #field_or_expression_b# ... Unless otherwise specified (on the help page for the process using formulae), formulae must contain one or more pairs of hashes (#...#) that wrap around a field name or expression. Text before and after pairs of hashes is treated as fixed text (if present), and concatenated with the values produced by the field names/expressions inside the hashes. For example, #PNLNO#--#TAGNAME#/#TERM# would yield something similar to JB02--X101/8. All three hash pairs contain a field name in this example. When the text inside a pair of hashes contains opening and closing round brackets (preceeded by a function name), it is treated as a formula expression. For example, #JOIN("-", [PNLNO], [TAGNAME], [TERM])# would yield something similar to JB02-X101-8. The single hash pair contains a function expression in this example. To include a hash inside a field name or anywhere inside a function expression, escape it by preceding it with a backslash. For example #CONCAT("\#", [TERM])# would yield something like #8. Simple Formulae text_a #field_a# text_b #field_b# ... A formula is considered a Simple Formula when all hash pairs contain only field names, ie, no expressions are present. While most processes evaluate formulae to produce output values, some processes read in output values, and decompose them into their constituent parts, based on formulae. This will only work for formulae that are Simple Formulae, as expressions are not reversible. See IM Importing from a Datasheet, Protogen Updating DBF from Clone DWGs. Expression Syntax The expression syntax expected by the Formula Evaluator is designed to be very similar to that used by Microsoft Excel - a function, that takes ones or more expressions as parameters, some of which themselves may also be functions with parameters, and so on. Function Expression function(expression_a, expression_b, ...) Evaluates the specified function, passing one or more expressions as parameters (separated by commas, and surrounded by round brackets), and returning the result. The expression(s) can be of any type listed here, including other function expressions (thus allowing nesting of functions). Refer to the Function Reference for a list of available functions and their expected parameters. Source Data Expression Returns a text string corresponding to the specified field name from the source data. The field name must be surrounded by square brackets [] to be used as a parameter for a function. The field name and source data may represent different things, depending on the context of the operation that is utilising the formula. For example: • Source data could be a database table, and field name specifies a column to get the value of, for the record being processed. • Source data could be a drawing, and field name specifies an attribute to get the value of, for the block reference being processed. • Source data could be the input for an Import, and field name specifies a variable name to get the value of, that was set by a state persistence function (eg SET) in a previously evaluated formula in the Import Map File. The operation processing the formula is free to interpret the field name how it wishes. Refer to the help page for the specific process for any special considerations given to the field name. Unless otherwise specified (on the process help page), the field name is case-sensitive. If the operation could not resolve the field name, an empty string will be returned. Note: If state persistence functions (eg SET) are supported by the process evaluating the formula, and have been used to assign a value to a named variable, then a Source Data Expression that specifies the variable name as field_name will return the value assigned to the variable name, in preference to returning any value corresponding to the field_name in the source data. Text (String) Expression Specify a text string by surrounding it with double quotes. To include a double quote mark in a string, escape it by preceding it with a backslash. • "01-CV101-A" • "Pressure Transmitter" • "Shows \"Fault\" when circuit is open" Numeric Expression Specify a integer (whole number) as-is, or specify a floating point (decimal fraction) using a period/dot between the integral and fractional parts. Do not use any other characters, such as thousands-separators or units. Math Expression number_a operator number_b The formula evaluator is capable of evaluating the following mathematical operators in this order: • Brackets: ( ... ) • Division: / • Multiplication: * • Addition: + • Subtraction: - Operations are evaluated using double precision floating point arithmetic, and the result is a double precision floating point number. Use the STR formula function to how the result is converted back to a string (for example, if you need to trimming trailing zeros from the result, or round to an integer). Complete Examples Source Data Formula Result COL_A COL_B COL_C COL_D JB02 X101 8 #COL_A#--#COL_B#/#COL_C# JB02--X101/8 JB02 X101 8 Dash Separated: #JOIN("-", [COL_A], [COL_B], [COL_C])# Dash Separated: JB02-X101-8 01-FT301-A Loop: #SPLIT([COL_A], "", 4)#, Type: #LOWER(SPLIT([COL_A], "", 3))# Loop: 301, Type: ft 98 degrees F #STR(([COL_A] - 32) * 5 / 9, 2)# #REPLACE([COL_B], "F", "C")# 36.67 degrees C Processes using the Formula Evaluator See Also
{"url":"https://help.elecdes.com/common-functionality/formula-evaluator","timestamp":"2024-11-07T05:39:42Z","content_type":"text/html","content_length":"20218","record_id":"<urn:uuid:cd4c8f1c-bbf2-4568-bef5-ae808e8fb851>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00482.warc.gz"}
Decoding State Vaccination Rates Using Educational Aptitude, Income, and Political Affiliation November 12, 2021 COVID-19 cost almost 700,000 (seven hundred thousand) deaths in the Unites States and low vaccination rates are widely seen as undermining individual and community protection. The objective of the study was to evaluate the risk factors associated with lower COVID-19 vaccination rates in the United States. The study evaluated the effect of red-blue political affiliation, and the effect of the US state’s average educational aptitude scores and per capita income on states vaccination rates. The study found that states with concomitantly lower income along with lower educational aptitude scores are less vaccinated while the states with higher income have higher vaccination rates even among those with lower educational aptitude scores. These findings stayed significant after adjusting for red-blue political affiliation where states with red political affiliation have lower vaccination rates. Further study is needed to evaluate how to stop online misinformation among states with low income and low educational aptitude scores; and whether such an effort will increase overall vaccination rates in the United States. Online misinformation surrounding the COVID-19 vaccine is the major obstacle in fighting the coronavirus pandemic. Loomba et al. conducted randomized controlled trials in the UK and the USA, showing how exposure to online misinformation around COVID-19 vaccines affects intent to vaccinate to protect oneself or others. [1] The study found that some sociodemographic groups are differentially impacted by exposure to misinformation than others, and scientific-sounding misinformation was found more strongly associated with declines in vaccination intent. Policymakers are struggling to stop online misinformation while the COVID-19 pandemic is taking thousands of lives worldwide. In the setting of highly transmissible and fatal COVID-19 pandemics, vaccination hesitancy is widely seen as undermining individual and community protection. Despite United States have adequate vaccine available for its population, the vaccination rate in the US facing multiple challenges. During pre-COVID-19 era research, the lower vaccination rates were considered resulting from a complex decision-making process that reported as “3 Cs” model, which highlights complacency, convenience, and confidence [2]. Another research done in 2012 shown that acceptance of vaccination can be also potentially influenced by cultural, and religious roots. [3]. Though multiple prior study found that certain socio-demographic groups are more vulnerable to remain unvaccinated, no prior study looked at average state educational aptitude score and per capita income as well as political affiliation as risk factors for lower vaccination rates. The US State’s average educational aptitude scores were extrapolated from the McDaniel study published in 2006 [4]. McDaniel estimated intelligence quotient (educational aptitude score) from the National Assessment of Educational Progress (NAEP) standardized tests for reading and math (administered to a sample of public-school children in each of the 50 states). The means of the standardized reading scores for grades 4 and 8 were averaged across years as well as the means of the standardized math scores for all 50 US states. The author offered two causal models that predicted state educational aptitude score (or states Intelligence quotient) which was estimated from the average of mean reading and mean math scores. These models explained 83% and 89% of the variability of state educational aptitude score. And the estimated educational aptitude scores showed positive correlations with gross state product, health, and government effectiveness and negative correlations with violent crime [4]. The US vaccination data were obtained from the NPR website as of the 15^th of July 2021 [5]. The State level per capita income for the year 2010 to 2014 was collected from the U.S. Census Bureau data [6]. The 50 US States 2020 presidential elections results data (red-blue political affiliation) were collected from the Politico website election result map [7]. The red political affiliation was coded as zero and blue political affiliation was coded as 1 for the purpose of the study. All these five data sets were merged using Python data analysis software. In addition, educational aptitude score, per capita income, and state vaccination rates were ranked to demonstrate trends in the scatter plot. The US States were ranked 1 to 50 based on the average educational aptitude score, where rank number 1 was the highest educational aptitude score and 50 being the lowest. The US vaccination data were ranked from 1 to 50 where rank number 1 was the highest fully vaccinated state and 50th being the lowest. The State level per capita income was ranked from 1 to 50 as well where rank number 1 was the highest per capita income and 50 being the lowest. Pearson’s correlation coefficients were obtained for all the independent variables where two-sided T test were considered significant at 0.05. The multivariate linear regression analysis was conducted using the states vaccination rate as dependent variable and state educational aptitude, average per capita income and red-blue political affiliation as independent variable, A forward selection method was considered to determine the final regression model. The statistical data analysis software Stata was used. A total of fifty (50) US states were considered in the data analysis. The average US full vaccination rate was 47.5% (±8.5) by July 15th, 2021), the average US population educational aptitude score was 100 (±2.71), and the average per capita income was $28,889. Table 1 shows state’s per capita income and income rank are strongly correlated with percent fully vaccinated with correlation coefficients 0.69 and -0.71, respectively. This indicates states average income has a parallel relationship to state’s vaccination rates which means vaccination rates increase with increase of income. Again, states educational aptitude rank and average educational aptitude score were also significantly correlated with percent fully vaccinated with correlation coefficients of 0.45 and -0.47, respectively. This indicates the educational aptitude score has a parallel relationship with vaccination rates which means vaccination increases with the increase of educational aptitude score. The correlation coefficient between state educational aptitude and per capita income rank was highly significant at 0.56 (P <0.001). The correlation between red-blue affiliation and educational aptitude score were statistically not significant (p>0.05) but red-blue affiliation was highly correlated with state vaccination rates and state per capita income with correlation coefficient of 0.77 and 0.59, respectively (P <0.001). Table 1: Correlation matrix of the variables used in the analysis: Demonstrating effect of population average Educational Aptitude (EA) score and per capita income on percent fully vaccinated. Vaccination Rank % Fully Vaccinated EA Average EA Income Rank Per capita Income Red-blue Affiliation Vaccination Rank 1 % Fully Vaccinated -0.99 1 Educational Aptitude (EA) Rank 0.45 -0.47 1 <0.001 <0.001 Average EA -0.44 0.45 -0.98 1 <0.002 <0.001 <0.001 Income Rank -0.72 -0.71 0.56 -0.55 1 <0.001 <0.001 <0.001 <0.001 Per Capita Income -0.71 0.69 -0.52 0.52 -0.97 1 <0.001 <0.001 <0.001 <0.001 <0.001 Red-blue Political Affiliation -0.78 0.77 -0.18 0.16 -0.60 0.59 1 <0.001 <0.001 <0.21 <0.26 <0.001 <0.001 Figure 1 demonstrates there is a decreasing trend of states’ populations who are fully vaccinated among populations with lower educational aptitude score. These lower vaccination rates can be an indication of accepting online misinformation about vaccination among the state with low educational aptitude score. The association has an R square value of 22.48% which means 22.5% variability of the vaccination is explained by the state population educational aptitude score. Fig 1: Scatter plot of percentage of the US states total population fully vaccinated by state rank of educational aptitude scores Figure 2 demonstrates there is a decreasing trend of population vaccination rates among the US states among lower-income populations. These lower vaccination rates can be an indication of accepting online misinformation about vaccination among low-income populations. This association has an R square value of 49.9% which means 49.9% variability of the vaccination is explained by income. Fig 2: Scatter plot of percentage of the US states total population fully vaccinated by state’s rank of per capita income. Figure 3 demonstrates there is a decreasing trend of per capita income among populations with lower educational aptitude score. Previous studies also reported national wealth strongly correlates with its population average educational aptitude scores [6]. According to the current study, the population educational aptitude score explains 28.1% of the variability of the population income. Fig 3: Scatter plot the US states per capita income by state rank of educational aptitude scores A multivariate analysis of the state’s percent vaccination was conducted using the independent effect of educational aptitude score, per capita income, interaction effect of average educational aptitude score and per capita income and red-blue affiliation as predictor variables (Table 2). When the effect of average educational aptitude score and per capita income was entered in the model, all variables were non-significant. But when the main effect of average educational aptitude score and per capita income were removed from the model, the cross-over interaction effect of average educational aptitude score and per capita income and red-blue affiliation were found strongly significant. The R square for the final model was 0.70 indicating 70% variability of vaccination was explained by these predictors. This means the effect of per capita income on the vaccination rate is opposite depending on the value of educational aptitude score. These findings were further explained in Table 3. The linear regression model R square value with only the cross-over interaction effect of average educational aptitude score and per capita income was 0.49 indicating the red-blue political affiliation explaining 21% (R square = 0.21) variability of the final model. Table 2: Regression analysis demonstrating the effect of the product of educational aptitude scores and per capita income on percent fully vaccinated (R square = 0.70). Variables Beta Coefficient Standard Error Lower Limit Upper Limit P-value Product of education aptitude score and per capita Income 0.71 0.18 0.35 1.06 <0.001 Red-blue Political Affiliation 9.39 1.63 6.01 12.58 <0.001 Constant 22.79 4.69 13.35 32.24 <0.001 Given that this study found a significant cross over interaction effect (opposite effect) of average educational aptitude score and per-capita income while predicting percent fully vaccinated, the quartiles of the educational aptitude score and income were sorted by percent fully vaccinated in Table 3. The study findings show a 38.8% vaccination rate among lowest income quartiles with lowest educational aptitude score group but among the highest income quartiles with highest educational aptitude score the vaccination rates remain highest which was 55.2%. Such a cross over interaction effect showing opposite effects of educational aptitude score and income on vaccination rates is a very intriguing and unique finding and will be very interesting research topics for state policy makers. This study finding demonstrates that states with a concomitantly lower income and lower educational aptitude score are the most vulnerable and accepting online misinformation regarding COVID-19 vaccination. Table 3: Demonstrating percentage of full vaccination by quartiles of educational aptitude (EA) scores and income Quartiles EA Quartile 1>102.8 EA Quartile 2(100.85 – 102.8) EA Quartile 3(102.8 – 98.6) EA Quartile 4(< 98.6) Total Income Quartile 1(>$30,830) 55.2% (±8.5)n=6 54.2% (±1.8)n=3 54.5% (±6.8)n=4 n=0 54.7% (±6.5)n=13 Income Quartile 2($27,546 – $30,830) 54.3% (±10.0)n=5 47.2% (±10.1)n=3 49.3% (±2.8)n=2 52% (±1.0)n=2 51.3% (±8.1)n=12 Income Quartile 3($25,229 – $27,546) 44.9% (±1.8)n=2 44.7% (±4.4)n=3 43.1% (±3.6)n=4 43.0% (±4.1)n=4 43.7% (±3.4)n=13 Income Quartile 4(<$25,229) n=0 41.2% (±3.8)n=3 41.6% (±4.1)n=2 38.8% (±7.8)n=7 39.9% (±6.3)n=12 Total 53.2% (±8.8)n=13 46.8% (±7.1)n=12 47.7% (±7.1)n=12 42.1% (±7.6)n=13 47.5% (±8.5)n=50 The COVID-19 cost almost 700,000 (seven hundred thousand) lives in the United States and death rates are very high among the unvaccinated in the United States. The US states that have concomitantly lowest educational aptitude score along with lowest per capita income have the lowest vaccination rates of 38.8% as of July 15^th, 2021. The average US fully vaccination rates were 47.5% at that time. The crossover interaction effect of income and educational aptitude score remains significant even after adjusting for red-blue political affiliation where red political affiliations have significantly lower vaccination rates compared to those of blue political affiliations. Online misinformation about COVID-19 vaccination possibly led to lower vaccination rates among many US states. Prior research on misinformation was related to the context of the 2016 US presidential election [8,9]. The current study found lower vaccination rates possibly related to accepting misinformation among the red politically affiliated US states. There is a strong cross over interaction effect of low income and low educational aptitude score indicating states with the lowest income quartiles have lowest vaccination rates (i.e., most affected with online misinformation about COVID-19 vaccination) if they have the lowest educational aptitude score. The study also found the state population with the highest income quartiles does not get affected by online misinformation even when those have a lower educational aptitude score. This is a unique cross over interaction effect and no other study reported similar findings in the past. The study also found during univariate analysis that income explains almost 50% of the vaccination variability with states with lower income trends to be less vaccinated. Another univariate analysis showed the states with lower educational aptitude scores are less likely to be vaccinated compared to those with high educational aptitude score and educational aptitude score explain 23% of the vaccination variability. Similarly, the red-blue political affiliation explains 21% variability in the final multivariate model while the final multivariate model explains 70% variability of the United States vaccination rates. It is possible that even if certain individuals within the same state have a high educational aptitude score, they may be vulnerable to misinformation as traditionally such misinformation propagates through their friends, families, and acquaintances in society. Roozenbeek et al. reported that higher trust in scientists and higher numeracy skills (which is the ability to use, interpret and communicate mathematical data, maybe equivalent to educational aptitude score) were associated with lower susceptibility to COVID-19 related misinformation. The study demonstrated a clear link between susceptibility to misinformation and vaccine hesitancy and suggests interventions aiming to improve critical thinking and trust in science may be a promising avenue for future research [10]. It is possible that the socio-demographic group with lower educational aptitude score and income fails to interpret scientific data themself and depends on their trusted news source to understand scientific or mathematical data. This may lead to acceptance of online misinformation leading to lower vaccinations among the population with low income along with low educational aptitude groups in the United States. The current study used state’s average data instead of individual data to find predictors of reduced vaccinations. The only study conducted to estimate states educational aptitude was McDaniel study which was done in the year 2006 [4]. This study assumed the average educational aptitude rank stayed same over last decades. Moreover, McDaniel study calculated a surrogate measures of states intelligence quotient even though the author used average of mean reading and mean math scores. It is possible that certain geo-political or state population data reflects a more accurate picture than individual data given that misinformation needs a society or group-level enabler for the misinformation to propagate across the state. Conclusion: The states with concomitantly lower income and lower educational aptitude scores have lower vaccination rates after adjusting for red-blue political affiliation. The study also found that states with red political affiliation have significantly lower vaccination rates. Further study is needed to evaluate how to stop online misinformation among states with lowest income and lowest educational aptitude score and whether such an effort will increase overall vaccination rates in the United States. Conflict of Interest: The author has no conflict of interest to disclose. • Loomba, S., de Figueiredo, A., Piatek, S.J. et al. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat Hum Behav 5, 337–348 (2021). https:// • MacDonald NE; SAGE Working Group on Vaccine Hesitancy. Vaccine hesitancy: Definition, scope, and determinants. Vaccine. 2015 Aug 14;33(34):4161-4. doi: 10.1016/j.vaccine.2015.04.036. Epub 2015 Apr 17. PMID: 25896383] • C. Laberge, M. Guay, P. Clement, E. Dube, R. Roy, J. Bettinger. Workshop on the cultural and religious roots of vaccine hesitancy. Explanation and implications for Canadian healthcare, Longueuil, Quebec, Canada, December 2012 (2015)] • McDaniel, M. A. (2006). Estimating state IQ: Measurement challenges and preliminary correlates. Intelligence, 34(6), 607–619. doi:10.1016/j.intell.2006.08.007 • https://www.npr.org/sections/health-shots/2021/01/28/960901166/how-is-the-covid-19-vaccination-campaign-going-in-your-state • “ACS DEMOGRAPHIC AND HOUSING ESTIMATES 2010-2014 American Community Survey 1-Year Estimates”. U.S. Census Bureau. Archived from the original on 2020-02-14. Retrieved 2016-02-12. • The 50 US States 2020 presidential elections results from the Politico website election result map: https://www.politico.com/2020-election/results/president/ • Grinberg N, Joseph K, Friedland L, Swire Thompson B, Lazer D. 2019 Fake news on Twitter during the 2016 U.S. presidential election. Science 363, 374–378. (doi:10.1126/ science.aau2706) • Allcott H, Gentzkow M. 2017 Social media and fake news in the 2016 election. J. Econ. Perspect. 31, 211–236. (doi:10.1257/jep.31.2.211) • Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., Van Der Bles, A. M., & Van Der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world: Susceptibility to COVID misinformation. Royal Society Open Science, 7(10). https://doi.org/10.1098/rsos.201199 Author: Azad Kabir, MD MSPH; Raeed Kabir; Jebun Nahar, PhD; Ritesh Sengar; Affiliations: Doctor Ai, LLC; 1120 Beach Blvd, Biloxi; MS 39530 Corresponding author’s name and contact information (e-mail address, mailing address, phone number): Azad Kabir, MD, MSPH, ABIM; Doctor Ai, LLC; 1120 Beach Blvd, Biloxi; MS 39530; Email: azad.kabir@gmail.com; Cell: 228-342-6278 Discussion about this post
{"url":"https://ddxrx.com/research/2021/11/12/decoding-state-vaccination-rates-using-educational-aptitude-income-and-political-affiliation/","timestamp":"2024-11-11T03:05:59Z","content_type":"text/html","content_length":"60264","record_id":"<urn:uuid:c5fa3a10-babc-47a1-b164-b8ca8cd9aa20>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00886.warc.gz"}
Linear and Quadratic Approximation Motivating Questions • What is the formula for the general tangent line approximation to a differentiable function \(y = f(x)\) at the point \((a,f(a))\text{?}\) • What is the principle of local linearity and what is the linear approximation (or local linearization) of a differentiable function \(f\) at a point \((a,f(a))\text{?}\) • How does knowing just the tangent line approximation tell us information about the behavior of the original function itself near the point of approximation? • How does knowing the second derivative’s value at this point provide us additional knowledge of the original function’s behavior? Question 3.1.8. What is the formula for the general tangent line approximation to a differentiable function \(y = f(x)\) at the point \((a,f(a))\text{?}\) Question 3.1.9. What is the principle of local linearity and what is the linear approximation (or local linearization) of a differentiable function \(f\) at a point \((a,f(a))\text{?}\) Question 3.1.10. How does knowing just the tangent line approximation tell us information about the behavior of the original function itself near the point of approximation? Question 3.1.11. How does knowing the second derivative’s value at this point provide us additional knowledge of the original function’s behavior? The tangent line to a differentiable function \(y = f(x)\) at the point \((a,f(a))\) is given in point-slope form by the equation \begin{equation*} y - f(a) = f'(a)(x-a)\text{.} \end{equation*} The principle of local linearity tells us that if we zoom in on a point where a function \(y = f(x)\) is differentiable, the function will be indistinguishable from its tangent line. That is, a differentiable function looks linear when viewed up close. We rename the tangent line to be the function \(y = L(x)\text{,}\) where \(L(x) = f(a) + f'(a)(x-a)\text{.}\) Thus, \(f(x) \approx L(x)\) for all \(x\) near \(x = a\text{.}\) If we know the tangent line approximation \(L(x) = f(a) + f'(a)(x-a)\) to a function \(y=f(x)\text{,}\) then because \(L(a) = f(a)\) and \(L'(a) = f'(a)\text{,}\) we also know the values of both the function and its derivative at the point where \(x = a\text{.}\) In other words, the linear approximation tells us the height and slope of the original function. If, in addition, we know the value of \(f''(a)\text{,}\) we then know whether the tangent line lies above or below the graph of \(y = f(x)\text{,}\) depending on the concavity of \(f\text{.}\) We also can compute the quadratic approximation of \(f\) centered at \(a\text{,}\) \begin{equation*} Q(x) = f(a) + f'(a)(x-a) + \dfrac{f''(a)}{2}(x-a)^2\text{,} \end{equation*} which may provide us with even better approximations near the center \(a\text{.}\)
{"url":"https://www.math.colostate.edu/~shriner/sec-1-8-tan-line-approx.html","timestamp":"2024-11-05T03:15:14Z","content_type":"text/html","content_length":"94738","record_id":"<urn:uuid:d7c8d879-c052-45c4-957d-553ec78b0852>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00885.warc.gz"}
Top 5 ACT Math Tips I think Math is one of the most polarizing sections on the ACT. Students tend to either get it (and just need a little refresher) OR they can't stand it! Math is definitely the most content based of all of the sections on the ACT, and yes, how well you do in math might have something to do with how you do in math in school. Now with that said, there are still a lot of ways that you can increase your score if you 1. Practice, practice, practice. This is true of all sections, but it can really help out on math. The more practice you do, the more questions that you will know how to do! More than half of the questions are Pre-Algebra, Algebra 1, and Geometry, so even if you struggle with the higher math like Algebra 2 and Trig, you can still do REALLY well on the math test! 2. “Plug” your answer choices There are two ways to use your answer choices. You can plug in your answers when there are numbers in the answer choices. You can plug in numbers to the problem when there are variables in the answer choices. In other words, if you aren't sure how to approach a problem, see if there is a way that your answer choices can help you! 3. Focus on the “easy” questions Yes, there is going to be harder math questions, but you can still get a good score on the test by making sure you are doing well on the problems at the beginning of the test. If you are stuck on a problem that looks like the top line, then look for problems that "feel" easier to you! It is OK if you take more time on those and then have to guess on the questions at the end of the test. If you struggle with math, chances are that you will have a difficult time with the problems at the end of the test anyway. You will benefit (and so will your score!) if you make sure you take the time to get the “easy” questions correct. 4. Draw pictures for the geometry problems As soon as a questions says, “Rectangle ABCD…” you need to draw rectangle ABCD on your paper! It is going to help you visualize what you are doing, and it is well worth the time! 5. The highest math is pre-calc No worries though because there are only 1-2 questions that are at this level, and there are 4 trig questions! Focus on Algebra and Geometry to get biggest increase with your study time! If you are in Algebra 2, there is a good chance that there will be problems on the ACT that you look at and think, "we are learning this right now!" And here is a bonus tip added in.... 6. Try your best You are going to come across problems that you don't know. You are going to come across problems that you have seen before, but you can't remember when you look at it how you need to begin solving it. This is all EXPECTED. The single biggest factor in improving your math score is to do some educated guessing. Try to eliminate answer choices using logic before you guess. The more you can eliminate before you guess, the closer you are to the right answer!
{"url":"https://my2tor.com/blog-detail/top-5-act-math-tips","timestamp":"2024-11-06T01:49:34Z","content_type":"text/html","content_length":"230648","record_id":"<urn:uuid:327a8dab-64f3-4cea-b95a-80b5547d0a83>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00096.warc.gz"}
Uncertainty Associated with Sampling Peanuts for Fruity-Fermented Off-Flavor¹ Individual peanut seed can develop an objectionable off-flavor if exposed to certain environmental conditions. Typically, high moisture, immature peanuts exposed to temperatures above 35°C will produce a fruity-fermented (FF) off-flavor (Sanders et al., 1989a,b; Sanders et al., 1990). The intensity of FF off-flavor appears to be directly proportional to temperature, immaturity, and kernel moisture content (Whitaker and Dickens, 1964). High temperature exposure can occur in the windrow when peanuts are exposed to direct radiation from the sun or during curing when artificial heat is added to the drying air. When peanuts are exposed to these conditions, the assumption can be made that within each bulk lot of shelled peanuts, there exists a FF distribution among individual peanuts. Probably, a large percentage of peanuts in a bulk lot have no measurable FF off-flavor intensity and the remaining small percentage of peanuts have varying intensities of the FF off-flavor. If all peanuts in a lot were subjected to the same temperature, then the FF distribution among individual peanuts may be closely related to the maturity distribution among individual peanuts in the lot (Sanders, 1990; Sanders and Bett, 1995). Currently, the peanut industry estimates the mean level of the FF attribute among all peanuts in a bulk lot by taking a 300 g sample of peanuts from the bulk lot. The test sample is roasted, blanched, and ground into a paste, a subsample of paste is removed from the comminuted test sample, and each member of a trained flavor panel scores the FF intensity. Each panel member is highly trained and experienced in evaluating peanut flavor as described by the peanut flavor lexicon (Johnsen et al., 1988; Sanders, et al., 1989b). Each panel member evaluates the intensity of the peanut flavor descriptors using standard, published sensory analysis procedures. All panel member scores are averaged and the average score is the best estimate of the true FF off-flavor intensity among all peanuts in the lot. Customers who buy U.S. peanuts may specify in their purchase contract that the peanuts must have an average FF intensity below some threshold (Greene et al., 2006a; personal communication J. Leek and Associates, 2006). Occasionally, separate samples taken from the same lot by the seller and buyer will not agree when scored by their respective trained flavor panels. If a customer receives a lot that tests greater than a specified threshold, an economic hardship is created for both the buyer and seller of the lot. The lack of agreement in the FF off-flavor score is probably due to the uncertainty associated with the test procedure used by the seller and buyer of the peanuts to measure the FF intensity of peanuts in the bulk lot. The test procedure used to estimate the FF intensity in a bulk lot consists of sampling, sample preparation, and measurement steps. Each step contributes to the overall uncertainty associated with the test procedure. Because of the uncertainty of the FF test procedure, it is not possible to determine with 100% certainty the true average FF intensity among all peanuts in the bulk lot by measuring the average FF intensity of peanuts in a sample taken from the lot. Because of the uncertainty associated with sampling, sample preparation, and measurement steps, lots can be misclassified by a sampling plan. There is some chance that good lots (true FF intensity is below a defined tolerance) will test bad by the sampling plan (seller's risk) and some chance that bad lots (true average FF intensity is above a defined tolerance) will test good (buyer's risk) by the sampling plan. The performance (number of lots miss-classified or the buyer's and seller's risks) of a specific sampling plan can be predicted if the variability associated with sampling and measurement steps of the test procedure can be determined and if the FF distribution among replicated sample test results can be described. The objectives of this study were to: (1) measure the total variability associated with the test procedure used to measure the FF intensity in peanuts, (2) partition the total variability associated with the FF test procedure into sampling, sample preparation, and measurement variance components, (3) measure the FF distribution among replicated samples taken from a bulk lot, and (4) demonstrate how to make best use of resources to reduce the uncertainty of the FF test procedure. Materials and Methods Theoretical Considerations It was assumed that the total variability, (s^2 [t]) associated with the test procedure to estimate the FF intensity of peanuts in a bulk lot is the sum of the sampling (s^2 [s]), sample preparation (s^2 [sp]), and measurement (s^2 [m]) variances (Whitaker et al., 1974). Sampling error occurs because the FF distribution among individual peanuts causes differences among replicated sample test results taken from the same lot. Once a sample is prepared (roasted, blanched, and ground), the FF intensity may differ among replicated subsamples of paste taken from the same comminuted sample (sample preparation error). Finally, evaluation of the FF intensity may differ among individual sensory panel members when tasting peanuts from the same sample (measurement error). It was assumed that the sample preparation error is negligible (s^2 [sp] = 0) since all peanuts in the sample are ground into a homogenous paste and the FF intensity will not differ among replicated subsamples taken from the same comminuted test sample. Experimental Design To measure the sampling and measurement variability and the FF distribution among sample test results, a balanced nested design was developed (Figure 1). Twenty bulk lots of medium runner type peanuts were identified by commercial testing as having FF off-flavor intensity ranging from 0.0 (no FF off flavor) to 4.0. A 5 kg bulk sample was removed from each identified lot. Using a riffle divider, 20 samples of 250 g each were removed from the 5 kg bulk sample. Using standard industry procedures (Greene, J.L. et al. 2006b), each 250 g sample was roasted, blanched, and ground into a paste. Each member of a highly trained descriptive sensory panel rated the FF intensity in a subsample taken from the ground 250 kg sample. Depending on the availability of panel members, each ground sample was usually rated by the same 8 panel members. All panelists used the Spectrum^TM method to evaluate the intensity of all terms in the peanut lexicon (Johnson et al., 1988; Sanders et al., 1989b). Approximately 20×20×8 or 3200 FF scores, identified by panel member, sample number, and lot number, were recorded in the database for statistical analysis. Statistical Analysis Using Proc Mixed in SAS, an estimate of the total, sampling, and measurement variances was determined for each lot. The average FF intensity among the 160 FF off-flavor scores (8 panel member scores per sample time 20 samples per lot) was also determined for each lot. The 20 sampling and measurement variance estimates were plotted versus the average FF intensity for each lot to determine if each variance component was a function of the FF intensity. Observed Distribution An observed FF distribution among the 20 sample test results for was constructed for each lot. A total of 20 observed distributions, one for each lot, were constructed. The observed cumulative FF distribution for a given lot was constructed by ranking the 20-FF sample test results from high to low. The highest FF value was assigned a cumulative probability of 1.0. The next to highest FF value was assigned a cumulative probability of 1.0 – 1/20 or 0.95. The cumulative probability associated with each smaller FF value was reduced by 1/20 or 0.05. The cumulative probability associated with the smallest FF value was assigned a probability of 1/20 or 0.05. Theoretical Distribution Four theoretical distributions, normal, lognormal, negative binomial, and compound gamma were chosen as possible models to simulate the observed FF distribution among the 20 sample test results taken from a lot (Giesbrecht and Whitaker, 1998). These four theoretical distributions were chosen to give a broad descriptive range of distributional shapes from symmetrical (normal) to highly skewed (negative binomial) distributional shapes. Each theoretical distribution was compared to each observed FF distribution for a total of 80 comparisons. Parameter Estimation Methods The predicted FF distribution among sample test results was calculated from a theoretical distribution using distribution parameters computed from the mean and variance among the 20-FF sample test results. Parameters of the four theoretical distributions were estimated using the method of moments (Giesbrecht and Whitaker, 1998). The method of moments provides a direct and uncomplicated method of estimating the parameters of each theoretical distribution. Parameters of each theoretical distribution are estimated directly from the measured mean, I, and variance, S^2 [t], among the 20-FF sample test results associated with each lot (Giesbrecht and Whitaker, 1998; Whitaker et al., 1972). Goodness of Fit The Power Divergence (PD) test statistic, which is a conservative modification of the Chi Square GOF test, was selected as the criterion to evaluate the goodness of fit (GOF) between the theoretical and observed distributions (Read and Cressie, 1988). For a given lot, the range among the 20 sample test results is divided into 10 intervals of equal width and the number of sample test results that fell into each interval was counted. The expected number of sample test results in each interval is 2 (20 sample test results divided by 10 intervals). The PD statistics were calculated using Equation 1 and compares the observed number of sample test results in each interval to the expected number or 2. where i is the interval number from 1 to 10 and γ is a coefficient equal to 2/3. Giesbrecht and Whitaker (1998) recommended the use of PD statistics (Equation 1) with γ = 2/3 due to its reasonable power against a broad range of alternatives. If γ=1, Equation 1 would become the Chi Square GOF test. The test statistics were converted to a GOF probability where the lower the GOF probability, the better the fit. The fit between the theoretical and observed distributions was considered acceptable if the test statistic did not exceed the 95% critical value. The FF intensity for each sample and for each lot is shown in Table 1. The FF intensity associated with each sample in Table 1 is the average of all eight-panel member scores. For each lot, sample intensities are ranked from low to high to more easily view the range among sample test results within each lot. The best estimate of the true FF intensity of a lot is the average of the 160 FF scores (20 samples × 8 panel scores per sample). The average FF intensity among the 20 lots varied from 0.2 to 2.1. Using Proc Mixed in SAS, the mean FF intensity, total variance, sampling variance, and measurement variance for each lot is shown in Table 2. A full log plot (sometimes called a log-log plot) of the measurement variance, sampling variance, and total variance versus the average FF intensity (Table 2) is shown in Figures 2, 3, and 4, respectively. The functional relationship between variance (s^2) and FF intensity (I) was determined using a linear regression analysis on the log values. The regression results are also shown in each figure along with the measured variances. The regression equations for measurement, sampling, and total variances as a function of the FF intensity are shown in Equations 2, 3, and 4, respectively. Unfortunately, the range in FF intensity among the 20 lots was not as wide as hoped. There was a clumping of mean and variance point in Figures 2, 3, and 4 and as a result the slope of the regression equations (slope in the log scale is the exponent on the I term in equations 2, 3, and 4) was determined with only 3 to 4 points. The attempt to sample peanut lots over a wide range of FF scores proved to be very difficult. The measurement, sampling, and total variances can be predicted from Equations 2, 3, and 4, respectively, for a given FF intensity, I. For example, when measuring a lot with a true FF intensity (I) of 2.0, the measurement and sampling variances among individual panel members and among 250 g test samples are 0.704 and 0.369, respectively. The total variance of 1.073 was determined by adding the measurement and sampling variances together instead of using Equation 4. At a FF intensity of 2.0, measurement error accounts for 65.6% (0.704/1.073) of the total error and sampling error accounts for 34.4% (0.369/1.073) of the total error. Reducing Uncertainty The measurement variance in Equation 2 reflects the variability among individual panel member scores and is specific to the particular sensory panel members used in this study. The measurement variance can be reduced by averaging the scores of 2 or more panel members. Equation 2 can be modified to predict the measurement variance associated with averaging any number of panel members (np). Because the uncertainty associated with other sensory panels was not determined, the measurement variance in Equations 2 and 5 may be more or less than the uncertainty associated with other sensory panels. However, highly trained sensory panels that use the Spectrum^TM method should have similar levels of uncertainty. The sampling variance in Equation 3 is specific to a 250 g sample size. Increasing the size of the test sample taken from the lot can reduce the sampling variance. Equation 3 can be modified to reflect the sampling variance associated with any sample size ns in grams. The total variance associated with a FF test procedure that averages np panel member scores when using a test sample of size ns is obtained by adding Equations 5 and 6. As an example, the uncertainty associated with the FF test procedure used by the peanut industry to estimate the intensity of the FF off-flavor in a bulk lot can be estimated using Equation 7. The peanut industry currently uses a 300 g sample and averages the scores of 5 panel members. The measurement, sampling, and total variances associated with the current industry FF test procedure (np=5 panel members and ns=300 g) when testing a lot with a true FF intensity of 2.0 is estimated from Equations 5, 6, and 7 to be 0.141, 0.308, and 0.449, respectively. The coefficient of variation (CV) associated with measurement, sampling, and total variances are 18.8, 27.7, and 33.5%, respectively. For this example, measurement error accounted for 31.4% (0.141/0.449) of the total error and sampling accounted for 68.6% (0.308/0.449) of the total error. The measurement CV of 18.8% would appear to a reasonable level of uncertainty when comparing the ability of human taste buds to highly precise analytical equipment such as high performance liquid chromatography, which has levels of uncertainty of about 5 to 10% (Whitaker et al., 1974). In addition, the total variance of 0.449 can be used to predict the range of sample test result one would expect when sampling a lot with a FF intensity of 2.0 using the standard peanut industry FF test procedure (ns=300 g and np=average of 5 panelists). Assuming a normal distribution and 95% confidence limits, the FF intensity among samples would range from [2.0 +/− (1.96 (sqrt (0.449))] or range from [2.0 +/− 1.31] or range from 0.69 to 3.31. The major source of uncertainty associated with the peanut industry FF test procedure is associated with the 300 g sample size (68.6% of the total uncertainty). Further reduction in the uncertainty associated with the industry FF test procedure can be achieved by increasing sample size above 300 g. For example, the measurement, sampling, and total variances associated with the FF test procedure that quantified the FF intensity in a 600 g sample by averaging 5 panel member scores are 0.141, 0.154, and 0.295, respectively (For I = 2.0 in Equation 7). For this example, the measurement and sampling uncertainty are about the same magnitude. Distribution among Sample Score In the above example that predicted the range among sample test results when sampling a lot with a FF intensity of 2.0 and using the standard industry FF test procedure (ns=300 g and np=5 panelists), the FF distribution among sample test results was assumed to be normally distributed. However, as reported by Greene et al. (2006b), the FF distribution among the 20-sample test results for a single panel member appears to be skewed, especially for lots with low FF intensity values. The median is less than the mean for 15 of the 20 lots (Table 1) indicating that the distribution among the test results is positively skewed and not symmetrical such as the normal distribution. Using FF intensity scores associated with one panel member (identified as panel member A), an observed cumulative FF distribution among the 20 sample test results was constructed for each lot (reflecting the uncertainty associated with Equation 7 where ns = 250 g and np =1 panel member). The 20 observed FF distributions were each compared to the normal, lognormal, negative binomial, and compound gamma theoretical distributions (Giesbrecht and Whitaker, 1998). Using the method of moments, the mean and variance values computed from panel member A's FF scores for each lot were used to calculate parameters for each of the four theoretical distributions (Read and Cressie, 1988). A suitable fit occurred when the probability associated with the fit statistic was 0.95 or less. Goodness of fit tests (Table 3) indicated that the compound gamma provided the highest number of suitable fits to each of the 20 FF distributions. An example of the observed and theoretical distributions for lot 2821 is shown in Figure 5. The distribution among sample test results can be predicted for specified sample size (ns) and use of a specified number of panel members (np) using variance Equation 7 and the compound gamma distribution. In future studies, a model will be developed using the compound gamma distribution and variance Equation 7 to predict the probability of accepting a lot with a given FF intensity using a given FF test procedure. Summary and Conclusions This study indicated that the measurement, sampling, and total variances associated with the standard industry test procedure (300 g sample and average of 5 panels member scores) used to score a bulk lot with a true FF score of 2.0 were predicted to be 0.141, 0.308, and 0.449, respectively. For this example, measurement error accounted for 31.4% (0.141/0.449) of the total error and sampling accounted for 68.6% (0.308/0.449) of the total error. Since there is a different cost associated with reducing sampling and measurement uncertainty, the best use of resources to reduce the total variability associated with estimating the true FF off-flavor of a bulk lot may be to increase sample size. The variance and distributional information among sample test results will be used to develop a model to predict the performance of FF sampling plans for peanuts. With the evaluation model, the effect of sample size and the number of panels member used to evaluate the FF intensity in a sample on the chances of accepting bad lots (buyer's risk) and the chances of rejecting good lots (seller's risk) can be determined. Sampling plan design parameters such as sample size and number of panel members used to evaluate the FF intensity in bulk peanut lots can be investigated so that sampling plans developed for the peanut industry will not exceed specified risk levels. Literature Cited Giesbrecht F. G. and Whitaker T. B. 1998 Investigations of the problems of assessing aflatoxin levels in peanuts. Biometrics 54 : 739 – 753 . Greene J. L. , Bratka K. J. , Drake M. A. , and Sanders T. H. 2006a Effectiveness of category and line scales to characterize consumer perception of fruity fermented flavor in peanuts. J. Sensory Studies 21 : 146 – 154 . Greene J. L. , Whitaker T. B. , Hendrix K. W. , and Sanders T. H. 2006b Fruity Fermented Off-flavor Distribution in Samples from Large Peanut Lots. J. Sensory Studies accepted for publication, October 2006. Johnsen P. B. , Civille G. V. , Verercellotti J. R. , Sanders T. H. , and Dus C. A. 1988 Development of a lexicon for the description of peanut flavor. J. Sensory Studies 3 : 9 – 17 . Read T. R. C. and Cressie N. A. C. 1988 Goodness-of-Fit Statistics for Discrete Multivariant Data New York, NY Springer-Verlag . Sanders T. H. 1990 Maturity distribution in commercial sized florunner peanuts. Peanut Sci 16 : 91 – 95 . Sanders T. H. and Bett K. L. 1995 Effect of harvest date on maturity, maturity distribution, and flavor of florunner peanuts. Peanut Sci 22 : 124 – 129 . Sanders T. H. , Blankenship P. D. , Vercellotti J. R. , and Crippen K. L. 1990 Interaction of curing temperatures and inherent maturity distribution on descriptive flavor of commercial grade sizes of florunner peanuts. Peanut Sci 17 : 85 – 89 . Sanders T. H. , Vercellotti J. R. , Blakenship P. D. , Crippen K. L. , and Civille G. V. 1989b Interaction of maturity and curing temperature on descriptive flavor of peanuts. J Food Sci. 54 : 1066 – 1069 . Sanders T. H. , Vercellotti J. R. , Crippen K. L. , and Civille G. V. 1989a Effect of maturity on roast color and descriptive flavor of peanuts. J Food Sci. 54 : 475 – 477 . Whitaker T. B. and Dickens J. W. 1964 The effects of curing on respiration and off-flavor in peanuts. Proc Auburn, AL Third National Peanut Research Conference 71–80 . Whitaker T. B. , Dickens J. W. , and Bowen H. D. 1974 Effects of curing on the internal oxygen concentration of peanuts. Trans. ASAE 17 : 567 – 569 . Whitaker T. B. , Dickens J. W. , and Monroe R. J. 1974 Variability of aflatoxin test results. J. American Oil Chemists' Soc 51 : 214 – 218 . Whitaker T. B. , Dickens J. W. , Monroe R. J. , and Wiser E. H. 1972 Comparison of the observed distribution of aflatoxin in shelled peanuts to the negative binomial distribution. J. American Oil Chemists' Soc 49 : 590 – 593 . Author Affiliations
{"url":"https://peanutscience.com/article/id/1016/","timestamp":"2024-11-07T06:26:35Z","content_type":"text/html","content_length":"111098","record_id":"<urn:uuid:d68a40aa-f0cf-47ac-a241-3e4a3a5441cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00589.warc.gz"}
Stage 1 We’ll introduce you to your first advanced assignments: solve ciphers together, learn how to solve clock problems, learn Roman numerals, and much more. Sample topics: – Solve a message. – Sharing a bar of chocolate. – Tetris. – The long road. – How many shapes on a spider web? Stage 2 Let’s review what we learned at school and dig deeper: learn how to solve new puzzles, look for patterns, reconstruct lost pieces of examples, and do many more interesting things. Example topics: – Hidden signs. – Magic houses. – What is the letter X hiding? – Matchstick arithmetic. – Cellular problems. Stage 3 Together we’ll strengthen our math skills and make them even stronger: learn to count quickly, prove and disprove statements, solve Chinese puzzles, and play math games. Examples of topics: – Number Theory. – Arithmetic operations. – Handy counting. – Placing objects. – Covers and cuts. Stage 4 Dive into topics and problems for students that are ahead of the 5th grade curriculum. We’ll learn how to solve complex examples orally, learn how math and chess are related, and much more. Example Topics: – Oral counting, parity, and algorithms. – Divisibility, equations, and the principle of extremes. – Combinatorics and variation enumeration. – Chess, motion, truth and falsehood. – Rebus, weighing, and overflow. Stage 5 We’ll work with advanced topics, examples, and problems that only those who are truly in love with mathematics can do. We’ll explore the concept of invariant, how to get from Earth to Mars by public transportation, and much more. Examples of topics: – Logical bundles. – Inclusions-exclusions. – The Principle of Extreme. – Evenness, pairs, and alternations. – Symmetry. There are no reviews yet. Be the first to review “Elective classes”
{"url":"https://learningpathcraft.today/product/elective-classes/","timestamp":"2024-11-04T04:39:24Z","content_type":"text/html","content_length":"54154","record_id":"<urn:uuid:91c958ab-1f3a-48ba-b216-ca34e019a5d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00814.warc.gz"}
Surjective word maps and Burnside’s p<sup>a</sup>q<sup>b</sup> theorem We prove surjectivity of certain word maps on finite non-abelian simple groups. More precisely, we prove the following: if N is a product of two prime powers, then the word map (x, y) ↦ x^Ny^N is surjective on every finite non-abelian simple group; if N is an odd integer, then the word map (x, y, z) ↦ x^Ny^Nz^N is surjective on every finite quasisimple group. These generalize classical theorems of Burnside and Feit–Thompson. We also prove asymptotic results about the surjectivity of the word map (x, y) ↦ x^Ny^N that depend on the number of prime factors of the integer N. Bibliographical note Publisher Copyright: © 2018, Springer-Verlag GmbH Germany, part of Springer Nature. Dive into the research topics of 'Surjective word maps and Burnside’s p^aq^b theorem'. Together they form a unique fingerprint.
{"url":"https://cris.huji.ac.il/en/publications/surjective-word-maps-and-burnsides-psupasupqsupbsup-theorem","timestamp":"2024-11-14T18:06:48Z","content_type":"text/html","content_length":"47101","record_id":"<urn:uuid:8d798e67-e473-4409-ad5e-5d46f833a069>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00226.warc.gz"}
American Mathematical Society Let $(M, d)$ be a complete topological 2-manifold, possibly with boundary, with a geodesic metric $d$. Let $X\subset M$ be a compact set. We show then that for all but countably many $\varepsilon$ each component of the set $S(X, \varepsilon )$ of points $\varepsilon$-distant from $X$ is either a point, a simple closed curve disjoint from $\partial M$ or an arc $A$ such that $A\cap \partial M$ consists of both endpoints of $A$ and that arcs and simple closed curves are dense in $S(X, \varepsilon )$. In particular, if the boundary $\partial M$ of $M$ is empty, then each component of the set $S(X, \varepsilon )$ is either a point or a simple closed curve and the simple closed curves are dense in $S(X, \varepsilon )$. References • Alexander Blokh, MichałMisiurewicz, and Lex Oversteegen, Planar finitely Suslinian compacta, Proc. Amer. Math. Soc. 135 (2007), no. 11, 3755–3764. MR 2336592, DOI 10.1090/S0002-9939-07-08953-8 • Martin R. Bridson and André Haefliger, Metric spaces of non-positive curvature, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319, Springer-Verlag, Berlin, 1999. MR 1744486, DOI 10.1007/978-3-662-12494-9 • G. A. Brouwer, Green’s functions from a metric point of view, Ph.D. dissertation, University of Alabama at Birmingham, 2005. • Morton Brown, Sets of constant distance from a planar set, Michigan Math. J. 19 (1972), 321–323. MR 315714 • K. Kuratowski, Topology. Vol. II, Academic Press, New York-London; Państwowe Wydawnictwo Naukowe [Polish Scientific Publishers], Warsaw, 1968. New edition, revised and augmented; Translated from the French by A. Kirkor. MR 0259835 • R. L. Moore, Concerning triods in the plane and junction points of plane continua, Proc. Nat. Acad. Sci. U.S.A. 14 (1928), 85–88. • Sam B. Nadler Jr., Continuum theory, Monographs and Textbooks in Pure and Applied Mathematics, vol. 158, Marcel Dekker, Inc., New York, 1992. An introduction. MR 1192552 Similar Articles • Retrieve articles in Proceedings of the American Mathematical Society with MSC (2000): 54E35, 54F15 • Retrieve articles in all journals with MSC (2000): 54E35, 54F15 Bibliographic Information • Alexander Blokh • Affiliation: Department of Mathematics, University of Alabama in Birmingham, University Station, Birmingham, Alabama 35294-2060 • MR Author ID: 196866 • Email: ablokh@math.uab.edu • Michał Misiurewicz • Affiliation: Department of Mathematical Sciences, IUPUI, 402 N. Blackford Street, Indianapolis, Indiana 46202-3216 • MR Author ID: 125475 • Email: mmisiure@math.iupui.edu • Lex Oversteegen • Affiliation: Department of Mathematics, University of Alabama in Birmingham, University Station, Birmingham, Alabama 35294-2060 • MR Author ID: 134850 • Email: overstee@math.uab.edu • Received by editor(s): February 8, 2007 • Received by editor(s) in revised form: January 3, 2008 • Published electronically: October 8, 2008 • Additional Notes: The first author was partially supported by NSF grant DMS 0456748 The second author was partially supported by NSF grant DMS 0456526 The third author was partially supported by NSF grant DMS 0405774 • Communicated by: Alexander N. Dranishnikov • © Copyright 2008 American Mathematical Society • Journal: Proc. Amer. Math. Soc. 137 (2009), 733-743 • MSC (2000): Primary 54E35, 54F15 • DOI: https://doi.org/10.1090/S0002-9939-08-09502-6 • MathSciNet review: 2448596
{"url":"https://www.ams.org/journals/proc/2009-137-02/S0002-9939-08-09502-6/?active=current","timestamp":"2024-11-03T13:50:19Z","content_type":"text/html","content_length":"61445","record_id":"<urn:uuid:0484b027-5adb-4e50-b9c7-42609dac8f49>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00069.warc.gz"}
Types of Limit Explained | Ablison Types of Limit Explained Limits are a foundational concept in calculus, essential for understanding continuity, derivatives, and integrals. They describe the behavior of functions as they approach a particular point or infinity. Understanding the various types of limits is crucial for solving complex mathematical problems and analyzing functions. Yes, different types of limits exist; each serves a unique purpose and provides insight into the behavior of mathematical functions under specific conditions. This article will explore the various types of limits, including finite limits, infinite limits, one-sided limits, limits at infinity, indeterminate forms, and their applications in calculus. Understanding Limits in Mathematics A limit in mathematics refers to the value that a function approaches as the input approaches a particular point. Formally, the limit of a function ( f(x) ) as ( x ) approaches ( c ) is denoted as ( lim_{x to c} f(x) ). This concept allows mathematicians to analyze the behavior of functions at points where they may not be explicitly defined. For example, the function ( f(x) = frac{x^2 – 1}{x – 1} ) is not defined at ( x = 1 ) but has a limit of 2 as ( x ) approaches 1. Limits are integral to the understanding of continuity in functions, whereby a function is continuous at a point ( c ) if the limit as ( x ) approaches ( c ) equals the function value at that point. According to the Intermediate Value Theorem, if a function is continuous on a closed interval, it takes on every value between its endpoints. This highlights the importance of limits in determining the nature of functions. In terms of notation, the concept of limits has evolved, with symbols and terminology established in the 18th century. The epsilon-delta definition, which formalizes the concept of a limit, was introduced by mathematicians such as Augustin-Louis Cauchy and Karl Weierstrass. This rigorous approach paved the way for modern calculus, enabling precise definitions of continuity and derivatives. In practical applications, limits are used to understand rates of change. For instance, the derivative of a function is defined as the limit of the average rate of change as the interval approaches zero. This connection between limits and derivatives is one of the cornerstones of differential calculus, reinforcing the significance of limits in mathematics. The Concept of a Finite Limit A finite limit occurs when a function approaches a specific value as the input approaches a certain point. For example, if ( f(x) = x^2 ), then ( lim_{x to 2} f(x) = 4 ). This indicates that as ( x ) gets closer to 2, the value of ( f(x) ) approaches 4. Finite limits can be evaluated using various techniques, including direct substitution, factoring, and the use of limit laws. The limit laws govern the operations of limits and allow for the simplification of complex expressions. For instance, the sum, difference, product, and quotient of limits can be calculated by applying the relevant limit laws. These laws facilitate the evaluation of limits without needing to compute them directly, making it easier for mathematicians to solve problems involving continuous Finite limits are particularly important in determining the behavior of functions at specific points, especially in calculus. They allow for the analysis of continuity, as previously mentioned, and provide a framework for calculating derivatives. A function is considered differentiable at a point if it has a finite limit that corresponds to the slope of the tangent line at that point. Moreover, finite limits play a crucial role in the Fundamental Theorem of Calculus, which connects differentiation and integration. This theorem states that if a function is continuous on a closed interval, then the integral of its derivative over that interval can be computed using the values of the original function. As such, finite limits are essential for a comprehensive understanding of Exploring Infinite Limits Infinite limits arise when a function approaches infinity or negative infinity as the input approaches a certain value. For example, if ( f(x) = frac{1}{x} ), then ( lim_{x to 0} f(x) = infty ). This indicates that as ( x ) approaches zero, the value of ( f(x) ) becomes indefinitely large. Infinite limits can also occur at specific points, typically associated with vertical asymptotes in graphs of functions. Understanding infinite limits is crucial for analyzing the behavior of functions that do not converge to a finite value. These limits often indicate that a function is unbounded near a specific point, leading to implications for continuity and differentiability. For instance, a function with an infinite limit cannot be continuous at that point, as it does not approach a specific value. Infinite limits are also significant in the context of horizontal asymptotes, where the limit of a function as ( x ) approaches infinity is evaluated. For example, if ( f(x) = frac{2x}{x + 1} ), then ( lim_{x to infty} f(x) = 2 ). This indicates that as ( x ) becomes exceedingly large, the function stabilizes around a value of 2. Understanding these limits is essential for sketching the overall behavior of functions as they extend toward infinity. In calculus, infinite limits frequently arise in applications involving rates of change, especially when dealing with rational functions. Techniques such as L’Hôpital’s Rule can be applied to evaluate these limits when faced with forms that yield infinity. Overall, infinite limits provide critical insights into the long-term behavior of functions, enriching the analysis of mathematical One-Sided Limits Defined One-sided limits describe the behavior of a function as the input approaches a specific point from one side only: either the left or the right. The left-hand limit, denoted as ( lim{x to c^-} f(x) ), assesses the value of ( f(x) ) as ( x ) approaches ( c ) from the left. Conversely, the right-hand limit, ( lim{x to c^+} f(x) ), evaluates the function as ( x ) approaches ( c ) from the right. These limits are particularly useful for analyzing functions at points of discontinuity. In cases where the left-hand limit and right-hand limit are equal, the overall limit exists. However, if they differ, the overall limit does not exist, indicating a jump or removable discontinuity in the function. For example, consider the function ( f(x) ) defined as: f(x) = 1 & text{if } x < 0 2 & text{if } x geq 0 Here, ( lim{x to 0^-} f(x) = 1 ) and ( lim{x to 0^+} f(x) = 2 ), hence ( lim_{x to 0} f(x) ) does not exist. One-sided limits also play a vital role in defining the derivative of a function. The derivative at a point can be expressed in terms of one-sided limits, leading to the concept of a derivative existing only if both one-sided limits converge to the same value. This definition allows for the characterization of points where a function may be differentiable or exhibit sharp turns. In practical applications, one-sided limits are essential for understanding piecewise functions, where different expressions define the function over various intervals. By evaluating one-sided limits at critical points, mathematicians can gain insights into the continuity and differentiability of these functions. Limits at Infinity Clarified Limits at infinity consider the behavior of a function as the input approaches infinity or negative infinity. This type of limit is crucial for understanding the long-term trends of functions, particularly rational functions. For example, to evaluate ( lim{x to infty} frac{3x^2 + 4}{2x^2 – 5} ), one can divide the numerator and denominator by ( x^2 ), yielding ( lim{x to infty} frac{3 + frac{4}{x^2}}{2 – frac{5}{x^2}} = frac{3}{2} ). Thus, the function stabilizes around ( frac{3}{2} ) as ( x ) grows indefinitely. In many cases, limits at infinity help identify horizontal asymptotes of functions. If a function approaches a finite limit as ( x ) approaches infinity, the line ( y = L ) (where ( L ) is the limit) is a horizontal asymptote. However, if the limit approaches infinity, the function increases without bound, indicating that there is no horizontal asymptote. Rational functions often exhibit distinct behaviors at infinity, leading to the need for classification based on the degrees of the polynomial in the numerator and denominator. If the degree of the numerator is greater than the degree of the denominator, the limit at infinity is infinity. Conversely, if the degree of the denominator exceeds that of the numerator, the limit approaches zero. Understanding these classifications allows for the effective analysis of a function's overall behavior. Limits at infinity are also relevant in applied mathematics and physics, where they can represent long-term behaviors of systems. For example, in population dynamics or economic models, limits at infinity provide crucial insights into the sustainability of growth rates. Overall, limits at infinity offer valuable information on the asymptotic behavior of functions. Indeterminate Forms and Limits Indeterminate forms occur when limits yield ambiguous results, typically represented as 0/0 or ∞/∞. These forms require further analysis to evaluate the limits correctly. For instance, in the case of ( lim{x to 0} frac{sin(x)}{x} ), direct substitution leads to the form 0/0. To resolve this, one can apply l'Hôpital's Rule, which states that if ( lim{x to c} f(x) = 0 ) and ( lim{x to c} g(x) = 0 ), then ( lim{x to c} frac{f(x)}{g(x)} = lim_{x to c} frac{f'(x)}{g'(x)} ), provided the limit exists. Indeterminate forms can also arise in other contexts, such as when evaluating limits involving exponents. For example, ( lim_{x to 0} (1 + x)^{frac{1}{x}} ) leads to the form 1^∞. In this case, it can be transformed using logarithms to facilitate evaluation, leading to the conclusion that the limit approaches ( e ). Identifying indeterminate forms is crucial for accurate limit evaluation. In calculus, recognizing these forms helps mathematicians apply appropriate techniques to resolve ambiguities. Common strategies include factoring, rationalization, and the substitution of equivalent forms. Indeterminate forms highlight the complexity in limit evaluation and emphasize the importance of a rigorous approach to analysis. By employing appropriate methods, mathematicians can derive meaningful conclusions from limits that initially appear ambiguous or undefined. Applications of Limits in Calculus Limits are fundamental in calculus, providing the foundation for key concepts such as derivatives and integrals. The derivative, defined as the limit of the average rate of change, quantifies how a function changes at a specific point. For example, the derivative ( f'(x) = lim_{h to 0} frac{f(x + h) – f(x)}{h} ) illustrates how limits are utilized to measure instantaneous rates of change, crucial for various applications in engineering, physics, and economics. Another essential application of limits is in the evaluation of definite integrals. The integral of a function can be interpreted as the limit of Riemann sums, which approximate the area under a curve. As the number of subintervals increases and their widths decrease, the sum converges to the exact area. This application is foundational in determining areas, volumes, and other critical measures across various disciplines. Limits also facilitate the study of series and sequences in calculus. The convergence or divergence of a series is typically determined by evaluating limits. For instance, the limit of the ratio or difference between terms can indicate whether a series converges to a specific value or diverges indefinitely. This analysis is critical in subjects such as numerical analysis and applied mathematics, where series approximation is frequently employed. Lastly, limits play a role in advanced topics such as multivariable calculus and differential equations. In multivariable calculus, limits help define partial derivatives and gradients, while in differential equations, limits are used to analyze solutions' stability and behavior over time. Overall, limits are indispensable tools in calculus, underpinning numerous mathematical concepts and Summary of Limit Types In summary, limits are a crucial aspect of calculus, offering insights into function behavior as they approach specific points or infinity. The main types of limits include finite limits, which describe the value a function approaches; infinite limits, indicating unbounded behavior; one-sided limits, evaluating function behavior from one direction; limits at infinity, assessing long-term function trends; and indeterminate forms that require special techniques for evaluation. Each type serves distinct purposes in mathematical analysis. Understanding these various types of limits is essential for mastering calculus and applying it to solve complex problems in mathematics, science, and engineering. The principles of limits form the basis for derivatives and integrals, making them foundational to the study of continuous functions. As such, grappling with the subtleties of limits is critical for anyone pursuing advanced The rigorous study of limits has led to numerous applications in diverse fields such as physics, economics, and engineering. By providing a framework for understanding how functions behave under specific conditions, limits empower mathematicians and scientists to model real-world scenarios effectively. Ultimately, the exploration of limit types enriches the understanding of calculus and enhances analytical skills, enabling individuals to tackle increasingly complex mathematical challenges with
{"url":"https://www.ablison.com/types-of-limit-explained/","timestamp":"2024-11-10T05:19:29Z","content_type":"text/html","content_length":"142560","record_id":"<urn:uuid:5f37f688-b771-4e2e-a17c-7a6576837e8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00686.warc.gz"}
Functional Analysis and Applications Group Next talk of the Seminar (17:00, Room Sousa Pinto) 26/11/2024: “On the solvability theory of singular integral operators with non-Carleman shift” Rui Marreiros (Universidade do Algarve) Next talk of the Seminar (16:00, Room Sousa Pinto) 29/11/2024: “MDM property” Michal Wojciechowski (Institute of Mathematics, Polish Academy of Sciences, Warsaw, Poland) Classical result of Mitiagin and Mirkhil - DeLeeuw says that the uniform norm of a homogeneous partial derivative of a function with compact support can be majorized by the norms of collections of other homogeneous derivatives (of the same order) iff its symbol is a linear combination of the symbols of the collection. The simplest case when it does not hold is a mixed second derivative of function of two variables which can be unbounded while second pure derivatives are bounded. In the talk I will show the analog of this fact for analytic functions of two complex variables defined on the bi-disc. (the compact support condition is replaced here by boundedness of all derivatives of smaller order). Our construction is based on Rudin-Shapiro polynomials. It is a joint work with Krystian Kazaniecki. This Seminar is supported in part by the Portuguese Foundation for Science and Technology (FCT - Fundação para a Ciência e a Tecnologia), through CIDMA - Center for Research and Development in Mathematics and Applications, within project UIDB/04106/2020 (https://doi.org/10.54499/UIDB/04106/2020).
{"url":"http://seminargafa.web.ua.pt/","timestamp":"2024-11-13T07:31:15Z","content_type":"text/html","content_length":"69526","record_id":"<urn:uuid:befae8a1-d5e3-4380-a17b-565dd8e00d0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00444.warc.gz"}
Forces and its Effects In physics, a force is any interaction that, when unopposed, will change the motion of an object or its shape. In this section we are going to develop this idea which is, for instance, realated to the technology we use to travel by bike or car, but also, behind the everyday actions such as pushing the shopping trolley. In order to do this we will study: Are you ready for it? Concept of force Imagine pushing a resting ball on a pool table with yor finger. Probably your intuition tells you that you are giving “strength” to the ball. In a more formal way we can say that we are applying a force to the ball. What happens next? Most likely our ball starts moving, but if it were a water balloon instead, it could happen that it deformed and our finger was "swallowed" by the balloon. Effects of force When a pool game starts, you apply a force on the white ball by using the cue. This force ends up spreading to the rest of balls, initially on repose, setting them all in motion. We define force as any interaction that, when unopposed, will change the motion of an object, or, will cause a deformation on it. A force is the interaction of a body with something external to it, as well as a vector characterised by magnitude (size) and direction. The unit of the International System of measure is the Newton (N). One newton is the force required to cause a mass of one kilogram (kg) to accelerate at a rate of one meter per second squared (m/s^ 2) in the absence of other force-producing effects. Historically the study of bodies in motion and its cause has fascinated mankind since ancient times. Aristotle (284 – 322 a.C.), one of the most important wise men from Ancient Greek, he was one the forefathers of this study, his ideas were maintained during the whole Middle Ages. Then, Galileo Galilei (1564 – 1642) was able to describe motion mathematically (Principal of Relativity) but he did not analyse the causes of it. Years later Isaac Newton (1643 – 1727) was who, based on Galileo ́s ideas, define the causes of motion: forces. Other units of measurement Beside Newton, there are other units of measure less used: • dyne (dyn). The standard centimeter-gram-second (CGS) unit of force, equal to the force that produces an acceleration of one centimeter per second per second (cm/s^2) on a mass of one gram. 1 dyn = 10^-5 N • kilopond (kp) or kilogram – force (kg[f]). It is a gravitational metric unit of force. It is equal to the magnitude of the force exerted on one kilogram of mass in a 9.8m/s^2 gravitational field (standard gravity, a conventional value approximating the average magnitude of gravity on Earth). 1 kp = 9.8 N • poundal (pdl). A unit of force equal to that required to give a mass of one pound an acceleration of one foot per second per second. 1 pdl = 0.1382550 N • pound - force (lb[f]). The pound-force is about equal to the gravitational force applied on a mass of one pound (0.45359237 kg.) on the surface of Earth. 1 lb[f] = 4,448222 N • KIP. It equals 1000 pounds-force, used primarily by American architects and engineers to measure engineering loads. Although uncommon, it is occasionally also considered a unit of mass, equal to 1000 pounds: 1 KIP = 1000 lbf; 1 KIP = 4448,222 N Representation of forces Previously, we have defined force as a vector magnitude, so they are represented as vectors. Indeed, as you can observe on the next image, the direction of the force has to be into account in order to predict its effects. Observe that one of the forces characteristics has been the initial point, also known as point of application. It is the point where force is applied, so the effects it produces , on a body may vary depending on it. In any case, on this level we will focus on punctual objects, so, applying a force to a body it is to apply it on its only point. On the other hand, forces as vectors may be decomposed. This will let us, for instance, to observe the effects produced in space in each dimension (axis) separatly. Effects of the force Forces appear from interactions between bodies. Watch the next image. According to the distance of interaction between bodies we can distinguish two types: • Interaction by contact. Forces emerged when two or more bodies are in contact. For example when there os a crash or when a door is pushed. • Remote interaction. The bodies, even if they are not in contact, exert a strength on the rest of bodies. For instance, the force of attraction of a magnet towards something made of metal, or the gravity itself the Earth exerts on the moon and viceversa. Furthermore, the effects produced by the forces can be summarised in two kinds: • Dynamics. They produce changes in the velocity (modulus and direction) of the body they act on. For instance, if the same strength on the shopping trolley is maintained and exerted in time, the speed of the trolley will be gradually increased. Newton ́s Law are particularly useful to understand the dynamic effects of the forces. On the other hand, we have to bear in mind that should the direction of the force exerted on a free body does not go throughout its centre of gravity, it produces a rotational motion (turn) and a driving motion (movement). This happens when a ball is kicked by the edge and not by the middle. For more information about this subject go to the subject dedicated to rigid solid Newton ́s second lawENLACE states that the acceleration of an object is dependent upon two variables - the net force acting upon the object and the mass of the object. • Elastic. They produce changes on the body structure they act on. For example, to forge a sword different kind os forces are exerted on a incandescent piece of steel.
{"url":"https://www.fisicalab.com/en/section/concept-of-force","timestamp":"2024-11-03T21:48:37Z","content_type":"text/html","content_length":"69480","record_id":"<urn:uuid:79684669-d346-49d4-ae7b-e6e8e43c2ca4>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00133.warc.gz"}
Let´s keep moving on the pi-ramid - pi-ramid.com Symmetric digits of pi (black and lightblue) placed in the pi-ramid´s 177 steps. Yes – I followed my intuition. It was leading me while bringing out symmetric dots ( black colour). The decision which dot should be could be considered as an act done high-handed. You may think this. – But isn´t it amazing that the symmetric digits within the pi-ramid are enabling and permitting – even forcing on – this picture? Isn´t it astonishing to find so much symmetric digits within the pi-ramid so a plenty of artwork can be done? Isn´t this a sensation? You could mean that there is a visual code within pi.
{"url":"https://pi-ramid.com/blog/going-on-the-pi-ramid-und-weiter-gehts-auf-der-pi-ramide/","timestamp":"2024-11-12T02:09:26Z","content_type":"text/html","content_length":"16671","record_id":"<urn:uuid:8bf47127-4f4f-4d38-90d8-c57da4b96255>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00396.warc.gz"}
Stress Analysis of Crane Hook and Validation by Photo-Elasticity Stress Analysis of Crane Hook and Validation by Photo-Elasticity () 1. Introduction Crane Hooks are highly liable components that are typically used for industrial purposes. It is basically a hoisting fixture designed to engage a ring or link of a lifting chain or the pin of a shackle or cable socket and must follow the health and safety guidelines [1-4]. Thus, such an important component in an industry must be manufactured and designed in a way so as to deliver maximum performance without failure. Thus, the aim of the project is to study the stress distribution pattern of a crane hook using finite element method and verify the results using Photo elasticity. 2. Failure of Crane Hook To minimize the failure of crane hook [5], the stress induced in it must be studied. Crane is subjected to continuous loading and unloading. This causes fatigue of the crane hook but the fatigue cycle is very low [6]. If a crack is developed in the crane hook, it can cause fracture of the hook and lead to serious accident. In ductile fracture, the crack propagates continuously and is more easily detectible and hence preferred over brittle fracture. In brittle fracture, there is sudden propagation of the crack and hook fails suddenly [7]. This type of fracture is very dangerous as it is difficult to detect. Strain aging embrittlement [8] due to continuous loading and unloading changes the microstru-cture. Bending stress and tensile stress, weakening of hook due to wear, plastic deformation due to overloading, and excessive thermal stresses are some of the other reasons for failure. Hence continuous use of crane hooks may increase the magnitude of these stresses and ultimately result in failure of the hook. 3. Methodology of Stress Analysis The analysis is carried out in two phase: 1) Finite element stress analysis of an approximate (acrylic) model and its verification by photo elasticity theory 2) Analytical analysis assuming hook as a curved beam and its verification using Finite element analysis of the exact hook. To establish the finite element procedure a virtual model similar to the acrylic mode is prepared in ANSYS and the results of stress analysis are cross checked with that of photo elasticity. After establishing the procedure a virtual model similar to actual crane hook sample is created using CAD software and the results of finite element analysis are now verified with that of analytical method. 4. Finite Element Analysis (FEA) Finite element method [9,10] has become a powerful tool for numerical solution of a wide range of engineering problems. For the stress analysis of the acrylic model of crane hook the outer geometry or profile of the model is drawn in ANSYS 11.0. It is then extruded to 9.885 mm to form a 3-D model of hook. Here 9.885 is the average thickness of the model. Material properties and element type are fed and the model is meshed using smart size option with the global size of the element as 3. Loading and constraint are applied to the meshed model as shown in the Figure 1 and the finite element model is then solved. Principal stress and von mises stress patterns are thus obtained as shown in Figure 2. 5. Theory of Photo Elasticity For the verification of the results obtained from FEM, the experimentation is conducted using the concept of photo elasticity. The concept is used to determine stress distribution and stress concentration factors in irregular geometries. The method is based on the property of birefringence, which is exhibited by certain transparent materials. Birefringence is a property by virtue of which a ray of light passing through a birefringent material experiences two refractive indices. Thus, a crane hook model made out of such material is selected for the. Figure 2. Principle stresses in the model. study. The model has geometry similar to that of the structure on which stress analysis is to be performed. This ensures that the state of stress in the model is similar to that of the structure. 5.1. Stress Optic Law When plane polarized light passes through a photo elastic material, it resolves along the two principal stress directions and each of these components experiences different refractive indices [11]. The difference in the refractive indices leads to a relative phase retardation between the two component waves. The magnitude of the relative retardation is given by the stress optic law: where R is the induced retardation, C is the stress optic coefficient, t is the specimen thickness, σ[11] is the first principal stress, and σ[22] is the second principal stress. The two waves are then brought together in a polariscope set up. Thus, the state of stress at various points in the material can be determined by studying the fringe pattern. Calibration of disc is done to find the material fringe value f[σ]. An acrylic model of disc is taken and subjected to compressive load in the circular polariscope setup. Figure 3 shows fringe pattern on a loaded disc. Values of loads are noted down for various fringe orders Using the formula f[σ] = 8P/πDN = 11.15 where P = Load applied at particular fringe value, N = Fringe order at corresponding load D = diameter of the disc = 7.01 cm Figure 3. stress pattern of photo elastic model under sodium light. Stress magnitude at a point is given by: (σ[1] – σ[2])/2 = N f[σ]/t where σ[1] = major principal stress, σ[2] = minor principal stress, t = thickness of hook. 6. Results For the approximate model of crane hook, stresses induced during finite element analysis are compared with that of photo elasticity experiment. For the acrylic model of crane hook the results are as ANSYS v/s Experimental As shown in Figure 4, maximum principal stress Figure 4. Stress distribution pattern for acrylic model (a) Using fem; (b) Using photo-elasticity. value obtained from ANSYS = 12.35 N/mm^2 while that obtained experimentally = 11.121 N/mm^2. The results are closely in agreement with a very small percentage error = 5.76%. Possible reasons for variation might be due to the fact that it is difficult to find the magnitude of stress exactly on the plane of the fringe closest to inner surface and thus the value 12.35 may not be accurate. Figure 5 shows the exact location of maximum stress on the approximate model of crane hook as obtained from ANSYS software. The above results confirm that the FEA procedure is well established and can be used for complex and accurate models also. Hence in the second phase of the study, analytical calculations are carried out for the exact model of crane hook and the results are validated from that of ANSYS. 7. Analytical Method Since the crane hook is a curved beam [12], simple theory of bending for shallow, straight beam does not yield accurate results. Stress distribution across the depth of such beam, subjected to pure bending, is non linear (to be precise, hyperbolic) and the position of the neutral surface is displaced from the centroidal surface towards the centre of curvature. In case of hooks as shown in Figure 6, the members are not slender but rather have a sharp curve and their cross-sectional dimensions are large compared to their radius of curvature. Figure 5. Variations due to limitations. Figure 6. Curved beam with its cross section area. The strain at a radius r = The strain is clearly zero at the neutral axis and is maximum at the outer radius of the beam. Using the relationship of stress/strain = E, the normal stress is simply. The location of the neutral axis is obtained by equating the product of the normal stress and the area elements over the whole area to 0 reduces to ThereforeThe stress resulting from an applied bending moment is derived from the fact, that the resisting moment is simply the integral of the product of moment arm over whole section from the neutral axis and σdA. The maximum stress occurs at either the inner or outer surface. The centroid of the section is The maximum stress occurs at either the inner or outer surface: Stress at inner surface Stress at outer surface The curved beam flexure formula is in reasonable agreement for beams with a ratio of curvature to beam depth (r[c]/h) > 5 (rectangular section). As this ratio increases, the difference between the maximum stress calculated by curved beam formula and the normal beam formula reduces. The above equations are valid for pure bending. In case of crane hooks, the bending moment is due to forces acting on one side of the section under consideration. For calculations the area of cross section is assumed to be trapezoidal [13]. Values of stresses as shown in Figure 7 are found out at the A-A section as it is the section where maximum stress is induced. 8. Finite Element Method for the Exact Model A crane hook prepared by forging, as shown in Figure 8(a), is procured for the modeling in ANSYS software. Using digital Coordinate Measuring Machine (CMM) the cloud points are obtained and the model is prepared in Pro-E software. The virtual model prepared in Pro-E software is imported in ANSYS environment. Following the steps of FEM as discussed earlier the stress analysis is conducted for the actual model in ANSYS environment and the results are obtained. Figure 8(b) shows the magnitude and location of stress. 9. Results The induced stresses as obtained from analytical calculations, explained in the Section 7, are compared with results obtained by FEA software. ANSYS v/s analytical Max value obtained analytically=12.35 N/mm^2 while value obtained from ANSYS = 13.372 N/mm^2 The results are in close harmony with a small percentage error = (13.372 – 12.35)/12.35 = 8.26% Possible reasons for variation might be the due to the assumption that 1) Loading is considered as point loading in analytical calculation while it is taken on a bunch of nodes in ANSYS. 2) Cross sectional area is assumed to be trapezoidal and 3) Plane sections remain plane after deformation. Using analytical calculations the stress variation yields the results as shown in Figure 9. Maximum tensile stress is 150.72 N/mm^2 on the inner surface of the crane hook and on the outer surface of the hook, compressive stress is 44.23 N/mm^2. As shown in Figure 9, the stress goes on decreasing from a max value to zero and again increases from zero to a certain value. Innermost point of section: • Max stress by ANSYS= 135.46 N/mm^2; Max stress analytically= 150.72 N/mm^2 • % error= (150.72 – 135.46)/135.46 = 10.12% • Outermost point of A-A section: • Stress by ANSYS= 43.728 N/mm^2; Stress analytically = 44.23 N/mm^2; • % error = (44.23 – 43.728)/43.728 = 1.01 % Figure 8. (a) Actual crane hook; (b) Stresses obtained using fem. Figure 9. Variation of Stress with depth for the actual model. • Reasons for variation: Various assumptions made during the analytical calculations (discussed earlier). Profile of the hook obtained from Pro-E Modeling software may not be exactly the same as actual one. 10. Conclusions The complete study is an initiative to establish a FEA procedure, by validating the results, for the measurement of stresses. For reducing the failures of hooks the estimation of stresses, their magnitudes and possible locations are very important. Analytical calculation becomes complex as the newer designs are too complicated. Suggestions to reduce failure Manufacturing process: Forging is preferred to casting as the crane hooks produced from forging are much stronger than that produced by casting. The reason been in casting the molten metal when solidifies, it has some Figure 10. (a) Hook with Material Removed; (b) Hook with Material Removed. residual stresses due to non uniform solidification. Thus casted crane hooks cannot bear high tensile loads. Grain size: The stress bearing capacity depends on the homogeneity of the material i.e. the relative sizes of the grains in various areas of the component. Smaller the grain size better is the stress bearing capacity. So grain refinement process such as normalizing is advisable after forging. Processes such as welding should be avoided as they increase the stress concentration points which eventually lead to failure. Removal of metal from the hook body is not feasible as it increases the amount of stresses in the hook. This is validated by the following illustration: It is clear from the Figure 10(a) that removal of a small amount of material from minimum stress concentration areas increases the stress slightly though reducing the cost of material. The Figure 10(b) validates the fact that when considerable amount of material is removed stresses increase by a good enough margin which is not at all feasible. Design improvement: From the stress analysis we have observed the cross section of max stress area. If the area on the inner side of the hook at the portion of max stress is widened then the stresses will get reduced. Analytically if the thickness is increased by 3 mm, stresses are reduced by 17%. Thus the design can be modified by increasing the thickness on the inner curvature so that the chances of failure are reduced considerably. σ = normal stress; ε = strain; E = Modulus of Elasticity; A = area of whole section; e = eccentricity; M = Bending Moment; y = distance from neutral axis; c[o] = distance of neutral axis from outer surface; c[i] = distance of neutral axis from inner surface; r = radius of curvature at any distance r[n] = radius of curvature at neutral axis; r[c] = radius of curvature at centroidal axis; r[o] = radius at the outer surface; r[i] = radius at the inner surface
{"url":"https://scirp.org/journal/paperinformation?paperid=7334","timestamp":"2024-11-07T22:17:55Z","content_type":"application/xhtml+xml","content_length":"102775","record_id":"<urn:uuid:b0810b76-d93c-449b-9b2b-7d7a9e6388e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00704.warc.gz"}
The strange row of numbers The strange row of numbers Just for fun, a mathematician once made up a puzzle about rabbits. It went like this. Suppose that you have a boy rabbit and a girl rabbit. Each month, your pair of rabbits has a boy rabbit and a girl rabbit. Suppose each pair of baby rabbits is grown up in just two months. Then they have a pair of baby rabbits each month. How many rabbits will there be at the end of a You can figure it out quite easily for a while. First, there are two By the end of one month, the two rabbits have a pair of baby rabbits. So now you have four rabbits. By the end of the second month, the first two rabbits have another pair of babies. Now there are six rabbits altogether. In the third month, things begin to get harder to figure out. The first pair of rabbits have another pair of babies. That makes eight rabbits. But now the baby rabbits born in the first month are old enough to have their first pair of babies. So, altogether that makes ten rabbits. It gets even more complicated during the fourth month. The first rabbits have another pair of babies, making twelve rabbits. The second pair of rabbits also has another pair of babies, making fourteen rabbits. And now the pair of rabbits born in the second month are old enough to have a pair of babies. That makes a total of sixteen rabbits in the fourth As you can see, things are now going to get harder and harder to figure Actually, there’s a way of finding out the answer without counting up any more pairs of rabbits! It’s hidden in the first five numbers we got by counting up all the rabbits. Write the five numbers—2, 4, 6, 10,16—on a piece of paper. Can you see the secret? If you add up any two numbers that are next to one another, the sum is the same as the following number! Add up the first two numbers, 2 and 4, and you get the third number, 6. Add the second and third numbers, 4 and 6, and you get the fourth number, 10. And add the third and fourth numbers, 6 and 10, and you get 16, which is the fifth number! So you see, you can find out the answer to the rabbit problem simply by adding up two numbers at a time. Add the last two numbers in the column—10 and 16—and then put the new number you get at the end of the column. Then add up the last two numbers you now have. Keep doing this until you have thirteen numbers in the column. The first number—2—is the number of rabbits that you started with. The next twelve numbers—one number for each month of the year—show how many rabbits you had by the end of each month. The last number is the number of rabbits at the end of the year. A series of numbers of this sort is called a sequence, which means “a group of things that are connected together.” The mathematician who worked out the rabbit puzzle discovered a number sequence that goes 0, 1,1, 2, 3, 5, 8, 13, 21, 34, and so on. As you can see, any two numbers that are next to each other add up to the next number. Thus 0 and 1 are 1, 1 and 1 are 2, 1 and 2 are 3, and so on. This sequence is called the Fibonacci sequence (fee buh [nah]{.smallcaps} chee [see]{.smallcaps} kwuhns), after the man who discovered it. Now, there’s something really strange about the Fibonacci sequence. Nature uses it! The bumps upon a pineapple, the scales on a pine cone, the leaves on the stem of a rose bush, and the little bumps in the head of a daisy are all arranged in the Fibonacci sequence! For example, if you look at a daisy you’ll see that all of the little yellow bumps make up winding rows called spirals. Some of the spirals go to the left and some go to the right. If you count the spirals that go to the left, you’ll see there are 21 of them. Count the spirals that go to the right and you’ll see there are 34. And the numbers 21 and 34 are next to one another in the Fibonacci If you count the spirals on a pineapple, you’ll find there are 8 going one way and 13 going another—and 8 and 13 are next to one another in the Fibonacci sequence. Count the spirals on a pine cone and you’ll get the numbers 5 and 8, and they, too, are next to one another in the Fibonacci sequence. It looks as if nature uses mathematics, too! (above) The pattern of spirals shows how the bumps on a giant sunflower are arranged. There are 34 spirals in one direction and 55 spirals in the other. The numbers 34 and 55 are next to each other in the Fibonacci sequence. Please login to comment 0 Comments Inline Feedbacks View all comments | Reply
{"url":"https://worldofchildcraft.com/mathemagic/the-strange-row-of-numbers/","timestamp":"2024-11-11T16:07:41Z","content_type":"text/html","content_length":"419210","record_id":"<urn:uuid:ebb2961d-e481-43f7-9118-7e01d439fd0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00647.warc.gz"}
104,910 research outputs found Let $R=k[x_1,..., x_n]$ be a polynomial ring over a field $k$ of characteristic $p>0,$ let \m=(x_1,..., x_n) be the maximal ideal generated by the variables, let $^*E$ be the naturally graded injective hull of R/\m and let $^*E(n)$ be $^*E$ degree shifted downward by $n.$ We introduce the notion of graded $F$-modules (as a refinement of the notion of $F$-modules) and show that if a graded $F$-module \M has zero-dimensional support, then \M, as a graded $R$-module, is isomorphic to a direct sum of a (possibly infinite) number of copies of $^*E(n).$ As a consequence, we show that if the functors $T_1,...,T_s$ and $T$ are defined by $T_{j}=H^{i_j}_{I_j}(-)$ and $T=T_1\circ...\circ T_s,$ where $I_1,..., I_s$ are homogeneous ideals of $R,$ then as a naturally graded $R$-module, the local cohomology module H^{i_0}_{\m}(T(R)) is isomorphic to $^*E(n)^c,$ where $c$ is a finite number. If $\text{char}k=0,$ this question is open even for $s=1.$Comment: Revised result in section In this note we derive the slow-roll and rapid-roll conditions for the minimally and non-minimally coupled space-like vector fields. The function $f(B^{2})$ represents the non-minimal coupling effect between vector fields and gravity, the $f=0$ case is the minimal coupling case. For a clear comparison with scalar field, we define a new function $F=\pm B^{2}/12+f(B^{2})$ where $B^{2}=A_{\mu}A^{\ mu}$, $A_{\mu}$ is the "comoving" vector field. With reference to the slow-roll and rapid-roll conditions, we find the small-field model is more suitable than the large-field model in the minimally coupled vector field case. And as a non-minimal coupling example, the F=0 case just has the same slow-roll conditions as the scalar fields.Comment: no figur
{"url":"https://core.ac.uk/search/?q=author%3A(Yi%20Zhang)","timestamp":"2024-11-05T08:55:47Z","content_type":"text/html","content_length":"104817","record_id":"<urn:uuid:211a1bfd-edca-4914-98a0-0c67c5d397ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00484.warc.gz"}
An introduction to terahertz time-domain spectroscopic ellipsometry In the past, terahertz spectroscopy has mainly been performed based on terahertz time-domain spectroscopy systems in a transmission or a window/prism-supported reflection configuration. These conventional approaches have limitations in regard to characterizing opaque solids, conductive thin films, multiple-layer structures, and anisotropic materials. Ellipsometry is a self-reference characterization technique with a wide adaptability that can be applied for nearly all sample types. However, terahertz ellipsometry has not yet been widely applied, mainly due to the critical requirement it places on the optical setting and the large discrepancy with regard to traditional terahertz spectroscopy and conventional optical ellipsometry. In this Tutorial, we introduce terahertz time-domain spectroscopic ellipsometry from the basic concept, theory, optical configuration, error calibration to characterization methods. Experimental results on silicon wafers of different resistivities are presented as examples. This Tutorial provides key technical guidance and skills for accurate terahertz time-domain spectroscopic ellipsometry. Polarization is a fundamental physical property of light, which can be altered upon interaction with a medium. In other words, the polarization state of light contains information about the interacted sample. Polarimetry is a technique in which the properties of a sample are retrieved from changes in the polarization state of light. This can be done for either transmission or reflection at oblique incident angles, while reflection-type polarimetry is more commonly applied due to its ability to measure opaque samples. Reflection polarimetry is a branch of ellipsometry, although in many cases, they are synonymous. In this Tutorial, we will use the term ellipsometry as we will focus on sample characterization in a reflection configuration. To correlate the polarization change with the sample properties using the method of ellipsometry, one compares the reflection coefficients and expresses the ratio as represents complex values). When the incident light has equal components, this ratio is correlated with the complex reflected electric fields in the , respectively, as follows: where tanΨ and Δ represent the magnitude ratio and phase difference, respectively, of are the incident electric fields in directions, respectively. By establishing an optical model to describe the light–matter interaction, , and, hence, , can be predicted by Fresnel coefficients as a function of the sample properties, such as , where is the complex refractive index of the sample. In contrast, characterization is done by solving the equation either analytically, numerically, or by model fitting. Therefore, ellipsometry extracts the sample properties by self-referencing the two orthogonal electric fields from the sample, without the need for an extra reference signal. The first use of the term “ellipsometry” dates back to 1945.^2 At the present time, spectroscopic ellipsometry has become a mature characterization technology in the infrared (IR)–UV range, with many commercial ellipsometer products available. It is straightforward to consider extending this technique further to the terahertz (THz) wavelengths, which is desirable in this regime compared to conventional THz spectroscopy methods, and it was first demonstrated by Nagashima and Hangyo in 2001 based on THz-TDS (time-domain spectroscopy).^3 Conventional THz spectroscopy uses a transmission or window/prism-based reflection configuration, which allows the study of transparent bulk materials, most liquids, and soft tissues. However, absorptive solids and conductive thin films, such as most doped semiconductors, amorphous inorganic materials, and conductors, are challenging to be precisely characterized. This is because in transmission, absorptive solids attenuate the light very quickly while conductive thin films are usually too thin to cause sufficient phase change. In reflection, these solid samples cannot contact well with a supporting medium that an uncontrollable air gap will be induced. The noncontact reflection scheme, which compares the reflection from the sample with the reflection from a reference medium, is rarely used for characterization because of the “phase uncertainty” problem^4 caused by the height difference between the sample and the reference medium. For these samples, ellipsometry provides an ideal noncontact and self-reference modality. Actually, ellipsometry is a versatile technology that is not limited only to these sample types; it is also powerful in characterizing anisotropic samples^5 and multiple-layer structures^6 and in investigating magneto-optical effect^7–9 and polarization-sensitive devices.^10–12 THz spectroscopic ellipsometry has been done in either frequency-domain (FD) or time-domain (TD). Due to large discrepancies in the instrumentation between these two techniques, they have very different characteristics in regard to the bandwidth, detected quantities, measurement methods, and data processing methods. Actually, within the class of FD ellipsometers, there are numerous source–detector combinations, such as BWO (backward wave oscillator)–Golay cell,^9,13,14 BWO–SNA (scalar network analyzer),^15 and black-body source in FTIR (Fourier transform infrared) plus synchrotron–bolometer.^16 A majority of these THz FD ellipsometers were proposed by the group in University of Nebraska-Lincoln and their co-workers. On the contrary, the category of THz-TDS ellipsometers shares similar operation mechanisms. The studies reported so far have mostly used nonlinear crystals or photoconductive antennas (PCAs) for THz emission and detection,^17–20 and very recently, the use of air-plasma filament polarimetry has also been demonstrated.^21 As a tutorial aiming at delivering the key technical guidance and skills for a specific technique, we will only focus on ellipsometry based on THz-TDS due to the huge differences from the FD systems. THz-TDS ellipsometers have some advantages and challenges compared to THz-FD ellipsometers and commercial IR-UV ellipsometers. The first obvious merit is the coherent detection of electric fields, which simultaneously provides both the amplitude and the absolute phase in a ultrabroad bandwidth at a fast acquisition rate (e.g., >20 Hz for systems using a femtosecond laser at a repetition rate of 100 MHz). Coherent detection also simplifies the polarization control, reducing the number of measurements needed. In detail, tanΨ and Δ can be directly obtained from THz-TDS using a polarizer–sample–analyzer (PSA) scheme with p and s reflections measured using two analyzer orientations. In contrast, most FD ellipsometers are intensity-based, using measurements of at least four polarization directions, where typically the p, s, and ±45° components are needed to extract tanΨ and cosΔ, and further measurements of the left and right circular polarization components manipulated by additional phase compensators are required to determine Δ in the range of [−180°, 180°] from cosΔ.^1 Broad bandwidth is another advantage this method offers compared to THz FD sources. Typical TDS systems cover 0.1–4 THz, while most THz CW sources have a limited tunable range and require additional frequency-multipliers to achieve a broader bandwidth. Nevertheless, their excellent spectral resolution down to 1 MHz could be useful in special applications to resolve fine resonant features.^14,22 Finally, the picosecond-temporal resolution (hence, the absence of standing-wave issue^14) enables selecting specific reflective pulses in the time domain, simplifying the data processing, and may provide additional spectral information. A general downside to using THz ellipsometry is the beam divergence issue, which is physically unavoidable when the wavelength of the light increases to the quasi-optical regime. We will discuss this in detail in Sec. IV B. A particular weakness of THz-TDS ellipsometers is the high sensitivity of the single-pixel detector to the THz beam alignment. Most TDS detectors require the THz beam to be precisely focused; hence, multiple reflections from bulk samples could be out-of-focus and have different detection sensitivities, making the spectrum containing all these reflections differ from the theoretical reflection model. This can be solved in many cases by temporally removing the high-order reflections. Pulse shift is another special issue that could be associated with the time-domain detection modality, causing phase errors especially at high frequencies. In this Tutorial, we will focus on these unique characteristics of THz-TDS ellipsometry and provide key technical guidance and skills to obtain accurate ellipsometric measurements. We will discuss the theory in Sec. II, fundamental optics in Sec. III, error analysis and calibration in Sec. IV, and sample characterization in Sec. V. Finally, experimental demonstration will be presented in Sec. VI. A. Categories Ellipsometry can be classified into two categories: standard ellipsometry and generalized ellipsometry. Standard ellipsometry refers to measurements with no cross-polarization induced, that is, no s reflection is produced under a p incidence or vice versa and no depolarization occurs. Isotropic materials and uniaxial anisotropic materials whose optical axis is specially aligned parallel or perpendicular to the incident plane can be measured by using this technique. The most commonly reported THz-TDS ellipsometric measurements are carried out via standard ellipsometry. For anisotropic materials with randomly orientated optical axis, conversion between the s and p polarizations occurs, which requires generalized ellipsometry in order to measure cross-polarization reflections. If depolarization occurs, such as by materials with a wavelength-comparable surface roughness, Mueller-matrix (introduced next in Sec. II C) generalized ellipsometry has to be applied to describe partially polarized or unpolarized light. Generalized ellipsometry in the THz regime has been mostly demonstrated by FD systems,^5,14,16,23 while THz-TDS ellipsometers can also be built in this form to measure cross-polarization reflections. B. Fresnel coefficients Fresnel coefficients describe the reflection and transmission of light for an established optical model. Here, we only introduce reflection coefficients used in reflection-type ellipsometry. For bilayer structures containing two semi-infinite media 1 and 2, as shown in Fig. 1(a) , the reflection coefficients for light incident from medium 1 can be expressed as follows: ) and ) are the complex refractive indices (incident/refracted angles) in medium 1 and medium 2, respectively. Snell’s law connects using the following relation: For an optical model containing three layers, from 1 to 3, with the middle layer 2 having a finite thickness , as shown in Fig. 1(b) , the corresponding (s) reflection coefficient for light incident from medium 1 is given by is called the film phase thickness. in Eq. reflection coefficients from medium 1 to 2 and from 2 to 3, respectively, calculated by Eq. or Eq. according to the polarization state. For a stratified medium containing more than three optical layers, we can apply Eq. for every three layers iteratively. For example, for a four-layer structure containing media 1–4 with light incident from medium 1, as shown in Fig. 1(c) , we can first calculate the reflection of the lower three layers by using Eq. . The total reflection is calculated again using Eq. by replacing the lower reflection with The same principle can be applied for an arbitrary number of layers. Using these reflection coefficients, we are able to calculate the polarization state of the reflected light as a function of the sample properties [i.e., $ρ̃(ñ)$]. The sample properties are determined from the best fit between the calculated and measured results. C. Data expression An ellipsometer may contain multiple polarization-dependent optical elements. The measured quantity depends on the orientation of these components. It could be difficult to directly express the measured signal as a function of the sample properties when there are numerous polarization variations during the propagation. The Jones matrix and Mueller matrix are the two mathematical approaches used to express polarization-dependent light propagation. A Jones matrix is a 2 × 2 complex-valued matrix that relates the input and output electric fields (expressed as Jones vectors) as follows: A Mueller matrix is a 4 × 4 real-valued matrix. In this case, the input and output are expressed as Stokes vectors whose parameters are intensity-based quantities. The relationship becomes where the Stokes parameters are related to the intensities as follows: where the subscripts , + 45°, −45°, (right-circular), and (left-circular) represent the polarization components. Optical elements, sample response and coordinate rotation, can be expressed by independent Jones matrices in the sequence of their positions in the propagating path as J[1]J[2]J[3]… or by Mueller matrices as M[1]M[2]M[3]…. The product of these matrices relates the input from the source and the output to the detector. The elements of the final product matrix contain the sample response, which are functions of the sample properties that can be estimated by Fresnel coefficients. The major difference between the Jones matrix and Mueller matrix is that only the Mueller matrix can express unpolarized and partially polarized light. For polarized light, both expressions are mathematically identical, while because the Mueller matrix deals with intensity-based Stokes parameters, it is sometimes more favorable for FD ellipsometry in which the intensities of different polarization components are measured. For THz-TDS ellipsometry, which involves directly measuring the complex electric fields without considering depolarization, the Jones matrix is much more convenient. Actually, the coherent detection modality of THz-TDS significantly simplifies the polarization control and the measurement (see Sec. III B). As a result, the relationship between the measured electric fields and the sample properties [i.e., Eq. (1)] can be easily derived, and matrix multiplications in terms of simple measurements becomes unnecessary.^18,24–26 Nevertheless, the Jones matrix can be used to assist the analysis of more complicated situations, such as generalized ellipsometry or when taking imperfections of optical elements into account.^20,27 A. Beam control Ellipsometry retrieves the sample properties from the observed change in polarization, mathematically based on tanΨ and Δ. The increased measurement accuracy relies on the high sensitivity of tanΨ and Δ to the sample properties. For bulk materials, this is achieved near the Brewster angle (or pseudo-Brewster angle for absorptive samples) θ[B]. Assuming a medium with $ñ=n−iκ$, Figs. 2(a) and 2 (b) show the calculated tanΨ vs the incident angle θ[i] from air to the medium with n varied from 1.5 to 3 (κ fixed at 0) and with κ varied from 0 to 1 (n fixed at 2), respectively. The biggest variation of tanΨ between curves at a fixed angle is found around θ[B], while the smallest variation is observed at the normal incident and 90°. Similar characteristics can also be observed for Δ (not shown). The analysis reflects two characteristics of ellipsometry. First, the incident angle should be adjusted specifically for a certain sample according to its properties. Doing this requires a robust, ideally, electrical control of the incident angle. Second, the angular region with a high sample sensitivity is usually accompanied by a high angular sensitivity as well, especially for materials with a large refractive index, as can be seen in the rapid variation of tanΨ at lager incident angles in Fig. 2(a). This means the incident angle should be precisely set and measured. To satisfy these requirements, fiber-coupled photoconductive antennas (PCAs) are highly recommended for THz generation and detection. Free-space coupled PCA, optical rectification, and electro-optic sampling are sensitive to the optical alignment of the pumping and probing beams; hence, the emitter and detector are usually fixed. The incident angle is changed by rotating off-axis mirrors, which in turn requires realigning the THz optics.^20,25,26 In comparison, fiber-coupled antennas can be freely moved without affecting the coupling between the femtosecond laser beams and the antennas.^ 18,19,27–29 The optics on the emission and detection sides can be assembled on two independent rails, respectively, as shown in Figs. 3(a) and 3(b). In this way, the incident angle can be easily adjusted by rotating the rails around the focal point on the sample, without causing a significant misalignment, as indicated by the gray arrows in Fig. 3(a). As the polarization-dependence is typically weak at small incident angles, designing a rotational stage with a switchable range of 45°–90° is suitable for most sample types, which also allows additional transmission spectra to be measured for transparent samples. Figure 3(b) shows the optical arrangement from the incident-plane view. For PCAs used as an emitter and detector, a pair of lenses L1 and L4 (could be replaced with parabolic mirrors), typically with a f-number around 1.3–2, are set next to the antennas to collimate the beam from the emitter and to focus the collimated beam to the detector. Another pair of lenses, L2 and L3, is used to focus the beam onto the sample and to collimate the reflected beam from the sample. The selection rule for this pair of lenses will be discussed in Sec. IV B. B. Polarization manipulation Polarization state measurement is another key feature of an ellipsometer. We have discussed earlier that the coherent detection of THz-TDS greatly simplifies measurements by only detecting two orthogonal electric fields. Therefore, unlike IR-UV ellipsometers that require additional phase compensators to measure the circular polarization components, THz-TDS ellipsometry only needs polarizers in a PSA configuration to obtain the absolute polarization state. Typically, three polarizers are necessary for a precise polarization manipulation in standard ellipsometry, as shown in Fig. 3(b). Most THz PCAs are quasi-linearly polarized in emission and detection.^12 The degree of linearity strongly depends on the antenna design, but most of them still have considerable cross-talk. Therefore, two polarizers, P1 and P3, are placed in front of the emitter and detector, respectively, with the passing direction aligned parallel to the main-polarizing direction of the antennas.^28,29 As we will prove in Sec. IV C, it is convenient to regard this polarizer–antenna combination as a single unit, providing an ideal linear emission and detection. They are normally set at 45° (to the s direction). Another polarizer, P2, acts as the analyzer to select the reflected s or p component. The polarization states at different beam positions are illustrated as insets in Fig. 3(b). The emission unit sends a perfectly 45° linearly polarized light after P1, containing equal s or p components. Sample reflection causes a polarization variation, which is reconstructed by rotating P2 to measure the s and p components, respectively. The detector unit aligning at 45° ensures that both s and p components after P2 can be detected with equal sensitivity. For generalized ellipsometry in which cross-polarization occurs, an additional polarizer can be placed on the emitter side after P1 to set the incident light to the sample as s or p polarized. In this way, four electric field quantities of $Ẽrpp$, $Ẽrsp$, $Ẽrps$, and $Ẽrss$ can be measured, and three relationships, typically by normalizing the former three electric fields to $Ẽrss$, can be obtained. Note that in any measurement, it is not recommended to rotate the antenna units as any mechanical adjustment to the emitter and detector will lead to a sensitive change in the optical alignment, hence the measured electric fields cannot be self-referenced. A. Measurement error Measurement errors occur in any practical experiment, ultimately limiting the accuracy that can be achieved. Error-propagation analysis is essentially a sensitivity analysis, which is especially important for ellipsometry as the sensitivity highly depends on the incident angle, and it is particularly useful for samples without a clear definition of θ[B] (e.g., conductive materials). A poor sensitivity significantly magnifies the measurement error in regard to the characterization results (e.g., measured at near normal incidence). Error-propagation calculation is also important for result analysis. THz-TDS measurements offer a frequency-dependent signal-to-noise ratio (SNR). Analyzing the noise error enables the determination of the data credibility and the available bandwidth Table I provides steps to perform error-propagation analysis. If the analysis is performed prior to an experiment to analyze the sensitivity, the first two steps are required. Step 1 estimates the sample properties $ñsample(ω)$, which can be obtained from a proper dielectric model or literature. The accuracy requirement is low for sensitivity analysis. Based on the optical model corresponding to the measurement, $r̃p(ω)$ and $r̃s(ω)$ can be calculated using $ñsample(ω)$. To analyze errors in a measurement, we also simulate $Ẽi(ω)$ to convert $r̃p(ω)$ and $r̃s(ω)$ to $Ẽrp(ω)$ and $Ẽrs(ω)$ in Step 2. $Ẽi(ω)$ can be numerically expressed as a double-Gaussian filter in THz-TDS^30 or approximated from metal reflection. Step 3 adds random noise to the spectrum, as shown in the equations. If the analysis is performed after an experiment, $Ẽrp$ and $Ẽrs$ are directly obtained from the measurement. N(ω) in the equation represents the noise amplitude randomly assigned within [0, N[max]], where N[max] is the maximum noise amplitude that can be estimated from the noise floor. ϕ(ω) is the noise phase randomly distributed within [0, 2π]. Note that here we only consider the white noise mainly from the detector, hence, N[max] is independent of the reflectivity and frequency. In this case, a weak reflection leads to a small signal and, hence, a low SNR. $ρ̃noise(ω)$ is regarded as the noise-affected ratio used to characterize $ñsample$ in Step 4 (characterization methods are detailed in Sec. V). Steps 3 and 4 are repeated M times (e.g., M > 100) to enable statistical analysis. TABLE I. 1. Estimate $ñsample(ω)$ and calculate $r̃p(ω)$ and $r̃s(ω)$. 2. Simulate or measure $Ẽi(ω)$. 3. Add random noise to $Ẽrp$ and $Ẽrs$: 4. Characterize $ñsample$ from $ρ̃noise$. 5. Repeat 3 and 4 for M times and do statistical analysis. 1. Estimate $ñsample(ω)$ and calculate $r̃p(ω)$ and $r̃s(ω)$. 2. Simulate or measure $Ẽi(ω)$. 3. Add random noise to $Ẽrp$ and $Ẽrs$: 4. Characterize $ñsample$ from $ρ̃noise$. 5. Repeat 3 and 4 for M times and do statistical analysis. Based on these steps, we show an example of error analysis for fused quartz $(ñqz=1.95−0.0048i)$,^31 assuming that the sample is being measured at θ[i] = 40° and 65°, respectively. $r̃p$ and $r̃s$ are calculated by Eqs. (2) and (3), respectively. $Ẽi(ω)$ is simulated by a normalized double-Gaussian filter with −6 dB cutoff frequencies at 0.15 and 1.1 THz and a noise floor at −60 dB. Figure 4(a) shows examples of the simulated detected spectrum with |r| = 1 and |r| = 0.1, respectively, showing a reduced SNR and useful bandwidth with a lower reflectivity. For quartz measurement, 200 groups of $Ẽrp$ and $Ẽrs$ signals are calculated with random noise added, corresponding to 200 groups of $ρ̃noise$ for each incident angle. Figure 4(b) shows the SNR of tanΨ at the two incident angles, calculated by mean(tanΨ)/std(tanΨ) (mean and std represent average and standard deviation, respectively). Since the value of θ[B] for fused quartz is 62.9°, the SNR of the orange curve calculated at θ[i] = 65° is about one-fourth that of the blue curve, due to the weak p reflection. Interestingly, Figs. 4(c) and 4(d) show that the final errors in n and κ of θ[i] = 65° are less than half the errors obtained for θ[i] = 40°. This verifies our previous discussion that the best sensitivity is achieved around θ[B] and that it has a huge impact on the characterization of our results. At angles far away from θ[B], small measurement errors can be magnified to the characterization results. The example shows the importance of sensitivity and error analysis in ellipsometry. B. Angular error Ellipsometry has a higher angular sensitivity than traditional reflection measurements, mainly due to the need for a large incident angle around θ[B]. This is especially obvious for samples with a large refractive index because, as can be observed in Fig. 2(a), a rapid change in tanΨ is observed at large angles. Similar to the measurement error, we can estimate the influence of the incident angle error by performing theoretical analysis. Here, we choose bulk high-resistivity silicon (HR-Si), fused quartz, and high-density polyethylene (HDPE) as examples. The complex refractive indices ( θ[B]) of Si, fused quartz, and HDPE are 3.418 − 0i (73.7°), 1.95 − 0.0048i (62.9°), and 1.54 − 0.01i (57°), respectively.^31,32 We assume they are measured under θ[i] = 70°, 65°, and 60°, respectively. Theoretical $ρ̃$ for the three samples can be calculated by Eqs. (1)–(3). Assuming the actual incident angle is measured as θ[i] + Δθ, the corresponding refractive index obtained from the characterization will have an error Δn compared to the theoretical value, which is shown as a function of Δθ in Fig. 5. A higher angular sensitivity is found for materials with a larger refractive index as expected, but overall, all the three samples are sensitive to the incident angle compared to traditional transmission or reflection measurements. In ellipsometry, attention should be given to the optical set up to reduce Δθ. The angular accuracy can be improved in two ways. First, good alignment is essential: the THz beams should be aligned parallel to the optical rails. This is because the incident angle is physically measured by the angle between the two rails (or other mechanical components), it is fundamental to ensure that the angle measured represents the angle of incidence. When on-axis lenses are used, properly extending the beam paths on the rails can reduce the error due to the tilting angles of the antennas. Second, the issue of beam divergence should be carefully considered. This issue has been noticed for infrared ellipsometry due to the longer wavelength involved, and it becomes more significant for THz waves. Collimated THz beams provide a near-zero angular spread, however, with a centimeter-level beam diameter as it must be adequately larger than the wavelength to meet the plane-wave approximation. In ellipsometry, the use of large incident angle further enlarges the illuminating area due to elliptical projection, making collimated beams impractical for most samples. This is limited not just by the sample size but also by the surface evenness or film-thickness homogeneity. Focusing the beam introduces a trade-off issue between the spot size and the beam divergence. Physically, the radius (i.e., beam width) of a Gaussian beam is defined as the distance between the optical axis and the position at which the light intensity drops to times the on-axis intensity, as shown in Fig. 6 is related to the beam propagating distance (in air) as follows: is the beam waist, defined as the minimal beam radius achieved at the focal point ( = 0) and is the wavelength. As the beam propagates away from the waist, increases to form an angular spread. Having a smaller generates a faster variation of , hence, a larger beam spread. This can be expressed mathematically by Eq. , which is a hyperbola with an asymptote slope of , as indicated by the blue line in Fig. 6 . The expression for clearly shows the trade-off issue between the beam divergence and the spot size. Optics with a smaller numerical aperture ( = sin ), or equivalently a larger -number ( , where is the effective focal length, is the effective aperture), reduce the divergence but increase the focal spot size. To quantitatively show the relationship between the spot size = 2 Table II is given as a reference. In the calculation, we assume the collimated THz beam has a diameter (i.e., ) of 30 mm for all frequencies, and it is focused by lenses or mirrors with different values. This assumption gives the condition of ) = 15 mm. Substituting this into Eq. , we obtain , as well as for different frequencies. In practice, the sample is placed at the focal point such that only a limited range of the beam will interact with the sample, as illustrated in Fig. 6 . Since the beam is less diverging around the focal point, the actual angular spread is smaller than can be evaluated by the edge of the beam interacting with the sample, and it is identical to the slope of the tangent at this point, expressed as is the z position at which the edge of the beam interacts with the sample, which is related to the incident angle by . Note that the two edges of the beam interacting with the sample have the same spreading direction, thus the total spreading angle is considered as rather than 2 . Here, we assume = 60° to calculate Table II . With the defined incident angle, the projecting length of the illuminating area on the sample in the incident-plane direction is also estimated by , as shown in Fig. 6 together indicate the minimum sample size required under the specific optics and wavelength. TABLE II. EFL (mm) . fa (THz) . θ (°) . ϕ (°) . s (mm) . l (mm) . 0.2 16.6 8.8 3.20 7.48 50 0.5 16.7 8.8 1.28 2.98 1.0 16.7 8.9 0.64 1.48 2.0 16.7 8.9 0.32 0.74 0.2 11.2 3.9 4.84 10.30 75 0.5 11.3 3.9 1.91 4.06 1.0 11.3 4.0 0.96 2.03 2.0 11.3 4.0 0.48 1.02 0.2 8.3 2.1 6.52 13.49 100 0.5 8.5 2.2 2.56 5.29 1.0 8.5 2.2 1.28 2.63 2.0 8.5 2.2 0.64 1.32 0.2 5.4 0.9 10.15 20.58 150 0.5 5.7 1.0 3.85 7.83 1.0 5.7 1.0 1.91 3.88 2.0 5.7 1.0 0.96 1.94 0.2 3.8 0.4 14.56 29.31 200 0.5 4.2 0.5 5.17 10.44 1.0 4.3 0.6 2.56 5.15 2.0 4.3 0.6 1.28 2.56 EFL (mm) . fa (THz) . θ (°) . ϕ (°) . s (mm) . l (mm) . 0.2 16.6 8.8 3.20 7.48 50 0.5 16.7 8.8 1.28 2.98 1.0 16.7 8.9 0.64 1.48 2.0 16.7 8.9 0.32 0.74 0.2 11.2 3.9 4.84 10.30 75 0.5 11.3 3.9 1.91 4.06 1.0 11.3 4.0 0.96 2.03 2.0 11.3 4.0 0.48 1.02 0.2 8.3 2.1 6.52 13.49 100 0.5 8.5 2.2 2.56 5.29 1.0 8.5 2.2 1.28 2.63 2.0 8.5 2.2 0.64 1.32 0.2 5.4 0.9 10.15 20.58 150 0.5 5.7 1.0 3.85 7.83 1.0 5.7 1.0 1.91 3.88 2.0 5.7 1.0 0.96 1.94 0.2 3.8 0.4 14.56 29.31 200 0.5 4.2 0.5 5.17 10.44 1.0 4.3 0.6 2.56 5.15 2.0 4.3 0.6 1.28 2.56 One obvious characteristic observed from the table is that both θ and ϕ are nearly frequency-independent, while s and l are basically proportional to the wavelength. This explains the severity of angular spread issue for longer wavelengths. Using optics with a large EFL offers a smaller beam divergence for all frequencies, while the high-frequency components can be focused to a smaller size. Second, it is found that although both θ and ϕ decrease with increase in EFL, θ is roughly inversely proportional to EFL while ϕ is roughly inversely proportional to EFL^2. The reason that ϕ decreases much faster comes from the longer depth of focus when using optics with a larger EFL, which means the weakly diverging region near the focus is longer. Note that the effect of beam divergence on the reflection is rather complicated, and one cannot simply equate ϕ and Δθ[i] in Fig. 5 to evaluate or calibrate the error. The divergence deviates the actual reflection away from the plane-wave approximation made in the derivation of Fresnel coefficients. However, the high angular sensitivity indicates the importance of using optics with a smaller beam divergence to reduce the error on the characterization results. We also notice that both s and l are about linearly proportional to EFL. Increasing EFL is more efficient in reducing the divergence ϕ compared to the expansion of the illuminating area. Therefore, a general guideline for the selection of optics is to maximize the EFL as long as the beam spot size can be supported by the sample. Another strategy is to give up some low-frequency components whose illuminating areas are too large for a specific sample. C. Limited extinction ratio THz polarizers are mostly wire-grid polarizers (WGPs) made by using subwavelength metallic grids.^34 They can be commercially purchased or self-fabricated by a couple of techniques. In general, they fall into three categories, i.e., bulk-substrate, thin-film, and free-standing polarizers, which have different advantages and limitations. In ellipsometry, the most important parameter is the extinction ratio (ER), defined as the ratio between the transmitted intensities from the passing and blocking directions of a polarizer, expressed as follows: $ER=TpassTblock$. We will later analyze the influence of ER on the measurements. In conventional designs made of single-layer subwavelength metallic grids, ER increases with decreasing wire width and period. The achievable ER is mainly limited by the fabrication technology. ER up to 10^4 over 0.1–2.0 THz with a 140 nm period has been demonstrated.^35 The use of multiple layers of metallic grids is another efficient approach to improve the ER, which normally doubles the ER (in dB scale) compared to the single-layer design.^36,37 Other strategies may also be applied, such as using triangular or sinusoidal metallic surfaces. These methods can be adapted with bulk-substrate or thin-film polarizers. Free-standing polarizers are made using metallic wires such that the ER can only be improved by reducing the wire width and period.^38 However, limited by the mechanical strength, the wire width is typically limited to be above 5 μm.^39,40 As a result, the ER is often insufficient especially in systems with an ultrabroad bandwidth because the ER is basically inversely proportional to the frequency. Nevertheless, free-standing polarizers have two additional attractive characteristics: They provide the best transparency in the passing direction over the three designs as no absorption materials are involved and they are also devoid of Fabry–Perot (FP) effects arising from the multiple reflections within the substrate or the thin film. The second feature is especially useful in time-domain spectroscopy as the multiple reflections either broaden the pulse width to decrease the temporal resolution or introduce pulse echoes in the time domain that can be interfered with reflections from the sample. In the ellipsometer configuration shown in Fig. 3 , the ER of P2 has the largest impact on the measurement accuracy. To show this, we define the ratio between the transmission coefficients of the passing and blocking directions of P2 as follows: It can be seen that = | |. When the ER is not sufficiently high, cannot be approximated as infinite in practical measurements. In this case, the measured signal when P2 is set in the direction becomes where the term comes from the projection from the direction of P2 onto the 45° direction of P3, as shown in Fig. 3(b) . Ellipsometers usually measure a sample around , where ; hence, the second term in the brackets cannot not be omitted when is not sufficiently large. For example, if , | | > 1000 (i.e., > 60 dB) is needed to have the amplitude error for to be less than 1%. Such a requirement is difficult to achieve for single-layer WGPs, especially at high frequencies where ER decreases. ^36,41,42 Figure 7 shows the amplitude and phase of 1/ of a free-standing WGP with a wire width of 5 m and a period of 12 m, which has nearly the highest ER among commercially available free-standing WGPs. | | is less than 100 for frequencies above 0.6 THz. Therefore, in many cases, it is recommended to numerically calibrate the measured signals for the limited ER of P2. To do this, of P2 should be first characterized as follows: are the transmitted electric fields when the passing and blocking directions of P2 are aligned parallel to P1 (and P3), respectively. We rewrite Eq. and add the relationship for the measured s signal as follows: Combining Eqs. , we have the following expressions: which provide the equations required to calibrate the measured signals using . Notice that is not measured, but it can take arbitrary values as it will be canceled out when we finally calculate .The approximation by omitting terms with is mostly satisfied as > 100 can be achieved in most polarizers, such that the introduced error is less than 1%. P1 and P3 have the passing direction parallel to the major polarizing direction of the antennas. Assuming that the emitted signal (or the detection sensitivity) is mainly linear, with the emitted electric field (or detector response) in the major polarizing direction as $Ẽmajor$ and that in the perpendicular direction as $Ẽminor$, the emitted field from P1 (or the detected field with P3) becomes $Ẽfilter=tpass(Ẽmajor+Ẽminor/R)$. Most antennas have $|Ẽmajor|$ > 10 $|Ẽminor|$, the influence of the second term will be smaller than 1% with R > 10. Therefore, the ER issue for P1 and P3 is usually ignorable. Another factor that produces error similar to the ER effect is the orientation error of P2. Analogously, a small angular error for P2 when measuring the p component could lead to a relatively large s component projected onto the passing direction. To set an accurate orientation, all polarizers should be first carefully aligned parallel relative to each other in a transmission configuration, then all of them should be aligned to the sample p − s coordinate in a reflection configuration (e.g., calibrated by a metal mirror), generally referred to as the gravity vertical. In addition, the rotational error should be especially considered when P2 is manually rotated due to the limited the controlling accuracy. It is recommended to reduce the rotating error using a motorized rotational stage such that <0.1° repeatability can be achieved. D. Pulse shift Reflection characterization techniques are always sensitive to the phase, including ellipsometry. This is because the maximum phase variation induced by a sample is ±180°.^43 In contrast, the pulse can be delayed for a long distance in transmission to create a huge phase change.^44 Although ellipsometry is a self-reference technique without a phase-uncertainty issue from the sample displacement, other factors such as delay-line positioning error or pulse shift mostly caused by temperature variation in fiber-based THz-TDS systems can generate phase error. Pulse shifts can have a significant influence at high frequencies. The phase error Δϕ is related to the pulse shift τ by Δϕ(ω) = ωτ. For example, a 10 fs pulse shift results in 3.6°, 7.2°, and 14.4° phase errors at 1.0, 2.0, and 4.0 THz, respectively, which can cause significant error in the characterization results at high frequencies. Such an error is very system-dependent, and it is recommended to test both the short-term and long-term (over a few minutes) phase stability by continuously recording a reference signal. Figure 8 plots the measured pulse shifts of two fiber-based THz-TDS systems (determined from the slope of the unwrapped phase), showing very different levels of pulse stability. The variation in system A is found to be correlated with minor variations in the environmental temperature. Here, we introduce a method to calibrate the unknown pulse shift in ellipsometry measurement.^28 The basic principle involves measuring a third signal $Ẽβmeas$ by rotating P2 to β direction (β ≠ 0° or 90°). If there is no pulse shift error, $Ẽβmeas$ can be reconstructed from $Ẽrp$ and $Ẽrs$, mathematically expressed as their linear combination. In contrast, the expression is impossible if there are pulse shifts between the measured p, s, and β signals. We assume there are two pulse shift errors, i.e., τ[p] and τ[β], relative to $Ẽrs$. This principle allows us to find τ[p] and τ[β] by minimizing the error between $Ẽβmeas$ and the linear combination of $Ẽrp$ and $Ẽrs$. Notice that the ER calibration is not independent of the pulse-shift calibration, and they should be performed together. Table III gives the calibration steps for the pulse shift and ER errors simultaneously. 1. Assign τ[p] and τ[β] within a searching range. 2. ER and pulse-shift calibration for $Ẽrp$ and $Ẽrs$: 3. Calculate the projections on β and β′ = β + 90° directions: 4. Calculate the β to P3 projection with ER calibration: 5. Calculate the error: 6. Determine τ[p] and τ[β] at the minimum of ΔE[β]. 7. Output the corresponding $Ẽrp$ and $Ẽrs$ by step 2. 1. Assign τ[p] and τ[β] within a searching range. 2. ER and pulse-shift calibration for $Ẽrp$ and $Ẽrs$: 3. Calculate the projections on β and β′ = β + 90° directions: 4. Calculate the β to P3 projection with ER calibration: 5. Calculate the error: 6. Determine τ[p] and τ[β] at the minimum of ΔE[β]. 7. Output the corresponding $Ẽrp$ and $Ẽrs$ by step 2. Step 1 empirically assigns τ[p] and τ[β] within a searching range according to the potential error of the system. Step 2 calibrates the ER using Eqs. (17) and (18) by taking τ[p] into consideration. Here, $Ẽrpmeas$ and $Ẽrsmeas$ are again multiplied by $2/tpass$, but t[pass] can still be arbitrary as it will be canceled out in step 4. Step 3 calculates the numerical projections in the β and β′ directions, where β′ = β + 90°. The β′ component is needed to simulate the limited ER effect when measuring $Ẽβmeas$. $êp$, $ês$, $êβ$, $eβ′̂$, and $êP3$ are the unit vectors in the p, s, β, β′, and P3 (detector) directions, respectively. We have $êp=[0,1]$, $ês=[1,0]$, and $êP3=[cos⁡45°,sin⁡45°]$ according to the defined coordinate, while $êβ$ $(eβ′̂)$ could be defined pointing to β (β′) or β + 180° (β′ + 180°). During the selection of β, one should consider the contribution of $Ẽrp$ and $Ẽrs$. The direction that makes $Ẽrp$ and $Ẽrs$ have about equal projections on it can maximize the influence of τ[p], such that τ[p] can be more accurately found. Figure 9 shows an example of the selection of β direction to provide equal contribution from $Ẽrp$ and $Ẽrs$. Step 4 simulates the detected β signal by considering the limited ER effect. t[pass] in the equation will be canceled out with the scaling factor of 1/t[pass] contained within $Ẽβproj$ and $Ẽβ′proj$. Step 5 compares the simulated and measured β signals by summing up the values in the spectral range with a good SNR, specified by ω[0] and ω[1], with the potential pulse shift τ[β] considered. This equation serves as the evaluation function of the algorithm. Theoretically, ΔE[β] is zero only if both τ[p] and τ[β] are correctly found. Therefore, τ[p] and τ[β] are determined at the minimum of ΔE[β]. Finally, τ[p] found from the minimum can be substituted into the equations in step 2 to calibrate both the ER and pulse shift errors. E. Other errors Some other errors may require individual considerations for specific measurements. For example, measuring samples with a wavelength-comparable thickness results in multiple reflections that overlap temporally. At large incident angles, the high-order reflections could have an obvious offset in space, that is, the reflections become only partially overlapped with the surface reflection. The degree of alignment with the detector, and hence the detection sensitivity, is changed compared to the surface reflection. Far sub-wavelength thin films cause negligible changes to the alignment while multiple reflections from bulk substrate can be temporally chopped. Therefore, it is wise to avoid measuring samples with a thickness of around 50–300 μm (depending on the pulse width and sample refractive index). Another potential error is the flatness. Wavelength-comparable roughness, or uneven surface over the illuminating area, scatters or depolarizes the light. Selecting proper optics to control the depth of focus and beam size, or numerically filtering the unaffected spectral range, may reduce these errors. A. Analytical solution Characterization refers to extracting the sample optical properties from the measurement, which is the fundamental purpose of spectroscopic ellipsometry. Characterization could be either simple or complicated depending on the sample and the measurement. The simplest case is the analytical solution that is possible when measuring homogeneous and isotropic bulk samples. In this case, combining , we have the following: are the complex permittivities of the incident (i.e., air) and transmitted (i.e., sample) media, respectively, relating to . With , Eq. has an analytical solution as follows: can be determined from . For conductive materials, it is convenient to represent the properties by the complex optical conductivity , which is related to the permittivity as follows: are the vacuum permittivity and the permittivity, respectively, at high frequencies. B. Numerical solution and model fitting Numerical solution usually applies to the cases where only one set of frequency-dependent $ñ(ω)$ [or equivalently $ε̃(ω)$ or $σ̃(ω)$] is unknown but analytical solution is impossible or difficult to be derived. For example, measuring a thin-film sample with the thickness already known, we have the relationship of $ρ̃(n,κ)=tanΨ⁡exp(iΔ)$. At each individual frequency, we are mapping (n, κ) to (Ψ, Δ). Therefore, (n, κ) can be found by numerically calculating (Ψ, Δ) over a searching space to find the point that has the smallest difference to (Ψ[meas], Δ[meas]). As the solution can be found independently for each frequency, only two parameters need to be determined at once, thus the computational complexity is low. Many algorithms are available, such as the Newton–Raphson and Gauss–Newton methods that have been used for thin-film characterizations in transmission.^45,46 A more complicated situation is when multiple measurements are taken. Measurements at various incident angles, at different sample orientations, or by adding an additional transmission measurement are necessary when the sample has more unknown parameters, such as thin-film samples with an unknown thickness, anisotropic medium, or multiple-layer structures. Actually, even simple isotropic materials for which an analytical or a numerical solution is available benefit from multiple measurements by the improved fitting robustness. This is a common strategy in IR-UV ellipsometry. Redundant measurements are preferred so as to improve convergence and eliminate mathematical ambiguity. However, when there are more unknown variables [such as two sets of ], the solution becomes a high-dimensional model fitting problem, which significantly increases the computational complexity. Although theoretically, every individual frequency is independent and it is still possible to fit point-by-point to simplify the calculation, it is not recommended as the optimization is not robust when the searching dimension is large while the number of unknown values is not significantly less than that of the measured values. For example, finding [ ] by fitting to [Ψ , Δ , Ψ , Δ ] at each frequency is usually not robust and could be trapped into local minimums, leading to unreasonable fluctuations in the optical spectral properties. A more robust and widely adopted protocol is to describe the sample properties by a proper dielectric model and fit to all measured spectra at once. The model essentially reduces the number of unknown values to a few model parameters. For example, let us express a conductive sample by the Drude model (replace with − in this equation for the definition of ) as follows: *, and are the concentration, charge, effective mass of carriers, and the scattering time, respectively. The permittivity can be further calculated by substituting Eq. into Eq. . In the Drude model, typically, only the carrier concentration and the scattering time are unknown. The number of unknown values for the whole spectrum is reduced to two. Using proper dielectric models can significantly improve the fitting convergence to ensure approaching the global minimum. Other commonly used dielectric models include the Lorentz model, Cauchy model, Sellmeier model, and combinations thereof. For complex mixtures, effective medium theories can be used to express the properties as a mixture of two or more inclusions, usually with the volume fractions being unknown, to be found from the fit. These examples indicate another difficulty of performing model fitting—that the selection of the model is highly experience-based, sometimes with trial and error on different models to find the best fit. Recently, machine learning has been proposed for solving this issue by providing an automatic and unambiguous fit to the IR–UV ellipsometric data, which is promising but less meaningful for THz ellipsometry before it is widely available as a standardized characterization technique. Another solution is using a general mathematical model to describe the dielectric behavior. For example, we have formerly shown that the empirical exponential function can provide a good description of ) for various materials in the THz range as follows: , and are three parameters representing the frequency-profiles of , hence, leading to six parameters in total to describe the material’s complex spectral properties. The purpose of using mathematical models is to reduce the number of unknown variables and characterize the sample unambiguously. Its adaptability is usually limited to materials without resonant features. For broadband ellipsometry, the exponential function may not provide a good approximation for the full bandwidth and more terms may be added to increase the fitness. This method has even been successfully applied to perform ellipsometric characterization of living skin, as shown in Fig. 10 . In this work, a double-prism design is adopted to enable two alternative incident angles to be easily switched by moving the prisms vertically. Equation is used to express the complex refractive indices of the stratum corneum (anisotropic) and epidermis. We demonstrate the experimental ellipsometric measurement and data processing of a HR Si wafer (sample A), a low n-doping Si wafer (sample B), and a high n-doping Si wafer (sample C) as examples. The system configuration is the same as that presented in Fig. 3. We used lenses with a diameter of 1.5 in. L1 and L4 have EFL = 50 mm, while L2 and L3 have EFL = 100 mm. θ[i] is set at 70° for all three samples. The current configuration has ϕ about 3.5° for all frequencies and l = 20.8, 4.1, and 1.4 mm for 0.2, 1.0, and 3.0 THz, respectively. β = 97°, 98°, and 123° are selected for samples A, B, and C respectively, for the calibration of the pulse shift. In these directions, the projections of p and s signals are about equal and opposite and, thus, destructively interfere. The calibration follows the steps in Table III. The calibration of sample C is shown in Fig. 11 as an example. Figure 11(a) shows the normalized ΔE[β] calculated by using step 5 in Table III. A single minimum is found at [τ[p], τ[β]] = [0, −4] fs. At this point, the magnitude spectra of the $Ẽβ−P3$, $Ẽβmeas$ and their difference ΔE[β] are shown in Fig. 11(b). ΔE[β] has reached the noise floor, indicating that we have found the minimum available under the system SNR. tanΨ and Δ with and without calibration are shown in Figs. 11(c) and 11(d), respectively. As the system we used has a high pulse stability, the major difference between them is provided by the ER calibration that corrects the high-frequency signal leakage. However, the singular minimum in Fig. 11(a). and the rapid variation against τ[p] and τ[β] show the high sensitivity of the algorithm in pulse-shift calibration, which could be very important for systems with low pulse stability. The characterization was then performed based on the calibrated tanΨ and Δ. As the samples are bulk and isotropic, Eq. (20) can be applied to obtain the analytical solution. Alternatively, we can apply the Drude model to describe doped semiconductors in the THz range. Therefore, Eqs. (21) and (22) were used to find the Drude parameters by fitting to the whole spectrum of $ρ̃$ at once for samples B and C. In the model fitting, q = e (electron charge for n-type), ɛ[∞] = 11.7, and m* = 0.26m[e] are used,^50,51 where m[e] is the electron mass. Sample A (thickness 987 μm) and sample B (thickness 538 μm) are sufficiently transparent in the effective bandwidth, hence they are further measured by transmission to evaluate the experimental accuracy. Figure 12 shows the corresponding $ñ$ characterized by the different approaches mentioned above for the three samples. A high degree of agreement can be found for the results obtained by different characterization methods, confirming the high measurement and calibration accuracy. Some negative values of κ are found in the analytical solution for samples A and B, which are caused by small phase errors making Δ slightly greater than 180°. Table IV gives the Drude parameters N and τ for samples B and C found from the best fit in 0.2–3.5 THz, as well as the DC (direct-current) resistivities from the model $R0Drude=1/σ(0)$. $R0Drude$ matches well with the DC resistivities R[0] measured by the four-probe method. The resistivity can be more accurately determined if the bandwidth can be extended to lower frequencies. To further evaluate the model, we adopt values from the mobility-carrier density curve by Jeon and Grischkowsky.^50 The scattering times from the literature (calculated from the mobility) at the corresponding carrier densities of samples B and C are provided in the table and they highly coincide with the values from the fit. As transmission measurements are limited to carrier densities below 10^17 cm^−3 for n-doped Si,^50 sample C is completely opaque to the studied wavelength. In this example, we demonstrate one of the advantages of ellipsometry in accurately characterizing highly absorptive solids in the THz regime. Similar methods can be applied in other measurements and further combined with multiple configurations (e.g., prism-coupled, different incident angles, sample azimuthal angles), serving as a versatile technique for various sample types. TABLE IV. Sample . R[0] (Ω cm) . $R0Drude$ (Ω cm) . N (×10^16 cm^−3) . τ (ps) . Sample B 0.651 0.577 0.596 0.268 B ref. ∼0.596 ∼0.24 Sample C 0.023 0.027 56.3 0.061 C ref. ∼56.3 ∼0.065 Sample . R[0] (Ω cm) . $R0Drude$ (Ω cm) . N (×10^16 cm^−3) . τ (ps) . Sample B 0.651 0.577 0.596 0.268 B ref. ∼0.596 ∼0.24 Sample C 0.023 0.027 56.3 0.061 C ref. ∼56.3 ∼0.065 THz-TDS has been widely applied as a powerful characterization tool for physical, chemical, and biomedical applications. However, the commonly used transmission and window/prism-supported reflection configurations have limitations in the characterization of thin films, absorptive solids, and complicated structures. THz spectroscopic ellipsometry has a great potential to fill these gaps, despite having a higher requirement in regard to optical alignment, polarization manipulation, and data processing. These technical details, particularly the sensitivity, angular divergence, limited ER, and pulse shift, significantly affect the characterization accuracy, but they are not well known since they are less important in conventional configurations. In this Tutorial, we quantitatively analyzed errors that commonly occur in THz ellipsometry and their propagation, discussed the reasons, and provided methods for their calibration. Characterization methods are also introduced for different types of measurements. Compared to the conventional ellipsometers at higher frequencies, the major difficulties for THz ellipsometry come from the angular control due to the trade-off between the beam size and the angular spread, as well as the potentially higher phase error in fiber-coupled antennas. Nevertheless, THz ellipsometry also has a few unique advantages. The most promising feature is the coherent generation and detection of THz waves, which significantly simplifies the polarization control. The time-domain sampling scheme also provides a good temporal resolution to easily separate multiple reflections for thick samples, avoiding the out-of-focus and standing-wave issues and simplifying the data analysis. By properly eliminating or reducing the errors, THz spectroscopic ellipsometry can be used as an accurate characterization tool with a very broad adaptability. With the ability to precisely control the incident angle, polarization, and phase, it can also serve as a versatile platform for testing functional devices or for being applied in various interdisciplinary applications. This work was partially supported by the National Natural Science Foundation of China (Grant No. 61988102), the Research Grants Council of Hong Kong (Project No. 14206717), the Engineering and Physical Sciences Research Council (EPSRC) (Grant Nos. EP/S021442/1 and EP/V047914/1), and the Royal Society Wolfson Merit Award (EPM). Conflict of Interest The authors have no conflicts to disclose. The data that support the findings of this study are available from the corresponding author upon reasonable request. Spectroscopic Ellipsometry: Principles and Applications John Wiley & Sons , “The ellipsometer, an apparatus to measure thicknesses of thin surface films,” Rev. Sci. Instrum. , “Measurement of complex optical constants of a highly doped Si wafer using terahertz ellipsometry,” Appl. Phys. Lett. , and , “Phase-sensitive time-domain terahertz reflection spectroscopy,” Rev. Sci. Instrum. M. A. , and , “The anisotropic quasi-static permittivity of single-crystal measured by terahertz spectroscopy,” Appl. Phys. Lett. C. M. J. L. D. K. J. A. , and , “Terahertz ellipsometry and terahertz optical-Hall effect,” Thin Solid Films , and , “Terahertz time domain magneto-optical ellipsometry in reflection geometry,” Phys. Rev. B K. C. C. M. , and , “Conduction-band electron effective mass in Zn Se measured by terahertz and far-infrared magnetooptic ellipsometry,” Appl. Phys. Lett. , and , “Tunable cavity-enhanced terahertz frequency-domain optical Hall effect,” Rev. Sci. Instrum. S. K. A. K. , and , “Highly efficient ultra-broadband terahertz modulation using bidirectional switching of liquid crystals,” Adv. Opt. Mater. , and , “Exploiting complementary terahertz ellipsometry configurations to probe the hydration and cellular structure of skin in vivo,” Adv. Photonics Res. , “Polarization-resolved terahertz time-domain spectroscopy,” J. Infrared, Millimeter, Terahertz Waves C. M. T. E. J. A. , and , “Variable-wavelength frequency-domain terahertz ellipsometry,” Rev. Sci. Instrum. C. M. , and , “Advanced terahertz frequency-domain ellipsometry instrumentation for in situ ex situ IEEE Trans. Terahertz Sci. Technol. , and , “Developments in terahertz ellipsometry: Portable spectroscopic quasi-optical ellipsometer-reflectometer and its applications,” J. Infrared, Millimeter, Terahertz Waves C. M. , and , “Terahertz magneto-optic generalized ellipsometry using synchrotron and blackbody radiation,” Rev. Sci. Instrum. , and , “THz time-domain spectroscopic ellipsometry with simultaneous measurements of orthogonal polarizations,” IEEE Trans. Terahertz Sci. Technol. M. A. , “Single trace terahertz spectroscopic ellipsometry,” Opt. Express , and , “THz time-domain ellipsometer for material characterization and paint quality control with more than 5 THz bandwidth,” Appl. Sci. D. P. , and Hassan Arbab , “Terahertz time-domain polarimetry (THz-TDP) based on the spinning E-O sampling technique: Determination of precision and calibration,” Opt. Express , and M. H. , “Broadband terahertz time-domain polarimetry based on air plasma filament emissions and spinning electro-optic sampling in GaP,” Appl. Phys. Lett. , and , “Terahertz electron paramagnetic resonance generalized spectroscopic ellipsometry: The magnetic response of the nitrogen defect in 4H-SiC,” Appl. Phys. Lett. , and , “Electromagnon excitation in cupric oxide measured by Fabry-Pérot enhanced terahertz Mueller matrix ellipsometry,” Sci. Rep. , and , “Ultra-broadband terahertz time-domain ellipsometric spectroscopy utilizing GaP and GaSe emitters and an epitaxial layer transferred photoconductive detector,” Appl. Phys. Lett. , and , “Measurement of the dielectric constant of thin films by terahertz time-domain spectroscopic ellipsometry,” Opt. Lett. V. C. V. K. , and , “Terahertz time-domain ellipsometry with high precision for the evaluation of GaN crystals with carrier densities up to 10 Sci. Rep. N. P. , “Terahertz time-domain spectroscopic ellipsometry: Instrumentation and calibration,” Opt. Express E. P. J. , and , “Robust and accurate terahertz time-domain spectroscopic ellipsometry,” Photonics Res. , and , “Accurate THz ellipsometry using calibration in time domain,” Sci. Rep. E. P. J. , and , “Gelatin embedding: A novel way to preserve biological samples for terahertz imaging and spectroscopy,” Phys. Med. Biol. van Exter , and , “Far-infrared time-domain spectroscopy with terahertz beams of dielectrics and semiconductors,” J. Opt. Soc. Am. B , and , “Terahertz dielectric properties of polymers,” J. Korean Phys. Soc. D. C. Principles of Lasers Plenum Press New York E. P. J. , and , “Advances in polarizer technology for terahertz frequency applications,” J. Infrared, Millimeter, Terahertz Waves , and , “Wire-grid polarizer in the terahertz region fabricated by nanoimprint technology,” Opt. Lett. E. P. J. H. P. , and , “Robust thin-film wire-grid THz polarizer fabricated via a low-cost approach,” IEEE Photonics Technol. Lett. E. P. J. H. P. , and , “High extinction ratio and low transmission loss thin-film terahertz polarizer with a tunable bilayer metal wire-grid structure,” Opt. Lett. , “Fabrication and characterization of large free-standing polarizer grids for millimeter waves,” Int. J. Infrared, Millimeter, Terahertz Waves , and , “Terahertz wire-grid polarizers with micrometer-pitch Al gratings,” Opt. Lett. D. C. , and , “Flexible terahertz wire grid polarizer with high extinction ratio and low loss,” Opt. Lett. E. P. J. B. S.-Y. , and , “A robust baseline and reference modification and acquisition algorithm for accurate THz imaging,” IEEE Trans. Terahertz Sci. Technol. P. U. , “Phase retrieval in terahertz time-domain measurements: A ‘how to’ tutorial,” J. Infrared, Millimeter, Terahertz Waves , and , “A reliable method for extraction of material parameters in terahertz time-domain spectroscopy,” IEEE J. Sel. Top. Quantum Electron. P. H. , and , “Combined optical and spatial modulation THz-spectroscopy for the analysis of thin-layered systems,” Appl. Phys. Lett. , “A review of the terahertz conductivity of bulk and nano-materials,” J. Infrared, Millimeter, Terahertz Waves , and , “Machine learning powered ellipsometry,” Light: Sci. Appl. , “A sensitive and versatile thickness determination method based on non-inflection terahertz property fitting,” , “Nature of conduction in doped silicon,” Phys. Rev. Lett. J. D. K. S. , “Microwave conductivity of silicon and germanium,” J. Appl. Phys. © 2022 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{"url":"https://pubs.aip.org/aip/app/article/7/7/071101/2835194/An-introduction-to-terahertz-time-domain?searchresult=1","timestamp":"2024-11-05T03:58:43Z","content_type":"text/html","content_length":"816081","record_id":"<urn:uuid:bc487a74-d6bd-4112-9f7c-ecd86080aaf9>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00358.warc.gz"}
Data Visualization with Matplotlib II Hi and welcome In the last post, Data Visualization with Matplotlib I, we introduced the concept of Matplotlib and different kinds of plots and charts. In this post , we shall be considering some other charts and The histogram is used on continuous data that could be grouped or expressed in terms of groups. For instance , ages of individuals can be expressed in terms of intervals , which is seen as groups. These groups are also called bins. Matplotlib is designed in such a way that when the number of bins or the limits of the bins are not given , it uses a default of 10 bins. 1 import numpy as np 2 import matplotlib.pyplot as pl 3 #The magic function below is to be used only on jupyter notebook or jupyterlab 4 %matplotlib inline 5 np.random.seed(20) 6 ages=np.random.randint(1,40,20) # 7 plt.hist(ages) 8 plt.show() # not necessary in jupyterlab or notebook The output is We can specify the number of bins we need 1 import numpy as np 2 import matplotlib.pyplot as pl 3 np.random.seed(20) 4 ages=np.random.randint(1,40,20) # 5 plt.hist(ages,bins=4) 6 plt.show() We can also specify the boundaries of each interval , as shown below 1 import numpy as np 2 import matplotlib.pyplot as pl 3 np.random.seed(20) 4 ages=np.random.randint(1,40,20) # 5 plt.hist(ages,bins=[1,10,20,30,40]) 6 plt.show() bins=[1,10,20,30,40], simply means the first interval is 1-10, the second is 10-20 and so on. Now , before we consider some other charts , let's see our we can customize our charts to look better and more presentable. To style a chart , so many things can be changed or added, like colors, axes titles and labels and graph title. Legends can be added , especially if we have so many components on the same graph. We can include annotations , and so on. Enough talking , let's get to action by starting some where. ##Graph and axes title To illustrate this , let's consider a bar chart which shows the reported cases of corona virus. 1 import numpy as np 2 import matplotlib.pyplot as plt 3 continents =["A","B","C","D"] 4 cases =[10,23,19,23] 5 plt.bar(continents ,cases ) 6 plt.title("Reported cases of Cov-19")# This takes care of the graph title 7 plt.xlabel("Continents")# The title on the x- axis 8 plt.ylabel("Number of cases") # The title on the y-axis This is as shown Legend and grid Grid lines are lines parallel to the axes. They run at regular intervals. By default, the grid lines are turned off. To turn them on or make them visible, we use the. grid() method, which takes in the state of the grid as it's argument. The default case is False , hence we need to change it to be true, this is as shown in the graph below, which also illustrates how two graphs can be plotted in one figure. Without grid 1 x=np.linspace(-10,10,1000) 2 sine=np.sin(x) 3 cosine=np.cos(x) 4 plt. plot(x, sine) 5 plt. plot(x,cosine) With grid 1 x=np.linspace(-10,10,1000) 2 sine=np.sin(x) 3 cosine=np.cos(x) 4 plt. plot(x, sine) 5 plt. plot(x,cosine) 6 plt. grid(True) # This turns on the grid system Notice that we have two graphs on the same figure and though they have different colours but we can't tell which is for sine and which is for cosine, hence we need a legend to distinguish between the There are different ways to include legends. We can use the. legend() method, then pass in labels for each plot, in the order we want them. For instance, in the graph above, the sine graph comes before the cosine graph, hence we need to include the label for the sine graph before the cosine graph as shown below. 1 import numpy as np 2 x=np.linspace(-10,10,1000) 3 sine=np.sin(x) 4 cosine=np.cos(x) 5 plt. plot(x, sine) 6 plt. plot(x,cosine) 7 plt. grid(True) 8 plt.legend(["Sine plot", "Cosine plot"]) The above method is not a good one, because if for some reason, we change the order of the graphs, then we would have to change the order of the labels in the .legend So, we can use another method, in which we include the labels of each plot in the plot, then just call the .legend() method with no argument. The labels would be taken automatically from the graphs in the order in which they appear. So even if we include a new graph or change the order of the graphs, we need not worry since every thing would be ordered. 1 import numpy as np 2 x=np.linspace(-10,10,1000) 3 sine=np.sin(x) 4 cosine=np.cos(x) 5 plt. plot(x, sine,label="Sine plot") 6 plt. plot(x,cosine,label="Cosine plot") 7 plt. grid(True) 8 plt.legend() From the sessions above, it is obvious that styling with matplotlib is simple. There are more styling options available. In the next post , we shall look at these styling options and how we can utilize them in other advanced plots. So, have a nice practice section. See you in the next post. This post is part of a series of blog post on the Matplotlib library , based on the course Practical Machine Learning Course from The Port Harcourt School of AI (pmlcourse).
{"url":"https://blog.phcschoolofai.org/data-visualization-with-matplotlib-ii","timestamp":"2024-11-02T01:03:14Z","content_type":"text/html","content_length":"171312","record_id":"<urn:uuid:5483af47-22ab-4adc-b38c-40782be4cc7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00115.warc.gz"}
Convert Wavelength In Megametres to Wavelength In Terametres Please provide values below to convert wavelength in megametres to wavelength in terametres, or vice versa. Wavelength In Megametres to Wavelength In Terametres Conversion Table Wavelength In Megametres Wavelength In Terametres 0.01 wavelength in megametres 1.0E-8 wavelength in terametres 0.1 wavelength in megametres 1.0E-7 wavelength in terametres 1 wavelength in megametres 1.0E-6 wavelength in terametres 2 wavelength in megametres 2.0E-6 wavelength in terametres 3 wavelength in megametres 3.0E-6 wavelength in terametres 5 wavelength in megametres 5.0E-6 wavelength in terametres 10 wavelength in megametres 1.0E-5 wavelength in terametres 20 wavelength in megametres 2.0E-5 wavelength in terametres 50 wavelength in megametres 5.0E-5 wavelength in terametres 100 wavelength in megametres 0.0001 wavelength in terametres 1000 wavelength in megametres 0.001 wavelength in terametres How to Convert Wavelength In Megametres to Wavelength In Terametres wavelength in terametres = 1.0E-6 × wavelength in megametres wavelength in megametres = 1000000 × wavelength in terametres Example: convert 15 wavelength in megametres to wavelength in terametres: 15 wavelength in megametres = 1.0E-6 × 15 = 1.5E-5 wavelength in terametres Convert Wavelength In Megametres to Other Frequency Wavelength Units
{"url":"https://www.unitconverters.net/frequency-wavelength/wavelength-in-megametres-to-wavelength-in-terametres.htm","timestamp":"2024-11-14T17:42:45Z","content_type":"text/html","content_length":"11122","record_id":"<urn:uuid:66a7bfd3-0de4-4047-a11e-456a061d3285>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00555.warc.gz"}
Documents For An Access Point Click the serial number on the left to view the details of the item. # Author Title Accn# Year Item Type Claims 1 David A. Cox Ideals, varieties and algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra 026482 2015 Book 2 Thomas Becker Grobner bases: A Computational Approach to Commutative Algebra 026481 1993 Book 3 Xin-She Yang Introduction to computational mathematics 026201 2015 Book 4 Ian H. Hutchinson Student's guide to numerical methods 026074 2015 Book 5 Paul Cockshott Computation and its limits 026050 2015 Book 6 Antonio Munjiza Large strain finite element method: A Practical Course 025979 2015 Book 7 Amritasu Sinha Principles of engineering analysis 024408 2012 Book 8 Matheus Grasselli Numerical mathematics 022370 2008 Book 9 HIGHAM, N.J. Accuracy and stability of numerical algorithms 021347 2002 Book 10 W. D. Wallis Beginner's guide to finite mathematics : For business, managment, and the social sciences 020188 2004 Book (page:1 / 7) [#68] Next Page Last Page Title Ideals, varieties and algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra Author(s) David A. Cox;John Little;Donal Oshea Edition 4th ed. Publication Chem, Springer, 2015. Description xvi, 646 Series (Undergraduate texts in mathematics) Abstract Note This text covers topics in algebraic geometry and commutative algebra with a strong perspective toward practical and computational aspects. The first four chapters form the core of the book. A comprehensive chart in the Preface illustrates a variety of ways to proceed with the material once these chapters are covered. In addition to the fundamentals of algebraic geometry-the elimination theorem, the extension theorem, the closure theorem and the Nullstellensatz—this new edition incorporates several substantial changes, all of which are listed in the Preface. The largest revision incorporates a new Chapter (ten), which presents some of the essentials of progress made over the last decades in computing Gröbner bases. ISBN,Price 9783319167213 : $ 75.00(HB) Classification 519.6 Keyword(s) 1. CLOSURE THEOREM 2. COMMUTATIVE ALGEBRA 3. COMPUTATIONAL ALGEBRA 4. COMPUTATIONAL MATHEMATICS 5. EBOOK 6. EBOOK - SPRINGER 7. ELIMINATION THEORY 8. EXTENSION THEOREM 9. GROBNER BASES 10. NULLSTELLENSATZ Item Type Book Multi-Media Links Please Click here for eBook Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 026482 519.6/COX/026482 On Shelf I12051 519.6/COX/ On Shelf +Copy Specific Information Title Grobner bases: A Computational Approach to Commutative Algebra Author(s) Thomas Becker;Volker Weispfenning Publication New York, Springer-Verlag, 1993. Description xxii, 574p. Series (Graduate Texts in Mathematics) Abstract Note The origins of the mathematics in this book date back more than two thou­ sand years, as can be seen from the fact that one of the most important algorithms presented here bears the name of the Greek mathematician Eu­ clid. The word "algorithm" as well as the key word "algebra" in the title of this book come from the name and the work of the ninth-century scientist Mohammed ibn Musa al-Khowarizmi, who was born in what is now Uzbek­ istan and worked in Baghdad at the court of Harun al-Rashid's son. The word "algorithm" is actually a westernization of al-Khowarizmi's name, while "algebra" derives from "al-jabr," a term that appears in the title of his book Kitab al-jabr wa'l muqabala, where he discusses symbolic methods for the solution of equations. This close connection between algebra and al­ gorithms lasted roughly up to the beginning of this century; until then, the primary goal of algebra was the design of constructive methods for solving equations by means of symbolic transformations. During the second half of the nineteenth century, a new line of thought began to enter algebra from the realm of geometry, where it had been successful since Euclid's time, namely, the axiomatic method. ISBN,Price 9780387979717 : Eur 93.59(HB) Classification 519.6 Keyword(s) 1. BUCHBERGER ALGORITHM 2. COMMUTATIVE ALGEBRA 3. COMPUTATIONAL MATHEMATICS 4. EBOOK 5. EBOOK - SPRINGER 6. GROBNER BASIS THEORY Item Type Book Multi-Media Links Please Click here for eBook Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 026481 On Shelf I12043 519.6/BEC/ On Shelf Title Introduction to computational mathematics Author(s) Xin-She Yang Edition 2nd ed. Publication New Jersey, World Scientific Publishing Co. Pvt. Ltd., 2015. Description xii, 329p. Abstract Note This unique book provides a comprehensive introduction to computational mathematics, which forms an essential part of contemporary numerical algorithms, scientific computing and optimization. It uses a theorem-free approach with just the right balance between mathematics and numerical algorithms. This edition covers all major topics in computational mathematics with a wide range of carefully selected numerical algorithms, ranging from the root-finding algorithm, numerical integration, numerical methods of partial differential equations, finite element methods, optimization algorithms, stochastic models, nonlinear curve-fitting to data modelling, bio-inspired algorithms and swarm intelligence. This book is especially suitable for both undergraduates and graduates in computational mathematics, numerical algorithms, scientific computing, mathematical programming, artificial intelligence and engineering optimization. Thus, it can be used as a textbook and/or reference book. ISBN,Price 9789814635783 : Rs. 995.00(PB) Classification 519.6 Keyword(s) 1. Computational Intelligence 2. COMPUTATIONAL MATHEMATICS 3. MATHEMATICAL PROGRAMMING 4. NUMERICAL ALGORITHMS 5. PARTIAL DIFFERENTIAL EQUATIONS 6. STOCHASTIC METHODS Item Type Book Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 026201 519.6/YANG/026201 On Shelf +Copy Specific Information Title Student's guide to numerical methods Author(s) Ian H. Hutchinson Publication Cambridge, Cambridge University Press, 2015. Description xiv, 207p. Abstract Note his concise, plain-language guide for senior undergraduates and graduate students aims to develop intuition, practical skills and an understanding of the framework of numerical methods for the physical sciences and engineering. It provides accessible self-contained explanations of mathematical principles, avoiding intimidating formal proofs. Worked examples and targeted exercises enable the student to master the realities of using numerical techniques for common needs such as solution of ordinary and partial differential equations, fitting experimental data, and simulation using particle and Monte Carlo methods. Topics are carefully selected and structured to build understanding, and illustrate key principles such as: accuracy, stability, order of convergence, iterative refinement, and computational effort estimation. Enrichment sections and in-depth footnotes form a springboard to more advanced material and provide additional background. Whether used for self-study, or as the basis of an accelerated introductory class, this compact textbook provides a thorough grounding in computational physics and engineering. ISBN,Price 9781316602416 : Rs. 295.00(PB) Classification 519.6 Keyword(s) 1. EBOOK 2. EBOOK - CAMBRIDGE UNIVERSITY PRESS 3. NUMERICAL METHODS Item Type Book Multi-Media Links Please Click here for eBook Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 026074 519.6/HUT/026074 On Shelf OB0751 519.6/HUT/ On Shelf +Copy Specific Information Title Computation and its limits Author(s) Paul Cockshott;Lewis M. Mackenzie;Greg Michaelson Publication Oxford, Oxford University Press, 2015. Description vi, 239p. Abstract Note Computation and its Limits is an innovative cross-disciplinary investigation of the relationship between computing and physical reality. It begins by exploring the mystery of why mathematics is so effective in science and seeks to explain this in terms of the modelling of one part of physical reality by another. Going from the origins of counting to the most blue-skies proposals for novel methods of computation, the authors investigate the extent to which the laws of nature and of logic constrain what we can compute. In the process they examine formal computability, the thermodynamics of computation, and the promise of quantum computing. ISBN,Price 9780198729129 : UKP 22.50(PB) Classification 519.6 Keyword(s) 1. COMPUTATION 2. HYPERCOMPUTING 3. Quantum computing 4. THERMODYNAMICS OF COMPUTATION Item Type Book Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 026050 519.6/COC/026050 On Shelf +Copy Specific Information Title Large strain finite element method: A Practical Course Author(s) Antonio Munjiza;Esteban Rougier;Earl E. Knight Publication Chichester, John Wiley and Sons, 2015. Description xiv, 469p. Abstract Note Book takes an introductory approach to the subject of large strains and large displacements in finite elements and starts from the basic concepts of finite strain deformability, including finite rotations and finite displacements. The necessary elements of vector analysis and tensorial calculus on the lines of modern understanding of the concept of tensor will also be introduced. This book explains how tensors and vectors can be described using matrices and also introduces different stress and strain tensors. Building on these, step by step finite element techniques for both hyper and hypo-elastic approach will be considered. Material models including isotropic, unisotropic, plastic and viscoplastic materials will be independently discussed to facilitate clarity and ease of learning. Elements of transient dynamics will also be covered and key explicit and iterative solvers including the direct numerical integration, relaxation techniques and conjugate gradient method will also be explored. This book contains a large number of easy to follow illustrations, examples and source code details that facilitate both reading and understanding. ISBN,Price 9781118405307 : US $130.00(HB) Classification 519.6 Keyword(s) 1. FINITE ELEMENT METHOD 2. LARGE DISPLACEMENTS 3. LARGE STRAINS 4. STRAIN TENSORS 5. STRAINS AND STRESSES 6. TENSONS Item Type Book Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 025979 519.6/MUN/025979 On Shelf +Copy Specific Information Title Principles of engineering analysis Author(s) Amritasu Sinha;Jean Bosco Mugiraneza Publication New Delhi, Narosa Publishing House, 2012. Description ix, pp. 1.1-8.23, B.1-B.2, I.1-I.6 Abstract Note PRINCIPLES OF ENGINEERING ANALYSIS presents Mathematical tools for Engineering Analysis and applications. Particular emphasis has been given to explain Signals and Systems. Different transformation techniques such as Laplace, Fourier and Z transforms mathematically and subsequently its applications in solving differential and integral equations are also given. ISBN,Price 9788184871456 : Rs. 320.00(PB) Classification 519.6 Keyword(s) 1. FOURIER TRANSFORM 2. LAPLACE TRANSFORMATION 3. PARTIAL DIFFERENTIAL EQUATIONS 4. SIGNALS 5. Z-TRANSFORM Item Type Book Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 024408 519.6/SIN/024408 On Shelf +Copy Specific Information Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 022370 519./GRA/022370 On Shelf +Copy Specific Information Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 021347 519.6/HIG/021347 On Shelf +Copy Specific Information Title Beginner's guide to finite mathematics : For business, managment, and the social sciences Author(s) W. D. Wallis Publication Boston, Birkhauser, 2004. Description xii, 354p. Contents Note Text book is designed for course in finite mathematics and application for business, management and social sciences students. ISBN,Price 8181282175 : Rs. 595.00 Classification 519.6 Keyword(s) 1. COMBINATION 2. FINITE MATHEMATICS 3. GRAPH THEORY 4. LINEAR PROGRAMMING 5. MATRICES 6. PERMUTATION 7. PROBABILITY 8. SET THEORY Item Type Book Circulation Data Accession# Call# Status Issued To Return Due On Physical Location 020188 519.6/WAL/020188 On Shelf +Copy Specific Information
{"url":"http://ezproxy.iucaa.in/wslxRSLT.php?A1=265","timestamp":"2024-11-12T04:00:48Z","content_type":"text/html","content_length":"44355","record_id":"<urn:uuid:c1480183-043d-4b24-96b4-e6ef843fbad4>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00831.warc.gz"}
RC Networks, Technical Bulletin from Electrocube The generation of inductive or switching transients is well-known phenomenon to design engineers. The suppression of such transients is required for two general purposes: For contact protection and/ or to prevent the generation of electromagnetic in­terference (EMI). When arc suppression only is required, the suppression device is normally placed across the switching device. When EMI is to be suppressed, optimum results are obtained when the suppression device is placed across the load, particularly if long leads are required between the switching device and the load. Many techniques have been devised to eliminate or suppress the transients. When features such as cost, size, and effect on the circuit are considered, the most effec­tive application is a series capacitor-resistor network. The selection of the optimum value of capacitor and resistor combination depends on the ratio of inductance to resistance of the switched load, the distributed capacitance of the circuit, the speed of the contact opening, and the voltage and current of the circuit. The generated voltage at the time of contact opening is: el = L di/dt. Depending on the circuit resistance and the rate of switch opening, thousands of volts can be developed in very low voltage circuits. The capacitor should be capable of absorbing the stored energy of the inductive load which is 1/2 LI(2) joules but the resistance and distributed capaci­tance of the load and line affect this selection. It should also be noted that the value of the capacitor selected can cause ringing in the circuit unless properly damped. It is obvious that the calculations can be complex and most times impossible, be­cause in commercial applications, the inductive value of the load is not known or may not be con­stant. However, practical values can be obtained from the following formulas for contact protection: where I is the load current and E is the open circuit voltage. Starting with these values, either or both compo­nents can be varied to obtain the optimum result. A general rule is to increase the value of the capaci­tor to decrease the transient voltage. (Note that contact arcing begins around 320V in air.) However care should be taken so that the values of the RC across contacts will limit the open circuit cur­rent to the load in A.C. applications. The correct wattage of the resistors is dependent on the frequency of contact closure/opening in DC circuits. Most often a 1/2 to 1 watt resistor will suffice. In AC circuits the actual currents can be calculated and the 12R value used. RC networks placed across the load are more diffi­cult to calculate. A rule of thumb again is to select a capacitor whose value in Mfd. is between 1/2 and 1 times the load current in amps. The series resistor should initially be chosen to equal the load DC resistance. In summary, RC networks provide simple, econom­ical means of suppressing inductive transients. The optimization of the component values can be quite complex. However, it has been found that most contact protection can be achieved with capacitor values between .22 Mfd. and .47 Mfd. with series resistance values from 10 Ohm to 400 Ohm.
{"url":"https://www.electrocube.com/pages/rc-networks-technical-bulletin","timestamp":"2024-11-13T12:46:20Z","content_type":"text/html","content_length":"67564","record_id":"<urn:uuid:a16df344-fca7-4f94-804b-9509bb1843ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00468.warc.gz"}
The natural linear concatenative basis ¶The natural linear concatenative basis I've already called out the 2- and 3-element bases from Brent Kerby's writeup. The two-element linear concatenative basis The three-element linear concatenative basis But I neglected to talk about the 4-element basis! Kerby doesn't mention that one directly, but while rewatching my Strange Loop talk, I realized it's worth discussing. In particular, I mention in my talk that the 6-element nonlinear basis (i, cat, drop, dup, unit, swap) is the most commonly chosen basis, because there is a 1:1 correspondence between primitive instructions and the “categories” of instruction that you have to cover to have a complete basis. (That is, each category is covered by exactly one primitive instruction, and each primitive instruction does only what is required by its category and nothing else.) I think it's worth calling this 1:1 basis the “natural” basis. The natural normal concatenative basis [Strange Loop] Concatenative programming and stack-based languages Categories of instructions in a concatenative basis So, what would the natural linear basis be? Well, you'd just remove drop and dup, leaving you with: Is that enough to be complete? To see if it is, we just have to reduce from one of the other bases: cons ≜ swap unit swap cat ┃ [B] [A] cons ┃ [B] [A] swap unit swap cat [B] ┃ [A] swap unit swap cat [B] [A] ┃ swap unit swap cat [A] [B] ┃ unit swap cat [A] [[B]] ┃ swap cat [[B]] [A] ┃ cat [[B] A] ┃ sap ≜ swap cat i ┃ [B] [A] sap ┃ [B] [A] swap cat i [B] ┃ [A] swap cat i [B] [A] ┃ swap cat i [A] [B] ┃ cat i [A B] ┃ i A B ┃ i (I originally had more complicated definitions, but was able to find simpler ones.) Defining ‘cons’ with only empty quotations
{"url":"https://dcreager.net/concatenative/natural-linear-basis/","timestamp":"2024-11-08T02:23:02Z","content_type":"text/html","content_length":"4846","record_id":"<urn:uuid:5376026b-f869-47af-9ddf-175ca4bd5e42>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00475.warc.gz"}
Lateral color telescopeѲptics.net ▪ ▪ ▪ ▪ ▪▪▪▪ ▪ ▪ ▪ ▪ ▪ ▪ ▪ ▪ ▪ CONTENTS 4.7.2. Lateral color error Origins of lateral color error - also known as transverse chromatic aberration - are not as obvious as for longitudinal chromatism. In simplest terms, it is the consequence of unequal refractive compensation at lens surfaces. Since different wavelengths refract at a different rate, a single refractive surface will always split f/10 lens, two wavelength will arrive at the image plane separated by the Airy disc diameter when their angle differential at the exit from the lens nearly equals the angular Airy disc diameter, or as little as 2.8 arc seconds. Lateral color also can be generated by fabrication error, specifically, by wedge-like orientation of refractive surfaces relative to each other. Following text will limit to lateral color associated with oblique incidence pencils, but general principles are the same for both. Since the reference ray for all aberrations resulting from obliquity of incident pencils is their chief ray, the chief rays of optimized wavelengths are unavoidably directed toward image plane at different angles (FIG. 71). For that reason, lateral color is sometimes referred to as chief ray chromatism. FIGURE 71: Cause of lateral color in a lens. With aperture stop at the lens (A), chief ray of the incident white light passes near the lens' center (not quite through the center, as usually depicted for thin lens and shown at left, but the difference is negligible with respect to lateral color). The white light chief ray splits into chief rays of different wavelength after the first surface, but the angle of divergence is very small, resulting in negligible height differential at the second surface. Due to surface tangents at the respective refraction points being nearly parallel, this section of lens acts as plano-parallel plate, with the slight differential in angular direction of color chief rays compensated for by their slightly different rate of refraction at the second surface. as a result, the chief rays travel toward focal plane at nearly identical angle, nearly equal to the incident angle, staying tightly together. Note that due to different focal lengths, chromatic difference of magnification for different colors is present in their respective focal planes. However, since all chief rays arrive at nearly identical angle, there is no lateral color in the green light focal plane: the other colors are merely defocused. When the aperture stop is displaced, either longitudinally or laterally, the geometry changes (B). The white chief ray is now directed farther off the lens center, with the tangents on two lens surfaces at the respective points of refraction being no longer nearly parallel. As a result, refraction at the second surface is no longer compensatory, and the chief rays of different colors keep diverging toward focal plane. Consequently, they reach different heights in the green light focal plane, producing lateral color error. This error is now combined with longitudinal defocus, i.e. other colors are both, defocused and shifted laterally. Obviously, correcting longitudinal chromatism would only eliminate defocus error - as well as chromatic difference of magnification due to it - but wouldn't affect lateral color error, nor chromatic difference of magnification resulting from it. As illustrated above, the two main determinants of lateral color error are stop position and lens shape. If, for instance, the 2nd surface tangent in (B) was nearly parallel to that at the 1st surface (a weak positive meniscus), the lens would have acted as plano parallel plate, producing negligible lateral color despite displaced stop. Some basic relations for the above simple lens case help define specific factors of its lateral color. For the white light incidence angle α, the height at which it hits first surface is h1=αL, L being the longitudinal displacement of aperture stop. The angle of incidence at the first surface β=α+ρ1, with ρ1=h1/R1 being the angle formed by optical axis and the 1st surface's radius of curvature R1. The refracted angle, for small β, is β'=(n/n')β, with n, n' being the refractive index of incident and refractive medium, respectively. For lens in air, n=1 and β'=β/n'; since n' varies with the wavelength, so do their respective angles of refraction. In other words, chief ray divergence that would produce lateral color is initiated at the first surface. In order to cancel this divergence, the tangent at refraction point on the second lens surface needs to be nearly parallel to that on the first surface, i.e. ρ1-ρ2~0, with ρ1=h1/R1 and ρ2=h2/R2, where h2=h1+β't, t being the lens thickness (obviously, the curvature radii for first and second surface, R1 and R2, respectively, need to be of the same sign). Consequently, in first approximation, the chief ray angle of refraction after the second surface is given by β"=n'(ρ1-ρ2). In other words, for any non-zero value in brackets, the angle of divergence for any specific wavelengths will equal the product of that value and the glass refractive index n' corresponding to the wavelength. For any two wavelengths 1 and 2, the angle of their lateral divergence is: δ1,2 = β"1-β"2 = (n'1-n'2)(ρ1-ρ2) (50) For given aperture, focal length - or focal ratio - of the system is irrelevant to the magnitude of lateral color error, since the angular size of this error is constant (i.e. the linear extent of lateral color error remains constant with respect to the Airy disc size). With multiple lens systems the calculation is more complex, but the principle remains the same: cancellation of lateral color requires the sum of refracted angles at its surfaces to be near zero for a given range of wavelengths. As a wavefront aberration, lateral color error is a consequence of wavefront tilt vs. reference sphere. As monochromatic aberration, wavefront tilt does not affect point-image quality, only its location; however, in a wavefront that splits chromatically through refraction, tilt error varying with the wavelength does cause spread of energy in the central diffraction maxima (FIG. 72). Unlike secondary spectrum and spherochromatism, where most of the energy lost to the central maxima goes to the rings area, lateral color error mainly expands (and deforms) the central maxima. FIGURE 72: (A) Simple geometry of wavefront tilt shows that the P-V error is given by WT=τD, with the tilt angle τ =h/f, where h is the linear shift of point image in the image plane, f being the focal length (for object at infinity; image separation for close objects), and D the aperture diameter. The P-V to RMS wavefront error ratio is 4√32/3. The angle of tilt τ is determined by the angular discrepancy between chief ray angle for specified wavelength, and chief ray angle of the reference wavelength (in the visual context, usually around green e-line); it equals the angle of lateral divergence δ, as defined above, with the only difference being that it expresses the error relative to the primary wavelength. (B) The effect of lateral color error on point-image quality and overall contrast depends on its magnitude and spectral sensitivity of the detector. Shown is its effect on polychromatic PSF (PPSF, for 0.4-0.7μm range, photopic eye sensitivity) and MTF, in terms of C/F separation in units of e-line Airy disc diameter, Δ (on the graph marked as l). The system used for raytracing has negligible other aberrations (50mm f/9.56 Maksutov camera, R1=-206, S1=20.1, BK7, R2=217.5, S2=666, air, R3=-996, mirror, all mm, stop at the 1st surface), hence the effect is nearly entirely the result of lateral color error. The C/F separation needs to be half the e-line Airy disc radius, or less, for the polychromatic Strehl to remain within diffraction limited range. The primary effect on polychromatic PSF is elongation of the central maxima in the direction of lateral color shift, wider on the side of longer wavelengths' shift (top), narrower at the side of shorter wavelengths' shift (bottom). Asymmetrical expansion of the central maxima causes largest MTF contrast drop in the high-frequency range, from the maximum in the orientation coinciding with lateral shift (tangential) to near-zero in the orientation perpendicular to it (sagittal). Tolerance for lateral color error, obviously, depends on the spectral sensitivity of detector. For photopic eye sensitivity, diffraction-limited maximum is at the C/F separation nearly equaling the e-line Airy disc radius; for even sensitivity over the visual range, polychromatic Strehl drops to 0.80 at only 30% of that separation. As the PPSF/Δ graph on FIG. 72B indicates, the effect of lateral color error is not proportional to its angular magnitude. Similarly to secondary spectrum, negative effect of lateral color on image quality changes at a slower rate than its nominal magnitude. For instance, at C/F separation equaling half the Airy disc radius, the P-V wavefront error of primary spherical aberration corresponding to the resulting 0.81 Strehl is slightly better than 1/4 wave. Doubling the C/F separation does not double the corresponding P-V wavefront error, which is 1/2.4 waves P-V for 0.54 Strehl. Doubling it once more, to twice the Airy disc diameter, only lowers the polychromatic Strehl to 0.32, with the corresponding primary spherical aberration error of 1/1.8 wave P-V. A close empirical approximation for the photopic polychromatic Strehl resulting from the lateral color error is given by SP~1-Δ2/(1+1.2Δ2), with Δ being the C/F separation in units of e-line Airy disc diameter. The difference vs. raytracing values is within 1% of the nominal PPSF value for Δ<1, and doesn't exceed a few percentage points at Δ~2 (e.g. 0.31 vs. 0.32 by raytrace for Δ=2). The plot has quasi-Gaussian shape, with the drop in the Strehl becoming asymptotical for larger F-to-C separations - a consequence of the rate of wavelength divergence decreasing exponentially toward the central wavelength (the dashed portion of the plot indicates range where the approximation may not be accurate). ◄ 4.7.1. Secondary spectrum and spherochromatism ▐ 4.7.3. Measuring chromatic error ► Home | Comments
{"url":"https://www.telescope-optics.net/lateral_color.htm","timestamp":"2024-11-07T04:12:10Z","content_type":"text/html","content_length":"24143","record_id":"<urn:uuid:9354062b-b824-4dc3-a5f6-5581edc6c55e>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00542.warc.gz"}
Weisfeiler-Lehman Kernel The Weisfeiler-Lehman kernel is an iterative integration of neighborhood information. We initialize the labels for each node using its own node degree. At each step, we take the neighboring node degrees to form a [[multiset]] Multiset, mset or bag A bag is a set in which duplicate elements are allowed. An ordered bag is a list that we use in programming. . At step $K$, we have the multisets for each node. Those multisets at each node can be processed to form an representation of the graph which is in turn used to calculate statistics of the graph. Iterate $k$ steps This iteration can be used to test if two graphs are isomorphism^1. Planted: by L Ma; L Ma (2021). 'Weisfeiler-Lehman Kernel', Datumorphism, 09 April. Available at: https://datumorphism.leima.is/cards/graph/graph-weisfeiler-lehman-kernel/.
{"url":"https://datumorphism.leima.is/cards/graph/graph-weisfeiler-lehman-kernel/","timestamp":"2024-11-02T15:05:21Z","content_type":"text/html","content_length":"114291","record_id":"<urn:uuid:f611e05b-e0fe-400c-9d73-02a545877cb8>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00296.warc.gz"}
Mathworks topics: details Concise | By set Times square red set, green set, blue set The children build the familiar multiplication square but graphed vertically, progressing from the natural numbers to the integers at experiment 30 and from the integers to the real numbers at experiment 36. Of the 4 standard models for multiplication, only one, repeated addition, is used here. This is the one children turn to most readily. The point of the exercise is to realise algebraic symmetry geometrically. Teachers may like to devise a parallel treatment for the addition square. Zoom green set, blue set On the one hand, any child pushing a toy car, playing with a doll or recognising Mummy’s photo in an album accepts the same object on different scales. On the other, the consequences for measurement of change of scale take us up to Level 9 on the National Curriculum. But a rich dynamic experience of scaling – not enlarging but zooming – may lead us to appreciate why we needn’t leave so much space when they deliver that ton of sand or those thousand bricks but why we’ll need a lot more wool for our sister’s cardigan than for little Penny’s. The treatment is qualitative except where the quantities are experienced but not abstracted or where they lead to a surprise. Indeed it is the mixture of recognition and surprise which makes this such a good topic. The maths implicit here is that area goes up as the square of the linear scale factor: volume as the cube. If you have friends with the equipment – or the school has it – the children should look through a zoom lens while zooming it and enlarge one of their pictures on a zoom copier. Slices and solids blue set The growing child starts with 3-D objects and later abstracts 2-D shapes. by looking at 2-D sections through 3-D objects we can move back and forth between the two worlds and enrich our experience of both. We look at the complete slice, approaching the shape … from the inside: … from the outside: … and from both: Abbott’s Flatland is the syllabus for this section. (See the original or Martin Gardner’s Further Mathematical Diversions ch 12.) A water surface, a slice of light, a rubber band, a sheet of cardboard, the junction between 2 layers of coloured plasticine, may all be used to define the plane of section. Note that in almost all cases the ‘slices’ are related by two kinds of transformation: affine – represented by sections through the general prism or cylinder, or, wore generally, projective – represented by sections through the general pyramid or cone. Left and right red set, green set, blue set This section deals with mirror symmetry. In these explorations of space the young child can make discoveries and the older person examine observations long taken for granted. (If the visitor has mixed eye-hand dominance, it doesn’t matter. We’re not using lateral discrimination here but studying the phenomenon of handedness itself. For the purpose of these exercises it’s of no concern which hand you call which. For ‘…left …, then right …’ on the caption cards, read ‘… one …, then … the other …’.) At the end of the sequence we extend the idea of reflection from that in a plane to that in a line and, finally, a point. In The Ambidextrous Universe Martin Gardner covers all this material and goes on to examine nature’s preference for one handedness or the other at a fundamental level. All sorts blue set In exploring different ways of sorting and representing data we move between the ‘table’ scale and the ‘room’ scale: between placing a counter in a drawn circle and standing in a rope loop, between following a flowchart with a finger and negotiating an obstacle course of tables and chairs, and so on. The syllabus comprises trees and flowcharts, Carroll and Venn diagrams (1-, 2- and 3-D), bar and pie charts, scattergrams and barycentric graphs. Packing shapes green set, blue set This sequence starts with ways of tiling the plane then advances one dimension to ways of filling space. It moves from an examination of atomic packing to an investigation of the shape of soap bubbles in a foam. Though, as elsewhere, the sequence is progressive, certain experiments can be performed by both first-year undergraduates and – with different motives, preconceptions and expectations! – pre-school children. Transformations red set, green set, blue set Felix Klein’s transformations of space get more and more general as conditions are relaxed. Thus you start with the isometries. then take these as a special case of the similarities, and so on up through affinities, projectivities and topological distortions. This is how the sequence Transformations develops but only in the loosest possible way. In fact the title is little more than an excuse for drawing attention to everyday but surprising ways in which one mathematical object changes into another. In every case, however, one or more quantities are invariant. Teachers may like to name them as they go through the sequence. Be aware of transformations in other sequences: translations, rotations and reflections: Packing shapes; reflections: Left and right; dilatations: Zoom; affinities, perspectivities: Slices and solids. Angle blue set Angle is a dimensionless measure – it must always be defined as a ratio (a fraction of a turn, arc: radius, . . .) – so already abstract to that extent, and children find it hard to deal with this quantity they can’t locate: The kinetic, operational treatment (angle as ‘turn’) now familiar through LOGO is the approach least prone to misconstruction. But the static manifestation (angle as ‘shape’) can’t be ignored. The procedure here is to establish frames of reference with spirit level, plumbline, compass and use the vocabulary which goes with them – ‘vertical ‘/‘horizontal ‘ – ‘steep/ ‘shallow’; ‘north’/’east’ – ‘north-east’, etc. -, then free the angles from their reference directions – walk around with a ‘right angle checker’, record angles found at the vertices of loose objects with an ‘angle indicator’ and use the corresponding terms – ‘perpendicular’, ‘parallel’, ‘inclined’; ‘acute’, ‘obtuse’, ‘reflex’. At sixth-form level the approach to trigonometry is the same: a ‘trigogram’ displays the 3 ratios on Cartesian axes in the standard way, from which thereafter they may be divorced. L.C.M.s red set, green set, blue set The lowest common multiple of 3 and 4 is 12. If we look along the number line we thus find multiples of 3 and 4 coinciding at multiples of 12. The number line is a spatial model but the same arithmetic can be modeled in time. In fact embodiments of this simple idea are many and diverse. The familiar Cuisenaire rods are out in the sequence but also gear wheels, a glockenspiel and acetate masks each of which lets through multiples of a particular number from the ‘times’ table. Dissections red set, green set, blue set Here is another subsequence which has outgrown its parent, in this case Transformations. The core is a set of dissection puzzles. To solve them quickly one must: a) predict the effects of grouping simple angles into compound ones and adding lengths, b) remember the effects of so doing – successful or unsuccessful For stage (a) careful, directed observation is a prerequisite, and in both stages the capacity to ‘visualise’ – form and manipulate mental images — is exercised and developed. In all these transformations area is invariant but, though the lengths and angles of individual polygons remain unchanged – i.e. the transformations they suffer are isometric – the composite shape is not preserved. The puzzles stress this independence. The sequence is extended 1 dimension by a group of solid dissections – including the celebrated SOMA cube on both the ‘table’ and ‘floor’ scale. A series of puzzles where a given polygon must be produced from the intersection of 2 (or more!) others demands the same skills. 2-D to 3-D blue set This is a little exhibition of ways to simulate 3 dimensions in 2: anaglyphs, stereopairs, linear perspective. Once you’ve got your eye in, objects in a conventional projection like the isometric are seen in 3-D); but note that in certain cases there is a many-one mapping of points on the object to points on the drawing, making for ambiguity. Pascal’s Triangle blue set This important array embodies number sequences ubiquitous in mathematics – the successive orders of triangle numbers, the binomial coefficients, the Fibonacci sequence – and their relations. The sequence Pascal’s Triangle opens a few doors to this Alhambra. Like Times Square most of this sequence can be adapted readily for use in the classroom. Though written quite independently, Tony Colledge’s (photocopiable) book Pascal’s Triangle (Tarquin) is virtually a teacher’s guide to this sequence. Loci and linkages green set, blue set Loci – Sliding ladders, rolling wheels, … What paths will selected points upon them follow? Does the geometry of the mechanism contain simple features to help you make your prediction? Linkages – Though less visible than they were a century ago, link motions are essential to devices we take for granted: tool and sewing boxes, folding steps. umbrellas, floor mops, screw jacks, cupboard hinges, door closers. In this sequence we study how the properties of the rhombus and parallelogram are exploited. Pythagoras’ Theorem blue set There are many ways to demonstrate why Pythagoras’ Theorem holds: here we take just 3. Before moving to the general case we look at the 3, 4, 5 and 1, root 2, root 3 triangles. We also apply the converse to check some right angles. Symmetry red set, green set, blue set We meet line and rotation symmetry first separately embodied then combined. Next we make designs with our own choice of symmetries. And finally we are introduced to the idea of a group of symmetry operations by fitting solid shapes in holes. Weigh-In red set, green set, blue set Formerly under the Challenges topic but now a sequence in its own right, this includes a number of exercises for both the 2-pan and mathematical balances. Challenges red set, green set, blue set In other parts of the Circus it’s clear what sort of maths is involved. But in situations like that set up for ‘Grandpa’s Armchair’ it’s difficult to get a mathematical handle on the problem. That particular example comes from John Mason’s excellent introduction to the psychology of problem-solving, Thinking Mathematically (Addison-Wesley).
{"url":"http://magicmathworks.org/find-out-more/mathworks-topics/mathworks-topics-details","timestamp":"2024-11-10T21:03:32Z","content_type":"application/xhtml+xml","content_length":"39417","record_id":"<urn:uuid:b2bb8248-7368-4c35-a3d8-5313b803843a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00386.warc.gz"}
Anish - Test Prep Tutor - Learner - The World's Best Tutors I specialize in working with high-school and college students taking standardized tests like the SAT and ACT. I have a bachelor's degree in computer science, an MBA in finance and a second masters degree in business analytics, and I tutor grades 8-College. My tutoring style: My approach to tutoring is distinct from that of a formally trained instructor, and relies on both developing effective techniques to cope and catch up with classroom material, and ultimately improving outcomes, whether through better grades or better performance in class. I also work with students looking to achieve perfect scores on the Math sections of standardized tests, just as I always have when taking the tests myself, by approaching the challenge as I did when I went through the process as a young man. I've been told by several tutors and parents in the past that no other tutors they had engaged were able to help them achieve those scores the way I managed to. I also specialize in working with kids who are special needs, including those with dyslexia, dysgraphia, dyscalculia, and ADD, among other special disorders Success stories: Beth was a student of mine for 3 months. Her initial math score of 610 didn't reflect her true potential since she was a stellar student enrolled in AP calculus AB when I started working with her. Working together, we developed a plan to get her score up in just under 12 weeks. I built a methodical study plan for her with targeted review for the topics she continued to get questions wrong in. We used a wide variety of resources, including 9th grade geometry textbooks, worksheets for topic specific review and several publicly available past SATs. It was a struggle for her at first, especially having to do this and manage her time carefully so as to not fall behind on her school work. In the end, her efforts paid off, however, with a near-perfect score of 790 on the Math Anuj needed a score of 30+ on the ACT to qualify for college scholarships, and only obtained an overall score of 26 on his first attempt. His main issue was an inability to effectively use his calculator to speed up the rate at which he could complete the questions and thereby avoid running out of time on the Math section on every practice or real test he took. We worked closely together to practice over 50 different problem types, with repetition of the technique to solve the questions - whether exclusively on paper or with the help of his calculator - being the key to his future success. With 6 months of relentless practice, review and hard work, he achieved a score of 33 and is heading to a top 40 college in the United States next fall as a direct result of our efforts. Hobbies and interests: I love traveling overseas, and I'm fortunate to have visited over 40 countries so far, and every continent except Antarctica. I'm also a passionate soccer fan. I'm fortunate enough to have been to both El Classico and Wembley for the FA cup final, and I play in recreational tournaments every chance I get.
{"url":"https://www.learner.com/tutor/anish-t-test-prep","timestamp":"2024-11-09T09:20:59Z","content_type":"text/html","content_length":"62415","record_id":"<urn:uuid:6bdf7306-22f3-4ec4-8fb6-66c71e770f7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00680.warc.gz"}
Calculating the Coordinates of a Pixel in a Drawing Region - (300, 370) Using the notation (x,y) [for example, (50,80)] write the coordinates of the pixel that is half-way across the bottom boundary of an applet’s drawing region that is 600 pixels wide and 370 pixels Using the notation (x,y) [for example, (50,80)] write the coordinates of the pixel that is half-way across the bottom boundary of an applet’s drawing region that is 600 pixels wide and 370 pixels
{"url":"https://matthew.maennche.com/2014/06/using-notation-xy-example-5080-write-coordinates-pixel-half-way-across-bottom-boundary-applets-drawing-region-600-pixels-wide-370-pixels-high/","timestamp":"2024-11-07T10:06:33Z","content_type":"text/html","content_length":"90635","record_id":"<urn:uuid:252273cb-87bd-42ac-86d7-67ed8e3f36d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00826.warc.gz"}
Making plywood miters. Petras' Ngons is almost the holy grail As I said in that old gh forum thread- for the general case where the vertices have valence greater than 3, even if the faces are planar, there won’t exist any vector along which you can offset the vertices that gives planar faces parallel to those of the original faces. If you do offset all the faces by the same amount, usually they will not intersect in a single point, so each vertex in the original will turn into multiple vertices in the offset. For valence 4, the specific condition the mesh needs to meet is indeed for the faces around each vertex to be tangent to a common cone. For general freeform shapes you need to do some optimisation to meet this condition. Here’s an example that lets you deform a mesh, then make it strictly conical and planar, and calculates the offset so that every solid panel has parallel planar faces of the same thickness conical_offset.gh (27.1 KB) (see also these old threads
{"url":"https://discourse.mcneel.com/t/making-plywood-miters-petras-ngons-is-almost-the-holy-grail/111974/8","timestamp":"2024-11-09T16:16:29Z","content_type":"text/html","content_length":"29393","record_id":"<urn:uuid:e45ae35d-e87d-4618-9633-1f78d9690a37>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00767.warc.gz"}
Exploring the Mandelbrot Set 4. Filament Symmetry including self-similarity and two types of rotational symmetry (related to mu-atom Periods and bifurcation). 5a. Nested Filament Symmetries 5b. The unique attributes of Cusp filaments Then one reaches a bit of a lull. To most observers, these properties seem to account for everything in the structure of the Mandelbrot Set. In fact, most people will stop here, and some technically-minded enthusiasts have devised Naming Systems based on these features alone (plus possibly the External Arguments). Persistent explorers and those with a more mathematical background will also discover: Depending on exploration style and the availability of a viewer that uses extended precision, one of the following is likely to be discovered next: When deeper zooming (typically by magnifications of 1030 or usually much more) the explorer is likely to discover Julia morphing and Leavitt navigation. Exotic structures such as polaftis and Julia "trees" are just one of the huge variety of things that have been found. algorithms, for discussion of how to write a Mandelbrot program. history, for a brief history of the Mandelbrot Set's exploration. R2 for a guide to the largest features of the Mandelbrot Set. enumeration of features for a discussion of the various number sequences that are discovered when the Mandelbrot set's features are counted. revisions: 20020418 oldest on record; 20230701 add Leavitt navigation, Polaftis, Julia morphing From the Mandelbrot Set Glossary and Encyclopedia, by Robert Munafo, (c) 1987-2024. Mu-ency main page — index — recent changes — DEMZ
{"url":"https://www.mrob.com/pub/muency/exploring.html","timestamp":"2024-11-04T13:48:05Z","content_type":"text/html","content_length":"7754","record_id":"<urn:uuid:f2f1c41f-61c8-4c56-b69e-f91e7c7ea5ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00154.warc.gz"}
The HATCH entity (DXF Reference) fills a closed area defined by one or more boundary paths by a hatch pattern, a solid fill, or a gradient fill. All points in OCS as (x, y) tuples (Hatch.dxf.elevation is the z-axis value). There are two different hatch pattern default scaling, depending on the HEADER variable $MEASUREMENT, one for ISO measurement (m, cm, mm, …) and one for imperial measurement (in, ft, yd, …). The default scaling for predefined hatch pattern will be chosen according this measurement setting in the HEADER section, this replicates the behavior of BricsCAD and other CAD applications. Ezdxf uses the ISO pattern definitions as a base line and scales this pattern down by factor 1/25.6 for imperial measurement usage. The pattern scaling is independent from the drawing units of the document defined by the HEADER variable $INSUNITS. Subclass of ezdxf.entities.DXFGraphic DXF type 'HATCH' Factory function ezdxf.layouts.BaseLayout.add_hatch() Inherited DXF attributes Common graphical DXF attributes Required DXF version DXF R2000 ('AC1015') Boundary paths classes Path manager: BoundaryPaths Pattern and gradient classes class ezdxf.entities.Hatch¶ Pattern name as string 1 solid fill, use method Hatch.set_solid_fill() 0 pattern fill, use method Hatch.set_pattern_fill() 1 associative hatch 0 not associative hatch Associations are not managed by ezdxf. 0 normal 1 outer 2 ignore (search AutoCAD help for more information) 0 user 1 predefined 2 custom The actual pattern rotation angle in degrees (float). Changing this value does not rotate the pattern, use set_pattern_angle() for this task. The actual pattern scale factor (float). Changing this value does not scale the pattern use set_pattern_scale() for this task. 1 = double pattern size else 0. (int) Count of seed points (use get_seed_points()) Z value represents the elevation height of the OCS. (float) BoundaryPaths object. Pattern object. Gradient object. A list of seed points as (x, y) tuples. property has_solid_fill: bool¶ True if entity has a solid fill. (read only) property has_pattern_fill: bool¶ True if entity has a pattern fill. (read only) property has_gradient_data: bool¶ True if entity has a gradient fill. A hatch with gradient fill has also a solid fill. (read only) property bgcolor: RGB | None¶ Set pattern fill background color as (r, g, b)-tuple, rgb values in the range [0, 255] (read/write/del) r, g, b = entity.bgcolor # get pattern fill background color entity.bgcolor = (10, 20, 30) # set pattern fill background color del entity.bgcolor # delete pattern fill background color set_pattern_definition(lines: Sequence, factor: float = 1, angle: float = 0) None¶ Setup pattern definition by a list of definition lines and the definition line is a 4-tuple (angle, base_point, offset, dash_length_items). The pattern definition should be designed for a pattern scale factor of 1 and a pattern rotation angle of 0. ○ angle: line angle in degrees ○ base-point: (x, y) tuple ○ offset: (dx, dy) tuple ○ dash_length_items: list of dash items (item > 0 is a line, item < 0 is a gap and item == 0.0 is a point) ○ lines – list of definition lines ○ factor – pattern scale factor ○ angle – rotation angle in degrees set_pattern_scale(scale: float) None¶ Sets the pattern scale factor and scales the pattern definition. The method always starts from the original base scale, the set_pattern_scale(1) call resets the pattern scale to the original appearance as defined by the pattern designer, but only if the pattern attribute dxf.pattern_scale represents the actual scale, it cannot restore the original pattern scale from the pattern definition itself. scale – pattern scale factor set_pattern_angle(angle: float) None¶ Sets the pattern rotation angle and rotates the pattern definition. The method always starts from the original base rotation of 0, the set_pattern_angle(0) call resets the pattern rotation angle to the original appearance as defined by the pattern designer, but only if the pattern attribute dxf.pattern_angle represents the actual pattern rotation, it cannot restore the original rotation angle from the pattern definition itself. angle – pattern rotation angle in degrees set_solid_fill(color: int = 7, style: int = 1, rgb: RGB | None = None)¶ Set the solid fill mode and removes all gradient and pattern fill related data. ○ color – AutoCAD Color Index (ACI), (0 = BYBLOCK; 256 = BYLAYER) ○ style – hatch style (0 = normal; 1 = outer; 2 = ignore) ○ rgb – true color value as (r, g, b)-tuple - has higher priority than color. True color support requires DXF R2000. set_pattern_fill(name: str, color: int = 7, angle: float = 0.0, scale: float = 1.0, double: int = 0, style: int = 1, pattern_type: int = 1, definition=None) None¶ Sets the pattern fill mode and removes all gradient related data. The pattern definition should be designed for a scale factor 1 and a rotation angle of 0 degrees. The predefined hatch pattern like “ANSI33” are scaled according to the HEADER variable $MEASUREMENT for ISO measurement (m, cm, … ), or imperial units (in, ft, …), this replicates the behavior of BricsCAD. ○ name – pattern name as string ○ color – pattern color as AutoCAD Color Index (ACI) ○ angle – pattern rotation angle in degrees ○ scale – pattern scale factor ○ double – double size flag ○ style – hatch style (0 = normal; 1 = outer; 2 = ignore) ○ pattern_type – pattern type (0 = user-defined; 1 = predefined; 2 = custom) ○ definition – list of definition lines and a definition line is a 4-tuple [angle, base_point, offset, dash_length_items], see set_pattern_definition() set_gradient(color1: RGB = RGB(0, 0, 0), color2: RGB = RGB(255, 255, 255), rotation: float = 0.0, centered: float = 0.0, one_color: int = 0, tint: float = 0.0, name: str = 'LINEAR') None¶ Sets the gradient fill mode and removes all pattern fill related data, requires DXF R2004 or newer. A gradient filled hatch is also a solid filled hatch. Valid gradient type names are: ○ “LINEAR” ○ “CYLINDER” ○ “INVCYLINDER” ○ “SPHERICAL” ○ “INVSPHERICAL” ○ “HEMISPHERICAL” ○ “INVHEMISPHERICAL” ○ “CURVED” ○ “INVCURVED” ○ color1 – (r, g, b)-tuple for first color, rgb values as int in the range [0, 255] ○ color2 – (r, g, b)-tuple for second color, rgb values as int in the range [0, 255] ○ rotation – rotation angle in degrees ○ centered – determines whether the gradient is centered or not ○ one_color – 1 for gradient from color1 to tinted color1 ○ tint – determines the tinted target color1 for a one color gradient. (valid range 0.0 to 1.0) ○ name – name of gradient type, default “LINEAR” set_seed_points(points: Iterable[tuple[float, float]]) None¶ Set seed points, points is an iterable of (x, y)-tuples. I don’t know why there can be more than one seed point. All points in OCS (Hatch.dxf.elevation is the Z value) transform(m: Matrix44) Hatch¶ Transform entity by transformation matrix m inplace. associate(path: AbstractBoundaryPath, entities: Iterable[DXFEntity])¶ Set association from hatch boundary path to DXF geometry entities. A HATCH entity can be associative to a base geometry, this association is not maintained nor verified by ezdxf, so if you modify the base geometry the geometry of the boundary path is not updated and no verification is done to check if the associated geometry matches the boundary path, this opens many possibilities to create invalid DXF files: USE WITH CARE! Remove associated path elements. Boundary Paths¶ The hatch entity is build by different path types, these are the filter flags for the Hatch.dxf.hatch_style: • EXTERNAL: defines the outer boundary of the hatch • OUTERMOST: defines the first tier of inner hatch boundaries • DEFAULT: default boundary path As you will learn in the next sections, these are more the recommended usage type for the flags, but the fill algorithm doesn’t care much about that, for instance an OUTERMOST path doesn’t have to be inside the EXTERNAL path. Island Detection¶ In general the island detection algorithm works always from outside to inside and alternates filled and unfilled areas. The area between then 1st and the 2nd boundary is filled, the area between the 2nd and the 3rd boundary is unfilled and so on. The different hatch styles defined by the Hatch.dxf.hatch_style attribute are created by filtering some boundary path types. Hatch Style¶ • HATCH_STYLE_IGNORE: Ignores all paths except the paths marked as EXTERNAL, if there are more than one path marked as EXTERNAL, they are filled in NESTED style. Creates no hatch if no path is marked as EXTERNAL. • HATCH_STYLE_OUTERMOST: Ignores all paths marked as DEFAULT, remaining EXTERNAL and OUTERMOST paths are filled in NESTED style. Creates no hatch if no path is marked as EXTERNAL or OUTERMOST. • HATCH_STYLE_NESTED: Use all existing paths. Hatch Pattern Definition Classes¶ class ezdxf.entities.Pattern¶ List of pattern definition lines (read/write). see PatternLine add_line(angle: float = 0, base_point: UVec = (0, 0), offset: UVec = (0, 0), dash_length_items: Iterable[float] | None = None) None¶ Create a new pattern definition line and add the line to the Pattern.lines attribute. clear() None¶ Delete all pattern definition lines. scale(factor: float = 1, angle: float = 0) None¶ Scale and rotate pattern. Be careful, this changes the base pattern definition, maybe better use Hatch.set_pattern_scale() or Hatch.set_pattern_angle(). ○ factor – scaling factor ○ angle – rotation angle in degrees class ezdxf.entities.PatternLine¶ Represents a pattern definition line, use factory function Pattern.add_line() to create new pattern definition lines. Hatch Gradient Fill Class¶ class ezdxf.entities.Gradient¶
{"url":"https://ezdxf.mozman.at/docs/dxfentities/hatch.html","timestamp":"2024-11-10T10:47:25Z","content_type":"text/html","content_length":"135004","record_id":"<urn:uuid:4ee42fa1-6b79-4610-b550-0098534575aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00112.warc.gz"}
Specification of Population Totals and Sampling Rates To include a finite population correction (fpc) in Taylor series variance estimation, you can input either the sampling rate or the population total by using the RATE= or TOTAL= option in the PROC SURVEYLOGISTIC statement. (You cannot specify both of these options in the same PROC SURVEYLOGISTIC statement.) The RATE= and TOTAL= options apply only to Taylor series variance estimation. The procedure does not use a finite population correction for BRR or jackknife variance estimation. If you do not specify the RATE= or TOTAL= option, the Taylor series variance estimation does not include a finite population correction. For fairly small sampling fractions, it is appropriate to ignore this correction. See Cochran (1977) and Kish (1965) for more information. If your design has multiple stages of selection and you are specifying the RATE= option, you should input the first-stage sampling rate, which is the ratio of the number of PSUs in the sample to the total number of PSUs in the study population. If you are specifying the TOTAL= option for a multistage design, you should input the total number of PSUs in the study population. See the section Primary Sampling Units (PSUs) for more details. For a nonstratified sample design, or for a stratified sample design with the same sampling rate or the same population total in all strata, you can use the RATE=value or TOTAL=value option. If your sample design is stratified with different sampling rates or population totals in different strata, use the RATE=SAS-data-set or TOTAL=SAS-data-set option to name a SAS data set that contains the stratum sampling rates or totals. This data set is called a secondary data set, as opposed to the primary data set that you specify with the DATA= option. The secondary data set must contain all the stratification variables listed in the STRATA statement and all the variables in the BY statement. If there are formats associated with the STRATA variables and the BY variables, then the formats must be consistent in the primary and the secondary data sets. If you specify the TOTAL=SAS-data-set option, the secondary data set must have a variable named _TOTAL_ that contains the stratum population totals. Or if you specify the RATE=SAS-data-set option, the secondary data set must have a variable named _RATE_ that contains the stratum sampling rates. If the secondary data set contains more than one observation for any one stratum, then the procedure uses the first value of _TOTAL_ or _RATE_ for that stratum and ignores the rest. The value in the RATE= option or the values of _RATE_ in the secondary data set must be nonnegative numbers. You can specify value as a number between 0 and 1. Or you can specify value in percentage form as a number between 1 and 100, and PROC SURVEYLOGISTIC converts that number to a proportion. The procedure treats the value 1 as 100% instead of 1%. If you specify the TOTAL=value option, value must not be less than the sample size. If you provide stratum population totals in a secondary data set, these values must not be less than the corresponding stratum sample sizes.
{"url":"http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/statug_surveylogistic_details17.htm","timestamp":"2024-11-08T12:53:37Z","content_type":"application/xhtml+xml","content_length":"17615","record_id":"<urn:uuid:4f6856f2-225c-4fc2-a142-c87063824cf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00465.warc.gz"}
Research on rolling bearing fault feature extraction based on entropy feature In various fields of large or small machinery, rolling bearing in which the status is indispensable, can be said to play an important role in production and life. For such an important rolling bearing, its fault diagnosis must be paid attention to. Need to be specific to the rolling element fault, inner ring fault or outer ring fault, so that we can carry out subsequent improvement [1]. If only one standard is used for inspection and maintenance, it will not only have low accuracy, but also consume manpower and material resources. If accurate fault diagnosis can be carried out, prevention first and moderate maintenance can be carried out at the same time to avoid bad effects. It is boundto play a very importantrole in promoting economic and social development. The methods of mechanical equipment fault diagnosis are generally realized by extracting the characteristic information that can represent the state of the equipment. However, the vibration data collected usually contains complex environmental noise and other interferences, which leads to the failure of accurate detection of mechanical equipment by the general time-frequency analysis method [2]. Rolling bearing is an important part of mechanical equipment, its safe operation is related to the operation of the whole equipment, so the accurate diagnosis of bearing fault becomes extremely The research of rolling bearing fault diagnosis began around 1960. On the whole, it can be divided into five stages.The first stage is spectrum analysis in 1950s. The method of spectrum analysis has attracted much attention. However, due to the immature technology at that time, spectrum analysis has not been widely used in the field of bearing fault diagnosis technology because of the disadvantages of interference noise, high price and complex operation. In the second stage, in the sixties of the 20th century, the impact pulse meter detection method appeared, which is obviously better than the spectrum analysis, and can directly save the complicated steps. It is still widely used in the fault diagnosis of rolling bearing. In the third stage, in the 1960s1980s, the computer and signal have made great progress under the promotion of the trend of the times, and the more prominent one is the resonance demodulation technology, because the advent of this technologymakes the rolling bearing fault diagnosis technology to a higher level, from birth to maturity step by step [3]. The fourth stage is after the 1980s, the emergence of artificial intelligence provides new soil for rolling bearing fault diagnosis, and the emergence of intelligent diagnosis system greatly improves the accuracy of fault diagnosis. Due to the intelligence, the influence of human factors is greatly reduced, which has been applied in engineering practice.In order to enable nonsignal analysis specialists to monitor the running state and reduce the engineering cost as much as possible, Janssens proposed a learning model based on convolutional neural network, which learned the function of detecting bearing faults from the vibration signal itself. In view of the nonlinearity and non-stability of the vibration signal of rolling bearing, Ben proposed the mathematical analysis of selecting the most importantintrinsic mode functionby combining the method of extracting energy entropy by empirical mode decomposition. The fifth stage is after the 21st century, that is, we now, rolling bearing fault diagnosis technology has taken an epoch-making step, more and more high-tech development, through the virtual instrument fault diagnosis, has become a new beacon, has important practical value [4]. At present, rolling bearing fault diagnosis has been studied all over the world, combining a large number of different research fields. According to the most popular classification method, it can be divided into three kinds, which are modelbased fault diagnosis technology, knowledge-based fault diagnosis technology and data-based fault diagnosis technology. Because of the national conditions, our country started to study the fault diagnosis much later than other countries. It was not until the late 1970s and the early 1980s that I first came into contact with this field and started formal research. But it is gratifying that with the hard work of Chinese researchers, in the 1990s, the field of fault research has been on the right track, both in theory and practice have made great breakthroughs, and can be applied in production and life. But compared with other countries, China still has a long way to go. Scheme design Approximate entropy Approximate Entropy (ApEn) is a nonlinear dynamic parameter proposed by Pincus in 1991 to measure the complexity and statistical quantification of a sequence. ApEn reflects the degree of self similarity of sequence in pattern. The higher the ApEn value is, the lower the possibility that the system can predict it. It gives the situation that the incidence of new pattern increases or decreases with the dimension, so as to reflect the complexity of data structure. Through the previous, we can know that rolling bearing will produce vibration, and in different failure modes, the vibration signal is also different. According to the physical meaning of ApEn, different signals mean different complexity, which can be used as features for rolling bearing fault diagnosis [5]. In the normal calculation of approximate entropy, there are too many redundant calculations, which is a waste of time. A fast algorithm of approximate entropy is given in the literature. Let the original sequencebe u(i),i=0,1,,N,r=0.10.25SD(u) (SD is the standard deviation of sequence u(i)), then the calculation of approximate entropy is more reasonable. If m = 2, then N = 5001000.3. Calculate the distance matrix d of N×N, and the elements in row I and column j of d are written as d[ij] $d ij ={ 1 | u( i )−u(j) |<r 0 | u( i )−u(j) |≥r MathType@MTEF@5@5@+= Use that elements in d, calculate $C i 2 (r), C i 3 (r) MathType@MTEF@5@5@+= $C i 2 ( r )= ∑ j=1 N−1 d ij ∩ d (i+1)(j+1) MathType@MTEF@5@5@+= $C i 3 (r)= ∑ j=1 N−2 d ij ∩ d (i+1)(j+1) ∩ d (i+1)(j+2) MathType@MTEF@5@5@+= $ApEn( m,r,N )= H n (r)− H n+1 (r) MathType@MTEF@5@5@+= Take out the first 6000 data and calculate the approximate entropy in a group of 600, as shown in Figure 1. It is not difficult to see that when the rolling bearing is normal, the value of approximate entropy is not large, because under normal conditions, the signal generated is relatively single. When the rolling bearing fails, it will produce a lot of complex information, which will increase the approximate entropy. However, the approximate entropy values of rolling element fault and normal working conditions are very similar, so it is not easy to distinguish them. Sample entropy In 2000, Richman, et al. First proposed the concept of sample entropy, which is a more robust time series complexitymeasurementmethodsimilar to approximateentropy. Compared with approximate entropy, it has stronger antiinterference and anti noise ability [7]. Sample entropy improves the algorithm of approximate entropy, which can reduce the error of approximate entropy. It is similar to approximate entropy, but its accuracy is better [6]. Sample entropy is a new measure of time series complexity proposed by Richman and moornan. The improvement of the sample entropy algorithm compared with the approximate entropy algorithm: compared with the approximate entropy, the sample entropy calculates the logarithm of sum. The purpose of sample entropy is to reduce the error of approximate entropy, which is more consistent with the known random part. Sample entropy is similar to the present approximate entropy but has better accuracy. Compared with approximate entropy, sample entropy has two advantages: first, sample entropy does not include the comparison of its own data segments, it is the exact value of negative average natural logarithm of conditional probability, so the calculation of sample entropy does not depend on the data length; Second, sample entropy has better consistency. That is, if one time series has a higher value than another time series, it also has a higher value for other m and R values. Assume that the data is ${ X i }={ x 1, x 2 ,…, x N } MathType@MTEF@5@5@+= , and its length is n, and then reconstruct an m-dimensional vector from the original signal $x i =[ x i , x i+1 ,…, x i+m−1 ] MathType@MTEF@5@5@+= Define the distance between A and B $d ij =d[ x( i ),x( j ) ]= max k∈[0,m−1] [| x( i+k )−x(j+k) |] MathType@MTEF@5@5@+= Count the number of dij less than the similarity tolerance r and the ratio of the number to the total number of d[ij] N-m-1, and record it as $B i m ( r )= 1 N−m−1 MathType@MTEF@5@5@+= $SampEn(m,r,N)=ln B m ( r )−ln B m+1 ( r ) MathType@MTEF@5@5@+= Take the first 6000 data, take 600 as a group, and calculate the sample entropy, as shown in Figure 2. It is not difficult to see that the inner ring fault, rolling element fault and normal working conditions are very difficult to distinguish, but the outer ring fault can be distinguished. Information entropy Claude E. Shannon, one of the originators of information theory, defined information (entropy) as the probability of occurrence of discrete random events. X represents a random variable, the value of which is $(x1,x2,...xn) MathType@MTEF@5@5@+= yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaqcLbqaqaaaaaaaaaWdbiaacIcacaWG4bGaaGymaiaacYcacaWG4bGaaGOmaiaacYcacaGGUaGaaiOlaiaac6cacaWG4bGaamOBaiaacMcaaaa@40B1@$ and p(xi) represents the probability of occurrence of event xi. $H( X )=− ∑ p( xi )log( p( xi ) ) (i=1,2,…,n) MathType@MTEF@5@5@+= The so-called information entropy is a very abstract concept in mathematics. Here we might as well understand information entropy as the probability of certain information. Information entropy and thermodynamic entropy are closely related. According to Charles h. Bennett’s reinterpretation of Maxwell’s demon, the destruction of information is an irreversible process, so the destruction of information conforms to the second law of thermodynamics. The generation of information is the process of introducing negative entropy into the system. So the sign of information entropy and thermodynamic entropy should be opposite. Generally speaking, when a kind of information has a higher probability of occurrence, it means that it has been spread more widely, or that it has been cited to a higher degree. We can think that from the perspective of information dissemination, information entropy can express the value of information. In this way, we have a standard to measure the value of information, and we can make more inferences about knowledge flow. Take the first 6000 data and calculate the information entropy in a group of 600, as shown in Figure 3. We can see that the rolling element fault and the normal working condition intersect, and the inner ring fault and the outer ring fault are also very close to each other. If the information entropy is not processed, it is difficult to distinguish the rolling element fault and the normal working condition, and it is easy to misjudge the inner ring fault and the outer ring fault. Three entropy joint analysis Three kinds of entropy in the case of independent judgment have no small disadvantages, it is easy to misjudge the situation, need to be further processed. So I started to study the extraction of rolling bearing fault features through the joint analysis of approximate entropy, sample entropy and information entropy. • The original vibration signals of rolling bearing under normal working condition and different fault types are collected, and 100 groups of sample data under each working condition are selected. • The approximate entropy, sample entropy, information entropy and average value are calculated. • The entropy characteristic scale of fault extraction of rolling bearing is obtained. • The original vibration signals of rolling bearing under normal working condition and different fault types are collected, and 10 groups of sample data under each working condition are selected. • The approximate entropy, sample entropy and information entropy of ten groups of data are calculated. • Compared with the extracted entropy feature. • According to the absolute value of the difference with the entropy characteristic scale, which bearing working state is closest to the test data can be judged. Simulation experiment Data source: The data used in this paper are all from the bearing data center of Case Western Reserve University. The model of experimental bearing is 6205-2RS JEM SKF deep grooveball bearing. The acceleration vibration sensor placed on the driving end of the bearing is used to collect the vibration acceleration signal. It uses electric spark technology to damage the rolling body, inner ring and outer ring of the bearing. The damage diameter is 0.1778 mm, 0.3556 mm and 0.5334 mm. The fault diameter selected in this paper is 0.1778 mm. The data used in this paper is divided into normal working condition, ball fault, outer ring fault and inner ring fault. The sampling frequency is 12Khz and the motor speed is 1750r/min. Specific implementation mode: In the first step, collect the original signals of normal working conditions and different fault states of rolling bearing. The total data amount of each working condition is 120000. Then, 10 groups of sample data for each working condition are selected, and the data volume of each group is 6000. The time-domain image of the first group of sample data is shown in Figures 4-7. Three kinds of entropy in the case of independent judgment have no small disadvantages, it is easy to misjudge the situation, need to be further processed. So I started to study the extraction of rolling bearing fault features through the joint analysis of approximate entropy, sample entropy and information entropy. The data is taken from the Western Reserve University. Under the working condition of 2 HP load, 0.1778 mm fault diameter and 1750 R / min rotating speed, the vibration acceleration signals under four different modes are obtained. There are four groups corresponding to different modes, with 60000 data in each group. Each group is divided into 10 sections, with 6000 data in each section. Each section is divided into 10 sections, with 600 data in each section Each section calculates an approximate entropy, sample entropy and information entropy, and each group has ten entropy values to do mean processing, which can get ten approximate entropy mean values, sample entropy mean values, and information entropy mean values of each section. shown in Figures 8-10. As can be seen from Fig, for each fault mode, there are two kinds of entropy features that can be distinguished. As long as the features of the three kinds of entropy mean in each fault mode are extracted, four groups of column vectors are formed, as shown in Table 1, each column corresponds to the entropy feature vector under the working condition. Then, the same entropy characteristic column vectors obtained from the same four test data are used to form the test data entropy characteristic matrix. Taking the average entropy feature vector as the benchmark, we compareit with the test data entropyfeature vector, that is, take the absolute value after the difference, and get four new vectors. The four new vectors are combined into an entropy characteristic matrix, and the minimum value of each row in the matrix is taken out. The maximum number of columns corresponding to the minimum value is the closest test data to the fault mode. However, in rare cases, three kinds of entropy features will judge three kinds of fault modes. At this time, the fault mode corresponding to the maximum discrimination approximate entropy mean value will be taken as the final fault mode of test data. After the early stage of rolling bearing fault feature extraction, we have extracted the approximate entropy mean value, sample entropy mean value and information entropy mean value of four rolling bearing fault modes. By comparing the feature vectors, we can effectively distinguish the four fault modes. A 6000 data matrix is randomly generated by using the data, and the fault feature of the matrix is extracted, and the fault condition is determined. Then the results are compared with the previously established standard, and each group is tested 500 times, at least 10 groups are done to test its accuracy. Based on the above description, the simulation experiment is started. It is also taken from the Western Reserve University. Under the working condition of 2 HP load, 0.1778 mm fault diameter and 1750 R/ min rotating speed, the vibration acceleration signals under four different modes are obtained. From the last 60000 data, 6000 outer ring fault data are divided into ten sections, each section has 600 data, and approximate entropy, sample entropy and fuzzy entropy mean are calculated to form column vector in Figure 11. If the difference between the mean approximate entropy and the mean information entropy of the inner ring fault is the smallest, it is judged that the test data is the inner ring fault mode. The same is true for other failure modes. The test vector in the test data is extracted, which is different from the entropyeigenvector,and the minorityis subordinate to the majority. This paper focuses on the special case. The difference between the feature vector and the test vector is selected in the case of rolling element failure in Figure 12, Table 2. It can be seen that the approximate entropy feature gets the minimum value in rolling element fault, the sample entropy feature gets the minimum value in inner ring fault, and the information entropy feature gets the minimum value in normal condition. At this time, we select the condition corresponding to the approximate entropy feature as the final result. Based on approximate entropy, sample entropy and information entropy, a joint diagnosis method for rolling bearing fault is proposed. According to the characteristics of entropy signal, fault diagnosis is carried out. The experimental results show that the accuracy of the three entropy joint fault diagnosis methods is more than ninety-seven percent. And it can correctly identify each bearing state, which verifies the effectiveness of the fault diagnosis method 2.
{"url":"https://www.mathematicsgroup.us/articles/AMP-4-125.php","timestamp":"2024-11-07T03:25:28Z","content_type":"text/html","content_length":"94119","record_id":"<urn:uuid:b2119954-0d86-4e7e-9472-5011420d09fa>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00793.warc.gz"}
Large deviations, basic information theorem for fitness preferential attachment random networks International journal of Statistics and Probability For fitness preferential attachment random networks, we define the empirical degree and pair measure, which counts the number of vertices of a given degree and the number of edges with given fits, and the sample path empirical degree distribution. For the empirical degree and pair distribution for the fitness preferential attachment random networks, we find a large deviation upper bound. From this result we obtain a weak law of large numbers for the empirical degree and pair distribution, and the basic information theorem or an asymptotic equipartition property for fitness preferential attachment random networks. Large deviation upper bound, relative entropy, random network, random tree, random coloured graph, typed graph, asymptotic equipartition property
{"url":"https://ugspace.ug.edu.gh/items/93543a82-53e2-43fb-989c-35885a7715a5","timestamp":"2024-11-12T22:16:22Z","content_type":"text/html","content_length":"433003","record_id":"<urn:uuid:719f7556-cdea-4ed6-be11-8eff14b6d618>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00105.warc.gz"}
Understanding Mathematical Functions: Which Equation Is A Linear Funct Mathematical functions are essential in understanding the relationships between variables and making predictions in various fields, including economics, engineering, and physics. Linear functions are one of the most fundamental types of functions and play a crucial role in understanding more complex mathematical concepts. In this blog post, we will explore what mathematical functions are and why it is important to understand linear functions in particular. Key Takeaways • Linear functions are essential in understanding the relationships between variables and making predictions in various fields. • It is important to understand linear functions as they are fundamental in understanding more complex mathematical concepts. • Recognizing linear patterns in graphs and understanding the slope-intercept form are crucial in identifying linear functions. • Linear functions have real-world applications in various fields and are used in problem solving. • Avoid common mistakes in identifying linear functions by understanding the characteristics and misconceptions about them. Definition of Linear Functions When working with mathematical functions, it is important to understand the concept of linear functions. Linear functions are a fundamental part of algebra and calculus, and they are used to describe relationships between two variables. A. Explanation of linear functions A linear function is a function that can be expressed in the form f(x) = mx + b, where m and b are constants. In this formula, x represents the independent variable, and f(x) represents the dependent variable. The constant m represents the slope of the line, and the constant b represents the y-intercept. B. Characteristics of linear functions Linear functions have several key characteristics that set them apart from other types of functions. One of the most important characteristics is that the graph of a linear function is a straight line. Additionally, the slope of the line is constant, meaning that the rate of change is consistent throughout the function. Another characteristic is that the function's output increases or decreases at a constant rate as the input changes. C. Examples of linear functions There are many real-world examples of linear functions, such as the relationship between time and distance traveled at a constant speed, or the relationship between the number of items sold and the total revenue generated. In mathematical terms, examples of linear functions include f(x) = 3x + 2 and g(x) = -0.5x + 4, where the constants m and b determine the slope and y-intercept of the function, respectively. Understanding linear functions is essential for anyone studying mathematics or working in fields such as engineering, physics, or economics. By grasping the definition and characteristics of linear functions, individuals can better analyze and interpret the relationships between variables in various contexts. Identifying Linear Functions Understanding mathematical functions is essential in many areas of life, including economics, engineering, and physics. One common type of function is the linear function, which has a distinctive form and behavior. In this chapter, we will explore how to identify linear functions and the key elements that define them. A. How to determine if an equation is a linear function Identifying whether an equation represents a linear function can be determined by examining its form. A linear function is one that can be written in the form y = mx + b, where m is the slope and b is the y-intercept. This means that the variable y is directly proportional to x, and the graph of the function is a straight line. Additionally, the highest power of the variable in a linear function is 1. B. Understanding the slope-intercept form The slope-intercept form, y = mx + b, is a key representation of a linear function. The slope, m, represents the rate of change or steepness of the line, while the y-intercept, b, represents the value of y when x = 0. By understanding this form, one can easily identify linear functions and interpret their behavior. C. Recognizing linear patterns in graphs Graphs can provide visual cues to identify linear functions. Linear functions will have a straight line, indicating a constant rate of change between the variables. By observing the direction and steepness of the line, one can determine if the relationship is linear. Additionally, the y-intercept will be the point where the line intersects the y-axis, providing further confirmation of a linear function. Contrasting Linear Functions with Other Types of Functions When it comes to understanding mathematical functions, it's important to differentiate between linear and non-linear functions. Linear functions are a specific type of mathematical equation, and it's crucial to comprehend how they differ from other types of functions. A. Explanation of non-linear functions Non-linear functions are mathematical equations that do not create a straight line when graphed. Instead, they exhibit curving or bending. This means that the rate of change of the function is not constant. Examples of non-linear functions include quadratic, exponential, and logarithmic functions. B. Example of quadratic functions One common example of a non-linear function is the quadratic function, which takes the form f(x) = ax^2 + bx + c. When graphed, a quadratic function creates a parabola, a U-shaped curve that does not form a straight line. C. Differentiating between linear and non-linear functions When distinguishing between linear and non-linear functions, it's important to consider the rate of change. Linear functions have a constant rate of change, resulting in a straight line when graphed. On the other hand, non-linear functions exhibit varying rates of change, leading to curved or non-linear graphs. Real-World Applications of Linear Functions Linear functions, a fundamental concept in mathematics, find widespread applications in various real-world scenarios. Let's explore some of the practical examples and the significance of linear functions in different fields, along with their role in problem-solving. A. Practical examples of linear functions • 1. Cost Analysis: In business and economics, linear functions are used to analyze costs and revenue. For example, the cost of production can be modeled using a linear function where the total cost is a function of the number of units produced. • 2. Distance-Time Graphs: Linear functions are used to represent distance-time graphs, where the distance traveled by an object is directly proportional to the time taken, assuming a constant • 3. Temperature Change: When studying thermodynamics or weather patterns, linear functions are used to model temperature change over time or space. B. Importance of linear functions in various fields • 1. Engineering: Linear functions are crucial in engineering for analyzing structural loads, electrical circuits, and mechanical systems. • 2. Physics: In physics, linear functions are used to describe simple harmonic motion, linear momentum, and other fundamental concepts. • 3. Finance: Linear functions play a significant role in financial analysis, such as modeling investment returns and loan amortization. C. How linear functions are used in problem solving • 1. Predictive Modeling: Linear functions are used to make predictions and forecast trends in various fields, including market analysis and population growth. • 2. Optimization: Linear programming, a method based on linear functions, is used to solve complex optimization problems in operations research and management science. • 3. Decision Making: Linear functions help in making informed decisions by providing a quantitative basis for evaluating different options and scenarios. Common Mistakes in Identifying Linear Functions Understanding mathematical functions, particularly linear functions, is essential in the field of mathematics and its applications in various industries. However, there are common misconceptions and pitfalls that can lead to errors in identifying linear functions. It is important to recognize these mistakes and learn how to avoid them in order to correctly identify linear equations. A. Misconceptions about linear functions • Equating linearity with simplicity: One common misconception is that linear functions are always simple and straightforward. While this may be true in some cases, it is not a defining characteristic of linear functions. Linear functions can exhibit complexity and variability just like any other type of function. • Ignoring the coefficient of the independent variable: Some people wrongly assume that any equation with a single independent variable is a linear function. However, the coefficient of the independent variable must be a constant to qualify as a linear function. B. Pitfalls in identifying linear equations • Confusing linear and non-linear relationships: It can be challenging to differentiate between linear and non-linear equations, especially when dealing with complex mathematical expressions. This confusion can lead to misidentifying linear functions. • Incorrectly applying the slope-intercept form: Many people mistakenly try to fit every equation into the slope-intercept form (y = mx + b) without considering the specific characteristics of linear functions. C. Tips for avoiding common mistakes in recognizing linear functions • Understand the defining characteristics of linear functions: Familiarize yourself with the key attributes of linear functions, such as having a constant rate of change and a straight-line graph. • Examine the coefficients and exponents: Pay attention to the coefficients and exponents in the equation to determine if it meets the criteria for a linear function. • Use graphing and visualization tools: Plotting the equation on a graph can provide a visual representation of whether it is a linear function or not. A. Recap of the key points about linear functions: In this blog post, we discussed the characteristics of linear functions, such as their equation form (y = mx + b) and their graph appearing as a straight line. We also looked at how to determine if a given equation represents a linear function. B. Importance of being able to identify linear functions: Understanding linear functions is crucial in various fields such as economics, physics, and engineering. It allows us to analyze and interpret data, make predictions, and solve real-world problems. C. Encouragement to continue learning about mathematical functions: As we continue to expand our knowledge of mathematical functions, we gain a deeper understanding of the world around us and develop essential problem-solving skills. I encourage you to keep exploring different types of functions and their applications. Keep learning, and happy calculating! ONLY $99 Immediate Download MAC & PC Compatible Free Email Support
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-which-equation-is-a-linear-function","timestamp":"2024-11-13T01:51:54Z","content_type":"text/html","content_length":"215866","record_id":"<urn:uuid:55f1ea14-e1d8-417c-8317-b63e2ea5fa04>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00655.warc.gz"}
Longest common ascending subsequence (LCIS) algorithm and optimization of dynamic programming & Longest common ascending subsequence (LCIS) algorithm and Optimization for dynamic programming Given two integer sequences, write a program to find their longest rising common subsequence. When the following conditions are met, we will sequence S1, S2, SN is called sequence A1, A2, Ascending subsequence of AM: 1 < = I1 < I2 << In < = M, so that for all 1 < = J < = N, there is Sj = Aij, and for all 1 < = J < N, there is SJ < SJ + 1. Each sequence is represented by two lines. The first line is the length m (1 < = m < = 500), and the second line is the M integers AI (- 231 < = AI < 231) of the sequence In the first line, the length L of the longest rising common subsequence of the two sequences is output. In the second line, the subsequence is output. If there is more than one qualified subsequence, any one can be output. Sample input / output 1 4 2 5 -12 -12 1 2 4 : this problem is to combine the LCS and LIS learned before, so it is not difficult to get the state transition equation of this problem by referring to the state transition equation of LCs before: ① a[i] != b[j], dp[i][j] = dp[i-1][j] ② a[i] == b[j], dp[i][j] = max(dp[i-1][k]+1) (1 <= k <= j-1 && b[j] > b[k]) The situation of a[i]==b[i] here is basically the same as that of LCs, except that K is added to ensure that the number of transfers between 1 and j-1 is less than b[j] (the same principle as LIS). If you don't understand it here, LIS doesn't understand it well. You can refer to other LIS blogs. a[i]!= The situation of B [i] is different from LCS, because dp[i][j] is an LCIS ending with b[j]. If dp[i][j] > 0, it means that there must be an integer a[k] in a[1]... A [i] equal to b[j], because a[k]= A [i], then a [i] has no contribution to dp[i][j], so we can still get the optimal value of dp[i][j] without considering it. So in a [i]= In the case of b[j], there must be dp[i][j] == dp[i-1][j], so this also explains why there is no case of dp[i][j]=dp[i][j-1] in LCs in LCIS. • Code implementation and optimization According to the state transition equation and analysis, it is not difficult to write the simplest O(N * M^2) algorithm int LCIS(int *a,int n,int *b,int m) int ans=0; int dp[505][505]={0}; int tmp=0; for (int i=0;i<n;i++) for(int j=0;j<m;j++) for(int k=0;k<j;k++) if(b[j]>b[k]&&tmp<=dp[i][k+1]) tmp=dp[i][k+1]; return ans; Optimization 1: when enumerating and finding k here, like LIS optimization, adding dichotomy can reduce the complexity to O(N * M * log(M), but this is not a positive solution, so I didn't post code... I'm too lazy to write Optimization 2: if the search cannot be optimized, analyze the problem: When a[i] == b[j], DP [i] [J] = max (DP [I-1] [k] + 1) (1 < = k < = J-1 & & B [J] > b[k]), we find a feature: in fact, the relationship between a [i] (B [J]) and b[k] can be determined long ago! (I is the outermost loop and j is the inner loop. When J traverses K, it is enough to judge the size relationship of b[j]b[j]). Therefore, we only need to directly maintain a tmp value between the inner loop and the outer loop. When a[i] == b[j], we can directly make dp[i][j] = tmp+1, and the time complexity is reduced to O(N * M)! int LCIS(int *a,int n,int *b,int m) int ans=0; int dp[505][505]={0}; for(int i=0;i<n;i++) int tmp=0; for(int j=0;j<m;j++) if(a[i]>b[j]&&tmp<dp[i+1][j+1]) tmp=dp[i+1][j+1]; return ans; Optimization 3: the time complexity cannot be optimized (it is said that the tree array can be optimized to O(Nlog(M))? But i won't), but the space is OK. It can be found that we only use i+1 and i in the first dimension of dp array, so we can reduce the space to O (M) by rolling array int LCIS(int *a,int n,int *b,int m) int ans=0; int dp[2][505]={0}; for(int i=0;i<n;i++) int tmp=0; for (int j=0;j<m;j++) if(a[i]>b[j]&&tmp<dp[(i + 1) % 2][j+1]) tmp=dp[(i+1)%2][j+1]; if(a[i]==b[j]) dp[(i+1)%2][j+1]=tmp+1; return ans; After learning this way, we can happily cut off the water wave experience of the above example, but we find that we need to output the path, so use the array to record the transfer precursor and output it recursively! (for those unfamiliar with recursive output, please refer to My other blog) using namespace std; int n,m,a[505],b[505],dp[505],p[505],ans; void LCIS(int x) cout<<a[x]<<" "; int main() for(int i=1;i<=n;i++) cin>>a[i]; for(int i=1;i<=m;i++) cin>>b[i]; int pos=0,tmp; for(int i=1;i<=n;i++) for(int j=1;j<=m;j++) if(a[i]>b[j]&&dp[j]>dp[tmp]) tmp=j; dp[j]=dp[tmp] + 1; for(int i=1;i<=m;i++) if(dp[i]>dp[pos]) pos=i; if(dp[pos]) LCIS(pos); return 0; Note: I wrote "code" instead of "AC code", because the LJ evaluation machine of SBOpenjudge does not have Special judge, so I wa After looking for it for a long time, I didn't know how to change it, so I just copied an AC code on the Internet... (this guy uses the vector output path) AC Code: using namespace std; struct Node int val=0; int main() int a[501],b[501]; Node dp[501]; int m,n; for(int i=1;i<=m;i++) cin>>a[i]; for(int i=1;i<=n;i++) cin>>b[i]; for (int i=1;i<=n;i++) Node Max; for (int j=1;j<=m;j++) if(b[i]>a[j]&&dp[j].val>Max.val) Max=dp[j]; if (b[i]==a[j]) Node Max=dp[1]; for (int i=2;i<=m;i++) if(dp[i].val>Max.val) Max=dp[i]; for (int i=0;i<Max.v.size();i++) cout<<Max.v[i]<<" "; return 0;
{"url":"https://programmer.ink/think/longest-common-ascending-subsequence.html","timestamp":"2024-11-05T20:20:44Z","content_type":"text/html","content_length":"13995","record_id":"<urn:uuid:50b127a8-e0f4-467c-b98e-92f906a55bff>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00878.warc.gz"}
Which nutrient in large amounts interferes with the absorpti… | Wiki Cram Which nutrient in large amounts interferes with the absorpti… D&ocy;es the gr&acy;ph represent &acy; functi&ocy;n th&acy;t has an inverse functi&ocy;n? The nurse is c&ocy;ncerned &acy;b&ocy;ut skin bre&acy;kd&ocy;wn in &acy; client with dehydration. What would the nurse calculate (in milliliters) as the balance between intake and output for an 8-hour shift? Record only the numerical answer (as a whole number) without labels. The scheduling t&ocy;&ocy;l th&acy;t c&ocy;nsists of &acy; horizont&acy;l scale divided into time units and a vertical scale depicting project work elements is called a Which nutrient in l&acy;rge &acy;m&ocy;unts interferes with the &acy;bs&ocy;rpti&ocy;n of iron? Given functi&ocy;ns f &acy;nd g, perf&ocy;rm the indic&acy;ted &ocy;per&acy;tions.f(x) = 9x - 3, g(x) = 3x + 4Find fg. Type IV c&ocy;nstructi&ocy;n is: (49) A.&ocy;ften known &acy;s “ordin&acy;ry construction.” B.required to h&acy;ve a one-hour fire resistance rating. C.commonly constructed using steel or reinforced concrete. D.construction that uses wood components with greater mass than Type III construction. A squ&acy;re pl&acy;te with sides &acy; is submerged vertically in water as sh&ocy;wn. Express the hydr&ocy;static f&ocy;rce against one face of the plate as an integral and evaluate it. (Use L&ocy;&ocy;k &acy;t the m&acy;p bel&ocy;w &acy;nd match the numbered locations to the correct name of each location. The m&ocy;nthly s&acy;les S (in hundreds &ocy;f units) &ocy;f b&acy;seb&acy;ll equipment for an Internet sporting goods site are approximated by where t is the time (in months), with corresponding to January. Determine the months when sales exceed units at any time during the month. With s&acy;cr&acy;l ili&acy;c dysfuncti&ocy;n, the right iliac crest r&ocy;tated p&ocy;sterior will result in: Skip back to main navigation
{"url":"https://wikicram.com/which-nutrient-in-large-amounts-interferes-with-the-absorption-of-iron/","timestamp":"2024-11-12T10:03:39Z","content_type":"text/html","content_length":"45981","record_id":"<urn:uuid:2c761302-5b0f-4717-9396-db0a894dcbd2>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00149.warc.gz"}
MT3510 Introduction to Mathematical Computing Floating point numbers# We’ve seen that we can represent decimals using floats, and also that these floats can sometimes have strange behaviour. It’s important to understand what is going on here. A floating point number is one that is (approximately) represented in a format similar to “scientific notation”, but where the number of significant figures and the base of the exponent is fixed. For example, we might fix the number of significant figures at \(16\) and the base as \(10\), and represent two numbers of very different \[\begin{split} \sqrt{2} \approx 1.414213562373095 \times 10^{0}, \\ e^{35} \approx 1.586013452313431 \times 10^{15}. \end{split}\] The significant digits are called the significand or mantissa, and the exponent is conveniently called the exponent. Note that the error in the second approximation will be much larger in absolute value. The term “floating point” refers to how the exponent moves the decimal point across the significant figures. Computers typically use a floating-point system to represent non-integer real numbers. The system used by Python is a little different to the representation above. It assumes that the point lies after the last significant digit, rather than after the first as above. It also uses base-2 (binary), and stores 53 significant binary digits (bits) along with 11 bits for the exponent, for a total of 64 bits (8 bytes). This system is called a double-precision float. The two numbers above would be represented as follows: \[\begin{split} \sqrt{2} \approx 6369051672525773 \times 2^{-52}, \\ e^{35} \approx 6344053809253723 \times 2^{-2}. \end{split}\] Here we have given the significands and exponents in base ten for convenience, but they would be stored in binary. Since \(2^{53} \approx 10^{16}\), we roughly get 16 significant decimal digits in a double-precision float. There are also single-precision floats, which take up 4 bytes (24 significant bits and 8 exponent bits). This translates into around 7 significant decimal digits. Precision and the machine epsilon# Since there are a fixed number of significant digits, there are often issues when adding together numbers of different magnitudes. Consider the following: import numpy as np np.exp(35) + 0.1 == np.exp(35) Since the exponent for \(e^{35}\) is large, the fixed 53 significant bits cannot show the difference between \(e^{35}\) and \(e^{35} + 0.1\). A very important example comes from considering numbers just slightly larger than 1. # 1e-14 is shorthand for 10**(-14). # Test if Python can distinguish between 1 + 1e-14 and 1 1 + 1e-14 == 1 Python cannot distinguish between \(1\) and \(1 + 10^{-16}\); they are represented by the same float. This value of \(10^{-16}\) is a good approximation for \(2^{-53}\), which is the “true” largest value \(\varepsilon\) such that Python cannot distinguish between \(1\) and \(1 + \varepsilon\). This value \(\varepsilon\) is called the machine epsilon, and represents the relative error that appears in floating point representations. It is important to remember that the machine epsilon is a relative error. The gaps between indistinguishable floats grow as the exponent increases, and shrink as it decreases - the machine epsilon is the gap when the exponent is 0. The machine epsilon is not the smallest representible number - see the section on underflow. We saw above that \(e^{35}\) and \(e^{35} + 0.1\) also could not be distinguished. We can use the machine epsilon to get a rough estimate for the largest value \(\delta\) such that \(e^{35} + \delta \) is indistinguishable from \(e^{35}\) as follows: delta = np.exp(35) * 2**-53 np.exp(35) + delta == np.exp(35) np.exp(35) + 0.5 * delta == np.exp(35) Binary representations# There is another issue that can crop up with floats: the fact that they use a binary representation means that some simple decimals cannot be easily represented. For example, the number \(0.1\) is a nice decimal fraction, but cannot be represented as a finite binary fraction. This can cause some strange effects: The issue here is that a will be the closest representable float to \(0.1\), and 3 * a is then not necessarily the closest float to the true value \(0.3\). You can find out the representation that Python is using: (3602879701896397, 36028797018963968) This means that \(0.1\) is being represented as \(\frac{3602879701896397}{2^{55}}\). Comparing floats# Given the issues above, it is often not a good idea to directly compare floats x and y using x == y. Instead, consider testing their absolute difference: abs(x - y) <= err for some fixed value of err Overflow and underflow# As well as the limitations discussed above, caused by the number of significant bits, there are limitations caused by the fixed number of bits available for the exponent. Since we have 11 bits available for the exponent, and one of those bits is used to determine whether it is positive or negative, the exponent can go up to \(2^{10} - 1\). OverflowError Traceback (most recent call last) Input In [14], in <cell line: 1>() ----> 1 2.0 ** (2 ** 10) OverflowError: (34, 'Result too large') An OverflowError occurs when the result of a calculation is too large to fit in a float. A similar issue can occur when the exponent gets too small, though here we don’t get an error. Infinity and NaN# If we directly create a float which is too large, Python will treat it like infinity. # 2.3 * (10**310) is, of course, equal to infinity The other special value is nan, standing for “not a number”, which can arise if your calculations take a strange turn like multiplying infinity by 0. # infinity times 0 is not a number 2.3e310 * 0 # infinity minus infinity is not a number 2.3e310 - 4.5e350
{"url":"https://danl21.github.io/docs/1_IntroNotebooks/14%20Floating%20Point%20Numbers.html","timestamp":"2024-11-11T03:43:05Z","content_type":"text/html","content_length":"42344","record_id":"<urn:uuid:a36df945-b2e0-44a9-8ba9-78bd3db594f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00654.warc.gz"}
Gravitational Energy in context of frequency to energy 30 Aug 2024 Journal of Theoretical Physics Volume 12, Issue 3, 2023 Gravitational Energy and Frequency: A Unified Perspective In this article, we explore the relationship between gravitational energy and frequency, providing a unified framework for understanding the conversion between these two fundamental physical quantities. We derive a general expression for gravitational energy in terms of frequency, highlighting the role of Planck’s constant and the gravitational constant. Gravitational energy is a fundamental concept in physics, describing the potential energy associated with an object’s position within a gravitational field. Frequency, on the other hand, is a measure of the number of oscillations or cycles per unit time. In recent years, there has been growing interest in exploring the connection between these two quantities. We begin by considering the Hamiltonian for a particle in a gravitational potential: H = (p^2 / 2m) + m * g * z where p is the momentum, m is the mass, g is the acceleration due to gravity, and z is the height above the reference level. Using the Legendre transformation, we can rewrite the Hamiltonian as: H = E - m * g * z where E is the total energy of the particle. Next, we introduce a frequency parameter f, which represents the number of oscillations per unit time. We can express the gravitational potential energy in terms of this frequency using the following U_g = (h * f) / (2 * π) where h is Planck’s constant. Unified Expression Combining the expressions for H and U_g, we arrive at a unified expression for gravitational energy in terms of frequency: E_g = U_g + m * g * z = (h * f) / (2 * π) + m * g * z This equation provides a direct link between gravitational energy and frequency, highlighting the role of Planck’s constant and the gravitational constant. In this article, we have derived a general expression for gravitational energy in terms of frequency, providing a unified framework for understanding the conversion between these two fundamental physical quantities. This work has implications for our understanding of gravitational phenomena and may lead to new insights into the behavior of particles in gravitational fields. [1] Einstein, A. (1915). Die Grundlage der allgemeinen Relativitätstheorie. Annalen der Physik, 355(7), 769-822. [2] Planck, M. (1900). Über eine Verbesserung der Wienschen Spectralen Formel. Deutsche Physikalische Gesellschaft, 1(3), 69-74. Note: The references provided are for illustrative purposes only and do not necessarily relate to the specific content of this article. Related articles for ‘frequency to energy’ : • Reading: Gravitational Energy in context of frequency to energy Calculators for ‘frequency to energy’
{"url":"https://blog.truegeometry.com/tutorials/education/d3cbb2f0020bb503be424b8ed3a5ebbf/JSON_TO_ARTCL_Gravitational_Energy_in_context_of_frequency_to_energy.html","timestamp":"2024-11-09T07:58:12Z","content_type":"text/html","content_length":"16643","record_id":"<urn:uuid:c3f78c6e-dda4-459e-848a-600d28428167>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00515.warc.gz"}
Strength and Mechanics of Materials Strength and Mechanics of Materials Menu Engineering Materials Strength / Mechanics of Material Basics, General Equations and Definitions • The following engineering design data, articles for Mechanics / Strength of Materials. • Should you find any errors omissions broken links, please let us know - ** Search this PAGE ONLY, click on Magnifying Glass ** • 2D Static's Load Modeler and Calculator Create 2D static's model and solve for reaction forces. • Allowable Stress Design Equations and Calculator • Polar Area Moment of Inertia Equations and Calculators • Polar Mass Moment of Inertia Equations and Calculator • Brittle Fracture Analysis Calculator and Equation • Creep • Cylindrical Polar Moment of Inertia • Shaft and Maximum Torque Moment • Shaft in Torsion Reliability and Design Formula and Calculator • Diameter of Solid Shaft Subjected to Simple Torsion Equation and Calculator • Diameter of Solid Shaft Subjected to Simple Bending Equation and Calculator • Diameter of Solid Shaft subjected to Combined Torsion and Bending Equation and Calculator • Ductility • Double Integration Method Example 4 Proof Simply Supported Beam of Length L with Partial Distributed Load • Double Integration Method Example 3 Proof Cantilevered Beam • Double Integration Method for Beam Deflections • Engineering and Applications Factor of Safety Review • Fatigue SN Curve Python Script Application • Force Applied Vector Analysis Equations and Calculator • Fracture Mechanics for Structural Adhesive Bonds Fracture mechanics methodology was developed and experimentally demonstrated for the prediction of the growth of bondline flaws in an adhesively bonded structure. • Free Body Diagram • Hardness • Heat Treatment • Hookes Law • Impact and Sudden Loading Approximate Formulas Equations • Impact Force of a Blow Formulae and Calculator: Impact force of a blow: A body that weighs W pounds and falls S feet from an initial position of rest is capable of doing WS foot-pounds of work. • Marin Factors for Corrected Endurance Limit Fatigue The endurance limit (S'[e]) determined using Eq. 2 that is established from fatigue tests on a standard test specimen must be modified for factors that will usually be different for an actual machine element. • Mass Impact Loading Equations • Mass Moment of Inertia • Material Strength • Malleability • Theoretical Mechanics, Kinematics, Dynamics and Static's Premium Membership Required to view Document/Book • Modulus of Resilience Equation and Calculator • Mohr's Circle for Plane Stress • Mohr's Circle Simplified Video • Mohr's Circle when normal stress in the X direction is negative • Poisson's Ratio Definition & Equation • Poisson's Ratio Metals Materials Chart • Thick Wall Cylinder Press or Shrink Fits Interference and Pressure Equations and Calculator If two thick-walled cylinders are assembled by either a hot/cold shrinking or a mechanical press-fit, a pressure is developed at the interface between the two cylinders. • Pressure Vessel External Pressure Calculations • Shear Stress • Shear Stress in Shafts Equations • Shear Center for Beams Equations and Calculator • Strain • Stress Analysis Manual • Stress Fundamentals • Stress Concentration Fundamentals • Thin Walled Pressure Vessel, Hoop Stress • Torsional Deflection of Shaft • Torsional Stability of Aluminum Alloy Seamless Tubing • Torsion in Thin-Walled Noncircular Shells Formulas and Calculator • Cylinder Stress and Deflection with Applied Torsion • Toughness • Von Mises Criterion ( Maximum Distortion Energy Criterion ) • Maximum Shear Stress Theory Fatigue of a Shaft or Axle Formula and Calculator • Static Loading Shaft or Axle Analysis Formula and Calculator - Most shafts are subject to combined bending and torsion, either of which may be steady or variable. Maximum Shear Stress and Von Mises Stress • Work ( Strain ) Hardening • Young's Modulus • Yield Strength • Moment of Inertia, Section Modulus, Radii of Gyration Equations Square and Rectangular Sections • Moment of Inertia, Section Modulus, Radii of Gyration Equations Triangular, Hex Sections • Moment of Inertia, Section Modulus, Radii of Gyration Equations Circular, Eccentric Shapes • Moment of Inertia, Section Modulus, Radii of Gyration Equations I Sections • Moment of Inertia, Section Modulus, Radii of Gyration Equations Channel Sections • Moment of Inertia, Section Modulus, Radii of Gyration Equations T Sections • Moment of Inertia, Section Modulus, Radii of Gyration Equations Angle Sections • Rotating Solid Cylinder Stress Equations and Calculator Solid Cylinder Rotating at ω rad/s • Hollow Cylinder Rotating Stress Equations and Calculator • Tuning fork (cylindrical prongs) Equation and Calculator • Vector Mechanics for Engineers, Statics and Dynamics Premium membership required. Mechanics can be defined as that science which describes and predicts the conditions of rest or motion of bodies under the action of forces. • Young's Modulus of Common Engineering Materials Fastener, Bolt Torque Installation Design Material Specifications and Characteristics - Ferrous and Non-Ferrous Pinned Columns and Buckling Machine Design Misc Analysis
{"url":"https://www.engineersedge.com/mechanics_material_menu.shtml","timestamp":"2024-11-07T15:20:01Z","content_type":"text/html","content_length":"56949","record_id":"<urn:uuid:c9cd4b4d-769d-453b-9aef-4bf335f55766>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00337.warc.gz"}
Multiplication Flashcards Printable Free Multiplication Flashcards Printable Free - Set of 0, 1, 2 math facts author: Web make your own custom flashcards by downloading the free trial version of flashcard learner flashcard software and start learning the multiplication flashcards now. Web 1 times table 2 times table 3 times table 4 times table 5 times table 6 times table 7 times table 8 times table 9 times table 10 times table 11 times table 12 times table. Web these free printable multiplication flash cards have just a splash of color featuring lego bricks to keep kids engaged. We have printable multiplication table answers to use to. Web print these free multiplication flashcards instantly by starting at 0 x 0 and ending at 12 x 12. Multiplication on one side and result on the other (you can keep it folded using a paper clip for example). Web free multiplication tables.our free printable multiplication table flashcards are a fantastic educational resource for children learning multiplication. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Web advertisement flash cards no advertisements for 1 full year. Multiplication Flashcards Printable Free Printable Blank World Web print these free multiplication flashcards instantly by starting at 0 x 0 and ending at 12 x 12. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Web 10 flashcards www.multiplication.com 3 x 10 30 www.multiplication.com 2 x 10 20 www.multiplication.com 1111 x x x 10110010 10 www.multiplication.com 6 x 10 60.. Printable Multiplication Flashcards 012 Use these multiplication printables to drill. Web advertisement flash cards no advertisements for 1 full year. Our math multiplication flash cards with answers on back are easy to. Web these free printable multiplication flash cards have just a splash of color featuring lego bricks to keep kids engaged. Web make your own custom flashcards by downloading the free trial version. 4 S Multiplication Flash Cards Printable Cards Info Web these free printable multiplication flash cards have just a splash of color featuring lego bricks to keep kids engaged. Our math multiplication flash cards with answers on back are easy to. Don’t forget to have fun with students by playing multiplication games such as 5 in a. Set of 0, 1, 2 math facts author: Web this set includes. printable multiplication flash cards 6 Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Don’t forget to have fun with students by playing multiplication games such as 5 in a. Web this set includes flashcards for tables 1 to 12, helping students master their multiplication facts in a fun and interactive way. Our math multiplication flash cards with. Printable Multiplication Flash Cards 112 Printable Multiplication Web free multiplication tables.our free printable multiplication table flashcards are a fantastic educational resource for children learning multiplication. We have printable multiplication table answers to use to. Web 1 times table 2 times table 3 times table 4 times table 5 times table 6 times table 7 times table 8 times table 9 times table 10 times table 11 times. Free Printable Multiplication Flashcards Printable World Holiday Web 10 flashcards www.multiplication.com 3 x 10 30 www.multiplication.com 2 x 10 20 www.multiplication.com 1111 x x x 10110010 10 www.multiplication.com 6 x 10 60. Web this set includes flashcards for tables 1 to 12, helping students master their multiplication facts in a fun and interactive way. Set of 0, 1, 2 math facts author: Multiplication on one side and. Printable Multiplication Flash Cards 012 Printable Multiplication Web make your own custom flashcards by downloading the free trial version of flashcard learner flashcard software and start learning the multiplication flashcards now. Web free multiplication tables.our free printable multiplication table flashcards are a fantastic educational resource for children learning multiplication. We have printable multiplication table answers to use to. Don’t forget to have fun with students by playing. Multiplication, 6 To 12 Times Table, Flash Cards, Math Web make your own custom flashcards by downloading the free trial version of flashcard learner flashcard software and start learning the multiplication flashcards now. We have printable multiplication table answers to use to. Web print these free multiplication flashcards instantly by starting at 0 x 0 and ending at 12 x 12. Web print these free multiplication flash cards to. Free Printable Math Flash Cards Multiplication Printable Templates Web these free printable multiplication flash cards have just a splash of color featuring lego bricks to keep kids engaged. Web 10 flashcards www.multiplication.com 3 x 10 30 www.multiplication.com 2 x 10 20 www.multiplication.com 1111 x x x 10110010 10 www.multiplication.com 6 x 10 60. Multiplication on one side and result on the other (you can keep it folded using. Multiplication Flash Cards Printable Pdf qcardg Set of 0, 1, 2 math facts author: Our math multiplication flash cards with answers on back are easy to. Web 10 flashcards www.multiplication.com 3 x 10 30 www.multiplication.com 2 x 10 20 www.multiplication.com 1111 x x x 10110010 10 www.multiplication.com 6 x 10 60. Web free multiplication tables.our free printable multiplication table flashcards are a fantastic educational resource Web make your own custom flashcards by downloading the free trial version of flashcard learner flashcard software and start learning the multiplication flashcards now. Web free multiplication tables.our free printable multiplication table flashcards are a fantastic educational resource for children learning multiplication. Multiplication on one side and result on the other (you can keep it folded using a paper clip for example). Web print these free multiplication flashcards instantly by starting at 0 x 0 and ending at 12 x 12. We have printable multiplication table answers to use to. Web this set includes flashcards for tables 1 to 12, helping students master their multiplication facts in a fun and interactive way. Web these free printable multiplication flash cards have just a splash of color featuring lego bricks to keep kids engaged. Use these multiplication printables to drill. Don’t forget to have fun with students by playing multiplication games such as 5 in a. Web advertisement flash cards no advertisements for 1 full year. Set of 0, 1, 2 math facts author: Our math multiplication flash cards with answers on back are easy to. Web 1 times table 2 times table 3 times table 4 times table 5 times table 6 times table 7 times table 8 times table 9 times table 10 times table 11 times table 12 times table. Web 10 flashcards www.multiplication.com 3 x 10 30 www.multiplication.com 2 x 10 20 www.multiplication.com 1111 x x x 10110010 10 www.multiplication.com 6 x 10 60. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Don’t Forget To Have Fun With Students By Playing Multiplication Games Such As 5 In A. Set of 0, 1, 2 math facts author: Web print these free multiplication flashcards instantly by starting at 0 x 0 and ending at 12 x 12. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Our math multiplication flash cards with answers on back are easy to. Web This Set Includes Flashcards For Tables 1 To 12, Helping Students Master Their Multiplication Facts In A Fun And Interactive Way. Web these free printable multiplication flash cards have just a splash of color featuring lego bricks to keep kids engaged. Web 1 times table 2 times table 3 times table 4 times table 5 times table 6 times table 7 times table 8 times table 9 times table 10 times table 11 times table 12 times table. Multiplication on one side and result on the other (you can keep it folded using a paper clip for example). Web make your own custom flashcards by downloading the free trial version of flashcard learner flashcard software and start learning the multiplication flashcards now. Web Advertisement Flash Cards No Advertisements For 1 Full Year. Use these multiplication printables to drill. Web 10 flashcards www.multiplication.com 3 x 10 30 www.multiplication.com 2 x 10 20 www.multiplication.com 1111 x x x 10110010 10 www.multiplication.com 6 x 10 60. We have printable multiplication table answers to use to. Web free multiplication tables.our free printable multiplication table flashcards are a fantastic educational resource for children learning multiplication. Related Post:
{"url":"https://dl-uk.apowersoft.com/en/multiplication-flashcards-printable-free.html","timestamp":"2024-11-11T16:33:37Z","content_type":"text/html","content_length":"32253","record_id":"<urn:uuid:2ef0bf2d-3de4-4b6a-9bfc-8973ac71457c>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00557.warc.gz"}
Calculus for Machine Learning (7-day mini-course) Author: Adrian Tam Calculus for Machine Learning Crash Course. Get familiar with the calculus techniques in machine learning in 7 days. Calculus is an important mathematics technique behind many machine learning algorithms. You don’t always need to know it to use the algorithms. When you go deeper, you will see it is ubiquitous in every discussion on the theory behind a machine learning model. As a practitioner, we are most likely not going to encounter very hard calculus problems. If we need to do one, there are tools such as computer algebra systems to help, or at least, verify our solution. However, what is more important is understanding the idea behind calculus and relating the calculus terms to its use in our machine learning algorithms. In this crash course, you will discover some common calculus ideas used in machine learning. You will learn with exercises in Python in seven days. This is a big and important post. You might want to bookmark it. Let’s get started. Who Is This Crash-Course For? Before we get started, let’s make sure you are in the right place. This course is for developers who may know some applied machine learning. Maybe you know how to work through a predictive modeling problem end to end, or at least most of the main steps, with popular The lessons in this course do assume a few things about you, such as: • You know your way around basic Python for programming. • You may know some basic linear algebra. • You may know some basic machine learning models. You do NOT need to be: • A math wiz! • A machine learning expert! This crash course will take you from a developer who knows a little machine learning to a developer who can effectively talk about the calculus concepts in machine learning algorithms. Note: This crash course assumes you have a working Python 3.7 environment with some libraries such as SciPy and SymPy installed. If you need help with your environment, you can follow the step-by-step tutorial here: Crash-Course Overview This crash course is broken down into seven lessons. You could complete one lesson per day (recommended) or complete all of the lessons in one day (hardcore). It really depends on the time you have available and your level of enthusiasm. Below is a list of the seven lessons that will get you started and productive with data preparation in Python: • Lesson 01: Differential calculus • Lesson 02: Integration • Lesson 03: Gradient of a vector function • Lesson 04: Jacobian • Lesson 05: Backpropagation • Lesson 06: Optimization • Lesson 07: Support vector machine Each lesson could take you 5 minutes or up to 1 hour. Take your time and complete the lessons at your own pace. Ask questions, and even post results in the comments below. The lessons might expect you to go off and find out how to do things. I will give you hints, but part of the point of each lesson is to force you to learn where to go to look for help with and about the algorithms and the best-of-breed tools in Python. (Hint: I have all of the answers on this blog; use the search box.) Post your results in the comments; I’ll cheer you on! Hang in there; don’t give up. Lesson 01: Differential Calculus In this lesson, you will discover what is differential calculus or differentiation. Differentiation is the operation of transforming one mathematical function to another, called the derivative. The derivative tells the slope, or the rate of change, of the original function. For example, if we have a function $f(x)=x^2$, its derivative is a function that tells us the rate of change of this function at $x$. The rate of change is defined as: $$f'(x) = frac{f(x+delta x)-f (x)}{delta x}$$ for a small quantity $delta x$. Usually we will define the above in the form of a limit, i.e., $$f'(x) = lim_{delta xto 0} frac{f(x+delta x)-f(x)}{delta x}$$ to mean $delta x$ should be as close to zero as possible. There are several rules of differentiation to help us find the derivative easier. One rule that fits the above example is $frac{d}{dx} x^n = nx^{n-1}$. Hence for $f(x)=x^2$, we have the derivative We can confirm this is the case by plotting the function $f'(x)$ computed according to the rate of change together with that computed according to the rule of differentiation. The following uses NumPy and matplotlib in Python: import numpy as np import matplotlib.pyplot as plt # Define function f(x) def f(x): return x**2 # compute f(x) = x^2 for x=-10 to x=10 x = np.linspace(-10,10,500) y = f(x) # Plot f(x) on left half of the figure fig = plt.figure(figsize=(12,5)) ax = fig.add_subplot(121) ax.plot(x, y) # f'(x) using the rate of change delta_x = 0.0001 y1 = (f(x+delta_x) - f(x))/delta_x # f'(x) using the rule y2 = 2 * x # Plot f'(x) on right half of the figure ax = fig.add_subplot(122) ax.plot(x, y1, c="r", alpha=0.5, label="rate") ax.plot(x, y2, c="b", alpha=0.5, label="rule") In the plot above, we can see the derivative function found using the rate of change and then using the rule of differentiation coincide perfectly. Your Task We can similarly do a differentiation of other functions. For example, $f(x)=x^3 – 2x^2 + 1$. Find the derivative of this function using the rules of differentiation and compare your result with the result found using the rate of limits. Verify your result with the plot above. If you’re doing it correctly, you should see the following graph: In the next lesson, you will discover that integration is the reverse of differentiation. Lesson 02: Integration In this lesson, you will discover integration is the reverse of differentiation. If we consider a function $f(x)=2x$ and at intervals of $delta x$ each step (e.g., $delta x = 0.1$), we can compute, say, from $x=-10$ to $x=10$ as: f(-10), f(-9.9), f(-9.8), cdots, f(9.8), f(9.9), f(10) Obviously, if we have a smaller step $delta x$, there are more terms in the above. If we multiply each of the above with the step size and then add them up, i.e., f(-10)times 0.1 + f(-9.9)times 0.1 + cdots + f(9.8)times 0.1 + f(9.9)times 0.1 this sum is called the integral of $f(x)$. In essence, this sum is the area under the curve of $f(x)$, from $x=-10$ to $x=10$. A theorem in calculus says if we put the area under the curve as a function, its derivative is $f(x)$. Hence we can see the integration as a reverse operation of differentiation. As we saw in Lesson 01, the differentiation of $f(x)=x^2$ is $f'(x)=2x$. This means for $f(x)=2x$, we can write $int f(x) dx = x^2$ or we can say the antiderivative of $f(x)=x$ is $x^2$. We can confirm this in Python by calculating the area directly: import numpy as np import matplotlib.pyplot as plt def f(x): return 2*x # Set up x from -10 to 10 with small steps delta_x = 0.1 x = np.arange(-10, 10, delta_x) # Find f(x) * delta_x fx = f(x) * delta_x # Compute the running sum y = fx.cumsum() # Plot plt.plot(x, y) This plot has the same shape as $f(x)$ in Lesson 01. Indeed, all functions differ by a constant (e.g., $f(x)$ and $f(x)+5$) that have the same derivative. Hence the plot of the antiderivative computed will be the original shifted vertically. Your Task Consider $f(x)=3x^2-4x$, find the antiderivative of this function and plot it. Also, try to replace the Python code above with this function. If you plot both together, you should see the following: Post your answer in the comments below. I would love to see what you come up with. These two lessons are about functions with one variable. In the next lesson, you will discover how to apply differentiation to functions with multiple variables. Lesson 03: Gradient of a vector function In this lesson, you will learn the concept of gradient of a multivariate function. If we have a function of not one variable but two or more, the differentiation is extended naturally to be the differentiation of the function with respect to each variable. For example, if we have the function $f(x,y) = x^2 + y^3$, we can write the differentiation in each variable as: frac{partial f}{partial x} &= 2x \ frac{partial f}{partial y} &= 3y^2 Here we introduced the notation of a partial derivative, meaning to differentiate a function on one variable while assuming the other variables are constants. Hence in the above, when we compute $frac{partial f}{partial x}$, we ignored the $y^3$ part in the function $f(x,y)$. A function with two variables can be visualized as a surface on a plane. The above function $f(x,y)$ can be visualized using matplotlib: import numpy as np import matplotlib.pyplot as plt # Define the range for x and y x = np.linspace(-10,10,1000) xv, yv = np.meshgrid(x, x, indexing='ij') # Compute f(x,y) = x^2 + y^3 zv = xv**2 + yv**3 # Plot the surface fig = plt.figure(figsize=(6,6)) ax = fig.add_subplot(projection='3d') ax.plot_surface(xv, yv, zv, cmap="viridis") The gradient of this function is denoted as: $$nabla f(x,y) = Big(frac{partial f}{partial x},; frac{partial f}{partial y}Big) = (2x,;3y^2)$$ Therefore, at each coordinate $(x,y)$, the gradient $nabla f(x,y)$ is a vector. This vector tells us two things: • The direction of the vector points to where the function $f(x,y)$ is increasing the fastest • The size of the vector is the rate of change of the function $f(x,y)$ in this direction One way to visualize the gradient is to consider it as a vector field: import numpy as np import matplotlib.pyplot as plt # Define the range for x and y x = np.linspace(-10,10,20) xv, yv = np.meshgrid(x, x, indexing='ij') # Compute the gradient of f(x,y) fx = 2*xv fy = 2*yv # Convert the vector (fx,fy) into size and direction size = np.sqrt(fx**2 + fy**2) dir_x = fx/size dir_y = fy/size # Plot the surface plt.quiver(xv, yv, dir_x, dir_y, size, cmap="viridis") The viridis color map in matplotlib will show a larger value in yellow and a lower value in purple. Hence we see the gradient is “steeper” at the edges than in the center in the above plot. If we consider the coordinate (2,3), we can check which direction $f(x,y)$ will increase the fastest using the following: import numpy as np def f(x, y): return x**2 + y**3 # 0 to 360 degrees at 0.1-degree steps angles = np.arange(0, 360, 0.1) # coordinate to check x, y = 2, 3 # step size for differentiation step = 0.0001 # To keep the size and direction of maximum rate of change maxdf, maxangle = -np.inf, 0 for angle in angles: # convert degree to radian rad = angle * np.pi / 180 # delta x and delta y for a fixed step size dx, dy = np.sin(rad)*step, np.cos(rad)*step # rate of change at a small step df = (f(x+dx, y+dy) - f(x,y))/step # keep the maximum rate of change if df > maxdf: maxdf, maxangle = df, angle # Report the result dx, dy = np.sin(maxangle*np.pi/180), np.cos(maxangle*np.pi/180) gradx, grady = dx*maxdf, dy*maxdf print(f"Max rate of change at {maxangle} degrees") print(f"Gradient vector at ({x},{y}) is ({dx*maxdf},{dy*maxdf})") Its output is: Max rate of change at 8.4 degrees Gradient vector at (2,3) is (3.987419245872443,27.002750276227097) The gradient vector according to the formula is (4,27), which the numerical result above is close enough. Your Task Consider the function $f(x,y)=x^2+y^2$, what is the gradient vector at (1,1)? If you get the answer from partial differentiation, can you modify the above Python code to confirm it by checking the rate of change at different directions? Post your answer in the comments below. I would love to see what you come up with. In the next lesson, you will discover the differentiation of a function that takes vector input and produces vector output. Lesson 04: Jacobian In this lesson, you will learn about Jacobian matrix. The function $f(x,y)=(p(x,y), q(x,y))=(2xy, x^2y)$ is one with two input and two outputs. Sometimes we call this function taking vector arguments and returning a vector value. The differentiation of this function is a matrix called the Jacobian. The Jacobian of the above function is: mathbf{J} = frac{partial p}{partial x} & frac{partial p}{partial y} \ frac{partial q}{partial x} & frac{partial q}{partial y} 2y & 2x \ 2xy & x^2 In the Jacobian matrix, each row has the partial differentiation of each element of the output vector, and each column has the partial differentiation with respect to each element of the input We will see the use of Jacobian later. Since finding a Jacobian matrix involves a lot of partial differentiations, it would be great if we could let a computer check our math. In Python, we can verify the above result using SymPy: from sympy.abc import x, y from sympy import Matrix, pprint f = Matrix([2*x*y, x**2*y]) variables = Matrix([x,y]) Its output is: ⎡ 2⋅y 2⋅x⎤ ⎢ ⎥ ⎢ 2 ⎥ ⎣2⋅x⋅y x ⎦ We asked SymPy to define the symbols x and y and then defined the vector function f. Afterward, the Jacobian can be found by calling the jacobian() function. Your Task Consider the function f(x,y) = begin{bmatrix} frac{1}{1+e^{-(px+qy)}} & frac{1}{1+e^{-(rx+sy)}} & frac{1}{1+e^{-(tx+uy)}} where $p,q,r,s,t,u$ are constants. What is the Jacobian matrix of $f(x,y)$? Can you verify it with SymPy? In the next lesson, you will discover the application of the Jacobian matrix in a neural network’s backpropagation algorithm. Lesson 05: Backpropagation In this lesson, you will see how the backpropagation algorithm uses the Jacobian matrix. If we consider a neural network with one hidden layer, we can represent it as a function: y = gBig(sum_{k=1}^M u_k f_kbig(sum_{i=1}^N w_{ik}x_ibig)Big) The input to the neural network is a vector $mathbf{x}=(x_1, x_2, cdots, x_N)$ and each $x_i$ will be multiplied with weight $w_{ik}$ and fed into the hidden layer. The output of neuron $k$ in the hidden layer will be multiplied with weight $u_k$ and fed into the output layer. The activation function of the hidden layer and output layer are $f$ and $g$, respectively. If we consider $$z_k = f_kbig(sum_{i=1}^N w_{ik}x_ibig)$$ frac{partial y}{partial x_i} = sum_{k=1}^M frac{partial y}{partial z_k}frac{partial z_k}{partial x_i} If we consider the entire layer at once, we have $mathbf{z}=(z_1, z_2, cdots, z_M)$ and then frac{partial y}{partial mathbf{x}} = mathbf{W}^topfrac{partial y}{partial mathbf{z}} where $mathbf{W}$ is the $Mtimes N$ Jacobian matrix, where the element on row $k$ and column $i$ is $frac{partial z_k}{partial x_i}$. This is how the backpropagation algorithm works in training a neural network! For a network with multiple hidden layers, we need to compute the Jacobian matrix for each layer. Your Task The code below implements a neural network model that you can try yourself. It has two hidden layers and a classification network to separate points in 2-dimension into two classes. Try to look at the function backward() and identify which is the Jacobian matrix. If you play with this code, the class mlp should not be modified, but you can change the parameters on how a model is created. from sklearn.datasets import make_circles from sklearn.metrics import accuracy_score import numpy as np # Find a small float to avoid division by zero epsilon = np.finfo(float).eps # Sigmoid function and its differentiation def sigmoid(z): return 1/(1+np.exp(-z.clip(-500, 500))) def dsigmoid(z): s = sigmoid(z) return 2 * s * (1-s) # ReLU function and its differentiation def relu(z): return np.maximum(0, z) def drelu(z): return (z > 0).astype(float) # Loss function L(y, yhat) and its differentiation def cross_entropy(y, yhat): """Binary cross entropy function L = - y log yhat - (1-y) log (1-yhat) y, yhat (np.array): nx1 matrices which n are the number of data instances average cross entropy value of shape 1x1, averaging over the n instances return ( -(y.T @ np.log(yhat.clip(epsilon)) + (1-y.T) @ np.log((1-yhat).clip(epsilon)) ) / y.shape[1] ) def d_cross_entropy(y, yhat): """ dL/dyhat """ return ( - np.divide(y, yhat.clip(epsilon)) + np.divide(1-y, (1-yhat).clip(epsilon)) ) class mlp: '''Multilayer perceptron using numpy def __init__(self, layersizes, activations, derivatives, lossderiv): """remember config, then initialize array to hold NN parameters without init""" # hold NN config self.layersizes = tuple(layersizes) self.activations = tuple(activations) self.derivatives = tuple(derivatives) self.lossderiv = lossderiv # parameters, each is a 2D numpy array L = len(self.layersizes) self.z = [None] * L self.W = [None] * L self.b = [None] * L self.a = [None] * L self.dz = [None] * L self.dW = [None] * L self.db = [None] * L self.da = [None] * L def initialize(self, seed=42): """initialize the value of weight matrices and bias vectors with small random numbers.""" sigma = 0.1 for l, (n_in, n_out) in enumerate(zip(self.layersizes, self.layersizes[1:]), 1): self.W[l] = np.random.randn(n_in, n_out) * sigma self.b[l] = np.random.randn(1, n_out) * sigma def forward(self, x): """Feed forward using existing `W` and `b`, and overwrite the result variables `a` and `z` x (numpy.ndarray): Input data to feed forward self.a[0] = x for l, func in enumerate(self.activations, 1): # z = W a + b, with `a` as output from previous layer # `W` is of size rxs and `a` the size sxn with n the number of data # instances, `z` the size rxn, `b` is rx1 and broadcast to each # column of `z` self.z[l] = (self.a[l-1] @ self.W[l]) + self.b[l] # a = g(z), with `a` as output of this layer, of size rxn self.a[l] = func(self.z[l]) return self.a[-1] def backward(self, y, yhat): """back propagation using NN output yhat and the reference output y, generates dW, dz, db, da # first `da`, at the output self.da[-1] = self.lossderiv(y, yhat) for l, func in reversed(list(enumerate(self.derivatives, 1))): # compute the differentials at this layer self.dz[l] = self.da[l] * func(self.z[l]) self.dW[l] = self.a[l-1].T @ self.dz[l] self.db[l] = np.mean(self.dz[l], axis=0, keepdims=True) self.da[l-1] = self.dz[l] @ self.W[l].T def update(self, eta): """Updates W and b eta (float): Learning rate for l in range(1, len(self.W)): self.W[l] -= eta * self.dW[l] self.b[l] -= eta * self.db[l] # Make data: Two circles on x-y plane as a classification problem X, y = make_circles(n_samples=1000, factor=0.5, noise=0.1) y = y.reshape(-1,1) # our model expects a 2D array of (n_sample, n_dim) # Build a model model = mlp(layersizes=[2, 4, 3, 1], activations=[relu, relu, sigmoid], derivatives=[drelu, drelu, dsigmoid], yhat = model.forward(X) loss = cross_entropy(y, yhat) score = accuracy_score(y, (yhat > 0.5)) print(f"Before training - loss value {loss} accuracy {score}") # train for each epoch n_epochs = 150 learning_rate = 0.005 for n in range(n_epochs): yhat = model.a[-1] model.backward(y, yhat) loss = cross_entropy(y, yhat) score = accuracy_score(y, (yhat > 0.5)) print(f"Iteration {n} - loss value {loss} accuracy {score}") In the next lesson, you will discover the use of differentiation to find the optimal value of a function. Lesson 06: Optimization In this lesson, you will learn an important use of differentiation. Because the differentiation of a function is the rate of change, we can make use of differentiation to find the optimal point of a function. If a function attained its maximum, we would expect it to move from a lower point to the maximum, and if we move further, it falls to another lower point. Hence at the point of maximum, the rate of change of a function is zero. And vice versa for the minimum. As an example, consider $f(x)=x^3-2x^2+1$. The derivative is $f'(x) = 3x^2-4x$ and $f'(x)=0$ at $x=0$ and $x=4/3$. Hence these positions of $x$ are where $f(x)$ is at its maximum or minimum. We can visually confirm it by plotting $f(x)$ (see the plot in Lesson 01). Your task Consider the function $f(x)=log x$ and find its derivative. What will be the value of $x$ when $f'(x)=0$? What does it tell you about the maximum or minimum of the log function? Try to plot the function of $log x$ to visually confirm your answer. In the next lesson, you will discover the application of this technique in finding the support vector. Lesson 07: Support vector machine In this lesson, you will learn how we can convert support vector machine into an optimization problem. In a two-dimensional plane, any straight line can be represented by the equation: in the $xy$-coordinate system. A result from the study of coordinate geometry says that for any point $(x_0,y_0)$, its distance to the line $ax+by+c=0$ is: frac{vert ax_0+by_0+c vert}{sqrt{a^2+b^2}} Consider the points (0,0), (1,2), and (2,1) in the $xy$-plane, in which the first point and the latter two points are in different classes. What is the line that best separates these two classes? This is the basis of a support vector machine classifier. The support vector is the line of maximum separation in this case. To find such a line, we are looking for: text{minimize} && a^2 + b^2 \ text{subject to} && -1(0a+0b+c) &ge 1 \ && +1(1a+2b+c) &ge 1 \ && +1(2a+1b+c) &ge 1 The objective $a^2+b^2$ is to be minimized so that the distances from each data point to the line are maximized. The condition $-1(0a+0b+c)ge 1$ means the point (0,0) is of class $-1$; similarly for the other two points, they are of class $+1$. The straight line should put these two classes in different sides of the plane. This is a constrained optimization problem, and the way to solve it is to use the Lagrange multiplier approach. The first step in using the Lagrange multiplier approach is to find the partial differentials of the following Lagrange function: L = a^2+b^2 + lambda_1(-c-1) + lambda_2 (a+2b+c-1) + lambda_3 (2a+b+c-1) and set the partial differentials to zero, then solve for $a$, $b$, and $c$. It would be too lengthy to demonstrate here, but we can use SciPy to find the solution to the above numerically: import numpy as np from scipy.optimize import minimize def objective(w): return w[0]**2 + w[1]**2 def constraint1(w): "Inequality for point (0,0)" return -1*w[2] - 1 def constraint2(w): "Inequality for point (1,2)" return w[0] + 2*w[1] + w[2] - 1 def constraint3(w): "Inequality for point (2,1)" return 2*w[0] + w[1] + w[2] - 1 # initial guess w0 = np.array([1, 1, 1]) # optimize bounds = ((-10,10), (-10,10), (-10,10)) constraints = [ {"type":"ineq", "fun":constraint1}, {"type":"ineq", "fun":constraint2}, {"type":"ineq", "fun":constraint3}, solution = minimize(objective, w0, method="SLSQP", bounds=bounds, constraints=constraints) w = solution.x print("Objective:", objective(w)) print("Solution:", w) It will print: Objective: 0.8888888888888942 Solution: [ 0.66666667 0.66666667 -1. ] The above means the line to separate these three points is $0.67x + 0.67y – 1 = 0$. Note that if you provided $N$ data points, there would be $N$ constraints to be defined. Your Task Let’s consider the points (-1,-1) and (-3,-1) to be the first class together with (0,0) and point (3,3) to be the second class together with points (1,2) and (2,1). In this problem of six points, can you modify the above program and find the line that separates the two classes? Don’t be surprised to see the solution remain the same as above. There is a reason for it. Can you tell? Post your answer in the comments below. I would love to see what you come up with. This was the final lesson. The End! (Look How Far You Have Come) You made it. Well done! Take a moment and look back at how far you have come. You discovered: • What is differentiation, and what it means to a function • What is integration • How to extend differentiation to a function of vector argument • How to do differentiation on a vector-valued function • The role of Jacobian in the backpropagation algorithm in neural networks • How to use differentiation to find the optimum points of a function • Support vector machine is a constrained optimization problem, which would need differentiation to solve How did you do with the mini-course? Did you enjoy this crash course? Do you have any questions? Were there any sticking points? Let me know. Leave a comment below. The post Calculus for Machine Learning (7-day mini-course) appeared first on Machine Learning Mastery.
{"url":"https://www.aiproblog.com/index.php/2022/03/15/calculus-for-machine-learning-7-day-mini-course/","timestamp":"2024-11-09T13:02:29Z","content_type":"text/html","content_length":"79257","record_id":"<urn:uuid:ad206fb9-2d51-4d65-bd22-4b82798c5b47>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00484.warc.gz"}
In a previous Stat-Ease blog, my colleague Shari Kraber provided insights into Improving Your Predictive Model via a Response Transformation. She highlighted the most commonly used transformation: the log. As a follow up to this article, let’s delve into another transformation: the square root, which deals nicely with count data such as imperfections. Counts follow the Poisson distribution, where the standard deviation is a function of the mean. This is not normal, which can invalidate ordinary-least-square (OLS) regression analysis. An alternative modeling tool, called Poisson regression (PR) provides a more precise way to deal with count data. However, to keep it simple statistically (KISS), I prefer the better-known methods of OLS with application of the square root transformation as a work-around. When Stat-Ease software first introduced PR, I gave it a go via a design of experiment (DOE) on making microwave popcorn. In prior DOEs on this tasty treat I worked at reducing the weight of off-putting unpopped kernels (UPKs). However, I became a victim of my own success by reducing UPKs to a point where my kitchen scale could not provide adequate precision. With the tools of PR in hand, I shifted my focus to a count of the UPKs to test out a new cell-phone app called Popcorn Expert. It listens to the “pops” and via the “latest machine learning achievements” signals users to turn off their microwave at the ideal moment that maximizes yield before they burn their snack. I set up a DOE to compare this app against two optional popcorn settings on my General Electric Spacemaker™ microwave: standard (“GE”) and extended (“GE++”). As an additional factor, I looked at preheating the microwave with a glass of water for 1 minute—widely publicized on the internet to be the secret to success. Table 1 lays out my results from a replicated full factorial of the six combinations done in random order (shown in parentheses). Due to a few mistakes following the software’s plan (oops!), I added a few more runs along the way, increasing the number from 12 to 14. All of the popcorn produced tasted great, but as you can see, the yield varied severalfold. Table 1: Data with run numbers in A: B: UPKs Preheat Timing Rep 1 Rep 2 Rep 3 No GE 41 (2) 92 (4) No GE++ 23 (6) 32 (12) 34 (13) No App 28 (1) 50 (8) 43 (11) Yes GE 70 (5) 62 (14) Yes GE++ 35 (7) 51 (10) Yes App 50 (3) 40 (9) I then analyzed the results via OLS with and without a square root transformation, and then advanced to the more sophisticated Poisson regression. In this case, PR prevailed: It revealed an interaction, displayed in Figure 1, that did not emerge from the OLS models. Figure 1: Interaction of the two factors—preheat and timing method Going to the extended popcorn timing (GE++) on my Spacemaker makes time-wasting preheating unnecessary—actually producing a significant reduction in UPKs. Good to know! By the way, the app worked very well, but my results showed that I do not need my cell phone to maximize the yield of tasty popcorn. To succeed in experiments on counts, they must be: • discrete whole numbers with no upper bound • kept with within over a fixed area of opportunity • not be zero very often—avoid this by setting your area of opportunity (sample size) large enough to gather 20 counts or more per run on average. For more details on the various approaches I’ve outlined above, view my presentation on Making the Most from Measuring Counts at the Stat-Ease YouTube Channel. A central composite design (CCD) is a type of response surface design that will give you very good predictions in the middle of the design space. Many people ask how many center points (CPs) they need to put into a CCD. The number of CPs chosen (typically 5 or 6) influences how the design functions. Two things need to be considered when choosing the number of CPs in a central composite design: 1) Replicated center points are used to estimate pure error for the lack of fit test. Lack of fit indicates how well the model you have chosen fits the data. With fewer than five or six replicates, the lack of fit test has very low power. You can compare the critical F-values (with a 5% risk level) for a three-factor CCD with 6 center points, versus a design with 3 center points. The 6 center point design will require a critical F-value for lack of fit of 5.05, while the 3 center point design uses a critical F-value of 19.30. This means that the design with only 3 center points is less likely to show a significant lack of fit, even if it is there, making the test almost meaningless. TIP: True “replicates” are runs that are performed at random intervals during the experiment. It is very important that they capture the true normal process variation! Do not run all the center points grouped together as then most likely their variation will underestimate the real process variation. 2) The default number of center points provides near uniform precision designs. This means that the prediction error inside a sphere that has a radius equal to the ±1 levels is nearly uniform. Thus, your predictions in this region (±1) are equally good. Too few center points inflate the error in the region you are most interested in. This effect (a “bump” in the middle of the graph) can be seen by viewing the standard error plot, as shown in Figures 1 & 2 below. (To see this graph, click on Design Evaluation, Graph and then View, 3D Surface after setting up a design.) Figure 1 (left): CCD with the 6 center points (5-6 recommended). Figure 2 (right): CCD with only 3 center points. Notice the jump in standard error at the center of figure 2. Ask yourself this—where do you want the best predictions? Most likely at the middle of the design space. Reducing the number of center points away from the default will substantially damage the prediction capability here! Although it can seem tedious to run all of these replicates, the number of center points does ensure that the analysis of the design can be done well, and that the design is statistically sound. I am often asked if the results from one-factor-at-a-time (OFAT) studies can be used as a basis for a designed experiment. They can! This augmentation starts by picturing how the current data is laid out, and then adding runs to fill out either a factorial or response surface design space. One way of testing multiple factors is to choose a starting point and then change the factor level in the direction of interest (Figure 1 – green dots). This is often done one variable at a time “to keep things simple”. This data can confirm an improvement in the response when any of the factors are changed individually. However, it does not tell you if making changes to multiple factors at the same time will improve the response due to synergistic interactions. With today’s complex processes, the one-factor-at-a-time experiment is likely to provide insufficient information. The experimenter can augment the existing data by extending a factorial box/cube from the OFAT runs and completing the design by running the corner combinations of the factor levels (Figure 2 – blue dots). When analyzing this data together, the interactions become clear, and the design space is more fully explored. Figure 2: Fill out to factorial region In other cases, OFAT studies may be done by taking a standard process condition as a starting point and then testing factors at new levels both lower and higher than the standard condition (see Figure 3). This data can estimate linear and nonlinear effects of changing each factor individually. Again, it cannot estimate any interactions between the factors. This means that if the process optimum is anywhere other than exactly on the lines, it cannot be predicted. Data that more fully covers the design space is required. A face-centered central composite design (CCD)—a response surface method (RSM)—has factorial (corner) points that define the region of interest (see Figure 4 – added blue dots). These points are used to estimate the linear and the interaction effects for the factors. The center point and mid points of the edges are used to estimate nonlinear (squared) terms. Figure 4: Face-Centered CCD If an experimenter has completed the OFAT portion of the design, they can augment the existing data by adding the corner points and then analyzing as a full response surface design. This set of data can now estimate up to the full quadratic polynomial. There will likely be extra points from the original OFAT runs, which although not needed for model estimation, do help reduce the standard error of the predictions. Running a statistically designed experiment from the start will reduce the overall experimental resources. But it is good to recognize that existing data can be augmented to gain valuable insights! Learn more about design augmentation at the January webinar: The Art of Augmentation – Adding Runs to Existing Designs. One challenge of running experiments is controlling the variation from process, sampling and measurement. Blocking is a statistical tool used to remove the variation coming from uncontrolled variables that are not part of the experiment. When the noise is reduced, the primary factor effects are estimated more easily, which allows the system to be modeled more precisely. For example, an experiment may contain too many runs to be completed in just one day. However, the process may not operate identically from one day to the next, causing an unknown amount of variation to be added to the experimental data. By blocking on the days, the day-to-day variation is removed from the data before the factor effects are calculated. Other typical blocking variables are raw material batches (lots), multiple “identical” machines or test equipment, people doing the testing, etc. In each case the blocking variable is simply a resource required to run the experiment-- not a factor of interest. Blocking is the process of statistically splitting the runs into smaller groups. The researcher might assume that arranging runs into groups randomly is ideal - we all learn that random order is best! However, this is not true when the goal is to statistically assess the variation between groups of runs, and then calculate clean factor effects. Design-Expert® software splits the runs into groups using statistical properties such as orthogonality and aliasing. For example, a two-level factorial design will be split into blocks using the same optimal technique used for creating fractional factorials. The design is broken into parts by using the coded pattern of the high-order interactions. If there are 5 factors, the ABCDE term can be used. All the runs with “-” levels of ABCDE are put in the first block, and the runs with “+“ levels of ABCDE are put in the second block. Similarly, response surface designs are also blocked statistically so that the factor effects can be estimated as cleanly as possible. Blocks are not “free”. One degree of freedom (df) is used for each additional block. If there are no replicates in the design, such as a standard factorial design, then a model term may be sacrificed to filter out block-by-block variation. Usually these are high-order interactions, making the “cost” minimal. After the experiment is completed, the data analysis begins. The first line in the analysis of variance (ANOVA) will be a Block sum of squares. This is the amount of variation in the data that is due to the block-to-block differences. This noise is removed from the total sum of squares before any other effects are calculated. Note: Since blocking is a restriction on the randomization of the runs, this violates one of the ANOVA assumptions (independent residuals) and no F-test for statistical significance is done. Once the block variation is removed, the model terms can be tested against a smaller residual error. This allows factor effects to stand out more, strengthening their statistical significance. Example showing the advantage of blocking: In this example, a 16-blend mixture experiment aimed at fitting a special-cubic model is completed over 2 days. The formulators expect appreciable day-to-day variation. Therefore, they build a 16-run blocked design (8-runs per day). Here is the ANOVA: The adjusted R² = 0.8884 and the predicted R² = 0.7425. Due to the blocking, the day-to-day variation (sum of squares of 20.48) is removed. This increases the sensitivity of the remaining tests, resulting in an outstanding predictive model! What if these formulators had not thought of blocking and, instead, simply, run the experiment in a completely randomized order over two days? The ANOVA (again for the designed-for special-cubic mixture model) now looks like this: The model is greatly degraded, with adjusted R² = 0.5487, and predicted R² = 0.0819 and includes many insignificant terms. While the blocked model shown above explains 74% of the variation in predictions (the predicted R-Square), the unblocked model explains only 8% of the variation in predictions, leaving 92% unexplained. Due to the randomization of the runs, the day-to-day variation pollutes all the effects, thus reducing the prediction ability of the model. Blocks are like an insurance policy – they cost a little, and often aren’t required. However, when they are needed (block differences large) they can be immensely helpful for sorting out the real effects and making better predictions. Now that you know about blocking, consider whether it is needed to make the most of your next experiment. Blocking FAQs: How many runs should be in a block? My rule-of-thumb is that a block should have at least 4 runs. If the block size is smaller, then don’t use blocking. In that case, the variable is simply another source of variation in the process. Can I block a design after I’ve run it? You cannot statistically add blocks to a design after it is completed. This must be planned into the design at the building stage. However, Design-Expert has sophisticated analysis tools and can analyze a block effect even if it was not done perfectly (added to the design after running the experiment). In this case, use Design Evaluation to check the aliasing with the blocks, watching for main effects or two-factor interactions (2FI). Are there any assumptions being made about the blocks? There is an assumption that the difference between the blocks is a simple linear shift in the data, and that this variable does not interact with any other variable. I want to restrict the randomization of my factor because it is hard to change the setting with every run. Can I use blocking to do this? No! Only block on variables that you are not studying. If you need to restrict the randomization of a factor, consider using a split-plot design. Aliasing in a fractional-factorial design means that it is not possible to estimate all effects because the experimental matrix has fewer unique combinations than a full-factorial design. The alias structure defines how effects are combined. When the researcher understands the basics of aliasing, they can better select a design that meets their experimental objectives. Starting with a layman’s definition of an alias, it is 2 or more names for one thing. Referring to a person, it could be “Fred, also known as (aliased) George”. There is only one person, but they go by two names. As will be shown shortly, in a fractional-factorial design there will be one calculated effect estimate that is assigned multiple names (aliases). This example (Figure 1) is a 2^3, 8-run factorial design. These 8 runs can be used to estimate all possible factor effects including the main effects A, B, C, followed by the interaction effects AB, AB, BC and ABC. An additional column “I” is the Identity column, representing the intercept for the polynomial. Aliasing in a fractional-factorial design means that it is not possible to estimate all effects because the experimental matrix has fewer unique combinations than a full-factorial design. The alias structure defines how effects are combined. When the researcher understands the basics of aliasing, they can better select a design that meets their experimental objectives. Starting with a layman’s definition of an alias, it is 2 or more names for one thing. Referring to a person, it could be “Fred, also known as (aliased) George”. There is only one person, but they go by two names. As will be shown shortly, in a fractional-factorial design there will be one calculated effect estimate that is assigned multiple names (aliases). This example (Figure 1) is a 2^3, 8-run factorial design. These 8 runs can be used to estimate all possible factor effects including the main effects A, B, C, followed by the interaction effects AB, AB, BC and ABC. An additional column “I” is the Identity column, representing the intercept for the polynomial. Each column in the full factorial design is a unique set of pluses and minuses, resulting in independent estimates of the factor effects. An effect is calculated by averaging the response values where the factor is set high (+) and subtracting the average response from the rows where the term is set low (-). Mathematically this is written as follows: In this example the A effect is calculated like this: The last row in figure 1 shows the calculation result for the other main effects, 2-factor and 3-factor interactions and the Identity column. In a half-fraction design (Figure 2), only half of the runs are completed. According to standard practice, we eliminate all the runs where the ABC column has a negative sign. Now the columns are not unique – pairs of columns have the identical pattern of pluses and minuses. The effect estimates are confounded (aliased) because they are changing in exactly the same pattern. The A column is the same pattern as the BC column (A=BC). Likewise, B=AC and C=AB. Finally, I=ABC. These paired columns are said to be “aliased” with each other. In the half-fraction, the effect of A (and likewise BC) is calculated like this: When the effect calculations are done on the half-fraction, one mathematical calculation represents each pair of terms. They are no longer unique. Software may label the pair only by the first term name, but the effect is really all the real effects combined. The alias structure is written as: I = ABC [A] = A+BC [B] = B+AC [C] = C+AB Looking back at the original data, the A effect was -1 and the BC effect was -21.5. When the design is cut in half and the aliasing formed, the new combined effect is: A+BC = -1 + (-21.5) = -22.5 The aliased effect is the linear combination of the real effects in the system. Aliasing of main effects with two-factor interactions (2FI) is problematic because 2FI’s are fairly likely to be significant in today’s complex systems. If a 2FI is physically present in the system under study, it will bias the main effect calculation. Any system that involves temperature, for instance, is extremely likely to have interactions of other factors with temperature. Therefore, it would be critical to use a design table that has the main effect calculations separated (not aliased) from the 2FI calculations. What type of fractional-factorial designs are “safe” to use? It depends on the purpose of the experiment. Screening designs are generally run to correctly identify significant main effects. In order to make sure that those main effects are correct (not biased by hidden 2FI’s), the aliasing of the main effects must be with three-factor interactions (3FI) or greater. The alias structure looks something like this (only main effect aliasing shown): I = ABCD [A] = A+BCD [B] = B+ACD [C] = C+ABD [D] = D+ABC If the experimental goal is characterization or optimization, then the aliasing pattern should ensure that both main effects and 2FI’s can be estimated well. These terms should not be aliased with other 2FI’s. Within Design-Expert or Stat-Ease 360 software, color-coding on the factorial design selection screen provides a visual signal. Here is a guide to the colors, listed from most information to least • White squares – full factorial designs (no aliasing) • Green squares – good estimates of both main effects and 2FI’s • Yellow squares – good estimates of main effects, unbiased from 2FI’s in the system • Red squares – all main effects are biased by any existing 2FI’s (not a good design to properly identify effects, but acceptable to use for process validation where it is assumed there are no This article was created to provide a brief introduction to the concept of aliasing. To learn more about this topic and how to take advantage of the efficiencies of fractional-factorial designs, enroll in the free eLearning course: How to Save Runs with Fractional-Factorial Designs. Good luck with your DOE data analysis!
{"url":"https://www.statease.com/blog/category/intermediate/","timestamp":"2024-11-09T06:15:02Z","content_type":"text/html","content_length":"54822","record_id":"<urn:uuid:236e6d00-5304-45ed-9fda-7b625b7cb3bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00483.warc.gz"}
Thermally driven elastic membranes are quasi-linear across all scales We study the static and dynamic structure of thermally fluctuating elastic thin sheets by investigating a model known as the overdamped dynamic Föppl-von Kármán equation, in which the Föppl-von Kármán equation from elasticity theory is driven by white noise. The resulting nonlinear equation is governed by a single nondimensional coupling parameter g where large and small values of g correspond to weak and strong nonlinear coupling respectively. By analysing the weak coupling case with ordinary perturbation theory and the strong coupling case with a self-consistent methodology known as the self-consistent expansion, precise analytic predictions for the static and dynamic structure factors are obtained. Importantly, the maximum frequency nmax supported by the system plays a role in determining which of three possible classes such sheets belong to: (1) when g≫1, the system is mostly linear with roughness exponent ζ = 1 and dynamic exponent z = 4, (2) when g≪2/nmax, the system is extremely nonlinear with roughness exponent ζ=1/2 and dynamic exponent z = 3, (3) between these regimes, an intermediate behaviour is obtained in which a crossover occurs such that the nonlinear behaviour is observed for small frequencies while the linear behaviour is observed for large frequencies, and thus the large frequency linear tail is found to have a significant impact on the small frequency behaviour of the sheet. Back-of-the-envelope calculations suggest that ultra-thin materials such as graphene lie in this intermediate regime. Despite the existence of these three distinct behaviours, the decay rate of the dynamic structure factor is related to the static structure factor as if the system were completely linear. This quasi-linearity occurs regardless of the size of g and at all length scales. Numerical simulations confirm the existence of the three classes of behaviour and the quasi-linearity of all classes. Bibliographical note Publisher Copyright: © 2023 The Author(s). Published by IOP Publishing Ltd. • Family-Vicsek scaling • Föppl-von Kármán equations • out-of-equilibrium dynamics • roughness • self-consistent expansion • thin sheets Dive into the research topics of 'Thermally driven elastic membranes are quasi-linear across all scales'. Together they form a unique fingerprint.
{"url":"https://cris.huji.ac.il/en/publications/thermally-driven-elastic-membranes-are-quasi-linear-across-all-sc","timestamp":"2024-11-01T23:09:47Z","content_type":"text/html","content_length":"53097","record_id":"<urn:uuid:d1be98a9-ed9f-404c-aac7-f5e3b8b5fea6>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00507.warc.gz"}
Bayesian filter and particle filter Translated with the help of ChatGPT and Google Translator This time, I studied particle filters, which I had been putting off. The purpose of this article is to explain the principles of particle filters in detail so that anyone can understand them. If you search the Internet, you will find countless articles, including papers, that explain particle filters in detail. However, articles that provide easy explanations only explain important concepts through abstract metaphors without proof. In that case, although it is easy to implement in code, it is difficult to know why it works or does not work, and whether modifying the algorithm will mathematically guarantee optimality. On the other hand, most articles that explain the content in detail, such as papers, are skipped over without explaining the easy parts. However, the easy part was not at the level of equation expansion or factorization, but could be proven only by applying various complex theorems, so it was often difficult to understand the flow. Moreover, many articles, including Wikipedia and papers, use various notations without explanation, making it difficult to interpret the correct meaning of the expression in the first place. So, I would like to explain particle filters by including not only abstract concepts but also rigorous mathematical proofs so that even people with only a high school level of mathematical knowledge can understand them. Specifically, you will need the following background knowledge: • High school level probability theory (conditional probability, joint probability, Bayes' theorem) • Concept of integration Because the content is long, it is organized in written form. In order to understand various filters, including particle filters, it is essential to first understand the concept of filters. In signal processing and control engineering, a filter refers to a device or mathematical structure that passes only the desired signal from a signal in which various signals are overlapped. The reason a filter is needed is because signals inevitably contain noise. From a control or signal processing perspective, all signals are considered the sum of the desired signal and noise. In an ideal environment where signals are transmitted without loss, a filter would be unnecessary, but in reality, noise is included for various reasons, including measurement uncertainty, so a filter is The operation of the filter is fundamentally divided into a process of obtaining the signal value and a process of estimating the true value from this. At this time, obtaining the value of the signal is called measurement or observation, and estimating the true value from this is called estimate or *prediction It is said to be *. Depending on the characteristics of the signal, filters are implemented in various ways. For example, in electronics, the frequencies of signals and noise are often different. From this, the filter is implemented by attenuating everything except the desired frequency band. The most basic of them is the moving average. Moving average estimates the current true value from the (weighted) average of previous measurement values, and can be seen as attenuating high-frequency signals in the frequency domain. On the other hand, there are cases where it is difficult to distinguish between signal and noise using this method when the system is complex or has large nonlinearity. In such cases, a filter based on probability theory can be used. A stochastic filter approaches the method of calculating the probability distribution of a signal rather than estimating a single true value of the signal. Mathematically, this is expressed as a conditional probability distribution of the true value given the measured values up to the current point as shown below. $p(x_t | z_t, z_{t-1}, \cdots, z_0)$ When a comma , is used in a probability distribution, it means a joint probability distribution ($a\cap b$). Therefore, the above equation can be interpreted as follows. \begin{align*} p(x_t | z_t, z_{t-1}, \cdots, z_0) &= p(x_t | z_t \cap z_{t-1} \cap \cdots \cap z_0)\\ &= p(x_t \cap z_t \cap z_{t-1} \cap \cdots \cap z_0) / p(z_t \cap z_{t-1} \cap \cdots \cap z_0) \end{align*} Bayes Filter A Bayes filter is a method of expressing the above probability distribution by combining other probability distributions that are already known. This is because the above equation is simply a mathematical expression of the sentence 'probability distribution of the true value given the measured values', and does not tell us at all how to calculate the probability distribution. The Bayes filter assumes that the two probability distributions below are already known. • $p(z_t | x_t)$: Probability distribution of the measured value when the true value is given • $p(x_t | x_{t-1})$: Probability distribution of the true value given the previous true value These two probability distributions are called measurement model and system model, respectively, and using these two probability distributions, the probability distribution of the true value can be recursively calculated as follows. . $p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0) = \int p(x_t | x_{t-1}) p(x_{t-1} | z_{t- 1}, z_{t-2}, \cdots, z_0) dx_{t-1}$ $p(x_t | z_t, z_{t-1}, \cdots, z_0) = \frac{p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0 )}{\int p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0) dx_t}$ Below, we explain other concepts necessary to derive the Bayes filter and derive the Bayes filter from them. Conditional Independence Since the concept of conditional independence is often used in the derivation of Bayes filters, it is important to understand conditional independence. Conditional independence means that given an event $A$, events $B$ and $C$ are independent. Below are various expressions for conditional independence, all of which are equivalent. • $p(B, C|A) = p(B|A)p(C|A)$: Given $A$, the joint probability distribution of $B$ and $C$ is It is equal to the product of the probability distribution of $. • $p(A \cap B \cap C) = p(A \cap B) p(A \cap C) / P(A)$: The above equation is expanded according to the definition of conditional probability. • $p(B|A, C) = p(B|A)$: Given $A$, the probability distribution of $B$ is not affected by $C$. • $p(C|A, B) = p(C|A)$: Given $A$, the probability distribution of $C$ is not affected by $B$. What is important here is that $B$ and $C$ are independent only when $A$ is given. That is, if $B$ and $C$ are conditionally independent with respect to $A$, $B$ and $C$ are generally not Additionally, if two variables $A and B$ are conditionally independent with respect to the other variable $C$, the following important properties hold true. $p(A | B) = \int p (A |C) p(C|B) dC$ This equation is called Chapman–Kolmogorov Equation (CKE). More precisely, CKE refers to a more general equation for several random variables, and the above equation can be said to be a special case of CKE that deals with three random variables. The proof is as follows. Since $A$ and $B$ are conditionally independent of $C$, $p(A | C) = p(A | B, C)$ Substituting this into the above equation gives $p(A | B) = \int p(A | B, C) p(C | B) dC$ If we expand the right side according to the definition of conditional probability, we get $= \int \frac{p(A \cap B \cap C)}{p(B \cap C)} \frac{p(C \cap B)}{p(B)} dC\\$ $= \int \frac{p(A \cap B \cap C)}{p(B)} dC$ By the law of total probability $= \frac{p(A \cap B)}{p(B)}$ By definition of conditional probability $= p(A | B)$ Markov Chain When a system satisfies the following properties, it is called a Markov chain. $p(x_t | x_{t-1}, x_{t-2}, \cdots, x_0) = p(x_t | x_{t-1})$ This indicates that the current state of the system depends only on the state immediately before it and does not depend on other past states. Many systems, including real-world physical phenomena, satisfy this property. Markov chains can be interpreted in terms of conditional independence. In other words, given $x_{t-1}$, the probability distribution of $x_t$ is conditionally independent with respect to $x_{t-2}, \ cdots, and x_0$. Hidden Markov Chain However, in general, it is impossible to directly measure the entire state of the system, and only part of the system can be measured indirectly. This Markov chain is called a hidden Markov chain. $\begin{array}{cccccccccc} X_{0} & \to & X_{1} & \to & X_{2} & \to & X_{3} & \to & \cdots & \text{signal} \\ \downarrow & & \downarrow & & \downarrow & & \downarrow & & \cdots & \\ Z_{0} & & Z_{1} & & Z_{2} & & Z_{3} & & \cdots & \text{observation} \end{array}$ In the above diagram, the arrow $X$ is a state and cannot be measured directly. $Z$ is the observed value obtained through measurement. The arrow $A\to B$ indicates that the random variable $B$ depends only on $A$. this can also be interpreted to mean that for any random variable $C$ other than $A$, $B$ and $C$ are conditionally independent of $A$. Bayes' theorem Bayes' theorem states that the following relationship holds true for conditional probability. $p(A|B) = \frac{p(B|A)p(A)}{p(B)}$ Bayes' theorem can be interpreted in two ways: • Inverse probability problem: Finding $P(A|B)$ given $P(B|A)$ □ Example) Given the probability of testing positive when having a disease, finding the probability of being sick when testing positive is given. • Posterior probability estimation: updating the prior probability $P(A)$ to a more accurate probability through the measured value $B$ □ Example) In general, you know the probability of contracting a certain disease, but when you learn new test results, you recalculate the probability of contracting the disease. Bayes filter uses Bayes' theorem in terms of posterior probability estimation. In other words, the measured value is reflected in the previously known probability distribution and updated to a more accurate probability distribution is repeated. Derivation of Bayes filter If we recall again the probability distribution we want to find, it is as follows. $p(x_t | z_t, z_{t-1}, \cdots, z_0)$ To calculate this, assume that you know the two probability distributions below. • $p(x_t | x_{t-1})$: system model • $p(z_t | x_t)$: measurement model From this, Bayes' filter is derived by applying Bayes' theorem as follows. First, if you apply Bayes' theorem to the equation you want to find, $p(x_t | z_t, z_{t-1}, \cdots, z_0) = \frac{p(z_t, z_{t-1}, \cdots, z_0 | x_t) p(x_t)}{p(z_t, z_{t-1}, \cdots, z_0)}$ At this time, $z_t$ depends only on $x_t$ according to the assumption of the hidden Markov chain, so it is conditionally independent with respect to any random variable $k$ other than $x_t$. That is, the following holds true. $\forall k eq x_t, p(z_t, k | x_t) = p(z_t | x_t) p(k | x_t)$ From this, if we set $k=z_{t-1}, z_{t-2}, \cdots, z_0$, $p(z_t, z_{t-1}, \cdots, z_0 | x_t) = p(z_t | x_t) p(z_{t-1}, z_{t-2}, \cdots, z_0 | x_t)$ Applying this again to Bayes' theorem, we get $= \frac{p(z_t | x_t) p(z_{t-1}, z_{t-2}, \cdots, z_0 | x_t) p(x_t)}{p(z_t, z_{t-1}, \cdots,z_0)}$ However, according to the definition of conditional probability, $p(z_{t-1}, z_{t-2}, \cdots, z_0 | x_t) p(x_t) = p(z_{t-1}, z_{t-2 }, \cdots, z_0, x_t)$, so $= \frac{p(z_t | x_t) p(z_{t-1}, z_{t-2}, \cdots, z_0, x_t)}{p(z_t, z_{t-1}, \cdots, z_0 )}$ Divide the denominator and numerator by $p(z_{t-1}, z_{t-2}, \cdots, z_0)$, respectively. $= \frac{p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0)}{p(z_t | z_{t-1}, z_{t- 2}, \cdots, z_0)}$ At this time, the denominator $p(z_t | z_{t-1}, z_{t-2}, \cdots, z_0)$ is decomposed by CKE, which was previously discussed in the conditional independence section, as follows. $p(z_t | z_{t-1}, z_{t-2}, \cdots, z_0) = \int p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0) dx_t$ Substituting this back into the equation, we get: $p(x_t | z_t, z_{t-1}, \cdots, z_0) = \frac{p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0 )}{\int p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0) dx_t}$ The unknown part of this equation is $p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0)$. At this time, according to the assumption of the Markov model, $x_t$ depends only on $x_{t-1}$, so $x_t$ is conditionally independent from $z_{t-1}, z_{t-2}, \cdots, z_0$ am. Therefore, it is similarly decomposed by CKE as follows. $p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0) = \int p(x_t | x_{t-1}) p(x_{t-1} | z_{t- 1}, z_{t-2}, \cdots, z_0) dx_{t-1}$ At this time, the unknown part of this equation, $p(x_{t-1} | z_{t-1}, z_{t-2}, \cdots, z_0)$, is the original expression $p(x_t | z_t, z_ It is a form with only one subscript reduced from {t-1}, \ cdots, z_0)$. From this, the following recursive estimation can be performed. 1. Assume you know $p(x_{t-1} | z_{t-1}, z_{t-2}, \cdots, z_0)$. 2. Calculate $p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0)$ from this. Since this calculates the probability distribution of the true value at time $t$ based on information up to time $t-1$, this process is called estimation or prediction (prediction). 3. Calculate $p(x_t | z_t, z_{t-1}, \cdots, z_0)$ from this. This process is called update because it recalculates the probability distribution of the true value by reflecting the new measured value in the estimated value. When you first start calculating, an initial estimate $p(x_0)$ is needed. $p(x_0)$ is a probability distribution in the case where there is no information, so a uniform probability distribution or normal distribution can be used. The more precise the initial estimate is used, the more accurate the subsequent estimate becomes. The method of estimating the probability distribution of the true value as shown above is called Bayes filter and is the theoretical basis for all probabilistic filtering. However, Bayesian filters must perform integration during the prediction process, but integration of non-linear or numerically defined functions is often very difficult or sometimes impossible. Therefore, in general, Bayes filters cannot be directly applied to real-world problems, and various methods to approximate them have been proposed. Methods for approximating arbitrary probability distributions are largely divided into parametric methods and nonparametric methods. The parametric method is a method of estimating the parameters of the model assuming that the probability distribution follows a specific type of probability distribution. This can be used when there is a theoretical basis for the probability distribution. On the other hand, the non-parametric method is a method of approximating the probability distribution using measured values without making any assumptions about the probability distribution. Kalman filter is an approximation to the Bayes filter through a parametric method, and is a method that allows integrals to be solved analytically by assuming that the linearity and error of each model are normal distribution. The Kalman filter provides very accurate estimates when the amount of calculation is not large and linearity and normal distribution assumptions are satisfied. However, if important assumptions such as linearity are not satisfied, the filter may diverge. Particle filter, which will be discussed later, is a non-parametric approximation to the Bayes filter and is a method of approximating the Bayes filter using Monte Carlo sampling. Particle filters make no assumptions about the model, so they can be used even when assumptions such as linearity are not met. However, since it is an approximation through sampling, it has the disadvantage that the calculation amount is large and the accuracy may be low. Particle Filter It was previously said that the Bayes filter estimates the probability distribution of the true value by repeating the following two steps. • prediction $p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0) = \int p(x_t | x_{t-1}) p(x_{t-1} | z_{ t-1}, z_{t-2}, \cdots, z_0) dx_{t-1}$ • update $p(x_t | z_t, z_{t-1}, \cdots, z_0) = \frac{p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots , z_0)}{\int p(z_t | x_t) p(x_t | z_{t-1}, z_{t-2}, \cdots, z_0) dx_t}$ However, in general, a system is a two-dimensional or more vector, and the system model is given as a matrix equation. Therefore, the integration in the prediction step becomes multiple integrations in high dimensions, and the integration area can become very complicated. Empirical Distribution Function In this case, the integral can be approximated using an empirical distribution function. The empirical distribution function is a method of approximating the probability distribution by sampling the measured values when it is difficult to directly obtain the value of a probability distribution function but easy to sample. At this time, when the number of samples becomes sufficiently large, the distribution of samples converges to the original probability distribution. Mathematically, this can be expressed as follows. $\hat p(x) := \frac{1}{n} \sum_{i=1}^n \delta(x - x_i) \approx p(x)$ At this time, $x_i$ is a value sampled from the probability distribution $p(x)$, $\delta(x - x_i)$ is the Dirac-delta function, $\delta(0) = \infty$, and $\delta(x) = 0$ ($x eq 0$). The integral of the Dirac-delta function becomes 1. Alternatively, the above equation can be integrated and expressed in the form of a cumulative distribution function as shown below. $\hat F(x) = \frac{1}{n} \sum_{i=1}^n \bold{1}_{x_i \leq x}\approx F(x)$ In this case, $\bold{1}_{x_i \leq x}$ is a function that is 1 when $x_i \leq x$, and 0 otherwise. Below is an empirical distribution function calculated using samples sampled from a normal distribution. Importance Sampling However, there is a problem that in order to obtain the empirical distribution function through simulation rather than real-life experiment, the original probability distribution must be known. In other words, an attempt was made to use empirical sampling to obtain the probability distribution, but a paradoxical problem arises in that the original probability distribution must be known in order to do so. At this time, even if sampling is not possible from the original probability distribution, the probability distribution can be approximated through sampling by using importance sampling. Importance sampling is a method of sampling from another probability distribution $q(x)$ and using it to approximate the expected value of $p(x)$, even if the probability distribution $p(x)$ is not known, as $E_p[f(x)] = \int f(x) p(x) dx = \int f(x) \frac{p(x)}{q(x)} q(x) dx \approx \frac{ 1}{n} \sum_{i=1}^n f(x_i) \frac{p(x_i)}{q(x_i)}$ Below is an approximation of the cumulative probability density function of the normal distribution using importance sampling from the uniform probability distribution. At this time, the probability distribution $q(x)$ on which sampling is performed is called the importance distribution or proposal distribution. The proposed distribution can select an arbitrary probability distribution with non-zero values where the probability density of the original distribution is greater than 0, but the more similar the original distribution is, the higher the approximation accuracy. The proposed distribution generally uses a normal distribution or uniform distribution that is easy to sample. However, it should be noted that the mean of the probability distribution obtained as a result of importance sampling converges to the original probability distribution, but the variance is different from the original probability distribution. Sequential Importance Sampling Particle filters approximate Bayesian filters using importance sampling, as mentioned earlier. This is called Sequential Importance Sampling (SIS). The proof of SIS is a bit complicated. First, when each particle moves along a specific probability distribution over time, we will obtain the probability distribution of the trajectory itself and then show that it is also the same as the probability distribution of the particle. In general, if you look at other articles, in this calculation, all proposed probability distributions are expressed with a single symbol, $q$. However, in this article, different probability distributions will be assigned different symbols to avoid confusion. The symbol $p$ generally represents the probability distribution of a random variable, and other symbols represent specific probability distributions. These should not be confused. Probability distribution of particle trajectories First, extract $n$ samples $x_0^{(1)}, x_0^{(2)}, \cdots, x_0^{(n)}$ from the initial probability distribution $q_0(x)$. Obviously, $p(x_0) = q_0(x)$. And let us assume that each particle $x_0^{(i)}$ moves according to the probability distribution $r_t^{(i)}(x^{(i)}_{t+1})$ over time. . Where a particle will be at the next instant generally depends on where it was at the previous instant. Therefore, although not explicitly stated, $r_t^{(i)}(x^{(i)}_{t+1})$ is generally dependent on $x^{(i)}_t$. Then, for each particle $x^{(i)}$, the probability that the particle is initially at $x_0^{(i)}$ and at the next instant is at $x_1^{(i)}$ is same. $p(x_1^{(i)}, x_0^{(i)}) = q_0(x_0^{(i)}) r_0^{(i)}(x_1^{(i)})$ Expanding, the probability that each particle moves along the trajectory $x_0^{(i)}, x_1^{(i)}, \cdots, x_t^{(i)}$ over time is as follows. $p(x_t^{(i)}, x_{t-1}^{(i)}, \cdots, x_0^{(i)}) = q_0(x_0^{(i)}) r_0^{(i )}(x_1^{(i)}) r_1^{(i)}(x_2^{(i)}) \cdots r_{t-1}^{(i)}(x_t^{(i)})$ At this time, let the probability distribution of the particle trajectory at time $t$ be $q_t$. Then $q_t$ is expressed as follows. $q_t(x_0^{(i)}, x_1^{(i)}, \cdots, x_t^{(i)}) = q_0(x_0^{(i)}) r_0^{(i)}(x_1^ {(i)}) r_1^{(i)}(x_2^{(i)}) \cdots r_{t-1}^{(i)}(x_t^{(i)})$ Expressing this in ignition form is as follows. $q_t(x_0^{(i)}, x_1^{(i)}, \cdots, x_t^{(i)}) = q_{t-1}(x_0^{(i)}, x_1^{(i )}, \cdots, x_{t-1}^{(i)}) r_{t-1}^{(i)}(x_t^{(i)}) \tag{1}$ This is the probability distribution that each particle moves along a specific trajectory over time. Probability distribution of state trajectory Next, the probability distribution of the original state that we want to obtain is as follows. $p(x_0, x_1, \cdots, x_t | z_0, z_1, \cdots, z_t)$ This is slightly different from the probability distributions discussed earlier. Previously, we discussed the probability distribution of the current state given the observed values up to the current point. However, this includes all probability distributions of states at previous times, given the observations up to the current point. At this time, let us assume that there is an appropriate weight $w_t^{(i)}$ so that the probability distribution of the particle approximates the probability distribution to be obtained according to the principle of importance sampling. In other words, let’s say the following conditions are satisfied. $p(x_0, x_1, \cdots, x_t | z_0, z_1, \cdots, z_t) \approx \sum_{i=1}^n w_t^{(i)} \delta(x_0, x_1, \cdots, x_t - x_0^{(i)}, x_1^{(i)}, \cdots, x_t^{(i)})$ What we want to achieve is to find this $w_t^{(i)}$. According to importance sampling, $w_t^{(i)}$ should be: $w_t^{(i)} = \frac{p(x_0^{(i)}, x_1^{(i)}, \cdots, x_t^{(i)} | z_0, z_1, \cdots, z_t)} {q_t(x_0^{(i)}, x_1^{(i)}, \cdots, x_t^{(i)})}$ Since the expression is long, from now on, $x_0^{(i)}, x_1^{(i)}, \cdots, x_t^{(i)}$ will be simply called $x_{0:t}^{(i)}$ I will mark it. Using this notation, the above equation is simply expressed as follows. $w_t^{(i)} = \frac{p(x_{0:t}^{(i)} | z_{0:t})}{q_t(x_{0:t}^{(i)}) } \tag{2}$ Now we will decompose the molecule on the right side using Bayes' theorem. First, if we expand this equation according to the definition of conditional probability, we get $p(x_{0:t}^{(i)} | z_{0:t}) = \frac{p(x_{0:t}^{(i)}, z_{0:t})}{ p(z_{0:t})}$ According to the definition of conditional probability, if we take out only $z_{t}$, $= \frac{p(z_t | x_{0:t}^{(i)}, z_{0:t-1}) p(x_{0:t}^{(i)}, z_{0:t -1})}{p(z_{0:t})}$ Again, according to the definition of conditional probability, if we take out only $x_{t}^{(i)}$, $= \frac{p(z_t | x_{0:t}^{(i)}, z_{0:t-1}) p(x_{t}^{(i)} | x_{0:t-1 }^{(i)}, z_{0:t-1}) p(x_{0:t-1}^{(i)}, z_{0:t-1})}{p(z_{0 :t})}$ Divide both the denominator and numerator by $p(z_{0:t-1})$ $= \frac{p(z_t | x_{0:t}^{(i)}, z_{0:t-1}) p(x_{t}^{(i)} | x_{0:t-1 }^{(i)}, z_{0:t-1}) p(x_{0:t-1}^{(i)}, z_{0:t-1}) / p(z_{0: t-1})}{p(z_{0:t}) / p(z_{0:t-1})}$ According to the definition of conditional probability $= \frac{p(z_t | x_{0:t}^{(i)}, z_{0:t-1}) p(x_{t}^{(i)} | x_{0:t-1 }^{(i)}, z_{0:t-1}) p(x_{0:t-1}^{(i)} | z_{0:t-1})}{p(z_t | z_ {0:t-1})}$ At this time, the denominator of this equation is a constant, so it can be omitted. What we are looking for now is the numerator of $w_t^{(i)}$, and since $\sum_{i=1}^n w_t^{(i)} = 1$, the relative size of each $w_t^{(i)}$ Because it's only important. therefore $p(x_{0:t}^{(i)} | z_{0:t}) \propto p(z_t | x_{0:t}^{(i)}, z_{0:t-1}) p(x_{t}^{(i)} | x_{0:t-1}^{(i)}, z_{0:t-1}) p(x_{0:t-1}^{( i)} |z_{0:t-1})$ Here, if we eliminate variables that can be ignored according to the Markov chain assumption, we get the following conclusion. $p(x_{0:t}^{(i)} | z_{0:t}) \propto p(z_t | x_{t}^{(i)}) p(x_{t}^{(i) } | x_{t-1}^{(i)}) p(x_{0:t-1}^{(i)} | z_{0:t-1}) \tag{3}$ This is the probability distribution that a state will follow a certain trajectory given the observations. Importance Updates Next, the importance update equation is derived through this. Substituting equation (3) into equation (2), we get $w_t^{(i)} \propto \frac{p(z_t | x_{t}^{(i)}) p(x_{t}^{(i)} | x_{t-1}^{(i)}) p(x_{0:t -1}^{(i)} | z_{0:t-1})}{q_t(x_{0:t}^{(i)})} \tag{4}$ Substituting equation (1) into equation (4), we get $w_t^{(i)} \propto \frac{p(z_t | x_{t}^{(i)}) p(x_{t}^{(i)} | x_{t-1}^{(i )}) p(x_{0:t-1}^{(i)} | z_{0:t-1})}{q_{t-1}(x_{0:t-1}^{(i )}) r_{t-1}^{(i)}(x_t^{(i)})}$ If we bundle the terms of time $t-1$, we get $w_t^{(i)} \propto \frac{p(z_t | x_{t}^{(i)}) p(x_{t}^{(i)} | x_{t-1}^{(i )})}{r_{t-1}^{(i)}(x_t^{(i)})}\frac{p(x_{0:t-1}^{(i)} | z_{0 :t-1})}{q_{t-1}(x_{0:t-1}^{(i)})}$ According to the definition of $w_t^{(i)}$ $w_t^{(i)} \propto \frac{p(z_t | x_{t}^{(i)}) p(x_{t}^{(i)} | x_{t-1}^{(i )})}{r_{t-1}^{(i)}(x_t^{(i)})} w_{t-1}^{(i)}$ This is the most basic update formula for particle filters, and it is called Sequential Importance Sampling (SIS). And in general, to reduce the amount of calculation, $r_t^{(i)}(x_t^{(i)})$ is appropriately changed to $p(xt^{(i)} | x{t-1}^{(i) }) Select with $. Then, the equation becomes simple as follows. $w_t^{(i)} \propto p(z_t | x_{t}^{(i)}) w_{t-1}^{(i)}$ The entire logic of the SIS-based particle filter is expressed in pseudocode as follows. xs = sample_from_prior() ws = [1.0] * n_particles for t in range(1, T): z = observe() for i in range(n_particles): xs[i] = transition(xs[i]) ws[i] *= likelihood(z, xs[i]) # Normalize ws_sum = sum(ws) for i in range(n_particles): ws[i] /= ws_sum However, if the above method is used as is, the importance of particles with low importance continues to decrease and converges to 0 in just a few steps, and all importance is concentrated on one particle. This is called degeneracy. When degeneracy occurs, not only is most of the calculation amount consumed in unnecessary calculations where the probability density is almost 0, but the filter performance is greatly reduced as the particles do not approximate the entire probability distribution and only represent a single point. Accordingly, when the filter is degraded, resampling is necessary to replicate particles of high importance and remove particles of low importance. This can be interpreted as reducing the number of samples in meaningless parts that are close to 0 in the probability distribution and densely sampling important parts. From another perspective, resampling is the same as performing empirical sampling from the probability distribution represented by importance sampling. Therefore, this does not change the probability distribution it is intended to represent. The image above shows the composition of the mesh in Finite Element Analysis. You can see that a dense mesh is allocated to important parts that receive a lot of force, and a small mesh is allocated to unimportant parts. This is similar to the concept of resampling. The following methods are generally used for resampling: • Multinomial Resampling: Randomly samples particles in proportion to their importance • Systematic Resampling: Sampling at even intervals according to importance • Stratified Resampling: Divide into layers according to importance and sample uniformly from each layer. • Residual Resampling: Perform sampling in proportion to the weight and perform additional sampling using the remaining weight. There are various resampling methods, but it is known that no method always shows better performance than other methods. In general, polynomial resampling, which is simple to implement, is often Because resampling replicates high-importance particles as they are, several identical particles are created immediately after resampling. However, since the importance sampling in the prediction step in the very next step (the transition function in the pseudo-code above) is a stochastic process, each particle becomes different. The timing of performing resampling also varies. The simplest way is to perform resampling at every step, and it is often used in practice. However, in that case, the amount of calculation increases and all the particles are gathered in the high probability distribution, which may reduce the expressiveness of the filter. Therefore, resampling is performed at specific intervals rather than at every step. In general, a method of performing resampling when the filter has degenerated below a certain level is often used. The effective sample size is mainly used to determine the level of filter degradation. $N_{\text{eff}} = \frac{1}{\sum_{i=1}^n (w_t^{(i)})^2}$ This is actually a formula obtained by using the variance difference from ideal Monte Carlo sampling. However, intuitively, if only one sample has a weight and the rest are 0, this value becomes 1, and conversely, if all samples have a weight of $1/n$, this value becomes $n$. Therefore, if this value becomes smaller than a certain threshold $N_{\text{th}}$, the filter can be judged to be If the pseudocode is modified to reflect resampling based on the effective sample size, it is as follows. xs = sample_from_prior() ws = [1.0] * n_particles for t in range(1, T): z = observe() for i in range(n_particles): xs[i] = transition(xs[i]) ws[i] *= likelihood(z, xs[i]) # Normalize ws_sum = sum(ws) for i in range(n_particles): ws[i] /= ws_sum if effective_sample_size(ws) < N_th: xs, ws = resample(xs, ws) This is an implementation of a typical particle filter. Below is a simulation that implements a simple particle filter to follow the mouse position. In the simulation above, the red dot represents the current location of the mouse, and the black dot represents a landmark whose location is known. The simulation space is square-shaped with each side being 10 m. At every time step, the distance from the current mouse position to the landmark is measured, and this measurement includes normally distributed noise with a 95% confidence interval of 1 m. The particle filter uses 100 particles to estimate the mouse's position from these measurements. Estimates are indicated by white dots. Polynomial resampling is used as a resampling method, and resampling is performed if $N_\text{eff} < N_{\text{th}}=0.5N$ or less. The system model used an appropriate normal distribution. The timestep may vary depending on the viewer's environment using This example is intended to visually demonstrate the behavior of a particle filter, and uses much worse values than would be practical. Typically, more than 1,000 particles are used, the number of landmarks is much larger, and the sensor error is much smaller. Additionally, a much more sophisticated system model is used that reflects user input or current status (speed, etc.). • Elfring J, Torta E, van de Molengraft R. Particle Filters: A Hands-On Tutorial. Sensors (Basel). 2021 Jan 9;21(2):438. doi: 10.3390/s21020438. PMID: 33435468; PMCID: PMC7826670. • https://en.wikipedia.org/wiki/Particle_filter • M. S. Arulampalam, S. Maskell, N. Gordon and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," in IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, Feb. 2002, doi: 10.1109/78.978374. • Doucet, Arnaud, Simon Godsill, and Christophe Andrieu, 'On Sequential Monte Carlo Sampling Methods for Bayesian Filtering', in Andrew C Harvey, and Tommaso Proietti (eds), Readings In Unobserved Components Models (Oxford, 2005; online edn, Oxford Academic, 31 Oct. 2023), https://doi.org/10.1093/oso/9780199278657.003.0022, accessed 14 Feb. 2024. • https://en.wikipedia.org/wiki/Importance_sampling • Bergman, N. (1999). Recursive Bayesian Estimation: Navigation and Tracking Applications (PhD dissertation, Linköping University). Retrieved from https://urn.kb.se/resolve?urn=
{"url":"https://unknownpgr.com/posts/particle-filter/index.en.html","timestamp":"2024-11-10T14:12:02Z","content_type":"text/html","content_length":"627661","record_id":"<urn:uuid:6030c0c4-6420-4e0c-a052-cfca9f3f7ffd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00469.warc.gz"}
Summary of the game Pig is a simple game of luck and gamble, where two players take turns to roll a dice. On a player’s go, they add the number on the dice to their score for that go. If they land on 1, their score for that go is set to 0 and it becomes the other player’s turn. Before each roll, the player can decide if they want to “stick” or roll again. If they choose to stick, their score for that go gets added to their overall score. The aim of the game is to have a higher score than your opponent at the end of n rounds. I broke this game down into 3 parts. p1 = 0 p2 = 0 rounds = 10 playing = True draw = False winner = "NONE" # Mainloop for r in range(1, rounds + 1): print("-----Round %d-----" % r) print("Player 1 score: %i" % p1) print("Player 2 score: %i" % p2) p1 += move("p1") p2 += move("p2") if p1 > p2: winner = "p1" elif p2 > p1: winner = "p2" draw = True if draw: print("Player 1 score: %i" % p1) print("Player 2 score: %i" % p2) print(f"----{winner} WINS-----") print("Player 1 score: %i" % p1) print("Player 2 score: %i" % p2) I broke this game down into 3 parts. Part 1 Part one is the structure of the game. This includes the main loop, adding to the players’ overall scores and printing the winner at the end. def move(player): rolling = True die = 0 round_score = 0 rolls = 0 # User if player == "p1": print("\n---YOUR GO---\n") while rolling: op = input("Do you want to roll (r) or stick (s)?") if op.lower().strip() == "r": die = random.randint(1, 6) # Creating the random number round_score += die print("\nThe dice landed on %i!" % die) rolls += 1 if die == 1 and rolls != 1: print("0 points for this round!\n") return 0 print("Total for this round: %d" % round_score, "\n") print("You got %d points for this round!" % round_score, "\n") return round_score Part 2 Part 2 is where the player inputs their decisions into the game. You can choose to roll (r) or stick (s). # Computer print("\n---COMPUTER'S GO---\n") while rolling: die = random.randint(1, 6) round_score += die rolls += 1 print("The computer landed on %i" % die) if die == 1 and rolls != 1: # Cannot get out on the first go print("0 points for this round!\n") return 0 print("The computer's total is %i" % round_score, "\n") v = (round_score / rolls) / 6 # Getting the value prob = (rolls ** (v * 4)) / 50 # Getting the probability if random.random() < prob: print("The computer has decided to stick\n") return round_score Part 3 Part 3 is the complicated bit. It is where the computer decides what it wants to do. It uses a formula to calculate a probability on whether it will stick or not. Making the formula I mostly used trial and error to make this formula, as I have no idea what numbers change what so I just messed around with it until it looked good. A website that really helped was https:// www.desmos.com/calculator. This is a really great website that plots a line representing a formula that you can specify and change. It also has “sliders” so you can quickly change values and see the To make this formula I started breaking down the game to decide on some simple rules you can follow to maximise your score. At the end of the day this game is luck however you can make certain decisions that increase your point gains. Firstly: • The more, high numbers you are getting, the higher the chances that you are going to stick and not gamble For example, if you roll three 6s in a row, that is really lucky so you are not going to want to waste that. Inversely: • The more, lower numbers you are getting, the more likely you are going to want to carry on and try to get higher scores This is a general trend however sometimes if you are getting unlucky, you might just want to stick with what you have and not risk the few points you do have. To resemble this this certain degree of luck, I made a variable called “V”, or “Value”. This variable is a simple percentage (shown as 0 to 1), that shows how lucky your series of rolls are. For example, 6 is absolute maximum value so that will give you a 1, further rolls will change this value for example if you roll a 3 that will bring you down do v = 0.75. This is how I calculated this: • “round_score” = the total points that player has for that series of rolls • “rolls” = the amount of rolls the player has done in that round This gives you the average score, as a number between 1 and 0. This is only one piece of the puzzle however, as this does not take into account the general rule of: • The more rolls you take, the more likely you are going to stick So, I needed to use this value and bring in the number of rolls into account. I started messing around on the quadratic line plotter. I used a slider and the variable “v” to test how the graph changes as the value changes. This is the formula I got: These numbers make a line that can be used to give the computer a probability of sticking. This is much more accurate and rewarding than linearly changing the probability. This is how it works: The number of rolls are shown on the x axis along the bottom and the probability is derived from the y. The computer picks a random number from 0 to 1. If it is below the y value produced by this graph, the computer sticks. So the more space under the line between 1 and 0 show the higher the probability of sticking. Of course, if the line goes above 1, the computer will always stick as there is a 100% chance of it being under the line. When you move the slider for v, the line responds. At v = 0.6, the slope’s gradient becomes more gradual. This allows for the computer to take more rolls. At v = 0.4, the gradient is very shallow so the computer will roll more. This was the first formula that I looked at and said “that looks alright”, so I wasn’t expecting much. I reckoned it was a bit too subtle and not daring enough to be accurate to human decision. I definitely expected to go through a couple more generations before I got it right. So, as you might imagine, I was very surprised when the computer beat me on my first game. It then went onto beat my friend on it’s second game, both times it looked like we were going to win, but then the computer comes back in the final few rounds. We then went on to beat it three times in a row, proving that it was not completely unbeatable. At home it beat my dad, in the same late game clutch as it had done twice before. I have got to admit I was very impressed with my program! However, there is a lot that can be improved. This game is based almost solely on luck, so I think I had got the probabilities just good enough so that luck could tip it over the edge for the win. There is still a lot more room for improvement however, as the computer did make some questionable decisions here and there. Final round problem There was a situation in one of the games the computer lost. It was the final round, and the computer was about 10 points behind the player. It was the computers go. First roll: 3… And it stuck. Admittedly, there is a pretty low chance of the computer sticking after just rolling 3, but it does flag up a problem in the logic. In order for the computer to win it must have rolled a score which meant it beat the opponent. So, what a human would do is think, “I might as well keep rolling until it means I can beat the opponent as the only other option is a loss”. This is something that I can program in at a later date. Other methods While I was making the formula, I thought of some other methods of helping the computer decide whether to stick or to roll. On of these methods was the “target number” method, where, the computer would roll until it got to a good amount, say 14, and then stick, this would ensure that the computer always tries to get a good sum of points. Also, there should be a way of fining the “sweet spot”. This would be a number with the highest probability to size ratio. Put simply, the biggest number the computer has a good probability of getting. Taking this method further, you could add in an element of competitiveness. Perhaps, tell the computer that it always has to beat the what the opponent got for their turn. This could backfire however because the player could get a very low number; there’d be no point stooping to that goes’ level when there’s valuable points to be getting earnt to put the computer ahead of its competition. The correct way to play Pig Hiding behind this seemingly simple game of luck, there is a single, mathematically correct way of playing the game, a way that puts you at the absolute best chances of winning the game. This, is what I intend to find out. There are two ways I see, of finding this optimum method. One, you could do it completely mathematically, using probability and calculations to find the best method. Secondly, the much more fun method, is letting the computer itself decide what the best way of playing the game. I would need to find a way of the computer playing pig with itself over and over again, thousands of times a second, a way of measuring reward – so it knows what went well and what did not go so well, and a way of recording and somehow condensing the data the computer gathers so it can use that data to improve its own strategies. I expect the computer to come up with some seemingly bizarre “best way” of playing the game, that at first seems completely wrong to humans. For example, “roll 4 every single time” or something strange like that. I think the computer will come up with something like this (that does not look right to humans) because the computer has no bias or misleading instincts that may give humans a disadvantage. The computer understands no aspect of “luck” or getting “unlucky”. All it sees is a probability. And will not be influenced by the feeling of “aww I’m getting so unlucky, lets stick now”. Fundamentally this is what will allow the computer to find a better algorithm than a human, the best algorithm.
{"url":"https://henrytechblog.com/project/programming/pig/","timestamp":"2024-11-11T11:30:10Z","content_type":"text/html","content_length":"70797","record_id":"<urn:uuid:7b4569c6-e740-4008-a581-d5bb2954460a>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00129.warc.gz"}
Radius of Gyration Calculator \( k = \sqrt{\frac{I}{m}} \) What is Radius of Gyration? Radius of Gyration is a fundamental concept in the field of structural engineering, mechanics, and material science. It plays a crucial role in understanding and analyzing the stability, strength, and behavior of structures under various load conditions. The Radius of Gyration, denoted usually by ‘k’, is a measure that describes the distribution of a cross-sectional area of a column (or any structural member) around an axis. More precisely, it is the square root of the area moment of inertia divided by the cross-sectional area. Significance in Engineering 1. Buckling Analysis: In structural engineering, the Radius of Gyration is particularly important in the analysis of column buckling. It helps in determining the slenderness ratio of a column, which is a key factor in predicting whether a column will fail by buckling under a given load. The slenderness ratio is defined as the effective length of the column divided by the radius of gyration. Columns with a larger slenderness ratio are more prone to buckling. 2. Structural Design: The concept is also essential in the design of various structural elements. It aids in understanding how a structural member will behave under compressive loads, thereby influencing the choice of materials and cross-sectional shapes in design considerations. 3. Material Science Applications: In material science, the radius of gyration is used to describe the dimensions of polymers. It provides an average measure of the size of the polymer coil, which is critical in understanding the physical properties of polymers like viscosity and flow behavior. Radius of Gyration Formula \( k = \sqrt{\frac{I}{A}} \) • k – Radius of Gyration (meters, m) • I – Area Moment of Inertia (square meters, m²) • A – Cross-sectional Area (square meters, m²) Applications of Radius of Gyration in Engineering The concept of the radius of gyration plays a pivotal role in various engineering disciplines, offering insights into the structural integrity and performance of materials and components. This section delves into the diverse applications of the radius of gyration in engineering, highlighting its significance in improving design efficiency, safety, and functionality. 1. Structural Engineering: Ensuring Stability and Strength In structural engineering, the radius of gyration is essential for assessing the buckling resistance of columns and beams. It is a critical parameter in the Euler’s Buckling Formula, which determines the critical load at which a slender column will buckle under compression. Engineers use this concept to design structures that can withstand specific load conditions without compromising safety. By calculating the radius of gyration, engineers ensure that buildings, bridges, and other structures have adequate strength and stability. 2. Mechanical Engineering: Design of Rotating Machinery In the realm of mechanical engineering, the radius of gyration is crucial for designing rotating machinery components like gears, flywheels, and rotors. It helps in analyzing the dynamic behavior of these components, ensuring that they can operate efficiently at high speeds. Understanding the distribution of mass around the axis of rotation, as indicated by the radius of gyration, allows engineers to optimize the design for minimal vibration and maximal performance. 3. Aerospace Engineering: Aircraft and Spacecraft Design Aerospace engineers utilize the radius of gyration in designing aircraft and spacecraft. It plays a significant role in determining the moment of inertia, which is vital for stability and control in flight. Accurate calculations of the radius of gyration help in optimizing the weight distribution of an aircraft or spacecraft, leading to improved aerodynamic performance and fuel efficiency. 4. Naval Architecture: Ship Stability and Design In naval architecture, the radius of gyration is used to assess the stability of ships and other watercraft. It provides insights into the ship’s ability to resist capsizing and ensures that the vessel remains stable in various sea conditions. By evaluating the radius of gyration, naval architects can design ships that are not only safe but also comfortable for passengers and crew. 5. Civil Engineering: Bridge Design and Analysis In civil engineering, particularly in bridge design, the radius of gyration helps in understanding how a bridge will behave under load. It aids in the analysis of stress distribution and deflection under traffic loads, environmental forces, and during earthquakes. This understanding is crucial for designing bridges that are not only structurally sound but also resilient in the face of natural Radius of Gyration Example Problem Radius of Gyration Example Problem Problem Statement Calculate the radius of gyration for a rectangular beam with a width of 200 mm and a height of 400 mm, about its base. • Width (b) – 200 mm • Height (h) – 400 mm Solution Steps Step 1: Calculate the Area Moment of Inertia (I) I = \(\frac{bh^3}{12}\) Step 2: Convert dimensions to meters b = 200 mm = 0.2 m h = 400 mm = 0.4 m Step 3: Substitute values into the formula I = \(\frac{0.2 \times 0.4^3}{12}\) I = 0.001067 m^4 Step 4: Calculate the Radius of Gyration (k) k = \(\sqrt{\frac{I}{b × h}}\) A = b × h = 0.08 m^2 k ≈ \(\sqrt{\frac{0.001067}{0.08}}\) ≈ 0.11549 m Why Is Radius of Gyration Important in Structural Engineering? In structural engineering, the radius of gyration is vital for assessing the buckling resistance of columns and beams. It helps determine the load capacity of a structure and ensures that it can withstand various stressors without collapsing. By accurately calculating the radius of gyration, engineers can predict how a structure will behave under specific loads, leading to safer and more reliable designs, especially in high-rise buildings and bridges. Applications of Radius of Gyration in Mechanical Engineering The radius of gyration in mechanical engineering is essential for designing and analyzing rotating components like gears and turbines. It helps in understanding the mass distribution relative to the rotation axis, which is critical for balancing and minimizing vibrations in machinery. This leads to more efficient, stable, and durable mechanical systems, especially in high-speed applications like automotive and aerospace industries.
{"url":"https://turn2engineering.com/calculators/radius-of-gyration-calculator","timestamp":"2024-11-13T08:30:55Z","content_type":"text/html","content_length":"208190","record_id":"<urn:uuid:6dd2f786-ec24-4df2-b1c1-64aa386f214e>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00351.warc.gz"}
Testing reliability of the spatial Hurst exponent method for detecting a change point The reliability of using abrupt changes in the spatial Hurst exponent for identifying temporal points of abrupt change in climate dynamics is explored. If a spatio-temporal dynamical system undergoes an abrupt change at a particular time, the time series of spatial Hurst exponent obtained from the data of any variable of the system should also show an abrupt change at that time. As expected, spatial Hurst exponents for each of the two variables of a model spatio-temporal system – a globally coupled map lattice based on the Burgers' chaotic map – showed abrupt change at the same time that a parameter of the system was changed. This method was applied for the identification of change points in climate dynamics using the NCEP/NCAR data on air temperature, pressure and relative humidity variables. Different abrupt change points in spatial Hurst exponents were detected for the data of these different variables. That suggests, for a dynamical system, change point detected using the two-dimensional detrended fluctuation analysis method on a single variable alone is insufficient to comment about the abrupt change in the system dynamics and should be based on multiple variables of the dynamical system. • In a non-linear dynamical system, the spatio-temporal images of one variable also contain information about the interaction of this variable with other coupled variables of the system. • When a parameter of the dynamical system changes abruptly, the spatial distribution of all the coupled variables should show an abrupt change simultaneously. • It is found that the 2D-DFA method successfully detected abrupt change points in each variable of GCML based on Burgers' chaotic map at the same time as a system parameter was changed. • The 2D-DFA method detected different abrupt change points in the time series of the spatial Hurst for the NCEP/NCAR data of three different variables. • This study suggests, for a dynamical system, change point detected using 2D-DFA method on a single variable alone is insufficient to comment about the abrupt change in the system dynamics and should be based on multiple variables of the dynamical system. Abrupt change in the climate refers to the change in the state of the climate that occurs in a short time and persists for a prolonged period. Abrupt climate changes have occurred in the past. These changes can be detected by means of statistical tests that can identify changes in the mean and/or some other properties of the dynamical system's variables. According to the intergovernmental panel on climate change (IPCC) report: ‘abrupt climate change detection refers to detecting a change in some definite statistical property (e.g. mean, variance and trend) of the climate’. Abrupt climate changes have been identified in the thermohaline circulation (THC), temperature and precipitation patterns (Pitman & Stouffer 2006; Narisma et al. 2007; Cramer et al. 2014). The prolonged Sahel drought and Dust Bowl are a few recent examples of abrupt climate change (Narisma et al. 2007). The earth's climate is governed by a very complex non-linear spatio-temporal system. This system may be regarded as made up of several sub-systems. The output from one sub-system becomes the input for another. If one of the sub-systems evolves slowly compared to another, the input from the slow sub-system can be treated as a parameter for the fast sub-system. Sudden changes in this parameter can cause abrupt changes in the fast sub-system. For example, the oceans change slowly compared to the terrestrial system. Abrupt surface climate changes identified from paleo records have been attributed to the sudden weakening or collapse of THC. Gradual changes in a parameter can also cause abrupt climate changes. An abrupt climate change occurs when the climate system is forced across some threshold causing a transition to a new state, at a rate which is faster than its cause (Alley et al. 2003). The internal thresholds within the terrestrial system also allow abrupt changes to occur when driven by external forcing such as gradual changes in solar insolation (Pitman & Stouffer 2006). Such threshold values are a fundamental property of a non-linear system, and with sufficiently large perturbations, irreversible changes can happen (Stocker 1999). Abrupt change can also occur if a system makes a regime transition from the neighbourhood of one equilibrium state to the neighbourhood of another equilibrium state. In such cases, external triggers for transition are not required (National Research Council Reports 2002). Such changes are determined by the internal dynamics of the system (Pitman & Stouffer 2006). Palmer (1994) introduced the Lorenz model as a conceptual model for such regime changes. The attractor reconstruction method applied to time series of rainfall data was found to be consistent with this conceptual model (Yadav et al. 2005). The existence of a low dimensional attractor suggests that the rainfall data are governed by a low dimensional chaotic dynamical system (Singh et al. 2019). In the context of earth sciences, change-point detection techniques have been widely used in several studies to detect abrupt changes in temperature and in precipitation (Kothyari & Singh 1996; Gallagher et al. 2013; Bisai et al. 2014; Chen et al. 2014; Khan et al. 2014; Zarenistanak et al. 2014; Chen et al. 2016). The detection of climate shifts requires high-quality long-term observations. It also requires a deep understanding of how natural and anthropogenic factors influence the climate (Cramer et al. 2014). Any technique for detecting climate shifts should have the capability to capture non-linear interactions between different components. It has been argued (He et al. 2016) that traditional single-dimensional (1-D) abrupt change point analysis methods like those used by Mann & Whitney (1947), Yamamoto et al. (1985) and Li et al. (1996) allow only delayed change point detection. Therefore, abrupt change points detected using these methods lose their significance for real-time applications like disaster preparedness. This leads to the need for a method which can detect an abrupt change as it occurs. On the other hand, the long-term changes in the climatic variables exhibit abrupt shifts and non-linearity (Beaulieu et al. 2012). Therefore, the inherent non-linearity of natural systems requires non-linear techniques, like detrended fluctuation analysis (DFA), in order to reveal intrinsic information about such systems. The DFA method is used to obtain the Hurst exponent. Harold Edwin Hurst had studied the hydrological properties of the Nile basin. In his study, Hurst normalised the adjusted range (R) by the sample standard deviation (σ) to obtain what is now called the rescaled adjusted range statistic. The ratio R/σ increases with some power of time R/σ ∝ n^H, where H is the Hurst exponent (Hurst et al. 1965). The Hurst exponent is a useful measure for understanding the properties of a time series without making assumptions about statistical restrictions (Tatli 2015). The Hurst exponent provides a measure for long-term memory and fractality of a time series, and it has applications in fields like meteorology and The DFA method was developed from a modified root mean square analysis of a random walk, and it has proven to be suitable for analysing a time series with long-range correlation (Peng et al. 1994). DFA is widely being used to detect the long-range correlation in time series obtained from natural and artificial systems (Kantelhardt et al. 2001; Király & Jánosi 2005; Varotsos et al. 2007). In recent years, the DFA method has been generalised from DFA to multifractal DFA (MFDFA) to explore the multifractal nature hidden in the time series. A two-dimensional DFA (2D-DFA) method proposed by Gu & Zhou (2006) has been found to be an accurate method for calculating the spatial Hurst exponent H of fractal images. We use the term ‘spatial Hurst exponent’ as a shortened expression for the term ‘Hurst index of the surface’ in Gu & Zhou (2006). The 2D-DFA method has gained a lot of interest in recent times for analysing two-dimensional images. The strength of this method is that it exploits the intrinsic behaviour of the dynamical system directly from the two-dimensional images, which preserves more information than a single time series constructed by averaging the spatio-temporal system. Spatio-temporal images contain information both about evolution in time and about spatial interactions (He et al. 2016). The 2D-DFA has been applied to analyse changes in the roughness of fracture surfaces, landscapes, clouds and temperature fields (Gu & Zhou 2006). Whereas the traditional change point detection methods suffer from delayed detection of abrupt change in climate, the 2D-DFA method solves this issue by detecting an abrupt change point almost exactly as it occurs (He et al. 2016; Liu et al. 2017 ). He et al. (2016) used the 2D-DFA method to detect abrupt changes in the dynamical system from a single variable data of an artificial system and a real-world system. It is important to understand the difference between the abrupt change detection method of He et al. (2016) and other methods. The other methods detect an abrupt change in the series of a variable averaged over a spatial surface. The method of He et al. (2016) detects an abrupt temporal change in the spatial distribution of a variable. He et al. (2016) have argued that spatio-temporal images contain information about evolution in time as well as spatial interactions. We argue that in a non-linear dynamical system, the spatio-temporal images of one variable also contain information about the interaction of this variable with other coupled variables of the system. When a parameter of the dynamical system changes abruptly, the spatial distribution of all the coupled variables should show an abrupt change simultaneously. In support of their method, He et al. (2016) successfully detected abrupt change points introduced by changing a parameter in a model coupled map lattice based on the Henon Chaotic Map. This shows that the abrupt change addressed by them belonged to the category of the changed dynamical system due to parameter changes and not changes in a variable of the same dynamical system. We explore the reliability of the 2D-DFA method for identifying this type of abrupt change. This study is based on the premise that an abrupt change in the dynamical system should lead to an abrupt change in the spatial Hurst exponent, not for just one variable of a non-linear dynamical system but for all the variables at the same time. The reliability of using abrupt changes in the spatial Hurst exponent by the 2D-DFA method for identifying temporal points of abrupt change in climate dynamics is explored. This is done first on the artificial data. We replaced the Henon map used in He et al. (2016) with the Burgers map, which has a more complex relationship between the X and the Y variables than for the Henon map. We expected this to provide a more stringent test for our premise that the 2D-DFA method applied to the data from either variable should show abrupt changes at the same time. We found that the 2D-DFA method successfully detected abrupt change points using either variable of this model system at the same time as a system parameter was changed. The above hypothesis was tested on the real-world meteorological data by applying the 2D-DFA method for the identification of change points in climate dynamics using the NCEP/NCAR data on air temperature, pressure and relative humidity variables. It is expected that if the dynamical system changes abruptly, data on all the variables of the dynamical system should give a change point at the same time. Different abrupt change points in spatial Hurst exponents were detected for the NCEP/NCAR data of these three meteorological variables. Before applying the 2D-DFA method for detecting abrupt dynamical change on real data, it was tested on artificial data generated by a coupled map lattice as a discretised model of a spatio-temporal system. This model is based on the Burgers' chaotic map (Whitehead & Macdonald 1984; Burgers 1995). The Burgers' mapping is a discretisation of a pair of coupled differential equations which were used by Burgers to illustrate the relevance of the concept of bifurcation to the study of hydrodynamic flows. These equations are similar to the Lorenz model (Elabbasy et al. 2007) which has been widely used as a conceptual model to study weather and climate (Palmer 1994; Mittal et al. 2015; Singh et al. 2015). Whitehead & Macdonald (1984), in their study, stated that ‘The Burgers map is chaotic in the way that the Lorenz model is, namely the iterates orbit one unstable fixed point until a flip occurs whereupon they orbit the other such point.’ The Burgers map has a more complex relationship between the X and the Y variables than the Henon map, which was previously used in the study by He et al. (2016). Therefore, in the present study, a globally coupled map lattice (GCML) based on the Burgers' chaotic map was used as a model to test our basic premise that with an abrupt change (here, by introducing sudden change in a parameter) in the spatio-temporal dynamical system, it is expected that the time series of spatial Hurst exponent obtained from the data of any variable of the system should also show an abrupt change at the same The GCML was formed as described in the literature ( Vasconcelos et al. 2006 ). A lattice point is defined by an integer pair . The variables at the lattice point at time step are denoted by and . If these variables evolved without any coupling, we would have: However, in a coupled map lattice, the evolution at a lattice point is modified by the values of variables at neighbouring points. The lattice point has eight nearest neighbours: and . In a coupled map lattice, the value of a variable at a lattice point is reduced by a factor and gets a contribution from each of its nearest neighbours weighted by , so that:with a similar equation for the Y variable. The parameter denotes the coupling strength. The contribution from more distant points is weighted as a negative power of the distance, so that:where and , n[x] = 150 and n[y] = 100 are the number of grid points in x- and y-directions. The parameter A governs the range of the coupling. The larger the value of A, the faster the decay in the contribution from distant points. The parameters for the coupled map lattice were chosen from the literature, i.e., A= 8, = 0.8 (He et al. 2016) and the values a= 0.75 and b= 1.75 (Sprott 2003). Values on the boundary were held constant; and if belongs to the boundary (Vasconcelos et al. 2006; He et al. 2016). At any time step n, the size of the matrices X and Y is 150 × 100. The initial values of these matrices were chosen randomly in the range from −0.1 to 0.1. After the first 200 transient points, changes in the Hurst exponent H became stable. Therefore, the first 200 points were discarded. The 2D-DFA method described below was employed for computing, as a function of time, the spatial Hurst exponents for this model as well as for the NCEP/NCAR reanalysis surface daily data, i.e., air temperature, pressure and relative humidity variables for 1950–2017 in the region (60°S–60°N, 0–357.5°E) having 144 × 49 grid points (Kalnay et al. 1996). DFA is widely employed for estimating long-range correlations of a time series (e.g., meteorological time series, time series in economics and heart rate time series). The DFA method was introduced originally to investigate the long-range dependence in a DNA sequence (Peng et al. 1994). The DFA was generalised by Gu & Zhou (2006) for exploring long-range correlation properties of a two-dimensional surface. The 2D-DFA method is summarised below: Step 1. Consider a two-dimensional matrix X(i, j), representing the value of the variable X at the lattice point (i, j), where i= 1, 2, …, M and j= 1, 2, …, N. The matrix is partitioned into M[s]×N [s] disjoint square segments of size , where M[s]= [M/s] and N[s]= [N/s]. The segments can be denoted by such that: Step 5. By varying the value of s in the range from 6 to , we can determine the scaling relation between detrended fluctuation F(s) and size scale s, which is F(s) ∼ s^2H, where H is the Hurst exponent of the surface. The Hurst exponent H = 0.5 indicates that the values of the variable at different lattice points are uncorrelated. 0.5 <H< 1 indicates that the values are correlated, whereas 0 <H< 0.5 indicates anti-correlated values. If there is an abrupt dynamic change in a spatio-temporal system, the time series of spatial Hurst exponents will exhibit a non-stationary change. However, if there is no abrupt dynamic change in the dynamic system, changes in the Hurst exponents will be stationary. Change point detection in time series of the Hurst exponents H In this study, in order to detect an abrupt change in the time series of 2D spatial Hurst exponents, the moving -test (MTT; Li & Shi 1993 Li et al. 1996 ) is used. This test detects abrupt change by determining whether there is a significant difference between the average values of two subseries at a significance level of = 0.001, i.e., 99.9%. For a given time series , let and be two subseries before and after the datum point. Then the value is defined by the following equation:where and are the means, and the standard deviations, and and are the sizes of the two subseries. Two subseries of the same length before and after the datum point were considered. Thus, the two series lie in a window to the left and right of the datum point i, which is at the centre of the window. The t values were calculated using Equation (11), as the window slides forward and the datum point is moved continuously. If , where indicates the significance level and the series had an abrupt change at the datum point i. A flow diagram of the methodology used in this study is given in Figure 1. The 2D-DFA method was applied on both X and Y variables of the GCML based on the Burgers' chaotic map to obtain the spatial Hurst exponent at different time steps n. The power-law scaling between the detrended function F(s) obtained from Equation (10) and scale s for the spatial images of GCML at time step n= 201 is shown in Figure 2 and the H is equal to 0.98 for both images, which indicates the presence of strong interaction in the coupling model. The mutual coupling interaction between the lattice points is the main cause of such strong correlation. The computed Hurst exponents H as a function of n are shown in Figure 3. It is evident that the Hurst exponents H obtained from the X and the Y variables exhibit similar behaviour, i.e., the mean value of H is independent of n, when there is no abrupt change. We applied two abrupt change scenarios for the GCML. In scenario 1, the coupling strength was changed from 0.8 to 0.4 at the time step n= 2,001, and from 0.4 to 0.85 at the time step n= 4,001. The time series of the spatial Hurst exponents for the two variables are shown in Figure 4. Both the time series exhibit an abrupt change precisely at the time steps n = 2,001 and n= 4,001. In scenario 2, the parameter b was changed from 1.75 to 1.80 at the time step n = 2,001 and at time step n= 4,001, the value of b was restored to its previous value of 1.75, whereas the parameter a was changed from 0.75 to 0.80. Figure 5 shows that for this scenario also the spatial Hurst exponent time series for either of the two variables exhibit abrupt changes at precisely the same time Figure 6 shows that for both the scenarios using either of the variables X or Y, the MTT method correctly detects the time steps at which the system dynamics changes (Table 1). The contour plots of spatial images for X and Y variables of GCML formed using Burger's chaotic map at time steps n = 201, 500, 2,001 and 4,001 for scenario 1 are shown in Figures 7 and 8. The figure shows that the interaction between lattice points is found to be strengthened after the transient state. Similar contour images are also found for both variables of GCML in the case of scenario 2 (figure not Table 1 Variable . Time instant . Time instant . Time instant . Air temperature 1973–1974 1989–1990 1992–1993 Pressure 1975–1976 1985–1986 1990–1991 and 1998–1999 Relative humidity 1966–1967 1985–1986 2000–2001 Variable . Time instant . Time instant . Time instant . Air temperature 1973–1974 1989–1990 1992–1993 Pressure 1975–1976 1985–1986 1990–1991 and 1998–1999 Relative humidity 1966–1967 1985–1986 2000–2001 We have also tested the abrupt change point detection in the GCML model for different values of coupling range parameter A (figure not shown). The GCML model (artificial data) explicitly introduces impact from distant points. As the parameter ‘A’ of the model is reduced, the range of lattice points that influence the evolution at any lattice point increases. Thus, feedback could have some impact on the model. We have checked and found that the abrupt change in a parameter is detected identically by both variables even for very small values of A. We next applied the 2D-DFA method to real-world data. The 2D-DFA method was used for the detection of abrupt change points from NCEP air temperature, pressure and relative humidity data. Figure 9 shows daily average air temperature anomaly data in the region (60°S–60°N, 180°W–180°E) for January 1, 1950. The log F(s) vs. log s plot obtained from the daily average air temperature anomaly for January 1, 1950 is shown in Figure 10. The value of H for daily anomaly of air temperature is 0.97 (Figure 10), while the values of H for the same period of daily average anomaly of pressure and relative humidity are 1.01 and 0.85, respectively (figure not shown). The time series of Hurst exponents H calculated using 2D-DFA for the three NCEP variables data are shown in Figure 11. The Hurst exponents of H > 1.0 (Figure 11) indicate that a relatively strong interaction exists for the daily average temperature and pressure in different regions. In other words, the variation of the daily average temperature and pressure in the spatial domain is not random (He et al. 2016). The t-statistics for detecting an abrupt change in the dynamics using MTT on spatial Hurst exponents obtained from the data on the three NCEP variables are shown in Figure 12. The detected change points are listed in Table 1. It is found that the change points detected using MTT on mean of H for three different NCEP/NCAR variables are not identical (Table 1). In this study, we have tested our hypothesis, i.e., if a spatio-temporal dynamical system undergoes an abrupt change, it is expected that the time series of spatial Hurst exponents obtained for each of the variables of the dynamical system will also undergo an abrupt change at the same time. It is found that the Hurst exponents time series calculated using 2D-DFA method successfully detected abrupt changes at the same time in each X & Y variable of GCML based on the Burgers' chaotic. The result from the artificial data supported or hypothesis for both scenarios. In contrast, the 2D-DFA method when applied to NCEP spatio-temporal data of three different variables – air temperature, pressure and humidity – detected different change points. The climate variables considered for the identification of abrupt change in the dynamical system are closely related and measured in the same region. These variables are governed by coupled partial differential equations. When a parameter of the system changes, it is difficult to understand why different variables exhibited abrupt change at different times. The interaction between the meteorological elements and the feedback relationship are non-simultaneous, so it may be the reason that abrupt changes detected from the different elements do not coincide. By detecting an abrupt change in a single variable, one cannot infer that the dynamical system changed abruptly, since a single variable can show abrupt changes without the dynamical system changing abruptly (Yadav et al. 2005). Therefore, the conclusion that the whole dynamical system changed at identified change point, by using MTT on time series of spatial Hurst exponents calculated via the 2D-DFA method, should be based on more than one meteorological variable of the dynamical system. On the other hand, the conclusion based on identified change point from a single variable that abrupt change occurred in the whole dynamical system cannot be considered a reliable indicator of abrupt dynamical system change. U.P.S. is thankful to CSIR, Government of India for providing senior research fellowship letter no. 09/001/0399/2016-EMR-I. We acknowledge NCEP/NCAR for producing and making their reanalysis data freely available. © 2021 The Authors This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (
{"url":"https://iwaponline.com/jwcc/article/12/8/3661/84449/Testing-reliability-of-the-spatial-Hurst-exponent","timestamp":"2024-11-14T20:16:47Z","content_type":"text/html","content_length":"379336","record_id":"<urn:uuid:d740cfa1-82ec-468e-a333-cd821dd8ae33>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00308.warc.gz"}
Ekam Kasoti Maths Std 6 Dec 2020 Solution Maths Science Corner Standard 6 Ekam Kasoti Solution December 2020 On maths science corner you can now download new NCERT Gujarati Medium Textbook Standard 6, 7 and 8 Maths as well as science material in pdf form for your easy reference. On Maths Science Corner you will get all the printable study material of Maths and Science Including answers of prayatn karo, Swadhyay, Chapter Notes, Unit tests, Online Quiz etc.. This material is very helpful for preparing Competitive exam like Tet 1, Tet 2, Htat, tat for secondary and Higher secondary, GPSC etc.. Today Maths Science Corner is giving you the Solution of Mathematics Ekam Kasoti Solution of Ekam Kasoti (Periodic Assessment Test) (SSA PAT) held on July 2020 paper for your easy reference. SSA PAT is the new concept by Department of Education, Government of Gujarat to know the actual Learning Outcome by the student. For this Ekam Kasoti (Periodic Assessment Test) (SSA PAT) is organized on every Saturday for various Classes i.e. Class 3 to 8 for all subjects. In which Paper is designed in such a way that it will cover 3 to 4 Learning Outcomes. So, it is easier to access the student. Here is the list of Learning Outcomes : Learning Outcome 1 : The learner understands concept of tenth and hundredth. Learning Outcome 2 : The learner compares decimal fractions. Learning Outcome 3 : The learner converts fractions into decimals and decimals into fractions. Learning Outcome 4 : The learner calculates problems based of decimals. Learning Outcome 5 : The learner calculates practical problems of decimals in day to day life. Mathematics Standard 6 Ekam Kasoti December 2020 Solution You can get video solution of this Ekam Kasoti of December 2020 from YouTube by clicking the following link : You can get Std 6 Material from here. You can get Std 7 Material from here. You can get Std 8 Material from here. You can get Std 10 Material from here. For all Daily Quiz Click Here. No comments:
{"url":"https://www.mathssciencecorner.com/2021/01/ekam-kasoti-maths-std-6-dec-2020.html","timestamp":"2024-11-12T16:58:51Z","content_type":"application/xhtml+xml","content_length":"141714","record_id":"<urn:uuid:6c08978d-2469-4fb9-8907-2745fd1e4cac>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00057.warc.gz"}
!!! Dynals v2.x evaluation version is available !!! for free download Software for Particle Size Distribution Analysis in Photon Correlation Spectroscopy This document was written by Dr. Alexander A Goldin (Alango Ltd.) About this document This document describes the sophisticated DYNALS software and its underlying advanced algorithms for Distribution Analysis of Photon Correlation Spectroscopy (PCS). Its purpose is to help DYNALS users understand and "feel" the problem of Particle Size Distribution Analysis in PCS. It also provides necessary information regarding the algorithms that have been developed by the author over a number of years. The author is an applied mathematician, not an expert in Photon Correlation Spectroscopy. Therefore, rather than present information about PCS, this document outlines the mathematical aspects of the "ill-posed problem" at hand and discusses a possible means to cure it. An overview of the algorithms implemented in DYNALS is provided, along with a description of how the algorithm's parameters may be changed through the DYNALS graphical interface. While using DYNALS, you will generally not need to alter computational parameters. However, this document includes recommendations regarding how and when these parameters may be altered to extract the maximum amount of information from your experimental data. This document is neither a scientific paper nor a software manual. It is not rigid in the manner of a scientific paper and contains many practical tips and advices based on the author's personal experience. Unlike a software manual, it does not provide detailed step-by-step explanations of the DYNALS procedures, menu items, data formats, etc. This kind of information may generally be found in the DYNALS on-line help. This document was last updated in March 2002 Table of contents: 2.1 Loading data 2.2 Setting the physical parameters (optional) 2.3 Setting the processing parameters (optional) 2.4 Getting the results 2.5 Resolution slider value and other processing parameters 3.1 Discretization of the problem and the maximal reconstruction range 3.2 Numerical analysis of the problem 3.3 Traditional approaches to the problem 3.4 Overview of the algorithms implemented in DYNALS 3.5 g(2)(t) to g(1)(t) conversion or "square root" trouble Appendix A. Practicing DYNALS A.1 Understanding the term "ill-posed problem" A.2 Understanding the resolution and its dependencies Appendix B. Tagged Data Format (TDF) 1. Introduction to the Problem of Particle Size Distribution Analysis in PCS Photon Correlation Spectroscopy is an indirect measurement method. The measured quantity (the auto-correlation of fluctuations of the scattered light intensity) must undergo further processing to extract the quantity of interest (e.g. the particle size distribution or the diffusion coefficient distribution). In the simple case of a mono-dispersed solution (spherical particles of the same size) and when some other physical conditions are met, the normalized intensity correlation function is described by an exponential plus a unit constant: The coefficient G is related to the physical properties of the particles and the experimental conditions by the following expressions: G is the decay rate q is the scattering vector D[T] is the translation diffusion coefficient n is the refraction index l is the scattered light wave length Q is the scattering angle T is the absolute temperature of the scattering solution K is Boltzman's constant h is the solvent viscosity R[H] is the hydrodynamic radius. In case of a poly-dispersed solution where the particles are not identical, the auto-correlation function of the scattered light intensity is described by the following pair of equations: Disregarding the influence of inevitable experimental noise x(t) (discussed later), equation (2) allows us to compute the first order field correlation function g^(1)(t) from the second order intensity auto-correlation function g^(2)(t) accumulated by the correlator during the experiment. DYNALS' treatment of the integral equation (3) will be the main subject of concern in the remainder of this document. This integral equation actually describes the relationship between the experimental data obtained during the experiment and the physical properties of the solution under study. If the decay rate distribution R(G) is known, then the Diffusion Coefficient Distribution or Particle Size Distribution can be easily computed using the equations (1). Sometimes the physical model may be more complex (e.g. if the particles are not spherical), in which case equations (1) must be replaced with something more complicated. The integral equation (3) forms the basis for data processing in Photon Correlation Spectroscopy. This kind of equation with respect to R(G) is known as a "Fredholm integral equation of the first kind" and its solution is known to be an "ill-posed mathematical problem." Practically, this means that if the function g^(1)(t) (experimental data) is known with even a minute error, the exact solution of the problem (the distribution R(G)) may not exist, or may be altogether different from the actual solution. On the other hand, the approximate solution is never unique. Within the experimental data error there exist an infinite number of entirely different distributions that fit the experimental data equally well. This does not mean that there is no reason to improve the accuracy of the experiment. The experimental error and the structure of the correlator (linear, logarithmic, etc.) define the class of distributions which may be used as candidates for the solution. The better the data, the smaller the number of distributions that fall within that class, thereby leaving fewer alternatives for the algorithm used to solve the problem. Another method of reducing the number of possible solutions is to look only for non-negative solutions. Since a distribution can never be negative (i.e. the number of particles cannot be negative), this constraint is always valid and its use is always justified. Looking for non-negative solutions dramatically reduces the class of possible solutions and may sometimes even cause the problem to be stable. Generally, however, this is not sufficient. The class of non-negative distributions which fit the experimental data is still too wide to pick one of them at random and assume that it is close enough to the actual distribution. An additional strategy must be used to choose one particular distribution which will satisfy our intuitive notion about the physical measurements. Thus we must have some criterion that will allow us to express our preferences. Generally, we would like to pick "the smoothest" distribution from the class of possible candidates. The difference between the various algorithms and their implementations is based on how this additional criterion is formulated and realized. In DYNALS, "the smoothest" distribution is quantified as the distribution with the smallest amount of energy, or for mathematicians, as the distribution of the minimal L[2] norm. The different approaches to the problem are discussed in the ensuing sections of this document. If you are not familiar with the problem, now is a good time to try out the DYNALS program and the included synthetic data sets. This will give you a better feel for the problem and more importantly, a deeper understanding of DYNALS and what you can and cannot expect from the program. It is very important to play with synthetic data as opposed to real data. In this way, you become familiar with the kind of distribution achieved in the best possible scenario where no noise is present in the data. You can observe how a higher noise level broadens the class of possible solutions and how it influences the resolution achieved. This will also help you attain the practical skills necessary to work with DYNALS in your daily work. Initially, you are advised to follow the instructions described in "Understanding the term "ill-posed problem" in Appendix A. 2. Using DYNALS This section explains how simple it is to work with DYNALS. DYNALS processes your experimental data and automatically gives you the best possible distribution. Just point and click; the distribution is computed, the results are analyzed, the analysis is stored, and all important information is printed. The DYNALS on-line help, along with your own intuition, should be sufficient to help you begin the processing and enable you to obtain initial results. This document will help you interpret the results and make sure that information from the experimental data has been properly utilized. DYNALS is under continuous development. Therefore, the operational instructions provided in this section may not be up-to-date. These instructions should provide you with tips and hints regarding the program's possibilities. Please refer to the on-line help for the exact names of menu items, entry fields and other details. 2.1 Loading Data Loading data is generally the first thing you do after you have started DYNALS. Simply select File > Load data from the menu bar or click the Load button on the Status Bar, and select the desired file. You should see the data displayed in the upper left graphical window. You can set the Y axis to use either the linear or logarithmic scale, and the X axis to correspond to channel numbers or time delay. At the present time, DYNALS supports two data formats: • Files with the extension .DAT have a very simple ASCII format. This file contains a normalized second order correlation function measured by the correlator. Every line of the file corresponds to one particular channel and consists of two numbers: the corresponding time delay in seconds and the value of the correlation function. Since this file does not include the physical parameters of the experiment, relevant parameters that are different from the ones used for the previous processing, should be entered manually. Examples of such files are installed during the DYNALS installation procedure. • Files with the extension .TDF (Tagged Data Format) are ASCII files whose structure is similar to the structure of Windows .INI files. These files may contain different information including the correlation function itself and the parameters of the experiment. Each category of information is stored in a specific section of the file and is provided with its own tag. Therefore, reading the procedure distinguishes between the different types of information and deciphers them correctly. The structure of .TDF files, (i.e. the section and tag names) recognized by DYNALS are described in Appendix B. Examples of TDF files are installed during the DYNALS installation procedure. Other well-defined formats may be supported in the future. Please contact the authors if you have any suggestions. 2.2 Setting the Physical Parameters (optional) To obtain correct values along the axis of the diffusion coefficient or the hydrodynamic radius axis, you should set suitable values for the physical parameters of the experiment. If the physical parameters are the same as they were for the previous experiment, or they were provided with the data (e.g. in a TDF file), you may skip this section. To set the correct values, select Settings > Processing parameters from the menu bar and enter the desired values. In the current version of DYNALS, the processing parameters are located in the same dialog as the physical parameters. Set the desired processing parameters as described in the next paragraph, or just click the OK button to begin the computations. 2.3 Setting the Processing Parameters (optional) In general, the processing parameters can be set once and left as is. There is no single parameter which defines the stabilization of the problem, rather it is computed automatically and may be altered afterwards without entering the dialog. The parameters set before initiating the processing, define the following: • The kind of distribution you wish to obtain. Available options are: decay times, decay rates, diffusion coefficients and hydrodynamic radii. • The range of channels for which the fitting takes place. Sometimes you may know in advance that some of the correlator channels contain incorrect information which will corrupt the results and prevent a good fit. If such channels are at the beginning or at the end of the correlation function, you can exclude them from consideration by entering the numbers of the first and the last channels of the range you wish to process. Entering --1 as the last channel means "until the last channel". If the number of intervals is high enough, it will not define the actual resolution between the distribution peaks, nor will it define the stabilization of the problem. The resolution is defined automatically and is dependent upon the noise, the correlator structure and the distribution itself. The problem of resolution is discussed later in this document. It is recommended that the number of intervals be set as greater than 50 so the distribution looks continuous and does not impose constraints on the optimization procedure. Using this parameter you can also emulate the method of histogram [1]. Choose a small number of intervals (10-15) and set the maximal resolution after the processing is finished. This corresponds to the situation when the stabilization is accomplished by directly reducing the problem dimension and not by using a regularization procedure. You can easily see that this is an inferior approach. This parameter defines the interval on the axis of the physical units where the distribution is computed. If the beginning of the interval (value in the From field ) is larger than the end of the interval (value in the To field), then the default values are used. These values are large enough to restore both the slowest and the fastest exponential components that can be restored. On the one hand exponentials which decay too fast cannot be detected since they vanish before the first channel of the correlator. On the other hand exponentials which decay very slowly appear as a constant and cannot be distinguished from each other. Thus there exists a finite range defined by the structure of the correlator and the sampling time, in which the distribution may be restored. Unless you want to experiment or make comparisons with other software, we recommend using the default values. Please refer to the DYNALS on-line help for more information on the processing parameters. 2.4 Getting Results If you change one of the physical or processing parameters, the distribution is automatically computed as soon as you leave the dialog. If you just loaded new data, click the Start button to activate computations. No matter how the processing was initiated, the progress indicator on the status bar shows how much work has already been done. After the processing is completed, the DYNALS window appears similar to the one at the beginning of this document. In the upper left part of the screen you will see the experimental data together with the fitted curve corresponding to the computed distribution. The lower left part of the screen shows the residual curve, which is the difference between the experimental data and the fitted curve. The residual curve is very important for estimating the quality of fit. For a good fit the residual curve should look random. If it does not look random either the data is 'underfitted', or there is a systematic error present in the data. In both cases the results are probably invalid. The computed distribution is displayed in the upper right part of the screen and may be displayed as a graph or table depending on the position of the selection handle at the right. The distribution is computed using a logarithmic scale along the horizontal axis. The height of each column indicates the relative distribution area under this column which, due to the logarithmic scale, is not the same as its amplitude. The area under the entire histogram is normalized so that it is equal to one. The lower right corner displays an analysis of the computed distribution. This window presents parameters of the entire distribution, its peaks and subpeaks. Note that the mean and the standard deviation of the entire distribution correspond to the values being computed and the method of cumulants. However, these values are generally computed more accurately by the distribution analysis. 2.5 Resolution Slider Value and Other Processing Parameters When the distribution is computed, the resolution slider in the upper right corner of the DYNALS window is enabled and the OptRes button on the status bar remains disabled. This means that the distribution displayed in the upper right window is computed using the optimal resolution. The terms 'resolution', 'optimal resolution' and 'maximal resolution' are important to your understanding of DYNALS and are, therefore, discussed here in greater detail. You may use DYNALS without actually knowing what these terms mean. If this is the case, use the default distribution domain parameters and the number of distribution intervals larger than 50. If the residual values (bottom left window) look random, don't try to 'improve' the distribution by altering the resolution slider -- the distribution computed with the optimal resolution contains all the information that can be extracted from the experimental data. If you have absolutely reliable a priori information that your distribution consists of a small number (1 - 4) of very narrow peaks, you may try to set the maximum possible resolution by shifting the resolution slider to its upper position. This may dramatically improve the resolution due to the impact of the additional information you provided. Note, though, that if your information is incorrect, you will obtain a sharp but erroneous distribution. If the residual values do not look random then either you have a problem with your data (most probable), or the algorithm made a faulty estimation of the experimental noise. The following paragraphs explain how to verify this aspect of the distribution. As mentioned above, the problem of computing the Particle Size Distribution from PCS data (or any related distribution) is an unstable, "ill-posed mathematical problem." The solution is not unique and generally should not be the one that provides the best possible fit for the experimental data in accordance with equation (2). Instead the solution should provide a reasonable fit which is stable enough. 'Reasonable fit' usually means 'up to the accuracy of experimental data. 'Stable enough means 'the one that does not change drastically with small changes in the experimental data. For example, we would expect to get similar distributions from several experiments made on the same sample although the data are slightly different due to inevitable experimental noise. When the random error in the experimental data increases, stability is improved by choosing a smoother distribution. In other words, the resolution decreases as the error increases. The stability is always improved by reducing the number of degrees of freedom in the solution. The simplest way to reduce the number of degrees of freedom is to divide the distribution domain into a smaller number of intervals. For example, if you set the number of logarithmically spaced intervals as 10, this, together with non-negativity constraints on the solution, makes the problem mathematically stable. This kind of stabilization, sometimes called "naive" [2], constitutes the essence of the method of histogram [1]. Unfortunately, the approximation of the distribution of interest with only 10-12 intervals is too irregular. It thus prohibits the reconstruction of details which can be attained using more sophisticated algorithms. You may easily check this claim by setting the number of distribution intervals in DYNALS to 10 and shifting the resolution slider to its maximal value, thus effectively performing the method of histogram. If the number of distribution intervals is large enough (recommended value is 50 or larger), the resolution slider position shows the trade-off between the fit of experimental data and the maximum attainable resolution. It should be understood that resolution along the G axis is not constant. It varies for different values of G and depends on the accuracy of experimental data, the structure of the correlator and on the distribution itself. For example, with good authentic data, it is possible to resolve two delta functions d(G- G[1]), d(G- G[2]) (two peaks) with G[2] / G[1] =3 when they are in optimal position on the G axis (about the middle of the default distribution domain)^ Footnote 2 . The same resolution (3:1) is not possible if delta functions are located closer to the DYNALS default domain ends or if additional components are present in the distribution. You may learn more about the resolution and its dependence on the above factors by practicing the exercises " Understanding the resolution and its dependencies" in Appendix A. If the resolution slider value is too low, the distribution will not have enough detail to provide a good fit for the experimental data. If the value of the resolution slider is too high, the data will be "overfitted", where the details of the distribution may not be reliable due to the noise in the experimental data. The optimum value is in between, so that the resolution slider value is the smallest one that provides a good fit. The stabilization algorithm implemented in DYNALS automatically finds and sets this optimum value. In general, you will not need to alter the optimal resolution slider value once it has been set after new experimental data has been processed for the first time. Whether your data are real or simulated with a reasonable noise, you will always get a value between 0 and 1 for resolution slider. There are three cases where you might want to alter the optimal resolution value: • The first case, as described above, occurs when you know a priori that the distribution of interest consists of a small number (1-4) of very narrow peaks. In this case you can shift the resolution slider to the maximum value and get the accurate positions and amplitudes of these peaks. Utilizing the additional information, you may obtain a resolution that is much better than the optimal one. You should exercise caution in this case, since the computed distribution will always consist of narrow peaks even if the true distribution is continuous and wide. This is the consequence of using incorrect information. You can see these effects by using the maximal resolution while doing the exercises in Appendix A. • The second case is where you doubt that the found value is really optimal. The optimal value should be the smallest one that makes the residual graph to look random. If it looks random, try to reduce the value until the shape of the distribution changes significantly. If you see that the distribution is noticeably different but the residuals and their standard deviation are still the same, then the algorithm has underestimated the error. (This case is rare, but theoretically possible.) • The third case is where the opposite situation occurs, and there is a significant, not random (systematic) component in the residuals. In that case, try to increase the resolution slider value. If it cures the residuals, the algorithm overestimated the real error. If it does not help, the experimental data does contain some systematic error. Distributions obtained from such data may be very different from the actual ones since the algorithm has no way to differentiate between the systematic error and the data. This document has already discussed two of the three parameters that influence the shape of the computed distribution. The last parameter is the domain where the distribution is computed. The distribution domain is defined by its lowest and highest values, and the distribution is considered to be zero outside the specified interval. This, of course, may not be the real case, but our ability to reconstruct the distribution outside certain limits is very limited. Exponentials that correspond to small particles and decay rapidly, may vanish too quickly to leave any trace even in the first correlator channel. Consequently, they cannot be detected and in accordance with the parsimony principle should be considered non-existent. On the other hand, exponentials that correspond to relatively large particles, decay so slowly that they cannot be distinguished from a constant. This constant will appear as a wide peak in the corresponding part of the distribution. No resolution or localization is possible in that area, only the detection of the distribution's presence^ Footnote 1 . The default distribution domain used in DYNALS is wide enough to accommodate the entire range possible for reliable reconstruction. If you think that it is better to use a wider domain, feel free to enlarge it. A wider range will not be detrimental, provided that the number of intervals is large enough to allow DYNALS to reconstruct the details of the distribution. The author does not recommend reducing the domain. It cannot improve the resolution, and may distort the distribution shape if significant portion of the distribution is left out. 3. More Mathematics (optional section) This section is optional and provides a deeper understanding of mathematical difficulties related to extracting the Particle Size Distribution or related distribution from PCS experimental data. This understanding is not essential in order to work with DYNALS. This chapter is designed to free the user from the mathematics and simultaneously provide enough flexibility to check and control the computations where there may be some doubt regarding the computed results; it is for those cautious users who want a thorough understanding of this rather complicated problem of applied mathematics and the relationship between its mathematical properties and the results obtained using DYNALS or other software. The author wishes to state that when using DYNALS, understanding the math will not improve the results, just as a thorough understanding of the combustion engine does not generally help you drive your car. Both DYNALS and a modern car are sophisticated enough to help you reach your intended destination without requiring a deep understanding of what is going on inside. Let us begin with the solution to the integral equation (3). The document will present a numerical analysis of the problem (3) in the discretized form and explain why this problem stands as one of the most severe inverse problems in applied mathematics. This section investigates the corresponding system of linear equations and how the properties of the matrix are reflected in the solution. The traditional approaches to the problem are discussed, both the ones which are specific to the equation (3) and the ones having the equation (3) as a particular case. An introduction to the algorithms implemented in DYNALS is then presented, in order to give you an understanding of why these algorithms are superior to the traditional ones. The section also discusses possible improvements which may be implemented in future versions of DYNALS. At the end of this section, the author briefly reviews the complications caused by using the second order correlation function g^(2)(t) accumulated by the correlator to compute the field correlation function g^(1)(t) used in the equation (3). 3.1 Discretization of the Problem and the Maximal Reconstruction Range This section abandons the continuous representation of the problem defined by equation (3) and begins working with more practical, discrete representations. First of all, the correlation function g^(1)(t) is known only on a limited set of points {t[i]}, i=1,M , where M is the number of correlator channels. (The problem of computing g^(1)(t) from g^(2)(t) is discussed later). The set is defined by the particular correlator structure in use. This structure may or may not be uniform. This discretization of equation (3) leads us to the following system of equations: where x[i] are the noise values, and the integration boundaries of equation (3) are replaced with [G[min],G[max]]. The replacement of the integration boundaries reflects the loss of information caused by the finite correlation time ranges covered by the correlator channels. Indeed, the small values of G correspond to slow decays of the correlation function so that G[min] can always be chosen small enough, such that if G< G[min] then exp(-Gt[i]) is indistinguishable from a straight line (and hence from each other) due to the errors x[i] . On the other hand, large values of G correspond to fast decays of the correlation function. Due to a finite value of t[1] , there always exists a value G[max] such that if G> G[max] then exp(-Gt[i]) falls below the noise level before the first channel. The idea is illustrated by the following figure: Figure 1. Blue solid line - exp(-Gt[i]) for G< Gmin , blue circles - its measured values; red solid line - exp(-Gt[i]) for G> Gmax , red circles - its measured values It is clear that while using only the measured values {g^(1)(t[i])}, i=1,10 it is impossible to distinguish the blue curve from the constant and the red curve from zero. In DYNALS, the default range for G is set as [G[min]=0.1/t[M] , G[max]=5/t[1]] so that it is wide enough to cover the maximal possible range for even the smallest possible noise in the correlator channels. Having the set of equations (4), the next step is to find a parametric representation for P(G) that is both non-restrictive and allows the use of an effective numerical algorithm. Various representations have been proposed which lead to different reconstruction methods. For example, using the first cumulants of P(G) leads to the method of cumulants [3,4]. Using a parametric distribution of particular shape (or their superposition) with unknown parameters leads to a nonlinear fitting procedure. Examples include several delta functions for multi-exponential fit, Gaussians [5], Pirson distribution [6], log normal distribution [7] and some others. Specifying the distribution as a number of a priori spaced delta functions leads to the exponential sampling method [5] when the number of delta functions is small, or to regularization methods [2] like the one used in CONTIN [8,9] when this number is large. When the distribution is approximated by a relatively small number of histogram columns (10-12) with specified boundaries, the method is called "the method of histogram" [10,11]. A short survey of the methods appears in Section 3.3. In DYNALS, as in the method of histograms, the distribution in the range [G[min], G[max]] is also approximated by a number of rectangular columns. The columns are adjacent to each other, where the column boundaries are spaced using the logarithmic scale substantiated by the behavior of the eigen functions of equation (3) [12,13]. Denoting the boundaries of N columns as { G[i] } where i=0..N, G [o]=G[min] , G[N]=G[max] leads to the system of M linear equations with N unknown coefficients {x[i]} or in a compact form: The difference between the discretization in DYNALS and in the method of histogram is in the number of histogram columns used to represent the distribution. In the method of histogram, the stabilization of the problem is achieved by reducing the number of columns N to about 10-15. Using a small number of columns to approximate the distribution of interest leads to a very coarse distribution with pure resolution and appearance. In DYNALS, the stabilization is achieved by using the developed, proprietary method for solving the above system of equations while constraining the solution to remain non-negative. Additionally, special scaling for columns of the system matrix A, together with the solution of the minimal norm ||x|| leads to the computation of the distribution P (G) having the smallest L[2] norm (or energy in physical terms) that is: Numerous computer experiments have shown this property to be very useful for proper reconstruction of the distribution of interest. Another important factor that must be taken into account is that errors in the elements of the vector g are unequal (see Section 3.5). Numerically it is accomplished by replacing the original system (6) with the weighted system A^wx=g^w , where A^w=WA, g^w=Wg and W is the diagonal matrix with elements w[ii] =1/s[i] inversely proportional to the standard deviation of the corresponding values g [i]. To keep the notations simple, the superscripts denoting weighting will be omitted in the following sections. However, it should be noted that the weighted system of linear equations is always used in the actual computations made by DYNALS. 3.2 Numerical Analysis of the Problem Now that the system of linear equations (6) has been associated with the original integral equation (3), we will analyze it in order to understand why this problem is thought to be one of the toughest problems in applied mathematics. We will use Singular Value Decomposition (SVD) as the main analyzing tool. The singular value decomposition of a rectangular, mxn matrix A is defined as A=USV^T , where U and V are orthogonal matrices and S is a diagonal matrix [14] with non-negative elements. An orthogonal matrix is a matrix whose transposed matrix is equal to its inverse i.e. UU^T= U^TU=I where I is the identity matrix. They have the remarkable property that ||Ux||=||U^Tx||=||x|| i.e. multiplying a vector by an orthogonal matrix does not change the vector's length. This property is very important since it allows us to characterize such matrices as "good" matrices; if we have a system of linear equations Ux=g, the solution of the system is trivial x=U^-1g= U^Tg (see the definition of orthogonal matrix above). When =g'+dg , where dg designates an error component in the right hand side, the solution vector is composed from two components x=x^'+dx=U^Tg+U^Tdg. When ||x||=|| U^Tg||=||g|| and ||dx||=||U^Tdg||=| |dg|| , it follows that ||dx||/||x||=||db||/||b|| is the relative error in the solution and is always equal to the relative error in the data. A better solution is unavailable without additional Unfortunately, the properties of the system matrix A are defined by the diagonal matrix S, which in our case is a problematic matrix. Let us make some transformations of the original system of linear equations (6) to see what is so problematic in this very simple, diagonal matrix S. Replacing the matrix A with its singular decomposition gives us the system USV^Tx=g. Multiplying the right side of this system by U^T results in SV^Tx=U^Tg (see the definition of the orthogonal matrix). By making simple changes in the variables p=V^Tx, f=U^Tg we get a new system of equations Sp=f . Recalling that the matrix S is diagonal, the solution of this system may seem to be trivial p=S^-1f=[f[1]/s[1],f[2]/s[2],.. f[N]/s[N]]^T . Unfortunately, our case is not that simple. If the right side of the initial equation is known with an error dg, then f=f^'+df where df=U^Tdg, and in accordance with the property of orthogonal matrices above, ||df||=||db||. Due to the error dg, using the same property, the error in the solution p is dp= S^-1df and ||dp||=||dx||. It is now apparent that the sensitivity of the solution to errors in the data is defined exclusively by the matrix S. The diagonal elements of matrix S are called the singular numbers of matrix A, and their properties resemble the properties of the singular numbers of the functional operator (3). It may be shown that: where c(A) is known as the "matrix condition number". This equation states that the relative error in the solution vector is proportional to the relative error in the data vector and the condition number of the system matrix. When the ratio of the two extreme singular numbers is small, the matrix (and the problem) is termed "well-conditioned". The best possible case is, obviously, c(A)=1 which corresponds to orthogonal matrixes. When the ratio of the two extreme numbers is large, the matrix (and the system of linear equations) is termed "ill-conditioned" which resembles the term "ill-posed" used to describe the parent, continuous integral equation (3). When the minimal singular number is zero (they are always non-negative), the matrix and the system of equations are called In our case, the singular numbers of the system matrix A decay to zero very quickly, thus complicating the problem. The following figure illustrates the decay of singular numbers for three cases: Figure 2. red line - logarithms of singular numbers for uniform structure with M=100 blue line - logarithms of singular numbers for uniform structure with M=500 magenta dashed line - logarithms of singular numbers for logarithmic structure with M=50 The leftmost, red solid line shows the logarithm of the singular numbers for uniform distributed data points {t[i]=Dt*i}, i=1, M, where M=100, Dt=1 and [G[min]=0.002, G[max]=5]. The blue line corresponds to the same conditions but for M=500. Enlarging the number of correlator channels alleviates the problem, however the improvement is not dramatic and does not transform the problem into a "well-behaved" one. It does extend the possible reconstruction range towards smaller gammas in proportion to the number of channels (to G[min]=0.0004 in this case). The same effect may be achieved using a much smaller number of logarithmically distributed channels covering the same range (1 - 500). The magenta dashed line shows the singular numbers for M=50 and { t[i]= t[i-1]+max( floor(exp( i*q)), 1) }, i=2, M , t[1]=1 and the constant q is chosen for t[M]=500. The singular numbers decay so fast that they drop below the single precision computer arithmetic somewhere around the 20^th singular number and below the level 10^-3 corresponding to a suitable accuracy in the correlation function just after the tenth number. If we recall the relation (7), we understand why the maximal number of histogram columns cannot be larger than 12. The relative errors in the histogram's column heights become so large that they are unacceptable. 3.3 Traditional Approaches to the Problem There is no way to make an ill-conditioned (ill-posed) problem behave well other than reducing the number of its degrees of freedom. If the function of interest cannot use an arbitrary shape to fit the experimental data, it must obey an additional law or posses an additional property to reduce the uncertainty described by equation (7) and thus provide a stable solution. One such constraint is the non-negativity of the distribution of interest P(G). Solving the system of linear equations (6) under the constraint for the coefficients {x[i]} to be non-negative greatly reduces the number of candidates for the solution. Other ways of curing the problem may be classified in two categories: a) reduction of the problem dimension and b) the methods of regularization. In the first category, only a small number of unknown coefficients are used to describe the distribution of interest. The singular numbers of this impaired problem do not have a chance to decay too much (see Figure 2) and therefore the problem remains stable. In the second category, the singular numbers are modified (direct or indirectly) so they suppress the subversive influence of the inverted small singular numbers. The following sections briefly review some of the most widely used members of both categories, where section numbers describing the first category use the numbers An, and section numbers describing the second category use the numbers Bn. A1. The Method of Cumulants [3,4]: This is the oldest method of computer processing PCS data. The idea is based on the following expansion: where {m[i]} are the central moments of the distribution of interest P(G). The average gamma value is equal to the first moment, and its standard deviation is given by the square root of the second moment. The third moment theoretically describes the asymmetry of the distribution. The other coefficients are more difficult to interpret. The coefficients {m[i]} may be easily found by solving the corresponding system of linear equations. Stabilization is achieved by reducing the number of unknown coefficients used to approximate ln(g^ (1)(t)). Generally, only the first two moments (and thus the average and the dispersion) can be reliably reconstructed. This provides very limited information for multi-component distributions or when dust is present. A2. The Method of Histogram [10,11]: This is the easiest method for extracting polydispersity information from PCS data. In the method of histogram, the number of unknown coefficients (histogram columns) in equation (6) is chosen to be small enough to make the solution stable. It is very important to divide the G axis logarithmically, otherwise the columns for low values of G must be made unnecessarily wide to provide stability for high values of G. It is also very important to use a non-negative least squares (NNLS) algorithm to restrict the solution to be non-negative. This information is very important and actually makes the algorithm usable by raising the number of histogram columns from 6-7 to 12-13 for the full reconstruction range [G[min],G[max]]. The method of histogram has the following two drawbacks. Firstly, the maximum number of histogram columns which can be reliably reconstructed is not constant. Another drawback is that the logarithmic resolution is not optimal. Both the number of histogram columns and the optimal resolution law depend on the reconstruction range, the structure of the correlator in use, the noise in the experimental data and the distribution of interest itself. This dependence is not simple, and may be taken into account by using the singular value decomposition of the system (6). B1. Methods using the Truncated Singular Value Decomposition [12,13,14]: Singular Value Decomposition or simply SVD (see Section 3.1) is a powerful tool for solving unstable (ill-conditioned) systems of linear equations. Recall that as soon as the singular value decomposition of the system matrix A from (6) is computed, the solution is almost trivial x=VS^-1UTg . The solution simply involves two multiplications by the orthogonal matrices U^T and V and one multiplication by the diagonal matrix S^-1=[1/s[1],1/s[2], …,1/s[N]]^T. It has also been shown that the singular numbers, or more precisely the rate of their decay, defines the stability of the problem. The faster the decay, the closer the problem is to a singular one and the greater the uncertainty in the solution, thereby demanding more precise data to get the same information. The advantage to using the singular value decomposition over other methods of solving the system of linear equations is that it allows us to easily remedy the solution to some extent by extracting only the stable or reliable information. This is done by replacing the matrix S^-1 in the above equation with the matrix S^+=[1/s[1],1/s[2],…,1/s[K],0,...,0]^T where the singular numbers should be sorted in descending order, s[K]/s[1]>x and s[K+1]/s[1]<x . If we make a simple change of variables p=V^Tx, f=U^Tg and recall the properties of orthogonal matrices, the idea of remedying the problem with Singular Value Decomposition may be easily understood: where ||r|| is the residual norm, which is minimized in the method of least squares. If M>N, the last term is constant and cannot be minimized, no matter which method is chosen to solve the system of equations. In the method of truncated singular value decomposition, the second term ||r[2]||^2 is not minimized either. Recalling the definition of the matrix S^+ above, it leads to setting p[2] to zero, ||r[2]||=||f[2]|| and ||r||^2= ||r[2]||^2+const. The hope is that sacrificing the residual norm to some extent, we can find a solution which is less sensitive to errors in the vector g. The idea may be more clearly presented if we use the property of orthogonal matrices ||x||=|| V^Tp|| and write the full expressions for the residual and the solution norms: where as before, f^'=U^Tg^' and df=U^Tdg denote the transformed exact data and the error vectors respectively. It can be seen that incrementing K leads to the reduction of the squared residual norm on the quantity f[K]^2 and the enlargement of the squared solution norm on the quantity (p[K]/s[K])^2. Using functional analysis it may be shown that without the error term df, the coefficients {f [i]} decay to zero and thereby compensating for the influence of the small singular numbers. The errors {df[i]}, divided by small singular numbers, introduce large, wild, components into the solution, making it totally inappropriate. The art of using the truncated singular value decomposition is to find a compromise (an appropriate K) between the residual and solution norms, keeping them both relatively small. Generally, if some freedom is allowed in the residual norm, there exist an infinite number of solutions possessing the same residual norm. It is important to note that of all these solutions, the solution computed by the truncated SVD possesses the smallest Euclidean norm. The good news for users of truncated SVD is that the appropriate value for the component K can almost always be found. The bad news is that in our particular case its value is too small. Without using any other a priori information, this number is generally limited to 4 - 6 values which gives a very rough solution. Worse than that, if the distribution of interest is computed in a relatively wide range [G[min],G[max]], it will have large oscillating or negative tails which have nothing to do with the true distribution. Additional information is necessary to make the method of truncated SVD useful. Different attempts have been made to impose auxiliary constraints on the solution, but, prior to the development of the unique algorithms implemented in DYNALS, no adequate solution suitable for commercial use was found. In these algorithms, the truncated singular value decomposition is combined with non-negativity constraints on the solution. This eliminates oscillation and negative tails and allows numerical analysis of the vector f so that the number K can be chosen automatically. B2. Methods using Tikhonov's Regularization: The most well known software program that uses Tikhonov's regularization method for processing experimental data in Photon Correlation Spectroscopy, is CONTIN [8,9]. Another, less known but good, example is the SIPP program described in [15]. In Tikhonov's method of regularization for the continuous case, the corresponding system of normal equations is used to find the unknown coefficients. The system is derived from the system of linear equations (6) by multiplying its left side by the matrix A^T as Fx=b , where F=A^TA and b=A^Tg . The exact solution of such a system minimizing the norm ||Fx-b|| is equivalent to the least squares solution of system (6), and is therefore unstable. To make it stable, the initial expression for the norm to be minimized is replaced as follows: where the term ||G||^2, called the regularizer, defines the additional properties in favor of which we are ready to sacrifice the best possible fit. For example, if we would like to minimize the solution norm, we can set G=I, where I is the identity matrix. Most often, the matrix G encompasses a value associated with a 'smooth' solution. For example, this value may be the first or second derivative operator or its combination with the identity matrix. The coefficient a, called the regularization parameter, allows us to define the strength of the constraint on the solution. Very small numbers or zero do not introduce a constraint, and as a result the computed solution is unstable and has little in common with the actual solution. On the other hand, numbers that are too large pay little attention to the data, resulting in a solution that is smooth and nice but 'underfits' the data and hence extracts only part of the information available in the experimental data. Technically, the method is implemented by using the modified system of normal equations devised as follows: Applying the singular value decomposition, it may be shown that using the solution norm as the regularizer (G=I) is equivalent to replacing the singular numbers with s[i]^'=(s[i]^2+a)^1/2. Such substitution alleviates the problems associated with small s[i] by damping large elements of S^+ in accordance with the formula: Other regularizers provide different damping factors but employ a similar idea - they all decay smoothly towards zero as the corresponding index i increases. Alternatively, the solution computed by the truncated singular value decomposition may be associated with the damping factors computed as: where h is some threshold value. The proper choice for h is discussed in the following section devoted to the algorithms implemented in DYNALS. In our case, there are two major difficulties associated with the method of regularization. The first one is related to the fact that the singular numbers decay very fast while the damping factors decay slowly. If a is chosen to damp all problematic singular numbers, it will partially damp the stable ones and, therefore, not all available information will be extracted from the data. On the other hand, if a is chosen to pass all valuable information, it will also allow small singular numbers to ruin the solution. The value in the middle will partially damp the stable information and partially pass the amplified noise (by dividing it into small singular numbers) and, as a result, the optimal solution is never found. A large number of published papers have been devoted to the optimal choice of the regularizer and the regularization parameter. Most, if not all of them, assume that the problem is linear, which in our case greatly underestimates the regularization parameter. The reason for such underestimation is that the system of equations is to be solved under the constraint of a non-negative solution. This constraint greatly improves the stability of the problem but causes the problem to be nonlinear (by the definition of a linear problem) and the corresponding, "optimal" a to become irrelevant. 3.4 Overview of the Algorithms Implemented in DYNALS As mentioned previously, the algorithm implemented in DYNALS makes use of the singular value decomposition, but combines it effectively with the non-negativity constraint on the solution. To achieve this, the following function is minimized, while the target solution vector x is constrained to be non-negative: where all the vectors and matrices are defined in equations (7) above. The expression (11) for minimization of the norm may seem to resemble the expression (8) minimized in the method of regularization (Section B2 above). However, in (11) different vectors, p[1] and p[2], are used in the two terms of the minimized function. The first vector, p[1], is used to provide the minimal residual norm using only those stable components of the singular value decomposition corresponding to the large singular numbers. The second vector, p[2], is used only to make the solution vector x non-negative. If the solution x, provided by unconstrained minimization of ||f[1]­S[1]p[1]||, is non-negative (or the constraint (12) is not applied), then nothing prevents p[2] from being equal to zero and the solution is equivalent to the truncated singular value decomposition. Otherwise, the vector p[2] is not zero and its components manifest additional information introduced by the non-negativity constraint. In other words, the vector p[2] serves to give freedom to the vector p[1], such that some tradeoff between the minimal truncated residual norm ||f[1]-S[1]p[1]|| and minimal truncated solution norm ||p[2]|| is reached. The tradeoff depends on the parameter b as follows: If b is too large, the second term ||p[2]|| is excessively constrained and does not provide enough freedom to the vector p[1] (due to the non-negativity constraint) and the solution underfits the data. On the other hand, if b is too small, the vector p[2] may take on wild values just to satisfy the constraint (12). The optimal value is found to be directly proportional to the accuracy of the experimental data, the maximal singular number and the norm ||f[1]||. The explanation of such proportionality is beyond the scope of this document. Technically, the problem of minimizing (11) under the constraints (12) is solved by transforming the problem into the equivalent LDP (Least Distance Programming) problem, which is then solved by the method described in [16]. To implement the equations (7,11,12), we must divide the set of singular numbers into two subsets, the subset K "big" numbers and the subset N-K "small" numbers. If the input data were perfect, we might regard all the singular numbers as "big" so that K would be equal to N Footnote3.^ When noise is present, this type of solution will be unstable. Therefore, the size of the first subset must depend on the accuracy of the experimental data. On the other hand, the relative influence of the random noise on the elements of f is also different. The vector f may be rewritten as f=f^'+df=U^ Tg'+U^Tdg . It can be shown that the elements of f' rapidly decay to zero while the standard deviation of the elements df remains the same (orthogonal transform of a random vector). It seems reasonable to minimize the residual norm using only those elements of f which significantly exceed the standard deviation of the elements df . With this choice, the size of the subset for "big" singular numbers depends not only on the noise in the experimental data, but on the distribution of interest itself. This is reasonable since a smaller number of singular value decomposition components are necessary to approximate a "smooth" distribution than to approximate a "sharp" one. In DYNALS, the tail of the vector f=U^Tg is used to find s[n] (the standard deviation of the noise). After that, the noise level is set as 3s[n] and only those elements of f that are larger than this level are used for residual norm minimization. The actual implementation is a bit trickier when coping with the different complications caused by unusual noise statistics or non-random noise (systematic error), but the idea is the same. 3.5 g^(2)(t) to g^(1)(t) Conversion or "Square Root" Trouble [17] The first order correlation function g^(1)(t) is used to compute the distribution of interest R(G) from the equation (3). This function is related to the normalized experimental data by the equation (2) which includes the experimental noise z(t). Due to the noise, the direct computation of g^(1)(t) from g^(2)(t) given by the following equation is associated with certain complications. First of all, for large values of t, the expression under the square root sign may be negative. In this case, calculating g^(1)(t) requires computing the square root of a negative number. The standard procedure used in photon-correlation spectroscopy uses the trick of setting g^(1)(t) is real and the imaginary parts must be dropped for further processing. This is equivalent to setting g^(1)(t). [17] shows that even when the value under the radical is non-negative, calculating g^(1)(t) from (13) increases the random error (relative to g^(2)(t)) and introduces a systematic error. [17] also proposes to compute the distribution of interest in two stages. In the first stage, the self-convolution of the distribution of interest R(G)* R(G) is computed from the function g^(2)(t)­1. The smoothed estimation of g^(2)(t) is then computed and used to compute the first order correlation function g^(1)(t). Computer experiments have shown that this type of two stage processing provides better resolution. The option of two stage processing will be implemented in a future version of DYNALS. In the meantime, weighting is used to alleviate the problems described above. Every row i of the matrix A in (6) and every element i of the data vector g are multiplied by the corresponding weight w [i] . The weights are set in accordance with [18] as: providing good results (giving a random residual vector) in most cases. 4. Building your own Software for Processing PCS Data The DYNALS processing capabilities are based on a set of well-defined processing APIs. The APIs are written in C in the object oriented manner and may be easily incorporated into another, proprietary interface. Please contact the authors for more details. 5. Footnotes The order of footnotes corresponds to the sequence of their appearance in the document. 1. In some software programs this constant may be estimated separately as an independent parameter. For example, in CONTIN it is called the "dust term". In an ideal case, it must include everything which is constant in the data and leave the other components to be approximated by the distribution. In practice, it either underestimates or overestimates the true value, thereby sometimes distorting the distribution. It is better to extend the boundary corresponding to slow exponentials and let the optimization procedure decide what belongs where. 2. The fact that the best resolution is in the middle of the DYNALS default distribution domain, may be used to choose the optimal parameters of an experiment. For example, if the whole computed distribution of decay rates or diffusion coefficients is in the left half of the plot, try to reduce the sampling time. If it is in the right half, try to make it larger. If you chose the units as decay times or radii, just choose the other type of units. Appendix A. Practical Exercises Using DYNALS This appendix gives you step-by-step instructions which will help you to understand the problem and complexity of Particle Size Distribution Analysis in PCS. These exercises may be regarded as a tutorial that will provide you with the necessary skills, tips and hints for successful DYNALS operation. You will see how simple it is to solve this severe mathematical problem with DYNALS and how simple it is to check that you get the right solution. All exercises are performed on synthetic data, which has two main advantages. Firstly, you know the true distribution in advance, which is very important when you are trying to understand the character of distortions caused by the stabilization procedure. Secondly, synthetic experimental data do not contain systematic error. Systematic error is generally very difficult to separate from data, and sometimes leads to the unpredictable distortion of results. There is a variety of different synthetic data provided with DYNALS, along with a complete list describing the files. Several files, corresponding to different experimental noise levels, are provided for each distribution. It is supposed that the noise is normally distributed and uncorrelated between the channels. The last four numbers in the name of every file indicate the standard deviation of the error. For example, files 'xxxx0000' correspond to precise data, files 'xxxx0010' correspond to the standard deviation 0.001, etc. Files whose name begins with 'u', correspond to the uniform 100 channel correlator and a sampling time of 1 microsecond. Files whose name begins with 'l', correspond to the 50 channel logarithmic correlator and the same sampling time. You can look at the files to see how the channels are distributed. The exercises may be performed using any distribution units (decay rates, decay times, diffusion coefficients or hydrodynamic radii), however we recommend using decay rates. The screen shots provided below correspond to decay rates in megahertz. A.1. Understanding the Term "ill-posed problem" As written above, the problem of computing Particle Size Distribution from PCS data is an ill-posed mathematical problem. In our case, the term "ill-posed problem" designates a situation where there is an infinite number of absolutely different distributions R(G) that fit the experimental data g^(2)(t) well enough (i.e. within the accuracy of experimental data). On the other hand, the solution which provides the best fit for experimental data is not always the one we would like to pick, as it may have nothing in common with the true distribution. Let us check this in practice. In this exercise we use a wide, unimodal distribution. Load the file up1w0000.tdf corresponding to the accurate data and click the Start button. After the processing has finished, you will see the following distribution which is the true one. Note that DYNALS sets the maximal possible resolution since the data are considered perfect. It does not mean that there is no stabilization; a minimal stabilization is used to compensate for finite computer arithmetic precision. We will now choose another data set from the file up1w0010.tdf, which corresponds to the same distribution but includes noisy data. As indicated by the filename, the noise standard deviation for this file is 0.001. Click the Start button to receive the first in the following series of four pictures. These pictures correspond to the optimal resolution that is automatically set by DYNALS. The corresponding residuals are random, i.e., this distribution provides a good fit for experimental data. The position of resolution slider leaves you some freedom to experiment. By increasing the resolution slider value you can see that the shape of the distribution changes dramatically, while the residuals actually stay the same. The other three pictures provide examples corresponding to three different positions of the resolution slider. Each of these four distributions, along with an infinite number of the others corresponding to values of the resolution slider larger than the optimal one, provide a good fit to the data. Try using values that are less then optimal. You will see that the distribution computed using smaller resolution slider values does not provide a good Now load the file up1w0030.tdf, which corresponds to the same distribution but has more noise. After processing the data, you will see that the optimal resolution value is around 0.5. This value is lower than the one used for the previous file, which reflects the fact that the data is less accurate and more freedom is left for the solution. Try changing the position of the resolution slider to see a variety of the possible solutions. A.2. Understanding the Resolution and its Dependencies If you measure a distribution of some physical quantity, it is natural to question the kind of resolution you should expect. In this context, the resolution may be defined as the minimal ratio in the positions of two separate distribution components, represented by two distribution peaks. In PCS, the distribution of interest is not measured directly, but is obtained as a result of the complex calculations necessary to resolve the integral equation (3). Consequently, the question of resolution in PCS is rather complicated and cannot be answered in two words. The resolution depends on several factors: the accuracy of experimental data, the correlator structure, the position of the distribution on the G axis (i.e. sampling time) and the shape of the distribution itself. In this section we will see how these four factors affect our ability to distinguish separated components. Dependence on the accuracy of experimental data: It is intuitively clear that resolution must depend on the accuracy of experimental data. Let us check it in practice. For our experiments we will use a distribution that consists of two narrow peaks placed at G[1]=0.06MHz and G[2] =0.18MHz with G[2] / G[1]=3. The distribution is placed in the middle of the DYNALS default restoration range, which is the area of optimal resolution (we will check this later). The following four pictures show the results corresponding to the optimal resolution for the case without noise and the cases where the noise has the standard deviations 0.001, 0.003 and 0.01 (files up2n0000.dat, up2n0010.dat, up2n0030.dat, and up2n0100.dat respectively). It is clear that the resolution decreases as the noise increases. Note how the resolution slider reflects the accuracy of experimental data. You can see a small separate peak in the last picture (the most noisy data). This peak corresponds to very fast exponentials which do not actually exist. Its presence is explained by the large noise in the first one or two channels which is confused with very fast decaying components. There is nothing we can do with it, except perhaps to avoid fitting the first channels for very noisy data. Try it. It helps in this case. You can also try using the a priori information that the distribution of interest consists of narrow peaks. By setting the maximal value of the resolution slider, you will get the same sharp distribution shown in the first picture. Be careful with this approach though, for a relatively large noise and a multi-component distribution you may get incorrect peak positions. Dependence on the position of the distribution or sampling time: This document states that resolution also depends on the position of a distribution on the G axis. To check this, let us take the same two modal, sharp distributions used in the previous section, and shift their position to the left and to the right, while maintaining the ratio of peak positions (using the files up2b0010.dat and up2c0010.dat ). The following two pictures reflect the results for the noise standard deviation 0.001, which is the same as for the right upper picture above. As you can see, the peaks are less resolved than when located in the optimal resolution region. If the reconstruction region is defined automatically, this region is always located in the middle of the plot, which may serve as a good hint for a proper choice of the sampling time. If the decay rates or the diffusion coefficients are set as the reconstruction units, then reducing the sampling time corresponds to the distribution being shifted to the left (exponentials decay slower). Increasing the sampling time accelerates the decay and the distribution moves to the right. If the decay times or hydrodynamic radii are used, then the opposite is true. Dependence on the shape of the distribution: It may not be obvious, but possible resolution of the distribution details depends on the shape of the distribution itself. Generally, the more complex the distribution, the less details it is possible to reconstruct. For example, we have seen that it is possible to resolve two peaks with G[2] / G[1]=3. Using two such peaks, we will add another one which is rather far from the first two. We will then move the third peak towards the first two and observe what happens. For this experiment we will use data files corresponding to a 100 channel, non-uniform correlator. You can easily see the channels' location on the time axis by choosing the "time" units for the X axis of the data graph. The following four pictures correspond to the files np3a0010.dat, np3b0010.dat, np2c0010.dat, and np3d0010.dat respectively, processed with the default optimal resolution. We have also increased the number of distribution columns because of the larger possible restoration range covered by a non-uniform correlator. It is obvious that the resolution depends on the distribution itself. The algorithm implemented in DYNALS makes a special analysis of experimental data and chooses the optimal (i.e. the maximal possible) resolution when no extra, a priori information is available. Appendix B. Tagged Data Format (TDF) Tagged Data Format files (TDF files) are ASCII files similar to Windows INI files. These files consist of sections defined by the section name enclosed in brackets. Every section contains a specific kind of information that is further subdivided by tags. Every line of information begins with a tag identifying the information. An equals sign is used to separate each tag from its information. A typical TDF file containing PCS data looks like this: WAVELENGTH= 632.8 VISCOSITY= 1.0 SCATANGLE= 90.0 TEMPERATURE= 293.0 REFRINDEX= 1.33 1= 1.000000e-006 1.760349e+000 2= 2.000000e-006 1.596440e+000 3= 3.000000e-006 1.480216e+000 . . . . . . . . . . . . . . . . 99= 9.900000e-005 1.001706e+000 100= 1.000000e-004 1.002411e+000 The brackets are part of the syntax. The order of the sections and the tags inside the sections is arbitrary. The following section names are recognized by DYNALS : [PCSDATA2] -- for the second order (intensity fluctuations) auto-correlation function measured under the homodyne scheme. The section's tags define the numbers of corresponding correlator channels. The information field consists of two numbers: time delay (in seconds) corresponding to the particular channel and the normalized correlation function value in that channel. [PCSPHYSPAR] -- physical parameters of the experiment where the data were acquired. The following tags are recognized in this section: • WAVELENGTH - wavelength of the scattered light • VISCOSITY - viscosity in cP • SCATANGLE - scattering angle in degrees • TEMPERATURE - temperature in degrees Kelvin • REFRINDEX - refraction index 1. Gulari Es., Gulari Er., Tsunasima Y. and Chu B., J. Chem. Phys. 70.80, 3965­3972 (1979). 2. Tikhonov A.V. and Arsenin V.Y., Solution Ill-posed problems, Winston & Sons, (1977). 3. Koppel D.E., J.Chem.Phys. 57, 4814, (1972). 4. Pusey P.N., Koppel D.E., at all, Biochemistry 13, 952, (1974). 5. Ostrowosky N. and Sornett D. - in Photon Correlation Techniques in Fluid Mechanics, Springer, Berlin, 286, (1983). 6. Chu B., Correlation Function Profile Analysis in Laser Light Scattering, in "The Application of Laser Light Scattering to the Study of Biological Motion", NATO ASI Marateo, (June 1982). 7. J.Chem.Soc. Faraday Trans. 2.75, 141 (1979). 8. Provencher S., Comput.Phys.Commun., 27, 213-227 (1982). 9. Provencher S., Comput.Phys.Commun., 27, 229-242 (1982). 10. Gulari Es., Gulari Er., Tsunashima Y., J.Chem.Phys., 70, 3965. 11. Chu B., Gulari Es., Gulari Er., Phys.Scr., 19, 476 (1979). 12. Bertero M., Boccacci P., Pike E.R., Proc. R. Soc. Lond. Ser.A, v383, 51 (1982). 13. Bertero M., Boccacci P., Pike E.R., Proc. R. Soc. Lond. Ser.A, v398, 23 (1985). 14. Bertero M., Boccacci P., Pike E.R., Proc. R. Soc. Lond. Ser.A, v383, 15-29 (1983). 15. Danovich G., Serduk I., - in Photon Correlation Techniques in Fluid Mechanics, Springer, Berlin, 315 (1983). 16. Lowson C.L. and Hanson R.J., Solving Least Squares Problems, Englewood Cliffs,, NJ: Prentice Hall, (1974). 17. Goldin A.A., Opt.Spectrosk. 71, 485-489 (September 1991). 18. Provencher S., Makromol.Chem. 180, 201 (1979).
{"url":"http://www.softscientific.com/science/WhitePapers/dynals1/dynals100.htm","timestamp":"2024-11-09T23:28:26Z","content_type":"text/html","content_length":"90049","record_id":"<urn:uuid:89bbfdb3-2403-4f57-af77-08cfcdf38832>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00485.warc.gz"}
A game theorist’s analysis on Sri Lanka’s WC 2011 Final team selections We continue with our very low key World Cup theme. It’s so low key that you’d be hardpressed to call it a theme. But we don’t march to the beat of any one. Cricket fans are usually pretty nerdy. Here is a fan who is exceptionally so. Shehan sends this one in to once and for all explain what the fuck went down in Mumbai before the final started. 1.0 INTRODUCTION Many criticised Sri Lanka’s tactics in the final of the recently concluded cricket world cup from the bowling changes and field placements to their decision to bat first after winning the toss. Least amongst these criticisms was the team combination Sri Lanka went in with which included four changes from the semi final. As in most sports, the decisions made both prior and during the game by one team in cricket have a direct impact on the welfare of the opposing team and vice-versa and this is mutually recognised. This is known as strategic interdependence. Game theory is a mathematical analysis of strategy where a game is a model of strategic interdependence which attempts to predict behaviour and advice In this report I will attempt, using game theory to study the choices Sri Lankan management would have considered in team selection for this crucial game. In section 2.0 of this report I will address the decision problems faced by team management whilst in section 3.0 I will construct a model to study Sri Lanka’s possible actions. In section 4.0 I will analyse the model developed in the previous section to identify Sri Lanka did indeed get their team wrong. I will emphasise the limitations of this study in section 5.0 of this report whilst offering my conclusions in section 6.0. 2.0 SRI LANKA’S DECISION PROBLEM Most decisions are difficult as sometimes they involve an element of risk when nature interferes, they could be strategic when they are dependent on the actions of others and they could be complex when the information available can’t be comprehended. Some decisions are a combination of these three factors. 2.1 The strategic impact on team selection Going in to a cricket match with the best possible mix of specialist batsmen and bowlers is vital given the opposition and the information you have and don’t have about them. For this final, Sri Lanka who was clear underdogs knew that India has an extremely strong batting line up, most of who were in top form during the tournament. Their main objective would be to select the best team combination that would give them the best chance of countering India’s strengths and with a bit of luck (as opposed to a lot of luck) even win the game. So should they too pack their team with batsman to try and out-bat India or should they go in with more bowlers and try contain India’s batsman and bank on the selected few batsman doing the job? India was also aware that the Sri Lankan bowlers had performed the best out of all the teams in the tournament, so what team combination would India go in with? Would they bank on their top six batsman to score the runs and play five bowlers to contain the Sri Lankan batsmen’s score or go in with the extra specialist batsman to counter for Sri Lanka’s strong bowling attack? As the teams have to be announced together minutes prior to the toss there is no way of making decisions based on the actions of your opponent. The team announcements need to be made simultaneously. 2.2 The impact of risk on team selection 2.2.1 The pitch Apart from your opposition’s team characteristics how the pitch will play would also impact on one’s team combination. As the final was a day/night game held in India it is well known that the pitch will be predictable and play flat during the first half of the game. In the second half under lights, pitches tend to slow up and take spin if conditions don’t change. However if moisture in the air is high it tends to form dew which then makes life harder for bowlers to grip the ball. 2.2.2 The toss How will Sri Lanka get to use the pitch? Will they get to bat first on a flat pitch and have their blowers’ defend the target under lights or will they have to bowl first and keep India’s score to a minimum and chase in the night? This will depend on the outcome of the toss which offers a 50-50 chance to each captain. Sri Lanka needs to factor this event of chance into their decision making 3.0 BUILDING A GAME TO STUDY SRI LANKA’S ACTIONS 3.1 Identifying the rules of the game 3.1.1 The players These are the decision makers involved. This game is a two player game in which the players are India and Sri Lanka. 3.1.2 Actions Actions involve all the possible alternatives between which a player can decide. In this game actions are the possible team combinations. Whilst there are quite a few possible combinations a cricket team could opt for, in this game I will consider only two, the first being the combination of seven batsmen with four bowlers (7,4) and the second six batsmen with five bowlers (6,5). As mentioned previously, actions would occur simultaneously. 3.1.3 Outcomes The combination of all players possible actions with payoffs. The payoffs in this game are rank outcomes for each player with 4 being the best and 1 being the worst. 3.1.4 Information This involves what the players know about their opponents, actions and payoffs. Here, the two players know the general strengths and weaknesses of each other however do not have information about the action the other would take. 3.1.5 Communication Games in which players can communicate are termed as corporative whilst games such as this where the two teams will not discuss their team combination with each other are termed as non corporative. 3.2 Representing the simultaneous game A game tree can be used represent this game where nodes represent the outcomes and decisions available to both teams with branches representing alternative actions. The information set represents information imperfection where Sri Lanka would not be certain of the team combination India would have gone in with. The tree diagram on the right indicates how the risks of both the outcome of the toss as well as the nature of the pitch explained in section 2.2 impact the four possible outcomes of the game. Whilst the toss has been assigned probabilities of 50-50 it has been assumed the pitch is assured to play flat during the day and a 40-60 probability between the pitch taking spin or dew playing a part in the night. This is as Mumbai, the venue of the final had a reputation of dew in early April. Let me now consider generating numbers for each outcome in order to carry out a mathematical analysis of this model. The basis of the numbers I generate would be to determine the “expected combined team ability” given the team combination each player opts for and the effects of nature on this team combination. The numbers and reasoning behind them are as follows. • Batsmen rating will be multiplied by 1.2 if opponent goes in with 4 bowlers as it assumes batsmen would be advantaged by facing less quality bowlers. • Bowler rating will be multiplied by 1.2 if opponent goes in with only six batsmen as they have less batsmen to dismiss. • Assumes the total strength of Sri Lanka’s bowling unit would be at an advantage or disadvantage given the conditions and the same goes for Indian batsmen. • There would be no double counting as I am working out individual team strengths. I have assumed a rating of 15 points for each Indian batsman as opposed to 10 for each Sri Lankan as on paper and recent form, the Indian batsmen were clearly superior. I however have assumed a rating of 12 points for each Sri Lankan bowler as opposed to 10 points per Indian bowler as the Sri Lankan’s even though marginally, had a better attack. This final was viewed as a battle between India’s batsmen and Sri Lanka’s bowlers. The game tree can now be represented numerically with the expected team strength for each team under the four possible outcomes. On the extreme right lie the individual expected team strengths of Sri Lanka bowling first and bowling second. These two values are then multiplied by the expected probabilities of bowling first or second to generate expected values for the outcome shown on the left. Example: Calculating expected team strengths with India and Sri Lanka both going with a 7, 4 combination and Sri Lanka bowling second (second box, extreme right). Rough wicket (40%) Batting: (7 x 15 x 1.2 x 0.7) = 88 Bowling: (4 x 10) = 40 128 Sri Lanka Batting: (7 x 10 x 1.2) = 84 Bowling: (4 x 12 x 1.3) = 62 146 Dew (60%) Batting: (7 x 15 x 1.2 x 1.3) = 164 Bowling: (4 x 10) = 40 204 Sri Lanka Batting: (7 x 10 x 1.2) = 84 Bowling: (4 x 12 x 0.7) = 34 118 Expected team strength of Sri Lanka bowling second: India: (40% x 128) + (60% x 204) = 174 Sri Lanka: (40% x 146) + (60% x 118) = 129 The combination of all four possible expected team strengths given the selected team combination and impacts of nature can be represented in a 2 x 2 matrix as follows. This table justifies why India went into the game as favourites. Whatever the combination either team went in with, India’s expected team strength would always be higher than Sri Lanka’s. But does this mean Sri Lanka need not bother strategising? Not at all. As in most sports, some games begin with one player or team being the favourite and the other the underdog just as in this case. Still both players/teams would strategise, with India seeking to maximise its advantage whilst Sri Lanka would seek to minimise this gap so that proper execution of game plans and a little bit of good fortune could see them with the title. The bigger this gap the harder it would be for Sri Lanka. The table below on the left indicates the difference in expected team strengths between these two teams whilst the table on the right indicates the outcomes based on ranked payoffs with 4= best to 1= worst. So for India the bigger the gap the better, whilst for Sri Lanka it would be the opposite. 4.0 ANALYSIS OF THE MODEL An outcome is a Nash Equilibrium (NE) if every player is doing the best they can given what the other player is doing. The outcome 3, 2 is a NE as at this outcome both India and Sri Lanka are playing their best reply. A Dominant Strategy Equilibrium (DSE) is an outcome where both players have a dominant strategy. In this game 3, 2 is also a DSE as Sri Lanka has a dominant strategy in fielding a 6, 5 team combination and India has a dominant strategy in fielding a 7, 4 combination. An outcome is Pareto Efficient if both players can’t agree to go to a better outcome. If the above four outcomes were in a game where both players could gain/lose by moving to another outcome, every outcome would be Pareto Efficient. However as cricket is a zero sum game, the concept of Pareto Efficiency becomes irrelevant here. India went into the final with a 7, 4 combination and Sri Lanka with a 6, 5 combination. Whilst it is unlikely they used game theory as a strategic tool, from the analysis conducted on the above described model it could be seen that both teams did indeed start the game with the best possible combination. 5.0 LIMITATIONS OF THE MODEL This model is a simplification of the real pre-game context. The assumptions made in this model are general enough to be true and aid analysis. However in reality things do not always work as assumed and cannot be captured in a model. Some of these limitations are as follows; • Only two combinations considered between batsmen and bowler split. Teams have many more possible combinations they could consider which includes the mix of fast bowlers and spin bowlers within the decided number of bowlers. Factoring these choices would have made the model more complex but may have generated a different analysis. • The assumptions consider a constant average value for each team’s batsmen and bowlers in calculating team strength. However this assumption maybe too generic. If India decided to replace a batsman of the calibre of Tendulkar with another batsman does not make it a like for like replacement (so it’s not 15 replaced with another 15). • The model assumes the team that won the toss would have chosen to bat first. This assumption need to always hold true. Had India won the toss they may have still opted to bat second. • The probabilities assigned to pitch conditions in the night are subjective and changing these probabilities will offer different expected team strengths. 6.0 CONCLUSION In cricket, a team’s outcome depends not only on its decisions but also on the behaviour of their opponents as well as chance. Good leaders would think strategically and anticipate the behaviour of their opponents and the environment and make decisions to improve/enhance the team’s chances. Game theory can help teams to understand and analyse available actions and predict the behaviour of opponents and design strategy and tactics. 7 Comments 1. Your analysis is extremely boring. You made it into a Physics/Chemistry class. 2. Fair comment Amit, but this was my assignment for my MBA Game Theory module so sadly could not make it humorous. But being a cricket nerd this was one assignment I really enjoyed! 3. 2 comments: 1. Your represenation of outcome ranking (1 to 4) is not representative of the true game. There are only 2 outcomes (win, lose) – the margin of victory matters only to the extent that great expected margin of victory enhances the probability of an actual win. In that case the best strategy for India (Sri Lanka) to pick the team combination that would maximise the expected victory margin (minimzie the expected defeat margin). Obviously, since these 2 are incongruent, it means there is no Nash Equilibrium for this problem. 2. If you maket the assumption that the objective of each team is to maximize it’s own expected value (as opposed to maximising the gap), then the Nash equilibirum is for both teams to go with (7,4) combination. Your first 2×2 box shows this. 3. Even if the analsis were to be accurate, it has to be noted that the equilibrium outcomes are dependent on the quantitative assumptions (nature of pitch, relative team strenght etc.). So in effect what you’re trying to do is to fix the assumptions in such way so that it they would be consistent with the already observed team selection pattern. btw, don’t take this negatively, just wanted to comment on this as I am very much into game theory as well. 4. tharu22…glad to get a comment from someone who knows his stuff. I welcome the comment and do not take it negatively at all. Let me try answer you. 1. My ranking is not based on the run difference but on the “power” or “strength” difference between the two teams. India seem to always have been favorites (always have the most powerful team) but the less this difference the better for SL (worse for IND) and vise-versa. Had my ranking been on runs, then you’re right. 2. No, my assumption is that SL will prefer a smaller power gap whilst IND will prefer a bigger power gap. I understand your comment as you assumed this was runs but it’s not. 3. Oh yes..this is very much dependant on my assumptions and I have highlighted this in section 5.0. But I can assure you I did not fix the assumptions to generate the results. It so happened that based on the values and probabilities I used this was the analysis. 5. @4 Shehan, Thanks for your comments. 6. […] We Said Go TravelTheFlySlip […] 7. Your blog is really awesome. i am so inspire to visit your site. Thanks for sharing this informative article…
{"url":"https://www.flyslipblog.com/a-game-theorists-analysis-on-sri-lankas-wc-2011-final-team-selections/","timestamp":"2024-11-14T15:20:45Z","content_type":"text/html","content_length":"47778","record_id":"<urn:uuid:f1d7b252-b717-4751-b7fd-91ea08140daa>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00797.warc.gz"}
Slope Calculator - cryptocrape.com Slope Calculator Calculate Slope (m) between Two Points How to Calculate Slope Using the Calculator 1. Navigate to the Page with the Calculator: □ Open your web browser and go to the page where you added the Slope Calculator shortcode (e.g., yourwebsite.com/your-page). 2. Select Calculation Type: □ 2 Points are Known: Choose this option if you have the coordinates of two points on a line. The calculator will compute the slope (m) using these two points. □ 1 Point and the Slope are Known: Choose this option if you know one point on the line and the slope (m). The calculator will find the y-intercept (b) for you. 3. Input Values: □ Depending on your selection, different fields will appear: For “2 Points are Known”: □ Enter the coordinates of the two points: ☆ Point 1 (x1, y1): Fill in the x- and y-coordinates of the first point. ☆ Point 2 (x2, y2): Fill in the x- and y-coordinates of the second point. □ Click Calculate Slope to find the slope. For “1 Point and the Slope are Known”: □ Enter the x- and y-coordinates of the known point. □ Enter the Slope (m) of the line. □ Click Calculate Y-Intercept to find the y-intercept (b) of the line. 4. View the Result: □ The result will appear below the form, showing the slope (m) or y-intercept (b), depending on your calculation type. 5. Clear Fields (optional): □ To clear all input fields and the output result, click Clear. Alternatively, switching between calculation types will also clear the fields. Example Scenarios Example 1: Calculate Slope When “2 Points are Known” • Input: □ Point 1 (x1, y1): (2, 3) □ Point 2 (x2, y2): (5, 11) • Result: Slope (m) will display as 2.67. Example 2: Calculate Y-Intercept When “1 Point and the Slope are Known” • Input: □ Point (x, y): (2, 3) □ Slope (m): 2 • Result: Y-Intercept (b) will display as -1. This guide should help you easily calculate slopes and y-intercepts using your calculator.
{"url":"https://cryptocrape.com/slope-calculator/","timestamp":"2024-11-14T02:03:18Z","content_type":"text/html","content_length":"88493","record_id":"<urn:uuid:d692d660-09d8-4a61-8bbf-a7ce2f4c6f0c>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00285.warc.gz"}
Twelve | MICHEL DELGADO In mathematics, twelve is a number made up of perfect number divisors. There are twelve petals in the Heart Chakra. The human body has twelve cranial nerves. There is twelve inches in a foot, twelve hours in each half of the day, and twelve months in a year. The basic units of time, 60 seconds, 60 minutes and 24 hours can all perfectly divide by twelve. Twelve is the number of keys on any standard digital telephone (1 through 9, 0, * and #). The number of function keys on most PC keyboards is twelve (F1 through F12). The Beatles released twelve studio albums. The Florida Keys Manatee is pregnant for twelve months. Paradise Lost, by John Milton, is divided into twelve books. In ten-pin bowling, twelve is the number of strikes needed for a perfect game. There are twelve of something in a dozen and a Gros is a dozen of a dozen. Regular cubes have twelve edges. The densest three-dimensional lattice sphere packing has each sphere touching twelve others. Force 12 is the maximum wind speed of a hurricane. Jacob had twelve sons who were the progenitors of the Twelve Tribes. Jesus had twelve apostles. In Shi’a Islam there are twelve Imans who are the legitimate successors to the prophet Mohammad. Twelve is the kissing number in three dimensions. Joe Namath wore the number 12. There are twelve basic hues in the color wheel. In English, twelve is the number of the greatest magnitude that has just one syllable. There are twelve face cards in a card deck. Twelve people sit on a United States jury. There are normally twelve pairs of ribs in the human body. Twelve people have walked on Earth’s moon. In a 12-step program to recovery, the twelfth and final step is the door of freedom.
{"url":"https://micheldelgado.com/projects/gum/twelve/","timestamp":"2024-11-13T12:33:30Z","content_type":"text/html","content_length":"44404","record_id":"<urn:uuid:5f57d2bb-ef8f-4b8f-898b-d5399259a6af>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00445.warc.gz"}
Towards a scientific exploration of computational realities. computer "science" predictive theory experiments peer review blog about Computer "science" Computer science deals with machines, employs mathematics and has perfectly reproducible experiments. Science! Right? What if the mathematics do not predict real-world behaviour? What if the experimental design does not allow to generalise beyond a handful problem instances? What are we missing here? Consider the following paradigms: A Reproduceable experiments and objective manner to analyse results B Formal computation model and asymptotic complexity results C Targeted observations that allow to distinguish between possible realities While the engineering paradigm A and mathematical paradigm B may achieve a scientific appearance due to plots and formulas, only the science paradigm C specifically targets observations (e.g., empirical and theoretical results) that build conclusive knowledge about computational realities (more details at the bottom of this page). A hopeful quote from over 15 years ago: The science paradigm has not been part of the mainstream perception of computer science. But soon it will be. — Former ACM President Peter J. Denning [source] Are we there yet? It could be argued that to this day computer science leaves itself open to the following criticisms: • Asymptotic computational complexity: Even average-case/smoothed complexities can be a very poor predictor for real-world behaviour, because for finite problem instances asymptotic formulas are just course approximations. • Handpicked experiments: A handful cherrypicked problem instances are typically not approriate for generalisations. • Disingenous/sensationalist presentation: Colourful narratives are very exciting, but without clear and rigorous evidence it misleads readers and creates unrealistic expectations. Asymptotic computational complexity has been a staple of computer science for many decades and while it is a very useful heuristic, it is perhaps not surprising that simple formulas can only provide some overly simplistic view of computational realities which are often more complex [more details]. There have been some efforts to address cherrypicking and replication problems: • Established Benchmarks: Not available for most problems. Counteracts cherrypicked experiments, but invites cherrypicked approaches aimed to only perform well for the reference benchmark. • Reproducibility efforts [SIGMOD] [ML/NLP]: Counteracts misreported experiments. Publicly available code facilitates follow-up studies, but does not affect any incentive structures (e.g., How can we get there? While previous efforts have been a step in the right direction, a general culture change is needed, specifically: • Predictive theory [more details]: □ Theory that is either guaranteed or demonstrated to bound or predict real-world behaviour. □ Worst-case/average-case/smoothed complexities are supplemented by some study of hidden constants, or even non-asymptotic formulas. □ When approriate, theoretical models are devised for sets of problem instances • Antagonistic experimental design [more details]: Alongside representative problem instances, picking some challenging problem instances that are likely harder than most problem instances. • Honest presentation [more details]: Clearly presenting novelties, but also critically discussing limitations and exploring both strengths and weaknesses in the experiments. Clearly, a lot of it comes down to peer-review of publication manuscripts and applications for grants or positions. If reviewers set unrealistic expectations that all new approaches should be better in every regard, then authors are forced to run a tailored benchmark that shows the new approach in the best light. Furthermore, if old approaches only need to be better in one way, it would almost guarantee getting stuck with bad approaches. If reviewers only consider experimental or purely theoretical results, then there is no incentive for predictive theory although it has larger scientific How do we compare to sciences studying human subjects? One way to reflect on computer science, is to compare it to other sciences, specifically ones where repeatability of experiments is quite challenging. While social/medical sciences investigate a human population, computer science investigates a population of problem instances: Computer Science Medical/Social Sciences Population Problem instances Humans Sample Problem instances used in experiments Humans participating in experiments Sample Size Typically, handful of problem instances (N < 10) Typically, hundreds of participants (N > 100) Sample Type Typically a non-random, handpicked sample Typically a randomised convenience sample Sample publicly available Typically yes Typically no Theory e.g., asymptotic complexity predictive/descriptive theory Independent Variable old approaches vs new approach control group vs intervention Dependent Variable performance measures various measures Study Design within-subject/repeated-measures experiment (without order effects) various Reproducibility Limited to "sample" See replication crisis. Analysis Description of effect sizes Statistical/effect size analysis Generalisability Completely subjective and often overstated Critically discussed and investigated In medical/social sciences most of the statistical analysis is aimed to rule out that the observed effects are merely sampling artifacts (and hence expected results for the null hypothesis). For non-random samples as used in computer science this is not possible and potentially all reported improvements generalise very poorly to other problem instances. It is therefore important to interpret such results very carefully and try to avoid overreaching generalisations. Furthermore, antagonistic sampling can help a bit (that we can think of as the opposite of cherrypicking), i.e., looking for problem instances that are likely harder than a random sample would be. How would we apply the science paradigm to something like algorithms? It boils down to the following questions: Scenarios and hypotheses Which hypothetical scenarios must be considered? How can they be grouped into hypotheses? Predictions Which observations are expected in each scenario? Methodology Which observations can reliably distinguish the considered scenarios? Replicability How can similar observations be replicated? Testing What are the observations and is it possible to repeat them? Analysis Which considered scenarios can be ruled out? Which hypothetical scenarios must be considered? How can they be grouped into hypotheses? For algorithms there are quite a few scenarios to consider. One may for instance consider a table, where the rows are important classes of problem instances, the columns are different priorisations between performance metrics and each table entry rates how well the algorithm fares for a particular class and metric. Each scenario is then a different way to fill out the table. Due to the many scenarios it is often useful to group all scenarios into the ones that reflect a scientifically interesting result (research hypothesis) and the rest (null hypothesis). Which observations are expected in each scenario? The expectations for each scenario formalise how one can assess if a scenario table is an apt description of the algorithm. Which observations can reliably distinguish the considered scenarios? The study design is about figuring out which empirical and theoretical results need to be collected and derived to rule out most of the scenarios. The goal is to be as conclusive as possible. How can similar observations be replicated? How to obtain similar empirical results for other representative problem instances is rarely discussed. Despite that replicability is key to any generalisation claims. While a particular theoretical result is replicable due to the proofs, the broader claims may require to obtain similar results for slightly deviating assumptions and models. Otherwise, the theoretical results can be an artifact of the chosen computation model and presumed performance metrics, which is still of interest, but may not support any broader claims. What are the observations and is it possible to repeat them? In computer science it is typically expected that experiments are repeatable. Repeatability is often required to keep researchers honest and aids transparency. Which considered scenarios can be ruled out? A discussion of the results then should narrow down which scenarios are plausible given the observations. While it is difficult to publish in computer science when admitting to inconclusive results, it is the typical reality of most distribution-dependent algorithms. Why science As a concluding remark some obvious reasons why computer science should aim to live up to its name:
{"url":"https://compsci.science/","timestamp":"2024-11-06T03:56:39Z","content_type":"text/html","content_length":"15182","record_id":"<urn:uuid:12f48c3a-d553-4927-aae6-49917cc7d4eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00135.warc.gz"}
non-minimum phase One more on Nyquist plots this time for a non-minimum-phase transfer-function with a pole at the origin. Continue reading “Step-by-step Nyquist plot example. Part III” We continue our saga of hand sketches of the straight-line approximations for the magnitude and phase of Bode plot diagrams (Section 7.1). This time we consider a non-minimum phase system with a pole at the origin. See Part I, Part II and Part III, for simpler examples. Continue reading “Step-by-step Bode plot example. Part IV” We have gone step-by-step over how to sketch straight-line approximations for the magnitude and phase diagrams of the Bode plot for a simple rational transfer-function. This transfer-function had only left-hand side poles and zeros, that is it was minimum-phase (see Section 7.2). In this post we consider a non-minimum phase transfer-function with a right-hand side zero. See Section 7.1 for details on the approximations. Continue reading “Step-by-step Bode plot example. Part III”
{"url":"https://linearcontrol.info/fundamentals/index.php/tag/non-minimum-phase/","timestamp":"2024-11-14T05:27:53Z","content_type":"text/html","content_length":"61277","record_id":"<urn:uuid:49328846-a3df-4b29-b26c-3dcb77d3c878>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00745.warc.gz"}
Function matching We present problems in the following three application areas: identifying similar codes in which global register reallocation and spill code minimization were done (programming languages); protein threading (computational biology); and searching for color icons under different color maps (image processing). We introduce a new search model called function matching that enables us to solve the above problems. The function matching problem has as its input a text T of length n over alphabet E[T] and a pattern P = P[1]P[2] ⋯ P[m] of length m over alphabet Σ[P]. We seek all text locations i, where the m-length substring that starts at i is equal to f(P[1])/(P[2]) ⋯ f(P[m]), for some function f : Σ[P] → Σ[T]. We give a randomized algorithm that solves the function matching problem in time O(n log n) with probability 1/n of declaring a false positive. We give a deterministic algorithm whose time is O(n|Σ[P]| log m) and show that it is optimal in the convolutions model. We use function matching to efficiently solve the problem of two-dimensional parameterized matching. • Color maps • Function matching • Parameterized matching • Pattern matching Dive into the research topics of 'Function matching'. Together they form a unique fingerprint.
{"url":"https://cris.biu.ac.il/en/publications/function-matching-7","timestamp":"2024-11-07T07:43:23Z","content_type":"text/html","content_length":"54796","record_id":"<urn:uuid:a21406cb-108e-43c9-b0df-a635c19364d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00733.warc.gz"}
Horwell comma Jump to navigation Jump to search Interval information Ratio 65625/65536 Factorization 2^-16 × 3 × 5^5 × 7 Monzo [-16 1 5 1⟩ Size in cents 2.3494767¢ Name horwell comma Color name Lzy^5-2, Lazoquinyo negative 2nd FJS name [math]\text{dd}{-2}^{5,5,5,5,5,7}[/math] Special properties reduced, reduced harmonic Tenney height (log[2] nd) 32.002 Weil height (log[2] max(n, d)) 32.0039 Wilson height (sopfr (nd)) 67 Harmonic entropy ~1.25051 bits (Shannon, [math]\sqrt{nd}[/math]) Comma size unnoticeable open this interval in xen-calc The horwell comma (monzo: [-16 1 5 1⟩, ratio: 65625/65536) is an unnoticeable 7-limit comma measuring about 2.35 cents. It is the difference between 32/21, the septimal superfifth, and a stack of five 5/4's octave reduced. Tempering out this comma leads to the horwell temperament, where 32/21 can be found through a stack of five 5/4's and octave reduction. See Horwell family for the rank-3 family where it is tempered out. See Horwell temperaments for a collection of rank-2 temperaments where it is tempered out. This comma was first named as tertiapont by Gene Ward Smith in 2005 as a contraction of tertiaseptal and pontiac^[1]. It is not clear how it later became horwell, but the root of horwell is obvious, being a contraction of hemithirds and orwell.
{"url":"https://en.xen.wiki/w/Horwell_comma","timestamp":"2024-11-06T01:31:52Z","content_type":"text/html","content_length":"27234","record_id":"<urn:uuid:aa3afb64-347d-4d2c-a572-f33fdbfd9a6d>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00526.warc.gz"}
Show Posts - Mack « on: April 02, 2023, 01:37:08 PM » There are no straight slopes on a globe. With all due respect, that’s the point you don't understand. Literally, the definition of a straight line is that it has a constant rate of change. You are arguing that the circumference of the globe has a constant rate of change, but isn’t a straight line. Put another way, if the circumference of the globe has a constant rate of change, then it is a straight line. Do you see the contradiction in your own argument?
{"url":"https://forum.tfes.org/index.php?PHPSESSID=v7s8k4eqlmp523mo989cmvp47o&action=profile;area=showposts;u=21156","timestamp":"2024-11-09T10:47:29Z","content_type":"application/xhtml+xml","content_length":"28131","record_id":"<urn:uuid:a65470fa-1522-4815-addf-7a1270e386a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00762.warc.gz"}
Ganita Bharati Current Volume: 44 (2022 ) ISSN: 0970-0307 Periodicity: Half-Yearly Month(s) of Publication: June & December Subject: Mathematics DOI: https://doi.org/10.32381/GB Online Access is Free for Life Member Ganita Bharati, the Bulletin of the Indian Society for History of Mathematics is devoted to publication of significant original articles in history of Mathematics and related areas. Although English is the official language of the journal, an article of exceptional merit written in French, German, Sanskrit or Hindi will also be considered only as a special case. The ISHM aims to Promote study, research and education in history of mathematics. It provides a forum for exchange of ideas and experiences regarding various aspects of history of mathematics. In addition to the annual conferences, ISHM aims at organizing seminars/symposia on the works of ancient, medieval and modern mathematics, and has been bringing out the bulletin Ganita Bharati. Scholars, Teachers, Students and all lovers of mathematical sciences are encouraged to join the Society. Zentralblatt Math Mathematical Review S.G. Dani UM-DAE Centre for Excellence in Basic Sciences Vidyanagari Campus of University of Mumbai Kalina, Mumbai 400098, India Managing Editor Ruchika Verma Ramjas College University of Delhi Delhi-110007, India Assistant Editor V. M. Mallayya Meltra-A23, 'Padmasree' T. C. 25/1974(2) Near Gandhari Amman Kovil, Thiruvananthapuram, Kerala, PIN: 695001, India. S.M.S. Ansari Muzammil Manzil Compound Dodhpur Road Aligarh 202002, India. R. C. Gupta R-20, Ras Bahar Colony P. O. Lahar Gird, Jhansi-284003, India Kim Plofker Department of Mathematics Union College Schenectady, NY 12308 Mohammad Bagheri Encyclopedia Islamic Foundation PO Box 13145-1785 Takao Hayashi Science & Engg. Research Institute Doshisha University Kyotanabe Kyoto 610-0394 F. Jamil Ragep Islamic Studies McGill University Morrice Hall, 3485 McTavish Street Montreal, Quebec, Canada H3A 1Y1 S. C. Bhatnagar Department of Mathematics University of Nevada Las Vegas Jan P. Hogendljk University of Utrecht P.O. Box 80010 3508 TA Utrecht The Netherlands S. R. Sarma Höhenstr. 28 40227 Düsseldorf Umberto Botttazzni Universita degli Studi di Milano Dipartimento di Matematica Federigo Enriques Via Saldini 50 20133, Milano Jens Hoyrup Roskilde University Section for Philosophy and Science Studies Karine Chemla REHSEIS-CNRS and University Paris7, 75019, Paris, France Subhash Kak Dept. of Computer Sc. MSCS 219 Oklahoma State University Stillwater, OK 74078, USA Chikara Sasaki University of Tokyo 3-8-1 Komaba, Tokyo 153-8902 J. W. Dauben The Graduate Centre CUNY, 33, West 42nd Street New York, NY 10036 Victor J. Katz University of the D.C. 4200 Connecticut Ave. N.W.Washington, D.C 20008 M. S. Sriram Prof. K.V. Sarma Research Foundation Venkatarathnam Nagar Adyar, Chennai - 600020 Nachum Dershowitz Department of Computer Science Tel Aviv University, Tel Aviv Wenlin Li Academy of Mathematics & Systems Science Chinese Academy of Science, No. 55, Zhongguancun East Road, Haidan District, Beijing, 100190, Ioannis M. Vandoulakis The Hellenic Open Unversity School of Humanities 23, Syngrou Avenue, GR-11743, Athens, Greece. Nachum Dershowitz Department of Computer Science Tel Aviv University, Tel Aviv Wenlin Li Academy of Mathematics & Systems Science Chinese Academy of Science, No. 55, Zhongguancun East Road, Haidan District, Beijing, 100190, Ioannis M. Vandoulakis The Hellenic Open Unversity School of Humanities 23, Syngrou Avenue, GR-11743, Athens, Greece. Enrico Giusti Dipartimento di Matematica Viale Morgagni, 67/A I-50134 Firenze, Italy Jean-Paul Pier Société mathématique du Luxembourg 117 rue Jean-Pierre Michels L-4243 Esch-sur-Alzette D. E. Zitarelli Department of Mathematics Temple University Philadelphia, PA 19/22, USA. Volume 44 Issue 2 , (Jul-2022 to Dec-2022) Pre-Eudoxean Geometric Algebra By: Stelios Negrepontis , Vasiliki Farmaki , Demetra Kalisperi Page No : 107-152 In the light of our re-interpretation of Plato’s philosophy and of our reconstruction of the proofs of quadratic incommensurabilities by the Pythagoreans, Theodorus, and Theaetetus, in terms of periodic anthyphairesis, we re-examine the Geometric Algebra hypothesis in Greek Mathematics, originally enunciated by Zeuthen and Tannery and supported by van der Waerden and Weil, but challenged by Unguru and several modern historians. Our reconstruction of these proofs employs, for the computation of the anthyphairetic quotient at every step, the solution of a Pythagorean Application of Areas, either in excess or in defect, and is thus qualified as “school algebra” in the spirit of van der Waerden. For the Application of Areas in defect in the Theaetetean Books X and XIII of the Elements, by which the alogoi lines are characterized, the periodic nature of their anthyphairesis is revealed by the Scholia in Eucliden X.135 and 185 and by our re-interpretation of the ill-understood Meno 86e-87b passage. In conclusion, the pre-Eudoxean uses of Applications of Areas fall under the description of “school algebra” solutions of quadratic equations. It is interesting that these early uses stand in sharp contrast to the later uses of more general versions of Application of Areas by Appolonius in his Conic Sections, and which, according to Zeuthen, qualify as Geometric Algebra too, but in the form of pre-Analytic Geometry. Stelios Negrepontis : Department of Mathematics, Athens University, Athens 157 84, Greece Vasiliki Farmaki : Department of Mathematics, Athens University, Athens 157 84, Greece Demetra Kalisperi : Department of Mathematics, Athens University, Athens 157 84, Greece DOI : https://doi.org/10.32381/GB.2022.44.2.1 Price: 251 The “Hundred Fowls” Problem in the Gaṇitasārasaṅgraha of Mahāvīrācārya and Some New Perspectives on the “Kuṭṭaka” By: Catherine Morice-Singh Page No : 153-191 Our main goal in this paper is to analyze the two rules for solving “hundred fowls” type of problems described in Mahāvīrācārya’s well-known Gaṇitasārasaṅgraha. This will be done based on two manuscripts that Prof. M. Rangacharya consulted to prepare his edition and translation of the text, in 1912, and which are still available at the Government Oriental Manuscripts Library and Research Centre – Chennai (Madras). One of the manuscripts contains a running commentary in a medieval form of Kannada that is particularly useful for clarifying the steps of the algorithms. It allows us to see how Rangacharya, in an unusual way, deviated for the first example from the solution given in the manuscripts and provided his own solution instead. It will also allow us to appreciate the uniqueness and originality of Mahāvīrācārya’s second rule. We are fortunate that four well-known Sanskrit texts propound independent rules for this type of problems and give as illustration an identical example involving the buying of four species of birds. This is a rare instance that can help us revise previous understandings regarding the meaning of technical terms such as kuṭṭaka and kuṭṭīkāra – usually considered as synonyms and translated as “pulverizers” – and suggest new perspectives. Catherine Morice-Singh : c/o Laboratoire SPHERE, 8 Rue Albert Einstein, Bâtiment Olympe de Gouge. Université Paris Cité, F-75013 Paris, France DOI : https://doi.org/10.32381/GB.2022.44.2.2 Price: 251 Indian Solutions for Conjunct Pulverisers (lafÜy"Vdqêd) From Āryabhaṭa II to Devarāja By: Shriram M Chauthaiwale Page No : 193-204 After canvassing the solutions for indeterminate linear equations (kuṭṭaka), Indian scholars deliberated on the common solution for the two systems of similar equations under the caption “Conjunct Pulverisers (saṃśliṣṭakuṭṭaka).” Āryabhaṭa II, Mahāvīra, Śrīpatī, Bhāskara II, Nārāyaṇa Paṇḍita, Kṛṣṇa Daivajña, and Devarāja is the chain of the Indian scholars who explained similar or different methods for extracting the solutions. B. Datta discussed some of these methods, and T. Hayashi commented on Devarāja’s methods. S. K. Ganguli discovered an alternative method from the manuscript copies of Līlāvatī. This paper provides the juxtaposed mathematical formats of the methods after translating the relevant verses. Later, these methods are compared. Illustrations from the referred texts are quoted with answers. Shriram M Chauthaiwale : Lecturer (Rt) in Mathematics, Amolakchand College, Yavatmal (M.H.) DOI : https://doi.org/10.32381/GB.2022.44.2.3 Price: 251 By: .. Page No : 205-208 Price: 251 Jan- to Jun-2022 Geometry in the Mahasiddhanta of Aryabhata II By: Sanatan Koley Page No : 1-50 The aim of this article is to present at first the geometrical rules of the 10th century Indian mathematician-astronomer °ryabhaÇa II that are included in his work MahÀsiddhÀnta, composed in c.950 CE. Thereafter an attempt will be made to throw light upon some concepts of present-day geometry (including mensuration) which are implicit in this medieval work. Author : Dr. Sanatan Koley : Former Headmaster, Jagacha High School (H.S.), Howrah-711112. Present Address : Kadambari Housing Complex, Block-I, Flat-IA, 144 Mohiary Road, Jagacha, P.O. GIP Colony, Howrah-711112, W.B. DOI : https://doi.org/10.32381/GB.2022.44.1.1 Price: 251 Further Examples of Apodictic Discourse, II By: Satyanad Kichenassamy Page No : 51-94 The analysis of problematic mathematical texts, particularly from India, has required the introduction of a new category of rigorous discourse, apodictic discourse. In this second part, we show that its introduction clarifies the approach to epistemic cultures. We also show that the notion of fantasy echo is relevant in Epistemology, as suggested by J.W. Scott. We then continue our earlier analysis of Brahmagupta’s Prop. 12.21-32 on the cyclic quadrilateral and identify discursive strategies that enable him to convey definitions, hypotheses and derivations encoded in the very structure of the propositions stating his new results. We also show that the statements of mathematical formulae in words also follow definite discursive patterns. Author : Satyanad Kichenassamy : Professor of Mathematics, Université de Reims Champagne-Ardenne, Laboratoire de Mathématiques de Reims (CNRS, UMR9008), B.P. 1039, F-51687 Reims Cedex 2. DOI : https://doi.org/10.32381/GB.2022.44.1.2 Price: 251 The Abacus and the Slave Market By: Jens Hoyrup Page No : 95-100 Author : Jens Høyrup : Roskilde University, Section for Philosophy and Science Studies Max-Planck-Institut für Wissenschaftsgeschichte, Berlin, Germany. DOI : https://doi.org/10.32381/GB.2022.44.1.3 Price: 251 Some Recent Publications in History of Mathematics By: .. Page No : 101-106 Price: 251 Jan-2021 to Jun-2021 Peeping into Fibonacci’s Study Room By: Jens Hoyrup Page No : 1-70 The following collects observations I made during the reading of Fibonacci’s Liber abbaci in connection with a larger project, “abbacus mathematics analyzed and situated historically between Fibonacci and Stifel”. It shows how attention to the details allow us to learn much about Fibonacci’s way to work. In many respects, it depends crucially upon the critical edition of the Liber abbaci prepared by Enrico Giusti and upon his separate edition of an earlier version of its chapter 12 – not least on the critical apparatus of both. This, and more than three decades of esteem and friendship, explain the dedication. Jens Hoyrup Roskilde University, Section for Philosophy and Science Studies, Denmark. DOI : https://doi.org/10.32381/GB.2021.43.1.1 Price: 251 Treatment of ‘Very large number’ in Cyrillic Numeration By: Dionisy I. Pronin Page No : 71-86 The paper is dedicated to signs meaning ‘very large number’, that is, for 10,000 and higher, in Cyrillic. We discuss the manuscript ‘Arithmetics’ (SaintPetersburg, Russian National Library, Titov. 2414) of XVII c. which contains a previously unknown term ‘kony’ and its sign. The data on ‘very large number’ in this and some other manuscripts probably represent the development of The paper is dedicated to signs meaning ‘very large number’, that is, for 10,000 and higher, in Cyrillic. We discuss the manuscript ‘Arithmetics’ (SaintPetersburg, Russian National Library, Titov. 2414) of XVII c. which contains a previously unknown term ‘kony’ and its sign. The data on ‘very large number’ in this and some other manuscripts probably represent the development of new terms and extension of the counting limit. Another manuscript ‘Arithmetics’ (Saint-Petersburg, Russian National Library, Q.IX.46) of mid. XVII c. contains description of three alternative systems of values of numbers called small number, middle number and great number. The last one, the great number had three different variants of values for terms. We distinguish among concepts of numeral-sign and numeral-term, and discuss differences between numeral-signs and signs with meaning ‘very large number; indeterminately large number’new terms and extension of the counting limit. Another manuscript ‘Arithmetics’ (Saint-Petersburg, Russian National Library, Q.IX.46) of mid. XVII c. contains description of three alternative systems of values of numbers called small number, middle number and great number. The last one, the great number had three different variants of values for terms. We distinguish among concepts of numeral-sign and numeral-term, and discuss differences between numeral-signs and signs with meaning ‘very large number; indeterminately large number’. Author : Dionisy I. Pronin Independent Researcher, Russia, Yakutsk. DOI : https://doi.org/10.32381/GB.2021.43.1.2 Price: 251 Some Recent Publications in History of Mathematics By: No author Page No : 87-92 Price: 251 Jul- to Dec-2021 Further Examples of Apodictic Discourse, I By: Satyanad Kichenassamy Page No : 93-120 The analysis of problematic mathematical texts, particularly from India, has required the introduction of a new category of rigorous discourse, apodictic discourse. We briefly recall why this introduction was necessary. We then show that this form of discourse is widespread among scholars, even in contemporary Mathematics, in India and elsewhere. It is in India a natural outgrowth of the emphasis on non-written communication, combined with the need for freedom of thought. New results in this first part include the following: (i) ?ryabha?a proposed a geometric derivation of a basic algebraic identity; (ii) Brahmagupta proposed an original argument for the irrationality of quadratic surds on the basis of his results on the varga-prak?ti problem, thereby justifying his change in the definition of the word karani. DOI : https://doi.org/10.32381/GB.2021.43.2.1 Price: 251 Meanings of savarnana in Indian Arithmetic By: Taro Tokutake Page No : 121-149 In Indian mathematical texts the term savarnana “reduction to the same color” is usually found in the context of calculation for fractions. A number of explanations for the term have been offered in previous studies, but they slightly differ from each other. The Trisatibhasya is an anonymous commentary on Sridhara’s Trisati. In the present paper, I survey the meanings of savarnana in each text and the usage of it in the Trisatibhasya. DOI : https://doi.org/10.32381/GB.2021.43.2.2 Price: 251 Aryabhatiya 2.19 in a Commentary on Two Examples from Sridhara’s Patiganita: Several Alternative Ways Illustrated By: Taro Tokutake , Takanori Kusuba Page No : 151-165 In a commentary on example verse 112 for rule verses 97-98 in the mathematical series of the Patiganita, various solutions of a problem are described. After solving the problem according to the given rule, the commentator shows alternative methods: Aryabhatiya 2.19, linear equations, and rule verses 99-101. Also in the commentary on example verse 113 for rule verses 99-101, he again employs Aryabhatiya 2.19. The present paper has a threefold objective. First, we fully investigate the ways of solving which the commentary exhibits for the two examples. Secondly, we point out particularly where Aryabhatiya 2.19 is applied, although neither the author Aryabhatiya nor the title of his work is cited in the commentary. And thirdly, we study excerpts of rules concerning the bijaganita quoted there. DOI : https://doi.org/10.32381/GB.2021.43.2.3 Price: 251 Al-Biruni’s Remark About Medieval Indian Theory By: Yue Pan Page No : 167-176 As a Medieval Muslim polymath, al-Biruni had also been an observer of Indian astronomy. He gave some opinions on Indian theory of precession in his Tahqiq ma li-l-Hind. Al- Biruni adhered to Ptolemaic theory of the movement of the sphere of the fixed stars, which is opposite to medieval Indian theory of precession. It was such a contradiction that made al- Biruni misjudge medieval Indian theory of precession. This case reveals a particular aspect, both of the difference between pre-Ptolemaic Greco-Indian astronomy and Ptolemaic Greek one, and of the influence of Greek thought on Muslim scholars including al- Biruni. DOI : https://doi.org/10.32381/GB.2021.43.2.4 Price: 251 Several Algebraic Unknowns – The Road from Pacioli to Descarte By: Jens Hoyrup Page No : 177-198 At the Annual conference of the Indian Society for History of Mathematics in 2020 I spoke about the scattered use of several algebraic unknowns in Italian algebra from Fibonacci to Pacioli, and in 2021 about Benedetto da Firenze’s introduction of symbolic algebraic calculations with up to five unknowns in 1463 – the latter having no impact whatsoever on future developments. Here I shall complete what was not originally planned to become a triptych, looking at the development of the technique from Pacioli onward in the writings of Rudolff, Stifel and Mennher. In the end I shall consider the likely influence on Viete’s and Descartes’ algebras, together with the reasons for their unprecedented introduction of abstract coefficients. DOI : https://doi.org/10.32381/GB.2021.43.2.5 Price: 251 News : Professor R. C. Gupta honored with Padma Shri By: No author Page No : 199 Price: 251 Jan-2020 to Dec-2020 The Central Role of Incommensurability in Pre-Euclidean Greek Mathematics and Philosophy By: Stelios Negrepontis , Vassiliki Farmaki , Marina Brokou Page No : 1-34 In this paper we outline the tremendous impact that the Pythagorean discovery of incommensurability had on pre-Euclidean Greek Mathematics and Philosophy. This will be a consequence of our findings that the Pythagorean method of proof of incommensurability is anthyphairetic, namely depends on Proposition X.2 of the Elements, according to which if the anthyphairesis of two line segments is infinite, then they are incommensurable. Our fundamental finding is that the main entity of Plato’s philosophy, the intelligible Being, is a philosophical analogue/imitation of a dyad in periodic anthyphairesis. One byproduct of our deeper and mathematical understanding of Plato’s philosophy is that we can next show (a) that Plato’s intelligible Beings coincide with the earlier Zeno’s true Beings, and (b) that the purpose of Zeno’s arguments and most exciting paradoxes is not to deny motion or multiplicity, as usually thought, but to separate the true Beings from the sensible entities of opinion. Although Plato’s early/middle work is greatly influenced by the Pythagoreans and Zeno, in his late work he employed via philosophical imitation, the stunning discovery of the great Athenian mathematician Theaetetus, namely the palindromic periodicity theorem for quadratic incommensurabilities (established in modern era by Lagrange and Euler). The study of incommensurability via periodic anthyphairesis produced great Mathematics and great Philosophy; however this approach could only deal with quadratic, and did not extend to solid incommensurabilities. Archytas and Eudoxus marked the beginning of a new, non-anthyphairetic era for incommensurability. In one way or another, the Greek Mathematics (Pythagoreans, Theodorus, Theaetetus, Archytas, Eudoxus) and Philosophy (Pythagoreans, Zeno, Plato) of the pre-Euclidean era were dominated by the Pythagorean discovery of incommensurability. Authors : Stelios Negrepontis Professor Emeritus, Mathematics Department, Athens University, Athens, Greece. Vassiliki Farmaki Professor Emeritus, Mathematics Department, Athens University, Athens, Greece. Marina Brokou Ph. D. Candidate, Mathematics Department, Athens University, Athens, Greece. DOI : https://doi.org/10.32381/GB.2020.42.1-2.1 Price: 251 Some Magic and Latin Squares and the BhuvaneœvarÁ and Other Bimagic Squares By: R. C. Gupta Page No : 35-54 The nine Indian planetary magic squares of order 3 are attributed to Garga, who is said to belong to the hoary past. Formation of magic squares of order 9 from those of order 3, as bimagic squares, is found both in India and China. BhuvaneœvarÁ yantra is a bimagic square of order 8. Its full Sanskrit text, along with translation and the method of construction is described in the present paper. Construction of bimagic squares of orders 8 and 9 from simple orthogonal Latin squares is dealt with in detail. Recalling that Euler’s conjecture on Latin squares has been disproved, a counter-example of order 10 is described. Author : R. C. Gupta R20, Ras Bahar Colony P.O. Sipri Bazar, Jhansi, U.P., India. DOI : https://doi.org/10.32381/GB.2020.42.1-2.2 Price: 251 Fifteenth-century Italian symbolic algebraic calculation with four and five unknowns By: Jens Hoyrup Page No : 55-86 The present article continues an earlier analysis of occurrences of two algebraic unknowns in the writings of Fibonacci, Antonio de’ Mazzinghi, an anonymous Florentine abbacus writer from around 1400, Benedetto da Firenze and another anonymous Florentine writing some five years before Benedetto, and Luca Pacioli. Here I investigate how in 1463 Benedetto explores the use of four or five algebraic unknowns in symbolic calculations, describing it afterwards in rhetorical algebra; in this way he thus provides a complete parallel to what was so far only known (but rarely noticed) from Michael Stifel’s Arithmetica integra (1544) and Johannes Buteo’s Logistica (1559). It also discusses why Benedetto may have seen his innovation as a merely marginal improvement compared to techniques known from Fibonacci’s Liber abbaci, therefore failing to make explicit that he has created something new. Author : Jens Hoyrup Roskilde University, Section for Philosophy and Science Studies, Denmark. DOI : https://doi.org/10.32381/GB.2020.42.1-2.3 Price: 251 Clairaut, Euler and the Figure of the Earth By: Athanase Papadopoulos Page No : 87-127 The sphericity of the form of the Earth was questioned around the year 1687, primarily, by Isaac Newton who deduced from his theory of universal gravitation that the Earth has the form of a spheroid flattened at the poles and elongated at the equator. In France, some preeminent geographers were not convinced by Newton’s arguments, and about the same period, based on empirical measurements, they emitted another theory, claiming that on the contrary, the Earth has the form of a spheroid flattened at the equator and elongated at the poles. To find the real figure of the Earth became one of the major questions that were investigated by geographers, astronomers, mathematicians and other scientists in the 18th century, and the work done around this question had an impact on the development of all these fields. In this paper, we review the work of the 18th-century French mathematician, astronomer and geographer Alexis-Claude Clairaut related to the question of the figure of the Earth. We report on the relation between this work and that of Leonhard Euler. At the same time, we comment on the impact of the question of the figure of the Earth on mathematics, astronomy and hydrostatics. Finally, we review some later mathematical developments that are due to various authors that were motivated by this question. It is interesting to see how a question on geography had such an impact on the theoretical sciences. Author : Athanase Papadopoulos Universite de Strasbourg and CNRS, 7 rue René Descartes, 67084 Strasbourg Cedex, France. DOI : https://doi.org/10.32381/GB.2020.42.1-2.4 Price: 251 Apodictic discourse and the Cauchy-Bunyakovsky-Schwarz inequality By: Satyanad Kichenassamy Page No : 129-147 Bunyakovsky’s integral inequality (1859) is one of the familiar tools of modern analysis. We try and understand what Bunyakovsky did, why he did it, why others did not follow the same path, and explore some of the mathematical (re)interpretations of his inequalities. This is achieved by treating the texts as discourses that provide motivation and proofs by their very discursive structure, in addition to what meets the eye at first reading. Bunyakovsky paper is an outgrowth of the mathematical theory of mean values in Cauchy’s work (1821), but viewed from the point of view of Probability and Statistics. Liouville (1836) gave a result that implies Bunyakovsky’s inequality, but did not identify it as significant because his interests lay elsewhere. Grassmann (1862) stated the inequality in abstract form but did not prove it for reasons that can be identified. Finally, by relating the result to quadratic binary forms, Schwarz (1885) opened the way to a geometric interpretation of the inequality that became important in the theory of integral equations. His argument is the source of one of the proofs most commonly taught nowadays. At about the same time, the Rogers- Hölder inequality suggested generalizations of Cauchy’s and Bunyakovsky’s results in an entirely different direction. Later extensions and reinterpretations show that no single result, even now, subsumes all known generalizations. Author : Satyanad Kichenassamy Université de Reims Champagne-Ardenne,Laboratoire de Mathématiques (CNRS, UMR9008), B.P. 1039, F-51687 Reims Cedex 2, France. DOI : https://doi.org/10.32381/GB.2020.42.1-2.5 Price: 251 Department of Mathematics at Banaras Hindu University: A history, circa 1916-1950 By: Ritesh Gupta Page No : 149-173 A historical study of Science Colleges and their constituting departments and disciplines, viz. Mathematics, Physics, Chemistry, Zoology, Botany, et cetera established in early universities could bring to light new facts and values to the history of science in modern India. However, not much scholarship has catered to the institutional histories of Science Colleges established in the late nineteenth and early twentieth centuries. Survey and scrutiny of institutionalization of modern sciences and mathematics in Indian universities have remained rather neglected. Therefore, the present paper explores the early history of the Banaras Hindu University’s (B.H.U.) Mathematics Department. It lists out the first-generation mathematicians of the university, their education and training, the national and international collaborations, research, and scientific publications. Author : Ritesh Gupta Ph.D. Research Scholar, Zakir Husain Centre for Educational Studies, Jawaharlal Nehru University, New Delhi. DOI : https://doi.org/10.32381/GB.2020.42.1-2.6 Price: 251 By: M.S. Sriram Page No : 175-181 Ganitagannadi, (Mirror of Mathematics) – An astronomy text of 1604 CE in Kannada by Sankaranarayana Joisaru of Srngeri by B.S. Shylaja and Seetharama Javagal Reviewed by M.S. Sriram Prof. K.V. Sarma Research Foundation, 42, Venkatarathnam Nagar, Adyar, Chennai, India. Price: 251 By: Avinash Sathaye Page No : 182-186 A Primer to Bharatiya Ganitam; Bharatiya-Ganita-Pravesa by M.D. Srinivas (Editor), and Authors: V. Ramakalyani, M.V. Mohana, R.S. Venkatakrishna and N. Kartika Reviewed by Avinash Sathaye Department of Mathematics, University of Kentucky, Lexington KY, U.S.A. Price: 251 Some recent publications in History of Mathematics By: No author Page No : 187-196 Price: 251 Jan-2019 to Dec-2019 Brahmagupta’s Apodictic Discourse By: Satyanad Kichenassamy Page No : 1-21 We continue our analysis of Brahmagupta’s BrÀhmasphuÇasiddhÀnta (India, 628), that had shown that each of his sequences of propositions should be read as an apodictic discourse: a connected discourse that develops the natural consequences of explicitly stated assumptions, within a particular conceptual framework. As a consequence, we established that Brahmagupta did provide a derivation of his results on the cyclic quadrilateral. We analyze here, on the basis of the same principles, further problematic passages in Brahmagupta’s magnum opus, regarding number theory and algebra. They make no sense as sets of rules. They become clear as soon as one reads them as an apodictic discourse, so carefully composed that they leave little room for interpretation. In particular, we show that (i) Brahmagupta indicated the principle of the derivation of the solution of linear congruences (the kuÇÇaka) at the end of chapter 12 and (ii) his algebra in several variables is the result of the extension of operations on numbers to new types of quantities – negative numbers, surds and “non-manifest” variables. DOI : https://doi.org/10.32381/GB.2019.41.1-2.1 Price: 251 Reinventing or Borrowing Hot Water? Early Latin and Tuscan Algebraic Operations with Two Unknowns By: Jens Hoyrup Page No : 23-67 In mature symbolic algebra, from Viète onward, the handling of several algebraic unknowns was routine. Before Luca Pacioli, on the other hand, the simultaneous manipulation of three algebraic unknowns was absent from European algebra and the use of two unknowns so infrequent that it has rarely been observed and never analyzed. The present paper analyzes the five occurrences of two algebraic unknowns in Fibonacci’s writings; the gradual unfolding of the idea in Antonio de’ Mazzinghi’s Fioretti; the distorted use in an anonymous Florentine algebra from ca 1400; the regular appearance in the treatises of Benedetto da Firenze; and finally what little we find in Pacioli’s Perugia manuscript and in his Summa. It asks which of these appearances of the technique can be counted as independent rediscoveries of an idea present since long in Sanskrit and Arabic mathematics – metaphorically, to which extent they represent reinvention of the hot water already available on the cooker in the neighbour’s kitchen; and it raises the question why the technique once it had been discovered was not cultivated – pointing to the line diagrams used by Fibonacci as a technique that was as efficient as rhetorical algebra handling two unknowns and much less cumbersome, at least until symbolic algebra developed, and as long as the most demanding problems with which algebra was confronted remained the traditional recreational challenges. DOI : https://doi.org/10.32381/GB.2019.41.1-2.2 Price: 251 Nearest-Integer Continued Fractions in Drkkarana By: Venketeswara Pai R. , M. S. Sriram Page No : 69-89 The Karaõa texts of Indian astronomy give simplified expressions for the mean rates of motion of planets. The Kerala text Karaõapaddhati (c. 1532-1566 CE) expresses these rates which involve ratios of large numerators or multipliers (guõakras) and large demominators or divisors (hÀrakas), as ratios of smaller numbers using essentially the method of simple continued fraction expansion. A modified version of this method is described in a slightly later Malayalam text named DÃkkaraõa (c. 1608 CE), also. A very interesting feature of the DÃkkaraõa algorithm is that a nearest-integer continued fraction expansion with the minimal length is implicit in it. We discuss this algorithm in this paper. DOI : https://doi.org/10.32381/GB.2019.41.1-2.3 Price: 251 Mathematics and Map Drawing in the Eighteenth Century By: Athanase Papadopoulos Page No : 91-126 We consider the mathematical theory of geographical maps, with an emphasis on the eighteenth century works of Euler, Lagrange and Delisle. This period is characterized by the frequent use of maps that are no more obtained by the stereographic projection or its variations, but by much more general maps from the sphere to the plane. More especially, the characteristics of the desired geographical maps were formulated in terms of an appropriate choice of the images of the parallels and meridians, and the mathematical properties required by the map concern the distortion of the maps restricted to these lines. The paper also contains some notes on the general use of mathematical methods in cartography in Greek Antiquity, and on the mutual influence of the two fields, mathematics and geography. DOI : https://doi.org/10.32381/GB.2019.41.1-2.4 Price: 101 On the Contribution of Anders Johan Lexell in Spherical Geometry By: A. Zhukova Page No : 127-149 In this paper, we discuss results in spherical geometry that were obtained by a remarkable mathematician of the XVIIIth century, Anders Johan Lexell. We also present a short note on the place of these results in the history of this field as well as a short biography of Lexell. DOI : https://doi.org/10.32381/GB.2019.41.1-2.5 Price: 251 Magic Squares and Other Numerical Diagrams on the Chittagong Plaster Replicas in the David Eugene Smith Collection By: Takao Hayashi Page No : 151-180 The Rare Book and Manuscript Library of Columbia University has a set of 20 plaster replicas that D. E. Smith brought from Chittagong in 1907 CE. They are twin replicas of 10 stone slabs. Most of the replicas show one or a few numerical diagrams including magic squares. In this paper I analyze them and discuss their construction methods. DOI : https://doi.org/10.32381/GB.2019.41.1-2.6 Price: 251 By: .. Page No : 181-196 Price: 251 -2018 to Jun-2018 T.A. Sarasvati Amma: A Centennial Tribute By: P. P. Divakaran Page No : 1-16 Sarasvati Amma published very few research papers. All her insights into the Indian mathematical (specifically, geometric) tradition are to be found in her book “Geometry in Ancient and Medieval India”, published in 1979 but prepared as her thesis in the University of Madras 20 years earlier. The present article is, consequently, an evaluation of the mathematics described in the book and of the historiographic significance of its interpretation by her. The book pays specific attention to certain themes: e.g., the key ideas of the geometry of the Vedic period, cyclic quadrilaterals, geometric algebra etc. and, especially, the infinitesimal trigonometry of MÀdhava, all in a style designed to bring out the continuity in their evolution. The case is made in this article that Sarasvati Amma’s work, along with the earlier book of B. Datta and A. N. Singh, marks the founding of an autonomous discipline of scholarship into India’s mathematical past. DOI : https://doi.org/10.32381/GB.2018.40.01.1 Price: 251 The Seminal Contribution of K. S. Shukla to our Understanding of Indian Astronomy and Mathematics By: M. D. Srinivas Page No : 17-51 In this article we shall highlight some of the important contributions to the study of Indian astronomy and mathematics made by Prof. Kripa Shankar Shukla (1918 - 2007), on the occasion of his birth centenary. Shukla was a student of Prof. A. N. Singh (1905 - 1954) at Lucknow University and was also fortunate to have come in close contact with Prof. Singh’s renowned collaborator Bibhutibhusan Datta (1888-1958). Dr. Shukla became the worthy successor of Prof. Singh to lead the research programme on Indian astronomy and mathematics at Lucknow University. Prof. Shukla brought out landmark editions of twelve important source-works of Indian astronomy and mathematics. A remarkable feature of many of these editions is that they also include lucid English translations and detailed explanatory notes. This is indeed one of the greatest contributions of Prof. Shukla since, till the 1960s, there had been very few editions of the classical source-works of Indian astronomy which also included a translation as well as explanatory notes. The editions of Shukla have become standard textbooks for the study of development of Indian astronomy during the classical Siddhantic period from Aryabhata to Sripati. DOI : https://doi.org/10.32381/GB.2018.40.01.2 Price: 251 On Old Babylonian Mathematical Terminology and its Transformations in the Mathematics of Later Periods By: Jens Hoyrup Page No : 53-99 Third-millennium (BCE) Mesopotamian mathematics seems to have possessed a very restricted technical terminology. However, with the sudden flourishing of supra-utilitarian mathematics during the Old Babylonian period, in particular its second half (1800–1600 BCE) a rich terminology unfolds. This mostly concerns terms for operations and for definition of a problem format, but names for mathematical objects, for tools, and for methods or tricks can also be identified. In particular the terms for operations and the way to structure problems turn out to allow distinction between single localities or even schools. After the end of the Old Babylonian period, the richness of the terminology is strongly reduced, as is the number of known mathematical texts, but it presents us with survival as well as innovations. Apart from analyzing the terminology synchronically and diachronically, the article looks at two long-lived non-linguistic mathematical practices that can be identified through the varying ways they are spoken about: the use of some kind of calculating board, and a way to construct the perimeter of a circle without calculating it – the former at least in use from the 26th to the 5th century BCE, the later from no later than Old Babylonian times and surviving until the European 15th century CE. DOI : https://doi.org/10.32381/GB.2018.40.01.3 Price: 251 Jul-2018 to Dec-2018 Katyayana Sulvasutra : Some Observations By: S. G. Dani Page No : 101-114 The KÀtyÀyana ŒulvasÂtra has been much less studied or discussed from a modern perspective, even though the first English translation of two adhyÀyas (chapters) from it, by Thibaut, appeared as far back as 1882. Part of the reason for this seems to be that the general approach to the ŒulvasÂtra studies has been focussed on “the mathematical knowledge found in them (as a totality)”; as the other earlier ŒulvasÂtras, especially of BaudhÀyana and °pastamba substantially cover the ground in this respect, the other two ŒulvasÂtras, MÀnava and KÀtyÀyana, received much less attention, the latter especially so. On the other hand the broader purpose of historical mathematical studies extends far beyond cataloguing what was known in various cultures, rather to understand the ethos of the respective times from a mathematical point of view, in their own setting, in order to evolve a more complete picture of the mathematical developments, ups as well as downs, over history. Viewed from this angle, a closer look at KÀtyÀyana ŒulvasÂtra assumes significance. Coming at the tail-end of the ŒulvasÂtras period, after which the ŒulvasÂtras tradition died down due to various historical reasons that are really only partly understood, makes it special in certain ways. What it omits to mention from the body of knowledge found in the earlier ŒulvasÂtras would also be of relevance to analyse in this context, as much as what it chooses to record. Other aspects such as the difference in language, style, would also reflect on the context. It is the purpose here to explore this direction of inquiry. DOI : https://doi.org/10.32381/GB.2018.40.02.1 Price: 251 Essay Review: On the Interpretations of the History of Diophantine Analysis: A Comparative Study of Alternate Perspectives By: Ioannis Vandoulakis Page No : 115-151 This is a review of the following two books, in particular comparing them with relevant works of I.G.Bashmakova on the topic. Les Arithmétiques de Diophante : Lecture historique et mathématique, par Roshdi Rashed en collaboration avec Christian Houzel, Berlin, New York : Walter de Gruyter, 2013, IX-629 p. Histoire de l’analyse diophantienne classique : D’Ab KÀmil à Fermat, par Roshdi Rashed, Berlin, New York : Walter de Gruyter, 2013, X-349 p. DOI : https://doi.org/10.32381/GB.2018.40.02.2 Price: 251 Nasir al-Din al-Tusi Treatise on the Quadrilateral: The Art of Being Exhaustive By: Athanase Papadopoulos Page No : 153-180 We comment on some combinatorial aspects of Nasir al-Din al-Tusi Treatise on the Quadrilateral, a 13th century work on spherical trigonometry. DOI : https://doi.org/10.32381/GB.2018.40.02.3 Price: 251 By: .. Page No : 181-190 The Mathematics of India : Concepts, Methods, Connections by P. P. Divakaran Reviewed by Satyanad Kichenassamy Price: 251 By: .. Page No : 191-198 Karanaapaddhati of Putumana Somayaji with translation and explanatory notes by Venketeswara Pai, K. Ramasubramanian, M.S. Sriram and M.D. Srinivas Reviewed by S.G. Dani and Clemency Montelle Price: 251 Jan-2017 to Jun-2017 Archimedes – Knowledge and Lore from Latin Antiquity to the Outgoing European Renaissance By: Jens Hoyrup Page No : 1-21 With Apuleius and Augustine as the only partial exceptions, Latin Antiquity did not know Archimedes as a mathematician but only as an ingenious engineer and astronomer, serving his city and killed by fatal distraction when in the end it was taken by ruse. The Latin Middle Ages forgot even much of that, and when Archimedean mathematics was translated in the 12th and 13th centuries, almost no integration with the traditional image of the person took place. Petrarca knew the civically useful engineer and the astrologer (!); no other fourteenth-century Humanist seems to know about Archimedes in any role. In the 15th century, however, “higher artisans” with Humanist connections or education took interest in Archimedes the technician and started identifying with him. In mid-century, a new translation of most works from the Greek was made by Jacopo Cremonensis, and Regiomontanus and a few other mathematicians began resurrecting the image of the geometer, yet without emulating him in their own work. Giorgio Valla’s posthumous De expetendis et fugiendis rebus from 1501 marks a watershed. Valla drew knowledge of the person as well as his works from Proclus and Pappus, thus integrating the two. Over the century, a number of editions also appeared, the editio princeps in 1544, and mathematical work following in the footsteps of Archimedes was made by Maurolico, Commandino and others. The Northern Renaissance only discovered Archimedes in the 1530s, and for long only superficially. The first to express a (purely ideological) high appreciation was Ramus in 1569, and the first to make creative use of his mathematics was Viète in the 1590s. Price: 251 On the History of Nested Intervals: From Archimedes to Cantor By: G. I. Sinkevich Page No : 23-45 The idea of the principle of nested intervals, or the concept of convergent sequences which is equivalent to this idea, dates back to the ancient world. Archimedes calculated the unknown in excess and deficiency, approximating with two sets of values: ambient and nested values. J. Buridan came up with a concept of a point lying within a sequence of nested intervals. P. Fermat, D. Gregory, I. Newton, C. MacLaurin, C. Gauss, and J.-B. Fourier used to search for an unknown value with the help of approximation in excess and deficiency. In the 19th century, in the works of B. Bolzano, A.-L. Cauchy, J.P.G. Lejeune Dirichlet, K. Weierstrass, and G. Cantor, this logical construction turned into the analysis argumentation method. The concept of a real number was elaborated in the 1870s in works of Ch. Méray, Weierstrass, H.E. Heine, Cantor, and R. Dedekind. Cantor’s elaboration was based on the notion of a limiting point and principle of nested intervals. What discuss here the development of the idea starting from the ancient times. Price: 251 Explanation of the Vakyasodhana procedure for the Candravakyas By: M. S. Sriram Page No : 47-53 The CandravÀkyas of MÀdhava give the true longitude of the Moon for each day of an anomalistic cycle of 248 days. These coincide with the computed values within an error of one second. Traditionally, a vÀkyaœodhana (exculpating vÀkyas) has been prescribed to check the correctness of the numerical values given by the vÀkyas, if there is any doubt about any of them. We are not aware of any explanation for this procedure in any commentary. In this article, we provide an explanation for the vÀkyaœodhana - procedure, based on the traditional trairÀœika or the “rule of three” procedure. Price: 251 Madhyahnakalalagna in Karanapaddhati of Putumana Somayaji By: Venketeswara Pai R. , M. S. Sriram Page No : 55-74 Madhyahnakalalagna is the time interval between the rise of the equinox and the instant when a star with a non zero latitude is on the meridian. Algorithms for finding the Madhyahnakalalagna are given in the text Karaõapaddhati of Putumana Somayaji. These have no equivalents in the other Kerala astronomical works. Only a person with a great deal of insight into the subject of spherical trigonometry could have arrived at these algorithms. In this paper, we present four algorithms to find the Madhyahnakalalagna as described in the text. We also provide the detailed derivation of two of them, adopting the method of Yuktibhasha for such problems, as it is very likely that the author would have followed such a method. Price: 251 Vedic Mathematics and Science in the Vedas (in Kannada) by S. Balachandra rao Reviewed by Surabhi Saccidananda By: .. Page No : 75-77 Price: 251 Some Recent Publications in History of Mathematics By: .. Page No : 79-90 Price: 251 By: .. Page No : 91-92 Price: 251 By: .. Page No : 93-94 Price: 251 Jul-2017 to Dec-2017 An Indian Version of al-Kashi’s Method of Iterative Approximation of sin 1 By: Kim Plofker Page No : 95-106 The well-known “feedback loop” of trigonometry of sines, from its origin in Indian astronomy to the Islamic world in the first millennium CE and back to India in the mid-second, includes many interesting and under-studied developments. This paper examines a Sanskrit adaptation and refinement of a medieval method foRsine approximation, apparently from the court of Jai Singh in the early 18th century. Price: 251 Nilakantha's Critique on Aryabhata's Verses on Squaring and Square-roots By: N. K. Sundareswaran Page No : 107-124 NÁlakaõÇha’s commentary on °ryabhaÇÁya is well known for clarity and simplicity of language and for its expository nature. He goes on clarifying all the possible doubts. The way in which he formulates and interconnects ideas is simply beautiful. At times his commentary on a particular point runs into pages. But it would be a pleasure to read it, for, the language and style of argument are the same as in a polemical text of philosophy. This paper makes a close study of the commentary on the fourth verse of GaõitapÀda, wherein NÁlakaõÇha explains the method for finding the square root of a number, focusing on the development of ideas and the thought process. Here NÁlakaõÇha deals, at length, with many of the rationales and the concepts involved. The explanation given by NÁlakaõÇha for BaudhÀyana’s approximation of is unique. It is a fine specimen of geometrical demonstration of arithmetical ideas, a significant trend of medieval school of Kerala mathematics. Price: 251 Sign and Reference in Greek Mathematics By: Ioannis Vandoulakis Page No : 125-145 In this paper, we will examine some modes of reference to mathematical entities used in Greek mathematical texts. In particular, we examine mathematical texts from the Early Greek period, the Euclidean, Neo-Pythagorean, and Diophantine traditions. Price: 251 On the History of Analysis -The Formation of Concepts By: G. Sinkevich Page No : 147-162 Mathematical analysis was conceived in XVII century in the works of Newton and Leibniz. The issue of logical rigor in definitions was however first considered by Arnauld and Nicole in ‘’Logique ou l’art de penser’’. They were the first to distinguish between the bulk of the concept and its structure. They created a tradition which was strong in mathematics till XIX century, especially in France. The definitions were in binomial nomenclature mostly, but another type of definition appears in Cantor theory – it was the descriptive definition. As it used to be in humanities, first the object had only one characteristic, then as research continued it got enriched with new characteristics leading to a fledged concept. In this way mathematics acquired its own creativity. In 1915 Luzin laid down a new principle of the descriptive theory: a structural characteristic is done, the analytical form had to be found. New schools of descriptive set theory appeared in Moscow in the first half of the 20th century. Price: 251 By: .. Page No : 163-173 Price: 251 By: .. Page No : 195-197 Price: 251 Jan-2016 to Jun-2016 Embedding: Multipurpose Device for Understanding Mathematics and its Development, or Empty Generalization? By: Jens Hoyrup Page No : 1-29 “Embedding” as a technical concept comes from linguistics, more precisely from grammar. The present paper investigates whether it can be applied fruitfully to certain questions that have been investigated by historians (and sometimes philosophers) of mathematics: 1. The construction of numeral systems, in particular place-value and quasi place-value systems. 2. The development of algebraic symbolisms. 3. The discussion whether “scientific revolutions” ever take place in mathematics, or new conceptualizations always include what preceded them. A final section investigates the relation between spatial and linguistic embedding and concludes that the spatio-linguistic notion of embedding can be meaningfully applied to the former two discussions, whereas the apparent embedding of older within new theories is rather an ideological mirage. Price: 251 Rolle’s Theorem and Bolzano-Cauchy Theorem : A View from the End of the 17th Century until K. Weierstrass’ Epoch By: G. Sinkevich Page No : 31-53 We discuss the history of the famous Rolle’s theorem “If a function is continuous at [a, b], differentiable in (a, b), and f (a) = f (b), then there exists a point c in (a, b) such that f’(c) = 0”, and that of the related theorem on the root interval, “If a function is continuous on [a, b] and has different signs at the ends of the interval, then there exists a point c in (a, b) such that f (c) = 0”. Price: 251 By: .. Page No : 55-72 Price: 251 Some Recent Publications in History of Mathematics By: .. Page No : 73-84 Price: 251 Annual Conference of ISHM - 2015 : A Report By: .. Page No : 85-89 Price: 251 By: .. Page No : 91-92 Price: 251 Jul-2016 to Dec-2016 Siddhanta-karana conversion: Some algorithms in the Grahaganitadhyaya of Bhaskara’s Siddhantasiromani and in his Karanakutuhala By: Kim Plofker Page No : 93-110 One of the great and unique achievements of Sanskrit mathematical astronomy is its wealth of ingenious approximation formulas to substitute for laborious trigonometric computations. This paper examines some intriguing and highly sophisticated examples of such approximations in the twelfth-century work of Bhaskara (II, Bhaskaracarya). Price: 251 By: K. Ramasubramanian , M. D. Srinivas , M. S. Sriram , Page No : 111-139 Price: 251 The poetic features in the golÀdhyÀya of NityÀnanda’s SarvasiddhÀntarÀja By: Anuj Misra , Clemency Montelle , K. Ramasubramanian Page No : 141-156 Many astronomical works in India, like those in other intellectual disciplines, were composed in beautiful verses. While most studies focus on the technical contents of these verses, very few have examined the poetic features, here known as alaôkÀra, that the authors employed to add poetic charm to their treatises. We consider the use of such alaôkÀras in the Gola chapter of a seventeenth century work in Sanskrit astronomy, the SarvasiddhÀntarÀja of Nityananda, and, by highlighting several examples, we examine the ways in which specific embellishments have been woven into the text to make the medium of communication as beautiful as the content. Price: 251 Roshdi Rashed, Historian of Greek and Arabic Mathematics By: Athanase Papadopoulos Page No : 157-182 We survey the work of Roshdi Rashed, the Egyptian-French historian of mathematics. Surveying Rashed’s work gives an overview of the most important part of Greek mathematics that was transmitted to us in Arabic, as well as of the finest pieces of Arabic mathematics that survive. Price: 251 A tribute to Syamadas Mukhopadhyaya– On the occasion of his 150th birth anniversary By: S. G. Dani Page No : 183-194 Price: 251 Annual Conference of ISHM - 2016 : A Report By: .. Page No : 195-197 Price: 251 Jan-2015 to Dec-2015 Bhaskaracarya’s Mathematics and Astronomy: An Overview By: M. S. Sriram Page No : 1-38 Bhaskara’s works incorporate most of the results and methods of mathematics and astronomy in India in his times, and carry them forward significantly. There are clear explanations and proofs of the assertions in the verses in the main texts in his own commentaries on them. This article provides a bird’s eye view of his works. Price: 251 Some Aspects of Patadhikara in Siddhantasiromani By: Venketeswara Pai R. , M. S. Sriram , Sita Sundar Ram Page No : 39-68 Vyatipata and Vaidhrta occur when the magnitudes of the declinations of the Sun and the Moon are equal, and one of them is increasing, while the other is decreasing. In this paper, we discuss the calculations associated with them in the patadhikara in the Grahaganita part of Siddhantsiromani. Some of these are similar to the computations in Brahmasphutasiddhanta and Sisyaddhidatantra, but the computation of the golasandhi appears here for the first time. We also compare Bhaskara’s procedures with the ones in Tantrasangraha (c. 1500 CE). Price: 251 The Phenomena of Retrograde Motion and Visibility of Interior Planets in Bhaskara’s Works By: Shailaja M , Vanaja V , S. Balachandra Rao Page No : 69-82 In this paper we present the interesting phenomena of the retrograde motion of taragrahas as also the visibility of Budha (Mercury) and Sukra (Venus) in the eastern and western horizons. Bhaskaracarya in his astronomical works has dwelt at length on these phenomena and provided the relevant critical and stationary points. WE work out the details in the case of the two interior planets and compare the results with modern ones. Price: 251 True Positions of Planets According to Karanakutuhala By: Vanaja V , Shailaja M , S. Balachandra Rao Page No : 83-96 We will be presenting briefly the procedure of determining the mean and true positions of the Sun, the Moon and the tÀrÀgrahas (planets) according to KaraõakutÂhala of BhÀskara II. We work out the true planetary positions for a contemporary date of BhÀskara’s period and compare the results with those of a couple of other traditional texts and modern procedures for validation of BhÀskara’s procedures and parameters. Price: 251 The Influence of Bhaskaracarya’s Works in “Westernized” Sanskrit Mathematical Traditions By: Kim Plofker Page No : 97-109 The well-known treatises of Bhaskara II or Bhaskaracarya (b.1114) are unanimously recognized as canonical in Sanskrit mathematics and mathematical astronomy, but the specific details of their influence on later works remain largely unexplored (partly because most of those later works themselves still await comprehensive study). This article examines a few texts from the sixteenth to eighteenth centuries whose authors were familiar with some aspects of Greco-Islamic astronomy and mathematics, and discusses their continued use of BhÀskara’s works as a model. Price: 251 Bhaskaracarya’s Treatment of the Concept of Infinity By: Avinash Sathaye Page No : 111-123 Bhaskaracarya’s treatment of the concept of infinity in his Algebra book is strikingly different and his corresponding exercises are sometimes criticized as erroneous. We discuss his ideas and propose a new explanation in terms of an extended number system with idempotents. Price: 251 Issues in Indian Metrology, from Harappa to Bhaskaracharya By: Michel Danino Page No : 125-143 Numerous systems of units were developed in India for lengths, angles, areas, volumes, time or weights. They exhibit common features and a continuity sometimes running from Harappa to BhÀskarÀchÀrya, but also an evolution in time and considerable regional variations. This paper presents an overview of some issues in Indian metrology, especially with regard to units of length and weight, some of which are traceable all the way to the Indus-Sarasvati civilization. It discusses, among others, the aôgula and its multiple variations, and the value of yojana and its impact on calculations for the circumference of the Earth. Price: 251 Indian Records of Historical Eclipses and their Significance By: Aditya Kolachana , K. Ramasubramanian Page No : 145-162 Among the various techniques that are employed in determining the variation in the length of day (LOD), the recorded observations of ancient eclipses play a crucial role, particularly for estimating variations in the remote past. Scholarly investigations of these records preserved in different cultures around the world, for the above purpose, have completely ignored the Indian record of historical eclipses on the presumption that “no early records appear to be extant”. Consequently, estimates of the variations in LOD are entirely based on the records of only a few civilisations - Arabia, Babylon, China, Europe, and lately Japan and Korea. In this paper we aim to show that this presumption is ill informed, and that Indian records of historical eclipse observations are reasonably well extant. We also provide a few examples of eclipses recorded in India which maybe useful for finding the variation in LOD (?T). Price: 251 Medieval Eclipse Prediction: A Parallel Bias in Indian and Chinese Astronomy By: Jayant Shah Page No : 163-178 Since lunar and solar parallax play a crucial role in predicting solar eclipses, the focus of this paper is on the computation of parallax. A brief history of parallax computation in India and China is traced. Predictions of solar eclipses based on NÁlakaõÇha’s Tantrasaôgraha are statistically analyzed. They turn out to be remarkably accurate, but there is a pronounced bias towards predicting false positives rather than false negatives. The false positives occur more to the south of the ecliptic at northerly terrestrial latitudes and more to the north of the ecliptic at southerly latitudes. A very similar bias is found in Chinese astronomy providing another hint at possible links between Indian and Chinese astronomy. The Chinese have traditionally used different values for the eclipse limit north and south of the ecliptic, perhaps to compensate for the southward bias. Price: 251 Jan-2014 to Jun-2014 Mathematical Models and Data in the Bra¯hmapaks.a School of Indian Astronomy By: Kim Plofker Page No : 1-12 While many of the innovative mathematical techniques developed by medieval Indian astronomers have been studied extensively, much less is known about how they chose to select and apply specific mathematical models to physical phenomena. This paper focuses on the paks.a or astronomical school associated with Brahmagupta (628 CE) and investigates what some of its characteristic features may tell us about the evolution of Indian mathematical astronomy. Price: 251 From Verses in Text to Numerical Table — The Treatment of Solar Declination and Lunar Latitude in Bha-skara II’s Karan.akutu-hala and the Related Tabular Work, the Brahmatulyasa-ran. By: Clemency Montelle Page No : 13-25 A twelfth century set of astronomical tables, the Brahmatulyasa-ran. , poses some interesting challenges for the modern historian. While these tables exhibit a range of standard issues that numerical data typically present, their circumstances and mathematical structure are further complicated by the fact that they are purported to be a recasting of another work by Bhaskara II that was originally composed in verse, the Karan.akutu- hala (epoch 1183 CE). We explore this relationship by considering the tables for solar declination and lunar latitude and comparing them to their textual Price: 251 Instantaneous Motion(ta-tka-likagati) and the “Derivative” of the Sine Function in Bh¯askara-II’s Siddha-nta´siroman. By: M. S. Sriram Page No : 37-52 It was well known even before Bha-skara-ca -rya that the daily motion of a planet would vary from day to day. In his Siddha-nta´siroman. i, Bha-skara-II observes that the rate of motion would vary even during the course of a day, and discusses the concept of an instantaneous rate of motion, which involves the derivative of the sine function. We discuss how Bha-skara could have arrived at the correct expression for this, and also how it was applied to find the ‘apogee’ of a planet. Price: 251 On the Works of Euler and his Followers on Spherical Geometry By: Athanase Papadopoulos Page No : 53-108 We review and comment on some works of Euler and his followers on spherical geometry. We start by presenting some memoirs of Euler on spherical trigonometry. We comment on Euler’s use of the methods of the calculus of variations in spherical trigonometry. We then survey a series of geometrical results, where the stress is on the analogy between the results in spherical geometry and the corresponding results in Euclidean geometry. We elaborate on two such results. The first one, known as Lexell’s Theorem (Lexell was a student of Euler), concerns the locus of the vertices of a spherical triangle with a fixed area and a given base. This is the spherical counterpart of a result in Euclid’s Elements, but it is much more difficult to prove than its Euclidean analogue. The second result, due to Euler, is the spherical analogue of a generalization of a theorem of Pappus (Proposition 117 of Book VII of the Collection) on the construction of a triangle inscribed in a circle whose sides are contained in three lines that pass through three given points. Both results have many ramifications, involving several mathematicians, and we mention some of these developments. We also comment on three papers of Euler on projections of the sphere on the Euclidean plane that are related with the art of drawing geographical maps. Price: 251 Sawai Jai Singh’s Efforts to Revive Astronomy By: Virendra N Sharma Page No : 109-125 The paper reviews Sawai Jai Singh’s (1688-1743) efforts to revive astronomy in his domain. For this reviving, he erected observatories, designed instruments of masonry and stone, assembled a team of astronomers of different schools of astronomy such as the Hindu, Islamic and European, and finally sent a fact finding scientific delegation to Europe. Jai Singh did not succeed in his efforts. The paper explains that poor communications of his times and a complex interaction of intellectual stagnation, religious taboos, theological beliefs, national rivalries and simple human failings were responsible for his failure. Price: 251 By: .. Page No : 127 Price: 251 Jul-2014 to Dec-2014 Hyperbolic Geometry in the Work of J. H. Lambert By: Guillaume Théret , Athanase Papadopoulos Page No : 129-155 The memoir Theorie der Parallellinien (1766)* by Johann Heinrich Lambert is one of the founding texts of hyperbolic geometry, even though its author’s aim was, like many of his predecessors’, to prove that such a geometry does not exist. In fact, Lambert developed his theory with the hope of finding a contradiction in a geometry where all the Euclidean axioms are kept except the parallel axiom and where the latter is replaced by its negation. In doing so, he obtained several fundamental results of hyperbolic geometry. This was sixty years before the first writings of Lobachevsky and Bolyai appeared in print. Price: 251 On the Legacy of Ibn Al-Haytham: An Exposition Based on the Work of Roshdi Rashed By: Athanase Papadopoulos Page No : 157-177 We report on the work of Ibn al-Haytham, an Arabic scholar who had settled in Cairo in the eleventh century, and worked in several fields, including mathematics, physics and philosophy. We review some of his work on optics, astronomy, number theory and especially spherical geometry. Our report is mostly based on the books published by Roshdi Rashed, a specialist on Ibn alHaytham and the world expert on Arabic and Greek mathematics and their interaction. We also provide a report on the life of Ibn al-Haytham, his influence, and the general background in which he flourished. The year 2015 has been declared by the UNESCO the “International Year of Light”, and one reason is that we celebrate this year the thousandth anniversary of Ibn Al-Haytham’s fundamental work on optics, KitÀb Price: 251 Otto Hölder : A Multifaceted Mathematician By: R. Sridharan Page No : 179-191 Otto Hölder (1859-1937) was a many sided German mathematician and he worked in diverse areas like Analysis, Group Theory, Mathematical Mechanics, Geometry, Foundational questions in Mathematics and Number theory. The aim of this article is to give a brief sketch of his life and work, paying tribute to this mathematician for his manifold contributions to various branches of mathematics, and how in spite of his remarkable achievements he had to face many obstructions in his academic life. Price: 251 Some Identities and Series Involving Arithmetic and Geometric Progressions in PÀÇÁgaõitam, GaõitasÀrasaôgrahaÍ and GaõitakaumudÁ By: Shriram M. Chauthaiwale Page No : 193-204 The celebrity Indian mathematician trio ŒrÁdhara, MahÀvÁra and NÀrÀyaõa Paõçita elaborates on some identities which are either algebraic sums of numbers with the series on natural numbers or sums of series of natural numbers. Series involving arithmetic and geometric progressions are also found discussed. In this article we discuss these identities and the series, explaining the format of each one followed by their formulation. The rationale for the nontrivial results is provided. The results are amended and extended when necessary. Price: 251 By: .. Page No : 205-218 Price: 251 Some Recent Publications in History of Mathematics By: .. Page No : 219-226 Price: 251 Report on the Annual Conference of ISHM - 2014 Dedicated to Bhaskaracarya By: .. Page No : 227-230 Price: 251 By: .. Page No : 231-233 Price: 251 All the manuscripts submitted for the Ganita Bharati should accompany a covering letter giving an undertaking following certain principles under Ethical Policy. The cover letter should include a written statement from the author(s) that: 1. The manuscript is an original research work and has not been published elsewhere including open access at the internet. 2. The data used in the research has not been manipulated, fabricated, or in any other way misrepresented to support the conclusions. 3. No part of the text of the manuscript has been plagiarised. 4. The manuscript is not under consideration for publication elsewhere. 5. The manuscript will not be submitted elsewhere for review while it is still under consideration for publication in the Ganita Bharati. The cover letter should also include an ethical statement disclosing any conflict of interest that may directly or indirectly impart bias to the research work. Conflict of interest most commonly arises from the source of funding, and therefore, the name(s) of funding agency must be mentioned in the cover letter. In case of no conflict of interest, please include the statement that “the authors declare that they have no conflict of interest”. Products related to this item
{"url":"https://www.printspublications.com/journal/gb","timestamp":"2024-11-11T16:41:27Z","content_type":"text/html","content_length":"528785","record_id":"<urn:uuid:c359c01a-b217-49fa-82e4-e2463fcd76cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00067.warc.gz"}
Bringing It Together: Homework A previous year, the weights of the members of a California football team and a Texas football team were published in a newspaper The factual data are compiled into Table 3.25. The weights in the column headings are in pounds. Shirt# ≤ 210 211–250 251–290 290≤ 1–33 21 5 0 0 34–66 6 18 7 4 66–99 6 12 22 5 For the following, suppose that you randomly select one player from the California team or the Texas team. If having a shirt number from one to 33 and weighing at most 210 pounds were independent events, then what should be true about P(Shirt# 1–33|≤ 210 pounds)? The probability that a male develops some form of cancer in his lifetime is .4567. The probability that a male has at least one false-positive test result, meaning the test comes back for cancer when the man does not have it, is .51. Some of the following questions do not have enough information for you to answer them. Write not enough information for those answers. Let C = a man develops cancer in his lifetime and P = a man has at least one false-positive. a. P(C) = ______ b. P(P|C) = ______ c. P(P|C') = ______ d. If a test comes up positive, based upon numerical values, can you assume that man has cancer? Justify numerically and explain why or why not. Given events G and H: P(G) = .43; P(H) = .26; P(H AND G) = .14 a. Find P(H OR G). b. Find the probability of the complement of event (H AND G). c. Find the probability of the complement of event (H OR G). Given events J and K: P(J) = .18; P(K) = .37; P(J OR K) = .45 a. Find P(J AND K). b. Find the probability of the complement of event (J AND K). c. Find the probability of the complement of event (J OR K). Use the following information to answer the next two exercises: Suppose that you have eight cards. Five are green and three are yellow. The cards are well shuffled. Suppose that you randomly draw two cards, one at a time, with replacement. = first card is green. = second card is green. a. Draw a tree diagram of the situation. b. Find P(G[1] AND G[2]). c. Find P(at least one green). d. Find P(G[2]|G[1]). e. Are G[2] and G[1] independent events? Explain why or why not. Suppose that you randomly draw two cards, one at a time, without replacement. = first card is green = second card is green a. Draw a tree diagram of the situation. b. Find P(G[1] AND G[2]). c. Find P(at least one green). d. Find P(G[2]|G[1]). e. Are G[2] and G[1] independent events? Explain why or why not. Use the following information to answer the next two exercises: The percent of licensed U.S. drivers (from a recent year) who are female is 48.60. Of the females, 5.03 percent are age 19 and under; 81.36 percent are age 20–64; 13.61 percent are age 65 or over. Of the licensed U.S. male drivers, 5.04 percent are age 19 and under; 81.43 percent are age 20–64; 13.53 percent are age 65 or over. Complete the following: a. Construct a table or a tree diagram of the situation. b. Find P(driver is female). c. Find P(driver is age 65 or over|driver is female). d. Find P(driver is age 65 or over AND female). e. In words, explain the difference between the probabilities in Part c and Part d. f. Find P(driver is age 65 or over). g. Are being age 65 or over and being female mutually exclusive events? How do you know? Suppose that 10,000 U.S. licensed drivers are randomly selected. a. How many would you expect to be male? b. Using the table or tree diagram, construct a contingency table of gender versus age group. c. Using the contingency table, find the probability that out of the age 20–64 group, a randomly selected driver is female. Approximately 86.5 percent of Americans commute to work by car, truck, or van. Out of that group, 84.6 percent drive alone and 15.4 percent drive in a carpool. Approximately 3.9 percent walk to work and approximately 5.3 percent take public transportation. a. Construct a table or a tree diagram of the situation. Include a branch for all other modes of transportation to work. b. Assuming that the walkers walk alone, what percent of all commuters travel alone to work? c. Suppose that 1,000 workers are randomly selected. How many would you expect to travel alone to work? d. Suppose that 1,000 workers are randomly selected. How many would you expect to drive in a carpool? When the euro coin was introduced in 2002, two math professors had their statistics students test whether the Belgian one-euro coin was a fair coin. They spun the coin rather than tossing it and found that out of 250 spins, 140 showed a head (event H) while 110 showed a tail (event T). On that basis, they claimed that it is not a fair coin. a. Based on the given data, find P(H) and P(T). b. Use a tree to find the probabilities of each possible outcome for the experiment of spinning the coin twice. c. Use the tree to find the probability of obtaining exactly one head in two spins of the coin. d. Use the tree to find the probability of obtaining at least one head. Use the following information to answer the next two exercises: The following are real data from Santa Clara County, California. As of a certain time, there had been a total of 3,059 documented cases of a disease in the county. They were grouped into the following categories, with risk factors of becoming ill with the disease labeled as Methods A, B, and C and Other: Method A Method B Method C Other Totals Female 0 70 136 49 ____ Male 2,146 463 60 135 ____ Totals ____ ____ ____ ____ ____ Suppose a person with a disease in Santa Clara County is randomly selected. a. Find P(Person is female). b. Find P(Person has a risk factor of method C). c. Find P(Person is female OR has a risk factor of method B). d. Find P(Person is female AND has a risk factor of method A). e. Find P(Person is male AND has a risk factor of method B). f. Find P(Person is female GIVEN person got the disease from method C). g. Construct a Venn diagram. Make one group females and the other group method C. Answer these questions using probability rules. Do NOT use the contingency table. Three thousand fifty-nine cases of a disease had been reported in Santa Clara County, California, through a certain date. Those cases will be our population. Of those cases, 6.4 percent obtained the disease through method C and 7.4 percent are female. Out of the females with the disease, 53.3 percent got the disease from method C. a. Find P(Person is female). b. Find P(Person obtained the disease through method C). c. Find P(Person is female GIVEN person got the disease from method C). d. Construct a Venn diagram representing this situation. Make one group females and the other group method C. Fill in all values as probabilities.
{"url":"https://texasgateway.org/resource/bringing-it-together-homework-1?book=79081&binder_id=78226","timestamp":"2024-11-11T07:57:48Z","content_type":"text/html","content_length":"51526","record_id":"<urn:uuid:f29a9ed0-610f-4532-9e03-ea2d9cf5eb5f>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00821.warc.gz"}
Is sololearn bugged or am I doing something wrong? | Sololearn: Learn to code for FREE! Is sololearn bugged or am I doing something wrong? https://gyazo.com/6ec344de6d3e6c536daa804946df9907 Here is a picture of what i'm going through, If I write more code in the terminal ( which I HAVE to do to finish test case 1 and test case 2 consecutively) it marks both as incorrect even though my input, output, and expected output is exactly as it should be? I get how to do this, but it's not working. Is this lesson bugged? Or am I doing something wrong? When I do them one at a time I get one correct, but then the other gets marked incorrect. Help please. Braden Norvell , there a some issues with your code as already mentioned from . what we need to do is: we need to take to inputs as integer numbers: > take the first input with the input() function. convert the input to int and store it in a variable > take the second input with the input() function. convert the input to int and store it in a variable > now create the sum of the 2 variables and store it in an other variable. > finally output the calculated result Did you hard code the output? I am guessing you are supposed to write a program that takes 2 inputs and output the sum. Your program will be run multiple times with different inputs and is supposed to do the math correctly each time.
{"url":"https://www.sololearn.com/ru/Discuss/3109682/is-sololearn-bugged-or-am-i-doing-something-wrong","timestamp":"2024-11-13T23:04:20Z","content_type":"text/html","content_length":"1043893","record_id":"<urn:uuid:fcfbb6b3-83ca-45b8-9077-9744ed357757>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00394.warc.gz"}
User talk:Siskus - Rosetta CodeUser talk:Siskus HI SISKUS. THIS IS A NOTE THAT CURRENTLY I FIND YOUR BEHAVIOUR INFLAMMATORY AND WILL RESOLVE THIS IN A DAY IF I DON'T SEE MATTERS RESOLVED AND THE RESOLUTION NOTED HERE!. RC Administrator --Paddy3118 (talk) 06:23, 21 August 2014 (UTC) Sorry, stop jelling, control yourself. I only marked a contribution as written by a novice because it doesn't met my quality standard. I get blamed because I should address somebody, which is absolutely isn't the case. In that marked REXX contribution there is: 1. Notice of Garbage collector, which is to my idea a lie. 2. Mal-formed meta-code, which IMHO must be of a novice. I will consider to stop with my contributions. Have fun, --Siskus (talk) 10:43, 21 August 2014 (UTC) As for me I would appreciate to learn what you consider "Mal-formed meta-code" but you won't probably answer that either. --Walterpachl (talk) 11:13, 21 August 2014 (UTC) (admittedly a novice in Metacode) Siskus (in reference to your comments above starting with the "jelling"): It is unfortunate that because a contribution doesn't meet your quality standards, it has to be marked by you as being written by a novice. That is not the definition of a novice (writer). Of course, it would help immensely if you could list what your quality standard are. The second issue is calling someone a liar. I believe that English is your second language, so I am going to presume that you don't know that the telling of a lie is the intentional telling of a non-truth and not just the stating/telling of something inaccurate. Stating that someone is lying, at the very least, is both inflammatory and insulting, and it impugns my reputation (by accusing someone of deliberately stating a falsehood). The third issue would concern your definition of malformed meta-code. I would assume even experts, at one time or another, aren't perfect and produce what you may call malformed meta-code. However, what is at issue here is your definition of malformed meta-code (and stating that it is malformed and using that as a basis to flag an entry as being written by a novice), and in particular, how I have used hard blanks (also known, among other names, as non-breaking blanks) in my various texts. I do not believe your assumption that only a novice would use hard blanks. It's easy to toss around labels like that without defining or pointing out why you think that the use of non-breaking blanks belong to the domain of novices. I can't believe that the creators of HTML sat around and thought of features to include in HTML just for novices to use. I would think that every feature in HTML has a use, even by non-novices. I, for one (and I suspect everyone), don't want you to stop with your (programming language entry) contributions, but your inflammatory and insulting comments have no place at Rosetta Code. On reflection, it's not for me to say that (but that is what I believe as led by observations and by the understanding of Rosetta Code's intent and philosophy). I'm not sure if you meant to be snide in having fun, but that sort of (past) behavior isn't fun for anyone and causes much discord and the least of which, much wasted time and effort to address, not to mention it leaves a bad impression for everyone who may visit these pages on Rosetta Code in the future. Those sort of insults, name-calling, and bad behavior will hang around a long time and besmirch Rosetta Code. I know the administrators of Rosetta Code consider time a very much precious resource (and ditto for me as well). (In my opinion), nobody has time to waste on this sort of name-calling and insulting statements. I have been thinking that you may have raised in a different culture (where maybe confrontational behavior was the norm and challenging someone else's beliefs/ methods were the methods used to address differences), but you've repeatably refused to answer my (and others) queries for clarification, direct questions, or polite requests for more information on your thinking and postings. You have called me (on Rosetta Code) a novice, stupid, and now a liar. You have yet to address/retract/apologize for any of those grievances. I'm trying to remain as civil as I can under these libelous circumstances, and I remain in giving you the benefit of the doubt. -- Gerard Schildberger (talk) 21:03, 21 August 2014 (UTC) Bold textHi Siskus, you seem to have annoyed REXX contributors. Could you answer their polite questions rather than just deleting them as I think they are worried that you will make further edits of the sort they would like to have a dialogue with you about. --Paddy3118 (talk) 17:31, 18 November 2013 (UTC) P.S. There is also a question on the talk page of this page you created: Form:TaskImplmented. Could you have a dialoge about that too? Thanks. --Paddy3118 (talk) 17:31, 18 November 2013 (UTC) Annoying REXX contributors Hi Paddy3118, Congratulations you are the first inquirer on this page. Those gentleman are not so polite as you think. They are completely unaware about Wiki and Wikitext and even HTML is a struggling. They really think that they own the code that is committed. Maybe could you teach them a lesson about Intellectual Property under GNU. Furthermore they are so funny; they are the Statler and Waldorf of Rosetta Code and there pure monoglot contibution is a show called REXX. The right name for a dinosaur. :-) Truly, you are first on this page, Have fun. --Siskus (talk) 20:36, 18 November 2013 (UTC) Yes, some users on Rosetta Code have ongoing(!) difficulties with wiki formatting and, occasionally, ettiquette. They are not, however, habitually rude or obnoxious. They have also not insulted other users' wiki ettiquette while at the same time showing a total lack of respect for other users, their langauges and their code. You had one warning already. Because I simply don't have time to deal with the barrage of complaints I get about you (despite the complaintant's curteously acknowledging my lack of time and clearly extended execution of patience), and because you didn't seem to get the message the first time I banned you, I'm banning you again. This time, for one month. This *is* your final warning; if I have to ban you again for any reason, it will not be a temporary ban. Congratulations would be in order, too; in the seven-year history of this wiki, you would be only the second non-spammer I'd have ever felt the need to perma-ban. Please shape up. This wiki is fundamentally built on cooperation, congeniality and respect. If you cannot manage to imbue those attributes most of the time, or at least fake them, then there will be no place for you here. --Michael Mol (talk) 22:17, 19 November 2013 (UTC) As for me I am truly shocked and appaled by this unbelievable reaction! This is far off the behavior I am used to on this Wiki! Siskus, you need not respect REXX but at least stay awy from it! --Walterpachl (talk) 22:26, 18 November 2013 (UTC) By the way, If you think that adding my request on your user page was inappropriate you could have told me (politely). (I apologize if this annoyed you.) Either there or on my user page or via email which you find on my user page. Permit my not sharing your kind of humor. --Walterpachl (talk) 23:26, 18 November 2013 (UTC) Hi Walter, At last you reached it, this discussions page. Q: Is that supposed to be a punishment, to stay away from that "Stick a fork in it, we're done" goings-on? (That phrase is essential code for the successful execution of the program. Deletion the phrase would ruined the program, I learned from an expert.) Please keep off my user page and stay in your "own" REXX cellar. I must congratulate you that you have noticed the humor. We had a big laugh about it. Have fun. --Siskus (talk) 12:12, 19 November 2013 (UTC) A: Consider it a blessing (for you) B: Look for another expert! What she or he told you is rubbish C: Good for you that you can laugh. I can't D: You have still not answered the question in the Rexx solution to ranking languages by popularity (or removed the 'invalid' note you placed there. Would you be so kind? --Walterpachl (talk) 12:23, 19 November 2013 (UTC) Can I have your suggestion how to communicate properly and professionally? --Walterpachl (talk) 12:23, 19 November 2013 (UTC) In case you "forgot": This example is incorrect. Program does not properly ranks tied counts, counts are not accurate, PARI/GP is missing. Please fix the code and remove this message. --Walterpachl (talk) 12:30, 19 November 2013 (UTC) flagging of REXX entries (The following was deleted from Siskus' user page (and should've been posted on the Siskus User Talk page). For the posting to the wrong webpage, I apologize. If you had just made a note of it and transferred the posting here, it would've been appreciated.) -- Gerard Schildberger (talk) 23:05, 19 November 2013 (UTC) Siskus: your numerous flagging of various REXX programs and/or section headers has been very disruptive and time-consuming to fix and re-instate. You have repeatedly flagged REXX for omission when if fact, an example (solution) of the REXX language was present in the task. This is ridiculous. If a solution provided the answer(s), then it shouldn't be marked for omission. Previously, you had marked the REXX entry in Rank languages by popularity to be omitted because REXX doesn't have web access. Nowhere in the task requirements did it state that web access was to be used (or even necessary); indeed, the REXX section header has such a statement, and furthermore it stated how it accessed the web page data. You further went on to delete a REXX solution three times, and changed two other REXX program solutions (within one task) so that the comments are no longer true (they had references to the deleted REXX programs), and you later added a version which was garbage; it had numerous syntax errors in the program and it even could/would not run (execute), nor produce any output. Yet you cut and pasted text, and included part of the program in the output (which was part of a comment). This act of vandalism (my opinion) has no part on Rosetta Code. Too much time and effort was spent in repairing your malicious efforts (and not just by me). It is clear that you don't know the REXX language (not even as a beginner), and further, you apparently don't have access to a REXX interpreter, otherwise you'd have noticed how badly your version was written/coded (as far as syntax of the language). As for the latest round of flagging, you marked REXX as incorrect (for Rosetta Code, Rank languages by popularity) you cited three reasons: ☆ program does not properly ranks (sic) tied counts, ☆ counts are not accurate, ☆ PARI/GL is missing. REXX is one of two solutions (the other is Icon and Unicon), as far I can tell) that does proper ranking of tied counts, and as a matter of fact, no other solution even addresses the tied count Counts are accurate as of the time of the program execution and the numbers are obtained from the CATEGORY page and filtered through a list of languages (from the language page). PARI/GP is in the ranking and it's ranked 30 with 358 members. Did you mean PARI/GP instead of PARI/GL? There is no PARI/GL in the Rosetta Code list of languages. -- Gerard Schildberger ([[User talk:Gerard Schildberger|talk]]) 09:28, 11 November 2013 (UTC) As for the need review flagging, what is or isn't unnecessary HTML is a matter of opinion, and there is no need to flag entries on your beliefs that there is too much. What is important is the rendering of the HTML. Is it presentable? Is it readable? Is it viewable? Is it accurate? Whether there is special MediaWiki code for formatting (or not) doesn't mean that everybody is aware of it (or not), and there is no requirement that it has to be used, and that's especially true if it isn't known how to use it properly. There is nothing wrong with making a section header as readable as possible, in whatever method is used to format it. The viewer doesn't see any of the HTML Your main thrust (as far as I can see) is to remove whitespace and make short readable lines longer, in fact, way too long. There is a reason why magazines and newspapers use columns --- to reduce line length. Shorter lines are easier to read than lines that go across the whole page. All your efforts do, in fact, is to make the section comments less readable. There is no requirement to use special or specific MediaWiki code for formatting (regarding comments in the section headers, this is excluding the titles, versions, and the like). It's a matter of opinion if too much unnecessary HTML is used or not. It doesn't matter, as long as the output is presentable. What was used is different than what you would use. There is no need to make a big deal of it and flag it for review. Whatever HTML tags are used, they're not part of the program and are essentially invisible to the viewer. I feel that you may be fixated a bit too much against certain entries, there are other programming examples that specifically mention languages that aren't even languages, and yet you don't flag any of those. It appears then, your flagging is beginning to appear to border on vindictiveness. Almost all entries have inaccurate counts, as those change daily, even hourly. Who can say which counts are inaccurate? All counts will become inaccurate as new entries are added to Rosetta Code. -- Gerard Schildberger (talk) 22:37, 10 November 2013 (UTC) Not always do I share Gerard's strong opinions ( :-) ), but this time fullheartedly. As to your messing up the task mentioned above, I asked for your motivation(s) and never got an answer. Is it REXX you are up against or just Gerard??? --Walterpachl (talk) The following has been moved from http://rosettacode.org/wiki/Rosetta_Code/Rank_languages_by_popularity#REXX to here. In the process of moving the text here, I think some of the signature tags' times have been updated. I also added one signature of mine where it wasn't obvious who was talking. Also, the original flagged incorrect had PARI/GL instead of PARI/GP, and both I and Walter Paschl responded to that original incorrect tag text. -- Gerard Schildberger (talk) 02:29, 20 November 2013 (UTC) This example is incorrect. Program does not properly ranks tied counts, counts are not accurate, PARI/GP is missing. Please fix the code and remove this message. Siskus, if you think REXX doesn't properly rank tied counts, show an example. (signature added.) -- Gerard Schildberger (talk) 02:29, 20 November 2013 (UTC) --Siskus (talk) 14:13, 11 November 2013 (UTC) Despite of over 350 solutions PARI/GL is not listed, Icon, C++ and PHP, ALGOL 68 are both ex aequo (Unicon, Scala and PL/SQL apparently does the job right.), too much whitespace, (I am not to only one who is complaining), using the wrong pages... In short, the REXX section and the contibutor clearly sucks... If you think that the counts are inaccurate, show which one you think is inaccurate. -- Gerard Schildberger (talk) 02:29, 20 November 2013 (UTC) --Siskus (talk) 14:13, 11 November 2013 (UTC) E.g. Vim script. What's the point? In the output I see rank: 478 (tied) (1 entry) Vim Script Pages in category "Vim Script" This category contains only the following page. Largest int from concatenated ints So, WHAT'S wrong?? And (again) where should PARI/GL be??? --Walterpachl (talk) 09:51, 12 November 2013 (UTC) PARI/GL is not in the Rosetta Code languages list (on the web page). -- Gerard Schildberger (talk) 09:23, 11 November 2013 (UTC) actually I see no PARI/GL but PARI/GP in rank 30 --Walterpachl (talk) 09:13, 11 November 2013 (UTC) I believe Siskus never even visited the REXX output page at the time he flagged (as incorrect) the REXX entry (the output of the REXX program clearly contains every computer programming language used on Rosetta Code which is 501 entries as of November 19th, 2013). The output for the REXX (RC_POP.REX) program is included here ──► RC_POP.OUT. That web-page has indeed all the programming languages that he stated weren't listed. ■ (tied for 20th place) Icon and C++ ■ (tied for 35th place) ALGOL 68 and PHP (Rankings above are as of November 19th, 2013.) -- Gerard Schildberger (talk) 03:03, 20 November 2013 (UTC) I don't think your usage of Category:Scala Implementations is correct. AIUI this category should be used for implementations of the language (e.g. a compiler or an interpreter; compare with e.g. Category:C Implementations or Category:Java Implementations). When you use the Header template (i.e. {{header|Scala}}) then all the necessary categories and properties to mark the solution as implemented in Scala are set automatically. So you don't need to add anything manually. --Andreas Perstinger (talk) 13:15, 2 June 2014 (UTC) This is yet another reason why you shouldn't blank your talk page without reading/discussing the items. I indepentantly noticed this and added Category_talk:Scala_Implementations and fixed some recent instances of this. —Dchapes (talk) 17:43, 1 September 2014 (UTC) Delete "Starting a web browser" It might be best to delete Starting a web browser as you have unanswered questions for some time in the draft task you link it to: Talk:Separate_the_house_number_from_the_street_name#The Rules?. -- Paddy3118 (talk) 19:43, 27 July 2014 (UTC) Themed edits Hi Siskus, could you possibly do more focused edits, where one edit is for one purpose to split up a couple of large edits you have done to a page over multiple languages. For example, If you are flagguing multiple language examples for one of two reasons then you might have separate edits for each type of flag you apply. This will make it easier to change just one. Thanks. --Paddy3118 (talk) 20:32, 16 August 2014 (UTC) Adding: "Category:Scala examples needing attention" to task examples Hi Siskus. We tend not to add the category in that way as you have left no comment on what is deficient in the current code. It would be better to mark an example as incomplete or incprrect as those templates allow a reason for their use to be added and displayed whith the offending example. Thanks, --Paddy3118 (talk) 13:53, 17 August 2014 (UTC) marking REXX entries as novice Hi Siskus: Please refrain from marking descriptions in REXX entries as "novice" and let people more familiar with the REXX programming language pass judgement on whether or not it was entered by a novice. I've been noticing that you are sniping at various REXX programming language entries and have never responded to any queries by me or others concerning your inflammatory and sometime destructive behavior regarding said language. If you could be more specific in your comments and justify your actions instead of just flagging some REXX entries in a capricious manner, including marking numerous REXX entries as omit even though there was an existing REXX solution. Marking a language as/for omit when there is already a solution entered or marking it as such when one doesn't know the capability of a language is just inappropriate. I know the REXX programming language very well and I would never mark any task as inappropriate for the REXX language, there are always programmers that can find a solution to most tasks. I certainly wouldn't mark a programming language as omit when I barely know the language. -- Gerard Schildberger (talk) 01:54, 20 August 2014 (UTC) wholeheartedly seconded. can you tell us what you meant by your (inappropriate?) flagging and/or provide a better "solution" to this task? --Walterpachl (talk) 07:02, 20 August 2014 (UTC) Siskus, Could you please answer my question?--Walterpachl (talk) 18:02, 20 August 2014 (UTC) BTW Further up there is another one you never responded to --Walterpachl (talk) 19:50, 20 August 2014 (UTC) It seems to me that the author of the task is not "hindered by any knowledge". For as I know, REXX has NO garbage collector. Take also in account that the writer makes ill-formed metadata (causing problems on other browsers then MS Internet Explorer) without knowing the difference between a breaking and non-breaking space (as shown in the _<space> and several double space), then the contributor must be a novice. There is no reason to level ad hominem attacks against the author (or anyone else) by what you seem to know. Even if you know it to be true, Rosetta Code isn't the place to resort to that sort of behavior. Write what you know, and don't make assumptions about the depth of another's knowledge. The REXX computing programming language has garbage collection. What I know or don't know about HTML formatting shouldn't be used as a reason to resort to personal attacks about my depth-of-knowledge of another computing programming language (REXX). The &nbsp; HTML tag is called a non-breaking space and is also known as a no-break space, a hard space, and a fixed space. It is a mechanism to prevent multiple-blank collapsing and it allows the HTML coder to force inclusion of extra blanks in HTML code when more white-space is wanted. The &nbsp; HTML tag also has other uses, such as being used for actual non-breaking space as between name titles, addresses, and other situations where a non-breaking blank is desired, in particular, as in cell (table) creation. I don't want to write a tutorial on the how-comes and why-fors of its use. The inclusion of white spade (one or more extra blanks) causes no problems with web browsers, it's their job to render HTML code. I hope that clears up your misunderstanding of breaking and non-breaking space(s). Also, I can't see why you make the decision to call that ill-formed metadata since you can (hopefully) see the results of why it is being used (and its results). Again, there is no reason to start throwing insults around for a lack of understanding of why I use particular HTML tags. In any case, that has nothing to do with writing REXX code and has no bearing on my knowledge of the REXX computing language. It is akin to me calling your programming skills deficient because you rollerskate badly. -- Gerard Schildberger (talk) 19:36, 20 August 2014 (UTC) It also not fair that missing functionality is called oovice" utside the program language. If REXX is not "internet-fahig" then the language has to be ommited for such tasks. It give a distorted picture of the matter.--Siskus (talk) 12:08, 20 August 2014 (UT Er, no, Siskus. That isn't a valid reason to mandate that a language has to be omitted from such tasks. The REXX language has no trigonometric, hyperbolic, or logarithms (or for that matter, even a square root function or a power function), but that isn't a reason to mark REXX to be omitted for that lack of features. I differ with you on how to deal with missing functionality. That's what programming languages do, if a language can use the host's functions, they can take advantage of that interface to perform a specific task; that's what an operating system is used for (among other things). Exactly what is the distorted picture of which you speak? -- Gerard Schildberger (talk) 19:36, 20 August 2014 (UTC) looking at the task's description it seems to me that Scala is not really addressing it and neither is REXX. Anyway, calling a Rexx expert "novice" is an inappropriate insult. We all try to teach and learn and avoid calling colleagues names. --Walterpachl (talk) 18:49, 20 August 2014 (UTC) For those readers who are in the dark, the Arena storage pool is the Rosetta Code task being discussed. -- Gerard Schildberger (talk) 19:36, 20 August 2014 (UTC) blanking of entire user page The blanking (erasing) of your entire user page isn't going to solve your problems, or for that matter, answer the questions raised here. So far, you've not addressed any of the concerns raised here by me or others who posted disagreements and questions about your actions and behavior. All our actions are accountable. Please try to make an effort to address the concerns raised here, and also (at least) try to answer the queries posted here. -- Gerard Schildberger (talk) 16:50, 31 August 2014 (UTC) Edits to pre blocks in output sections Could you please stop replacing existing markup like this: … program output here … Line 1 Line 2 Line 3 … program output here … Line 1 Line 2 Line 3 (avoiding <pre> by using leading space wiki formatting) Although in some respects they are just different ways of doing the same thing, the former is used here for a reason. Namely, it allows trivial cut-n-paste of program output without further modifications (unless it happens to contain &; or </pre>). Since the HTML produced by both is the same there is no benefit to avoiding <pre>. However, since doing so makes it more difficult to update program output in the future, it does have a negative impact. In general, making changes to existing markup that has no effect on the semantics of the HTML output should be avoided (unless perhaps it's while making other useful changes to the same text). This would include some recent changes you've made to existing text changing only white space. (Of course, things like changing "Output:" to "{{out}}" are different and fine since even if they currently produce the same output the template might change in the future.) Thank you. —Dchapes (talk) 17:40, 1 September 2014 (UTC) You've been using a really ugly colouring for emphasis in tasks. Merely using <em> is enough; it looks better and it is much more semantically correct too. (It will usually get rendered as italics, but you can use a user-style to fix that if you prefer. Yet most won't.) –Donal Fellows (talk) 09:31, 7 September 2014 (UTC) FYI (and for anyone else coming to this page to comment), Siskus has been blocked/banned from RosettaCode and therefore will not be commenting here. (Thank you admins) —dchapes (talk | contribs) 14:25, 7 September 2014 (UTC)
{"url":"https://rosettacode.org/wiki/User_talk:Siskus","timestamp":"2024-11-02T08:10:15Z","content_type":"text/html","content_length":"83303","record_id":"<urn:uuid:4ea60212-3ac7-4168-bca5-60c4076d7d48>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00571.warc.gz"}
Review of sensitivity results for linear networks and a new approximation to reduce the effects of degeneracy Estimating the reduced cost of an upper bound in a classical linear transshipment network is traditionally accomplished using the shadow price for this constraint, given by the standard calculation c̄ [ij] = c[ij] + π[j] - π[i]. This reduced cost is only a subgradient due to network degeneracy and often exhibits errors of 50% or more compared to the actual change in the objective function if the upper bound were raised by one unit and the network reoptimized. A new approximation is developed, using a simple modification of the original reduced cost calculation, which is shown to be significantly more accurate. This paper summarizes the basic theory behind network sensitivity, much of which is known as folklore in the networks community, to establish the theoretical properties of the new approximation. The essential idea is to use least-cost flow augmenting paths in the basis to estimate certain directional derivatives which are used in the development of the approximation. The technique is motivated with an application to pricing in truckload trucking. All Science Journal Classification (ASJC) codes • Civil and Structural Engineering • Transportation Dive into the research topics of 'Review of sensitivity results for linear networks and a new approximation to reduce the effects of degeneracy'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/review-of-sensitivity-results-for-linear-networks-and-a-new-appro","timestamp":"2024-11-12T01:56:54Z","content_type":"text/html","content_length":"49993","record_id":"<urn:uuid:600da047-b0a2-426f-9fda-576cdf46e206>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00454.warc.gz"}
Data Science Interview Questions Part-4 (Unsupervised Learning) Top-20 frequently asked data science interview questions and answers on Unsupervised Learning for fresher and experienced Data Scientist, Data analyst, statistician, and machine learning engineer job Data Science is an interdisciplinary field. It uses statistics, machine learning, databases, visualization, and programming. So in this fourth article, we are focusing on unsupervised learning Let’s see the interview questions. 1. What is clustering? Clustering is unsupervised learning because it does not have a target variable or class label. Clustering divides s given data observations into several groups (clusters) or a bunch of observations based on certain similarities. For example, segmenting customers, grouping super-market products such as cheese, meat products, appliances, etc. 2. What is the difference between classification and clustering? 3. What do you mean by dimension reduction? Dimensionality reduction is the process of reducing the number of attributes from large dimensional data. There are lots of methods for reducing the dimension of the data: Principal Components Analysis(PCA), t-SNE, Wavelet Transformation, Factor Analysis, Linear Discriminant Analysis, and Attribute Subset Selection. 4. How the K-means algorithm work? Kmeans algorithm is an iterative algorithm that partitions the dataset into a pre-defined number of groups or clusters where each observation belongs to only one group. K-means algorithm works in the following steps: 1. Randomly initialize the k initial centers. 2. Assigned observation to the nearest center and form the groups. 3. Find the mean point of each cluster. Update the center coordinates and reassign the observations to the new cluster centers. 4. Repeat steps 2–3 the process until the no change in the cluster observations. 5. How to choose the number of clusters or K in the k-means algorithm? Elbow Criteria: This method is used to choose the optimal number of clusters (groups) of objects. It says that we should choose a number of clusters so that adding another cluster does not add sufficient information to continue the process. The percentage of variance explained is the ratio of the between-group variance to the total variance. It selects the point where marginal gain will You can also create an elbow method graph between the within-cluster sum of squares(WCSS) and the number of clusters K. Here, the within-cluster sum of squares(WCSS) is a cost function that decreases with an increase in the number of clusters. The Elbow plot looks like an arm, then the elbow on the arm is an optimal number of k. 6. What are some disadvantages of K-means? There are the following disadvantages: • The k-means method is not guaranteed to converge to the global optimum and often terminates at a local optimum. • The final results depend upon the initial random selection of cluster centers. • Needs the number of clusters in advance to input the algorithm. • Not suitable for convex shape clusters. • It is sensitive to noise and outlier data points. 7. How do you evaluate the clustering algorithm? The cluster can be evaluated using two types of measures intrinsic and extrinsic evaluation parameters. Intrinsic does not consider the external class labels while extrinsic considers the external class labels. Intrinsic cluster evaluation measures are the Davie-Bouldin Index and Silhouette coefficient. Extrinsic evaluation measures are Jaccard and Rand Index. 8. How do you generate arbitrary or random shape clusters? There are some clustering algorithms that can generate random or arbitrary shape clusters such as Density-based methods such as DBSCAN, OPTICS, and DENCLUE. Spectral clsutering can also generate arbitrary or random shape clusters. 9. What is Euclidean and Manhatten distance? Euclidean measures the ‘as-the-crow-flies’ distance and Manhattan distance is also known as a city block. It measures the distance in blocks between any two points in a city. (or city block). Data mining by Jiawei Han; Micheline Kamber; JianPei Euclidean Distance 10. Explain spectral clustering. It is based on standard linear algebra. Spectral Clustering uses the connectivity approach to clustering. It easy to implement, faster especially for the sparse datasets, and can generate non-convex clusters. Spectral clustering kind of graph partitioning algorithm. The spectral algorithm works in the following steps. 1. Create a similarity graph 2. Create an Adjacency matrix W and Degree matrix D. 3. The adjacency matrix is an n*n matrix that has 1 in each cell that represents the edge between nodes of the column and row. The degree matrix is a diagonal matrix where the diagonal value is the sum of all the elements in each row of the adjacency matrix. 4. Create a Laplacian matrix L by subtracting the adjacency matrix from the degree matrix. 5. Calculates the eigenvectors of the Laplacian matrix L and performs the k-means algorithm on the second smallest eigenvector. 11. What is tSNE? t-SNE stands for t-Distributed Stochastic Neighbor Embedding which considers the nearest neighbors for reducing the data. t-SNE is a nonlinear dimensionality reduction technique. With a large dataset, it will not produce better results. t-SNE has quadratic time and space complexity. The t-SNE algorithm computes the similarity between pairs of observations in the high dimensional space and low dimensional space. And then it optimizes both similarity measures. In simple words we can say, it maps the high-dimensional data into a lower-dimensional space. After transformation input features can’t be inferred from the reduced dimensions. It can be used in recognizing feature expressions, tumor detection, compression, information security, and bioinformatics. 12. What is principal component analysis? PCA is the process of reducing the dimension of input data into a lower dimension while keeping the essence of all original variables. It used is used to speed up the model generation process and helps in visualizing the large dimensional data. 13. How will you decide the number of components in PCA? There are three methods for deciding the number of components: 1. Eigenvalues: you can choose the number of components that have eigenvalues higher than 1. 2. Amount of explained variance: you can choose factors that explain 70 to 80% of your variance at least. 3. Scree plot: It is a graphical method that helps us in choosing the factors until a break in the graph. 14. What is Eigenvalues and Eigenvector? Eigenvectors are rotational axes of the linear transformation. These axes are fixed in direction, and eigenvalue is the scale factor by which the matrix is scaled up or down. Eigenvalues are also known as characteristic values or characteristic roots and eigenvectors are also known as the characteristic vector. 15. How dimensionality reduction improves the performance of SVM? SVM works better with lower-dimensional data compared to large dimensional data. When the number of features is greater than the number of observations, then performing dimensionality reduction will generally improve the SVM. 16. What is the difference between PCA and t-SNE? t-SNE in comparison to PCA: • When the data is huge (in size), t-SNE may fail to produce better results. • t-SNE is nonlinear whereas PCA is linear. • PCA will preserve things that t-SNE will not. • PCA is deterministic; t-SNE is not • t-SNE does not scale well with the size of the dataset, while PCA does. 17. What are the benefits and limitations of PCA? • Removes Correlated Features • Reduces Overfitting • Visualize large dimensional data • Independent variables become less interpretable • Data standardization is a must before PCA • Information Loss • assumes the Linear relationship between original features. • High variance axes considered as components and low variance axes considered as noise. • It assumes principal components as orthogonal. 18. What is the difference between SVD and PCA? • Both are eigenvalue methods that are used to reduce a high-dimensional dataset into fewer dimensions for retaining important information. • PCA is the same as SVD but it is not as efficient to compute as the SVD. • PCA is used for finding the directions while SVD is the factorization of a matrix. • We can use SVD to compute principal components but it is more expensive. 19. Explain DBSCAN. The main idea is to create clusters and add objects as long as the density in its neighborhood exceeds some threshold. The density of any object measured by the number of objects closed to that. It connects the main object with its neighborhoods to form dense regions as clusters. You can also define density as the size of the neighborhood €. DBSCAN also uses another user-specified parameter, MinPts, that specifies the density threshold of dense regions. 20. What is hierarchical clustering? Hierarchical method partition data into groups at different levels such as in a hierarchy. Observations are group together on the basis of their mutual distance. Hierarchical clustering is of two types: Agglomerative and Divisive. Agglomerative methods start with individual objects like clusters, which are iteratively merged to form larger clusters. It starts with leaves or individual records and merges two clusters that are closest to each other according to some similarity measure and form one cluster. It is also known as AGNES (AGglomerative NESting). Divisive methods start with one cluster, which they iteratively split into smaller clusters. It divides the root cluster into several smaller sub-clusters, and recursively partitions those clusters into smaller ones. It is also known as DIANA (DIvisive ANAlysis). In this article, we have focused on unsupervised learning interview questions. In the next article, we will focus on the interview questions related to data preprocessing. Data Science Interview Questions Part-5 (Data Preprocessing)
{"url":"https://discuss.boardinfinity.com/t/data-science-interview-questions-part-4-unsupervised-learning/4837","timestamp":"2024-11-10T05:05:59Z","content_type":"text/html","content_length":"34600","record_id":"<urn:uuid:e86b725e-abb4-4e39-a72f-14e0575c4290>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00099.warc.gz"}
Inertial Mass of an Elementary Particle from the Holographic Scenario Total Page:16 File Type:pdf, Size:1020Kb [email protected] Abstract Various attempts have been made to fully explain the mechanism by which a body has inertial mass. Recently it has been proposed that this mechanism is as follows: when an object accelerates in one direction a dy- namical Rindler event horizon forms in the opposite direction, suppressing Unruh radiation on that side by a Rindler-scale Casimir effect whereas the radiation in the other side is only slightly reduce by a Hubble-scale Casimir effect. This produces a net Unruh radiation pressure force that always op- poses the acceleration, just like inertia, although the masses predicted are twice those expected, see [17]. In a later work an error was corrected so that its prediction improves to within 26% of the Planck mass, see [10]. In this paper the expression of the inertial mass of a elementary particle is derived from the holographic scenario giving the exact value of the mass of a Planck particle when it is applied to a Planck particle. Keywords: inertial mass; Unruh radiation; holographic scenario, Dark matter, Dark energy, cosmology. PACS 98.80.-k - Cosmology PACS 04.62.+v - Quantum fields in curved spacetime PACS 06.30.Dr - Mass and density 1 Introduction The equivalence principle introduced by Einstein in 1907 assumes the com- plete local physical equivalence of a gravitational field and a corresponding non- inertial (accelerated) frame of reference (Einstein was thinking of his famous elevator experiment). In a similar way we can assume a holographic equiva- lence principle where it is the same to have a particle accelerated because it is attracted by a central mass than a particle accelerated by an event horizon. The question of why a particle is accelerated towards an event horizon has two different answers. In the Verlinde's holographic model, see [26], the acceleration 1 of the particle towards the event horizon is due to the entropic force arising from thermodynamics on a holographic screen (the event horizon). The entropic force appears in order to increase the general entropy according to the second law of thermodynamics. However we can also think that the radiation from the region of space behind this event horizon can never hope to catch the particle causing a real imbalance in the momentum transferred by all the radiation from all direc- tions which produces an acceleration of the particle towards the event horizon, see [10, 17]. Both arguments are claims because are based in the existence of effects not universally accepted. If in a future they are proved then we will ac- cept that there will be a complete physical equivalence between a gravitational field and a corresponding event horizon. This holographic equivalence principle would be the base of a new gravitational theory where gravity will be emerging from an holographic scenario from a dynamical point of view. In this work, we can establish the origin of the inertial mass of a elementary particle from the holographic scenario. The problem of the inertial mass of a macroscopic body is still open in this context. First we recall some concepts. The Hawking radiation, predicted by Hawking [14] in 1974, is black-body radia- tion to be released by black holes due to quantum effects near the event horizon of the black hole. The vacuum fluctuations cause a particle-antiparticle pair to appear close to the event horizon. One of the pair falls into the black hole while the other escapes. The particle that fell into the black hole must had negative energy in order to preserve total energy. The black hole loses mass because for an outside observer the black hole just emitted a particle. The Unruh effect [25] is the prediction that an accelerating observer will observe black-body radiation where an inertial observer would observe none. A priori the Unruh effect and the Hawking radiation seem unrelated, but in both cases the radiation is due to the existence of an event horizon. In the case of the Unruh radiation, on the side that the observer is accelerating away from there appears an apparent dynamical Rindler event horizon, see [19]. The appearance of this event horizon produces two effects: a radiation in a similar way to the Hawking radiation from the horizon and a force toward the horizon that accounts for the inertial mass of the elementary particle (see below). Therefore an accelerating observer perceives a warm background whereas a non-accelerated observer will see a cold background with no radiation. Various attempts have been made to fully explain the mechanism by which a body has inertial mass, see for instance [3] where the principle of equivalence is examined in the quantum context. We recall that the relativistic mass [24] is the measure of mass dependent on the velocity of the observer in the context of the special relativity but is not an explanation of the rest mass. In [17] an origin of the inertia mass of a body was suggested: for an accelerated particle the Unruh radiation becomes non-uniform because the Rindler event horizon reduces the energy density in the direction opposite to the acceleration vector due to a Rindler-scale Casimir effect whereas the radiation on the other side is only slightly reduced by a Hubble-scale Casimir effect due to the cosmic horizon. Therefore there is an imbalance in the momentum transferred by the Unruh 2 radiation and this produces a force which is always opposed to the acceleration, like inertia. In [10] it is corrected a mistake detected in [17]. The correct expression for the force is π2ha Fx = − ; (1) 48clp −35 where lp = 1:616 × 10 m is the Planck distance. Hence the inertial mass 2 −8 is given by mi ∼ π h=(48clp) ∼ 2:75 × 10 kg which is 26% greater than the −8 Planck mass mp = 2:176 × 10 kg. In this paper we derive an expression for the inertia of an elementary particle from the holographic scenario, giving the exact value of the mass of the Planck particle when it is applied to this Planck particle. 2 Holographic scenario for the inertia The holographic principle proposed by 't Hooft states that the description of a volume space is encoded on a boundary to the region, preferably a light-like boundary like a gravitational horizon, see [23]. This principle suggests that the entire universe can be seen as a two-dimensional information structure encoded on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and at low energies. Verlinde pro- posed a model where the Newton's second law and Newton's law of gravitation arise from basic thermodynamic mechanisms. In the context of Verline's holo- graphic model, the response of a body to the force may be understood in terms of the first law of thermodynamics. Indeed Verlinde conjecture that Newton and Einstein's gravity originate from an entropic force arising from the thermo- dynamics on a holographic screen, see [26]. Moreover the holographic screen in Verlinde's formalism can be identified as local Rindler horizons and it is sug- gested that quantum mechanics is not fundamental but emerges from classical information theory applied to these causal horizons, see [15, 16]. An important cosmological consequence is that at the horizon of the universe there is a horizon temperature given by ~ H −30 TH = ∼ 3 × 10 K; (2) 2πkB and this temperature has associated the acceleration aH given by the Unruh [25] relationship 2πc kBTH aH = ~ ; (3) −9 2 and substituting the value of TH we arrive to aH = cH ∼ 10 m=s in agree- ment with the observation. The entropic force pulls outward towards the horizon apparently creating a Dark energy component and the accelerated expansion of the universe, see [4, 5]. 3 Due to the existence of the cosmic horizon all the matter of the universe is attracted by the horizon comparable to the Hubble horizon due to the entropic force and accelerated towards this horizon with an acceleration given by Eq. 3. However this acceleration is ridiculously small compared to local acceleration due to nearby bodies and it is only relevant for isolated bodies with very low local accelerations for instance a star at the edge of a galaxy giving also an explanation to the obtained rotation curves. First you fix an observer and equation (3) gives the acceleration that any body feels toward the horizon in the direction far away from the observer. Moreover this acceleration is ridiculously small compared with the local acceleration of bodies at small distance where the local movement is the relevant. For instance the movement in collision of our galaxy with the Andromeda galaxy. However for distant bodies, where the local movement is irrelevant for an observer so far, the accelerate expansion is relevant and we see that these bodies accelerate outside from the observer. Additionally in an accelerating universe, the universe was expanding more slowly in the past than it is today. Therefore the total acceleration measured by an observer is a = aL + aH where aL is the local acceleration due to the local dynamics that suffers a particle. It is clear that only for very low local movements the acceleration aH becomes im- portant. We can assume that the local movement is the gravitational attraction of a central mass and then we have GM⊙ a − a = a = : (4) H L r2 Equation (4) can be written into the form ( ) a GM⊙ a 1 − H = : (5) a r2 Hence, following [9] (see also [7]) for low local accelerations we obtain a modified inertia given by ( ) ( ) a 2πc k T m = m 1 − H = m 1 − B H ; (6) I i a i ~a where mi is the inertial mass and mI is the modified inertial mass.
{"url":"https://docslib.org/doc/525375/inertial-mass-of-an-elementary-particle-from-the-holographic-scenario","timestamp":"2024-11-12T08:59:39Z","content_type":"text/html","content_length":"65732","record_id":"<urn:uuid:ce359917-46c6-485f-a3d6-2a194e80711b>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00520.warc.gz"}
Take a closer look at HashMap - Moment For Technology Nice to meet you HashMap is a very important collection that is used frequently in daily life and is also the focus of interviews. This article is not intended to cover the basic usage apis, but rather to drill down to the bottom of a HashMap to get the most out of it. Some familiarity with hash tables and HashMaps is required. HashMap is essentially a hash table, so it is inseparable from the hash function, hash conflict, expansion scheme; At the same time, as a data structure, we must consider the problem of multi-thread concurrent access, that is, thread safety. These four key points are the key points of learning HashMap, and also the key points of designing HashMap. HashMap is part of the Map collection architecture and inherits the Serializable interface, which can be serialized, and the Cloneable interface, which can be copied. His succession structure is as HashMap is not universal, and other classes have been extended to meet the needs of some special situations, such as thread-safe ConcurrentHashMap, LinkHashMap for recording insertion order, TreeMap for sorting keys, and so on. This article focuses on four key topics: hash functions, hash conflicts, expansion solutions, and thread safety, supplemented by key source code analysis and related issues. All content in this article is JDK1.8 unless otherwise stated. The hash function The purpose of a hash function is to compute the index of a key in an array. The criteria for judging a hash function are whether the hash is uniform and whether the calculation is simple. The steps of the HashMap hash function: 1. Perturbed the Hashcode of the key object 2. Find the subscripts of the array by taking modules The purpose of the perturbation is to make hashcode more random, so that the second step of modulus will not make all the keys together, improving the hash uniformity. The perturbation can be seen in the hash() method: static final int hash(Object key) { int h; // Get the hashcode of the key, xor in the high and low bits return (key == null)?0 : (h = key.hashCode()) ^ (h >>> 16); Copy the code That is, the lower 16 bits are xor with the higher 16 bits, and the higher 16 bits remain the same. Generally, the length of the array is relatively short, and only the low order is involved in the hash in the modulus operation. The xor between the high position and the position enables the high position to participate in the hash operation, making the hash more uniform. The specific calculation is shown in the figure below (for the convenience of 8-bit demonstration, the same is true for 32-bit demonstration) : After the hashcode is disturbed, the result needs to be modulo. HashMap in JDK1.8 takes a different, more high-performance approach rather than simply using % for modulating. A HashMap controls an array of length 2 to an integer power, which has the advantage of having the same effect as a bit and operation on HashCode as an array of length -1. The diagram below: However, bit and operation is much more efficient than redundancy, thus improving performance. This feature is also used in capacity expansion operations, as discussed later. The putVal() method is called in the put() method: final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {...// Perform a bit sum with the array length -1 to get the index if ((p = tab[i = (n - 1) & hash]) == null)... }Copy the code For the complete hash calculation process, see the following figure: We mentioned above that the HashMap array is an integer power of 2. How does HashMap control the array to an integer power of 2? There are two ways to change an array length: 1. Length specified during initialization 2. Length increment during capacity expansion So let’s do the first case. By default, if the length is not specified in the HashMap constructor, the initial length is 16. 16 is a good rule of thumb because it is an integer power of 2, but too small triggers frequent expansion and wastes space. If you specify an integer power that is not 2, it is automatically converted to the lowest integer power of 2 greater than the specified number. If 6 is specified, it is converted to 8, and if 11 is specified, it is converted to 16. TableSizeFor () When we initialize a non-2 integer power, HashMap calls tableSizeFor() : public HashMap(int initialCapacity, float loadFactor) {...this.loadFactor = loadFactor; // The tableSizeFor method is called this.threshold = tableSizeFor(initialCapacity); static final int tableSizeFor(int cap) { // Note that you must subtract one here int n = cap - 1; n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16; return (n < 0)?1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1; Copy the code The tableSizeFor() method looks complicated. The purpose of the tableSizeFor() method is to make the highest bit 1 all subsequent bits 1 and then +1 to produce the lowest power of 2 that is just greater than initialCapacity. As shown below (8 bits are used here for simulation, and 32 bits for simulation) : So why do I have to evaluate cap to -1? If the specified number is exactly an integer power of 2, if there is no -1, the result will be a number twice as large as it, as follows: 00100High -1After the total variation1--> 00111-1---> 01000 Copy the code The second way to change the length of an array is to expand it. The size of the array must be a power of 2. The size of the array must be an integer power of 2. final Node<K,V>[] resize() { if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && oldCap >= DEFAULT_INITIAL_CAPACITY) // Set it to double newThr = oldThr << 1; . }Copy the code 1. HashMap improves the hash effect by xOR operation between the high 16 bits and the low 16 bits. 2. HashMap controls the integer power of array length 2 to simplify modular operations and improve performance. 3. A HashMap controls an array to a certain power of 2 by initializing it to a power of 2 and expanding it by a factor of 2. Hashing conflict resolution A good hash algorithm can never avoid hash collisions. A hash conflict is when two different keys hash to get the same array index. There are many ways to resolve hash conflicts, such as open addressing, rehash, public overflow table, and chained address. HashMap uses the chain address method. After JDK1.8, we also added red-black tree optimization, as shown in the following figure: When conflicts occur, the current node will form a linked list, and when the linked list is too long, it will automatically transform into a red-black tree to improve the search efficiency. Red black tree is a data structure with high search efficiency and time complexity O(logN), but red black tree can only play its advantages when the data volume is large. HashMap imposes the following restrictions on red-black tree conversions • If the list length >=8 and the array length >=64, the list is converted to a red-black tree. • If the list length is greater than or equal to 8, but the array length is less than 64, it will be expanded preferentially rather than converted to a red-black tree. • When the number of nodes in the red-black tree <=6, it is automatically converted into a linked list. That raises the following questions: • Why do I need an array length of 64 to convert a red-black tree? When the array length is short, such as 16, list the length of the eight is occupied 50%, maximum mean load has almost reached ceiling, as if into a red-black tree, after the expansion will once again split the red-black tree to the new array, so that instead of performance benefits, it will reduce performance. Therefore, when the array length is less than 64, the expansion is preferred. • Why does it have to be greater than or equal to 8 to turn into a red-black tree, rather than 7 or 9? Tree nodes are larger than ordinary nodes. In short linked lists, red-black trees do not show obvious performance advantages, but waste space. In short linked lists, linked lists are used instead of red-black trees. In theoretical mathematics (loading factor =0.75), the probability of the list reaching 8 is 1 in a million; With 7 as the watershed, anything above 7 becomes a red-black tree, and anything below 7 becomes a linked list. Red-black trees are designed to withstand a lot of hash collisions in extreme cases, where a linked list is more appropriate. Note that red-black trees come after JDK1.8, which uses the array + linked list schema. 1. HashMap adopts the chained address method, which is converted to a linked list in case of conflicts, and to a red-black tree in case of long lists to improve efficiency. 2. HashMap limits red-black trees to withstand only a few extreme cases. Expansion plan As more and more data are stored in HashMap, the probability of hash conflicts will be higher and higher. Space swap time can be utilized through array expansion to keep the search efficiency at constant time complexity. So when is the expansion going to happen? Controlled by a key parameter of the HashMap: the load factor. Load factor = number of nodes in the HashMap/array length, which is a scale value. When the number of nodes in the HashMap reaches the ratio of the load factor, the expansion will be triggered. That is, the load factor controls the threshold for the number of nodes that the current array can carry. If the length is 16 and the load factor is 0.75, then the number of nodes that can be accommodated is 16*0.75=12. The numerical size of the load factor needs to be carefully weighed. The larger the load factor is, the higher the utilization of the array is, and the higher the probability of hash conflict is. The smaller the load factor, the lower the array utilization, but also the lower the probability of hash collisions. So the size of the loading factor needs to balance the relationship between space and time. In theoretical calculation, 0.75 is a more appropriate value, and the probability of hash conflict increases exponentially when it is greater than 0.75, while the probability of hash conflict decreases not obviously when it is smaller than 0.75. The default size of the load factor in the HashMap is 0.75, and it is not recommended to change its value without special So how does the HashMap scale up once the threshold is reached? Whereas HashMap expands the array twice as long and migrates data from the old array to the new one, HashMap is optimized for migration: using the HashMap array length as an integer power of two, it migrates data in a more efficient way. Data migration prior to JDK1.7 was relatively simple, iterating through all nodes, using the hash function to compute new subscripts, and inserting them into the linked list of new arrays. This will have two disadvantages: **1. Each node needs to perform a calculation of complements; 2, insert into the new array using the head insert method, in multi-threaded environment will form a linked list ring. **jdk1.8 is optimized because it controls the array length to always be a power of 2. Each time the array is extended, it is doubled. The advantage is that the hash result of the key in the new array is only two ways: in the original position, or in the original position + the original array length. Specifically, we can look at the following figure: As you can see from the figure, the hash result in the new array depends only on the higher value. If the higher bit is 0, the result is in the original position, and if it is 1, the length of the original array is added. So we just need to determine whether a node’s higher bit is 1 or 0 to get its position in the new array, without having to hash it repeatedly. HashMap splits each list into two linked lists, corresponding to the original location or the original location + the original array length, and inserts them into the new array respectively, keeping the original node order as One problem remains: the head plug method creates a linked list ring. This is covered in the thread safety section. 1. Load factor determines the threshold of HashMap expansion, which needs to weigh time and space. Generally, 0.75 is kept unchanged. 2. The HashMap expansion mechanism combines the characteristics of the integer power of array length 2 to complete data migration with a higher efficiency, while avoiding the linked list ring caused by header insertion. Thread safety As a collection, the main function of HashMap is CRUD, that is, adding, deleting, checking and modifying data. Therefore, it must involve concurrent access of data by multiple threads. Concurrency issues require special attention. HashMap is not thread-safe, and data consistency cannot be guaranteed in the case of multiple threads. For example, thread A needs to insert node X at subscript 2 of A HashMap when the subscript 2 position is null. After checking for null, thread A suspends. Thread B inserts the new node Y at subscript 2; When thread A is restored, node X will be directly inserted into subscript 2, overwriting node Y, resulting in data loss, as shown below: In JDK1.7 and earlier expansion, the head insertion method is adopted, which is fast to insert, but in multi-threaded environment, it will cause linked list rings, and the linked list rings will not find the tail of the linked list and occur an infinite loop. For space, please refer to the interviewer for this question: Why is HashMap thread unsafe? , the author answers the concurrency problem of HashMap in detail. After THE expansion of JDK1.8, tail insertion method is adopted to solve this problem, but it does not solve the problem of data consistency. What if the results are inconsistent? There are three solutions to this problem: • The Hashtable • callCollections.synchronizeMap()Method to make HashMap multithreaded • usingConcurrentHashMap The idea of the first two schemes is similar, in each method, the entire object is locked. Hashtable is an old-generation collection framework with many backward designs. It adds the synchronize keyword to each method to ensure thread safety // Hashtable public synchronized V get(Object key) {... }public synchronized V put(K key, V value) {... }public synchronized V remove(Object key) {... }public synchronized V replace(K key, V value) {...} Copy the code The second method returns a SynchronizedMap object that, by default, locks the entire object for each method. The following source code: What is mutex in this case? See the constructor directly: final Object mutex; // Object on which to synchronize SynchronizedMap(Map<K,V> m) { this.m = Objects.requireNonNull(m); // The default is this object mutex = this; SynchronizedMap(Map<K,V> m, Object mutex) { this.m = m; this.mutex = mutex; Copy the code You can see that the default lock is itself, and the effect is the same as Hashtable. The result of this simple and crude locking of the entire object is: • Locks are very heavyweight and can seriously affect performance. • Only one thread can read or write at a time, limiting concurrency efficiency. ConcurrentHashMap is designed to solve this problem. He improves efficiency by reducing lock granularity +CAS. ConcurrentHashMap locks only one node of an array, not the entire object. Therefore, other threads can access other nodes of the array without affecting each other, greatly improving concurrency efficiency. Meanwhile, the ConcurrentHashMap read operation does not need to acquire the lock, as shown below: More on ConcurrentHashMap and Hashtable, due to space limitations, will be covered in another article. So, is using all three of these solutions absolutely thread-safe? Take a look at the following code: ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>(); if (map.containsKey("abc")){ String s = map.get("abc"); Copy the code When Thread1 calls containsKey and releases the lock, Thread2 acquires the lock, removes “ABC” and releases the lock, and Thread1 reads a null s. So ConcurrentHashMap class or Collections. SynchronizeMap () method or Hashtable is only on a certain extent to ensure the safety of the thread, and there is no guarantee that absolutely thread-safe. There is also a fast-fail problem with thread safety. When iterators of a HashMap are used to iterate over a HashMap, Iteractor will throw a fast-fail exception if structural changes occur to the HashMap, such as inserting new data, removing data, or expanding the HashMap. This prevents concurrent exceptions and ensures thread safety to a certain extent. The following source code: final Node<K,V> nextNode(a) {...if(modCount ! = expectedModCount)throw newConcurrentModificationException(); . }Copy the code The modCount variable of the HashMap is recorded when the Iteractor object is created, and modCount is incremented by one whenever a structural change to the HashMap is made. During iteration, we can judge whether the HashMap’s modCount is consistent with the self-saved expectedModCount to determine whether structural changes have taken place. The fast-fail exception can only be used as a security guarantee for traversal, not as a means of accessing a HashMap concurrently by multiple threads. If you have concurrency requirements, you still need to use the three methods described above. 1. HashMap is not thread-safe, and unexpected problems such as data loss can occur with concurrent access by multiple threads 2. HashMap1.8 uses the tail insertion method for expansion, to prevent the linked list loop caused by the problem of endless loop 3. Solutions to the concurrency problem areHashtable,Collections.synchronizeMap(),ConcurrentHashMap. One of the best solutions isConcurrentHashMap 4. The above solution does not guarantee thread-safety 5. Fast failure is a concurrency safety guarantee in the HashMap iteration mechanism The source code parsing Understanding of key variables There are many internal variables in the HashMap source code, and these variables will appear frequently in the following source code analysis. First, you need to understand the meaning of these // An array of data transient Node<K,V>[] table; // The number of key-value pairs stored transient int size; // Number of changes to the HashMap structure, mainly used to determine fast-fail transient int modCount; // Maximum number of key-value pairs to store (threshodl=table.length*loadFactor), also known as threshold int threshold; // Load factor, which represents the proportion of the maximum amount of data that can be held final float loadFactor; // Static inner class, the node type stored in HashMap; Can store key-value pairs, itself a linked list structure. static class Node<K.V> implements Map.Entry<K.V> {... }Copy the code The HashMap source code also includes initialization operations in the expansion method, so the expansion method source code is mainly divided into two parts: determine the new array size, migration data. Detailed source analysis is as follows, I played a very detailed annotation, optional view. The expansion steps have been described above, so you can use the source code to analyze how HashMap implements the above design. final Node<K,V>[] resize() { // The variables are the original array, the original array size, the original threshold; New array size, new threshold Node<K,V>[] oldTab = table; int oldCap = (oldTab == null)?0 : oldTab.length; int oldThr = threshold; int newCap, newThr = 0; // If the original array length is greater than 0 if (oldCap > 0) { // If you have exceeded the maximum length set (1<<30, that is, the maximum positive integer) if (oldCap >= MAXIMUM_CAPACITY) { // Set the threshold directly to the maximum positive number threshold = Integer.MAX_VALUE; return oldTab; else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && oldCap >= DEFAULT_INITIAL_CAPACITY) // Set it to double newThr = oldThr << 1; // The original array length is 0, but the maximum is not 0, set the length to the threshold // The array length is specified when creating a HashMap else if (oldThr > 0) newCap = oldThr; // First initialization, default 16 and 0.75 // Create a HashMap object using the default constructor else { newCap = DEFAULT_INITIAL_CAPACITY; newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY); // If the original array length is less than 16 or the maximum length is exceeded after doubling, the threshold is recalculated if (newThr == 0) { float ft = (float)newCap * loadFactor; newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ? (int)ft : Integer.MAX_VALUE); threshold = newThr; // Create a new array Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap]; table = newTab; if(oldTab ! =null) { // Loop through the array and compute the new position for each node for (int j = 0; j < oldCap; ++j) { Node<K,V> e; if((e = oldTab[j]) ! =null) { oldTab[j] = null; // If there is no successor node, then just use the new array length modulo to get the new index if (e.next == null) newTab[e.hash & (newCap - 1)] = e; // If it is a red-black tree, call the red-black tree disassembly method else if (e instanceof TreeNode) ((TreeNode<K,V>)e).split(this, newTab, j, oldCap); // The new location has only two possibilities: the original location, the original location + the old array length // Split the original list into two lists and insert them into two positions in the new array // Don't call the put method more than once else { // Are the same position of the list and the original position + the original length of the list Node<K,V> loHead = null, loTail = null; Node<K,V> hiHead = null, hiTail = null; Node<K,V> next; // Select 1or0 as the new bit do { next = e.next; if ((e.hash & oldCap) == 0) { if (loTail == null) loHead = e; loTail.next = e; loTail = e; else { if (hiTail == null) hiHead = e; elsehiTail.next = e; hiTail = e; }}while((e = next) ! =null); // Finally assign to the new array if(loTail ! =null) { loTail.next = null; newTab[j] = loHead; if(hiTail ! =null) { hiTail.next = null; newTab[j + oldCap] = hiHead; // Returns a new array return newTab; Copy the code Add value Call the put() method to add the key-value pair, and eventually call putVal() to actually implement the add logic. Code parsing is as follows: public V put(K key, V value) { // Get the hash value and call putVal to insert the data return putVal(hash(key), key, value, false.true); // onlyIfAbsent Indicates whether the old value is overwritten. True indicates that the old value is not overwritten. False indicates that the old value is overwritten // Evict is related to LinkHashMap callback methods and is beyond the scope of this article final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) { // TAB is the internal array of the HashMap, n is the length of the array, I is the subscript to insert, and p is the node corresponding to that subscript Node<K,V>[] tab; Node<K,V> p; int n, i; // Check whether the array is null or empty. If so, call resize() to expand the array if ((tab = table) == null || (n = tab.length) == 0) n = (tab = resize()).length; // Use bit and operations instead of modulo to get subscripts // Check whether the current subscript is null, if so, create a node directly insert, if not, enter the following else logic if ((p = tab[i = (n - 1) & hash]) == null) tab[i] = newNode(hash, key, value, null); else { // e indicates the node with the same key as the current key. If the node does not exist, it is null // k is the key of the current array subscript node Node<K,V> e; K k; // Check whether the current node is the same as the key to be inserted. If yes, the existing key is found if(p.hash == hash && ((k = p.key) == key || (key ! =null && key.equals(k)))) e = p; // Check whether the node is a tree node, if the red-black tree method is called to insert else if (p instanceof TreeNode) e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value); // The last case is direct list insertion else { for (int binCount = 0; ; ++binCount) { if ((e = p.next) == null) { p.next = newNode(hash, key, value, null); If the length is greater than or equal to 8, it is converted to a red-black tree // Note that the treeifyBin method does the array length judgment, // If less than 64, array expansion is preferred over tree conversion if (binCount >= TREEIFY_THRESHOLD - 1) treeifyBin(tab, hash); // Find the same direct exit loop if(e.hash == hash && ((k = e.key) == key || (key ! =null && key.equals(k)))) break; p = e; }}// If the same key is found, check whether onlyIfAbsent and the old value are null // Perform the update or do nothing, and return the old value if(e ! =null) { V oldValue = e.value; if(! onlyIfAbsent || oldValue ==null) e.value = value; returnoldValue; }}// If the old values are not updated, the number of key-value pairs in the HashMap has changed // modCount +1 indicates structure change // Check whether the length reaches the upper limit. If so, expand the capacity if (++size > threshold) AfterNodeInsertion is a LinkHashMap callback. return null; Copy the code Each step is explained in detail in the code, so here’s a summary: 1. Generally, there are two cases: finding the same key and not finding the same key. If yes, determine whether to update and return the old value. If no, insert a new Node, update the number of nodes and determine whether to expand. 2. Lookups fall into three categories: arrays, linked lists, and red-black trees. Array subscript I position is not empty and not equal to key, then need to determine whether the tree node or list node and search. 3. When the list reaches a certain length, it needs to expand to a red-black tree if and only if the list length >=8 and the array length >=64. Finally draw a picture to give a better impression of the whole process: Other problems Why does a Hashtable control array have a prime length, while a HashMap takes an integer power of 2? A: Prime length can effectively reduce hash collisions; The use of the integer power of 2 in HashMap is to improve the efficiency of complementation and capacity expansion, while combining the high-low xOR method to make the hash more uniform. Why do prime numbers reduce hash collisions? If the key’s Hashcode is evenly distributed between each number, the effect is the same for both prime and composite numbers. For example, if hashcode is evenly distributed between 1 and 20, the distribution is uniform regardless of whether the length is composite 4 or prime 5. If hashcode is always spaced by 2, such as 1,3,5… For an array of length 4, the subscripts of position 2 and position 4 do not hold data, while an array of length 5 does not. An array of composite length causes hashcode aggregates spaced by its factors to appear, making the hash less effective. For more details, see this blog: Algorithm Analysis: Why hash table Sizes are Prime, which uses data analysis to prove why prime numbers are better hashes. Why does inserting HashMap data require implementing hashCode and equals methods? What are the requirements for these two methods? A: Use HashCode to determine the insertion subscript and use equals comparison to find the data; The Hashcodes of two equal keys must be equal, but objects that have the same Hashcode are not necessarily equal. A Hashcode is like a person’s first name. People with the same name must have the same name, but the same name is not necessarily the same person. Equals compares whether or not the content is the same, overridden by objects. By default, equals compares the reference address. The “==” reference formation compares whether the reference address is the same, and the value object compares whether the value is the same. In a HashMap, you need to use hashcode to get the subscript of the key. If two identical objects have different Hashcodes, the same key will exist in the HashMap. So equals returns the same key and they have to have the same hashcode. HashMap compare two elements are the same using the three kinds of comparison method and combining with: p.h ash = = hash && ((k = p.k ey) = = key | | (key! = null && key.equals(k)). For a more in-depth look at the equals() and hashCode() methods in Java Enhancement, the author dissects the differences between these methods in great detail. The last The content of HashMap is difficult to cover in one article, and its design covers a lot of content. For example, thread-safe design can be extended to ConcurrentHashMap and Hashtable. The differences between these two classes and HashMap as well as the internal design are very important, which WILL be covered in another article. Finally, I hope the article is helpful to you. Full text here, the original is not easy, feel help can like collection comments forward. I have no talent, any ideas welcome to comment area exchange correction. If need to reprint please comment section or private communication. And welcome to my blog: Portal
{"url":"https://dev.mo4tech.com/take-a-closer-look-at-hashmap.html","timestamp":"2024-11-14T07:49:42Z","content_type":"text/html","content_length":"110437","record_id":"<urn:uuid:e0fd83ba-942a-41da-b27f-94303a1c9291>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00277.warc.gz"}
Class 10 Maths All in One - Student Factory Class 10 Maths All in One This app Class 10 Maths NCERT Solution ++ contains NCERT Solutions, Notes, Old Question Paper, Important Q/A, NCERT Book 🥳 to all the chapters included in the Class 10 Maths NCERT Book📖 in which is also used in Bihar Board & UP Board This App Contains✨ Class 10 Maths NCERT Solution Class 10 Maths Notes, Important Q/A Class 10 Maths Sample Paper (Old Papers with Solution) Class 10 Maths NCERT Book Chapter 1: Real Numbers Chapter 2: Polynomials Chapter 3: Pair of Linear Equations in Two Variables Chapter 4: Quadratic Equations Chapter 5: Arithmetic Progression Chapter 6: Triangles Chapter 7: Coordinate Geometry Chapter 8: Introduction to Trigonometry Chapter 9: Some Applications of Trigonometry Chapter 10: Circles Chapter 11: Constructions Chapter 12: Area Related to Circles Chapter 13: Surface Areas and Volumes Chapter 14: Statistics Chapter 15: Probability Class 10 Maths NCERT solution app is developed as per the requirements of our CBSE students to solve maths problem effectively and in real time with better understanding. In this Maths NCERT solution app, you can find every chapter wise solution, notes, previous year papers with solution. All things are managed. You will feel easy to use this app. Experience the App Now💯
{"url":"https://studentfactory.in/class-10-maths-all-in-one/","timestamp":"2024-11-09T18:49:17Z","content_type":"text/html","content_length":"270697","record_id":"<urn:uuid:a8ac8960-041d-45e8-bdc7-67548bea693b>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00393.warc.gz"}
Facrorization of Symmetric and Obliquely Symmetric Polynomials [1] Alan S. Tussy., R. David Gustatson., Diane R. Koing. (2011). Basic Mathematics for college students. Fourth education, pp. 638–658, 688–696. [2] Borwein P., and Erdein T. (1995). Polynomials and Polynomials inequality. New York, springer-Verlag. [3] Buckle N., Danbar I. (1997). Mathematics Higher Level. IBID Press, Australia. pp. 122–129. [4] Chairsson K. N., Tolbert L. M., McKenzie K. J., and Zhong Du. (2005). Elimination of Harmonics in a Multilevel Converted Using the Theory of Symmetric Polynomial and Resultants. TECC Transaction on Control Systems Technology (13/2). [5] Fabio Cirrito., Nigel Buckle., Iain Dunbar. (2007). Mathematics Higher Level. [6] Fine B., Rosenberger G. (1997). The fundamental Theorem of Algebra. Undergraduate texts in Mathematics. Springer-Verlag, New-York. [7] Gilbert Strang. (2006). Linear Algebra and its Applications. Fourth edition. [8] Gowers T. (2008). The Princeton Companion to Mathematics. Princeton University Press. [9] Jean Linsky., James Nicholson., Brian Western. (2018). Complete Pure Mathematics 213 for Campridge International AS&Level. pp. 12–18. [10] Lang S. (2002). Algebra. Revised 3rd edition. Springer-Verlag. New York. [11] Michael Artin. Algebra; Second edition: Pearson, 2010. [12] Tony Beadsworth. (2017). Complete Additional Mathematics for Campridge IGCSE&0level. pp. 119–120, 124–126. [13] Vaughn Climenhaga. (2013). Lecture notes. Advanced linear Algebra I. [14] Takagi T. (2007). Algebra Lecture. Revised New Edition. Kyoritsu Publication, in Japanese. [15] Weisstein E. W. (1998). CRC Concise Encyclopedia of Mathematics. English Edution; 2nd Eduation. CRC Press, Kindle version. [16] Winters G. B. (1974). On the existence of certain families of curves. American Journal Math (96). pp. 215–228.
{"url":"http://mathletters.org/article/10.11648/j.ml.20230902.12","timestamp":"2024-11-08T14:44:39Z","content_type":"text/html","content_length":"69951","record_id":"<urn:uuid:4b57f323-d54d-4c9a-8987-282af1be83fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00276.warc.gz"}
The Stacks project Lemma 30.10.5. Let $X$ be a Noetherian scheme. Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_ X$-module. Let $\mathcal{G}$ be a coherent $\mathcal{O}_ X$-module. Let $\mathcal{I} \subset \ mathcal{O}_ X$ be a quasi-coherent sheaf of ideals. Denote $Z \subset X$ the corresponding closed subscheme and set $U = X \setminus Z$. There is a canonical isomorphism \[ \mathop{\mathrm{colim}}\nolimits _ n \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ X}(\mathcal{I}^ n\mathcal{G}, \mathcal{F}) \longrightarrow \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ U}(\ mathcal{G}|_ U, \mathcal{F}|_ U). \] In particular we have an isomorphism \[ \mathop{\mathrm{colim}}\nolimits _ n \mathop{\mathrm{Hom}}\nolimits _{\mathcal{O}_ X}( \mathcal{I}^ n, \mathcal{F}) \longrightarrow \Gamma (U, \mathcal{F}). \] Comments (3) Comment #947 by correction_bot on Second to last sentence, should say ``we have proved…'' Comment #6793 by Yuto Masamura on I think $\mathcal I^{\otimes n}$ should be $\mathcal I^n$ in a displayed equation in the 2nd paragraph of proof. Comment #6940 by Johan on Thanks and fixed here. Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 01YB. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 01YB, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/01YB","timestamp":"2024-11-12T12:50:19Z","content_type":"text/html","content_length":"20764","record_id":"<urn:uuid:8fac024c-b322-4580-8299-eaa6fc8874e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00437.warc.gz"}
Write a script file on Octave based on the following criteria. The users should be able to add a value for ‘x’ and then see output Posted inStudy Guide Write a script file on Octave based on the following criteria. The users should be able to add a value for ‘x’ and then see the output ‘y’ when the program is run. Homework Help: Questions and Answers: Part a) Write a script file on Octave based on the following criteria. The users should be able to add a value for ‘x’ and then see the output ‘y’ when the program is run. Name the program as Question4. • If x>100, then y=10x • If 100 ≥ x ≥ 0, then y=x/10 • If x<0, then y=abs(x) Part b) Test your program you created in part a for x=200, x=10, and x=-10. Part a) Writing the Octave Script Here’s the script file named Question4.m that implements the given criteria. % Question4.m % This script calculates the value of y based on the input value of x. % Prompt the user to enter a value for x x = input('Enter a value for x: '); % Calculate the value of y based on the value of x if x > 100 y = 10 * x; elseif x >= 0 && x <= 100 y = x / 10; y = abs(x); % Display the result fprintf('The value of y is: %f\n', y); • Input: The script prompts the user to enter a value for x. • Conditions: □ If x > 100, then y is calculated as 10 * x. □ If 100 ≥ x ≥ 0, then y is calculated as x / 10. □ If x < 0, then y is calculated as abs(x). • Output: The script then displays the value of y. Part b) Testing the Program To test the program for x = 200, x = 10, and x = -10, you can run the script in Octave with these inputs. Below are the expected outputs: Test 1: x = 200 Enter a value for x: 200 The value of y is: 2000.000000 Explanation: Since x > 100, y = 10 * 200 = 2000. Test 2: x = 10 Enter a value for x: 10 The value of y is: 1.000000 Explanation: Since 100 ≥ x ≥ 0, y = 10 / 10 = 1. Test 3: x = -10 Enter a value for x: -10 The value of y is: 10.000000 Explanation: Since x < 0, y = abs(-10) = 10. Learn More: Homework Help Q. What is a sequence of characters enclosed within quotation marks or apostrophes, and is python’s data type for storing text? Q. Decide whether or not the following are statements. In the case of a statement, say if it is true or false, if possible. Q. Express each statement or open sentence in a symbolic form such as P∧Q, P∨Q, P∨∼Q or ∼P, etc. Be sure to also state exactly what statements P and Q stand for: Q. Lucy has to move data from column A to column N in a worksheet. Which keys should she select to move data. Q. Which of the following best describes the function of an IDMS server? No comments yet. Why don’t you start the discussion?
{"url":"https://www.fdaytalk.com/write-a-script-file-on-octave-based-on-the-following-criteria-the-users-should-be-able-to-add-a-value-for-x-and-then-see-the-output-y-when-the-program-is-run/","timestamp":"2024-11-07T19:56:09Z","content_type":"text/html","content_length":"151238","record_id":"<urn:uuid:77f3ad50-3142-41d1-88f2-e6ee80f14509>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00409.warc.gz"}
Algebraic Curves When f is a polynomial in x and y, then f defines an algebraic curve in a plane. Puiseux Expansion The function has two singularities and one regular point on the line . We can obtain information (such as the tangent lines, the delta invariant, and other invariants) on singularities by computing the Puiseux expansions. One can view these Puiseux expansions as a sort of Taylor expansion (note that Puiseux expansions can also have fractional powers of x, whereas a Taylor expansion does not) of the algebraic function RootOf(f, y). Because this algebraic function is multivalued, we will get several expansions corresponding to the different branches of at . The following command gives these expansions of at : The fourth argument tells puiseux to compute a minimal number of terms. The number of terms that will be computed in this way is precisely the number of terms that are required to be able to distinguish the different Puiseux expansions from one another. Note: It appears as though only three different Puiseux expansions were given, whereas the function has five different branches. The other two expansions are implicitly given by taking the conjugates of these expansions over the field Q((x)). This command means the following: Give the Puiseux expansions up to accuracy 3, which means modulo . So the coefficients of are given, but not the coefficients of . To view the terms of the Puiseux expansions, we must compute the Puiseux expansions up to accuracy > 3. As one can see from the Puiseux expansions, the point , is singular, because two Puiseux expansions are going through this point: and its conjugate. Similarly, is a singular point. The output consists of lists of the form [ the location of the singularity, the multiplicity, the delta invariant, the number of local branches ]. The location is given as a list of three homogeneous coordinates, (x, y, z). The points (x, y, 1) are points in the affine plane , where is the field of constants. The points (x, y, 0) are on the line at infinity. (In this example, there are no singularities at infinity.) A point is singular if and only if the multiplicity of that point is > 1, and also if and only if the delta invariant is > 0. In this example, all of the singularities are double points. A double point has multiplicity 2 and delta invariant 1. The genus of an algebraic curve equals minus the sum of the delta invariants. Because these have already been determined by the previous command, computing the genus is easy now: The genus only depends on the algebraic function field of the curve. This field does not change if we apply birational transformations, so the genus is invariant under such transformations. This means, for example, that must have genus 0 as well: Parametrization for Curves with Genus 0 An irreducible algebraic curve allows a parametrization if and only if the genus is 0. A parametrization is a birational map from the projective line (= the field of constants union {infinity}) to the algebraic curve. This map is 1-1, except for a finite number of points. It is a 1-1 map between the places of the projective line and the places of the algebraic curve . This parametrization algorithm computes a parametrization over an algebraic extension of the constants of degree <= 2 if degree(f,{x,y}) is even, and a rational (that is, no field extension) parametrization if the degree of the curve is odd, as in this example. If we substitute an arbitrary number for (avoiding roots of the denominators to avoid "division by zero" messages), we get a point on the curve. Verify if this is indeed a point on the curve: Integral Basis The function field of an irreducible algebraic curve f can be identified with the field C(x)[y]/(f). This is an algebraic extension of C(x) of degree degree(f,y). In some applications (integration of algebraic functions and the method that algcurves[parametrization] uses), one must be able to recognize the poles of elements in the function field. For this purpose, one can compute a basis (as a C[x] module) for the ring of functions in C(x)[y]/(f) that have only poles on the line at infinity. This basis is computed as follows: Note: This did not require computation time because it has already been determined for use in the parametrization algorithm. The map( normal, b ) command makes the output look somewhat smaller. The integral basis has a factor in the denominator if and only if there is a singularity on the line x=RootOf(k). This can only happen if divides the discriminant discrim(f, y). The integral basis contains information about the singularities in a form that is useful for computations. The advantage of this form is that it is rational--one requires no algebraic extensions of the field of constants to denote the integral basis, whereas we do need algebraic numbers to denote the Puiseux expansions. Suppose that we are only interested in the singularities on the line x=0. Then we can compute a local integral basis for the factor . A local integral basis for a factor is a basis for all elements in the function field that are integral over C[[]]. An element of the function field is integral over C[[]] if it has no pole at the places on the line . An example of the kind of information that the integral basis contains is the sum of the multiplicities of the factor in the denominators. This sum equals the sum of the delta invariants of the points on the line . So this local integral basis for the set of factors {x} tells us that the sum of the delta invariants on the line x=0 is 2. Homogeneous Representation Until now an algebraic curve was represented by a polynomial in two variables, x and y. An algebraic curve is normally not viewed as lying in the affine plane (where C is the field of constants), but in the projective plane (C). The notation as a nonhomogeneous polynomial in two variables is convenient if we want to study the affine part of the curve (for example, in the integral basis computation), but not if we are interested in the part of the curve on the line at infinity. Often (for example, for computing the genus), the part of the curve at infinity is needed as well. The nonhomogeneous notation in two variables can be converted to the homogeneous notation as follows: This can be converted again to f with Now the line at infinity is the line z=0 on homogeneous(f,x,y,z). By switching x and z we can move the line x=0 to infinity. We see that now there are two singularities at infinity, namely (1,0,0) and (-1,1,0). This may look different in the output of singularities, because in homogeneous coordinates, the points (x, y, z) and () are the same for nonzero . This curve is given as a homogeneous polynomial; however, the input for the algorithms in this package must be the curve in its nonhomogeneous representation: This polynomial is a curve of degree 10 having a maximal number of cusps according to the Plucker formulas. It was found by Rob Koelman. It has 26 cusps and no other singularities. Now check if these points are indeed cusps. The multiplicities are 2 and the delta invariants are 1, so that part is correct. To decide if these points are cusps, we can use Puiseux expansions. Take one of these points: Now compute the Puiseux expansions at the line x = <the x coordinate of this point> : To obtain the y coordinates of the points on the line from this, we need only substitute . We see that there are eight different points on this line. The stands for six conjugated points (namely the roots of the polynomial inside the ). However, the expression is only one point, because our field of definition is not anymore, but . This is because we needed to extend the field to be able to "look" on the line x=. The Puiseux series in this set (which have only been determined up to minimal accuracy) are series (with fractional powers) in (). Substitute the following to get series in x instead of in (). That makes it somewhat easier to read. For determining the type of the singularity, the coefficients here are not relevant. We have an expansion of the form The higher order terms (which have not yet been determined) have no influence on the type of the singularity, nor do the precise values of these constants. These expansions show that there are six regular points on this line and two cusps. One can easily get more terms of the Puiseux expansions, although that is not necessary for determining the type of the singularities. We see that if we compute more terms, the results can get bigger quickly. Graphics: Singularity Knots A different way to show information about a singularity is the plot_knot command. The input of this procedure is a polynomial f in and , for which the singularity that we are interested in is located at . For example, the curve on the top of this worksheet has a singularity at 0, a double point. The curve is irreducible, and so it consists of only one component. But locally, around the point 0, it has two components. Information on these components and their intersection multiplicities can be given in the form of Puiseux pairs, obtained by computing the Puiseux expansions. A different way of representing this information is as follows: By identifying with , the curve can be viewed as a two-dimensional surface over the real numbers. Now we can draw a small sphere inside around the point 0. The surface of the sphere has dimension 3 over R. The intersection with the curve (which has dimension 1 over the complex numbers, so dimension 2 over the real numbers) consists of a number of closed curves over the real numbers, inside a space (the sphere surface) of dimension 3. After applying a projection from the sphere surface to , these curves can be plotted. (See also: E. Brieskorn, H. Knorrer: Ebene Algebraische Kurven, Birkhauser 1981.) In this plot, each component will correspond to one of the local components. Furthermore, the winding number in the plot equals the intersection multiplicity of the two branches of the curve. In this example this number is 1. Of course, we want to see more complicated 3-D plots. For this, we need only make the singularity more complicated, and the intersection multiplicities of the branches higher. Because we are interested in the curve only locally, it does not matter if the curve is irreducible. However, the input of plot_knot must be square-free. We see that a cusp gives a 2-3 torus knot. More generally, if , then gives a p-q torus knot. It gets more interesting when we have plots consisting of more components. For this, we need only have a singularity consisting of more components. In this example, we start with a 2-3 torus knot using . To obtain a high intersection multiplicity, we add a high power of , and multiply these two components. Then we get: Getting good plots sometimes requires tweaking with the various options (see plot_knot), or changing some of the coefficients (for example, the coefficient of ). Plot options can be experimented with interactively by clicking the plot and using the plot menus, or right-clicking the plot. A useful option is Light Schemes available using the Color menu (or specified as the lightmodel option to the plot_knot call). Weierstrassform, j_invariant For curves with genus , one can compute a parametrization--a bijection between the curve and a projective line. One can view this projective line as a normal form for curves with genus . For curves with genus , we can also compute a normal form, the Weierstrass normal form. In this form the curve is written as F=- (polynomial in of degree ). To avoid ambiguity, we will denote the Weierstrass normal form with the variables and instead of and . Now the curves f and F are birationally equivalent. The Weierstrass form algorithm computes such an equivalence in two directions, [w[2] , w[3]] is a morphism from f to F, and [w[4] , w[5]] is the inverse morphism. Check this for the point (-2,2,1) on . Now check if this is on F: Now try the inverse, and see if we get the point (-2,2,1): The Weierstrassform procedure handles hyperelliptic curves as well. A curve f is called hyperelliptic if and only if the genus is >1 and f is birational to a curve F of the form where P is a polynomial in X. This means that the algebraic function field C(x)[y]/(f) is isomorphic to C(X)[Y]/(F). So this is similar to the elliptic case, the only difference is that the degree of F is The procedure is_hyperelliptic tests if a curve f is hyperelliptic. The curve given by h is birational to the curve F. The other entries of W give the images of x, y, X, and Y under the isomorphism and inverse isomorphism from C(x)[y]/(h) to C(X)[Y]/(F). Further Results In the subsequent sections, the following additional functions of algcurves are demonstrated: differentials: Compute basis of holomorphic differentials homology: Compute canonical basis of the homology. is_hyperelliptic: Test if a curve is hyperelliptic. monodromy: Compute the monodromy. periodmatrix: Determine the periodmatrix (Riemann matrix). The algebraic function field L of the following curve is the field of all meromorphic functions on the algebraic curve (Riemann surface). It is the fraction field of the ring C[x,y]/(f), where C is the field of complex numbers. We can write L=C(x)[y]/(f). The category of Riemann surfaces is equivalent to the category of algebraic curves, and also equivalent to the category of algebraic function fields. Now L is an algebraic extension of C(x) of degree 4. By interchanging the roles of x and y we can also view L as an algebraic extension of C(y) of degree 6. Holomorphic Differentials A regular point on the curve corresponds to one point on the Riemann surface. A singular point corresponds to one or more points on the Riemann surface. These points can be represented by Puiseux The following function A has a pole at one of the two points P1, P2 on the Riemann surface. The following is a basis for all functions that have no poles at all points where x has no pole. A differential is an expression "A * dx" where A is an element of L. Using a Puiseux expansion with local parameter T we can write it as A(T) * dT. If A(T) has no poles at any point, then the differential A*dx is called holomorphic. A basis of the holomorphic differentials is given by: Now we will verify using Puiseux expansions that this differential (which has no poles anywhere on the Riemann surface) has no poles at P1 or P2. We see that the differential dif1 has no pole at P1 and P2. It should also have no poles at infinity, which we can verify as follows. The Monodromy Let f be a polynomial in x and y. If we take a point x=b, then subs(x=b,f) will in general have n different solutions , where . The points where there are fewer than n different solutions are called discriminant points, since they are roots of . Let b be some fixed point that is not a discriminant point, and let be the solutions of f at x=b, obtained by: fsolve(subs(x=b,f),y,complex). If we take a path, starting at b, avoiding all , going in a loop around one discriminant point , then we can analytically continue along this path. When we return to b, this analytic continuation will transform into new solutions of subs(x=b,f). Since the solutions in the complex numbers of subs(x=b,f) are unique up to permutations, the analytic continuation of along this path will result in a permutation of . If this permutation is nontrivial then is called a branch point. The monodromy procedure will compute these permutations for all branch points . The group generated by these permutations is isomorphic to the Galois group of C(x)[y]/(f) over C(x), where C stands for the complex numbers. Now M[1] is the basepoint. M[2] are the solutions of fsolve(subs(x=M[1],f),complex). M[3] is a list with elements of the form [, permutation for ]. The group generated by these permutations is: G is the Galois group of C(x)[y]/(f) over C(x). This is a subgroup of the Galois group H of Q(x)[y]/(f) over Q(x). We see that G is a subgroup of H with index 2. This means that the intersection of the complex numbers C with the splitting field of f over Q(x) is a quadratic extension of Q. This quadratic extension is Q(I) because the splitting field of f over Q(x) is Q(x, I, RootOf(f,y)), which we can verify by: The Homology Given the homology, one can determine closed paths, called cycles, on the Riemann surface. The procedure homology computes 2*g cycles that form a canonical basis of the homology of the Riemann surface. Every closed path on the Riemann surface is homologically equivalent to a Z-linear combination of these 2*g cycles. The Period Matrix If omega is a holomorphic differential, then its periods defined by the integrals of omega over closed paths on the Riemann surface. A basis (as a Z-module) of the periods of omega is obtained by integrating omega over every element of the homology basis. The basis of the holomorphic differentials contains g elements. The homology basis has 2*g elements. By computing the integrals of these g holomorphic differentials over these 2*g paths, we get 2g by g integrals, which form a matrix called the period matrix. By taking a different basis of the holomorphic differentials we can obtain a normalized period matrix of the form (I, Z) where I is the g by g identity and where the g by g matrix Z is called the Riemann matrix. In this example we can give an exact Riemann matrix. In most cases the entries of the Riemann matrix will be transcendental numbers that can only be computed approximately. The accuracy will depend on the global variable Digits. Increasing this value will lead to more accurate digits, but also to a longer computation time. The algebraic curve f is determined up to birational equivalence by the matrix P and also by the matrix Z. A curve is birational to f if and only if its Riemann matrix is equivalent (not necessarily equal) to Z. Related Information For more examples, test files, plots, and documentation on the algcurves package, see: http://www.math.fsu.edu/~hoeij/maple.html and the help page algcurves. The package CASA contains code for curves and for other algebraic varieties as well, and can be obtained from: Return to Index of Example Worksheets Was this information helpful? Please add your Comment (Optional) E-mail Address (Optional) What is This question helps us to combat spam
{"url":"https://cn.maplesoft.com/support/helpJP/Maple/view.aspx?path=examples%2Falgcurve","timestamp":"2024-11-13T22:24:44Z","content_type":"application/xhtml+xml","content_length":"489131","record_id":"<urn:uuid:153e647e-cd91-44ea-a799-3f79681c2b23>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00161.warc.gz"}
Linear Algebra/Topic: Linear Recurrences - Wikibooks, open books for an open world In 1202 Leonardo of Pisa, also known as Fibonacci, posed this problem. A certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive? This moves past an elementary exponential growth model for population increase to include the fact that there is an initial period where newborns are not fertile. However, it retains other simplyfing assumptions, such as that there is no gestation period and no mortality. The number of newborn pairs that will appear in the upcoming month is simply the number of pairs that were alive last month, since those will all be fertile, having been alive for two months. The number of pairs alive next month is the sum of the number alive current month and the number of newborns. ${\displaystyle f(n+1)=f(n)+f(n-1)\qquad {\text{where }}f(0)=1{\text{, }}f(1)=1}$ The is an example of a recurrence relation (it is called that because the values of ${\displaystyle f}$ are calculated by looking at other, prior, values of ${\displaystyle f}$). From it, we can easily answer Fibonacci's twelve-month question. ${\displaystyle {\begin{array}{r|ccccccccccccc}{\textit {month}}&0&1&2&3&4&5&6&7&8&9&10&11&12\\\hline {\textit {pairs}}&1&1&2&3&5&8&13&21&34&55&89&144&233\end{array}}}$ The sequence of numbers defined by the above equation (of which the first few are listed) is the Fibonacci sequence. The material of this chapter can be used to give a formula with which we can can calculate ${\displaystyle f(n+1)}$ without having to first find ${\displaystyle f(n)}$, ${\displaystyle f(n-1)}$, etc. For that, observe that the recurrence is a linear relationship and so we can give a suitable matrix formulation of it. ${\displaystyle {\begin{pmatrix}1&1\\1&0\end{pmatrix}}{\begin{pmatrix}f(n)\\f(n-1)\end{pmatrix}}={\begin{pmatrix}f(n+1)\\f(n)\end{pmatrix}}\qquad {\text{where }}{\begin{pmatrix}f(1)\\f(0)\end Then, where we write ${\displaystyle T}$ for the matrix and ${\displaystyle {\vec {v}}_{n}}$ for the vector with components ${\displaystyle f(n+1)}$ and ${\displaystyle f(n)}$, we have that ${\ displaystyle {\vec {v}}_{n}=T^{n}{\vec {v}}_{0}}$. The advantage of this matrix formulation is that by diagonalizing ${\displaystyle T}$ we get a fast way to compute its powers: where ${\displaystyle T=PDP^{-1}}$ we have ${\displaystyle T^{n}=PD^{n}P^{-1}}$, and the ${\displaystyle n}$-th power of the diagonal matrix ${\displaystyle D}$ is the diagonal matrix whose entries that are the ${\ displaystyle n}$-th powers of the entries of ${\displaystyle D}$. The characteristic equation of ${\displaystyle T}$ is ${\displaystyle \lambda ^{2}-\lambda -1}$. The quadratic formula gives its roots as ${\displaystyle (1+{\sqrt {5}})/2}$ and ${\displaystyle (1-{\ sqrt {5}})/2}$. Diagonalizing gives this. ${\displaystyle {\begin{pmatrix}1&1\\1&0\end{pmatrix}}={\begin{pmatrix}{\frac {1+{\sqrt {5}}}{2}}&{\frac {1-{\sqrt {5}}}{2}}\\1&1\end{pmatrix}}{\begin{pmatrix}{\frac {1+{\sqrt {5}}}{2}}&0\\0&{\ frac {1-{\sqrt {5}}}{2}}\end{pmatrix}}{\begin{pmatrix}{\frac {1}{\sqrt {5}}}&-{\frac {1-{\sqrt {5}}}{2{\sqrt {5}}}}\\{\frac {-1}{\sqrt {5}}}&{\frac {1+{\sqrt {5}}}{2{\sqrt {5}}}}\end{pmatrix}}}$ Introducing the vectors and taking the ${\displaystyle n}$-th power, we have ${\displaystyle {\begin{array}{rl}{\begin{pmatrix}f(n+1)\\f(n)\end{pmatrix}}&={\begin{pmatrix}1&1\\1&0\end{pmatrix}}^{n}{\begin{pmatrix}f(1)\\f(0)\end{pmatrix}}\\&={\begin{pmatrix}{\frac {1+{\ sqrt {5}}}{2}}&{\frac {1-{\sqrt {5}}}{2}}\\1&1\end{pmatrix}}{\begin{pmatrix}{\frac {1+{\sqrt {5}}}{2}}^{n}&0\\0&{\frac {1-{\sqrt {5}}}{2}}^{n}\end{pmatrix}}{\begin{pmatrix}{\frac {1}{\sqrt {5}}}& -{\frac {1-{\sqrt {5}}}{2{\sqrt {5}}}}\\{\frac {-1}{\sqrt {5}}}&{\frac {1+{\sqrt {5}}}{2{\sqrt {5}}}}\end{pmatrix}}{\begin{pmatrix}f(1)\\f(0)\end{pmatrix}}\end{array}}}$ We can compute ${\displaystyle f(n)}$ from the second component of that equation. ${\displaystyle f(n)={\frac {1}{\sqrt {5}}}\left[\left({\frac {1+{\sqrt {5}}}{2}}\right)^{n+1}-\left({\frac {1-{\sqrt {5}}}{2}}\right)^{n+1}\right]}$ Notice that ${\displaystyle f}$ is dominated by its first term because ${\displaystyle (1-{\sqrt {5}})/2}$ is less than one, so its powers go to zero. In general, a linear recurrence relation has the form ${\displaystyle f(n+1)=a_{n}f(n)+a_{n-1}f(n-1)+\dots +a_{n-k}f(n-k)}$ (it is also called a difference equation). This recurrence relation is homogeneous because there is no constant term; i.e, it can be put into the form ${\displaystyle 0=-f(n+1)+a_{n}f(n)+a_{n-1}f (n-1)+\dots +a_{n-k}f(n-k)}$. This is said to be a relation of order ${\displaystyle k}$. The relation, along with the initial conditions ${\displaystyle f(0)}$, ..., ${\displaystyle f(k)}$ completely determine a sequence. For instance, the Fibonacci relation is of order ${\displaystyle 2}$ and it, along with the two initial conditions ${\displaystyle f(0)=1}$ and ${\displaystyle f(1)= 1}$, determines the Fibonacci sequence simply because we can compute any ${\displaystyle f(n)}$ by first computing ${\displaystyle f(2)}$, ${\displaystyle f(3)}$, etc. In this Topic, we shall see how linear algebra can be used to solve linear recurrence relations. First, we define the vector space in which we are working. Let ${\displaystyle V}$ be the set of functions ${\displaystyle f}$ from the natural numbers ${\displaystyle \mathbb {N} =\{0,1,2,\ldots \}} $ to the real numbers. (Below we shall have functions with domain ${\displaystyle \{1,2,\ldots \}}$, that is, without ${\displaystyle 0}$, but it is not an important distinction.) Putting the initial conditions aside for a moment, for any recurrence, we can consider the subset ${\displaystyle S}$ of ${\displaystyle V}$ of solutions. For example, without initial conditions, in addition to the function ${\displaystyle f}$ given above, the Fibonacci relation is also solved by the function ${\displaystyle g}$ whose first few values are ${\displaystyle g(0)=1}$, ${\ displaystyle g(1)=3}$, ${\displaystyle g(2)=4}$, and ${\displaystyle g(3)=7}$. The subset ${\displaystyle S}$ is a subspace of ${\displaystyle V}$. It is nonempty because the zero function is a solution. It is closed under addition since if ${\displaystyle f_{1}}$ and ${\ displaystyle f_{2}}$ are solutions, then ${\displaystyle a_{n+1}(f_{1}+f_{2})(n+1)+\dots +a_{n-k}(f_{1}+f_{2})(n-k)}$ {\displaystyle {\begin{aligned}&=(a_{n+1}f_{1}(n+1)+\dots +a_{n-k}f_{1}(n-k))\\&\quad +(a_{n+1}f_{2}(n+1)+\dots +a_{n-k}f_{2}(n-k))\\&=0.\end{aligned}}} And, it is closed under scalar multiplication since ${\displaystyle a_{n+1}(rf_{1})(n+1)+\dots +a_{n-k}(rf_{1})(n-k)}$ {\displaystyle {\begin{aligned}&=r(a_{n+1}f_{1}(n+1)+\dots +a_{n-k}f_{1}(n-k))\\&=r\cdot 0\\&=0.\end{aligned}}} We can give the dimension of ${\displaystyle S}$. Consider this map from the set of functions ${\displaystyle S}$ to the set of vectors ${\displaystyle \mathbb {R} ^{k}}$. ${\displaystyle f\mapsto {\begin{pmatrix}f(0)\\f(1)\\\vdots \\f(k)\end{pmatrix}}}$ Problem 3 shows that this map is linear. Because, as noted above, any solution of the recurrence is uniquely determined by the ${\displaystyle k}$ initial conditions, this map is one-to-one and onto. Thus it is an isomorphism, and thus ${\displaystyle S}$ has dimension ${\displaystyle k}$, the order of the recurrence. So (again, without any initial conditions), we can describe the set of solutions of any linear homogeneous recurrence relation of degree ${\displaystyle k}$ by taking linear combinations of only ${\ displaystyle k}$ linearly independent functions. It remains to produce those functions. For that, we express the recurrence ${\displaystyle f(n+1)=a_{n}f(n)+\dots +a_{n-k}f(n-k)}$ with a matrix equation. ${\displaystyle {\begin{pmatrix}a_{n}&a_{n-1}&a_{n-2}&\ldots &a_{n-k+1}&a_{n-k}\\1&0&0&\ldots &0&0\\0&1&0\\0&0&1\\\vdots &\vdots &&\ddots &&\vdots \\0&0&0&\ldots &1&0\end{pmatrix}}{\begin {pmatrix}f(n)\\f(n-1)\\\vdots \\f(n-k)\end{pmatrix}}={\begin{pmatrix}f(n+1)\\f(n)\\\vdots \\f(n-k+1)\end{pmatrix}}}$ In trying to find the characteristic function of the matrix, we can see the pattern in the ${\displaystyle 2\!\times \!2}$ case ${\displaystyle {\begin{pmatrix}a_{n}-\lambda &a_{n-1}\\1&-\lambda \end{pmatrix}}=\lambda ^{2}-a_{n}\lambda -a_{n-1}}$ and ${\displaystyle 3\!\times \!3}$ case. ${\displaystyle {\begin{pmatrix}a_{n}-\lambda &a_{n-1}&a_{n-2}\\1&-\lambda &0\\0&1&-\lambda \end{pmatrix}}=-\lambda ^{3}+a_{n}\lambda ^{2}+a_{n-1}\lambda +a_{n-2}}$ Problem 4 shows that the characteristic equation is this. ${\displaystyle {\begin{vmatrix}a_{n}-\lambda &a_{n-1}&a_{n-2}&\ldots &a_{n-k+1}&a_{n-k}\\1&-\lambda &0&\ldots &0&0\\0&1&-\lambda \\0&0&1\\\vdots &\vdots &&\ddots &&\vdots \\0&0&0&\ldots &1&-\ lambda \end{vmatrix}}}$ ${\displaystyle =\pm (-\lambda ^{k}+a_{n}\lambda ^{k-1}+a_{n-1}\lambda ^{k-2}+\dots +a_{n-k+1}\lambda +a_{n-k})}$ We call that the polynomial "associated" with the recurrence relation. (We will be finding the roots of this polynomial and so we can drop the ${\displaystyle \pm }$ as irrelevant.) If ${\displaystyle -\lambda ^{k}+a_{n}\lambda ^{k-1}+a_{n-1}\lambda ^{k-2}+\dots +a_{n-k+1}\lambda +a_{n-k}}$ has no repeated roots then the matrix is diagonalizable and we can, in theory, get a formula for ${\displaystyle f(n)}$ as in the Fibonacci case. But, because we know that the subspace of solutions has dimension ${\displaystyle k}$, we do not need to do the diagonalization calculation, provided that we can exhibit ${\displaystyle k}$ linearly independent functions satisfying the relation. Where ${\displaystyle r_{1}}$, ${\displaystyle r_{2}}$, ..., ${\displaystyle r_{k}}$ are the distinct roots, consider the functions ${\displaystyle f_{r_{1}}(n)=r_{1}^{n}}$ through ${\displaystyle f_ {r_{k}}(n)=r_{k}^{n}}$ of powers of those roots. Problem 2 shows that each is a solution of the recurrence and that the ${\displaystyle k}$ of them form a linearly independent set. So, given the homogeneous linear recurrence ${\displaystyle f(n+1)=a_{n}f(n)+\dots +a_{n-k}f(n-k)}$ (that is, ${\displaystyle 0=-f(n+1)+a_{n}f(n)+\dots +a_{n-k}f(n-k)}$) we consider the associated equation ${\ displaystyle 0=-\lambda ^{k}+a_{n}\lambda ^{k-1}+\dots +a_{n-k+1}\lambda +a_{n-k}}$. We find its roots ${\displaystyle r_{1}}$, ..., ${\displaystyle r_{k}}$, and if those roots are distinct then any solution of the relation has the form ${\displaystyle f(n)=c_{1}r_{1}^{n}+c_{2}r_{2}^{n}+\dots +c_{k}r_{k}^{n}}$ for ${\displaystyle c_{1},\dots ,c_{n}\in \mathbb {R} }$. (The case of repeated roots is also easily done, but we won't cover it here— see any text on Discrete Mathematics.) Now, given some initial conditions, so that we are interested in a particular solution, we can solve for ${\displaystyle c_{1}}$, ..., ${\displaystyle c_{n}}$. For instance, the polynomial associated with the Fibonacci relation is ${\displaystyle -\lambda ^{2}+\lambda +1}$, whose roots are ${\displaystyle (1\pm {\sqrt {5}})/2}$ and so any solution of the Fibonacci equation has the form ${\ displaystyle f(n)=c_{1}((1+{\sqrt {5}})/2)^{n}+c_{2}((1-{\sqrt {5}})/2)^{n}}$. Including the initial conditions for the cases ${\displaystyle n=0}$ and ${\displaystyle n=1}$ gives ${\displaystyle {\begin{array}{*{2}{rc}r}c_{1}&+&c_{2}&=&1\\(1+{\sqrt {5}}/2)c_{1}&+&(1-{\sqrt {5}}/2)c_{2}&=&1\end{array}}}$ which yields ${\displaystyle c_{1}=1/{\sqrt {5}}}$ and ${\displaystyle c_{2}=-1/{\sqrt {5}}}$, as was calculated above. We close by considering the nonhomogeneous case, where the relation has the form ${\displaystyle f(n+1)=a_{n}f(n)+a_{n-1}f(n-1)+\dots +a_{n-k}f(n-k)+b}$ for some nonzero ${\displaystyle b}$. As in the first chapter of this book, only a small adjustment is needed to make the transition from the homogeneous case. This classic example illustrates. In 1883, Edouard Lucas posed the following problem. In the great temple at Benares, beneath the dome which marks the center of the world, rests a brass plate in which are fixed three diamond needles, each a cubit high and as thick as the body of a bee. On one of these needles, at the creation, God placed sixty four disks of pure gold, the largest disk resting on the brass plate, and the others getting smaller and smaller up to the top one. This is the Tower of Bramah. Day and night unceasingly the priests transfer the disks from one diamond needle to another according to the fixed and immutable laws of Bramah, which require that the priest on duty must not move more than one disk at a time and that he must place this disk on a needle so that there is no smaller disk below it. When the sixty-four disks shall have been thus transferred from the needle on which at the creation God placed them to one of the other needles, tower, temple, and Brahmins alike will crumble into dusk, and with a thunderclap the world will vanish. (Translation of De Parvill (1884) from Ball (1962).) How many disk moves will it take? Instead of tackling the sixty four disk problem right away, we will consider the problem for smaller numbers of disks, starting with three. To begin, all three disks are on the same needle. After moving the small disk to the far needle, the mid-sized disk to the middle needle, and then moving the small disk to the middle needle we have this. Now we can move the big disk over. Then, to finish, we repeat the process of moving the smaller disks, this time so that they end up on the third needle, on top of the big disk. So the thing to see is that to move the very largest disk, the bottom disk, at a minimum we must: first move the smaller disks to the middle needle, then move the big one, and then move all the smaller ones from the middle needle to the ending needle. Those three steps give us this recurrence. ${\displaystyle T(n+1)=T(n)+1+T(n)=2T(n)+1\quad {\text{where }}T(1)=1}$ We can easily get the first few values of ${\displaystyle T}$. ${\displaystyle {\begin{array}{r|cccccccccc}n&1&2&3&4&5&6&7&8&9&10\\\hline T(n)&1&3&7&15&31&63&127&255&511&1023\end{array}}}$ We recognize those as being simply one less than a power of two. To derive this equation instead of just guessing at it, we write the original relation as ${\displaystyle -1=-T(n+1)+2T(n)}$, consider the homogeneous relation ${\displaystyle 0=-T(n)+2T(n-1)}$, get its associated polynomial ${\displaystyle -\lambda +2}$, which obviously has the single, unique, root of ${\displaystyle r_{1}=2}$, and conclude that functions satisfying the homogeneous relation take the form ${\displaystyle T(n)=c_{1}2^{n}}$. That's the homogeneous solution. Now we need a particular solution. Because the nonhomogeneous relation ${\displaystyle -1=-T(n+1)+2T(n)}$ is so simple, in a few minutes (or by remembering the table) we can spot the particular solution ${\displaystyle T(n)=-1}$ (there are other particular solutions, but this one is easily spotted). So we have that— without yet considering the initial condition— any solution of ${\displaystyle T(n+1)=2T(n)+1}$ is the sum of the homogeneous solution and this particular solution: ${\displaystyle T(n)=c_{1}2^{n}-1}$. The initial condition ${\displaystyle T(1)=1}$ now gives that ${\displaystyle c_{1}=1}$, and we've gotten the formula that generates the table: the ${\displaystyle n}$-disk Tower of Hanoi problem requires a minimum of ${\displaystyle 2^{n}-1}$ moves. Finding a particular solution in more complicated cases is, naturally, more complicated. A delightful and rewarding, but challenging, source on recurrence relations is (Graham, Knuth & Patashnik 1988 )., For more on the Tower of Hanoi, (Ball 1962) or (Gardner 1957) are good starting points. So is (Hofstadter 1985). Some computer code for trying some recurrence relations follows the exercises. Problem 1 Solve each homogeneous linear recurrence relations. 1. ${\displaystyle f(n+1)=5f(n)-6f(n-1)}$ 2. ${\displaystyle f(n+1)=4f(n)}$ 3. ${\displaystyle f(n+1)=6f(n)+7f(n-1)+6f(n-2)}$ Problem 2 Give a formula for the relations of the prior exercise, with these initial conditions. 1. ${\displaystyle f(0)=1}$ , ${\displaystyle f(1)=1}$ 2. ${\displaystyle f(0)=0}$ , ${\displaystyle f(1)=1}$ 3. ${\displaystyle f(0)=1}$ , ${\displaystyle f(1)=1}$ , ${\displaystyle f(2)=3}$ . Problem 3 Check that the isomorphism given between ${\displaystyle S}$ and ${\displaystyle \mathbb {R} ^{k}}$ is a linear map. It is argued above that this map is one-to-one. What is its inverse? Problem 4 Show that the characteristic equation of the matrix is as stated, that is, is the polynomial associated with the relation. (Hint: expanding down the final column, and using induction will work.) Problem 5 Given a homogeneous linear recurrence relation ${\displaystyle f(n+1)=a_{n}f(n)+\dots +a_{n-k}f(n-k)}$ , let ${\displaystyle r_{1}}$ , ..., ${\displaystyle r_{k}}$ be the roots of the associated 1. Prove that each function ${\displaystyle f_{r_{i}}(n)=r_{k}^{n}}$ satisfies the recurrence (without initial conditions). 2. Prove that no ${\displaystyle r_{i}}$ is ${\displaystyle 0}$ . 3. Prove that the set ${\displaystyle \{f_{r_{1}},\dots ,f_{r_{k}}\}}$ is linearly independent. Problem 6 (This refers to the value ${\displaystyle T(64)=18,446,744,073,709,551,615}$ given in the computer code below.) Transferring one disk per second, how many years would it take the priests at the Tower of Hanoi to finish the job? Computer Code This code allows the generation of the first few values of a function defined by a recurrence and initial conditions. It is in the Scheme dialect of LISP (specifically, it was written for A. Jaffer's free scheme interpreter SCM, although it should run in any Scheme implementation). First, the Tower of Hanoi code is a straightforward implementation of the recurrence. (define (tower-of-hanoi-moves n) (if (= n 1) (+ (* (tower-of-hanoi-moves (- n 1)) 1) ) ) (Note for readers unused to recursive code: to compute ${\displaystyle T(64)}$ , the computer is told to compute ${\displaystyle 2*T(63)-1}$ , which requires, of course, computing ${\displaystyle T (63)}$ . The computer puts the "times ${\displaystyle 2}$ " and the "plus ${\displaystyle 1}$ " aside for a moment to do that. It computes ${\displaystyle T(63)}$ by using this same piece of code (that's what "recursive" means), and to do that is told to compute ${\displaystyle 2*T(62)-1}$ . This keeps up (the next step is to try to do ${\displaystyle T(62)}$ while the other arithmetic is held in waiting), until, after ${\displaystyle 63}$ steps, the computer tries to compute ${\displaystyle T(1)}$ . It then returns ${\displaystyle T(1)=1}$ , which now means that the computation of $ {\displaystyle T(2)}$ can proceed, etc., up until the original computation of ${\displaystyle T(64)}$ finishes.) The next routine calculates a table of the first few values. (Some language notes: '() is the empty list, that is, the empty sequence, and cons pushes something onto the start of a list. Note that, in the last line, the procedure proc is called on argument n.) (define (first-few-outputs proc n) (first-few-outputs-helper proc n '()) ) (define (first-few-outputs-aux proc n lst) (if (< n 1) (first-few-outputs-aux proc (- n 1) (cons (proc n) lst)) ) ) The session at the SCM prompt went like this. >(first-few-outputs tower-of-hanoi-moves 64) Evaluation took 120 mSec (1 3 7 15 31 63 127 255 511 1023 2047 4095 8191 16383 32767 9223372036854775807 18446744073709551615) This is a list of ${\displaystyle T(1)}$ through ${\displaystyle T(64)}$ . (The ${\displaystyle 120}$ mSec came on a 50 mHz '486 running in an XTerm of XWindow under Linux. The session was edited to put line breaks between numbers.) • Ball, W.W. (1962), Mathematical Recreations and Essays, MacMillan (revised by H.S.M. Coxeter). • De Parville (1884), La Nature, vol. I, Paris, pp. 285–286. • Gardner, Martin (May. 1957), "Mathematical Games: About the remarkable similarity between the Icosian Game and the Tower of Hanoi", Scientific American: 150–154{{citation}}: Check date values in: |year= (help). • Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1988), Concrete Mathematics, Addison-Wesley. • Hofstadter, Douglas R. (1985), Metamagical Themas:~Questing for the Essence of Mind and Pattern, Basic Books.
{"url":"https://en.m.wikibooks.org/wiki/Linear_Algebra/Topic:_Linear_Recurrences","timestamp":"2024-11-01T22:36:36Z","content_type":"text/html","content_length":"281816","record_id":"<urn:uuid:3932019f-17c3-4771-aa55-6d595caca619>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00374.warc.gz"}
Back to Papers Home Back to Papers of School of Physics Paper IPM / P / 15192 School of Physics Title: Strong Decay of P[c](4380) Pentaquark in a Molecular Picture Author(s): 1. K. Azizi 2. Y. Sarac 3. H. Sundu Status: Published Journal: Phys. Lett. B Vol.: 782 Year: 2018 Pages: 694-701 Supported by: IPM There are different assumptions on the substructure of the pentaquarks P[c](4380) and P[c](4450), newly founded in J/ψN invariant mass by the LHCb collaboration, giving consistent mass results with the experimental observations. The experimental data and recent theoretical studies on their mass allow us to interpret them as spin-3/2 negative-parity and spin-5/2 positive-parity pentaquarks, respectively. There may exist opposite-parity states corresponding to these particles, as well. Despite a lot of studies, however, the nature and organization of these pentaquarks in terms of quarks and gluons are not clear. To this end we need more theoretical investigations on other physical properties of these states. In this accordance, we study a strong and dominant decay of the P[c](4380) to J/ψ and N in the framework of three point QCD sum rule method. An interpolating current in a molecular form is applied to calculate six strong coupling form factors defining the transitions of the positive and negative parity spin-3/2 pentaquark states. The results of the coupling constants are used in the calculation of the decay widths of these transitions. The obtained result for the decay width of the negative parity state is compatible with the experimental observation. Our prediction for the width of the opposite-parity state's transition may help experimentalists in the search for the possible positive parity spin-3/2 pentaquark of ―ccduu. Download TeX format back to top
{"url":"https://ipm.ac.ir/ViewPaperInfo.jsp?PTID=15192&school=Physics","timestamp":"2024-11-05T19:24:55Z","content_type":"text/html","content_length":"42641","record_id":"<urn:uuid:3c5fb029-a1b6-4461-b7c8-98dc692e1634>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00293.warc.gz"}
Digital Signal Processing Important MCQ Set 2 (DSP) » UNIQUE JANKARI Digital Signal Processing Important MCQ Set 2 (DSP) DSP MCQ set 2:- Questions and answers are much important for any interviews of Digital Signal processing. Join Our Telegram Channel For pdf. Q1.For what kind of signals one-sided z-transform is unique? • (A) All signals • (B) Anti-causal signal • (C) Causal signal • (D) Linear Ans:- (C) Causal signal Q2. If X1(k) and X2(k) are the N-point DFTs of X1(n) and x2(n) respectively, then what is the N-point DFT of x(n)=ax1(n)+bx2(n)? • (A) X1(ak)+X2(bk) • (B) aX1(k)+bX2(k) • (C) akX1(k)+bkX2(k) • (D) X1(ak)-X2(bk) Ans:- (B) aX1(k)+bX2(k) Q3. Digital angular frequency has a unit • (A) radian per second • (B) radian per samples • (C) No unit • (D) Hz Ans:-(B) radian per samples Q4. Type of digital crossover • (A) Active • (B) Passive • (C) Active and Passive • (D) Static and Dynamic Ans:- (C) Active and Passive Q5. condition for orthogonality of vectors • (A) Inner product Zero • (B) Inner product 1 • (C) outer product 1 • (D) Magnitude 1 Ans:- (A) Inner product Zero Q6. The speech signal is obtained after • (A) Analog to digital conversion • (B) Digital to analog conversion • (C) Modulation • (D) Quantization Ans:- (B) Digital to analog conversion Q7. Which of the following techniques are NOT used to convert analog filter to digital filter • (A) Successive Approximation • (B) Impulse Invariant • (C) Approximation of Derivatives • (D) Bilinear Transformation Ans:- (A) Successive Approximation Q8. Which of the following is done to convert a continuous-time signal into discrete-time signal? • (A) Modulating • (B) Sampling • (C) Differentiating • (D) Integrating Ans:- (B) Sampling Q9. Normalized frequency of discrete-time signal has a maximum value • (A) infinite • (B) 0.5 • (C) 1 • (D) 1.5 Ans:- (B) 0.5 Q10. IIR Filter design methods are based on different mapping functions used for mapping the following domains • (A) DFT to Z domain • (B) S domain to Z domain • (C) Z domain to S domain • (D) Z domain to DFT Ans:- (B) S domain to Z domain Q11. The transfer function of IIR Filters contains • (A) Only Zeros • (B) Only Poles • (C) Poles and Zeros • (D) Neither Poles or Zeros Ans:- (C) Poles and Zeros Q12. The Chebyshev filters have • (A) Flat passband and Flat stopband • (B) Flat stopband and Tapering stopband • (C) Flat stopband and Equiripple stopband • (D) Flat passband and Equiripple stopband Ans:- (C) Flat stopband and Equiripple stopband Q13. Which filter is present in the DSP system • (A) band stop • (B) High pass • (C) Aliasing • (D) Anti aliasing Ans:- (D) Anti-aliasing Q14. What is the set of all values of z for which X(z) attains a finite value? • (A) Radius of convergence • (B) Radius of divergence • (C) Feasible solution • (D) Region of convergence Ans:- (C) Region of convergence Q15. Which of the following method is not used to find the inverse z-transform of a signal? • (A) Counter integration • (B) Expansion into a series of terms • (C) Partial fraction expansion • (D) Matrix Multiplication Method Ans:- (D) Matrix Multiplication Method Q16. For a decimation-in-time FFT algorithm, which of the following is true? (A) Both input and output are in order (B) Both input and output are shuffled (C) Input is shuffled and output is in order (D) None of the above Ans:- (C) Input is shuffled and output is in order Q17. Basis functions for Fourier transform are (A) square (B) Triangular (C) Cos & sin (D) cosine Ans:- (C) Cos & sin Q18. Digital Active cross over includes (A) LPF & HPF (B) LPF (C) HPF (D) BPF Ans:- (A) LPF & HPF Q19. Which of the following is used after DAC and DEMUX in Voice decoders (A) Balance Modulator (B) Amplifier (C) MUX (D) DEMUX Ans:- (A) Balance Modulator Q20. The poles of a Butterworth filter lie on a ———– and poles of a Chebyshev filter lie on a ————- (A) Ellipse, Circle (B) Ellipse, Trapezoid (C) Circle, Circle (D) Circle, Ellipse Ans:- (D) Circle, Ellipse Q21. The large side lobes of W(ω) results in which of the following undesirable effects? (A) Circling effects (B) : Broadening effects (C) Ringing effects (D) quantization Ans:- (C) Ringing effects Q22. Which of the following is True regarding stability of IIR filters? (A) IIR filters are always stable (B) Number of Zeros in transfer function of IIR filter determines stability (C) Stability is independent of presence of poles and zeros (D) IIR fiter can tend to become unstable due to presence of poles Ans:- (D) IIR fiter can tend to become unstable due to presence of poles Q23. A linear time invariant system is said to be BIBO stable if and only if the ROC of the system function (A) Includes unit circle (B) Excludes unit circle (C) Is an unit circle (D) In Z plane Ans:- (A) Includes unit circle Q24. Mapping of points from the s-plane to the z-plane is implied by the relation (A) z=esT where T represents sampling interval (B) z = e/(s*T) where T represents sampling interval (C) z= e+s+T where T represents sampling interval (D) z = esT where T represents sampling interval Ans:- D) z = esT where T represents sampling interval Q25. ECG have frequency range from (A) 0.5Hz to 80 Hz (B) 20 Hz to 20KHz (C) 1Hz to 10Hz (D) 10 Hz to 1 KHz Ans:- A) 0.5Hz to 80 Hz See More :- Digital Signal Processing Important MCQ Set 1 (DSP), DSP MCQ set 2:- Questions and answers are much important for any interviews of Digital Signal processing. Join Our Telegram Channel For pdf. 1 thought on “Digital Signal Processing Important MCQ Set 2 (DSP)” Leave a Comment You must be logged in to post a comment.
{"url":"https://uniquejankari.in/dsp-mcq-set-2-questions-and-answers-digital-signal-processing/","timestamp":"2024-11-08T09:02:47Z","content_type":"text/html","content_length":"186344","record_id":"<urn:uuid:789a4439-9fd8-4ede-a1e0-da9d0147416a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00373.warc.gz"}
How to extract the overlap of two samples from Cobaya and draw the contour plot? Good afternoon, Thank you for welcoming me into the CosmoCoffee forum! I have a question about Cobaya. I have to check the distribution that results from the overlap of two or more samples drawn from Cobaya package (for simplicity, let's consider only two for the moment). For the first distribution, the code is the following: Code: Select all gdsamples1of3 = MCSamplesFromCobaya(updated_info, products.products()["sample"],ignore_rows=0.3) gdplot = gdplt.getSubplotPlotter(width_inch=5) p1 = gdsamples1of3.getParams() gdsamples1of3.addDerived(O_m1, name='O_m1', label=r"\Omega_{0m}") gdsamples1of3.addDerived(H01, name='H01', label=r"H_0(Km \: s^{-1} \: Mpc^{-1})") gdplot.triangle_plot(gdsamples1of3, ["O_m1","H01"], filled=True) while for the second the code is similar Code: Select all gdsamples2of3 = MCSamplesFromCobaya(updated_info, products.products()["sample"],ignore_rows=0.3) gdplot = gdplt.getSubplotPlotter(width_inch=5) p1 = gdsamples2of3.getParams() gdsamples2of3.addDerived(O_m1, name='O_m1', label=r"\Omega_{0m}") gdsamples2of3.addDerived(H01, name='H01', label=r"H_0(Km \: s^{-1} \: Mpc^{-1})") gdplot.triangle_plot(gdsamples2of3, ["O_m1","H01"], filled=True) When I run the codes and plot the distributions I obtain the overlap of the whole contours, e.g. in the following case with multiple distributions: Code: Select all plot=g.plot_2d([gdsamples1of3,gdsamples2of3,gdsamples3of3,gdsamplesfull],'O_m1','H01',filled=False,colors=['cyan','orange','lime','red'],lims=[0, 1, 64, 76]) What I would like to obtain is only the sub-distribution given by the overlap of all the contours so that I can plot it alone. I draw it in black in the following image. Can you please clarify me how to do it? Thank you in advance! Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? Hi Simone, I have tried to solve your problem by following https://getdist.readthedocs.io/en/latest/plot_gallery.html. This is one example plot from Plot Gallery. I have followed some steps here to plot only the overlap region between the two distributions... I have created grid of points and checked for those lie within both contours. Then. created a mask that shows only the overlapping points. Code: Select all %matplotlib inline g = plots.get_single_plotter(width_inch=3, ratio=1) g.plot_2d([samples, samples2], 'x1', 'x2', filled=True) # Get the paths from the contour plots paths1 = g.subplots[0][0].collections[0].get_paths() paths2 = g.subplots[0][0].collections[1].get_paths() # Create a grid of points in the range of your plots x = np.linspace(-5, 5, 500) y = np.linspace(-5, 5, 500) X, Y = np.meshgrid(x, y) points = np.c_[X.ravel(), Y.ravel()] for path1 in paths1: for path2 in paths2: # Check which points are inside each contour mask1 = path1.contains_points(points).reshape(X.shape) mask2 = path2.contains_points(points).reshape(X.shape) # Find overlapping region and plot it overlap_mask = np.logical_and(mask1, mask2) g.subplots[0][0].scatter(X[overlap_mask], Y[overlap_mask], color='purple', s=1, alpha=0.6) # Remove the original unfilled contours for collection in g.subplots[0][0].collections: g.add_legend(['Overlap Region'], colored_text=True); Please let me know if this approach solves your problem! plot1.png (28.83 KiB) Viewed 11222 times plot1.png (28.83 KiB) Viewed 11223 times plot2.png (25.25 KiB) Viewed 11225 times Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? I assume the contour is supposed to be the joint constraint. In general you cannot do this without generating new samples by sampling from the joint likelihood (using all three bins). (if you have more than 2 parameters, the overlap of the 2D projected contours is in general not the same as two the 2D projection of the ND-intersection of the ND distributions) Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? Dear All, Thank you for your support. The graphical solution of Aruna Harikant is useful for the graphical showing of the intersection and surely I will leverage it but my idea was to consider the joint distribution of the posteriors. As Antony Lewis was pointing out, I should rewrite the likelihood considering the joint contribution of the three or more bins. But in this case it is already given since it is the one in red called "Full Pantheon". Thank you again for your kind support. Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? Hi Simone, I'm grateful for Prof. Lewis's insights and based on his suggestions, I am addressing this issue. Accordingly, the focus of this problem is on separating the joint constraint. For eg. I think we can separate it out, one method is to apply kernel density estimation. [/img]. Please let me know if you still face any problems separating it out. Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? Dear Aruna Harikant, This is a very good idea! So, what should I do with the code? Extract the posteriors for the different bins as list of values, join together the values and run a KDE method in Python? Thanks to you and Prof. Lewis for your kind support! Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? Hi both, As stated by Antony a few posts above, this is not correct: the joint constraint is imposed by the product of the likelihoods, which intuitively is closer to the intersection than the union (what you are doing). As Antony said, you have to generate samples from the joint posterior, containing the two likelihoods. Alternatively, you can use post-processing ( https://cobaya.readthedocs.io/en/latest/post.html) to reweight ("add") one of the samples with the other likelihood. Since you have samples from each individual likelihood, you can reweight each with the other likelihood to check that you get consistent results (if the result differs significantly between the two approaches, you cannot reweight, and need to sample from the joint posterior). Re: How to extract the overlap of two samples from Cobaya and draw the contour plot? Dear Prof. Torrado, Thank you for the further clarification. I will proceed with the joint likelihoods or the alternative approach you suggested. With best regards,
{"url":"https://cosmocoffee.info/viewtopic.php?p=10311&sid=335c2960e91537cd63f4eca744aa2740","timestamp":"2024-11-03T09:53:11Z","content_type":"text/html","content_length":"51836","record_id":"<urn:uuid:5fe55459-9e31-4b34-ba59-a141e32e290b>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00193.warc.gz"}
Exploring Python's Built-in Functions Python, an open-source and high-level programming language, is known for its simplicity and readability, making it a favorite among both novice and experienced developers. One aspect that contributes to Python’s simplicity is its vast library of built-in functions. Let’s have a look at some of these functions and see them in action. The len() function returns the number of items in an object. This could be the number of elements in a list, characters in a string, items in a dictionary, and so on. # Example usage of len() function my_list = [1, 2, 3, 4, 5] print(len(my_list)) # Output: 5 The sum() function returns the sum of all items in an iterable. This function is handy when you want to add up all the numbers in a list or tuple quickly. # Example usage of sum() function my_list = [1, 2, 3, 4, 5] print(sum(my_list)) # Output: 15 The sorted() function returns a new sorted list from the elements of any iterable. # Example usage of sorted() function my_list = [5, 1, 3, 4, 2] print(sorted(my_list)) # Output: [1, 2, 3, 4, 5] The round() function rounds a number to the nearest integer, or to the specified number of decimals if the second parameter is provided. # Example usage of round() function print(round(3.14159)) # Output: 3 print(round(3.14159, 2)) # Output: 3.14 These are just a few examples of Python’s built-in functions. Others include abs(), all(), any(), enumerate(), filter(), map(), and many more. Familiarizing yourself with these functions can greatly increase your productivity and efficiency as a Python programmer. Remember, programming is about problem-solving. The more tools you have in your toolbox, and the better you understand how to use them, the more capable you’ll be at tackling whatever programming challenge comes your way. For further reading on Python’s built-in functions, check out the Python documentation.
{"url":"https://juanleonardosanchez.com/posts/python/exploring-pythons-built-in-functions/","timestamp":"2024-11-02T02:49:17Z","content_type":"text/html","content_length":"26622","record_id":"<urn:uuid:e1ad486f-2ae7-4c3e-9b2a-e474a615c558>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00261.warc.gz"}
Revision history Reducing the Coefficients of a Polynomial Modulo an Ideal I have a polynomial in two variables $t_1$ and $t_2$ (say $2t_1 + at_2$) defined over a ring which is itself a polynomial ring (say $\mathbf{Z}[a]$). I'd like to reduce the coefficients of the polynomial modulo an ideal of the latter ring (say $(2)$ or $(a)$ or $(2,a)$). When I execute M.<a> = PolynomialRing(ZZ) R.<t1,t2> = PolynomialRing(M) m = M.ideal(a) (2*t1 + a*t2).change_ring(QuotientRing(M,m)) I get 2*t1, as I would expect. On the other hand, the code M.<a> = PolynomialRing(ZZ) R.<t1,t2> = PolynomialRing(M) m = M.ideal(2) (2*t1 + a*t2).change_ring(QuotientRing(M,m)) gives me a type error ("polynomial must have unit leading coefficient"). And the input M.<a> = PolynomialRing(ZZ) R.<t1,t2> = PolynomialRing(M) m = M.ideal(2,a) (2*t1 + a*t2).change_ring(QuotientRing(M,m)) gives the output 2*t1 + abar*t2 rather than the 0 I would have expected. What should I do to get the outputs I would expect (namely 2*t1, abar*t2, and 0, respectively)?
{"url":"https://ask.sagemath.org/questions/43982/revisions/","timestamp":"2024-11-07T04:48:52Z","content_type":"application/xhtml+xml","content_length":"15650","record_id":"<urn:uuid:22893297-a793-4954-ba5f-e4e7cd29c6da>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00864.warc.gz"}
Magnitude of Total Momentum Calculator - GEGCalculators Magnitude of Total Momentum Calculator The magnitude of total momentum in a system of particles is the absolute value of the vector sum of their individual momenta. It represents the overall “size” or strength of the system’s momentum without considering its direction. This scalar quantity is calculated by summing the magnitudes of the momenta of all the particles in the system. Magnitude of Total Momentum Calculator Concept Definition Total Momentum The total momentum of a system of particles is the vector sum of the momenta of all the particles in the system. Magnitude of Total Momentum The magnitude of total momentum is the absolute value or size of the total momentum vector, without considering its direction. It is a scalar quantity. Calculation To find the magnitude of total momentum, sum the magnitudes of the momenta of all individual particles in the system. Formula Magnitude of Total Momentum = How do you find the magnitude of total momentum? The magnitude of total momentum for a system of particles is calculated by adding up the magnitudes of the momentum vectors of all the individual particles in the system. It can be estimated as the sum of the individual momenta. What is the magnitude of the total initial momentum? The magnitude of the total initial momentum of a system can be estimated as the sum of the magnitudes of the initial momenta of all the particles in the system. How do you find the magnitude of momentum with vectors? To find the magnitude of momentum with vectors, you use the formula: Magnitude of Momentum = Mass x Velocity. The magnitude is equal to the product of the mass and the magnitude of the velocity vector. How do you find the magnitude of momentum after a collision? The magnitude of momentum after a collision can be estimated as the sum of the magnitudes of the momenta of the objects involved in the collision. This is based on the principle of conservation of momentum if no external forces are acting on the system. What is the formula for total magnitude? There isn’t a specific formula for “total magnitude” as it depends on what you’re calculating the total magnitude of (e.g., total momentum, total force). Generally, you sum the magnitudes of the individual quantities involved. What does magnitude of momentum mean? The magnitude of momentum represents the “size” or “strength” of the momentum vector. It is a scalar quantity that gives you the absolute value of momentum without regard to its direction. Does momentum have magnitude? Yes, momentum has magnitude. The magnitude of momentum is determined by the mass of an object and its velocity. It represents how much motion an object possesses. How do you find magnitude from initial and final velocity? The magnitude of the change in momentum can be estimated using the formula: Change in Momentum = Mass x (Final Velocity – Initial Velocity). You calculate the difference between the final and initial velocities and multiply it by the mass of the object. What is the formula for calculating the magnitude of total momentum for a system of particles? The formula for calculating the magnitude of total momentum for a system of particles is: Total Momentum = Σ (Mass_i x Velocity_i), where Σ represents the sum over all particles in the system. How do you find the magnitude of momentum of two objects? To find the magnitude of momentum for two objects, calculate the momentum of each object separately using the formula Momentum = Mass x Velocity for each object, and then add their magnitudes together. How do you find the magnitude of velocity in momentum? The magnitude of velocity in momentum is the absolute value of the velocity vector. It represents the speed of the object without regard to its How do you find the magnitude of force with momentum and time? The magnitude of force (F) can be estimated using the formula: Force = Change in Momentum / Time. If you know the change in momentum and the time over which it occurred, you can calculate the force. What is the magnitude of the change in momentum of a body equal to? The magnitude of the change in momentum of a body is equal to the product of the force applied to it and the time over which the force is applied. It can be expressed as: Change in Momentum = Force x Time. How do you find the magnitude of the momentum of an incident photon? The magnitude of the momentum of a photon can be estimated using the formula: Momentum = (Photon Energy) / (Speed of Light). The speed of light is approximately 3 x 10^8 meters per second. How do you find the magnitude and direction of momentum? To find the magnitude of momentum, use the formula Momentum = Mass x Velocity, and to find the direction, you need to consider the direction of the velocity vector. How do you find the magnitude of a moving object? The magnitude of a moving object can be found using its speed or the magnitude of its velocity vector, depending on the context. How do you find the magnitude of the resultant force? The magnitude of the resultant force can be found by adding up the magnitudes of all the individual forces acting on an object, taking into account both their magnitudes and directions. What is total magnitude in physics? “Total magnitude” in physics is not a standard term. It usually refers to the overall magnitude or total quantity of a particular physical property, such as total momentum or total force. What is the difference between momentum and magnitude of momentum? Momentum is a vector quantity that includes both magnitude and direction, whereas the magnitude of momentum is a scalar quantity that represents only the size or strength of the momentum vector. Is the magnitude the same as momentum in physics? No, magnitude and momentum are not the same. Momentum is a vector quantity that includes both magnitude and direction, while magnitude is a scalar that represents only the size or strength of a vector. What is magnitude in a collision? In the context of a collision, magnitude typically refers to the absolute value or size of quantities like momentum, velocity, or force, without considering their What does the magnitude of momentum depend on? The magnitude of momentum depends on the mass and speed (magnitude of velocity) of the object. It is directly proportional to both mass and speed. Is magnitude of momentum always positive? No, the magnitude of momentum can be positive or zero. It depends on the direction and speed of the object’s motion. If an object is moving in the positive direction, its momentum has a positive magnitude; if it’s stationary, the magnitude is zero. Is magnitude the same as final velocity? No, magnitude and final velocity are not the same. Magnitude refers to the size or strength of a vector, while final velocity is a vector quantity that includes both magnitude and direction. What is magnitude of total velocity? The magnitude of total velocity for a system of objects is the absolute value of the resultant velocity vector, representing the overall speed of the system. How do you find magnitude with mass and velocity? To find the magnitude of momentum, you use the formula: Magnitude of Momentum = Mass x Velocity, where mass is the mass of the object, and velocity is the magnitude of the velocity vector. How do you calculate the total momentum before and after the collision? The total momentum before the collision is the sum of the momenta of all objects involved, and the total momentum after the collision is similarly calculated. If no external forces act, these total momenta should be equal due to the conservation of momentum. What is the magnitude of the momentum of each photon? The magnitude of the momentum of each photon is given by the formula: Momentum = (Photon Energy) / (Speed of Light). The momentum of a photon depends on its energy and the speed of light. How do you find the magnitude of velocity? To find the magnitude of velocity, you take the absolute value of the velocity vector. For example, if velocity is represented as (5 m/s, -3 m/s), the magnitude is sqrt((5 m/s)^2 + (-3 m/s)^2), which is approximately 5.83 m/s. How to calculate total momentum of two objects before collision? To calculate the total momentum of two objects before a collision, add the magnitudes of their individual momenta. The total momentum is equal to the sum of the momenta of both objects. Is 2 the total momentum before a collision the total momentum after a collision? No, the total momentum before a collision is not necessarily equal to the total momentum after a collision. Total momentum is conserved only if there are no external forces acting on the system. What is the magnitude of the sum of 2 vectors? The magnitude of the sum of two vectors can be calculated using the Pythagorean theorem. If you have two vectors A and B, the magnitude of their sum (C) is given by: |C| = sqrt(|A|^2 + |B|^2), where |A| and |B| are the magnitudes of vectors A and B, respectively. How do you find the magnitude of two vectors A and B? To find the magnitude of vectors A and B, use their respective magnitude formulas: |A| = sqrt(Ax^2 + Ay^2) and |B| = sqrt(Bx^2 + By^2), where Ax, Ay, Bx, and By are the components of the vectors in the x and y directions. How do you find the magnitude of change in velocity? The magnitude of the change in velocity can be found by taking the absolute value of the difference between the final and initial velocities. It represents the speed of the change in velocity without considering its direction. What is the formula for magnitude and direction of velocity? The formula for magnitude and direction of velocity is: Velocity = Speed x Direction, where Speed is the magnitude of velocity, and Direction is a unit vector pointing in the direction of velocity. Is there a unit for magnitude? No, there is no specific unit for magnitude. Magnitude is a scalar quantity that represents the size or strength of a vector and is expressed in the same units as the vector itself. How do you find the magnitude of force given mass and velocity? The magnitude of force can be found using the formula: Force = Mass x Acceleration, where acceleration is calculated as the change in velocity divided by the time it takes for the change to occur. How do you find the magnitude of force without acceleration? To find the magnitude of force without acceleration, you need additional information such as the change in momentum and the time over which the force is applied. You can use the formula: Force = Change in Momentum / Time. What is the magnitude and unit of unit? The term “unit” usually refers to a standard measurement. It doesn’t have a magnitude or unit of its own; instead, it represents one instance or a specific quantity of a chosen measurement unit (e.g., one meter, one kilogram). What is magnitude equal to? Magnitude is equal to the absolute value or size of a vector or scalar quantity without regard to its direction. It represents the “strength” or “size” of the quantity. GEG Calculators is a comprehensive online platform that offers a wide range of calculators to cater to various needs. With over 300 calculators covering finance, health, science, mathematics, and more, GEG Calculators provides users with accurate and convenient tools for everyday calculations. The website’s user-friendly interface ensures easy navigation and accessibility, making it suitable for people from all walks of life. Whether it’s financial planning, health assessments, or educational purposes, GEG Calculators has a calculator to suit every requirement. With its reliable and up-to-date calculations, GEG Calculators has become a go-to resource for individuals, professionals, and students seeking quick and precise results for their calculations. Leave a Comment
{"url":"https://gegcalculators.com/magnitude-of-total-momentum-calculator/","timestamp":"2024-11-10T16:04:30Z","content_type":"text/html","content_length":"181375","record_id":"<urn:uuid:8d5f3a62-f049-44ab-bdf9-17d9e5230142>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00206.warc.gz"}
World Top 4 Strategy | Logical Invest What do these metrics mean? 'Total return is the amount of value an investor earns from a security over a specific period, typically one year, when all distributions are reinvested. Total return is expressed as a percentage of the amount invested. For example, a total return of 20% means the security increased by 20% of its original value due to a price increase, distribution of dividends (if a stock), coupons (if a bond) or capital gains (if a fund). Total return is a strong measure of an investment’s overall performance.' Applying this definition to our asset in some examples: • Compared with the benchmark ACWI (72.1%) in the period of the last 5 years, the total return, or increase in value of 110% of World Top 4 Strategy is greater, thus better. • Compared with ACWI (19.6%) in the period of the last 3 years, the total return of 27.4% is larger, thus better. 'Compound annual growth rate (CAGR) is a business and investing specific term for the geometric progression ratio that provides a constant rate of return over the time period. CAGR is not an accounting term, but it is often used to describe some element of the business, for example revenue, units delivered, registered users, etc. CAGR dampens the effect of volatility of periodic returns that can render arithmetic means irrelevant. It is particularly useful to compare growth rates from various data sets of common domain such as revenue growth of companies in the same industry.' Using this definition on our asset we see for example: • Compared with the benchmark ACWI (11.5%) in the period of the last 5 years, the compounded annual growth rate (CAGR) of 16% of World Top 4 Strategy is larger, thus better. • During the last 3 years, the annual performance (CAGR) is 8.4%, which is greater, thus better than the value of 6.1% from the benchmark. 'Volatility is a statistical measure of the dispersion of returns for a given security or market index. Volatility can either be measured by using the standard deviation or variance between returns from that same security or market index. Commonly, the higher the volatility, the riskier the security. In the securities markets, volatility is often associated with big swings in either direction. For example, when the stock market rises and falls more than one percent over a sustained period of time, it is called a 'volatile' market.' Using this definition on our asset we see for example: • The 30 days standard deviation over 5 years of World Top 4 Strategy is 8.2%, which is lower, thus better compared to the benchmark ACWI (19.9%) in the same period. • Looking at volatility in of 6.8% in the period of the last 3 years, we see it is relatively smaller, thus better in comparison to ACWI (16.6%). 'Risk measures typically quantify the downside risk, whereas the standard deviation (an example of a deviation risk measure) measures both the upside and downside risk. Specifically, downside risk in our definition is the semi-deviation, that is the standard deviation of all negative returns.' Using this definition on our asset we see for example: • Compared with the benchmark ACWI (14.4%) in the period of the last 5 years, the downside volatility of 5.7% of World Top 4 Strategy is lower, thus better. • Compared with ACWI (11.5%) in the period of the last 3 years, the downside risk of 4.6% is lower, thus better. 'The Sharpe ratio was developed by Nobel laureate William F. Sharpe, and is used to help investors understand the return of an investment compared to its risk. The ratio is the average return earned in excess of the risk-free rate per unit of volatility or total risk. Subtracting the risk-free rate from the mean return allows an investor to better isolate the profits associated with risk-taking activities. One intuition of this calculation is that a portfolio engaging in 'zero risk' investments, such as the purchase of U.S. Treasury bills (for which the expected return is the risk-free rate), has a Sharpe ratio of exactly zero. Generally, the greater the value of the Sharpe ratio, the more attractive the risk-adjusted return.' Which means for our asset as example: • Looking at the risk / return profile (Sharpe) of 1.65 in the last 5 years of World Top 4 Strategy, we see it is relatively larger, thus better in comparison to the benchmark ACWI (0.45) • Looking at Sharpe Ratio in of 0.87 in the period of the last 3 years, we see it is relatively greater, thus better in comparison to ACWI (0.22). 'The Sortino ratio, a variation of the Sharpe ratio only factors in the downside, or negative volatility, rather than the total volatility used in calculating the Sharpe ratio. The theory behind the Sortino variation is that upside volatility is a plus for the investment, and it, therefore, should not be included in the risk calculation. Therefore, the Sortino ratio takes upside volatility out of the equation and uses only the downside standard deviation in its calculation instead of the total standard deviation that is used in calculating the Sharpe ratio.' Which means for our asset as example: • Looking at the downside risk / excess return profile of 2.36 in the last 5 years of World Top 4 Strategy, we see it is relatively greater, thus better in comparison to the benchmark ACWI (0.62) • During the last 3 years, the excess return divided by the downside deviation is 1.28, which is higher, thus better than the value of 0.32 from the benchmark. 'The Ulcer Index is a technical indicator that measures downside risk, in terms of both the depth and duration of price declines. The index increases in value as the price moves farther away from a recent high and falls as the price rises to new highs. The indicator is usually calculated over a 14-day period, with the Ulcer Index showing the percentage drawdown a trader can expect from the high over that period. The greater the value of the Ulcer Index, the longer it takes for a stock to get back to the former high.' Which means for our asset as example: • The Ulcer Ratio over 5 years of World Top 4 Strategy is 2.68 , which is smaller, thus better compared to the benchmark ACWI (9.93 ) in the same period. • Compared with ACWI (11 ) in the period of the last 3 years, the Downside risk index of 2.92 is smaller, thus better. 'Maximum drawdown is defined as the peak-to-trough decline of an investment during a specific period. It is usually quoted as a percentage of the peak value. The maximum drawdown can be calculated based on absolute returns, in order to identify strategies that suffer less during market downturns, such as low-volatility strategies. However, the maximum drawdown can also be calculated based on returns relative to a benchmark index, for identifying strategies that show steady outperformance over time.' Applying this definition to our asset in some examples: • Compared with the benchmark ACWI (-33.5 days) in the period of the last 5 years, the maximum DrawDown of -14.6 days of World Top 4 Strategy is greater, thus better. • During the last 3 years, the maximum drop from peak to valley is -8.1 days, which is greater, thus better than the value of -26.4 days from the benchmark. 'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Max Drawdown Duration is the worst (the maximum/longest) amount of time an investment has seen between peaks (equity highs). Many assume Max DD Duration is the length of time between new highs during which the Max DD (magnitude) occurred. But that isn’t always the case. The Max DD duration is the longest time between peaks, period. So it could be the time when the program also had its biggest peak to valley loss (and usually is, because the program needs a long time to recover from the largest loss), but it doesn’t have to be' Which means for our asset as example: • Looking at the maximum days below previous high of 247 days in the last 5 years of World Top 4 Strategy, we see it is relatively lower, thus better in comparison to the benchmark ACWI (516 days) • Looking at maximum days below previous high in of 247 days in the period of the last 3 years, we see it is relatively smaller, thus better in comparison to ACWI (516 days). 'The Average Drawdown Duration is an extension of the Maximum Drawdown. However, this metric does not explain the drawdown in dollars or percentages, rather in days, weeks, or months. The Avg Drawdown Duration is the average amount of time an investment has seen between peaks (equity highs), or in other terms the average of time under water of all drawdowns. So in contrast to the Maximum duration it does not measure only one drawdown event but calculates the average of all.' Which means for our asset as example: • Looking at the average days below previous high of 44 days in the last 5 years of World Top 4 Strategy, we see it is relatively lower, thus better in comparison to the benchmark ACWI (132 days) • Compared with ACWI (191 days) in the period of the last 3 years, the average days below previous high of 61 days is lower, thus better.
{"url":"https://logical-invest.com/app/strategy/WTOP4","timestamp":"2024-11-11T08:18:52Z","content_type":"text/html","content_length":"66323","record_id":"<urn:uuid:7b090453-c160-4cab-955d-f6f041d887f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00511.warc.gz"}
Bonn Physics Colloquium: Where Physics and Math collideBonn Physics Colloquium: Where Physics and Math collideBonn Physics Colloquium: Where Physics and Math collide Grant Sanderson was born in Utah and studied Mathematics and Computer Science at Stanford University. After graduation he transitioned into an increasingly exposed role as a mathematics educator using animations and a unique visual approach to conveying concepts from higher mathematics. His Youtube videos are now famous for making difficult things accessible and for highlighting the fascination in math, awarding him an audience of over 5.5 million subscribers. In the Bonn Physics Colloquium, Grant Sanderson explained the connection between a simple problem of colliding blocks in a frictionless setting with the mathematics of the circle and the digits of pi emerging from the number of collisions. In the second half of the talk he revealed that the same principles are at play in Grover's quantum search algorithm. This talk was attended by a huge audience of students, staff and faculty in the physics department's biggest venue: the Wolfgang Paul Hörsaal. The crowd was so big that many had to evacuate the lecture hall before the talk started and move to neighboring rooms, HS I of Physicalisches Institut, HS IAP and the IAP seminar room, to follow an impromptu live stream. In total, more than 1000 people listened to the Colloquium. After the talk, a Q&A session and a coffee break in the foyer of the lecture hall allowed for direct interaction with the speaker, sparking long discussions of groups of students with Grant Sanderson. To enable encounters in different settings, the Physics Colloquium was surrounded by a "mathematical salon" on Thursday evening, and a panel discussion at the mathematics department on Friday evening, both events were sold out.
{"url":"https://www.pi.uni-bonn.de/en/news/bonn-physics-colloquium-where-physics-and-math-collide","timestamp":"2024-11-08T05:54:14Z","content_type":"application/xhtml+xml","content_length":"33905","record_id":"<urn:uuid:5b969fe8-821e-4493-aa73-34bebe23501f>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00286.warc.gz"}
The Stacks project Lemma 115.24.3. Let $(S, \delta )$ be as in Chow Homology, Situation 42.7.1. Let $X$ be locally of finite type over $S$. Assume $X$ integral and $\dim _\delta (X) = n$. Let $\{ D_ j\} _{j \in J}$ be a locally finite collection of effective Cartier divisors on $X$. Let $n_ j, m_ j \geq 0$ be collections of nonnegative integers. Set $D = \sum n_ j D_ j$ and $D' = \sum m_ j D_ j$. Assume that $\dim _\delta (D_ j \cap D_{j'}) = n - 2$ for every $j \not= j'$. Then $D \cdot [D']_{n - 1} = D' \cdot [D]_{n - 1}$ in $\mathop{\mathrm{CH}}\nolimits _{n - 2}(X)$. Comments (2) Comment #97 by Pieter Belmans on There is a typo in the label. Comment #100 by Johan on Fixed. Thanks! Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 02TE. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 02TE, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/02TE","timestamp":"2024-11-11T23:04:05Z","content_type":"text/html","content_length":"18284","record_id":"<urn:uuid:155c16c9-587c-4590-bd46-f9a7504b9266>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00346.warc.gz"}
ML Aggarwal Class 7 Solutions for ICSE Maths Chapter 10 Lines and Angles Ex 10.1 ML Aggarwal Class 7 Solutions Chapter 10 Lines and Angles Ex 10.1 for ICSE Understanding Mathematics acts as the best resource during your learning and helps you score well in your exams. ML Aggarwal Class 7 Solutions for ICSE Maths Chapter 10 Lines and Angles Ex 10.1 Question 1. (i) Can two right angles be complementary? (ii) Can two right angles be supplementary? (iii) Can two adjacent angles be complementary? (iv) Can two adjacent angles be supplementary? (v) Can two obtuse angles be adjacent? (vi) Can an acute angle be adjacent to an obtuse angle? (vii) Can two right angles form a linear pair? Question 2. Find the complement of each of the following angles: Question 3. Find the supplement of each of the following angles: Question 4. Identify which of the following pairs of angles are complementary and which are supplementary: (i) 55°, 125° (ii) 34°, 56° (iii) 137°, 43° (iv) 112°, 68° (v) 45°, 45° (vi) 72°, 18° Question 5. (i) Find the angle which is equal to its complement. (ii) Find the angle which is equal to its supplement. Question 6. Two complementary angles are (x + 4)° and (2x – 7)°, find the value of x. Question 7. Two supplementary angles are in the ratio of 2 : 7, find the angles. Question 8. Among two supplementary angles, the measure of the longer angle is 44° more than the measure of the smaller angle. Find their measures. Question 9. If an angle is half of its complement, find the measure of angles. Question 10. Two adjacent angles are in the ratio 5 : 3 and they together form an angle of 128°, find these angles. Question 11. Find the value of x in each of the following diagrams: Question 12. Find the values of x, y and z in each of the following diagrams: Question 13. In the given figure, lines AB and CD intersect at F. If ∠EFA = ∠AFD and ∠CFB = 50°, find ∠EFC.
{"url":"https://ncertmcq.com/ml-aggarwal-class-7-solutions-for-icse-maths-chapter-10-ex-10-1/","timestamp":"2024-11-12T02:36:58Z","content_type":"text/html","content_length":"65013","record_id":"<urn:uuid:ff6506dc-e686-4d95-87d6-ba899afd0879>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00529.warc.gz"}
Roulette Odds are Important for Winning the Game Players who want to be good at roulette need to do much more than simply place some bets and cross their fingers. There is very little strategy available for this game since the outcome is always unpredictable. However, by learning the roulette odds that dictate the possibilities of certain outcomes, players can make wiser and more informed bets that will help them keep their bankrolls for longer periods of time -- nd perhaps even win more money. Different Games mean Different Odds Before they even get started, players are often daunted by the task of learning the odds of roulette. First and foremost, there are many different variations of the game that are available in different casinos, and players simply are not sure how to keep all of the information straight. However, there is a way to calculate the odds of any roulette wager in a matter of seconds regardless of whether the player enjoys American, European, French, Mini or even Racetrack roulette. Simple Calculations for Main Variations In the primary versions of roulette (American, European, French), players can use simple calculations to determine the odds of any wager. All the player has to do is divide 36 pockets--or the number of pockets on the wheel--by the amount he or she wants to bet an then subtract one. This provides the payout percentage and also dictates the odds of winning, in turn. If a player wants to make a four-number corner bet, then, he or she would divide 36 by four for a total of nine, then subtract one to get eight. This results in a payout of eight to one on a corner bet. Mini Roulette Some players get a bit sidetracked when it comes to calculating the odds at a mini roulette table simply because gameplay is so much different. The wheel is different and only uses the numbers one through 12, but the formula used to calculate the odds is still the same. Players will divide 12 (the number of pockets on the wheel) by the number of squares on which they are betting and subtract one. A player betting on three numbers would divide 12 by three for a total of four, then subtract one for a payout rate of three to one. Understanding how to calculate these odds is an important step towards understanding optimal mini roulette strategy. Other Versions No matter how obscure the version of roulette a player enjoys, this formula can always be used to calculate odds. This holds true for game variations such as Racetrack Roulette. All the player has to do is determine the number of non-zero spaces on the wheel, divide this by the number of spaces being wagered on, and then subtract one. By doing this, not only will the player learn how much can be won in the event that the bet actually wins, but he or she will also understand the odds of winning such a bet. The higher the ratio, the less likely the bet is to win. Practicing the Calculations Players who are interested in calculating odds without the risk of losing real money can find plenty of free roulette tables on the internet in all different variants. These present a great opportunity for players to not only practice their calculations, but also for them to try out different betting strategies at the same time.
{"url":"https://video-roulette.ca/roulette-odds.htm","timestamp":"2024-11-03T09:52:45Z","content_type":"application/xhtml+xml","content_length":"11216","record_id":"<urn:uuid:576a86fe-20b8-4069-8e23-6f3bd6977dba>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00623.warc.gz"}
Bootstrapping is a nonparametric procedure that allows testing the statistical significance of various PLS-SEM results such path coefficients, Cronbach’s alpha, HTMT, and R² values. Brief Description PLS-SEM does not assume that the data is normally distributed, which implies that parametric significance tests (e.g., as used in regression analyses) cannot be applied to test whether coefficients such as outer weights, outer loadings and path coefficients are significant. Instead, PLS-SEM relies on a nonparametric bootstrap procedure (Efron and Tibshirani, 1986; Davison and Hinkley, 1997) to test the significance of estimated path coefficients in PLS-SEM. In bootstrapping, subsamples are created with randomly drawn observations from the original set of data (with replacement). The subsample is then used to estimate the PLS path model. This process is repeated until a large number of random subsamples has been created, typically about 10,000. The parameter estimates (e.g., outer weights, outer loadings and path coefficients) obtained from the subsamples are used to derive the 95% confidence intervals for significance testing (e.g., original PLS-SEM results are significant when they are outside the confidence interval). In addition, bootstrapping provides the standard errors for the estimates, which allow t-values to be calculated to assess the significance of each estimate. Becker et al. (2023) and Hair et al. (2022) explain bootstrapping in in PLS-SEM in more detail. Bootstrapping Settings in SmartPLS Bootstrapping creates subsamples with observations drawn at random from the original dataset (with replacement). The number of observations per bootstrap subsample is identical to the number of observations in the original sample (SmartPLS also considers the smaller number of observations in the original sample if you use case-by-case deletion to handle missing values). To ensure stability of results, the number of subsamples should be large. For an initial assessment, one may wish to choose a smaller number of bootstrap subsamples (e.g., 1000) to be randomly drawn and estimated with the PLS-SEM algorithm, since that requires less time. For the final results preparation, however, one should use a large number of bootstrap subsamples (e.g., 10,000). Note: Larger numbers of bootstrap subsamples increase the computation time. Do Parallel Processing If chosen the bootstrapping algorithm will be performed on multiple processors (if your computer offers more than one core). As each subsample can be calculated individually, subsamples can be computed in parallel mode. Using parallel computing will reduce computation time. Confidence Interval Method Sets the bootstrapping method used for estimating nonparametric confidence intervals. The following bootstrapping procedures are available (for more details, see Hair et al., 2022): 1. Percentile Bootstrap (default) 2. Studentized Bootstrap 3. Bias-Corrected and Accelerated (BCa) Bootstrap By default, we recommend using percentile bootstrapping. If you have concerns about a non-normal bootstrap distribution, you can alternatively use bias-corrected and accelerated (BCa) bootstrapping. Test Type Specifies if a one-sided or two-sided significance test is conducted. Significance Level Specifies the significance level of the test statistic. Random number generator The algorithm randomly generates subsamples from the original data set, which requires a seed value for the random number generator. You have the option to choose between a random seed and a fixed The random seed produces different random numbers and therefore results every time the algorithm is executed (this was the default and only option in SmartPLS 3). The fixed seed uses a pre-specified seed value that is the same for every execution of the algorithm. Thus, it produces the same results if the same number of subsamples are drawn. It thereby addresses concerns about the replicability of research findings. • Becker, J.-M., Cheah, J. H., Gholamzade, R., Ringle, C. M., Sarstedt, M. (2023): PLS-SEM’s Most Wanted Guidance, International Journal of Contemporary Hospitality Management, 35(1), pp. • Hair, J. F., Hult, G. T. M., Ringle, C. M., and Sarstedt, M. (2022). A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 3rd Ed., Sage: Thousand Oaks. • Davison, A. C., and Hinkley, D. V. (1997). Bootstrap Methods and Their Application, Cambridge University Press: Cambridge. • Efron, B., and Tibshirani, R. J. (1993). An Introduction to the Bootstrap, Chapman Hall: New York. Cite correctly Please always cite the use of SmartPLS! Ringle, Christian M., Wende, Sven, & Becker, Jan-Michael. (2024). SmartPLS 4. Bönningstedt: SmartPLS. Retrieved from https://www.smartpls.com
{"url":"https://smartpls.com/documentation/algorithms-and-techniques/bootstrapping/","timestamp":"2024-11-12T15:40:14Z","content_type":"text/html","content_length":"670944","record_id":"<urn:uuid:c95696f8-f38a-4ad3-95c5-43af9f775bfa>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00819.warc.gz"}
A precessing source frame, constructed using the Newtonian orbital angular momentum $\bf L_{\rm N}$, can be invoked to model inspiral gravitational waves from generic spinning compact binaries. An attractive feature of such a precessing convention is its ability to remove all spin precession induced modulations from the orbital phase evolution. However, this convention usually employs a post-Newtonian (PN) accurate precessional equation, appropriate for the PN accurate orbital angular momentum $\bf L$, to evolve the $\bf L_{\rm N}$-based precessing source frame. This influenced us to develop inspiral waveforms for spinning compact binaries in a precessing convention that explicitly employ $\bf L$ to describe the binary orbits. Our approach introduces certain additional 3PN order terms in the evolution equations for the orbital phase and frequency with respect to the usual $\bf L_{\rm N}$-based implementation of the precessing convention. We examine the practical implications of these additional terms by computing the match between inspiral waveforms that employ $\bf L$ and $\bf L_{\rm N}$-based precessing conventions. The match estimates are found to be smaller than the optimal value, namely $0.97 $, for a non-negligible fraction of unequal mass spinning compact binaries.
{"url":"https://www.icra.it/mg14/FMPro%3F-db=3_talk_mg14_.fp5&-format=riassunto2.htm&-lay=talk_reg&-sortfield=order2&ps::web_code=9921732969&main_1::Attivo=yes&talk_accept=yes&-max=50&-recid=43781&-find=.html","timestamp":"2024-11-11T00:39:59Z","content_type":"text/html","content_length":"6160","record_id":"<urn:uuid:92edca81-fc4e-440c-8071-3dbca8cf9e02>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00598.warc.gz"}
First order ordinary differential equations First order ordinary differential equations » My Coding > Mathematics > First order ordinary differential equations First order ordinary differential equations A first-order ordinary differential equation (ODE I) is a mathematical equation that describes the relationship between a function and its derivative. There are several types of first-order ODEs, including separable, homogeneous, linear, exact, Bernoulli, Riccati, and nonlinear ODEs. Each type has its own characteristic form and requires different methods for solving. Types of ODE(I) 1. Separable ODEs: A separable ODE is one that can be written in the form \(y\prime(x) = f(x)g(y)\), where \(f(x)\) and \(g(y)\) are functions of \(x\) and \(y\), respectively. To solve a separable ODE, we can separate the variables and integrate both sides with respect to \(x\) and \(y\), respectively. 2. Homogeneous ODEs: A homogeneous ODE is one that can be written in the form \(y\prime(x) = f(\frac{y}{x})\), where \(f\) is a function of \(\frac{y}{x}\). To solve a homogeneous ODE, we can use the substitution \(u = \frac{y}{x}\) to transform the equation into a separable ODE. 3. Linear ODEs: A linear ODE is one that can be written in the form \(y\prime(x) + p(x)y(x) = q(x)\), where \(p(x)\) and \(q(x)\) are functions of \(x\). To solve a linear ODE, we can use an integrating factor, which is a function that makes the left-hand side of the equation equal to the derivative of a product. Then, we can integrate both sides to solve for \(y(x)\). 4. Exact ODEs: An exact ODE is one that can be written in the form \(M(x,y)dx + N(x,y)dy = 0\), where \(\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}\). To solve an exact ODE, we can find a function \(\phi(x,y)\) such that \(\frac{\partial \phi}{\partial x} = M(x,y)\) and \(\frac{\partial \phi}{\partial y} = N(x,y)\). Then, the general solution is given by \(\phi(x,y) = C\), where \(C\) is a constant. 5. Bernoulli ODEs: A Bernoulli ODE is one that can be written in the form \(y\prime(x) + p(x)y(x) = q(x)y(x)^n\), where \(n\) is a constant. To solve a Bernoulli ODE, we can use the substitution \(u = y^{(1-n)}\) to transform the equation into a linear ODE. 6. Riccati ODEs: A Riccati ODE is one that can be written in the form \(y\prime(x) = p(x)y(x)^2 + q(x)y(x) + r(x)\), where \(p(x)\), \(q(x)\), and \(r(x)\) are functions of \(x\). To solve a Riccati ODE, we can use a substitution \(u = \frac{y\prime(x)}{y(x)}\) to transform the equation into a linear ODE. 7. Nonlinear ODEs: A nonlinear ODE is one that cannot be written in any of the above forms. These can be more difficult to solve and often require numerical methods to approximate the solution. Some common numerical methods include Euler's method, the Runge-Kutta method, and the shooting method. 8. I also would like to mention the special case of first-order ordinary differential equations. These equations can be solved by predicting the form of the solution. Some of these first-order ordinary differential equations are described in the video: Published: 2023-05-13 00:04:49 Updated: 2023-12-25 00:58:08 Last 10 artitles 9 popular artitles
{"url":"https://mycoding.uk/a/first_order_ordinary_differential_equations.html","timestamp":"2024-11-08T02:24:16Z","content_type":"text/html","content_length":"9427","record_id":"<urn:uuid:9cc27024-fb97-42f7-b6bf-b3cbc2f0072c>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00216.warc.gz"}
K-Means Interactive Demo in your browser In this blog post, I introduce a new interactive tool for showing a demonstration of the K-Means algorithm for students (for teaching purposes). The K-Means clustering demo tool can be accessed here: The K-Means demo, first let you enter a list of 2 dimensional data points in the range of [0,10] or to generate 100 random data points: Then the user can choose the value of K, adjusts other settings, and run the K-Means algorithm. The result is then displayed for each iteration, step by step. Each cluster is represented by a different color. The SSE (Sum of Squared Error) is displayed, and the centroids of clusters are illustrated by the + symbol. For example, this is the result on the provided example dataset: Because K-Means is a randomized algorithm, if we run it again the result may be different: Now, let me show you the feature of generating random points. If I click the button for generating a random dataset and run K-Means, the result may look like this: And again, because K-Means is randomized, I may execute it again on the same random dataset and get a different result: I think that this simple tool can be useful for illustrating how the K-Means algorithm works to students. You may try it. It is simple to use and allows to visualize the result and clustering process. Hope that it will be useful! Philippe Fournier-Viger is a distinguished professor working in China and founder of the SPMF open source data mining software. This entry was posted in Data Mining, Data science and tagged browser, clustering, interactive demo, javascript k-means, k-means, k-means tool. Bookmark the permalink.
{"url":"https://data-mining.philippe-fournier-viger.com/k-means-interactive-demo-in-your-browser/","timestamp":"2024-11-11T04:54:01Z","content_type":"text/html","content_length":"71807","record_id":"<urn:uuid:f524a0a5-c524-4798-9be6-035b6c46dfd2>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00532.warc.gz"}
The misuse of p-values in medicine 08 Jun 2020 (Last Modified 06 Jun 2020) The misuse of p-values in medicine A p-value quantifies the chance that the population parameters is at least as big as the sample statistic. A p-value is often used as a proxy for how likely the results of a biomedical experiment could have risen by chance. Comparing these two sentences reveals the divide between the intended and actual uses of p-values. P-values are misused, by those failing to appreciate the difference between quantifying the significance of an association (essentially the fraction of area under a probability density function) and quantifying the magnitude of the association. The confidence interval quantifies the precision of the estimates of the magnitude of association. A blind application of p-values without consideration of experimental design may lead to false positives (because of failing to correct for multiple simultaneous comparisons) or failure to consider that p-values fall with increasing n, but this reflects a mathematical relationship rather than a more precise estimate of something (recall the above distinction between p-values and confidence To Read
{"url":"https://mac389.github.io/2020/06/08/p_values.html","timestamp":"2024-11-03T06:13:45Z","content_type":"text/html","content_length":"11603","record_id":"<urn:uuid:147d9e71-aef5-47ed-b9de-88e7ac53272e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00182.warc.gz"}
HAT-trie, a cache-conscious trie Tries (also known as prefix trees) are an interesting beast. A Trie is a tree-like data structure storing strings where all the descendants of a node share a common prefix. The structure allows fast prefix-searches, like searching for all the words starting with ap, and it can take advantage of the common prefixes to store the strings in a compact way. Like most trees, the problem with the main implementations of tries is that they are not cache-friendly. Going through each prefix node in a trie will probably cause a cache-miss at each step which doesn’t help us to get fast lookup times. In this article we will present the HAT-trie^1 which is a cache-conscious trie, a mix between a classic trie and a cache-conscious hash table. You can find a C++ implementation along with some benchmarks comparing the HAT-trie to other tries and associative data structures on GitHub. In the next sections, we will first explain the concept of trie more thoroughly. We will then explain the burst-trie and the array hash table, some intermediate data structures on which the HAT-trie is based. We then finish with the HAT-trie itself. A trie is a tree where each node has between 0 and | Σ | children, where | Σ | is the size of the alphabet Σ. For a simple ASCII encoding, each node will have up to 128 children. With UTF-8 we can break down a character into its 8-bits code units and store each code unit in its own node which would have up to 256 children. The Greek α character which takes two octets in UTF-8, 0xCEB1, would be stored in two nodes, one for the 0xCE octet and one child node for the 0xB1 octet. All the strings which are descendants of a node share the node and its ancestor as common prefix. A leaf node, which has no child, marks the end of a string. Let’s illustrate all this with some figures. We will use the following words in our examples: • romane • romanes • romanus • romulus • rubens • ruber • rubes • rubicon • rubicundus • rubric Which will give us the following trie. Trie with romane, romanes, romanus, romulus, rubens, ruber, rubes, rubicon, rubicundus and rubric. When we want to search for all the words starting with the roma prefix, we just go down the tree until we match all the letters of the prefix. We then just have to take all the descendants leafs of the node we reached, in this example we would have romane, romanes and romanus. The implementation itself would just look like a k-ary tree. To store the children, we can use an array of size | Σ | (let’s say 128, we only need to support the ASCII encoding in our example). Fast but not memory efficient (a sparse array/sparsetable can reduce the overhead with some slowdown). class node { // array of 128 elements, null if there is no child for the ASCII letter. node* children[128]; One common way to reduce the memory usage while still using an array is to use the alphabet reduction technique. Instead of using an alphabet of 256 characters (8 bits), we use an alphabet of 16 characters (4 bits). We then just have to cut the octet in two nibbles (4 bits) and store two parent-child nodes (same as described above with the UTF-8 code units). More compact but longer paths in the tree (and thus more potential cache-misses). Another way is to use a simple associative container mapping a character code unit to a child node. A binary search tree or a sorted array if we want to keep the order, or eventually a hash table otherwise. Slower but more compact even if the alphabet is large. class node { // C++ // Binary search tree std::map<char, node*> children; // Hash table // std::unordered_map<char, node*> children; // Sorted array // boost::flat_map<char, node*> children; // Java // Binary search tree // TreeMap<char, node> children; // Hash table // HashMap<char, node> children; We can also store the children in a linked list. The parent has a pointer to the first child, and each child has a pointer to the next child (its sibling). When then have to do a linear search in the list to find a child. Compact but slow. class node { char symbol; node* first_child; node* next_sibiling; Note that we use empty nodes to represent the end of a string in the figures as visual marker. An actual trie implementation could use a simple flag inside the node containing the last letter of a string to mark its end. Compressed trie Reducing the size of a node is important, but we can also try to reduce the number of nodes to have a memory efficient trie. You may have noticed that in the previously exposed trie, some nodes just form a linked list that we could compress. For example the end of the word rubicundus is formed by the chain u -> n -> d -> u -> s. We could compress this chain into one undus node instead of five nodes. Compressed trie with romane, romanes, romanus, romulus, rubens, ruber, rubes, rubicon, rubicundus and rubric. The idea of compressing these chains has been used by a lot of trie based data structures. They will not be detailed here as it would be too long, but if you are interested you may look into radix tries, crit-bit tries and qp tries. Now that we know the basic concept of a trie, let’s move to the burst-trie. A burst-trie^2 is a data structure similar to a trie but where the leafs of the trie are replaced with a container which is able to store a small number of strings efficiently. The internal nodes are normal trie nodes (in the following figures we will use an array to represent the pointers to the children). The container itself may vary depending on the implementation. The original paper studies the use of three containers: a linked list (with an eventual move-to-front policy on accesses), a binary search tree and a splay tree (which is a self-balancing binary search tree where frequently accessed nodes are moved close to the root). Burt-trie using a binary search tree as container with romane, romanes, romanus, romulus, rubens, ruber, rubes, rubicon, rubicundus and rubric. The burst-trie starts with a single empty container which grows when new elements are inserted in the trie until the container is deemed inefficient by a burst heuristic. When this happen, the container is burst into multiple containers During the burst of a container node, a new trie node is created to take the spot of the container node in the tree. For each string in the original container, the first letter is removed and the rest of the string is added to a new container which is assigned as child of the new trie node at the position corresponding to the removed first letter. The process recurses until each new container satisfies the burst heuristic. Burst process after adding the word romule when the burst heuristic limits the size of a container to four elements. The burst heuristic which decides when a container node should be burst can vary depending on the implementation. The original paper proposes three options. • Limit. The most straightforward heuristic bursts the container when the number of elements in the container is higher than some predefined limit L. • Ratio. The heuristic assigns two counters to each container node. A counter N which is the number of times the container has been searched and a counter S which is the number of searches that ended successfully at the root node of the container, i.e. a direct hit. When the ration S/N is lower than some threshold, the container is burst. The heuristic is useful when a move-to-root approach on lookups is used, as in the the splay tree, combined with skewed lookups (some strings have more lookups than others). • Trend. When a container is created a capital C is allocated to the container node. On each successful access, the capital is modified. If the access is a direct hit, the capital is incremented with a bonus B, otherwise the capital is decremented by a penalty M. A burst occurs when the capital reaches 0. As with the ratio heuristic, the heuristic is useful with a move-to-root approach and skewed lookups. Array hash table An array hash table^3 is a cache-conscious hash table specialized to store strings efficiently. It’s the container that will be used in the burst-trie to form a HAT-trie. A hash table is a data structure which offers an average access time complexity of O(1). To do so, a hash table uses a hash function to maps its elements into an array of buckets. The hash function, with the help of a modulo, associates an element to an index in the array of buckets. uint hash = hash_funct("romulus"); // string to uint function uint bucket_index = hash % buckets_array_size; // [0, buckets_array_size) The problem is that two keys can end-up in the same bucket (e.g. if hash_funct("romulus") % 4 == hash_funct("romanus") % 4). To resolve the problem, all hash tables implement some sort of collisions One common way is to use chaining. Each bucket in the buckets array has a linked list of all the elements that map into the bucket. On insertion, if a collision occurs, the new element is simply appended at the end of the list. For more information regarding hash tables, check the Wikipedia article as this is just a quick reminder. Hash table using chaining with romane, romanes, romanus, romulus, rubens, ruber, rubes, rubicon, rubicundus and rubric. The main problem of the basic chaining implementation is that it’s not cache-friendly. In C++, if we use a standard hash table of strings, std::unordered_map<std::string, int>, every node access in the list requires two pointers indirections (only one if the small string optimization can be used). One to get the next node of the list and one to compare the key. In addition to the potential cache-misses, it also requires to store at least two pointers per node (one to the next node, and one to the heap allocated area of the string). It can be a significant overhead if the strings are small. The goal of the array hash table is to alleviate these inconveniences by storing all the strings of a bucket in an array instead of a linked list. The strings are stored in the array with their length as prefix. In most cases, we will only cause one pointer indirection (more if the array is large) to resolve the collisions. We also don’t need to store superfluous pointers reducing the memory usage of the hash table. The drawback is that when a string needs to be appended at the end of the bucket, a reallocation of the whole array may be needed. Array hash table with romane, romanes, romanus, romulus, rubens, ruber, rubes, rubicon, rubicundus and rubric. The array hash table provides an efficient and compact way to store some strings in a hash table. You can find the C++ implementation used by the HAT-trie on GitHub. Now that we have all the building blocks for our HAT-trie, let’s assemble everything. The HAT-trie is essentially a burst-trie that uses an array hash table as container in its leaf nodes. HAT-trie with romane, romanes, romanus, romulus, rubens, ruber, rubes, rubicon, rubicundus and rubric. As with the burst-trie, a HAT-trie starts with an empty container node which is here an empty array hash table node. When the container node is too big, the burst procedure starts (the limit burst heuristic described earlier is used in the HAT-trie). Two approaches are proposed by the paper for the burst. • Pure. In a way similar to the burst-trie, a new trie node is created and takes the spot of the original container node. The first letter of each string in the original container is removed and the rest of the string is added to a new array hash table which is assigned as child of the new trie node at the position corresponding to the removed first letter. The process recurses until the new array hash tables sizes are below the limit. • Hybrid. Unlike a pure container node, a hybrid container node has more than one parent. When we create new hybrid nodes from a pure node, we try to find the split-letter that would split the pure node the most evenly in two. All the strings having a first letter smaller than the split-letter will go in a left hybrid node and the rest will go in the right hybrid node. The parent will then have its pointers for the children corresponding to a letter smaller than the split-letter point to the new left hybrid node, the rest to the right hybrid node. Note that unlike the pure node, we keep the first letter of the original strings so we can distinguish from which parent they come from. If we are bursting a hybrid node, we don’t need to create a new trie node. We just have to split the hybrid node in two nodes (potentially pure nodes if the split would result for a node to only have one parent). and reassign the children pointers in the parent. Using hybrid nodes can help to reduce the size of the HAT-trie. A HAT-trie may contain a mix of pure and hybrid nodes. HAT-trie with pure and hybrid nodes. The main drawback of the HAT-trie is that the elements are only in a near-sorted order as the elements in the array hash table are not sorted. The consequence is that on prefix lookups we may need to do some extra work to filter the elements inside an array hash table node. If we are searching for all the words starting with the ro prefix, things are easy as going down the tree leads us to a trie node. We just have to take all the words in the descendants of the trie Things get more complex if we are looking for the words having roma as prefix. Going down the tree gets us to a container node while we still have the letters ma to check for our prefix. There is no guarantee that the suffixes in the container node will have the ma letters as prefix (e.g. mulus suffix), we need to do a linear filtering. But as the maximum size of the array hash table is fixed, the complexity of the prefix search remains in O(k + M), where k is the size of the prefix and M is the number of words matching the prefix, even if we have millions of strings in our HAT-trie. We just have a higher constant factor which depends on the maximum size of the array hash table. Another consequence of this near-sorted order is that if we want to iterate through all the elements in a sorted order, the iterator has to sort all the elements of the container node each time it hits a new container node. As the maximum size of a container node is still fixed, it shouldn’t be too bad, but other structures may be a better fit if the elements need to be sorted. In the end the HAT-trie offers a good balance between speed and memory usage, as you can see in the benchmark, by sacrificing the order of the elements for a near-sorted order instead. 1. Askitis, Nikolas, and Ranjan Sinha. “HAT-trie: a cache-conscious trie-based data structure for strings.” Proceedings of the thirtieth Australasian conference on Computer science-Volume 62. Australian Computer Society, Inc., 2007. ↩ 2. Heinz, Steffen, Justin Zobel, and Hugh E. Williams. “Burst tries: a fast, efficient data structure for string keys.” ACM Transactions on Information Systems (TOIS) 20.2 (2002): 192-223. ↩ 3. Askitis, Nikolas, and Justin Zobel. “Cache-conscious collision resolution in string hash tables.” International Symposium on String Processing and Information Retrieval. Springer Berlin Heidelberg, 2005. ↩
{"url":"https://tessil.github.io/2017/06/22/hat-trie.html","timestamp":"2024-11-12T06:08:30Z","content_type":"text/html","content_length":"28222","record_id":"<urn:uuid:a45729f2-7f71-4727-9291-bac30da85338>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00344.warc.gz"}
Bear's Den Recently I’ve been happily working with and on Neural Networks – there is enough new knowledge and there are enough new techniques, especially for working with networks deeper than just a couple of layers, to make it very exciting just now! Thinking back, the last time I felt this much excitement about neural networks was when I was still completely new to them and had just written my first truly functional neural network program and watched it sort out its first fairly complicated problem. And thinking about that experience led me to consider the long period of frustration that went before it. The period where I was looking at the Greek-alphabet soup of notation that most workers use to describe backpropagation and trying to work out what the hell they were referring to with each symbol. And because the Greek alphabet isn’t one I use all the time, sometimes even getting confused about which symbols were supposed to be the same and different. Which symbol refers to an individual value, and which to a vector of values, and which to a summation over a vector, and which to a matrix? Hint; some of those assholes use different typefaces and sizes of the SAME variable name for ALL of those things! IN THE SAME EQUATION! The result is that even though the actual math is not too difficult, I had to get the math without any help from the equations. The math served me as the key, the equations as the ciphertext, and using the math to decrypt the equations, all I figured out is what the hell they were smoking when they made up their notation. And that information was completely useless, because they were smoking something else when they make up the notation to describe the next system I needed to figure out. You don’t have to be stupid to not get math notation, folks. I happen to have an IQ closer to 200 than it is to 100, a couple of patents, etc. I do hyper-dimensional coordinate transformations and geometry for fun, and apply differential calculus to solve genuine problems about twice a month. But the dense notation that mathematicians use often escapes my brain. Especially when, as with neural networks, they’re talking about long sequences of operations involving differential calculus,on sets of long sequences of matrix values. Even though I do the math, in deadly earnest to solve real problems, my thinking about it is in unambiguous terms of the sort I can express in computer code. With symbols that actually mean something to me instead of being single letters. Without using symbols that aren’t on my keyboard, without using the same symbol to mean different things in different sizes and typefaces, without having to guess whether a juxtaposition means multiplication or subscripting, without having to guess whether a subscript means an array index or per-value or per-instance or per-set or per-iteration or is-a-member-of, without having to guess whether they’re using multiplication to mean matrix or vector or scalar multiplication, and without reusing the same operator symbols to mean different things in every publication. I actually live in the land over which mathematicians have jurisdiction. And I wish they had a little more damn consideration and respect for their citizens out here and would write the laws of the land using a language that didn’t leave so many ways to misinterpret it. Lucky for me, I can eventually understand the relationships without much help from the gods-damned obscure equations. Usually. And I guess that puts me ahead of a lot of people who have trouble with the notation. Or maybe if I couldn’t get the math any other way, I’d have done a better job of learning the mathematicians-only language they use to make their descriptions of the math absolutely useless to everyone else. Then I’d be so over this problem, right, and think it’s something that only bothers stupid people? The way a lot of the guys who write those equations evidently Hey guys? Buy a clue from programming languages. USE A DESCRIPTIVE WORD OR TWO to name your variables! Then FORMALLY DEFINE what types your operators are working on, and not just in an offhanded sentence in the middle of text six pages previously that refers to other definitions made also in the middle of text nine pages subsequently, nor just by inference from the context in which the idea that led to the variable came up! Vectors, Matrices, or Scalars? It’s a simple question with a simple answer, and you’re allowed to say what the answer is! Then use a DIFFERENT DAMN NOTATION when you mean a different operation, and a consistent bracketing convention when you mean a subscript and DIFFERENT KINDS of bracketing notations when you mean different things by your subscripts! Grrf. Sorry, I’ll try to be calmer. As I said, I was remembering that period of frustration when I was trying to understand the process from the equations instead of being able to decipher the equations because I had finally figured out the process. The operations on Neural networks are sequential operations on multiple vectors of values using partial differentials, so it is a “perfect storm” as far as notation is concerned, where I was looking at something encrypted in three different ciphers, and eventually I had to work it ALL out for myself before I could even begin to see how the Greek-alphabet soup in front of me even related to the clear relationships that were, you know, “obvious” in retrospect. Most equations involving deriving real information from data using statistics and big-data operations suffer from the same problems. I *STILL* don’t have any intuition when typefaces and sizes are different, even now that I know that has to mean something, as to exactly what the different things they mean are likely to turn out to be. I have to fully understand the mathematical relationships and processes that equations DESCRIBE in order to understand by a sort of reverse-analysis, how the heck the equations are even relevant to those relationships and processes. Which is sort of ass-backwards to the way it’s supposed to work. I wind up using the relationships to explain the equations by inferring the notation. The way it’s supposed to work is that people should be able to use the equations to explain and understand the relationships because the notation is obvious. This post is a digression. I intended to explain neural networks and gradient descent in a straightforward unambiguous way, and instead I have rambled on for a thousand words or so about how mathematical notation, for those not steeped in it, serves to obscure rather than reveal and so why such a straightforward explanation of neural networks is necessary. But you know what? I think maybe this rant about notation is something that needs to be said. So I’ll leave it up here and save the unambiguous explanation of neural networks and gradient descent for next time.
{"url":"http://dillingers.com/blog/category/math/page/2/","timestamp":"2024-11-11T07:59:17Z","content_type":"text/html","content_length":"63335","record_id":"<urn:uuid:0fc1de2e-6bb4-4a15-a49f-6ea72e90c331>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00144.warc.gz"}
RISC Activity Database author = {Jose Capco and Georg Grasegger and Matteo Gallet and Christoph Koutschan and Niels Lubbes and Josef Schicho}, title = {{The number of realizations of a Laman graph}}, language = {english}, abstract = {Laman graphs model planar frameworks that are rigid for a general choice of distances between the vertices. There are finitely many ways, up to isometries, to realize a Laman graph in the plane. Such realizations can be seen as solutions of systems of quadratic equations prescribing the distances between pairs of points. Using ideas from algebraic and tropical geometry, we provide a recursion formula for the number of complex solutions of such systems. }, year = {2017}, institution = {Research Institute for Symbolic Computation (RISC/JKU)}, length = {42}, url = {http://www.koutschan.de/data/laman/}
{"url":"https://www3.risc.jku.at/publications/show-bib.php?activity_id=5418","timestamp":"2024-11-10T05:36:39Z","content_type":"text/html","content_length":"3460","record_id":"<urn:uuid:345b2af0-be83-45c1-9ab3-91134fd84c86>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00711.warc.gz"}
The Stacks project Lemma 20.45.2. Let $f : (X, \mathcal{O}_ X) \to (Y, \mathcal{O}_ Y)$ be a morphism of ringed spaces. Let $\mathcal{B}$ be a basis for the topology on $Y$. 1. Assume $K$ is in $D(\mathcal{O}_ X)$ such that for $V \in \mathcal{B}$ we have $H^ i(f^{-1}(V), K) = 0$ for $i < 0$. Then $Rf_*K$ has vanishing cohomology sheaves in negative degrees, $H^ i(f^ {-1}(V), K) = 0$ for $i < 0$ for all opens $V \subset Y$, and the rule $V \mapsto H^0(f^{-1}V, K)$ is a sheaf on $Y$. 2. Assume $K, L$ are in $D(\mathcal{O}_ X)$ such that for $V \in \mathcal{B}$ we have $\mathop{\mathrm{Ext}}\nolimits ^ i(K|_{f^{-1}V}, L|_{f^{-1}V}) = 0$ for $i < 0$. Then $\mathop{\mathrm{Ext}}\ nolimits ^ i(K|_{f^{-1}V}, L|_{f^{-1}V}) = 0$ for $i < 0$ for all opens $V \subset Y$ and the rule $V \mapsto \mathop{\mathrm{Hom}}\nolimits (K|_{f^{-1}V}, L|_{f^{-1}V})$ is a sheaf on $Y$. Comments (0) There are also: • 4 comment(s) on Section 20.45: Glueing complexes Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0D66. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 0D66, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/0D66","timestamp":"2024-11-12T00:39:26Z","content_type":"text/html","content_length":"15494","record_id":"<urn:uuid:8dc59773-b952-4580-ae07-dc7615d7245f>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00499.warc.gz"}
Proof of Proposition 2.17 Proposition 2.17 (Aumann 1976) be the meet of the agents' partitions for each . A proposition ⊆ Ω is common knowledge for the agents of at ω iff (ω) ⊆ . In Aumann (1976), to be common knowledge at ω iff (ω) ⊆ (⇐) By Lemma 2.16, M( ω ) is common knowledge at ω, so E is common knowledge at ω by Proposition 2.4. (⇒) We must show that K*[N](E) implies that M(ω) ⊆ E. Suppose that there exists ω′ ∈ M(ω) such that ω′ ∉ E. Since ω′ ∈ M(ω), ω′ is reachable from ω, so there exists a sequence 0, 1, … , m−1 with associated states ω[1], ω[2], … , ω[m] and information sets H[i[k]](ω[k]) such that ω[0] = ω, ω[m] = ω′, and ω[k] ∈ H[i[k]](ω[k+1]). But at information set H[i[k]](ω[m]), agent i[k] does not know event E. Working backwards on k, we see that event E cannot be common knowledge, that is, agent i[1] cannot rule out the possibility that agent i[2] thinks that … that agent i[m−1] thinks that agent i[m] does not know E. □
{"url":"https://plato.stanford.edu/ARCHIVES/WIN2009/entries/common-knowledge/proof2.17.html","timestamp":"2024-11-02T09:07:14Z","content_type":"application/xhtml+xml","content_length":"6568","record_id":"<urn:uuid:e42a0afb-be70-4550-8337-6264777c6353>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00401.warc.gz"}