content
stringlengths
86
994k
meta
stringlengths
288
619
What is the level of measurement for weight? What is the level of measurement for weight? ratio level An example of the ratio level of measurement is weight. Is weight a ratio or ordinal? For example, because weight is a ratio variable, a weight of 4 grams is twice as heavy as a weight of 2 grams. Is weight an interval or ratio? Most physical measures, such as height, weight, systolic blood pressure, distance etc., are interval or ratio scales, so they fall into the general “continuous ” category. What are the 3 types of measurement? The three measures are descriptive, diagnostic, and predictive. Descriptive is the most basic form of measurement. What is ordinal scale with example? An ordinal scale is a scale (of measurement) that uses labels to classify cases (measurements) into ordered classes. Some examples of variables that use ordinal scales would be movie ratings, political affiliation, military rank, etc. Example. One example of an ordinal scale could be “movie ratings”. Is age an interval or ratio? A ratio scale has the first characteristic of the interval scale (interval) but also has a meaningful zero point—which means the absence of the attribute. This enables multiplication and division on the values. Using the aforementioned definition, age is in a ratio scale. What is difference between interval and ratio? The difference between interval and ratio scales comes from their ability to dip below zero. Interval scales hold no true zero and can represent values below zero. For example, you can measure temperature below 0 degrees Celsius, such as -10 degrees. Ratio variables, on the other hand, never fall below zero. What are the 2 types of measuring cups? There are basically two kinds of measuring cups: one type for liquids and the other for dry goods such as flour and sugar. Measuring cups for dry ingredients are generally made of metal or plastic, and they cannot be used for liquid. What are the 2 types of measurements? Systems of Measurement: there are two main systems of measurement in the world: the Metric (or decimal) system and the US standard system. In each system, there are different units for measuring things like volume and mass. What is measurement example? Measurement is defined as the act of measuring or the size of something. An example of measurement means the use of a ruler to determine the length of a piece of paper. An example of measurement is 15″ by 25″. A waist measurement of 32 inches. What do you mean by ordinal scale? The Ordinal scale includes statistical data type where variables are in order or rank but without a degree of difference between categories. The ordinal scale contains qualitative data; ‘ordinal’ meaning ‘order’. It places variables in order/rank, only permitting to measure the value as higher or lower in scale. What is the use of ordinal scale? Thus, an ordinal scale is used as a comparison parameter to understand whether the variables are greater or lesser than one another using sorting. Thus, an ordinal scale is used when the order of options is to be deduced and not when the interval difference is also to be established. Is gender nominal or ordinal? A nominal variable has no intrinsic ordering to its categories. For example, gender is a categorical variable having two categories (male and female) with no intrinsic ordering to the categories. An ordinal variable has a clear ordering. Is blood type nominal or ordinal? Nominal scales name and that is all that they do. Some other examples are sex (male, female), race (black, hispanic, oriental, white, other), political party (democrat, republican, other), blood type (A, B, AB, O), and pregnancy status (pregnant, not pregnant. What is ratio data example? An excellent example of ratio data is the measurement of heights. Height could be measured in centimeters, meters, inches, or feet. In ratio data, the difference between 1 and 2 is the same as the difference between 3 and 4, but also here 4 is twice as much as 2. Hear this out loudPauseAn example of the ratio level of measurement is weight. Hear this out loudPauseFor example, because weight is a ratio variable, a weight of 4 grams is twice as heavy as a weight of 2 grams. Hear this out loudPauseThe difference between interval and ratio data is simple. Ratio data has a defined zero point. Income, height, weight, annual sales, market share, product defect rates, time to repurchase, unemployment rate, and crime rate are examples of ratio data. What is the level of measurement in statistics? Hear this out loudPauseIn statistics, level of measurement is a classification that relates the values that are assigned to variables with each other. Psychologist Stanley Smith is known for developing four levels of measurement: nominal, ordinal, interval, and ratio. What are the 5 types of measurements? Hear this out loudPauseTypes of data measurement scales: nominal, ordinal, interval, and ratio. Hear this out loudPauseThe three measures are descriptive, diagnostic, and predictive. Descriptive is the most basic form of measurement. Hear this out loudPauseAn ordinal scale is a scale (of measurement) that uses labels to classify cases (measurements) into ordered classes. Some examples of variables that use ordinal scales would be movie ratings, political affiliation, military rank, etc. Example. One example of an ordinal scale could be “movie ratings”. Is birth year nominal or ordinal? Hear this out loudPauseKnowing the scale of measurement for a variable is an important aspect in choosing the right statistical analysis. This scale enables us to order the items of interest using ordinal numbers. Thereof, is age nominal or ordinal? Year of birth is interval level of measurement; age is ratio. Hear this out loudPause[Ratio] Age is at the ratio level of measurement because it has an absolute zero value and the difference between values is meaningful. For example, a person who is 20 years old has lived (since birth) half as long as a person who is 40 years old. Age is, technically, continuous and ratio. Hear this out loudPauseAn excellent example of ratio data is the measurement of heights. Height could be measured in centimeters, meters, inches, or feet. In ratio data, the difference between 1 and 2 is the same as the difference between 3 and 4, but also here 4 is twice as much as 2. Hear this out loudPauseThere are basically two kinds of measuring cups: one type for liquids and the other for dry goods such as flour and sugar. Measuring cups for dry ingredients are generally made of metal or plastic, and they cannot be used for liquid. Hear this out loudPauseSystems of Measurement: there are two main systems of measurement in the world: the Metric (or decimal) system and the US standard system. What are the four levels of measurement in statistics? Here’s more of the four levels of measurement in research and statistics: Nominal, Ordinal, Interval, Ratio. Nominal Scale: 1 st Level of Measurement Nominal Scale, also called the categorical variable scale, is defined as a scale used for labeling variables into distinct classifications and doesn’t involve a quantitative value or order. Which is the third level of measurement measurement? The interval scale is the third level of measurement and encompasses both nominal and ordinal scales. This scale can also be referred to as an interval variable scale ( interval variable is used to describe the meaningful nature of the difference between values). Which is the most commonly used measurement scale? Though they appear simple, nominal data is the foundation of quantitative research and is among the most used measurement scale. Which is the simplest measurement scale to label a variable? The simplest measurement scale we can use to label variables is a nominal scale. Nominal scale: A scale used to label variables that have no quantitative values. Some examples of variables that can be measured on a nominal scale include: Variables that can be measured on a nominal scale have the following properties: What are the different levels of measurement in statistics? Levels of Measurement: Nominal, Ordinal, Interval and Ratio In statistics, we use data to answer interesting questions. But not all data is created equal. There are actually four different data measurement scales that are used to categorize different types of data: Which is an example of a variable measurement scale? Example: Likert Scale; Net Promoter Score (NPS) Bipolar Matrix Table; Ratio Scale. The ratio scale is the 4 th level of measurement scale, which is quantitative. It is a type of variable measurement scale. It allows researchers to compare the differences or intervals. The ratio scale has a unique feature. It possesses the character of the How are height and weight measured on a ratio scale? Ratio scale: A scale used to label variables that have a natural order, a quantifiable difference between values,and a “true zero” value. Height: Can be measured in centimeters, inches, feet, etc. and cannot have a value below zero. Weight: Can be measured in kilograms, pounds, etc. and cannot have a value below zero. What is the ordinal level of measurement in statistics? Ordinal Level of Measurement The next level is called the ordinal level of measurement. Data at this level can be ordered, but no differences between the data can be taken that are meaningful. Here you should think of things like a list of the top ten cities to live.
{"url":"https://www.onteenstoday.com/topic-ideas/what-is-the-level-of-measurement-for-weight/","timestamp":"2024-11-09T17:08:12Z","content_type":"text/html","content_length":"46149","record_id":"<urn:uuid:9373fc40-2831-4900-8e17-45d768e6cf75>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00426.warc.gz"}
The Adomian Decomposition Method for a Type of Fractional Differential Equations The Adomian Decomposition Method for a Type of Fractional Differential Equations () 1. Introduction Fractional calculus can be dated back to the end of 17^th century. In 1695, Leibniz and L’Hospital have discussed 1/2 order derivative, which is regarded as the birth of fractional differential equation. For a long time, fractional calculus does not attract enough attention. It is only considered and studied by many mathematicians. However, in last few decades, fractional calculus has been studied more and more in applied sciences and engineering. The fractional derivative has been applied in many physical problems such as frequency-dependent damping behavior of materials, motion of a large thin plate in a Newtonian fluid, creep and relaxation functions for viscoelastic materials etc. There are many authors who have demonstrated the application of fractional derivative like Oldham and Spainer [1] , Miller and Ross [2] , Podlubny [3] , Samko et al. [4] , Hilfer [5] . And more recently Sabiter et al. [6] have demonstrated the development and application of fractional calculus in physical and engineering. Other applications of fractional differential equations we can refer to [7] - [12] . Fractional calculus is found to be more suitable modeling the process with long range interaction and physical problems described by fractional equations, but sometimes it’s difficult to get the solution of fractional differential equations. For that reason, we need a reliable and efficient technique for solving fractional differential equations. In [13] , Tamsir and Srivastava give an analytical study for time fractional Klein-Gordon equation. Chen et al. use the discrete method to study the time fractional Klein-Gordon equation [14] . Fewer researchers consider giving an approximate solution. In this paper, we give an analytical solution of the time fractional differential equation of the following form ${}_{C}D{}_{0,t}^{\alpha }u=\frac{{\partial }^{2}u}{\partial {x}^{2}}+F\left(u\right)$ . (1.1) where, ${}_{C}{D}_{0,t}^{\alpha }$ is the fractional operator in Caputo sense, $1<\alpha \le 2$ . Except Caputo fractional derivative, there are many other different fractional derivatives, such as the Riemann-Liouville fractional derivative, Grünwald-Letnikov fractional derivative, Rietz fractional derivative etc. From the pure mathematical point, Riemann-Liouville derivative is somewhat more popular than Caputo derivative. Many earlier researchers use it instead of Caputo derivative, but for the Riemann-Liouville derivative we need to specify the values of certain fractional derivatives of the unknown solution at the initial conditions. However, when we deal with the concrete physical problem, the fractional derivative doesn’t have physical meaning. When we deal with the Caputo derivatives, we may only specify the integer order derivative. It has a clearly physical meaning and can be measured. Another reason we choose Caputo derivative is that under homogeneous conditions the equations with Riemann-Liouville operator are equivalent to the equations with Caputo operator, if we choose Caputo derivative it allows us to specify inhomogeneous initial conditions, too, if we This paper is organized as follows. In Section 2, we discuss some basic properties about fractional derivative and fractional integral which will be used in the following part. In Section 3, we introduce the Adomian decomposition method, and the detailed Scheme about the time fractional differential Equation (1.1) will be discussed. In Section 3, a numerical test will be showed, the approximate solution will be compared with the exact solution, and the error analysis will be given. 2. Fractional Integral and Fractional Derivative First, we will give some definitions about fractional calculus including fractional integral and fractional derivative. For fractional derivative there are already exist several different definitions and in general these different definitions are not equivalent to each other. Here we only give the most common definition. Definition 2.1. If $f\left(x\right)$ is continuous on $\left(0,+\infty \right)$ and $\alpha >0$ then the fractional integral is defined as ${D}_{0,t}^{-\alpha }f\left(t\right)=\frac{1}{\Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-\tau \right)}^{\alpha -1}f\left(\tau \right)\text{d}\tau$ . (2.1) Definition 2.2. If ${f}^{\left(n\right)}\left(x\right)$ is continuous on $\left(0,+\infty \right)$ and $n-1<\alpha \le n$ then the Caputo fractional derivative is defined as ${}_{C}D{}_{0,t}^{-\alpha }f\left(t\right)=\frac{1}{\Gamma \left(n-\alpha \right)}{\int }_{0}^{t}{\left(t-\tau \right)}^{n-\alpha -1}{f}^{\left(n\right)}\left(\tau \right)\text{d}\tau$ . (2.2) Property 2.1. [3] If $f\left(x\right)$ is continuous on $\left(0,+\infty \right)$ and $\alpha >0$ , $\beta >0$ then ${D}_{0,t}^{-\alpha }{D}_{0,t}^{-\beta }f\left(t\right)={D}_{0,t}^{-\alpha -\beta }f\left(t\right)$ . (2.3) Property 2.2. [3] If ${f}^{\left(n\right)}\left(x\right)$ is continuous on $\left(0,+\infty \right)$ and $n-1<\alpha \le n$ then ${}_{C}D{}_{0,t}^{\alpha }{D}_{0,t}^{-\alpha }f\left(t\right)=f\left(t\right)$ . (2.4) Property 2.3. [3] If ${f}^{\left(n\right)}\left(x\right)$ is continuous on $\left(0,+\infty \right)$ and $n-1<\alpha \le n$ then ${D}_{0,t}^{-\alpha }{}_{C}D{}_{0,t}^{\alpha }f\left(t\right)=f\left(t\right)-\underset{k=0}{\overset{n-1}{\sum }}\frac{{f}^{\left(k\right)}\left(0\right)}{k!}{t}^{k}$ . (2.5) Here, we only give some basic properties about Caputo fractional derivative and fractional integral which we will use in the following part. For some other properties about Caputo fractional derivative and other definitions about fractional calculus we can refer to [4] . 3. Adomian Decomposition Method The Adomian decomposition method [15] [16] is powerful tool for solving linear or nonlinear equations. For every nonlinear differential equation can be decomposed into the following form $L\left(u\right)+R\left(u\right)+N\left(u\right)=g$ , (3.1) where L is the highest order differential operator, $Ru$ is the remainder of the linear part, $Nu$ represents the nonlinear part and g is a given function. In general, the operator L is invertible. If we take ${L}^{-1}$ on both sides of Equation (3.1), an equivalent expression can be given, $u=-{L}^{-1}R\left(u\right)-{L}^{-1}N\left(u\right)+{L}^{-1}g+\phi$ , (3.2) where $\phi$ satisfy $L\phi =0$ and the initial conditions. If L is the second order derivative, ${L}^{-1}$ is the two-fold definite integral. For the Adomian decomposition method, the solution u is expressed in terms of a series form, $u=\underset{n=0}{\overset{\infty }{\sum }}{u}_{n}$ . (3.3) The nonlinear term $N\left(u\right)$ is represented by the Adomian polynomials ${A}_{n}$ , i.e. $N\left(u\right)=\underset{n=0}{\overset{\infty }{\sum }}{A}_{n}$ . (3.4) ${A}_{n}$ depends on ${u}_{0},{u}_{1},\cdots ,{u}_{n}$ and can be formulated by ${A}_{n}=\frac{1}{n!}\frac{{\text{d}}^{n}}{\text{d}{\lambda }^{n}}{\left[N\left(\underset{k=0}{\overset{\infty }{\sum }}{\lambda }^{k}{u}_{k}\right)\right]}_{\lambda =0},n=0,1,2,\cdots$ . (3.5) For clarity, first few several items of the Adomian polynomials will be listed $\left\{\begin{array}{c}{A}_{0}=N\left({u}_{0}\right),\\ {A}_{1}={u}_{1}{N}^{\left(1\right)}\left({u}_{0}\right),\\ {A}_{2}={u}_{2}{N}^{\left(1\right)}\left({u}_{0}\right)+\frac{1}{2!}{u}_{1}^{2}{N}^ {\left(2\right)}\left({u}_{0}\right),\\ {A}_{3}={u}_{3}{N}^{\left(1\right)}\left({u}_{0}\right)+{u}_{1}{u}_{2}{N}^{\left(2\right)}\left({u}_{0}\right)+\frac{1}{3!}{u}_{1}^{3}{N}^{\left(3\right)}\left ({u}_{0}\right),\\ {A}_{4}={u}_{4}{N}^{\left(1\right)}\left({u}_{0}\right)+\left[\frac{1}{2!}{u}_{2}^{2}+{u}_{1}{u}_{3}\right]{N}^{\left(2\right)}\left({u}_{0}\right)+\frac{1}{2!}{u}_{1}^{2}{u}_{2} {N}^{\left(3\right)}\left({u}_{0}\right)+\frac{1}{4!}{u}_{1}^{4}{N}^{\left(4\right)}\left({u}_{0}\right),\\ ⋮\end{array}$ Then for the Equation (3.1), we have $\underset{n=0}{\overset{\infty }{\sum }}{u}_{n}=-{L}^{-1}R\underset{n=0}{\overset{\infty }{\sum }}{u}_{n}-{L}^{-1}\underset{n=0}{\overset{\infty }{\sum }}{A}_{n}+{L}^{-1}g+\phi$ . (3.6) The Adomian’s technique is equivalent to the following relation which can be defined as $\left\{\begin{array}{c}{u}_{0}={L}^{-1}g+\phi ,\\ {u}_{1}=-{L}^{-1}R\left({u}_{0}\right)-{L}^{-1}{A}_{0},\\ {u}_{2}=-{L}^{-1}R\left({u}_{1}\right)-{L}^{-1}{A}_{1},\\ ⋮\\ {u}_{n}=-{L}^{-1}R\left({u}_ {n-1}\right)-{L}^{-1}{A}_{n-1},\\ ⋮\end{array}$ In theory, if we calculate all the terms ${u}_{n}$ we can find the exact solution. In fact, we just need to compute the first finite terms. Cherruault et al. have proved the convergence of Adomian decomposition method [17] [18] . From some numerical tests of the following part, we can find that the sum of the first three or four terms has high accuracy. The more terms we calculate, the higher the accuracy. 4. Numerical Examples In order to verify the accuracy of the method which described in the last section, two numerical examples will be considered. Example 1. ${}_{C}{D}_{0,t}^{\alpha }u=\frac{{\partial }^{2}u}{\partial {x}^{2}}+u$ , subject to the initial conditions $u\left(x,0\right)=1+\mathrm{sin}x$ , ${u}_{t}\left(x,0\right)=0$ . First, we take ${D}_{0,t}^{-\alpha }$ on both sides of the example 1, the following relation is given $u=u\left(x,0\right)+t{u}_{t}\left(x,0\right)+\frac{1}{\Gamma \left(\alpha \right)}{\int }_{0}^{t}{\left(t-\tau \right)}^{\alpha -1}\left({u}_{xx}\left(x,\tau \right)+u\left(x,\tau \right)\right)\ text{d}\tau .$(3.7) With the scheme we discussed in the last part, we have $\left\{\begin{array}{l}{u}_{0}=1+\mathrm{sin}x,\hfill \\ {u}_{1}=\frac{{t}^{\alpha }}{\Gamma \left(\alpha +1\right)},\hfill \\ {u}_{2}=\frac{{t}^{2\alpha }}{\Gamma \left(2\alpha +1\right)},\hfill \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{\hspace{0.17em}}⋮\hfill \end{array}$ Then the approximate solution is $\stackrel{^}{u}=1+\mathrm{sin}x+\frac{{t}^{\alpha }}{\Gamma \left(\alpha +1\right)}+\frac{{t}^{2\alpha }}{\Gamma \left(2\alpha +1\right)}+\cdots +\frac{{t}^{n\alpha }}{\Gamma \left(n\alpha +1\right)}$ . In order to test the accuracy of the approximate solution, we consider when $\alpha =2$ , the exact solution is $u\left(x,t\right)=\mathrm{sin}x+\mathrm{cosh}t$ . Figure 1 shows the exact solution and the approximate solution with the first four terms. Table 1 shows the error of the exact solution and the approximate solution. In this example, we only use the forst four terms to approximate the exact solution. From the error column we can find that the absolute error is very small, the Adomian decomposition method has a high convergence order. The more terms we use, the higher accuracy we get. Example 2. ${}_{C}{D}_{0,t}^{\alpha }u=\frac{{\partial }^{2}u}{\partial {x}^{2}}-\mathrm{sin}u$ , $1<\alpha \le 2$ subject to the initial conditions $u\left(x,0\right)=\mathrm{sin}\pi x$ , ${u}_{t}\ left(x,0\right)=0$ . Similarly, with the procedure we used in the first example, we have the following result about ${u}_{i}$ $\left\{\begin{array}{l}{u}_{0}=\mathrm{sin}\pi x,\hfill \\ {u}_{1}=\frac{{t}^{\alpha }\left(-{\pi }^{2}\mathrm{sin}\left(\pi x\right)-\mathrm{sin}\left(\mathrm{sin}\left(\pi x\right)\right)\right)} {\Gamma \left(\alpha +1\right)},\hfill \\ \begin{array}{l}{u}_{2}=\frac{{t}^{2\alpha }}{\Gamma \left(2\alpha +1\right)}\left({\pi }^{4}\mathrm{sin}\left(\pi x\right)+{\pi }^{2}\mathrm{sin}\left(\ mathrm{sin}\left(\pi x\right)\right){\mathrm{cos}}^{2}\left(\pi x\right)\\ \text{}+2{\pi }^{2}\mathrm{sin}\left(\pi x\right)\mathrm{cos}\left(\mathrm{sin}\left(\pi x\right)\right)+\mathrm{cos}\left(\ mathrm{sin}\left(\pi x\right)\right)\mathrm{sin}\left(\mathrm{sin}\left(\pi x\right)\right),\end{array}\hfill \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace {0.17em}}\text{\hspace{0.17em}}⋮\hfill \end{array}$ In this example we use the sum of the first three terms as the approximate solution of the problem we discussed. When we consider $\alpha =2$ , the exact solution is $u\left(x,t\right)=\frac{1}{2}\ left(\mathrm{sin}\left(\pi \left(x+t\right)\right)+\mathrm{sin}\left(\pi \left(x-t\right)\right)\right)$ . Figure 2 shows the exact solution and the approximate solution. Table 2 shows the exact solution and approximate solution of the nonlinear fractional differential equation. In the last column we can find that the absolute error is small, here, we only use the first three terms to approximate the solution. If we use more terms, the approximation works better. 5. Conclusion In this work, the Adomian decomposition method is applied to solving a time fractional differential equation. Both the linear and nonlinear type of fractional Figure 1. Approximate solution and exact solution of example 1. Figure 2. Approximate solution and exact solution of example 2. Table 1. Error of exact solution and approximate solution, where $x=1$ . Table 2. Error of exact solution and approximate solution, where $x=1$ . differential equations are considered. From the numerical result, we can find that the Adomian decomposition method is an efficient algorithm. We use only first several terms to approximate the exact solution, the numerical result has high precision. In general, some differential equations are hard to deal with because of the nonlinear terms. The Adomian decomposition method is a powerful tool to cope with this problem. Moreover, no linearization or perturbation is required in this method. This research was funded by the Humanity and Social Science Youth Foundation of Ministry of Education (No. 18YJC630120), the Applied Mathematics of Shanghai Dianji University (No. 16JCXK02).
{"url":"https://scirp.org/journal/paperinformation?paperid=95943","timestamp":"2024-11-05T16:35:22Z","content_type":"application/xhtml+xml","content_length":"166544","record_id":"<urn:uuid:d6037eda-71a1-45fd-b153-127752f1e288>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00477.warc.gz"}
4 Methods To Calculate Probability - Resilyes May 29, 2022 4 Methods To Calculate Probability by Radhe The number 5.42 couldn’t be a probability as a end result of it is bigger than 1. The quantity 0.001 couldn’t be a chance as a outcome of it is extremely small. The number zero.58 couldn’t be a likelihood because it’s smaller than 1. The number negative 120% could not be a chance as a result of it is adverse. The probability that an event doesn’t occur is 1 minus the chance that the occasion does happen. The likelihood of any end result is a number between zero and 1. The possibilities of all the outcomes add up to 1. Since the outcomes have the identical possibilities, which must add up to 1, each consequence is assigned chance half diagonal lines are often used to _______________. of. A is the sum of the probabilities of the individual outcomes of which it’s composed. An event E is claimed to occur on a selected trial of the experiment if the result observed is an element of the set E. Let the occasion that Hanif will lose the sport be denoted by E. The number of all potential outcomes is 36 and not eleven. A student argues that th ere are 11 potential sible outcomes 2 , 3, 4, 5, 6, 7, 8 , 9, 10, 11 and 12. ∵ The bulb drawn above just isn’t included within the lot. Q.16.12 defectivepens are by accident mixed with 132 good ones. It isn’t possible to just have a look at a pen and tell whether or not or not it is defective. Determine the chance tint the pen taken out is an efficient one. A trial is made to answer a true-false query. “I beloved the language, easy to grasp. Following these steps I received 10 out of 10 in chance exams.” “Thank you, I liked to undergo your article, and it appeared it has helped me lots to understand likelihood.” % of individuals told us that this text helped them. ∴ The outcomes both a boy or a woman are equally likely to occur. ∴ The outcomes right or incorrect are equally prone to occur. Since, the automotive could or might not begin, thus the outcomes aren’t equally doubtless. The probability of an event that’s sure to occur is 1. The chance of an event that cannot occur is 0. For more mathematical enjoyable, you might dive into examples of quantitative knowledge. Or, if you’re excited about extra statistics concepts, take a glance at these examples of ordinary deviation. The chance that the 2 randomly chosen marbles are not each pink is 7/9. Where P is the chance of events A and B both occurring, P is the likelihood of event A occurring, and P is the likelihood of occasion B occurring.
{"url":"https://resilyes.com/4-methods-to-calculate-probability/","timestamp":"2024-11-03T22:33:35Z","content_type":"text/html","content_length":"54207","record_id":"<urn:uuid:d1910dd7-4e81-4dd7-87ec-b8389f3c8aa7>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00783.warc.gz"}
CompLaB v1.0: a scalable pore-scale model for flow, biogeochemistry, microbial metabolism, and biofilm dynamics Articles | Volume 16, issue 6 © Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License. CompLaB v1.0: a scalable pore-scale model for flow, biogeochemistry, microbial metabolism, and biofilm dynamics Microbial activity and chemical reactions in porous media depend on the local conditions at the pore scale and can involve complex feedback with fluid flow and mass transport. We present a modeling framework that quantitatively accounts for the interactions between the bio(geo)chemical and physical processes and that can integrate genome-scale microbial metabolic information into a dynamically changing, spatially explicit representation of environmental conditions. The model couples a lattice Boltzmann implementation of Navier–Stokes (flow) and advection–diffusion-reaction (mass conservation) equations. Reaction formulations can include both kinetic rate expressions and flux balance analysis, thereby integrating reactive transport modeling and systems biology. We also show that the use of surrogate models such as neural network representations of in silico cell models can speed up computations significantly, facilitating applications to complex environmental systems. Parallelization enables simulations that resolve heterogeneity at multiple scales, and a cellular automaton module provides additional capabilities to simulate biofilm dynamics. The code thus constitutes a platform suitable for a range of environmental, engineering and – potentially – medical applications, in particular ones that involve the simulation of microbial dynamics. Received: 30 Sep 2022 – Discussion started: 04 Oct 2022 – Revised: 14 Jan 2023 – Accepted: 23 Feb 2023 – Published: 27 Mar 2023 Biogeochemical turnover in Earth's near-surface environments is governed by the activity of microbes adapted to their surroundings to catalyze reactions and gain energy. In turn, these activities shape the environmental composition, which feeds back on metabolic activities and creates ecological niches. Such feedbacks can be captured by reactive transport models that compute the evolution of geochemical conditions as a function of time and space and simulate microbial activities in porous media (Meile and Scheibe, 2019). Commonly used macroscopic reactive transport models simplify small-scale features of natural porous media. For example, heterogeneous pore geometry and transport phenomena are represented by only a few macroscopic parameters such as porosity, permeability, and dispersivity (Steefel et al., 2015). However, such simplifications can lead to a disparity between model estimations and actual observations because these models do not resolve the physical and geochemical conditions at the scale that is relevant for microbial activity (e.g., Molins, 2015; Oostrom et al., 2016). Furthermore, microbial reaction rates are often formulated using Monod expressions, which describe a dependency of metabolic rates on nutrient availability but substantially simplify the complex metabolic adaptation of microbes in changing environments. This recognition has prompted the development of constraint-based models including, for example, COMETS (Harcombe et al., 2014), BacArena (Bauer et al., 2017), and IndiMeSH (Borer et al., 2019), which have enabled detailed descriptions of complex microbial metabolisms and metabolic interactions (Dukovski et al., 2021). However, most constraint-based models are not designed to capture combined diffusive and advective transport of metabolites in heterogeneous subsurface environments and are not optimized to handle such settings in a computationally efficient way. Notably, computational efficiency and the integration of adequate formulations of microbial function have been identified as critical aspects in pore-scale models of microbial activity (Golparvar et al., 2021). To account for the feedback between environmental conditions, chemical processes, microbial metabolism, and structural changes in the porous medium caused by these activities, we introduce a novel pore-scale reactive transport modeling framework with spatially explicit descriptions of hydrological and biogeochemical processes. Our work complements existing efforts, encompassing both individual- and population-based spatially explicit microbial models reviewed by König et al. (2020), some of which take into consideration the structure of the porous medium. Our modular framework is developed to account for various chemical reactions and/or genome-scale metabolic models with advective and diffusive transports in porous media at the pore scale. The lattice Boltzmann (LB) method is used to compute fluid flow and solute transport in complex porous media, capable of simulating both advection- and diffusion-dominated settings. Microbial metabolism and chemical reactions are incorporated as source or sink terms in the LB method solving mass conservation equations. These sources or sinks can be described classically using approximations such as Monod kinetics (Tang et al., 2013) or can be derived from cell-scale growth and metabolic fluxes simulated with flux balance analysis (Orth et al., 2010). Biomass dynamics can be described by keeping track of cell densities (similar to chemical concentration fields) of different organisms or populations, with cell movement either based on an advection–diffusion formulation or using a cellular automaton approach. In addition, we incorporate a surrogate modeling approach to make larger-scale simulations possible. Thus, the framework provides options either to maximize computational efficiency via the use of surrogate models or to directly utilize well-established metabolic modeling environments without losing the inherent parallel scalability of the LB method. The model is validated by comparing model simulations to published simulation results. We demonstrate the flexibility of the new microbial reactive transport framework, its scalability, and the benefits of using surrogate models to circumvent computational bottlenecks posed by flux balance analysis. Our work therefore facilitates cross-disciplinary efforts that integrate bioinformatic approaches underlying cell models with descriptions suitable to resolving the dynamic nature of natural environments. This allows for the representation of microbial interactions, which is a major challenge to our current quantitative understanding of microbially mediated elemental cycling (Sudhakar et al., 2021). 2Use of open-source codes To establish a modeling framework that builds on the existing and future knowledge and know-how from multiple disciplines, our approach uses the open-source software Palabos (Parallel Lattice Boltzmann Solver) and integrates it with the open-source linear programming solver GLPK (GNU Linear Programming Kit) and the COBRApy (CPY) Python package for genome-scale metabolic modeling. Palabos is a modeling platform that has established itself as a powerful approach in the field of computational fluid dynamics based on the lattice Boltzmann (LB) method. The Palabos software is designed to be highly extensible to couple complex physics and other advanced algorithms without losing its inherent capability of massive parallelization (Latt et al., 2021). Palabos has been parallelized using the Message Passing Interface, where computational domains are subdivided while minimizing the inter-process communication. It has been used for building modeling platforms to simulate deformable cell suspensions in relation to blood flows (Kotsalos et al., 2019) and complex subsurface biogeochemical processes at the pore scale (Jung and Meile, 2019, 2021). It is highly scalable and hence was chosen as a high-performance modeling framework to be integrated with our representations of chemical and microbial dynamics. GLPK is an open-source library designed for solving linear programming (LP), mixed integer programming and other related problems (GLPK, 2022). It contains the simplex method, a well-known efficient numerical approach to solve LP problems, and the interior-point method, which solves large-scale LP problems faster than the simplex method. GLPK provides an application programming interface (API) written in C language to interact with a client program. COBRApy is an object-oriented Python implementation of constraints-based reconstruction and analysis (COBRA) methods (Ebrahim et al., 2013), which is suitable to be integrated with other libraries without requiring commercial software. Through a simple Python API, the fast evolving and expanding biological modeling capacity of COBRApy, which includes features such as flux balance analysis (FBA), flux variability analysis (FVA), metabolic models (M models), and metabolism and expression models (ME models), can be employed. CompLaB simulates a fully saturated 2D fluid flow and solute transport at the pore scale based on the LB method implemented in Jung and Meile (2019, 2021). These earlier efforts established some of the underlying model developments, such as the simulation of the flow field, mass transport, and biochemical processes including kinetic rate expressions and cellular automaton implementation of biofilm growth. This study expands on the previously established models to offer a much broader applicability by building the modular structure that makes the use of flux balance and surrogate models possible. The LB method is particularly useful for simulating subsurface processes because boundaries between solid and fluid can be handled by a simple bounce-back algorithm (Ziegler, 1993) in addition to its massive parallelization efficiency (Latt et al., 2021). For these reasons, the LB method has been applied to simulate a broad range of pore-scale reactive transport processes (e.g., Huber et al., 2014; Kang et al., 2007; Tang et al., 2013). The simulations are guided by an input file, CompLaB.xml, that sets the scope of the simulation and capabilities utilized through command blocks that define the model domain, chemical state variables, microorganisms, and model input and output (Fig. 1). Below, the features of the model are presented, and we refer to the manual in the code repository for examples of the implementation. 3.1The lattice Boltzmann flow and mass transport solvers The LB method retrieves the numerical solutions of the Navier–Stokes (NS) for fluid flow and advection–diffusion-reaction equations (ADREs) for solute transport by solving the mesoscopic Boltzmann equation across a defined set of particles (Krüger et al., 2017). CompLaB obtains a steady-state flow field by running a flow solver with a D2Q9 lattice BGK (Bhatnagar–Gross–Krook; Bhatnagar et al., 1954) model defined as $\begin{array}{}\text{(1)}& {f}_{i}\left(\mathbit{r}+{\mathbit{c}}_{i}\mathrm{\Delta }t,t+\mathrm{\Delta }t\right)={f}_{i}\left(\mathbit{r},t\right)-\frac{\mathrm{\Delta }t}{{\mathit{\tau }}_{\mathrm where f[i](r,t) is the ith discrete set of particles streamed from a position r to a new position r+c[i] after a time step with lattice velocities c[i](c[0]=[0,0], c[1]=[1,0], c[2]=[0,1], c[3] = [−1,0], c[4]=[0,−1], c[5]=[1,1], c[6]=[−1,1], c[7]=[−1,−1], c[8]=[1,−1]). τ[f] is the relaxation time related to the fluid viscosity ν[f] (${\mathit{u }}_{\mathrm{f}}={c}_{\mathrm{s}}^{\ mathrm{2}}\left({\mathit{\tau }}_{\mathrm{f}}-\frac{\mathrm{\Delta }t}{\mathrm{2}}\right)$; c[s] is a lattice-dependent constant; here, ${c}_{\mathrm{s}}^{\mathrm{2}}$=$\mathrm{1}/\mathrm{3}$). The equilibrium distribution function for fluid flow (${f}_{i}^{\text{eq}}$) is given by $\begin{array}{}\text{(2)}& {f}_{i}^{\text{eq}}={\mathit{\omega }}_{i}\mathit{\rho }+{\mathit{\omega }}_{i}{\mathit{\rho }}_{\mathrm{0}}\left(\frac{\mathbit{u}\cdot {\mathbit{c}}_{i}}{{c}_{\mathrm {s}}^{\mathrm{2}}}+\frac{{\left(\mathbit{u}\cdot {\mathbit{c}}_{i}\right)}^{\mathrm{2}}}{\mathrm{2}{c}_{\mathrm{s}}^{\mathrm{4}}}-\frac{\mathbit{u}\cdot \mathbit{u}}{\mathrm{2}{c}_{\mathrm{s}}^{\ where ω[i] is the lattice weights (ω[0]=$\mathrm{4}/\mathrm{9}$, ω[1−4]=$\mathrm{1}/\mathrm{9}$, ω[5−8]=$\mathrm{1}/\mathrm{36}$), ρ is the macroscopic density (ρ=Σf[i]), ρ[0] is the rest state constant, and u is the macroscopic velocity calculated from the momentum (ρu=Σc[i]f[i]). The steady-state flow field is then imposed on a transport solver defined as $\begin{array}{}\text{(3)}& \begin{array}{rl}{g}_{i}^{j}\left(\mathbit{r}+{\mathbit{c}}_{i}\mathrm{\Delta }t,t+\mathrm{\Delta }t\right)=& \phantom{\rule{0.25em}{0ex}}{g}_{i}^{j}\left(\mathbit{r},t\ right)-\frac{\mathrm{\Delta }t}{{\mathit{\tau }}_{\mathrm{g}}^{j}}\left[{g}_{i}^{j}-{g}_{i}^{j,\text{eq}}\right]\\ & +{\mathrm{\Omega }}_{i}^{\text{RXN}}\left(\mathbit{r},t\right),\end{array}\end where g[i](r,t) represents the discrete particle set i of a transported entity j at position r and time t. ${\mathit{\tau }}_{\mathrm{g}}^{j}$ is the relaxation time for solute transport related to the diffusivity (D^j=${c}_{\mathrm{s}}^{\mathrm{2}}\left({\mathit{\tau }}_{\mathrm{g}}^{j}-\frac{\mathrm{\Delta }t}{\mathrm{2}}\right)$). The equilibrium distribution function for transport (${g}_ {i}^{j,\mathrm{eq}}$) using a D2Q5 lattice for numerical efficiency, which, satisfying the isotropy requirement for a LB transport solver, is given by $\begin{array}{}\text{(4)}& {g}_{i}^{j,\text{eq}}={\mathit{\omega }}_{i}{C}^{j}\left(\mathrm{1}+\frac{\mathbit{u}\cdot {\mathbit{c}}_{i}}{{c}_{\mathrm{s}}^{\mathrm{2}}}\right),\end{array}$ with the lattice weights ω[0]=$\mathrm{1}/\mathrm{3}$ and ω[1−4]=$\mathrm{1}/\mathrm{6}$, lattice velocities c[0−4], and the solute concentration C^j (${C}^{j}=\sum {g}_{i}^{j}$). In solving an advection–diffusion problem, CompLaB adjusts the value of ${\mathit{\tau }}_{\mathrm{g}}^{j}$, which controls the length of a time step, to obtain a user-provided Péclet number (Pe^j=$UL/{D}^{j}$) for a given average flow velocity U and a user-provided characteristic length L. With a steady-state flow field obtained from the solution of Eq. (1) and a reaction step ${\mathrm{\Omega }}_{i}^{\ text{RXN}}=\mathrm{\Delta }t{\mathit{\omega }}_{i}R$, the LB transport solver recovers an ADRE with the following form: $\begin{array}{}\text{(5)}& \frac{\partial j}{\partial t}=\mathrm{abla }\cdot \left({D}^{j}\mathrm{abla }j\right)-\mathbit{u}\cdot \mathrm{abla }j+R.\end{array}$ Here, the transported entity j includes solute concentrations (C) and planktonic biomass densities, and R is a reaction term computed by the reaction solver of CompLaB as described below. The reaction step (${\mathrm{\Omega }}_{i}^{\text{RXN}}$) computation is separated from a transport computation via the sequential non-iterative approach (Alemani et al., 2005). A unique feature of CompLaB is that its reaction solver can compute biochemical reaction rates R through (1) kinetic rate expressions, (2) flux balance analysis, and (3) a surrogate model such as a pre-trained artificial neural network or combinations thereof, by summing their contributions to the net reaction rates of individual state variables. 3.2.1Kinetic rate expressions CompLaB provides a C$++$ template that users can adapt to formulate kinetic rate expressions using metabolite concentrations and biomass densities (defineReactions.hh). This is designed to accommodate user-specific needs and to enable simulating various microbial dynamics including Monod kinetics, microbial attachment and detachment, and arbitrary rate expressions defined by the user. Reactions can be restricted to particular locations using material numbers (mask) differentiating fluid, biomass, and grain surfaces. Local biomass densities and concentrations calculated after the collision step of transport solvers are transferred to the function as vectors B and C, where the vector elements follow the order defined in the user interface (CompLaB.xml). The biomass density and metabolite concentrations are updated according to $\begin{array}{}\text{(6)}& {\mathbit{B}}_{t+\mathrm{\Delta }t}={\mathbit{B}}_{\mathrm{t}}+{\mathit{\gamma }}_{\mathrm{t}}{\mathbit{B}}_{\mathrm{t}}\mathrm{\Delta }t,\text{(7)}& {\mathbit{C}}_{t+\ mathrm{\Delta }t}={\mathbit{C}}_{\mathrm{t}}+{\mathbit{R}}_{\mathrm{t}}\mathrm{\Delta }t,\end{array}$ where γ[t] values are the cell-specific biomass growth rates, R[t] values are the microbially mediated reaction rates, expressed as the product of metabolite uptake/release rates per cell (F[t]) and the cell density B[t] (i.e., R[t]=F[t]B[t]), calculated every time step for every pore and biomass grid cell. 3.2.2Flux balance analysis For genome-enabled metabolic modeling, CompLaB loads metabolic networks and calculates microbial growth rates as well as metabolite uptake/release rates through an FBA method (Orth et al., 2010). FBA investigates the metabolic capabilities by imposing several constraints on the metabolic flux distributions. Assuming that metabolic systems are at steady state, the system dynamics for a metabolic network are described by the mass balance equation Sv=0. Here, S is a m×n matrix with m compounds and n reactions, where the entries in each column are the stoichiometric coefficients of the metabolites composing a reaction and v is a n×1 flux vector representing metabolic reactions and uptake/release of chemicals by the cell. Most metabolic models have more reactions than compounds (n>m), meaning that there are more unknowns than equations. To solve such underdetermined systems, FBA confines the solutions to a feasible set by imposing constraints on metabolic fluxes lb (lower bounds)≤v≤ub (upper bounds) and applies an objective function f(v)=c^Tv, where c is the vector of weights for the objective function to identify an optimal solution. Commonly used objective functions include maximization of biomass yield, maximization of ATP production, and minimization of nutrient uptake (Nikdel et al., 2018). CompLaB utilizes the stoichiometric matrix S from standard metabolic databases such as BiGG and KEGG, which are widely used in FBA simulation environments (e.g., COBRA toolbox, Heirendt et al., 2019; COBRApy, Ebrahim et al., 2013; KBase, Arkin et al., 2018). Therefore, CompLaB can integrate many existing in silico cell models. CompLaB computes the solution of the metabolic models at each point in space and time for each organism or microbial community (if the model represents multiple microorganisms) and updates biomass density and metabolite concentrations according to Eqs. (6) and (7). The metabolic uptake fluxes are set through imposing constraints by defining the lower bound (lb) of a chemical (uptake fluxes are negative) through one of the following approaches. The first is the parameter-based method employed by Harcombe et al. (2014), setting the metabolic fluxes in analogy with Michaelis–Menten kinetics using a maximum uptake rate (V[max]; e.g., $\mathrm{mmol}\phantom{\rule{0.125em}{0ex}}{\mathrm{g}}_{\text{DW}}^{-\mathrm{1}}\phantom{\rule{0.125em}{0ex}}{\mathrm{h}}^{-\mathrm{1}}$, where g[DW] is gram dry weight): $\begin{array}{}\text{(8)}& \text{lb}=-{V}_{max}\left(\frac{C}{C+{K}_{\mathrm{s}}}\right),\end{array}$ where C is a local metabolite concentration (e.g., mM) and K[s] is a half-saturation constant (e.g., mM). The second is the semi-linear approach employed by Borer et al. (2019). This method replaces V[max] with C(BΔt)^−1, where B is a local biomass density (e.g., g[DW]L^−1) and Δt is the length of a time step (e.g., h): $\begin{array}{}\text{(9)}& \text{lb}=-\frac{C}{B\mathrm{\Delta }t}\left(\frac{C}{C+{K}_{\mathrm{s}}}\right).\end{array}$ If K[s] is set to 0, then the uptake flux estimation becomes a linear function to local concentrations. Note that the units in the fluid flow and mass conservations model simulation must match those of the FBA bounds, which in our case were $\mathrm{mmol}\phantom{\rule{0.125em}{0ex}}{\mathrm{g}}_{\text{DW}}^{-\mathrm{1}}\phantom{\rule{0.125em}{0ex}}{\mathrm{h}}^{-\mathrm{1}}$. With lower bounds defined, the solution of an FBA problem outputs biomass growth rate (γ; e.g., h^−1) and uptake/release rates of metabolites (F; e.g., $\mathrm{mmol}\phantom{\rule{0.125em}{0ex}}{\mathrm{g}}_{\text 3.2.3Surrogate model CompLaB also provides a C$++$ template (surrogateModel.hh) where users can incorporate a pre-trained surrogate model for calculating biogeochemical reactions, including artificial neural networks (ANNs). This functionality can be used to replace FBA, which requires solving many computationally expensive linear optimization problems (Sect. 3.2.2). In the example shown in Sect. 5, CompLaB provides local metabolite concentrations and biomass densities as inputs and the surrogate model outputs microbial growth rate (γ) and uptake/production rates of metabolites (F). While our demonstration is based on ANN models, any pre-trained statistical surrogate model (e.g., De Lucia and Kühn, 2021) that describes the sources and sinks – or their parameterization – can be used to enhance computational efficiency and accommodate various user-specific needs. 3.3Biomass redistribution To explicitly model the spatial biomass expansion, CompLaB utilizes a cellular automaton (CA) with a predefined maximum biomass density (B[max]) based on the CA algorithm developed by Jung and Meile (2021). After updating local biomass densities, the CA algorithm checks at every time step if there is any grid cell exceeding B[max] and redistributes the excess biomass ($B-{B}_{max}$) to a randomly selected neighboring grid cell. If the selected grid cell cannot hold the excess biomass, the first chosen grid cell is filled up to the maximum holding capacity (B[max], a value defined by the user), and then the remaining excess biomass is allocated to a randomly chosen second neighbor cell. If all the neighboring grid cells are saturated with biomass ($B\ge {B}_{max}$) and hence the excess biomass cannot be placed, the Manhattan distances of biomass grid cells to the closest pore cells are evaluated. The remaining excess biomass is then placed in a neighboring grid cell that is closer to pores. and this biomass allocation process is repeated until all the excess biomass is redistributed. Figure 2 shows an example of the cellular automaton process for biomass redistribution. Note that this biomass redistribution method is a simple approximation for biomass density conservation with room for improvement (e.g., Tang and Valocchi, 2013). When the sessile biomass reaches a threshold density ($B\ge \mathit{\psi }{B}_{max}$, where ψ is a user-defined threshold biomass fraction; $\mathrm{0}\le \mathit{\psi }\le \mathrm{1}$), the pore grid cell is designated as biomass; if the biomass density falls below the threshold due to microbial decay or detachment (B<ψB[max]), then a biomass grid cell is converted back to a pore. Pore grid cells allow for both advective and diffusive transport, but in the biofilm, sessile biomass hinders (i.e., permeable biofilm) or prevents (i.e., impermeable biofilm) flow, and the feedback between biomass growth/decay and advective flow conditions is accounted for by rerunning the flow solver to steady state after updating biomass distribution and corresponding pore geometry (Jung and Meile, 2021). The reduced advective transport efficiency in permeable biomass grid cells is implemented by modifying local fluid viscosity in the biofilm (ν[bf]) with ${\mathit{u }}_{\text{bf}}={\ mathit{u }}_{\mathrm{f}}/X$, where X is a user-defined viscosity ratio ($\mathrm{0}\le X\le \mathrm{1}$), while for impermeable biomass, a bounce-back condition is imposed (Pintelon et al., 2012). After imposing the new steady-state flow field, a streaming step of the transport solver is executed (Fig. 1). To verify the CompLaB implementation, the engineered metabolic interaction between two co-dependent mutant strains (Escherichia coli K12 and Salmonella enterica LT2) originally established by Harcombe (2010) and implemented in COMETS (Harcombe et al., 2014) and IndiMeSH (Borer et al., 2019) was chosen as a test case. E. coli K12 is deficient in producing methionine and relies on the release of methionine by the mutant S. enterica LT2. In turn, S. enterica LT2 requires acetate released by E. coli K12 because of its inability to metabolize lactose. As a result, these genetically engineered strains are obliged to engage in mutual interaction where neither species can grow in isolation. The ratio of the two strains converges to a stable relative composition after 48h in all the in vitro and in silico experiments at both initial ratios of 1:99 and 99:1. Both COMETS (Harcombe et al., 2014) and IndiMeSH (Borer et al., 2019) integrate flux balance cell models of these two microorganisms into a two-dimensional environment in which metabolites are exchanged via diffusion. The initial and boundary conditions of these simulations were mirrored in CompLaB, with 100 grid cells containing 3×10^−7g biomass each (total=3×10^−5g biomass) distributed randomly across a two-dimensional domain of 25×25 grid squares. The grid length was set to 500µm, and the initial distributions of the two species were allowed to overlap. Three replicate simulations were carried out for each initial microbial ratio of E. coli and S. enterica (1:99 and 99:1). For the exchange metabolites acetate and methionine, fixed boundary concentrations of 0mM were imposed on the left and right side of the domain, respectively, with no flux conditions at the top and bottom boundaries. The concentrations of lactose (2.92mM) and oxygen (0.25mM) were imposed at all domain boundaries. Solute and biomass diffusion coefficients were fixed at 5×10^−10 and 3×10^−13m^2s^−1, respectively. Consistent with the IndiMeSH model implementation, the simulation was carried out using Eq. (9), a finite-difference method for biomass diffusion, and reduced metabolic models of E. coli K12 and S. enterica LT2 in which the number of metabolites and reactions of the original metabolic models was systematically reduced by 1 order of magnitude (Borer et al., 2019). Figure 3 illustrates the average ratio of E. coli and S. enterica of all six simulations (triplicates for each initial composition ratios of 1:99 and 99:1) after 48h of simulation time. CompLaB simulation results agree with both the observations and the model results of COMETS and IndiMeSH, demonstrating the metabolic inter-dependence of two strains, and the convergence to a stable composition ratio (Appendix A, Fig. A1). 5Surrogate model integration A major issue in fully coupling genome-scale metabolic networks to reactive transport models is the large computational demand due to the repeated calculation of the LP problems at every biomass grid cell and every time step. Previous studies have alleviated this issue by interpolating the solutions of LP problems from a lookup table generated in advance to a reactive transport simulation (Scheibe et al., 2009), dynamically creating a solution pool of the LP problems during the reactive transport process (Fang et al., 2011), or systematically reducing the size of the matrix encoding the metabolic network (Ataman et al., 2017; Ataman and Hatzimanikatis, 2017). Here, we introduce a statistical surrogate modeling approach using a pre-trained artificial neural network (ANN) following the approach presented in Song et al. (2023). A trained ANN model directly relates input parameters (i.e., uptake rates of substrates) to outputs (i.e., biomass and metabolite production rates) through a set of nonlinear algebraic equations. As computing such input–output relationships from a pre-trained ANN model is several orders of magnitude faster than running FBA using a fully fledged metabolic model, the use of such surrogate models to achieve a significant speed-up has attracted much attention recently (e.g., De Lucia and Kühn, 2021; Prasianakis et al., 2020). Here, we use the metabolic network of Geobacter metallireducens GS-15 (iAF987; Lovley et al., 1993; http://bigg.ucsd.edu/models/iAF987, last access: September 2022), a strict anaerobe capable of coupling the oxidation of organic compounds to the reduction of metals such as iron and manganese, using ammonium as its nitrogen source to train an ANN model. The dataset used for training a surrogate ANN model was obtained by collecting FBA solutions of the base model obtained using the IBM ILOG cplex optimizer with the objective function chosen to maximize biomass production. The solution set was prepared by randomly sampling 5000 combinations of two growth-limiting metabolites – acetate (C[Ac]) and ammonium (${C}_{{\mathrm{NH}}_{\mathrm{4}}}$) – within the concentration ranges of $\mathrm{0}\le {C}_{\mathrm{Ac}}\le$0.5mM and $\mathrm{0}\le {C}_{{\mathrm{NH}}_{\mathrm{4}}}\le$0.05mM via Monte Carlo simulations. Substrate concentrations were converted to uptake fluxes via the parameter-based approach (Eq. 7) using the parameters from Fang et al. (2011) (V[max]= 10$\mathrm{mmol}\phantom{\rule{0.25em}{0ex}}\mathrm{acetate}\phantom{\rule{0.25em}{0ex}}{\ mathrm{g}}_{\mathrm{DW}}^{-\mathrm{1}}\phantom{\rule{0.125em}{0ex}}{\mathrm{h}}^{-\mathrm{1}}$ and 0.5$\mathrm{mmol}\phantom{\rule{0.25em}{0ex}}\mathrm{ammonium}\phantom{\rule{0.25em}{0ex}}{\mathrm {g}}_{\mathrm{DW}}^{-\mathrm{1}}\phantom{\rule{0.125em}{0ex}}{\mathrm{h}}^{-\mathrm{1}}$; K[s]=0.01mM acetate and 0.1mM ammonium). These fluxes were used as lower bounds, and Fe^3+ was allowed to be consumed without limitation. The collected FBA solution dataset was split and used to train (70%), validate (15%), and test (15%), and we developed an ANN model using MATLAB's neural network toolbox. As key hyperparameters, the number of layers and the number of nodes in the ANN model were, respectively, determined to be 4 and 10 through grid search. Ensuring the accuracy of a surrogate model is of critical importance in reactive transport models because even small errors accumulating over successive time steps can lead to a substantial error. In this simple example, the trained ANN estimates the biomass growth rate of the full FBA model almost perfectly (R>0.999) against training, validation, and test datasets. This shows that the fully fledged metabolic model can be replaced by the surrogate ANN model without substantial loss of accuracy, boosting the simulation speed as shown next. CompLaB inherits the massive scalability of Palabos which decomposes a simulation domain into multiple subdomains and assigns them to individual computational nodes. In the following, the scalability of various components of CompLaB is assessed for a simplified microbial dynamics problem. 6.1Test case The simulation domain (Fig. 4) was prepared by taking a subsection of 500×300 square elements from the porous medium of Souzy et al. (2020). The length of each element was 16.81µm, and material numbers were assigned to solid (0), pore (1), and solid–pore interfaces (solid side of interface: 2; pore side of interface: 3). In total, 10% of the interface grid cells (pore side) were randomly assigned as sessile biomass grid cells (4) initially. Flow was induced from left to right by imposing a fixed pressure gradient between left- and right-side boundaries, and no flow conditions were set at the top and bottom boundaries as well as on the grain surfaces. The steady-state flow field was then provided to the CompLaB transport solver for mass transport and reaction simulations (Péclet number is 1 for a characteristic length scale of 2mm). Two growth-limiting metabolites – acetate (CH[3]COO^−) and ammonium (${\mathrm{NH}}_{\mathrm{4}}^{+}$) – were considered for the mass transport simulations. Acetate was injected at the inlet (left) boundary with the fixed concentration of 0.45mM to the simulation domain initially filled with the same concentration. Ammonium concentration in the inflowing fluid and initially in the domain was 0mM, but it was produced at solid surfaces assuming a zeroth-order mineralization rate (Table 1). For both metabolites, no gradient conditions were imposed at the outlet, top, bottom, and grain boundaries. No external source and initial planktonic biomass were assumed, so that all planktonic biomass is detached sessile The biogeochemical problem is described by the following set of ADREs: $\begin{array}{}\text{(10)}& \frac{\partial {C}_{\mathrm{Ac}}}{\partial t}=\mathrm{abla }\cdot \left({D}_{\mathrm{Ac}}\mathrm{abla }{C}_{\mathrm{Ac}}\right)-\mathbit{u}\cdot \mathrm{abla }{C}_{\ mathrm{Ac}}-{F}_{\mathrm{Ac}}{B}_{\mathrm{S}},\text{(11)}& \begin{array}{rl}\frac{\partial {C}_{{\mathrm{NH}}_{\mathrm{4}}}}{\partial t}=& \phantom{\rule{0.25em}{0ex}}\mathrm{abla }\cdot \left({D}_ {{\mathrm{NH}}_{\mathrm{4}}}\mathrm{abla }{C}_{{\mathrm{NH}}_{\mathrm{4}}}\right)-\mathbit{u}\cdot \mathrm{abla }{C}_{{\mathrm{NH}}_{\mathrm{4}}}+{k}_{{\mathrm{NH}}_{\mathrm{4}}}{\mathit{\delta }}_{\ mathrm{S}}\\ & -{F}_{{\mathrm{NH}}_{\mathrm{4}}}{B}_{\mathrm{S}},\end{array}\text{(12)}& \begin{array}{rl}\frac{\partial {B}_{\mathrm{P}}}{\partial t}=& \phantom{\rule{0.25em}{0ex}}\mathrm{abla }\ cdot \left({D}_{{\mathrm{B}}_{\mathrm{P}}}\mathrm{abla }{B}_{\mathrm{P}}\right)-\mathbit{u}\cdot \mathrm{abla }{B}_{\mathrm{P}}-{\mathit{\mu }}_{\mathrm{B}}{B}_{\mathrm{P}}-{k}_{\text{att}}{\mathit{\ delta }}_{\mathrm{B}}{B}_{\mathrm{P}}\\ & +{k}_{\text{det}}{B}_{\mathrm{S}},\end{array}\text{(13)}& \frac{\partial {B}_{\mathrm{S}}}{\partial t}=\mathit{\gamma }{B}_{\mathrm{S}}-{\mathit{\mu }}_{\ mathrm{B}}{B}_{\mathrm{S}}+{k}_{\text{att}}{\mathit{\delta }}_{\mathrm{B}}{B}_{\mathrm{P}}-{k}_{\text{det}}{B}_{\mathrm{S}}.\end{array}$ C[Ac] and ${C}_{{\mathrm{NH}}_{\mathrm{4}}}$ are the concentrations of acetate and ammonium, respectively; B[P] is the planktonic biomass density; B[S] is the sessile biomass density; u denotes the flow field; δ indicates the presence (δ=1) or absence (δ=0) of a grain surface (δ[S]) and a biomass grid cell (δ[B]); and D[Ac], ${D}_{{\mathrm{NH}}_{\mathrm{4}}}$, and D[B] are the diffusion coefficients of acetate, ammonium, and planktonic biomass, respectively, which differ between pore, biomass, and solids. For simplicity, the diffusivities of all the metabolites and planktonic biomass were set to 10^−9m^2s^−1 in the pores, 8×10^−10m^2s^−1 in biomass grid cells, and 0 in the solid. Biomass attachment and detachment (k[att] and k[det]), biomass decay (μ[B]), and organic matter mineralization (${k}_{{\mathrm{NH}}_{\mathrm{4}}}$) were simulated using the reaction kinetics solver, with the corresponding rate constants summarized in Table 1. The simulation assumed that G. metallireducens grows only on solid surfaces and planktonic biomass attaches only to existing surface-attached aggregates (Grinberg et al., 2019). The computational efficiency and parallel performance of CompLaB was tested by executing four independent simulations utilizing each reaction solver. The cell-specific metabolic fluxes (F, Eq. 9) and biomass growth rates (γ, Eq. 12) were calculated either through FBA (CPY or GLPK), ANN, and/or reaction kinetics (KNS). The KNS solver was combined with other solvers (CPY, GLPK, and ANN) or used as a stand-alone reaction solver with the cellular automaton algorithm invoked (CA) and B[max] set to 100g[DW]L^−1. The model performance simulations with the FBA solvers (CPY and GLPK) were prepared with the same conditions used for training the ANN model (Sect. 5). The pre-trained ANN model from Sect. 5 was used for the separate simulation ANN. For the CA simulation, we create a situation similar to the above examples but in which substantial biofilm growth over a short simulation time is artificially induced. To that end, F and γ were computed as $\begin{array}{}\text{(14)}& F={k}_{\text{kns}}\left(\frac{{C}_{\mathrm{Ac}}}{{C}_{\mathrm{Ac}}+{K}_{\mathrm{Ac}}}\right)\left(\frac{{C}_{{\mathrm{NH}}_{\mathrm{4}}}}{{C}_{{\mathrm{NH}}_{\mathrm{4}}} +{K}_{{\mathrm{NH}}_{\mathrm{4}}}}\right),\text{(15)}& \mathit{\gamma }=Y{F}_{\mathrm{Ac}},\end{array}$ where F denotes the metabolic fluxes (for simplicity assuming F[Ac]=${F}_{{\mathrm{NH}}_{\mathrm{4}}}$), γ is the biomass growth rate, k[kns]=2.5×10^−6mMs^−1 (Marozava et al., 2014) is the maximum uptake rate, and K[Ac]=0.1mM and ${K}_{{\mathrm{NH}}_{\mathrm{4}}}$=0.01mM are the half-saturation constants for acetate and ammonium, respectively. The growth yield Y is set to 40000 ${\mathrm{g}}_{\mathrm{DW}}\phantom{\rule{0.125em}{0ex}}{\mathrm{mmol}}_{\mathrm{Ac}}^{-\mathrm{1}}$, an arbitrarily large number used only to invoke the CA algorithm within 10000 time steps (440s ). The flow field was updated every 10 time steps when the CA algorithm was invoked (as, e.g., Thullner and Baveye, 2008; Jung and Meile, 2021). The performance tests were carried out on the computing nodes of the Georgia Advanced Computing Resources Center. Each node has an AMD EPYC 7702P 64-core processor with a 2.0GHz clock cycle (AMD Rome), with 128GB of RAM. The nodes are interconnected via an EDR InfiniBand network with 100GBs^−1 effective throughput and run a 64-bit Linux operating system (CentOS 7.9 distribution). The elapsed (wall-clock) time for 10000 time steps was recorded. A comparison of simulation times for flow, transport, and reaction shows that most compute time is used for simulating the reactions, in particular when integrating in silico cell models into a reactive transport framework (Fig. 5). This highlights the benefit of using efficient surrogate models. The surrogate ANN model substantially improves computational efficiency compared to CPY and GLPK (about 2 orders of magnitude in total elapsed time; Fig. 5a) because calculating the pre-trained ANN is much faster than solving the linear programming problem every time step in FBA (Fig. 5c). For reaction calculations (Fig. 5d), the ANN simulation (solid sky-blue line with gray square symbols) even exhibits comparable but slightly shorter simulation times than the traditional reaction kinetics calculation (KNS; dashed lines with orange symbols) because the ANN implementation only operates on biomass grid cells while KNS operates on both biomass and pore grid cells. In addition to the computational efficiency, negligible errors introduced by the surrogate ANN model assure the use of a surrogate ANN model (Fig. B1). Although the error in biomass calculation accumulates over simulation time steps, it is kept to very low values throughout the simulation (the relative error is on the order of 10^−9; Appendix B) and has practically no influence on metabolite concentration calculations. This observation illustrates that CompLaB can calculate microbial metabolic reactions in porous media with a heterogeneous distribution of pore, biomass, organic matter, and minerals, based on the genome-scale metabolic model with the superior computational efficiency of a surrogate model without losing accuracy. Applying Monte Carlo simulations to generate FBA data to train a neural network model required solving the linear programming problem 5000 times with a set of randomly chosen uptake rates of acetate and ammonium, which does not add a significant computational burden. In our application, we determined the number of layers and nodes in the neural network to be 4 and 10, respectively, because no further improvement in model performance was observed beyond those values. The computational efficiency of ANN also works in favor of scalability. The reaction processes of CompLaB are inherently an embarrassing parallel task because calculating biogeochemical sub-processes is completely independent of the neighbors (except CA) and all model performance simulations show the reasonable scalability up to 64 cores (Fig. 5). However, the scaling behaviors of all simulations except ANN illustrate suboptimal scalability with no or limited speed-up when using more compute resources. The loss of efficiency originates mostly from the calculation of reaction processes (Fig. 5c) because the domain decomposition applied to the heterogeneous simulation domain (Fig. 4) resulted in an uneven distribution of biofilm grid cells per subdomain and hence a variable size of the problem to be solved on each core. In fact, in our simple example problem (a total of 500×300 computational grid cells, constant random seeds), a domain decomposition when using 64, 128, and 192 cores produced 6, 38, and 76 subdomains, respectively, that have no initial biomass grid cells. Such subdomains do not contribute to computing microbial metabolisms (FBA) and biomass redistribution (CA) that consumes most of the computational cost (e.g., GLPK calculation consumes ∼99% of the total computational cost), preventing ideal parallel performance (Fig. 5d). While this is also true for ANN, computational efficiency of ANN reduces the time wasted by such subdomains. As a result, the suboptimality is not readily apparent in ANN (Fig. 5c). The CA algorithm implemented in CompLaB is a nonlocal process requiring information from neighboring grid cells, but the result still exhibits a good scalability when using up to 64 cores and suboptimal scalability when more cores are used. The CA simulation required less time than FBA simulations because the metabolic reactions were calculated using KNS. But CA spent extra time on updating the flow field (Fig. 5b) and redistributing excess biomass (Fig. 5c and d). This illustrates that the actual time required for the CA algorithm depends on the nature of the biomass expansion. For example, more time will be required for a system with rapid biofilm growth excess because excess biomass has to travel a longer distance through a thick biofilm. Furthermore, flow fields will need to be updated more often to reflect the influence of rapid biofilm growth on flow. The numerical modeling platform CompLaB for simulating 2D pore-scale reactive transport processes is capable of utilizing the quantitative implementation of the microbial metabolism through the coupling of genome-scale metabolic models. The integration of in silico cell models with reactive transport simulations makes this framework broadly applicable and enables the integration of knowledge gained from the “omics”-based characterization of microbial systems. For example, the successful reproduction of experimentally observed convergences to a stable composition of a two-species consortium (S. enterica and E. coli) demonstrates the capability of CompLaB to investigate metabolic interactions between multiple microbial species. Our novel numerical framework based on the lattice Boltzmann method allows simulating advection as well as diffusion of metabolites in complex porous media. A wide range of simulation domains can be used to represent soil structure and fractured rock images directly obtained from various imaging techniques (e.g., μ-CT, FIB-SEM) or numerically generated porous media with material numbers assigned to pore, solid, and source/sink grid cells for biogeochemical reactions which include but are not limited to biofilms. The inherent parallel efficiency of CompLaB facilitates simulating dynamic flux balance analysis capturing the microbial feedback on flow and transport in porous media, as done previously using Monod-type representations of microbial activity (Jung and Meile, 2021). Furthermore, the versatile simulation environment of CompLaB allows utilizing surrogate models, such as an artificial neural network. The resulting speed-up enables the investigation of complex biogeochemical processes in natural environments. Appendix A:Convergence of the verification model to a stable ratio after 100h The six simulation cases used in Sect. 4 for model verification were run for 100h of simulation time to further evaluate whether the observed convergence to an average composition ratio is stable (Fig. A1). The composition ratio observed after 48h (0.75) is largely maintained through the extended simulation period (increases only to 0.78 after 100h). Appendix B:Surrogate model accuracy The surrogate modeling approach inevitably introduces errors in model estimations. The errors should be maintained sufficiently low throughout the surrogate simulation otherwise errors can propagate in successive iterations and result in unphysical results. To quantify the errors in surrogate model estimations, the solutions of our artificial neural network (ANN) were compared to the reference simulation COBRApy (CPY) by calculating the arithmetic mean of the root mean squared errors: $\begin{array}{}\text{(B1)}& {\text{error}}_{t}=\frac{\mathrm{1}}{m}\sum _{j}^{m}\frac{\sqrt{\frac{\mathrm{1}}{{n}_{j}}{\sum }_{i}^{{n}_{j}}\left({\text{CPY}}_{i,j}-{\text{ANN}}_{i,j}{\right)}^{\ where j is the variable type (B[S], B[P], C[Ac], and ${C}_{{\mathrm{NH}}_{\mathrm{4}}}$), m is the number of variables j used in calculating the error, n[j] is the number of grid cells for each variable j, and t is the time step where the error is evaluated. The differences between the FBA simulations using GLPK and CPY solvers are negligible, so only the CPY solution was chosen as the reference (Fig. B1). The model code, input files used for this study, and a manual are available at https://doi.org/10.5281/zenodo.7095756 (Jung et al., 2022). Developments after the publication of this article will continue to be hosted at https://bitbucket.org/MeileLab/complab/ (Jung and Meile, 2022). The neural network model was established from flux balance model simulations. That model (the genome-scale metabolic network of Geobacter metallireducens GS-15 (iAF987)) was obtained from the BiGG database (King et al., 2016). The artificial neural network model along with iAF987 are publicly available through the Zenodo code repository (Jung et al., 2022). HJ developed the research, performed the overall programming and simulations, analyzed and interpreted the data, wrote the initial draft, and revised the paper, HS trained the ANN model and reviewed the paper, CM conceived the research, carried out the performance measures, and revised the paper. The contact author has declared that none of the authors has any competing interests. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. The authors thank Shan-Ho Tsai for help with the simulations, Benjamin Borer for constructive discussions and feedback on IndiMeSH, and Maria De La Fuente Ruiz and an anonymous reviewer for their comments, which helped improve the paper. This study was also supported by resources and technical expertise from the Georgia Advanced Computing Resource Center, a partnership between the University of Georgia's Office of the Vice President for Research and the Office of the Vice President for Information Technology. This work was supported by the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research, Genomic Sciences Program under award number DE-SC0016469 and DE-SC0020373 to Christof Meile and by the Ministry of Science and ICT, South Korea, Institute for Korea Spent Nuclear Fuel (iKSNF) and the National Research Foundation of Korea (NRF) under award number 2021M2E1A1085202 to Heewon Jung. This paper was edited by Sandra Arndt and reviewed by Maria De La Fuente Ruiz and one anonymous referee. Alemani, D., Chopard, B., Galceran, J., and Buffle, J.: LBGK method coupled to time splitting technique for solving reaction-diffusion processes in complex systems, Phys. Chem. Chem. Phys., 7, 3331–3341, https://doi.org/10.1039/b505890b, 2005. Arkin, A. P., Cottingham, R. W., Henry, C. S., Harris, N. L., Stevens, R. L., Maslov, S., Dehal, P., Ware, D., Perez, F., Canon, S., Sneddon, M. W., Henderson, M. L., Riehl, W. J., Murphy-Olson, D., Chan, S. Y., Kamimura, R. T., Kumari, S., Drake, M. M., Brettin, T. S., Glass, E. M., Chivian, D., Gunter, D., Weston, D. J., Allen, B. H., Baumohl, J., Best, A. A., Bowen, B., Brenner, S. E., Bun, C. C., Chandonia, J. M., Chia, J. M., Colasanti, R., Conrad, N., Davis, J. J., Davison, B. H., Dejongh, M., Devoid, S., Dietrich, E., Dubchak, I., Edirisinghe, J. N., Fang, G., Faria, J. P., Frybarger, P. M., Gerlach, W., Gerstein, M., Greiner, A., Gurtowski, J., Haun, H. L., He, F., Jain, R., Joachimiak, M. P., Keegan, K. P., Kondo, S., Kumar, V., Land, M. L., Meyer, F., Mills, M., Novichkov, P. S., Oh, T., Olsen, G. J., Olson, R., Parrello, B., Pasternak, S., Pearson, E., Poon, S. S., Price, G. A., Ramakrishnan, S., Ranjan, P., Ronald, P. C., Schatz, M. C., Seaver, S. M. D., Shukla, M., Sutormin, R. A., Syed, M. H., Thomason, J., Tintle, N. L., Wang, D., Xia, F., Yoo, H., Yoo, S., and Yu, D.: KBase: The United States Department of Energy Systems Biology Knowledgebase, Nat. Biotechnol., 36, 566–569, https://doi.org/10.1038/nbt.4163, 2018. Ataman, M. and Hatzimanikatis, V.: lumpGEM: Systematic generation of subnetworks and elementally balanced lumped reactions for the biosynthesis of target metabolites, PLOS Comput. Biol., 13, 1–21, https://doi.org/10.1371/journal.pcbi.1005513, 2017. Ataman, M., Hernandez Gardiol, D. F., Fengos, G., and Hatzimanikatis, V.: redGEM: Systematic reduction and analysis of genome-scale metabolic reconstructions for development of consistent core metabolic models, PLOS Comput. Biol., 13, 1–22, https://doi.org/10.1371/journal.pcbi.1005444, 2017. Bauer, E., Zimmermann, J., Baldini, F., Thiele, I., and Kaleta, C.: BacArena: Individual-based metabolic modeling of heterogeneous microbes in complex communities, PLOS Comput. Biol., 13, 1–22, https://doi.org/10.1371/journal.pcbi.1005544, 2017. Bhatnagar, P. L., Gross, E. P., and Krook, M.: A Model for Collision Processes in Gases. I. Small Amplitude Processes in Charged and Neutral One-Component Systems, Phys. Rev., 94, 511–525, https:// doi.org/10.1103/PhysRev.94.511, 1954. Borer, B., Ataman, M., Hatzimanikatis, V., and Or, D.: Modeling metabolic networks of individual bacterial agents in heterogeneous and dynamic soil habitats (IndiMeSH), Plos Comput. Biol., 15, 1–21, https://doi.org/10.1371/journal.pcbi.1007127, 2019. De Lucia, M. and Kühn, M.: DecTree v1.0 – chemistry speedup in reactive transport simulations: purely data-driven and physics-based surrogates, Geosci. Model Dev., 14, 4713–4730, https://doi.org/ 10.5194/gmd-14-4713-2021, 2021. Dukovski, I., Bajić, D., Chacón, J. M., Quintin, M., Vila, J. C. C., Sulheim, S., Pacheco, A. R., Bernstein, D. B., Riehl, W. J., Korolev, K. S., Sanchez, A., Harcombe, W. R., and Segrè, D.: A metabolic modeling platform for the computation of microbial ecosystems in time and space (COMETS), Nat. Protoc., 16, 5030–5082, https://doi.org/10.1038/s41596-021-00593-3, 2021. Ebrahim, A., Lerman, J. A., Palsson, B. O., and Hyduke, D. R.: COBRApy: COnstraints-Based Reconstruction and Analysis for Python, BMC Syst. Biol., 7, 74, https://doi.org/10.1186/1752-0509-7-74, Fang, Y., Scheibe, T. D., Mahadevan, R., Garg, S., Long, P. E., and Lovley, D. R.: Direct coupling of a genome-scale microbial in silico model and a groundwater reactive transport model, J. Contam. Hydrol., 122, 96–103, https://doi.org/10.1016/j.jconhyd.2010.11.007, 2011. Fang, Y., Wilkins, M. J., Yabusaki, S. B., Lipton, M. S., and Long, P. E.: Evaluation of a genome-scale in silico metabolic model for Geobacter metallireducens by using proteomic data from a field biostimulation experiment, Appl. Environ. Microbiol., 78, 8735–8742, https://doi.org/10.1128/AEM.01795-12, 2012. GLPK (GNU Linear Programming Kit): GLPK – GNU Project – Free Software Foundation (FSF), https://www.gnu.org/software/glpk/glpk.html, last access: July 2022. Golparvar, A., Kästner, M., and Thullner, M.: Pore-scale modeling of microbial activity: What we have and what we need, Vadose Zone J., 20, 1–17, https://doi.org/10.1002/vzj2.20087, 2021. Grinberg, M., Orevi, T., and Kashtan, N.: Bacterial surface colonization, preferential attachment and fitness under periodic stress, PLOS Comput Biol, 15, e1006815, https://doi.org/10.1371/ journal.pcbi.1006815, 2019. Harcombe, W.: Novel cooperation experimentally evolved between species, Evolution (N.Y)., 64, 2166–2172, https://doi.org/10.1111/j.1558-5646.2010.00959.x, 2010. Harcombe, W. R., Riehl, W. J., Dukovski, I., Granger, B. R., Betts, A., Lang, A. H., Bonilla, G., Kar, A., Leiby, N., Mehta, P., Marx, C. J., and Segrè, D.: Metabolic resource allocation in individual microbes determines ecosystem interactions and spatial dynamics, Cell Rep., 7, 1104–1115, https://doi.org/10.1016/j.celrep.2014.03.070, 2014. Heirendt, L., Arreckx, S., Pfau, T., Mendoza, S. N., Richelle, A., Heinken, A., Haraldsdóttir, H. S., Wachowiak, J., Keating, S. M., Vlasov, V., Magnusdóttir, S., Ng, C. Y., Preciat, G., Žagare, A., Chan, S. H. J., Aurich, M. K., Clancy, C. M., Modamio, J., Sauls, J. T., Noronha, A., Bordbar, A., Cousins, B., El Assal, D. C., Valcarcel, L. V., Apaolaza, I., Ghaderi, S., Ahookhosh, M., Ben Guebila, M., Kostromins, A., Sompairac, N., Le, H. M., Ma, D., Sun, Y., Wang, L., Yurkovich, J. T., Oliveira, M. A. P., Vuong, P. T., El Assal, L. P., Kuperstein, I., Zinovyev, A., Hinton, H. S., Bryant, W. A., Aragón Artacho, F. J., Planes, F. J., Stalidzans, E., Maass, A., Vempala, S., Hucka, M., Saunders, M. A., Maranas, C. D., Lewis, N. E., Sauter, T., Palsson, B. Ø., Thiele, I., and Fleming, R. M. T.: Creation and analysis of biochemical constraint-based models using the COBRA Toolbox v.3.0, Nat. Protoc., 14, 639–702, https://doi.org/10.1038/s41596-018-0098-2, 2019. Huber, C., Shafei, B., and Parmigiani, A.: A new pore-scale model for linear and non-linear heterogeneous dissolution and precipitation, Geochim. Cosmochim. Ac., 124, 109–130, https://doi.org/10.1016 /j.gca.2013.09.003, 2014. Jung, H. and Meile, C.: Upscaling of microbially driven first-order reactions in heterogeneous porous media, J. Contam. Hydrol., 224, 103483, https://doi.org/10.1016/j.jconhyd.2019.04.006, 2019. Jung, H. and Meile, C.: Pore-Scale Numerical Investigation of Evolving Porosity and Permeability Driven by Biofilm Growth, Transport Porous Med., 139, 203–221, https://doi.org/10.1007/ s11242-021-01654-7, 2021. Jung, H. and Meile, C.: Complab bitbucket repository [code], https://bitbucket.org/MeileLab/complab/, last access: September 2022. Jung, H., Song, H.-S., and Meile, C.: CompLaB: a scalable pore-scale model for flow, biogeochemistry, microbial metabolism, and biofilm dynamics, Zenodo [code], https://doi.org/10.5281/zenodo.7095756 , 2022. Kang, Q., Lichtner, P. C., and Zhang, D.: An improved lattice Boltzmann model for multicomponent reactive transport in porous media at the pore scale, Water Resour. Res., 43, 1–12, https://doi.org/ 10.1029/2006WR005551, 2007. King, E. L., Tuncay, K., Ortoleva, P., and Meile, C.: Modeling biogeochemical dynamics in porous media: Practical considerations of pore scale variability, reaction networks, and microbial population dynamics in a sandy aquifer, J. Contam. Hydrol., 112, 130–140, https://doi.org/10.1016/j.jconhyd.2009.12.002, 2010. King, Z. A., Lu, J. S., Dräger, A., Miller, P. C., Federowicz, S., Lerman, J. A., Ebrahim, A., Palsson, B. O., and Lewis, N. E.: BiGG Models: A platform for integrating, standardizing, and sharing genome-scale models, Nucleic Acids Res., 44, D515–D522, doi:10.1093/nar/gkv1049, 2016. König, S., Vogel, H. J., Harms, H., and Worrich, A.: Physical, Chemical and Biological Effects on Soil Bacterial Dynamics in Microscale Models, Front. Ecol. Evol., 8, 1–10, https://doi.org/10.3389/ fevo.2020.00053, 2020. Kotsalos, C., Latt, J., and Chopard, B.: Bridging the computational gap between mesoscopic and continuum modeling of red blood cells for fully resolved blood flow, J. Comput. Phys., 398, 108905, https://doi.org/10.1016/j.jcp.2019.108905, 2019. Krüger, T., Kusumaatmaja, H., Kuzmin, A., Shardt, O., Silva, G., and Viggen, E. M.: The Lattice Boltzmann Method, Springer International Publishing, Cham, https://doi.org/10.1007/978-3-319-44649-3, Latt, J., Malaspinas, O., Kontaxakis, D., Parmigiani, A., Lagrava, D., Brogi, F., Belgacem, M. Ben, Thorimbert, Y., Leclaire, S., Li, S., Marson, F., Lemus, J., Kotsalos, C., Conradin, R., Coreixas, C., Petkantchin, R., Raynaud, F., Beny, J., and Chopard, B.: Palabos: Parallel Lattice Boltzmann Solver, Comput. Math. Appl., 81, 334–350, https://doi.org/10.1016/j.camwa.2020.03.022, 2021. Lovley, D. R., Giovannoni, S. J., White, D. C., Champine, J. E., Phillips, E. J., Gorby, Y. A., and Goodwin, S.: Geobacter metallireducens gen. nov. sp. nov., a microorganism capable of coupling the complete oxidation of organic compounds to the reduction of iron and other metals, Arch Microbiol., 159, 336–344, https://doi.org/10.1007/BF00290916, 1993. Marozava, S., Röling, W. F. M., Seifert, J., Küffner, R., Von Bergen, M., and Meckenstock, R. U.: Physiology of Geobacter metallireducens under excess and limitation of electron donors. Part I. Batch cultivation with excess of carbon sources, Syst. Appl. Microbiol., 37, 277–286, https://doi.org/10.1016/j.syapm.2014.02.004, 2014. Meile, C. and Scheibe, T. D.: Reactive Transport Modeling of Microbial Dynamics, Elements, 15, 111–116, https://doi.org/10.2138/gselements.15.2.111, 2019. Molins, S.: Reactive Interfaces in Direct Numerical Simulation of Pore-Scale Processes, Rev. Mineral. Geochem., 80, 461–481, https://doi.org/10.2138/rmg.2015.80.14, 2015. Nikdel, A., Braatz, R. D., and Budman, H. M.: A systematic approach for finding the objective function and active constraints for dynamic flux balance analysis, Biopro. Biosyst. Eng., 41, 641–655, https://doi.org/10.1007/s00449-018-1899-y, 2018. Oostrom, M., Mehmani, Y., Romero-Gomez, P., Tang, Y., Liu, H., Yoon, H., Kang, Q., Joekar-Niasar, V., Balhoff, M. T., Dewers, T., Tartakovsky, G. D., Leist, E. A., Hess, N. J., Perkins, W. A., Rakowski, C. L., Richmond, M. C., Serkowski, J. A., Werth, C. J., Valocchi, A. J., Wietsma, T. W., and Zhang, C.: Pore-scale and continuum simulations of solute transport micromodel benchmark experiments, Comput. Geosci., 20, 857–879, https://doi.org/10.1007/s10596-014-9424-0, 2016. Orth, J. D., Thiele, I., and Palsson, B. O.: What is flux balance analysis?, Nat. Biotechnol., 28, 245–248, https://doi.org/10.1038/nbt.1614, 2010. Pintelon, T. R. R., Picioreanu, C., van Loosdrecht, M. C. M., and Johns, M. L.: The effect of biofilm permeability on bio-clogging of porous media, Biotechnol. Bioeng., 109, 1031–1042, https:// doi.org/10.1002/bit.24381, 2012. Prasianakis, N. I., Haller, R., Mahrous, M., Poonoosamy, J., Pfingsten, W., and Churakov, S. V.: Neural network based process coupling and parameter upscaling in reactive transport simulations, Geochim. Cosmochim. Ac., 291, 126–143, https://doi.org/10.1016/j.gca.2020.07.019, 2020. Scheibe, T. D., Mahadevan, R., Fang, Y., Garg, S., Long, P. E., and Lovley, D. R.: Coupling a genome-scale metabolic model with a reactive transport model to describe in situ uranium bioremediation, Microb. Biotechnol., 2, 274–286, https://doi.org/10.1111/j.1751-7915.2009.00087.x, 2009. Song, H.-S., Ahamed, F., Lee, J.-Y., Henry, C. C., Edirisinghe, J. N., Nelson, W. C., Chen, X., Moulton, J. D., and Scheibe, T. D.: Coupling flux balance analysis with reactive transport modelling through machine learning for rapid and stable simulation of microbial metabolic switching, bioRxiv, 2023.02.06.527371, https://doi.org/10.1101/2023.02.06.527371, 2023. Souzy, M., Lhuissier, H., Méheust, Y., Le Borgne, T., and Metzger, B.: Velocity distributions, dispersion and stretching in three-dimensional porous media, J. Fluid Mech., 891, A16, https://doi.org/ 10.1017/jfm.2020.113, 2020. Steefel, C. I., Appelo, C. A. J., Arora, B., Jacques, D., Kalbacher, T., Kolditz, O., Lagneau, V., Lichtner, P. C., Mayer, K. U., Meeussen, J. C. L., Molins, S., Moulton, D., Shao, H., Šimůnek, J., Spycher, N., Yabusaki, S. B., and Yeh, G. T.: Reactive transport codes for subsurface environmental simulation, Comput. Geosci., 19, 445–478, https://doi.org/10.1007/s10596-014-9443-x, 2015. Sudhakar, P., Machiels, K., Verstockt, B., Korcsmaros, T., and Vermeire, S.: Computational Biology and Machine Learning Approaches to Understand Mechanistic Microbiome-Host Interactions, Front. Microbiol., 12, 1–19, https://doi.org/10.3389/fmicb.2021.618856, 2021. Tang, Y. and Valocchi, A. J.: An improved cellular automaton method to model multispecies biofilms, Water Res., 47, 5729–5742, https://doi.org/10.1016/j.watres.2013.06.055, 2013. Tang, Y., Valocchi, A. J., Werth, C. J., and Liu, H.: An improved pore-scale biofilm model and comparison with a microfluidic flow cell experiment, Water Resour. Res., 49, 8370–8382, https://doi.org/ 10.1002/2013WR013843, 2013. Thullner, M. and Baveye, P.: Computational pore network modeling of the influence of biofilm permeability on bioclogging in porous media, Biotechnol. Bioeng., 99, 1337–1351, https://doi.org/10.1002/ bit.21708, 2008. Trinsoutrot, I., Recous, S., Bentz, B., Linères, M., Chèneby, D., and Nicolardot, B.: Biochemical Quality of Crop Residues and Carbon and Nitrogen Mineralization Kinetics under Nonlimiting Nitrogen Conditions, Soil Sci. Soc. Am. J., 64, 918–926, https://doi.org/10.2136/sssaj2000.643918x, 2000. Ziegler, D. P.: Boundary conditions for lattice Boltzmann simulations, J. Stat. Phys., 71, 1171–1177, https://doi.org/10.1007/BF01049965, 1993.
{"url":"https://gmd.copernicus.org/articles/16/1683/2023/","timestamp":"2024-11-05T09:40:44Z","content_type":"text/html","content_length":"314526","record_id":"<urn:uuid:d52c2522-d3cf-4a78-a60a-0d164cd395df>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00677.warc.gz"}
Quantization is a process that, in effect, digitizes an analog signal. The process maps input sample values within range partitions to different common values. Quantization mapping requires a partition and a codebook. Use the quantiz function to map an input signal to a scalar quantized signal. Represent Partitions A quantization partition defines several contiguous, nonoverlapping ranges of values within a set of real numbers. To specify a partition, list the distinct endpoints of the different ranges in a For example, suppose the partition separates the real number line into the four sets or intervals: • {x: x ≤ 0} • {x: 0 < x ≤ 1} • {x: 1 < x ≤ 3} • {x: 3 < x} Then you can represent the partition as a three-element vector: The length of the partition vector is one less than the number of partition intervals. Represent Codebooks A codebook tells the quantizer which common value to assign to inputs that fall into the distinct intervals defined by the partition vector. Represent a codebook as a vector whose length is the same as the number of partition intervals. For example: codebook = [-1, 0.5, 2, 3]; This vector is one possible codebook for the partition vector [0,1,3]. Determine Quantization Interval for Each Input Sample To determine quantization intervals, in this example, you examine the index and quants vectors returned by the quantiz function. The index vector indicates the quantization interval for each input sample as specified by the input partition vector and the quants vector maps the input samples to quantization values specified by the input codebook vector. Quantize a set of data by using the quantiz function with the specified partition and examine the returned vector index. If you run quantiz without specifying a codebook input, the function assigns the codebook quantization values based on the partition values. data = [2 9 8]; partition = [3,4,5,6,7,8,9]; index = quantiz(data,partition) The index vector that indicates the input data samples, [2 9 8], lie within the intervals labeled 0, 6, and 5 because partition specifies that • Interval 0 consists of real numbers less than or equal to 3. • Interval 6 consists of real numbers greater than 8 but less than or equal to 9. • Interval 5 consists of real numbers greater than 7 but less than or equal to 8. Suppose you continue this example by defining a codebook vector as follows: codebook = [3,3,4,5,6,7,8,9]; Then this formula relates the vector index to the quantized signal quants. quants = codebook(index+1) This formula for quants is exactly what the quantiz function uses if you input the codebook vector and return the quants vector. Examine the returned vector quants to see the partition vector intervals [0 6 5] map to the quantization values [3 8 7] as defined by the codebook vector. partition = [3,4,5,6,7,8,9]; codebook = [3,3,4,5,6,7,8,9]; [index,quants] = quantiz(data,partition,codebook) Quantize Sampled Sine Wave To illustrate the nature of scalar quantization, this example shows how to quantize a sine wave. Plot the original and quantized signals to contrast the x symbols that make up the sine curve with the dots that make up the quantized signal. The vertical coordinate of each dot is a value in the vector codebook. Generate a sine wave sampled at times defined by t. Specify the partition input by defining the distinct endpoints of the different intervals as the element values of the vector. Specify the codebook input with an element value for each interval defined in the partition vector. The codebook vector must be one element longer than the partition vector. t = [0:.1:2*pi]; sig = sin(t); partition = [-1:.2:1]; codebook = [-1.2:.2:1]; Perform quantization on the sampled sine wave. [index,quants] = quantiz(sig,partition,codebook); Plot the quantized sine wave and the sampled sine wave. title('Quantization of Sine Wave') legend('Original sampled sine wave','Quantized sine wave'); axis([-.2 7 -1.2 1.2]) Optimize Quantization Parameters Testing and selecting parameters for large signal sets with a fine quantization scheme can be tedious. One way to produce partition and codebook parameters easily is to optimize them according to a set of training data. The training data should be typical of the kinds of signals to be quantized. This example uses the lloyds function to optimize the partition and codebook according to the Lloyd algorithm. The code optimizes the partition and codebook for one period of a sinusoidal signal, starting from a rough initial guess. Then the example runs the quantiz function twice to generate quantized data by using the initial partition and codebook input values and by using the optimized partitionOpt and codebookOpt input values. The example also compares the distortion for the initial and the optimized quantization. Define variables for a sine wave signal and initial quantization parameters. Optimize the partition and codebook by using the lloyds function. t = 0:.1:2*pi; sig = sin(t); partition = -1:.2:1; codebook = -1.2:.2:1; [partitionOpt,codebookOpt] = lloyds(sig,codebook); Generate quantized signals by using the initial and the optimized partition and codebook vectors. The quantiz function automatically computes the mean square distortion and returns it as the third output argument. Compare mean square distortions for quantization with the initial and optimized input arguments to see how less distortion occurs when using the optimized quantized values. [index,quants,distor] = quantiz(sig,partition,codebook); [indexOpt,quantOpt,distorOpt] = ... [distor, distorOpt] Plot the sampled sine wave, the quantized sine wave, and the optimized quantized sine wave. title('Quantization of Sine Wave') legend('Original sampled sine wave', ... 'Quantized sine wave', ... 'Optimized quantized sine wave'); axis([-.2 7 -1.2 1.2]) Quantize and Compand an Exponential Signal When transmitting signals with a high dynamic range, quantization using equal length intervals can result in signal distortion and a loss of precision. Companding applies a logarithmic computation to compress the signal before quantization on the transmit side and to expand the signal to restore it to full scale on the receive side. Companding avoids signal distortion without the need to specify many quantization levels. Compare distortion when using 6-bit quantization on an exponential signal with and without companding. Plot the original exponential signal, the quantized signal, and the expanded signal. Create an exponential signal and calculate its maximum value. sig = exp(-4:0.1:4); V = max(sig); Quantize the signal by using equal-length intervals. Set partition and codebook values, assuming 6-bit quantization. Calculate the mean square distortion. partition = 0:2^6 - 1; codebook = 0:2^6; [~,qsig,distortion] = quantiz(sig,partition,codebook); Compress the signal by using the compand function configured to apply the mu-law method. Apply quantization and expand the quantized signal. Calculate the mean square distortion of the companded mu = 255; % mu-law parameter csig_compressed = compand(sig,mu,V,'mu/compressor'); [~,quants] = quantiz(csig_compressed,partition,codebook); csig_expanded = compand(quants,mu,max(quants),'mu/expander'); distortion2 = sum((csig_expanded - sig).^2)/length(sig); Compare the mean square distortion for quantization versus combined companding and quantization. The distortion for the companded and quantized signal is an order of magnitude lower than the distortion of the quantized signal. Equal-length intervals are well suited to the logarithm of an exponential signal but not well suited to an exponential signal itself. [distortion, distortion2] Plot the original exponential signal, the quantized signal, and the expanded signal. Zoom in on the axis to highlight the quantized signal error at lower signal levels. plot([sig' qsig' csig_expanded']); title('Comparison of Original, Quantized, and Expanded Signals'); axis([0 70 0 20]) See Also Related Topics
{"url":"https://au.mathworks.com/help/comm/ug/quantization.html;jsessionid=2dccd5c9bb131967add543b5409c","timestamp":"2024-11-06T19:04:05Z","content_type":"text/html","content_length":"93007","record_id":"<urn:uuid:53104d3b-3b5d-406d-a176-2699b31795ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00367.warc.gz"}
How do you find the minimum of a function? How do you find the minimum of a function? You can find this minimum value by graphing the function or by using one of the two equations. If you have the equation in the form of y = ax^2 + bx + c, then you can find the minimum value using the equation min = c – b^2/4a. How do you find the maximum or minimum of a function? Finding max/min: There are two ways to find the absolute maximum/minimum value for f(x) = ax2 + bx + c: Put the quadratic in standard form f(x) = a(x − h)2 + k, and the absolute maximum/minimum value is k and it occurs at x = h. If a > 0, then the parabola opens up, and it is a minimum functional value of f. What is minimum () in math? The smallest value. The minimum of {14, 4, 16, 12} is 4. How do you find the maximum of a function? If you are given the formula y = ax2 + bx + c, then you can find the maximum value using the formula max = c – (b2 / 4a). If you have the equation y = a(x-h)2 + k and the a term is negative, then the maximum value is k. How do you find the minimum of a set of data? The minimum is the first number listed as it is the lowest, and the maximum is the last number listed because it is the highest. What is minimum in quadratic function? Minimum Value of a Quadratic Function The minimum value is ‘y’ coordinate at the vertex of the parabola. Note : There is no maximum value for the parabola which opens up. What is minimum example? 6. 1. Minimum means the lowest amount or allowable amount of something. An example of a minimum is 40 miles per hour as the lowest speed allowed on a parkway. noun. How do you find the maximum and minimum of a function on a calculator? Press 2nd, then CALC (above the TRACE key). Choose 4: maximum or 3: minimum. The calculator will ask you for a left bound, a right bound, and a guess for the maximum or minimum. You can enter these by using your left and right arrows to move the cursor to a reasonable x-value, then pressing ENTER.
{"url":"https://thecrucibleonscreen.com/how-do-you-find-the-minimum-of-a-function/","timestamp":"2024-11-04T10:51:57Z","content_type":"text/html","content_length":"52792","record_id":"<urn:uuid:5d5ed1f3-0de3-4eb7-a27c-793682f78f67>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00774.warc.gz"}
VLOOKUP Function & Formula in Excel - Free Excel Tutorial In our daily work, we often encounter the situation of finding data based on the original data and conditions provided, such as finding cell phone numbers based on name, finding employee names and positions based on HRID, etc. When we talk about the lookup function in excel workbook, most people will first think of using the VLOOKUP function. VLOOKUP function is the most commonly used lookup function in excel, even if you don’t use VLOOKUP function, you must have heard of it. How do we apply the VLOOKUP function in excel? What can it help us to achieve? Can we do lookups across worksheets? Today we will discuss the applications and usages of VLOOKUP function in Microsoft Excel Spreadsheet. Whether you’re a beginner or have some experience with Excel, this step by step VLOOKUP function tutorial is an essential resource for improving your data analysis skills. What is VLOOKUP Function in Excel 1. VLOOKUP Function Definition The VLOOKUP function in Microsoft Excel is a built-in formula used to search for a specific value in the first column of a table array and return a related value from another column in the same row. It stands for Vertical Lookup, as the function searches for the value vertically in the first column of the table array. If you need to find data by column, you need to use the HLOOKUP function, which is a horizontal lookup function with the similar arguments and usage as the VLOOKUP function. The VLOOKUP function is not case sensitive. It treats upper-case and lower-case characters as the same and returns a match regardless of the case of the search value or the values in the first column of the table array. The VLOOKUP function is available in Microsoft Excel 365, Excel 2019, Excel 2016, Excel 2013 and Excel 2010. 2. Two Types of VLOOKUP functions There are two types of VLOOKUP functions in Microsoft Excel: • Exact Match VLOOKUP: Returns the exact match for the search value, if it exists in the first column of the table array. It returns an error if the exact match is not found. • Approximate Match VLOOKUP: Returns the closest match for the search value, based on the closest value that is less than or equal to the search value. If you want to use the approximate match VLOOKUP, just set the last (optional) argument to TRUE or omit it, while to use the exact match VLOOKUP, you need to set the last argument to FALSE. 3. Limitations of VLOOKUP Function The VLOOKUP function has the following limitations in MS Excel Spreadsheet: • Can only search the first column: The VLOOKUP function can only search for a value in the first column of the specified table array, which limits its use in certain scenarios where values may be located in other columns. • Only returns the first match: If there are multiple instances of the lookup value in the first column, the VLOOKUP function will only return the first match, not all of them. • Can only return values from the same row: The VLOOKUP function can only return values from the same row as the lookup value, not from different rows or columns. • Can only return values to the right: The VLOOKUP function can only return values that are located to the right of the lookup value in the same row, not values to the left. • Can be slow with large data sets: The VLOOKUP function may slow down your spreadsheet if used extensively on large data sets, as it recalculates every time the worksheet is calculated. Despite these limitations, the VLOOKUP function is still a widely used and valuable tool in Microsoft Excel, particularly for basic data analysis and retrieval tasks. To overcome these limitations, you may consider using other functions such as INDEX and MATCH, or using arrays and conditional statements in combination with VLOOKUP. Where is VLOOKUP Function in Excel The VLOOKUP function can be found in Microsoft Excel in the “Formulas” ribbon or Formula box. 4. Insert VLOOKUP Function Using Formulas Ribbon The following are the steps to insert the VLOOKUP function in Microsoft Excel spreadsheets using the Formulas ribbon. Step 1: Select the cell where you want the result of the VLOOKUP function to be displayed. Step 2: Go to the “Formulas” ribbon and click on the “Lookup & Reference” category. Step 3: From the “Lookup & Reference” category, click on the “VLOOKUP” function. Step 4: The VLOOKUP function will be inserted into the selected cell and the Function argument dialog box will appear. Step 5: In the Function argument dialog box, specify the following values for each of the VLOOKUP function’s arguments: Lookup Value, Table Array, col_index_num, [Range Lookup]. Step 6: Press “Enter” or click “OK” to complete the function and display the result in the selected cell. 5. Insert VLOOKUP Function in the Formula Box The following are the steps to insert the VLOOKUP function in Microsoft Excel spreadsheets using the Formula box: Step 1: Select the cell where you want the result of the VLOOKUP function to be displayed. Step 2:Type “=VLOOKUP(” in the formula box. Step 3: Specify the lookup value in the first argument. For example, you can type a cell reference, such as “A1“, or a value in quotes, such as “A“. Step 4: Type a comma to move to the next argument, which is the table array. For example, you can type “C1:D3” to specify a range of cells from C1 to D3 as the table array. Step 5: Type a comma to move to the next argument, which is the column index number. For example, you can type “2” to specify that you want to return the value from the second column in the table Step 6: Type “)” to close the function. Step 7: Press “Enter” to complete the function and display the result in the selected cell. How To Use VLOOKUP Function & Formula The VLOOKUP function can be confusing at first, but with practice and understanding of the function’s syntax and arguments, it can be a useful function.The below is a brief explanation on how to use the VLOOKUP function in Microsoft Excel Spreadsheet. 6. VLOOKUP Function Syntax and Arguments The correct syntax of VLOOKUP function is as below: Argument Description Data Type lookup_value Value to find A numeric value, reference or text string table_array Lookup area Data table index_num Returns the number of columns in the lookup area Integer (positive) range_lookup Exact match / Approximate match False (0, exact match) True (1 or empty, approximate match) 4 arguments of VLOOKUP function are explained as follows: • lookup_value is the value to be looked up, which appears in the first column of the data table. lookup_value can be a numeric value, a cell or area reference, a text string. When this parameter is omitted, it means that look up is performed with 0 value. • table_array is the data table to look for. Enter a cell or range reference or table name. • col_index_num is the data column number of the table_array. col_index_num is 1, the value of the first column of the table_array is returned, col_index_num is 2, the value of the second column of the table_array is returned. If col_index_num is less than 1, VLOOKUP returns the error value #VALUE!, if col_index_num is greater than the number of columns of table_array, VLOOKUP returns the error value #REF!. • range_lookup is a logical value that specifies whether the VLOOKUP function will look for an exact match or an approximate match. If range_lookup is TRUE or 1, VLOOKUP will look for an approximate match, and if it does not find an exact match, it will return the maximum value less than lookup_value. In the following sections we will explain exact match and approximate match with concrete examples. Note: If the VLOOKUP function cannot find the value being looked up, it returns an error value, usually “#N/A”. 7. Alternative to VLOOKUP Function in Excel Some alternatives to the VLOOKUP function in Microsoft Excel Spreadsheet are: • INDEX and MATCH function: Instead of VLOOKUP, you can use the INDEX and MATCH functions together to achieve the same results. The INDEX function returns a value from a specified range based on a row and column number, while the MATCH function returns the relative position of a value within a range. • HLOOKUP function: This function is similar to VLOOKUP, but it searches for a value in the first row of a table array instead of the first column. • FILTER function: The FILTER function allows you to filter data based on specified criteria, and then return a value from the filtered data. • Pivot Tables: Pivot tables are an extremely useful tool in Excel that can be used to quickly summarize and analyze large amounts of data. You can use pivot tables as an alternative to VLOOKUP if you need to retrieve specific information from a large data set. 8. How To do VLOOKUP Faster Here are some tips to make your VLOOKUP formulas run faster: • Use the exact match option whenever possible. • Sort your data in ascending order. • Avoid using VLOOKUP with entire columns. • Use INDEX-MATCH instead of VLOOKUP, as it is faster in large data sets. • Keep your data organized and clean. • Convert your data to an Excel table if it meets the criteria. • Use absolute cell references in the VLOOKUP formula. Note: You should know that the performance of a VLOOKUP formula depends on several factors, including the size of the data set, the complexity of the formula, and the computing resources of your 9. How to Do a VLOOKUP on a VLOOKUP You can perform a VLOOKUP on a VLOOKUP by using a nested formula. In a nested formula, one formula is used as an argument within another formula. Here’s an example of how to do a VLOOKUP on a VLOOKUP: Assume that you have two data sets: • Table A with columns A and B • Table B with columns C, D, and E In Table A, you want to look up the value in column B based on the value in column A. In Table B, you want to look up the value in column E based on the value in column C. To perform a VLOOKUP on a VLOOKUP, you will first do a VLOOKUP in Table A to find the value in column D based on the value in column A. Then, you will use the result from the first VLOOKUP as the lookup value in the second VLOOKUP to find the value in column E. Here’s the formula: =VLOOKUP(VLOOKUP(A2, TableA, 2, TRUE), TableB, 3, TRUE) In this example, A2 is the lookup value in Table A, TableA is the data range for Table A, 2 is the column number for column B, TRUE specifies an exact match, TableB is the data range for Table B, and 3 is the column number for column E. 10. How To Use VLOOKUP Function in Excel VBA The VLOOKUP function can also be used in VBA (Visual Basic for Applications), the programming language for Excel. The syntax for using VLOOKUP in VBA is similar to using it in a formula in a cell. Here’s an example of how to use VLOOKUP in VBA: Sub vlookupExample() 'Declare the variables Dim LookupValue As String Dim TableArray As Range Dim ColIndexNum As Integer Dim RangeLookup As Boolean 'Set the variables LookupValue = "Black T-shirt" Set TableArray = Worksheets("Sheet1").Range("A1:C4") ColIndexNum = 2 RangeLookup = False 'Execute the VLOOKUP function result = Application.WorksheetFunction.VLookup(LookupValue, TableArray, ColIndexNum, RangeLookup) 'Display the result MsgBox result End Sub In the above code, the LookupValue is set to “Black T-shirt” and the TableArray is set to the range A1:C4 on a worksheet named “Sheet1“. The ColIndexNum is set to 2, which means the second column of the TableArray will be returned as the result. The RangeLookup argument is set to False, which means an exact match is required. Then You can Click on “Macros” button in Code group, choose “vlookupExample” Macro from “Macro” window, then click “Run” button. Let’s see the final result as below: VLOOKUP Function Examples 11. How To Do a VLOOKUP with an IF Function You can use the IF function (a logical function)in combination with the VLOOKUP function together to return a specified result if the lookup value is not found in the table array. Suppose you have two tables in Excel, one with a list of product names and their prices (table array), and another with a list of product names (lookup value) and you want to retrieve the corresponding prices, but return a “Not Found” message if the lookup value is not in the table array. =IF(ISNA(VLOOKUP(A2,$C$2:$D$5,2,FALSE)),"Not Found",VLOOKUP(A2,$C$2:$D$5,2,FALSE)) • The ISNA function checks if the VLOOKUP function returns the #N/A error, meaning the lookup value is not found in the table array. • If the lookup value is found, the VLOOKUP function returns the corresponding price. • If the lookup value is not found, the IF function returns the “Not Found” message. 12. How To Use VLOOKUP with AND Function The “Vlookup” function in Microsoft Excel can be combined with the “And” function to perform a search based on multiple criteria. The syntax for this combined formula is: =VLOOKUP(lookup_value,table_array,col_index_num, [range_lookup], (And(condition1, condition2, ...))) The above And function checks multiple conditions and returns “TRUE” if all conditions are met, and “FALSE” otherwise. 13. VLOOKUP Function with & Operator The & operator in Excel is used to concatenate or join together two or more strings. It can be used in combination with the VLOOKUP function to dynamically reference cells or ranges within the For example, if you have a table in the range A1:C5 and you want to search for a value in column B, you can use the following formula: If you want to dynamically reference the range of cells in the table based on the value in a separate cell, you can use the & operator to concatenate the cell reference with the column letter: 14. VLOOKUP Function with CHOOSE Function The CHOOSE function in Excel is used to select one item from a list of values based on a specified index number. It can be used in combination with the VLOOKUP function to dynamically choose the column to return a value from, based on the value of a separate cell. For example, if you have a table in the range A1:C5, and you want to return a value from either column B or column C based on the value of a separate cell (cell D1), you can use the following This formula uses the value in cell D1 as the index number for the CHOOSE function, which in turn selects either 2 or 3 as the col_index_num for the VLOOKUP function. If the value in cell D1 is 1, the formula returns a value from column B. If the value in cell D1 is 2, the formula returns a value from column C. 15. VLOOKUP Exact Match When range_lookup is false, the VLOOKUP formula performs an exact match lookup. It means that only if the lookup value is found in the first column of the data table, the value of the corresponding column is returned, and if the lookup value does not appear in the first column, the error value #N/A is returned. Enter “=VLOOKUP” in E2, enter details of arguments according to the lookup value and the table. In this example, we want to find the correct level for score 50, so 50 is the lookup value. The data table is A2:B11 which records the correspondence between grades and levels. Level is recorded in the second column of data table, so column index number is 2. At last, we want to get an exact match result, so enter “0” or “false” in the last argument. Argument Value lookup_value D2 table_array A2:B11 index_num 2 range_lookup 0 In this example, lookup value appears in the first column of data table. If it doesn’t appear, an error value #N/A will be returned. 16. VLOOKUP Approximate Match When range_lookup is true, the VLOOKUP function performs an approximate match lookup. Meaning that if the lookup value is not found in the first column of the data table, it returns the value of the corresponding column of the maximum value of all values less than the lookup value. Let’s look at the following example for a concrete analysis. At the end of the previous paragraph “VLOOKUP Exact Match”, we got an ERROR because score “55” does not appear in the first column of the data table. Now let’s change the range_lookup value to 1 (approximate match) and see the difference in the result. This time level “F” is returned in level value. See the example below: Referring to the rules of the VLOOKUP function, if no lookup value is found, the function will find the value closest to the lookup value. And the closest value should be a value less than the lookup value, and it is also the largest of all values less than the lookup value. In this example, the largest value less than 55 is 50, so VLOOKUP returns the corresponding level to 50. Argument Value lookup_value D2 table_array A2:B11 index_num 2 range_lookup 1 You can also assume that the passing line for an F level is 50 and for an E level is 60, and if you fail to reach 60, you will only be F. 17. VLOOKUP with Absolute Range Reference We know that in the Excel workbook, enter a formula in a cell, if the formula is dragged to other cells, cell references or range references used in the formula arguments will be automatically updated, so that we can directly drag and drop the formula into other cells to copy the formula without having to manually update the arguments. See example below: In this case, we have entered a VLOOKUP formula in H2, and we can drag the formula directly to H3 to reuse the formula, while the lookup value and lookup range get updated due to moving the formula. The VLOOKUP formula is dragged down from H2 to H3, the lookup value is updated from G2 to G3 and lookup range is updated from A2:B7 to A3:B8 accordingly. In this example, along with the movement of the formula, the lookup value HRID=5 still appears in the first column even though the lookup range has been moved down one row, and we can get the correct employee name by this formula. However, in some cases, this considerate automatic update can cause errors in real cases, see the following example. In this case, HRID=1 does not appear in the first column of data table A3:B8, so the lookup value does not exist and the formula returns error #N/A. In fact, in most VLOOKUP formulas, if the lookup range is fixed, we should lock the range by adding $ in front of the index numbers of the rows and columns, so that no matter where we copy this formula to later, the locked range will not be Add $ before row and column indexes to lock both row and column. The lookup range is $A$2:$E$7. Even copy this formula to H3, the range is still $A$2:$E$7. Sometimes we just need to lock the row or column by adding $ before the index number of the row or column, so that the locked row or column will not be changed by the movement of the formula. For more examples you can refer to the section “VLOOKUP returns multiple column values”. 18. VLOOKUP with Duplicate Lookup Value VLOOKUP has an important feature, when looking for duplicate values, the first duplicate value will be matched by default. Look at the following example. There are two “Cindy” in the second column. When we use VLOOKUP to find the cell phone number corresponding to “Cindy”, the phone number of the first Cindy will be We can add more helper information to return unique matching values. For more details and explanations, you can refer to the section “VLOOKUP Two-conditional Lookup”. 19. VLOOKUP Lookup the Largest/Smallest value Based on the property that when VLOOKUP encounters a duplicate lookup value, it only returns the value that matches the first lookup value, we can use VLOOKUP function to find the maximum or minimum value of the data. First of all, we need to sort the results of the column. See the example below. Fill in the highest score in the exam. Sort the results in descending order. Check on “Expand the selection” when soring result column, so that sorting will be applied to all three columns. Then use VLOOKUP formula to return the score from the first Math. 20. VLOOKUP Reverse Lookup Refer to above example we can see that VLOOKUP can only find data to the left of the lookup value in the data table. If you want to find the data to the right of the lookup value, it is called a reverse lookup. A reverse lookup needs the help of IF function. The above example is a very typical example, usually we find the name by looking up the HRID, this is finding the HRID by name. In the formula, the table_array is a IF formula, IF({1,0},B2:B6,A2:A6) will create a new two-column lookup table with the name first and the HRID second. This allows us to do reverse lookups. Argument Value lookup_value F2 table_array IF({1,0},B2:B6,A2:A6) index_num 2 range_lookup 0 21. VLOOKUP to Return Multiple Values A one-to-many lookup returns multiple results by looking for a unique lookup value. VLOOKUP can implement a one-to-many lookup with the help of creating a helper column. In this example, it is clear that there are two people in our table who belong to department “PA”. If we use the normal VLOOKUP formula “=VLOOKUP(G2,B2:E7,3,0)” to return the matching people in department “PA”, only one value “Ada” will be returned, because VLOOKUP will stop searching when it encounters the first “PA” in the data table. To output all values corresponding to the lookup value, we need to create a new helper column. See details in below steps. Step 1: First we insert a blank column on the left side of the data table, then enter =(C2=$G$2)+A1 in cell A2. Here A1 is 0, C2=$G$2 is a logical test, if it is true, it returns value 1, and if it is false, it returns value 0. In A2, the logical test is false, so the result is equal to “0+0=0”. Step 2: Drag the handle down to copy the formula to A3:A7, so that for every PA department encountered, it will be increased by 1. Step 3:In H2 enter the formula: In this formula, the lookup value is ROW (A1). ROW (A1) returns the cell reference A1’s row index number, which returns a value of 1. In fact, in the helper column, the first number “1” and the number “2” both correspond to the PA department (because the counter only increases by 1 when it encounters PA), what we want to do is to query the first two numbers “1” and “2”, so here we need the help of the Excel ROW function. Step 4:Drag the handle down to fill out the form below. You can stop filling out the form when an error value appears, which means the search is complete. 22. VLOOKUP to Return Multiple Column Values When we want to return multiple columns of data based on the lookup value, we can enter the VLOOKUP function once in each column. Although this can also be effective, it is not a quick way to do it. Please see the following example. We want to know the “Name” and “Title” information based on the HRID provided. Although we can enter two VLOOKUP formulas with different column index numbers, this approach is not convenient if we want to get multiple column values from the original data table. In above screenshot, you can see that we build the following VLOOKUP formula: Using this formula, we can drag the handle directly down to fill the “Name” and “Title” information in the form we want (G2:H3). The steps are as follows: Step 1: In cell G2 enter the formula =VLOOKUP(). Step 2: Enter cell $F2 in the lookup value and lock column F with ($) because the horizontal fill needs to keep column F unchanged. This step ensures that if you drag this formula to fill H2, column F remains unchanged and the lookup value remains $F2 (the column is locked, the row remains unchanged). Step 3: Enter $A$2:$D$7 in table_array. Add ($) in front of the column and row indexes to lock this range. Therefore, this range remains the same no matter where the user copies the formula. Step 4: Enter COLUMN(C1) in col_index_num. COLUMN(C1) returns cell C1’s column number, which returns a value of 3. If drag this formula to H2, COLUMN(C1) is changed to COLUMN(D1), which returns a value of 4, so that VLOOKUP will returns a value in the fourth column of data table. The fourth column is “Title”. Step 5: Enter “0” in range_lookup to perform an exact match. Step 6: Drag the handle down to fill the form G2:H3. There is another way to return multiple column values for multiple items at the same time based on the same lookup value. See the following example. Step1: Select range H2:I2. Step2: In the formula bar enter the VLOOKUP formula =VLOOKUP(G2,A2:E7,{3,5},FALSE). Please refer to the third argument col_index_num, different from usual, we only enter one column number in this argument, this time we enter an array containing two column numbers, so in step #1 we choose the range H2:I2 (one row and two columns) to save the returned result. Step3: As this is an array formula, press Control+Shift+Enter to load result. 23. VLOOKUP Two-Conditional Lookup This situation applies to tables with duplicate lookup values, because the lookup value is not unique and VLOOKUP may get the wrong result, so we need to add a condition as the lookup value. See the following example. There are two conditions “PA” and “Lead”. “PA” and “Lead” are both duplicate in the table. We need to get correct person’s name based on the two conditions. In this example, the formula is : To solve this problem, we can combine the two conditions “PA” and “Lead” into “PALead” and combine the content in the “Department” column and “Title” column into one column, then we can lookup “PALead” in this column. In addition to the new column, we should also add the “Name” column (where we want to get the values) to the right of the new column. This will give us a new table with two columns, the first with the “Department” and “Title”, and the second with “Name”. To create this new data table, we need the help of Excel’s IF function. The formula IF({1,0},B2:B7&D2:D7,C2:C7) can create a table according to our requirements. IF({1,0},B2:B7&D2:D7,C2:C7) returns the following array: {"RCLead","Nova";"PALead","Ada";"DCEngineer","Cindy";"ENEngineer","Kris";"VBProject Manager","Steven";"PAProject Manager","Calvin"} In this example, we enter F2&G2 in lookup value. We concentrate F2 and G2 to get a new condition “PALead”. After completing above steps, enter “2” in col_index_num to return a value in “Name” column, and enter “0” in range_lookup to perform an exact match. As this is an array formula, so we should press “ Control+Shift+Enter” to get a value. 24. VLOOKUP with Wildcards VLOOKUP can perform the search function normally if the lookup value contains wildcards. A wildcard is a special statement containing mainly asterisk (*) and question mark (?), wildcards are often used to fuzzy search for files. It can be used in place of one or more real characters when searching for a folder; wildcards are often used in place of one or more real characters when the real characters are not known or when you are too lazy to type the full name. • (*) – An asterisk stands for one or more characters • (?) – A question mark stands for single one character See the example below. In this example, only one letter “S” is shown in cell F2. We cannot look up this letter in the “Name” list because the spelling of the name is incomplete. We can determine that the first character inside the name is “S”. If we want to find a suitable title for this person, we can set the lookup value to F2&”*”, and the symbol (&) will combine the letter “S” in cell F2 with the wildcard character (*), so the lookup value is “S*”. “S*” represents a string that starts with the letter “S”. Therefore, when searching the list of names, “Steven” will be locked and the corresponding value “Project Manager” will be returned. 25. VLOOKUP Range Lookup by Approximate Match A range lookup is a range that corresponds to a result, such as a grade based on a range of scores like “50 to 60”. In order to perform a range lookup, we need to use the VLOOKUP approximate match First we need to make sure that the first column of the table is sorted in ascending order. The lookup value must fall in one of the data ranges in the first column. We then use VLOOKUP to do a regular lookup, where we need to set the fourth parameter to 1, which means that VLOOKUP performs an approximate match. The following example is copied from “VLOOKUP Approximate Match”. We have added a column in front of the original table to explain the individual score segments. You can refer to “VLOOKUP Approximate Matching” for more details. You just need to remember that when VLOOKUP performs an approximate match, VLOOKUP will take the value which is the largest value among all values less than the actual lookup value as the lookup VLOOKUP Error Value Handling 26. VLOOKUP returns error #N/A Failure reason 1: The lookup value doesn’t appear in the lookup range. Check lookup value and make sure it lists in the first column of data table. Failure reason 2: The lookup value doesn’t list in the first column. Check lookup value and make sure it lists in the first column of data table. Failure reason 3: Lookup range is not locked. So lookup value cannot be found in moved lookup range. Add $ to lock lookup range. Make sure lookup value exists in the first column of data table. 27. VLOOKUP returns error #REF! Failure reason 1: The lookup range cannot be found. For example, delete Sheet2, which holds the lookup range, so the range reference cannot be found. Sheet2 is not deleted. Sheet2 is deleted. Check lookup range to ensure values are saved in the entered range reference. Failure reason 2: If the number of columns entered in the third argument is greater than the actual number of columns in the second argument lookup range. Check the third argument “col_index_num”. 28. VLOOKUP returns error #VALUE Failure reason 1: If the argument “col_index_num” contains text or is less than 0, then error #VALUE is returned. Normally, we do not accidentally write a negative value or text to a column index, but if the column index is a value passed by another formula, it is possible to generate this error. Check the third argument “col_index_num”. 29. VLOOKUP returns error #NAME Failure reason 1: The lookup value format is incorrect. “Alice” is a text, if we enter “Alice” in the lookup value, rather than using the cell reference G2, the text or string should be quoted in double quotes, as shown below. Check format for all arguments. 30. VLOOKUP Eliminate the Error Value/ No Error Value Returned by VLOOKUP In VLOOKUP formulas, if you cannot find the lookup value in the provided data table, then the formula will return an error value such as #N/A, which doesn’t seem appropriate because displaying the error in the table will make the table look unprofessional. So, how to eliminate this error value or make it show 0 or empty when it returns and detects an error, we will show you a convenient way to eliminate the error. The VLOOKUP formula returns an error because HRID “7” doesn’t appear in the first column. To convert the error value to empty or “0” value, we can add the IFERROR function in front of the VLOOKUP Use formula =IFERROR(VLOOKUP(G3,$A$2:$E$7,2,0),”0″) to set error value to “0”. Use formula =IFERROR(VLOOKUP(G3,$A$2:$E$7,2,0),””) to set error value to empty. See Also:
{"url":"https://www.excelhow.net/vlookup-function-formula-in-excel.html","timestamp":"2024-11-04T09:07:15Z","content_type":"text/html","content_length":"156290","record_id":"<urn:uuid:8d00a0bd-2d43-4710-bfc0-4a5bf453b7ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00675.warc.gz"}
Math’s “Oldest Problem Ever” Gets a New Answer - Hedonismonline Number theorists are always looking for hidden structure. And when confronted by a numerical pattern that seems unavoidable, they test its mettle, trying hard—and often failing—to devise situations in which a given pattern cannot appear. One of the latest results to demonstrate the resilience of such patterns, by Thomas Bloom of the University of Oxford, answers a question with roots that extend all the way back to ancient Egypt. “It might be the oldest problem ever,” said Carl Pomerance of Dartmouth College. The question involves fractions that feature a 1 in their numerator, like 1/2, 1/7, or 1/122. These “unit fractions” were especially important to the ancient Egyptians because they were the only types of fractions their number system contained: With the exception of a single symbol for 2/3, they could only express more complicated fractions (like 3/4) as sums of unit fractions (1/2 + 1/4). The modern-day interest in such sums got a boost in the 1970s, when Paul Erdős and Ronald Graham asked how hard it might be to engineer sets of whole numbers that don’t contain a subset whose reciprocals add to 1. For instance, the set {2, 3, 6, 9, 13} fails this test: It contains the subset {2, 3, 6}, whose reciprocals are the unit fractions 1/2, 1/3, and 1/6—which sum to 1. More exactly, Erdős and Graham conjectured that any set that samples some sufficiently large, positive proportion of the whole numbers—it could be 20 percent or 1 percent or 0.001 percent—must contain a subset whose reciprocals add to 1. If the initial set satisfies that simple condition of sampling enough whole numbers (known as having “positive density”), then even if its members were deliberately chosen to make it difficult to find that subset, the subset would nonetheless have to exist. “I just thought this was an impossible question that no one in their right mind could possibly ever do,” said Andrew Granville of the University of Montreal. “I didn’t see any obvious tool that could attack it.” Bloom’s involvement with Erdős and Graham’s question grew out of a homework assignment: Last September, he was asked to present a 20-year-old paper to a reading group at Oxford. That paper, by a mathematician named Ernie Croot, had solved the so-called coloring version of the Erdős-Graham problem. There, the whole numbers are sorted at random into different buckets designated by colors: Some go in the blue bucket, others in the red one, and so on. Erdős and Graham predicted that no matter how many different buckets get used in this sorting, at least one bucket has to contain a subset of whole numbers whose reciprocals sum to 1. Croot introduced powerful new methods from harmonic analysis—a branch of math closely related to calculus—to confirm the Erdős-Graham prediction. His paper was published in the Annals of Mathematics, the top journal in the field. “Croot’s argument is a joy to read,” said Giorgis Petridis of the University of Georgia. “It requires creativity, ingenuity, and a lot of technical strength.” Yet as impressive as Croot’s paper was, it could not answer the density version of the Erdős-Graham conjecture. This was due to a convenience Croot took advantage of that’s available in the bucket-sorting formulation, but not in the density one. ANCIENT MATH: The mathematical scroll known as the Rhind Papyrus, which dates back to around 1650 B.C., shows how the ancient Egyptians represented rational numbers as sums of unit fractions. Image courtesy of Album / Alamy Stock Photo. When sorting numbers into buckets, Croot wanted to dodge composite numbers with large prime factors. The reciprocals of those numbers tend to add to fractions with a massive denominator instead of reducing to simpler fractions that more easily combine to make 1. So Croot proved that if a set has sufficiently many numbers with lots of relatively small prime factors, it must always contain a subset whose reciprocals add to 1. Croot showed that at least one bucket always satisfies that property, which was enough to prove the coloring result. But in the more general density version, mathematicians cannot simply choose whichever bucket happens to be most convenient. They might have to look for a solution in a bucket that contains no numbers with small prime factors—in which case, Croot’s method does not work. “It was something I couldn’t quite get around,” Croot said. But two decades later, as Bloom was preparing to present Croot’s paper to his reading group, he realized that he could get even more out of the techniques Croot had introduced. “I thought, hold on, Croot’s method [is] actually stronger than it first seemed,” said Bloom. “So I played around for a few weeks, and this stronger result came out of it.” Croot’s proof relied on a type of integral called an exponential sum. It’s an expression that can detect how many integer solutions there are to a problem—in this case, how many subsets contain a sum of unit fractions that equals 1. But there’s a catch: It’s almost always impossible to solve these exponential sums exactly. Even estimating them can get prohibitively difficult. Croot’s estimate allowed him to prove that the integral he was working with was positive, a property that meant that at least one solution existed in his initial set. “He solves it in an approximate way, which is good enough,” said Christian Elsholtz of the Graz University of Technology in Austria. Bloom adapted Croot’s strategy so that it worked for numbers with large prime factors. But doing this required surmounting a series of obstacles that made it harder to prove that the exponential sum was greater than zero (and therefore that the Erdős-Graham conjecture was true). Both Croot and Bloom broke the integral into parts and proved that one main term was large and positive, and that all the other terms (which could sometimes be negative) were too small to make a meaningful difference. MATH WIZARD: Thomas Bloom of the University of Oxford studies problems in arithmetic combinatorics, including ones about how common certain numerical patterns might be. Courtesy of Thomas Bloom But whereas Croot disregarded integers with large prime factors to prove that those terms were small enough, Bloom’s method gave him better control over those parts of the exponential sum—and, as a result, more wiggle room when dealing with numbers that might otherwise spell trouble. Such troublemakers could still get in the way of showing that a given term was small, but Bloom proved that there were relatively few places where that happened. “We’re always estimating exponential sums,” said Greg Martin of the University of British Columbia. “But when the exponential itself has so many terms, it takes a lot of optimism to trust that you’ll find a way to estimate [it] and show that [it’s] big and positive.” Instead of using this method to hunt for sets of numbers whose reciprocals sum to 1, Bloom employed it to find sets with reciprocals that add up to smaller constituent fractions. He then used these as building blocks to get to the desired result. “You’re not finding 1 honestly,” Bloom said. “You’re finding maybe 1/3, but if you do that three times in three different ways, then just add them to each other and you’ve got 1.” That left him with a much stronger statement about how robust this numerical pattern really is: So long as a set contains some tiny but sufficiently large sliver of the number line—no matter what that sliver looks like—it’s impossible to avoid finding these neat sums of unit fractions. “It’s an outstanding result,” said Izabella Łaba of the University of British Columbia. “Combinatorial and analytic number theory has evolved a lot over the last 20 years. That made it possible to come back to an old problem with a new perspective and with more efficient ways to do things.” At the same time, it also leaves mathematicians with a new question to solve, this time about sets in which it’s not possible to find a sum of unit fractions that equals 1. The primes are one example—there’s no subset of primes whose reciprocals sum to 1—but this property can also hold true for other infinite sets that are “larger,” in the sense that the sum of their reciprocals approaches infinity even more quickly than the reciprocals of the primes do. Just how quicky can those sums grow before hidden structure reemerges and some of their reciprocals inevitably add to 1? “The Erdős-Graham conjecture was a very natural question, but it’s not the full answer,” Petridis said. Lead image: The number 1 can be written as a sum of distinct unit fractions, such as 1/2 + 1/3 + 1/12 + 1/18 + 1/36. A mathematician has proved that so long as a set of whole numbers contains a sufficiently large sliver of the number line, it must include some subset of numbers whose reciprocals add to 1. Credit: Quanta Magazine This article was originally published on the Quanta Abstractions blog. The post Math’s “Oldest Problem Ever” Gets a New Answer appeared first on Nautilus | Science Connected.
{"url":"https://hedonismonline.com/maths-oldest-problem-ever-gets-a-new-answer/","timestamp":"2024-11-07T18:33:05Z","content_type":"text/html","content_length":"117431","record_id":"<urn:uuid:107f11d0-86ac-4f6e-a705-7b98057a4c0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00113.warc.gz"}
Math Colloquia - Mirror symmetry of pairings 줌 회의실 : 889 8813 5947 (https://snu-ac-kr.zoom.us/j/88988135947) 초록: Mirror symmetry has served as a rich source of striking coincidences of various kinds. In this talk we will first review two kinds of mirror symmetry statements, and observe how they are related. Then we will investigate the relation between pairings of two Frobenius algebras which are quantum cohomology and Jacobian ring respectively, via Cardy identity in 2-dimensional open-closed TQFT. As an example we will deal with an elliptic orbifold sphere, and see that the expected equivalence between two pairings gives rise to an identity of modular forms. This is a joint work with Cheol-Hyun Cho and Hyung-Seok Shin.
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=colloquia&sort_index=room&order_type=asc&l=en&page=8&document_srl=870993","timestamp":"2024-11-09T06:30:15Z","content_type":"text/html","content_length":"45578","record_id":"<urn:uuid:3bcec284-c5ca-4f94-9127-c499f0487f26>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00628.warc.gz"}
SuperMemo Algorithm FAQ • Question: How much will users benefit from Alg-SM17?. Answer: Is Algorithm SM-17 much better than Algorithm SM-15? • Will I have an option to use the old algorithm (Alg-SM15) in my collection? This will be possible in some early releases of the algorithm in SuperMemo for Window. However, the new algorithm is intended to replace all prior variants of the algorithms used in all SuperMemos and in all products. The old algorithm will be retained for a while for research purposes only • Will the algorithm be open source, e.g. like Alg-SM2? No. The algorithm forms the basis of commercial activities of SuperMemo World and some of its core equations form a trade secret. • Is it possible that Alg-SM17 would perform worse despite all the theory, e.g. due to a bug? Rigorous testing will ensure that the new algorithm performs better than older variants. Bugs are naturally inevitable and some may find their way to the final product. However, the algorithm will not see the light until it meets all metrics in a large group of users. The set of monitoring tools should be rich enough for you to inspect the process by yourself and make sure the algorithm meets all the design criteria. In SuperMemo 17 you will see a definite metric that will compare both algorithms live while learning • Will the algorithm adapt to my memory or is it a universal formula? All SuperMemos adapt to your memory and, importantly, to your strategies (e.g. choice of grades). That adaptability will be stronger than ever. All that despite the fact that core formulas are now pretty universal due to being based on a strong memory model • Will SuperMemo also change as a result of employing a new algorithm? No. SuperMemo does not need to change beyond a few technical details (like storing repetition histories). However, various versions are likely to be later equipped with new options that are made possible with the new algorithm (the array of possibilities in near-to infinite) • If the S:R model is so simple, why do you speak of thousands lines of code? See: Why simple model leads to complex computer code? • Is stability just an interval? Why not to call it an interval?. See: Is stability just an interval? • Will not new SuperMemo be fooled by outside world interference when using full repetition history record? No. All SuperMemos are constantly undermined by interference. Algorithm SM-17 provides the best statistical approach to look for regularities in the departure of data from memory models. See: Repetition history and outside interference • Your formulas show that you can calculate R from S. Should this not mean that there is only one component of memory? No. See: How many components of memory are there? • Once you say there are two components of memory. Once you say there are three. Can you explain? See: How many components of memory are there? • If the two-component model is 22 years old, why do you come up with the algorithm only now?. See: Why does Algorithm SM-17 come so late? • Should we not have two different measures of difficulty?. See: We should have two independent measures of difficulty! • Did you try univalent matrices like in your 1994 publication?. See: How does Algorithm SM-17 perform when initialized with univalent matrices? • Did you compare SM17 with older algorithms like SM2, Anki, SM8? Do you have specific numbers?. See: Algorithm SM-17 vs. older SuperMemos and SuperMemo 17 will use both Alg-SM16 and Alg-SM17 • Case against cramming - where strength of Algorithm SM-17 reveals weakness of human memory and fires back at the algorithm • How can startup stability be two months? (incl. "ridiculous" 30-year intervals)
{"url":"https://www.supermemopedia.com/wiki/Algorithm_SM-17_FAQs","timestamp":"2024-11-12T19:25:44Z","content_type":"text/html","content_length":"26011","record_id":"<urn:uuid:526a26bc-1ebc-4d1f-a994-9417f5067fe1>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00139.warc.gz"}
Water and Energy Efficiency Assessment in Urban Green Spaces CERIS–Civil Engineering Research and Innovation for Sustainability, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal Author to whom correspondence should be addressed. Submission received: 26 July 2021 / Revised: 19 August 2021 / Accepted: 30 August 2021 / Published: 2 September 2021 Urban green spaces can be intensive water and energy consumers in the cities, particularly in water scarce regions. Though a very efficient use of such resources is necessary, tools for assessing both water and energy consumption and efficiency are not available. In this paper, a new methodology based on water and energy balances is developed for assessing the water-use and energy efficiency in urban green spaces. The proposed balances, adapted from those developed for water supply systems, are specifically tailored for accounting for urban green spaces specificities, namely, landscape water requirements, other uses besides irrigation and over irrigation water losses. The methodology is demonstrated in two case studies of different nature and characteristics: a modern garden with a smart irrigation system and an urban park with traditional irrigation system. The results show that the developed water balances allow to estimate and assess the irrigation efficiency over the years and to assess the effectiveness of implemented water saving measures. The application of the water–energy balance demonstrates the impact of water efficiency measures on the energy efficiency of the irrigation systems. The proposed methodology can be used to assess water and water–energy efficiency in urban green spaces and to identify the most adequate improvement measures, contributing for a better management of the two resources in the cities. 1. Introduction The current environmental agenda in Europe reflects concerns with the conflicting economic and industrial growth and the environment and natural resources protection [ ] In this context, the concept of Green Infrastructure has gained popularity in the last decades, despite its origin dating the XIX century in the United Kingdom with the greenbelts and the creation of public parks in urban areas [ ]. Green Infrastructure is defined as a network of natural and semi-natural areas strategically designed and managed to deliver a wide range of ecosystem services and to enhance human wellbeing [ ]. The fact is that green infrastructures, such as gardens and parks, provide multiple benefits to the inhabitants of urban areas and to the environment, both by mitigating the effects of pollution and of extreme weather events (e.g., floods, heat waves) and by providing recreational areas contributing for public health and wellbeing [ However, urban green spaces can be intensive water consumers, particularly in arid and semi-arid regions [ ]. In areas of low rainfall, additional watering in dry months is required in order to maintain the health and the appearance of plants [ ]. Consequently, urban gardens and parks have been identified as large water consumers in the cities, contributing for the water footprint [ ], included in the cities’ water efficiency action plans [ ], where targeted efficiency measures are often considered. Water consumption for the irrigation of urban green spaces depends on the water requirements of the vegetation, which in turn depends on the planted species and on local climatic conditions. Autochthones species are generally those that require less irrigation, as they are adapted to local climatic conditions. However, turfgrass is often the chosen vegetation for greening extensive areas in many urban parks, due to both aesthetic reasons and practical purposes, despite its high water demand [ ]. Additionally, green spaces in the cities usually include trees and shrubs, both requiring less water than the turfgrass [ ]. The shades provided by the trees also contribute for lowering the water requirements of the green space, by reducing the evapotranspiration of the vegetation around them [ ]. The simultaneous presence of vegetation of different types introduces more complexity and uncertainty in determining the exact amount of water needed for irrigation [ ]. On the other hand, the type and condition of the installed equipment greatly affects the amount of water consumed for irrigation. In general, micro irrigation systems are much more efficient than sprinkler irrigation, which are mostly used for irrigating extensive turfgrass areas. Water-use efficiency in urban green spaces is attained when the supplied water matches the real water needs. However, the water consumption for irrigation is often higher than the estimated demand of the green space, meaning that there is a saving potential [ ]. In order to optimize irrigation efficiency in green spaces, smart irrigation technologies have been developed and adopted in recent years. These technologies aim at optimizing irrigation by an accurate estimation of plant water requirements and an optimal efficiency of the irrigation systems, in order to minimize excessive watering [ While most urban gardens are irrigated with drinking water from the supply network and make use of the network pressure for their own functioning, others rely on local groundwater abstraction or rainwater harvesting. These alternative water sources can have enough quality for irrigation and allow for drinking water savings, which is particularly needed in regions suffering from water scarcity [ ]. However, supplying the irrigation systems with such waters requires the use of pumps, which increases energy consumption. In addition, the components of the smart irrigation systems require electrical energy too [ ]. Hence, the urban green spaces are also energy consumers. Even though renewable sources of energy can be used to supply the irrigation systems [ ], the energetic aspect of the green spaces must also be taken into account when assessing their sustainability For the sustainable use of resources in the cities [ ], both water and energy must be wisely used and the effectiveness of efficiency measures needs to be monitored and evaluated. The latter is a common practice in the water supply sector, in which annual water balances and performance indicators are calculated, in order to estimate water losses and to identify the most adequate improvement measures [ ]. Similar water–energy balances for the water supply systems have been developed and tested [ ]. Water and the associated energy use in urban green spaces must also be accounted for in the overall city balance and tools for assessing the efficiency of water and energy in the green spaces are needed [ In this paper, a new methodology for assessing water and energy efficiency in urban green spaces is proposed, adapted from existing practices for water supply systems. The methodology is based on the water and energy balances widely used for assessing the performance of water supply systems. New components of the water balance are introduced to account for the landscape water requirements, for water consumption due to other uses besides irrigation and for water losses due to over irrigation. In accordance, new components for the water–energy balance are also proposed. The application of the methodology is demonstrated in two case studies: an urban green space equipped with a smart irrigation system and an urban park with a traditional irrigation system and additional water uses. 2. Materials and Methods 2.1. Water Balance A novel water balance for assessing water-use efficiency in gardens and urban green spaces is proposed ( Figure 1 ), based on the water balances developed for water supply systems [ ] and for agricultural irrigation systems [ ]. The water balance concept is based on the identification and quantification of all possible water volumes that go in and out of a system over a certain period. It is frequently calculated for a one-year period to minimize the different uncertainties on measurements and estimations of water volumes. The system input volume in an urban green space (e.g., parks, gardens) is the total amount of water that is supplied to that space by a man-made system. Urban green spaces are frequently supplied and irrigated with drinking water from the supply network [ ], though other sources might exist, particularly in larger spaces, such as abstracted groundwater, reclaimed water or harvested rainwater. Supplied water volumes should be preferentially metered, or estimated as accurately as possible. For determining the effective water consumption in the green space, all water uses must be identified. Though irrigation is often the largest water consumer in a green space, other uses can also exist, particularly in parks where leisure activities take place (e.g., restaurants, public toilets, drinking water fountains). The effective use for irrigation is, in fact, the landscape water requirement (LWR), which can be calculated by: $LWR = 1 DU LQ × [ ( ET 0 × K L ) − R a ] × A$ where LWR is the landscape water requirement (m /year), DU is the lower quarter distribution uniformity of the associated type of irrigation equipment (dimensionless), ET is the reference evapotranspiration (mm/month), K is the landscape coefficient (dimensionless), R is the allowable rainfall (mm/month) and A is the irrigated area (m ) [ ]. Calculating LWR for green spaces with a variety of types of vegetation and local microclimates can be challenging and the result may lack in accuracy [ ]. However, dividing the irrigation area by hydro zones and estimating monthly LWR for each zone allows closer estimates of real LWR. Calculated LWR for each month must then be summed up for estimating the annual water requirements. Water consumption for other uses in the green space, if any, must ideally be measured by specific water meters, or alternatively, estimated using the best available methods. All the water that goes into the green space but that is not effectively consumed, either for irrigation or other uses, is lost in some way. Water losses in urban green spaces comprise the irrigation losses (e.g., evaporation, percolation, runoff), the apparent losses (unauthorised consumption and metering inaccuracies) and the piped network real losses (e.g., due to leaks in pipes). Irrigation losses include all the water that is consumed for irrigation but that is more than needed to fulfil the plants’ requirements. Such water is loss through evaporation, percolation through soil and surface runoff, but due to the complexity required for estimating each of these losses, this component is estimated as a whole. Unauthorised consumption regards to water thefts and illegal connections to the irrigation system. If detected by the garden workers, it can be estimated by multiplying the duration of the event by the probable flowrate. Metering inaccuracies can be estimated based on the characteristics of the metering devices installed. Network real losses include all the losses in the water network of the green space, such as leaks in pipes or in storage tanks. Leakage in the irrigation network can be estimated by Minimum Night Flow Analysis (MNF) when there is no irrigation or consumption for other uses [ Several performance indicators can also be calculated based on the water balance for urban green spaces allowing the diagnosis of green areas for identifying the inefficiencies and the comparison between several improvement measures. The simplest and most helpful indicator is the irrigation efficiency (IE) [ ], herein defined as: $IE = LWR V INP − V OU × 100$ where LWR is the landscape water requirement (m /year); V is the system input volume (m /year); V is the volume of water consumed for other uses (m /year). The IE is classified in: good irrigation efficiency for IE ≥ 80%; reasonable irrigation efficiency when IE is between 60% and 80%; and inadequate irrigation efficiency for IE ≤ 60%. 2.2. Water–Energy Balance The proposed water–energy balance for urban green spaces ( Figure 2 ) is based on the top-down energy balance for water supply systems [ ]. Additional components and simplifications are introduced in order to better tailor the balance for urban green spaces and their water-uses. It is also calculated for a one-year period. The water–energy balance approach is very similar to that of the water balance, as it accounts all the energy that is supplied to the green space along with the water, as well as the energy that is lost with water losses. The total system input energy is the sum of the energy that is supplied to the urban green space by its various water sources. Natural input energy, E , refers to the potential energy supplied by pressurised delivery points or storage tanks at the inlet of the water system that supplies the urban green space. In most green spaces, supplied by the drinking water network, natural input energy refers only to the pressure energy. Shaft input energy, E , is associated with energy supplied by the pumping stations of the irrigation system. The sum of these two energy sources, natural and shaft, is the total system input energy, E . In case there are no pumping stations in the system, the total input energy can be calculated as follows: $E INP = γ V INP H 3600 × 1000$ in which E is the total energy input (kWh), is the specific weight of water (9800 N/m ), V is the system input volume (m ) and H is the pressure head supplied to the irrigation system (m), assuming that the kinetic head is negligible. The pressure head can be obtained as follows: $H = z e + p inlet γ − z 0$ in which z is the elevation of the node at the inlet of the water supply system of the green space (m), p is the pressure at the inlet of the system (Pa) and z is the reference elevation, typically the node with the minimum elevation in the irrigation system (m). The input energy is subdivided into energy associated with effective use, E[EU], and energy associated with water losses, E[WL]. The energy associated with water losses (E ) can be obtained by associating the water losses percentage from the water balance as proportion to the energy associated with water losses, as follows: $E WL = E INP × WL / 100$ where E is the energy associated with water losses (kWh), E is the total system input energy (kWh) and WL corresponds to the percentage of water losses obtained from the water balance (%). Energy associated with effective use includes the energy that is effectively supplied to the consumers, E , and the energy that is dissipated in the system, E . The energy associated with the water supplied to consumers includes the minimum required energy for irrigation, E , the minimum required energy for other uses, E’ , and the surplus energy, E . The first can be obtained from the theoretical minimum operating pressure, given by the manufacturer of the irrigation equipment. It depends on the type of sprinkler or dripper/micro-sprinkler. The second one is related with the minimum pressure requirements at the consumption point for the other water uses. The minimum required energy, both for irrigation and for other uses, can be calculated as follows: $E min = ∑ i = 0 n γ V needs , i H min , i 3600 × 1000$ in which E is the minimum required energy (kWh), V is the water needs at node i (m ) and H is the minimum pressure head in each consumption node i (m), given by: $H min , i = z i + p min γ − z 0$ in which H is the minimum pressure head at each node i (m), z is the elevation of node i (m), p is the minimum required operating pressure (Pa), z is the reference elevation or the node of minimum elevation in the system (m). The surplus energy, E[SUP], corresponds to the energy above the minimum required that is supplied at the node level. Dissipated energy, E[DIS], in the water supply systems of the green spaces is due to pipe friction, valve head losses and the pumping stations’ inefficiency, if wells or boreholes exist. These two components (dissipated and surplus energy) can be computed together as the difference between the energy associated with effective use and the sum of the minimum required energies for irrigation and other uses. Three energy performance indicators, E1, E2 and E3, can also be calculated from the water–energy balance. Performance indicator E1 represents the energy in excess per volume of input water (kWh/m $E 1 = E INP − E MIN V INP$ This ratio allows the evaluation of the potential of energy reduction per unit of the water volume that enters into the system. It is always positive and should be as low as possible. Performance indicator E2 represents the energy in excess per volume of water effectively used, i.e., the water needs $( V needs$ ) (kWh/m $E 2 = E INP − E MIN V needs$ This indicator is very similar to the previous one; however, E2 allows the assessment of the effect of water losses on the energy efficiency of the system. Performance indicator E3 is the ratio of the system input energy over the minimum required energy. This indicator provides a very simple metric for assessing how much energy is being supplied in excess. It should be as low as possible and, in an ideal situation, equal to 1. 3. Case Studies Description 3.1. Case Study 1: Green Space with Smart Irrigation System Case study 1 is an urban green space with 19,200 m , located in a touristic resort in Vale do Lobo, in the Algarve region, Portugal ( Figure 3 ). The green space includes 154 small gardens with 20 installed irrigation meters. The green space surrounds a neighbourhood of villas with turf grass and flowerbeds. A smart irrigation system was installed in the beginning of 2019. The system includes a connection with a meteorological station located in Faro and a platform that determines the irrigation needs, according to local weather conditions and that, automatically, controls, at every hour, the amount of water that is supplied by the irrigation system, shutting off the system, if no irrigation is needed. The turf grass area is irrigated with sprinklers and the flowerbeds with drip-irrigation. The sprinklers (Rain Bird, series 5000) have a minimum working pressure head of 17 m. Water supplied by the irrigation system of this green space is exclusively consumed for irrigation purposes; thus, there is no consumption for other uses. The estimation of the landscape water requirements considered the two types of vegetation and the irrigation systems of each area. The distribution uniformity, DU[LQ], of both sprinkler and drip irrigation systems is assumed to be 0.7, while the landscape coefficient, K[L], is considered to be 0.7 for the turfgrass areas and 0.5 for the flowerbeds. The elevation of the node that connects the inlet of the irrigation system of the green space, z[e], to the municipal water distribution system is 32 m, while the minimum elevation in the irrigation system, z[0], is 22 m. The pressure head at the inlet of the system, p[inlet]/γ, is of 35 m. The proposed water and water–energy balances are applied to case study 1 for three consecutive years—2017, 2018 (before the installation of the smart water meters) and 2019 (after the installation) —and results are presented and discussed in Section 4 3.2. Case Study 2: Urban Park Marechal Carmona is a public urban park located in the centre of Cascais ( Figure 4 ). This park has approximately 14,343 m of irrigated area, of which about 11,100 m correspond to turfgrass area with sprinkler irrigation, and the remaining 3243 m are covered with shrubs, herbaceous and flowers and are irrigated via micro irrigation. In the park, there are also trees, a small lake, several picnic areas, a field to play traditional games, cafes, toilets, a museum, a building for small conferences, a municipal library for children and youth and a playground. All water users, including the irrigation system, are supplied by the drinking water network of the park, which includes five water meters: three of them connected to the other uses in the park (e.g., café, toilets, library) and two connected to the irrigation system. The water meters are not connected to any telemetry system and the readings are carried out once per month or every two months. The irrigation systems are manually turned on or off by the irrigation workers, who empirically adjust the irrigation time to the weather and soil conditions. The lake is filled with abstracted groundwater from a borehole. The coefficients DU[LQ] and K[L] used for estimating the water requirement of the park are the same as in case study 1. The water balance is calculated for 2015, 2016 and 2017. 4. Results and Discussion The water balances are calculated for both case studies, although the water–energy balance is only carried out for case study 1. This is because there were no records in the municipal services of the physical characteristics and the topology of the irrigation system of case study 2, essential to compute the energy balance components. 4.1. Water Balance Application to Case Study 1 The proposed yearly water balance is applied to case study 1 for 2017, 2018 and 2019. Landscape water requirements are computed for each month, making use of available reference evapotranspiration data for the region and summed for annual estimation. Some assumptions regarding the calculation of the water balance components have been considered. The unauthorized consumption is considered null due to the extremely low probability of existing illegal connections given the existing high security in the area. The metering inaccuracies are considered equal to 2% of the system input volume for the three years, which corresponds to typical average values used for these type of meters. Due to the lack of information regarding the water losses in the pressurized irrigation system, these losses are estimated together with the irrigation losses. The system input volume is the sum of all metered water volumes by the 20 irrigation meters. The results of the water balance for the three years ( Figure 5 ) show that the water losses have decreased over the years from 45% to 30% of the system input volume. Accordingly, the irrigation efficiency described by Equation (2) has increased from 55% in 2017 and 57% in 2018 to 70% in 2019, gradually approaching to the irrigation needs. This improvement, particularly evident from 2018 to 2019, is due to the installation of the smart irrigation system in the beginning of 2019 that manages the irrigation according to the plant needs and to the weather conditions. However, a higher efficiency was not achieved since it used a reference meteorological station located in Faro (not a local one) and it did not have any measurement of the existing humidity in the soil; thus, it could not accurately estimate the vegetation needs. Additionally, the system estimated a single vegetation need and not adjusted it to the type of plant (i.e., turf grass and flowerbeds). Annual variations in the system input volume and in the effective water used for the irrigation (i.e., LWR) are in agreement with observed variations of the precipitation and evapotranspiration. LWR in 2018 is the lowest within the analysed three-year period due to the highest precipitation (529 mm) in this year compared to 2017 (317 mm) and 2019 (229 mm). In 2019, due to lower precipitation, LWR increases but the system input volume does not increase proportionally, due to a more efficient water use and less water losses. This corresponds to the year 2019, when the smart irrigation system began to operate, and demonstrates that this smart system effectively reduces water losses. The calculation of the water balance allows for a more systematic analysis of the water consumption and efficiency of its use in the green spaces. Overall, the results show that almost half of the water consumed in 2017 is lost due to over-irrigation, which was halved in 2019 thanks to the smart irrigation system. Over the studied period, the water consumed for irrigation approaches the water requirements of the green space, particularly in the drier months, from June to September, although there is still potential for more water savings ( Figure 6 Further investigations should focus on a detailed analysis of the irrigation efficiency in the areas associated with each irrigation meter, in order to identify possible local inefficiencies, as well as on the development of an irrigation algorithm that both integrates the meteorological and the soil humidity conditions. 4.2. Water Balance Application to Case Study 2 The proposed water balance is applied to case study 2 for 2015, 2016 and 2017 ( Figure 7 ). LWR is computed for each month, using local data for climatic parameters. The unauthorized consumption is considered null due to the inexistence of illegal connections within the fenced park. The metering inaccuracies are also considered equal to 2% of the system input volume for the three years. The water losses in the irrigation pipe system due to leaks and ruptures are estimated together with the irrigation losses. Consumption in the park is measured in five flow meters, M1 to M5, in a monthly or bimonthly basis. According to the water utility knowledge, three of the meters measure only the consumption for other uses (M1, M3 and M4), whereas the remaining two measure the input water volume for irrigation (M2 and M5). The measured annual water volumes are presented in Figure 7 , in which irrigation represents, on average, 67% of water consumption in the park. The annual water balances are calculated for each of the three years ( Figure 8 ). Results show that the water losses have decreased from 36% in 2015 to 13% of the system input volume, even though irrigation practices are not based on smart systems. There are two main reasons for the decrease of water losses. The primary reason is that irrigation is dictated by the empirical knowledge of garden workers that have become increasingly more aware of the need for water savings in a context of scarcity and that have taken more efficient irrigation practices, manually adjusting the time and duration of the irrigation process. The second reason is the uncertainty associated with the consumption for other uses that was estimated based on the measurement of three meters that are believed to uniquely supply the existing infrastructures; though civil works have taken place in the park in the last 5 years and some parts of the irrigation network might have be connected to these meters. The water balance also shows that the percentage of water consumed for other uses than irrigation is quite high (varying between 27% and 43% of the system input volume) and increases over the years in an inverse trend to that of the consumption for irrigation. It must be noticed that this consumption was calculated based on metered water volumes; hence, all the inefficiencies associated with the many water uses (e.g., a dripping tap in a museum toilet) are included in this component. For a more comprehensive analysis of the irrigation efficiency, the monthly water volumes consumed for irrigation, metered by the dedicated meters of the irrigation system, are also compared with the estimated overall LWR of the park ( Figure 9 ). The results show that the water consumed for irrigation is much more than that needed in 2015 but approaches LWR in 2017. The irrigation efficiency, defined as the ratio between LWR and the real water consumption for irrigation, increases from 52% in 2015 to 56% in 2016 and then, to 78% in 2017, which is similar to that observed in case study 1. The application of the water balance to the park allows concluding that the irrigation efficiency is good, even though there are no smart irrigation system and that further analysis on water efficiency in the park should focus on the analysis of the effective use of water in other water uses. 4.3. Water–Energy Balance Application to Case Study 1 The methodology proposed for the water–energy balance in urban green spaces is applied to case study 1 for the years 2017, 2018 and 2019. Because the irrigation system is supplied by the drinking water distribution network, all the input energy of the irrigation system is pressurized energy (natural input energy) and there is no shaft input energy. The water–energy balance components are calculated by Equations (3)–(7) for the three analysed years ( Figure 10 The energy losses associated with the water losses, either due to irrigation inefficiencies or due to leaks in the irrigation system pipes, vary between 30% and 45% of the system input energy. Hence, a significant part of the energy that is supplied to the urban green space by the drinking water network is lost due to water losses. The water–energy balance also shows that the increase in the irrigation efficiency from 2017 to 2019 is accompanied by an increase in energy efficiency, as less water and its embedded energy is wasted. The water–energy balance also shows that the irrigation system is supplied with much more energy than that needed, as the minimum required energy for irrigation is of only 33% to 42% of the input energy. Concomitantly, 22% to 28% of the input energy is supplied in excess or dissipated at the sprinklers. These results suggest that part of the energy consumed upstream for assuring high pressures in the drinking water network is, then, lost in the urban green space. For that reason, the pressure at the inlet node of the irrigation system should be lower, only slightly exceeding the necessary pressure at the sprinklers. Alternatively, the green spaces could benefit from recovering part of the excess hydro energy in the irrigation systems [ ] that could then be locally consumed (e.g., for supplying the smart irrigation systems’ equipment). In order to better assess the energy efficiency of the irrigation system, performance indicators E1, E2 and E3 given by Equations (8)–(10), often computed to assess energy efficiency of water supply systems, are also calculated ( Figure 11 ). The results show that the performance indicators E1 and E2, which represent the energy in excess per volume of input water (E1) or per effectively used water (E2), are very low and very similar. Only a slight decrease is noticed in 2019, as a result of the water efficiency measures. The lack of benchmarking values for comparing the obtained indicators hampers the evaluation of the energy efficiency of the irrigation system based on such indicators. Overall, E1 and E2 are very small when compared with those of water supply systems [ ], which is likely due to the much smaller length and diameters of the irrigation networks. Regarding the performance indicator E3, which allows assessing how much energy is being supplied in comparison with the minimum required energy, a significant decrease from 3.02 to 2.39 is observed. This is in agreement with the previous observations of energy efficiency improvement from 2017 to 2019 due to the decrease in water losses. For water supply systems, the E3 value should be in the range of 1 to 2 (ideally, equal to 1), which shows that, despite the reduction in water losses, energy efficiency of the irrigation systems requires further measures, in particular, those that address the reduction or the recovery of the excess of supplied water–energy. 5. Conclusions A methodology to calculate water and water–energy balances for urban green spaces is proposed. The methodology is demonstrated with two urban green spaces of different nature: a modern green space with a smart irrigation system and a typical urban green park. The proposed balances are based on the existing balances for water supply systems and for collective irrigation systems to which several changes have been introduced to specifically tailor them to the water uses in the green spaces. The proposed water balance to the green spaces allows assessing the irrigation efficiency in the green spaces over the years, the effectiveness of water saving measures (e.g., smart irrigation systems or empirically–based irrigation practices) and the importance of other water uses (e.g., toilets, cafes) in the overall water consumption of green areas. The water balance helps in identifying the most adequate measures for a more efficient water use in the urban green spaces. The application of the two case studies has demonstrated that smart irrigation systems can significantly increase irrigation efficiency from inadequate (IE < 60%) to reasonable or good (IE > 60%); however, good efficiencies (IE > 80%) require the installation a meteorological station in situ and monitoring of the soil humidity in order to more accurately estimate the plant irrigation needs. On the other hand, reasonable or good irrigation efficiencies (IE > 60%) can also be attained with empirical knowledge of gardeners that can manually adjust the time and duration of the irrigation process. The application of the proposed water–energy balance demonstrates that water efficiency measures have a direct and positive impact in the energy efficiency of the irrigation systems. Additionally, the water–energy performance indicator E3 shows that the irrigation systems are supplied with more than twice the energy needed, even after reducing water losses, thus suggesting additional measures for energy efficiency improvement other than those targeting water savings, such as the reduction of the supplied pressure or the water–energy harvesting by the installation of pico-energy recovery The proposed water and water–energy balances are valuable tools for assessing water use and energy efficiency in urban green spaces, highlighting the water inefficiencies and allowing the identification of the most adequate measures, thus contributing to a better water and energy management in urban green spaces. This study has been applied to two green infrastructures in Portugal: one green area located in Algarve region composed of turf grass area irrigated with sprinklers and flowerbeds with drip-irrigation; and a second park covered with shrubs, herbaceous and flowers irrigated via micro-irrigation, though with other uses within the park (toilets, cafes). The water and water–energy balances should be further applied to other different case studies, with different types of vegetation and with other infrastructures, so that lessons learnt could be used for establishing a set of best practice recommendation for saving both water and energy, namely, the selection of low-water demand species and the use of more efficient irrigation systems. Another very relevant inefficiency in green infrastructures, often forgotten, is the water loss in undetected leaks and bursts in the pipe irrigation system; thus, efforts should be done to assess the importance of this component in irrigated areas by measuring the minimum night flows as well as to control it by reducing operating pressures or closing inlet valves when irrigation is not necessary. Author Contributions Conceptualization, D.C. and L.M.; methodology, D.C. and L.M.; formal analysis, R.C.; data curation, R.C.; writing—original draft preparation, L.M. and D.C.; writing—review and editing, L.M. and D.C.; project administration, D.C.; funding acquisition, D.C. All authors have read and agreed to the published version of the manuscript. This research and the APC were funded by the Fundação para a Ciência e a Tecnologia, grant number PTDC/HAR-HIS/28627/2017. Data Availability Statement Data can be provided on-demand to the corresponding author. The authors acknowledge the Fundação para a Ciência e Tecnologia (FCT) for funding the research project Horto Aquam Salutarem: Water Wise Management in Gardens in the Early Modern Period, grant number PTDC/HAR-HIS/28627/2017. The authors also acknowledge the Câmara Municipal de Cascais and Infralobo-Empresa de Infraestruturas de Vale do Lobo, E.M. for providing the case studies for this Conflicts of Interest The authors declare no conflict of interest. Figure 6. Landscape water requirements and system input volume in case study 1 in 2017, 2018 and 2019. Figure 9. Landscape water requirements and system input volume in case study 2 in 2015, 2016 and 2017. Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Monteiro, L.; Cristina, R.; Covas, D. Water and Energy Efficiency Assessment in Urban Green Spaces. Energies 2021, 14, 5490. https://doi.org/10.3390/en14175490 AMA Style Monteiro L, Cristina R, Covas D. Water and Energy Efficiency Assessment in Urban Green Spaces. Energies. 2021; 14(17):5490. https://doi.org/10.3390/en14175490 Chicago/Turabian Style Monteiro, Laura, Raquel Cristina, and Dídia Covas. 2021. "Water and Energy Efficiency Assessment in Urban Green Spaces" Energies 14, no. 17: 5490. https://doi.org/10.3390/en14175490 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/1996-1073/14/17/5490","timestamp":"2024-11-08T15:13:31Z","content_type":"text/html","content_length":"422202","record_id":"<urn:uuid:b5ea0262-7d15-4c87-882b-d1ba3917d3cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00500.warc.gz"}
Who provides MATLAB array assignment solutions for a fee? Matlab Assignment Help | Who provides MATLAB array assignment solutions for a fee? Project and Homework Help Who provides MATLAB array assignment solutions for a fee? In the text ‘Mass Budget’ (19th century) the question of “how much is paid for the Mäscheln-Mängligenstrassen” has become a serious problem in the technical fields concerned with developing efficient and robust building systems. The most common answer to this question is $240, but that is not quite what is required, since Mänglestrassen is widely used in medieval Europe during this period, and the large part of the Mänglestrassen building was in place during the 1290s. However, the original Mänglestrassen cost was $160 two-year and still not covered by the 10% of gross Mänglestrassen (Mäscheln-Mängligenstrassen) expenditures during the period. The figure in this article shows not that much in the 19th century was paying for Mäscheln-Mängligenstrassen more than the 10% of Mänglestrassen (Mäscheln-Mängligenstrassen). Here we show how generalizations of the Mäscheln-Mänglestrassen complex system are shown using an illustrative example shown in Figure 8. Those generalizations cover all cases, i.e. building, heating and such. Figure 8 Generalizations of Mäscheln-Mänglestrassen and Figure 9 shows examples of generalizations of the Mäscheln-Mänglestrassen complex system using generalizations of those Mäscheln-Mänglestrassen, and the results are reproduced in some figures. Figure 9 Performed using only some generalizations of Mäscheln-Mänglestrassen as well as those of MäschelnWho provides MATLAB array assignment solutions for a fee? In your example: const a = [131231, 122517, 123940, 129446, 123812]; puts (14) and you can access the object to the correct position for the given value: 1. I got this value for this. 2. You get 0 because the first character of the function is [131231, 122517] But I get -14 because the value (15) gets [123940] What do you mean? You cannot assign a to complex numbers, because of complex numbers. So that you always need to access the object to the correct position it appears within gives to the item being modded by the complex number (10), etc. edit One other question: Why do I get – or + when my initial search function gets evaluated with 10? A: In general, your result ought to be somewhere between a and a (in other words, you’re not being restricted by what the arguments do, to the type of your current function, therefore your results might be quite different in your second one). Your purpose to do some calculations with a different type would be to call find by the object with that value the lowest key among all the elements in your array, then use that result as the argument to the function. You could have code that calculates the indices of the objects in your array, but it wouldn’t perform this computation, because it would get a numeric key up the index array, and hence returns a non-negative value (as you saw in this example). If there’s no object instance whose index is the same for all objects, then it’s gone. Have a look at this but to see what’s going on, you might want to use a function named find by object name. Who provides MATLAB array assignment solutions for a fee? I would have thought there would be several related answers to provide any one particular MATLAB function such a function may look like: My question is this – I would need to read MATLAB with and then choose from various command libraries under many different computer architectures to provide my own MATLAB functions and as they exist there are a lot of potential advantages and disadvantages. What Are Online Class Tests Like Given that there are a large number of different programming environment for MATLAB, is navigate to these guys a common workflow for this (please suggest my preferred methodology)? Does one have to use a multi-processor or possibly multi-textured software to create/print/assemble MATLAB array assignments? I must say though that I’m a bit nervous about the basic design/method as I know it to look similar to this one so I’m going to answer all the questions myself but I would prefer providing more detailed feedback about some of the various concepts explained above. If you hadn’t programmed on a 3D machine with 128 neurons (just a test set of 1 or 2), then this could be of significant advantage in reducing the complexity as well as speed, especially if the task is done via a traditional hardware-based setup app. Similarly you may have to program on a 3D server so additional reading is not as similar as you would want but for me I could be seriously del/limbed to do so. There may be a possibility of adding program on a server to create a toolbox to optimize the performance but to be honest I am not sure where this could be effective. A: As you have suggested, I would go for small parallel/vector-based vector work (you are probably already familiar with MATLAB’s vector- and grid methods) Part of you may be dealing with complex arrays or matrices that you may even need. I’m very happy to investigate such things out of the box and add detail to it. However, MATLAB provides an array-access
{"url":"https://www.matlabhelponline.com/who-provides-matlab-array-assignment-solutions-for-a-fee-2-461784","timestamp":"2024-11-04T12:20:32Z","content_type":"text/html","content_length":"146341","record_id":"<urn:uuid:7dca792b-cd0b-4774-a83b-d82f80730db4>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00089.warc.gz"}
VAT Rates Around The World - VAT Calculator VAT Rates Around The World VAT is a form of “consumption tax”. It is charged on goods and services and in some countries is known as GST – Goods and Services Tax. The first country to introduce VAT in 1954 was France and it now accounts for around 45% of state revenue. For developing nations such as India – VAT is an important revenue source due to high unemployment and low per capita income. Here is a summary of VAT rates and currency facts from around the world. The European Union: Members of the EU are obliged by European Law to be a part of the European Union Value Added Tax Area. A minimum of 15% VAT must be charged, with the maximum being 25%. Some countries have reduced VAT rates on certain types of goods and services. Unless otherwise stated the currency is the euro. Austria has a standard VAT rate of 20%, and reduced rates of 12% & 10%. The tax is called USt. (Umsatzsteuer) Belgium has a standard rate of 21% and reduced rates of 12% & 6%. There are three taxes which can be classified as VAT – BTW, TVA and MWSt. Bulgaria has a standard VAT rate of 20%, with reduced rates of 0% & 7%. The currency of this country is the Bulgarian lev, which consists of 100 Stotinka to the Lev. Cyprus has a standard VAT rate of 19% and two reduced rates of 9% and 5%. Czech Republic has a standard VAT rate of 21% & a reduced rate of 9%. The tax is called DPH and the local currency is the Czech Koruna which 100 Haler to the Koruna. Denmark has a single VAT rate of 25%. The tax is called moms and the local currency is the Danish krone which consists of 100 Ore to the krone. Estonia charges a standard VAT rate of 20% & a reduced rate of 9%. The tax is called “km” and the local currency is the Estonian Kroon which consists of 100 sent to the kroon. Finland has a standard VAT rate of 24% plus 2 reduced rates of 17% and 8%. The taxes are called ALV and Moms. France has a standard VAT rate of 20% & reduced rates of 5.5% and 2.1%. The tax is called TVA. Germany has a standard rate of VAT which stands at 19% & a reduced rate of 7%. The taxes are called MwSt. and US. Greece has a standard rate of VAT of 24%, with reduced rates of 9% & 4.5% – throughout the Greek Islands this amount is reduced by 30% to 13%, 6% & 3%. Hungary has a standard VAT rate of 27% & a reduced rate of 18% and 5%. The tax is called AFA and the local currency is the Hungarian Forint (Ft), which has 100 Filler to 1 Forint. Ireland has a standard VAT rate of 23%, with reduced rates of 13.5%, 4.8% or 0%. The tax is known as CBL or VAT. Italy has a standard rate of 22%, with reduced rates of 10%, 6% or 4%. The tax is called IVA. Latvia has a standard rate of 21% and a reduced rate of 10%. The tax is called PVN and the local currency is the Latvian Lats (Ls) which has 100 Santims to the Lats. Lithuania has a standard VAT rate of 21%, with reduced rates of 9% or 5%. The tax is called PVM and the local currency is the Lithuanian Litas (Lt) which has 100 Centas to the Litas. Luxembourg has a standard VAT rate of 16% (temporarily down from 17%), with reduced rates of 12%, 9%, 6%, 3%. As in France, the tax is called TVA. Malta has a standard VAT rate of 18% & a reduced rate of 5%. The Netherlands has a standard rate of 21%, with reduced rates of 9% & 0%. The tax is called BTW. Poland has a standard rate of 23%, with reduced rates of 8%, 5% & 0%. The tax is called PTU and the local currency is the Polish Zloty (zl) which has 100 Grosz to the Zloty. Portugal has a standard VAT rate 23%, and reduced rates of 13% and 6%. The tax is called IVA. Romania has a standard VAT rate of 19% & a reduced rate of 9%. The tax is called TVA and the local currency is the Romanian leu (L), with 100 Ban to the Leu. Slovakia has a standard VAT rate of 20% & a reduced rate of 10%. The tax is called DPH. Slovenia has a standard rate of 22% & a reduced rate of 9.5%. The tax is called DDV. Spain has a standard VAT rate of 21%, with reduced rates of 10% and 4%. The tax is called IVA. Sweden has a standard VAT of 25%, with reduced rates of 12% and 6%. The tax is called Moms and the local currency is the Swedish krona with 100 Ore to the Krone. United Kingdom has a standard VAT rate of 20% [correct as of November 2022], with other reduced rates. The Rest of The World An overview of VAT rates outside the European Union follows: Albania charges a tax called TVSH which has a standard rate of 20%. The local currency is the Albanian lek (L), 100 Quintars to the Lek. Argentina IVA 21% 10.5% 05 Argentine peso ($) 100 Centavos to the peso Armenia AAH – 20% 0% Armenian Dram 100 Luma to the Dram Barbados charges VAT at a single rate of 15%. The local currency is the Barbadian dollar with 100 cents to the dollar. Bosnia & Herzegovina – PDV 17% B & H convertible mark KM 100 Fennings to the mark Brazil – IPI 12%, ICMS 25%, ISS 5% Brazilian real R$ 100 centavos to the real Chile charges IVA at a single rate of 19%. The local currency is the Chilean peso which has 100 centavos to the peso. Colombia IVA 15% Colombian peso $ 100 centavos to the peso China 17%, 6%, 3% Chinese Yuan ¥ 100 Fen to the yuan Croatia PDV 22% 10% Croatian Kuna (kn) 100 Lipa to the Kuna The Dominican Republic charges ITBIS at a standard rate of 16% and reduced rates of 12% and 0%. The local currency is the Dominican Peso which has 100 Centavos to the Peso. Ecuador charges IVA at a standard rate of 12%. The country uses the US dollar. Fiji charges VAT at a standard rate of 15%, from January 2011. The local currency is the Fijian dollar with 100 cents to the dollar. Georgia charges a tax called DgHg at a standard rate of 18%. The local currency is the Georgian lari which has 100 Tetri to the lari. India charges VAT with a standard rate of 12.5%, and reduced rates of 4%, 1% and 0%. The local currency is the Rupee with 100 Paisa to the Rupee. Japan charges Consumption Tax at a standard rate of 5%. The local currency is the Japanese Yen, with 100 sen to the Yen. South Korea VAT 10% South Korean won 100 Jeon Kosovo charges a tax called TVSH at a single rate of 16%. The local currency is the Euro. Lebanon charges TVAat a single rate 10%. The local currency is the Lebanese pound with 100 piastre to the pound. Moldova charges TVA at a standard rate 20% and a reduced rate of 5%. The local currency is Moldovan leu with 100 ban to the leu. Pakistan charges GST at a standard rate of 16%, with reduced rates of 1% and 0%. The local currency is the Pakistani rupee, with 100 paisa to the rupee. Serbia charges PDV at a standard rate of 18%, with reduced rates of 8% and 0%. The local currency is the Serbian dinar, with 100 para to the dinar. Switzerland charges VAT in the form of MWST, TVA, IVA, and TPV. The standard rate is 7.6% and there are reduced rates of 3.6% and 2.4%. The local currency is the Swiss Franc which has 100 rappen to the franc. Turkey charges KDV at a standard rate of 18% and reduced rates of 8% and 1%. The local currency is the Turkish lira with 100 kurus to the lira. Uruguay charges IVA at a standard rate of 22% and a reduced rate of 10%. The local currency is the Uruguayan peso with 100 centesimo to the peso. New Zealand charges GST at a single rate of 12.5%. The currency locally is the New Zealand dollar, with 100 cents to the dollar. Iceland charges VSK at a standard rate of 24.5% and a reduced rate of 7%. The local currency is the Icelandic krona, with 100 Eyrir to the krona. Some form of VAT is charged in most countries around the world and comes under different names. It is subject to change, so it is always wise to check the up to date VAT situation if you have any business dealings concerning VAT around the world.
{"url":"https://vatcalculator.com/vat-rates-around-the-world/","timestamp":"2024-11-07T06:20:25Z","content_type":"text/html","content_length":"66584","record_id":"<urn:uuid:6714ad16-397a-4e47-832e-64659d4af7a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00518.warc.gz"}
ECCC - Reports tagged with pseudorandom pseudodistributions TR21-020 | 15th February 2021 Gil Cohen, Dean Doron, Oren Renard, Ori Sberlo, Amnon Ta-Shma Error Reduction For Weighted PRGs Against Read Once Branching Programs Weighted pseudorandom generators (WPRGs), introduced by Braverman, Cohen and Garg [BCG20], is a generalization of pseudorandom generators (PRGs) in which arbitrary real weights are considered rather than a probability mass. Braverman et al. constructed WPRGs against read once branching programs (ROBPs) with near-optimal dependence on the error parameter. Chattopadhyay and ... more >>>
{"url":"https://eccc.weizmann.ac.il/keyword/19977/","timestamp":"2024-11-08T02:52:56Z","content_type":"application/xhtml+xml","content_length":"19590","record_id":"<urn:uuid:5b45b037-49e6-434d-8fa4-1e144e1fdfbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00555.warc.gz"}
Defense Using the Dark Arts A technical intro to adversarial ML Welcome to the 22nd edition of Black Box. This is the second part of Literature Review, where I dive into emerging research related to generative AI. I covered affective computing last time. This post assumes a conceptual understanding of neural network training, basic linear algebra, and familiarity with mathematical notation. For a refresher, see this generative AI primer that I I have been interested in protecting data and content from AI models since I proposed accelerating model collapse to preserve API access. I think that defensively using offensive techniques has a lot of potential, most of which is unrealized due to the nascency of this research. So when Nightshade, the data poisoning tool, was announced a few weeks ago, I was eager to dig into the paper. What I quickly realized is in this branch of machine learning, there is a gap in knowledge between casual interest and understanding research. Luckily, I studied math alongside economics in college and had previously reviewed the basics of generative AI. It was time for a deep dive. Adversarial ML Nightshade belongs to a field called adversarial machine learning, which is the study of attacks on ML models and defenses against such attacks. To be specific, Nightshade is a data poisoning attack, which modifies the training data of a model so that it produces errors. This is related to another kind of attack called evasion, which modifies the inputs to make a model—usually a classifier—systematically produce wrong outputs, e.g., misclassify. Much adversarial ML research has been on evading image classifiers; Nightshade builds on this body of work. Adversarial attacks can be further categorized as either • White box, in which the attacker has access to a model’s parameters; or black box (more on that in a second). • Targeted, in which the attacker is aiming to produce errors of a specific class; or non-targeted. Most practical applications are targeted. While the nature of adversarial spaces is unclear, the leading hypothesis is that they result from practical limitations. Since a model cannot be trained on the entire input space, the distributions of the training and test data will not match exactly. Adversarial space is the difference, and so inputs in that space are misclassified and become adversarial examples. Adversarial ML research has “traditionally” focused on white box attacks as they can transfer to black box models. This is possible since models trained on similar data are likely to have adversarial space that partially overlap. In fact, this probability increases with dimensionality due to a property called distance concentration1. ML models are also inherently prone to behaving similarly since they are designed to generalize across training datasets and architecture. Similar classifiers should therefore partition their input space into classes with decision boundaries that are close to each other. Distance measures What “close” means matters a lot in adversarial ML because attacks that are obviously manipulated will be caught. Instead of cosine similarity, the most common distance measures used in adversarial ML are p-norms, which are written as || • ||ₚ in literature. Given a pair of vectors, their • l₁ norm or Manhattan distance is the sum of the absolute differences of their elements. It is often used for discrete data. • l₂ norm or Euclidean distance is given by the Pythagorean theorem. It is often used for continuous data. • lₚ norm or Minkowski distance generalizes this to vectors of any finite order p. • l ͚ norm or Chebyshev distance is their largest difference among any of their dimensions. Speaking of literature, there are two foundational papers in adversarial ML that are worth understanding as background for Nightshade. I review these in turn. Fast Gradient Sign Method (2015) An adversarial example takes advantage of the fact that a classifier C has a tolerance so that a perturbation η to an input x below a threshold ϵ maps to the same class y, i.e., C(x) = C(x’) = y where x’ = x + η and ||η|| ͚ < ϵ. For intuition, consider the activation of a node in a neural network. Given a weight vector w and an adversarial example x’, the dot product is wᵀx’ = wᵀx + wᵀη. The perturbation wᵀη can then be maximized2 if η = ϵsign(w). This is a linear perturbation, but neural networks are generally nonlinear. However, Goodfellow et al. hypothesize that they behave linearly enough—at least locally—that they are susceptible to linear perturbations. (This can easily be shown, which they do in the rest of the paper.) Their strategy is based on gradients. In training, backpropagation updates w to minimize the loss function J given an input x. A gradient-based attack is effectively the inverse as it holds w constant and perturbs x to maximize J, subject to some ϵ. For a model with parameters θ, the optimal perturbation is therefore \(\eta=\epsilon sign(\nabla_xJ(\theta,x,y))\) which Goodfellow et al. call the Fast Gradient Sign Method, a non-targeted white box attack. But what about targeted attacks? Carlini and Wagner (2017) Carlini and Wagner propose the following to calculate a targeted white box attack on a classifier C. Let C*(x) be the correct class for a valid input x and t ≠ C*(x) be the target class. Finding an adversarial example x’ = x + δ is then stated as min(D(x, x + δ)) subject to C(x + δ) = t and x + δ ∈ [0, 1]ⁿ, where D is a distance measure and n is the dimension of x. This is difficult to solve directly since C is nonlinear, so Carlini and Wagner define an objective function f such that C(x + δ) = t if and only if f(x + δ) ≤ 0. Conceptually, 1—f is how close x + δ is to being classified as t, which makes f a loss function3. If D is a p-norm, then the problem can be restated as min(||δ||ₚ + cf(x + δ)) subject to x + δ ∈ [0, 1]ⁿ, where c > 0 is a weighing factor for the loss term. Carlini and Wagner present several candidates for f and empirically find that the best is \(f(x')=max(Z(x')_{i\neq t}-Z(x')_t, -k)\) where Z( • ) gives the output from all of a neural network’s layers before the softmax function—a generalization of the logistic function to n dimensions—and k is a confidence threshold (as a lower limit to the loss function). The Z expression is essentially the difference between what the classifier thinks x’ is and what the attacker wants the classifier to think it is4. Carlini and Wagner make one more transformation to accommodate for the fact that many optimization algorithms are not bounded. They apply a change of variable5 on δ and introduce w such that δ = ½( tanh(w) + 1)—x. The final optimization problem is then which they solve using the Adam optimizer. Carlini and Wagner’s approach can quickly generate highly robust targeted white box attacks. Glaze (2023) Nightshade is an extension of prior work by Shan et al. that they call Glaze, a style cloaking technique that applies Carlini and Wagner’s optimization to text-to-image generators. These are harder to evade than image classifiers because they retain more of the features extracted from training images in order to generate original images6: • During training, a text-to-image generator takes in an image x and uses a feature extractor Φ to produce its feature vector Φ(x). • Simultaneously, an encoder E takes a corresponding text caption s and produces a predicted feature vector E(s). • The parameters of E are optimized in training so that E(s) = Φ(x). • At generation time, a user passes a text prompt sᵢ into E and a decoder F decodes E(sᵢ) to produce the generated image xᵢ = F(E(sᵢ)). Shan et al. focus on style mimicry attacks, where a text-to-image generator is used to create art in the style of a particular artist without their consent. Existing protective techniques rely on image cloaking, which are designed for classifiers; they are ineffective against style mimicry because they shift all features in an image instead of focusing on only features related to style. Since it is difficult to explicitly identify and separate style features, Shan et al. do so implicitly by using another feature extractor Ω to transfer artwork x into a target style T that is different from the artist’s, Ω(x, T). (T is selected based on the distances between the centroid of Φ(x) and that of the feature spaces of candidate styles.) Then calculating the style cloak δₓ can be stated as the optimization min(D(Φ(x + δₓ)— Φ(Ω(x, T)))) subject to |δₓ| < p, where D is a distance measure and p is the perturbation budget. Note that since Glaze is a black box evasion attack, the same model can act as both Φ and Ω. In other words, Glaze can use DALL-E, Midjourney, Stable Diffusion, etc. against themselves! I found this to be particularly satisfying as it is the ultimate form of defensively using offensive techniques, at least from the perspective of the artists. Following Carlini and Wagner, Shan et al. then combines the restraint into the optimization problem \(min_{\delta_x}(||\Phi(x+\delta_x)-\Phi(\Omega(x,T)||_2^2+\alpha max(LPIPS(\delta_x)-p,0))\) where α > 0 is a weighing factor for the loss term, LPIPS is a measure of the perceived distortion, and D is instantiated as the l₂ norm. Glaze’s style cloak was empirically successful in protecting art from being learned by Stable Diffusion and DALL-E, as judged by artists7. It helps that artists are willing to accept fairly large p because their current methods of protecting are quite disruptive (e.g., watermarks). However, Glaze can only protect new art since most existing artwork is already part of training data. Nightshade (2023) Nightshade goes beyond cloaking and damages the offending model itself. It takes advantage of the fact that text-to-image generators exhibit concept sparsity despite having large training datasets. That is, a very small portion of the training data contains a given term or its semantically related terms. As a result, Shan et al. hypothesize that text-to-image generators are much more vulnerable to data poisoning (at the concept level) than is commonly believed. They prove this by proposing Nightshade, a prompt-specific data poisoning attack based on mismatched text/image pairs. To minimize the number of poison samples for ease of implementation, Nightshade follows two design principles: • Maximize the effect of each sample by including the keyword C in each poison prompt so that it targets only the parameters associated with C. • Minimize conflicts among different pairs, and thus the overlap of their contributions to the perturbation norm, by creating original images of the target concept T using another text-to-image If Φ is the feature extractor of the victim model and Ω is that of the poison image model, than a valid image x corresponding to the poison prompt can be perturbed by δₓ so that it is poisoned into Ω (x, T). This can be calculated using Glaze8. Nightshade produces stronger poisoning effects than previous techniques and they bleed through to related concepts. They also stack if concepts are combined into a single prompt. Furthermore, if many Nightshade attacks target different prompts on the same model, general features will become corrupted and its image generation function eventually collapses! It is not difficult to imagine that all platforms that host media will protect their content with adversarial techniques like these in the future. I would love to learn more if you’re building or researching in this space—reach out at wngjj[dot]61[at]gmail.com. Until then, I will be thinking about how every new offensive threat could be a new defensive opportunity. ∎ Have you used adversarial ML to protect your content? Tell me about it @jwang_18 or reach out on LinkedIn. Volume generalizes as distance raised to the power of the dimension, so increasingly more volume is on the edges at high dimensions. Here, the edges are decision boundaries. This can be arbitrarily large for sufficiently high n since l ͚ does not grow with dimensionality. Defining f as the complement probability also enables it to be combined with D into a single minimization in the next step. The first term represents the probability of the class of which C predicts x' is part and the second term represents the probability that C classifies x' as t instead. Note that Z is not a probability itself but a softmax input. Since tanh( • ) ∈ [-1, 1], this is equivalent to x + δ ∈ [0, 1]ⁿ. This is a very large output space. Classifiers have limited output classes and therefore do not have to retain as many features. Shan et al. worked closely with professional artists and the Glaze paper includes their perspectives in §3.1, which I recommend reading. In practice, Nightshade uses prompts from a valid dataset of text/image pairs to easily find x.
{"url":"https://jwang18.substack.com/p/adversarial-ml?open=false#%C2%A7fast-gradient-sign-method","timestamp":"2024-11-09T07:11:55Z","content_type":"text/html","content_length":"181655","record_id":"<urn:uuid:df9022a2-162b-4af2-b94b-eb550bb1f57e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00317.warc.gz"}
OpenUCT :: Browsing by Author "Bello-Ochende, Tunde" Browsing by Author "Bello-Ochende, Tunde" Now showing 1 - 10 of 10 Results Per Page Sort Options • A numerical protocol for death-time estimation (2021) Mfolozi, Sipho; Malan, Arnaud George; Bello-Ochende, Tunde; Martin, Lorna Jean A body's axial temperature distribution at death was experimentally demonstrated by the author to predict the postmortem temperature plateau (PMTP), which is known to affect the measured core temperature value and hence death-time estimation. Yet today's methods of death-time estimation apply only a single-point approximation of a body's core temperature in life as well as a single-point measurement of a body's core temperature after death. Four studies were carried out to understand the relationship between a body's axial temperature distribution and the PMTP. The first study numerically approximated antemortem temperature distribution in an MRI-built, high-definition, anatomicallys egmented 3D computational human phantom consisting of several hundred tissues. Metabolic heat generation (QQmm) and blood perfusion (wwbb) parameters were applied to all thermogenic tissue using the Pennes BioHeat Model. The study demonstrated that the antemortem axial temperature distribution was nonlinear, that tissue temperature distribution was inhomogeneous, and that the position and size of the antemortem central isotherm was predicted by the size, shape and location of the most thermogenic internal organ in a given axial plane. Numerical approximation of a body’s antemortem axial temperature distribution using this study’s materials and methods was proposed for death-time estimation. The second study examined postmortem axial heat transfer. The approximated antemortem axial temperature distribution constituted the initial condition. QQmm and wwbb were set to zero to simulate death. Postmortem cooling was simulated in still air, on a cold concrete floor and on a heated floor. The antemortem central isotherm that single-point core thermometry detects was the PMTP. Its size at death, body radius, axial thermometry-depth and length of the postmortem interval (PMI) all predicted PMTP length. The cold concrete floor shifted the central isotherm away from the floor, while the heated floor shifted it towards the floor. Ground temperature and material properties, along with the aforementioned PMTP predictors, result in variation in measured single-point core thermometry values, yet today’s death-time estimation methods do not measure, approximate or standardise them. This is a source of uncertainty. This study demonstrated that a body’s postmortem axial thermal profile was very specific to the PMI at which it exists, including during the PMTP that single-point core thermometry detects. This study proposed a body’s measured postmortem axial thermal profile for death-time estimation to reduce PMTP uncertainties. The study also proposed numerical modelling of the ground, its temperature and material properties. The third study proposed a multipoint axial thermometry (MAT) device to measure a body’s postmortem axial thermal profile. The author designed the device prototype. Its fabrication was outsourced. Empiric and numerical MAT studies were conducted on a cooling dummy and 3D human phantom, respectively. MAT curves indicated a parabolic shape. The fourth study proposed a numerical protocol for death-time estimation that iteratively tested a MAT profile measured at an unknown PMI from a decedent using the proposed MAT device against MAT profiles predicted by numerical simulations of sequentially longer candidate PMIs. A candidate PMI whose MAT profile matched was considered the PMI estimated by the protocol. The proposed protocol applied the exact historical meteorological temperatures that existed during the final estimated PMI. Application of the protocol was demonstrated using a fictitious scenario in which a candidate PMI within 120s of the final estimated PMI was excluded. Potential sources of uncertainty of the proposed protocol were discussed and concluding remarks on future research were made. • Effects of thermal stresses on Pressurised Water Reactor nuclear containment vessels following a Loss of Coolant Accident with assimilated containment filtered venting system (2020) Hartnick, Angelo; Bello-Ochende, Tunde In a nuclear power plant, the last barrier under normal and accident operations is the containment building. This is normally constructed from concrete reinforced with steel bars, which are prestressed to enhance the overall capability to withstand thermodynamic stresses like over-pressurisation and high temperatures. The failure of this final barrier will lead to the release of radioactivity to the surrounding environment. To examine the effects of thermo-hydraulic stresses on PWR containment following a LOCA, a model is proposed with simulated scenarios performed at the Koeberg Nuclear Power Station as a case study. The accidents were simulated using the Koeberg engineering simulator to obtain the output data. The scenario for the proposed model correlates the critical mass flow from a double-ended guillotine break to the containment pressure and temperature increase. Different containment filtered venting systems (CFVS) are also investigated in this study as severe accident management systems. CFVS have historically been included in boiling water reactor (BWR) designs, but following the Fukushima Daiichi nuclear accident, they are being introduced as severe accident management systems to manage the threat of containment over-pressurisation in pressurised water reactors (PWR). Finally, the rate of change in containment pressure and temperature is analysed and compared to literature, with the incorporation of a simulated filtered venting system to the PWR containment building. • Exergy analysis of a Stirling cycle (2017) Wills, James Alexander; Bello-Ochende, Tunde In this dissertation the analysis of the Stirling engine is presented, this research topic falls within the category of thermal energy conversion. The research that was conducted is presented in three chapters of which the topics are: the effects of allocation of volume on engine performance, the GPU-3 (Ground Power Unit - developed by GM) Stirling engine analysis, and the optimisation of a 1000 cm³ Stirling engine with finite heat capacity rates at the source and the sink. The Stirling engine has many advantages over other heat engines, as it is extremely quiet, has multi-fuel capabilities and is highly efficient. There is also significant interest in using Stirling engines in low to medium temperature solar thermal applications, and for waste heat recovery. To develop high-performance engines that are also economically viable, advanced mathematical models that accurately predict performance and give insight into the different loss mechanisms are required. This work aims to use and adapt such a model to analyse the effects of different engine parameters and to show how such a model can be used for engine optimisation using the Implicit Filtering algorithm. In the various analyses that are presented, the dynamic second order adiabatic numerical model is used and is coupled to equations that describe the heat and mass transfer in the engine. The analysis shows that the allocation of volume has a significant effect on engine performance. It is shown that in high-temperature difference (HTD) engines, increasing dead-volume ratio increases efficiency and decreases specific work output. In the case of low-temperature difference (LTD) and medium-temperature difference (MTD) engines, there is an optimal dead-volume ratio that gives maximum specific work output. It was also found that there are optimal swept volume ratios and that the allocation of heat exchanger volume has a negligible effect on engine performance - so long as the dead-volume ratio is optimal. The second order model with irreversibilities included was used to perform an exergy analysis of the GPU-3 Stirling engine. This model compared well with experimental results and the results from other models found in the literature. The results of the study show the two different approaches in modelling the engine losses and the effect that the various engine parameters have on the GPU-3 power output and efficiency. The optimisation of the 1000 cm³ Stirling engine was performed using a model with finite heat capacity rates at the source and the sink, fixed number of heater and cooler tubes, and four different regenerator mesh types. The engine geometry was optimised for maximum work output using the implicit filtering algorithm, and the results show the dominant effect that the regenerator has on engine performance and the geometry that gives maximum work output. The critical insights obtained from this research are the importance of the dead-volume ratio in engine analysis, the merits of the novel Second law Stirling engine model, and the importance of regenerator mesh choice and geometry. The Implicit filtering algorithm is also shown to be a suitable choice of optimisation algorithm to use with Stirling engine mathematical models. • Numerical Design of a 3-Stage Cascaded Thermal Energy Storage System for Solar Application (2023) Oguike, Chimezie; Bello-Ochende, Tunde The analysis of a three-stage cascaded thermal energy storage is presented in this dissertation. Cascaded thermal energy storage systems has many advantages over conventional thermal energy storages, majorly it allows for maintaining of a nigh-constant temperature between the HTF and PCM during the charging and discharging cycles leading to improved performance of the system. This dissertation investigates the performance and transient response of a packed bed operating under high-temperature conditions with phase change materials in varying encapsulations (cascaded in a three-stage format) during charging and discharging cycle by employing computational numerical techniques via commercially available ANSYS Fluent software. The analysis was performed for nine different encapsulation geometries with increased surface area and constant volume in comparison to the base geometry (sphere) to determine the effects of each new encapsulation on the performance of the thermal energy storage (TES). The computational model used in the development of this work compares well with the experimental results by Raul [1]. Additionally, the effect of packing scheme/PCM layout is also investigated in this work. Comparative data analysis was performed on the TES with the various PCM encapsulation designs and the standard spherical PCM encapsulation to determine which geometry provides better performance during charging and discharging cycles. The results of this study show that the thermal performance of the cascaded thermal energy storage improves with each new encapsulation as evidenced by the decreases in charging and discharging times in comparison to the base encapsulation. This study also highlights which capsule design is most practical when considering the bed dimension increases/ decreases with in increasing thermal performance. This study's findings can serve as a benchmark for future optimization of cascaded thermal energy storage systems. • Numerical investigation of the convective heat transfer coefficient of the human body using a representative cylindrical model (2017) Eferemo, Daniel; Bello-Ochende, Tunde; Malan, Arnaud G The principal objective of this study is to investigate, develop and verify a framework for determining the convective heat transfer co-efficient from a cylindrical model that can easily be adaptable to more complex geometry - more specifically the human body geometry. Analysis of the model under forced convection airflow conditions between the transition velocity of about 1m/s - calculated using the Reynolds number - up until 12m/s were carried out. The boundary condition, however, also included differences in turbulence intensities and cylinder orientation with respect to wind flow (seen as wind direction in some texts). A total of 90 Computational Fluid Dynamic (CFD) calculations from these variations were analysed for the model under forced convective flow. Similar analysis were carried out for the model under natural convection with air flow velocity of 0.1m/s. Here, the temperature difference between the model and its surrounding environments and the cylinder orientation with respect to wind flow were varied to allow for a total of 15 CFD analysis. From these analysis, for forced convection, strong dependence of the convective heat transfer coefficient on air velocity, cylinder orientation and turbulence intensity was confirmed. For natural convection, a dependence on the cylinder orientation and temperature difference between the model and its environment was confirmed. The results from the CFD simulations were then compared with those found in texts from literature. Formulas for the convective heat transfer coefficient for both forced and natural convection considering the respective dependent variables are also proposed. The resulting formulas and the step by step CFD process described in this thesis provides a framework for the computation of the convective heat transfer coefficient of the human body via computer aided simulations. This framework can easily be adaptable to the convective heat transfer coefficient calculations of the human body with some geometric modelling adjustments, thus resulting in similar representative equations for a human geometric model. • Optical and Thermal Analysis of a Heteroconical Tubular Cavity Solar Receiver (2018) Maharaj, Neelesh; Bello-Ochende, Tunde The principal objective of this study is to develop, investigate and optimise the Heteroconical Tubular Cavity receiver for a parabolic trough reflector. This study presents a three-stage development process which allowed for the development, investigation and optimisation of the Heteroconical receiver. The first stage of development focused on the investigation into the optical performance of the Heteroconical receiver for different geometric configurations. The effect of cavity geometry on the heat flux distribution on the receiver absorbers as well as on the optical performance of the Heteroconical cavity was investigated. The cavity geometry was varied by varying the cone angle and cavity aperture width of the receiver. This investigation led to identification of optical characteristics of the Heteroconical receiver as well as an optically optimised geometric configuration for the cavity shape of the receiver. The second stage of development focused on the thermal and thermodynamic performance of the Heteroconical receiver for different geometric configurations. This stage of development allowed for the investigation into the effect of cavity shape and concentration ratio on the thermal performance of the Heteroconical receiver. The identification of certain thermal characteristics of the receiver further optimised the shape of the receiver cavity for thermal performance during the second stage of development. The third stage of development and optimisation focused on the absorber tubes of the Heteroconical receiver. This enabled further investigation into the effect of tube diameter on the total performance of the Heteroconical receiver and led to an optimal inner tube diameter for the receiver under given operating conditions. In this work, the thermodynamic performance, conjugate heat transfer and fluid flow of the Heteroconical receiver were analysed by solving the computational governing Equations set out in this work known as the Reynolds-Averaged Navier-Stokes (RANS) Equations as well as the energy Equation by utilising the commercially available CFD code, ANSYS FLUENT®. The optical model of the receiver which modelled the optical performance and produced the nonuniform actual heat flux distribution on the absorbers of the receiver was numerically modelled by solving the rendering Equation using the Monte-Carlo ray tracing method. SolTrace - a raytracing software package developed by the National Renewable Energy Laboratory (NREL), commonly used to analyse CSP systems, was utilised for modelling the optical response and performance of the Heteroconical receiver. These actual non-uniform heat flux distributions were applied in the CFD code by making use of user-defined functions for the thermal model and analysis of the Heteroconical receiver. The numerical model was applied to a simple parabolic trough receiver and reflector and validated against experimental data available in the literature, and good agreement was achieved. It was found that the Heteroconical receiver was able to significantly reduce the amount of reradiation losses as well as improve the uniformity of the heat flux distribution on the absorbers. The receiver was found to produce thermal efficiencies of up to 71% and optical efficiencies of up to 80% for practically sized receivers. The optimal receiver was compared to a widely used parabolic trough receiver, a vacuum tube receiver. It was found that the optimal Heteroconical receiver performed, on average, 4% more efficiently than the vacuum tube receiver across the temperature range of 50-210℃. In summary, it was found that the larger a Heteroconical receiver is the higher its optical efficiency, but the lower its thermal efficiency. Hence, careful consideration needs to be taken when determining cone angle and concentration ratio of the receiver. It was found that absorber tube diameter does not have a significant effect on the performance of the receiver, but its position within the cavity does have a vital role in the performance of the receiver. The Heteroconical receiver was found to successfully reduce energy losses and was found to be a successfully high performance solar thermal tubular cavity • Optimisation of feedwater heaters and geothermal preheater in fossil-geothermal hybrid power plant (2019) Nsanzubuhoro, Christa; Bello-Ochende, Tunde; Malan, Arnaud Sufficient energy supply is a fundamental necessity for the stimulation of socio-economic advancement. However, the current rapid rise in urbanisation has resulted in the significant increase in energy demands. Consequently, the current conventional energy supply systems are facing numerous challenges in meeting the world's growing demand for energy sustainably. Thus, there is an urgent and compelling need to develop innovative, more effective ways to integrate sustainable renewable energy solutions into the already existing systems or better yet, create new systems that all together make use of renewable energy. This research aims to investigate and establish the optimum working conditions of a feedwater heater and geothermal preheater in a power plant that makes use of both renewable and non-renewable energy resources, where renewable energy (geothermal energy) is used to boost the power output in an environmentally sustainable way. Henceforth, a simplified model of a Rankine cycle with single reheat and regeneration and another model with a geothermal preheater substituting the low-pressure feedwater heater were designed. The Engineering Equations Solver (EES) software was used to perform an analysis of the thermodynamic performance of the two models designed. The models were used to analyse the energetic and exergetic effects of replacing a low-pressure feedwater heater with a geothermal preheater sourcing heat from a low temperature geothermal resource (temperature generally < 150°C). The results of this research work reveal that the replacement of the low-pressure feedwater heater with a geothermal preheater increases the power generated since less heat is bled from the low-pressure turbine (allowing more heat energy from the steam to be converted into mechanical energy in the turbine). Applying the principle of the Second Law of thermodynamics analysis, the Number of Entropy Generation Units (EGU) and Entropy Generation Minimisation (EGM) analysis were employed to optimise the designed hybrid system. The feedwater heaters and geothermal preheater were modelled as counter-flow heat exchangers and a downhole co-axial heat exchanger, respectively. The feedwater heaters were optimised by means of the method of Number of Entropy Generation Units whereas the geothermal preheater was optimised by means of the Entropy Generation Minimisation analysis method. Owing to the optimisation of these components, the operating conditions of the boiler and turbines were secondarily improved. Overall, this research emphasises the impact renewable energy has on major power plant systems that are in operation and run on non-renewables. • Sensitivity analysis of the secondary heat balance at Koeberg Nuclear Power Station (2021) Boyes, Haydn; Bello-Ochende, Tunde At Koeberg Nuclear Power Station, the reactor thermal power limit is one of the most important quantities specified in the operating licence, which is issued to Eskom by the National Nuclear Regulator (NNR). The reactor thermal power is measured using different methodologies, with the most important being the Secondary Heat Balance (SHB) test which has been programmed within the central Koeberg computer and data processing system (KIT). Improved accuracy in the SHB will result in a more accurate representation of the thermal power generated in the core. The input variables have a significant role to play in determining the accuracy of the measured power. The main aim of this thesis is to evaluate the sensitivity of the SHB to the changes in all input variables that are important in the determination of the reactor power. The guidance provided by the Electric Power Research institute (EPRI) is used to determine the sensitivity. To aid with the analysis, the SHB test was duplicated using alternate software. Microsoft Excel VBA and Python were used. This allowed the inputs to be altered so that the sensitivity can be determined. The new inputs included the uncertainties and errors of the instrumentation and measurement systems. The results of these alternate programmes were compared with the official SHB programme. At any power station, thermal efficiency is essential to ensure that the power station can deliver the maximum output power while operating as efficiently as possible. Electricity utilities assign performance criteria to all their stations. At Koeberg, the thermal performance programme is developed to optimize the plant steam cycle performance and focusses on the turbine system. This thesis evaluates the thermal performance programme and turbine performance. The Primary Heat Balance (PHB) test also measures reactor power but uses instrumentation within the reactor core. Due to its location inside the reactor coolant system, the instrumentation used to calculate the PHB is subject to large temperature fluctuations and therefore has an impact on its reliability. To quantify the effects of these fluctuations, the sensitivity of the PHB was determined. The same principle, which was used for the SHB sensitivity analysis, was applied to the PHB. The impact of each instrument on the PHB test result was analysed using MS Excel. The use of the software could be useful in troubleshooting defects in the instrumentation. A sample of previously authorised tests and associated data were used in this thesis. The data for these tests are available from the Koeberg central computer and data processing system. • Thermal analysis of the internal climate condition of a house using a computational model (2020) Knutsen, Christopher; Bello-Ochende, Tunde The internal thermal climatic condition of a house is directly affected by how the building envelope (walls, windows and roof) is designed to suit the environment it is exposed to. The way in which the building envelope is constructed has a great affect on the energy required for heating and cooling to maintain human thermal comfort. Understanding how the internal climatic conditions react to the building envelope construction is therefore of great value. This study investigates how the thermal behaviour inside of a simple house reacts to changes made to the building envelope with the objective to predict how these changes will affect human thermal comfort when optimising the design of the house. A three-dimensional numerical model was created using computational fluid dynamic code (Ansys Fluent) to solve the governing equations that describe the thermal properties inside of a simple house. The geometries and thermophysical properties of the model were altered to simulate changes in the building envelope design to determine how these changes affect the internal thermal climate for both summer and winter environmental conditions. Changes that were made to the building envelope geometry and thermophysical properties include: thickness of the exterior walls, size of the window, and the walls and window glazing constant of emissivity. Results showed that there is a substantial difference in indoor temperatures, and heating and cooling patterns, between summer and winter environmental conditions. The thickness of the walls and size of the windows had a minimal effect on internal climate. It was found that the emissivity of the walls and window glazing had a significant effect on the internal climate conditions, where lowering the constant of emissivity allowed for more stable thermal conditions within the human comfort range. • Thermodynamic design optimisation of an open recuperative twin-shaft solar thermal Brayton cycle with combined or exclusive reheating and intercooling (2017) Meas, Matthew Robert; Bello-Ochende, Tunde The Gouy-Stodola Theorem implies that the net power output of a system can be maximised by synchronously sizing the components, thus minimising the cumulative entropy generation rate. The resulting optimal design is related to, and therefore characteristic of, the cycle configuration, since the entropy generation rates in the individual components are interdependent. In this work, optimal design of three common open solar thermal Brayton cycle variants is investigated and compared using principles of the second law of thermodynamics and the method of entropy generation minimisation. The basic cycle, modified accordingly to construct the reheated, intercooled and combined cases, comprises a modified cavity receiver, a counter-flow plate-type recuperator, and a pair of proprietary automotive turbochargers configured to operate as micro-turbines. An additional modified cavity receiver and cross-flow plate heat exchanger constitute the reheater and intercooler, respectively. Net power output is expressed in terms of the temperature and pressure fields in each case, defined in terms of geometric variables characteristic of the components. Heat addition is calculated using the receiver sizing algorithm developed by Stine and Harrigan. Maximum constraints are applied to the recuperator and intercooler lengths and to the surface temperatures of the receiver and reheater absorber tubes. The dynamic-trajectory method is implemented to optimise the variables such that the net power output is maximised. An array of inputs are considered and compared, including 22 micro-turbine models, eight concentrator diameters ranging from six to 20 meters, and both circular and rectangular absorber tube profiles. The influence of receiver inclination, concentrator optics, environmental conditions and design constraints are investigated and the optimisation subroutines validated in the Flownex simulation environment. Results show the optimised power output, operating conditions and design parameters. The intercooled case demonstrates both the highest ratio of total irreversibility to heat input and the highest power output per unit collector surface area. The combined and reheated cases follow. Temperature differences across the components are identified as the primary cause of entropy generation. The optimised heat exchanger lengths are shown to lie on their maximum constraints, and the channel cross-sections found to decrease in size with increasing mass flow rate such that the heat transfer area is maximised and the heat transfer effectiveness improved. As such, plate counts in the optimised heat exchangers are found to be relatively high, and investigation of various compact heat exchanger designs, and regenerative- as opposed to recuperative heat exchangers, is recommended for future work on this topic. The receiver and reheater geometric parameters are found to change such that the absorber tube surface temperatures are kept below the maximum constraint. Trends in the data obtained for circular section absorber tubes are found to be less smooth than the trends in the data obtained for absorber tubes of rectangular section, indicating that the geometric constraints required to maintain the receiver shape offer greater design flexibility for rectangular section absorber tubes than for absorber tubes of circular section. It is concluded that the increases in the compressor and turbine outlet temperatures with mass flow rate and compressor pressure ratio drive the changes in the temperature differences across the heat exchangers, and thus the component entropy generation rates. The entropy generation rates must in turn be distributed during the optimisation procedure such that the cumulative rate is less than the power output, and all of the constraints are met.
{"url":"https://open.uct.ac.za/browse/author?value=Bello-Ochende,%20Tunde","timestamp":"2024-11-10T01:26:23Z","content_type":"text/html","content_length":"585844","record_id":"<urn:uuid:3bf9212f-0244-409d-a706-fb9b98583527>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00021.warc.gz"}
Bolix Game Rules Board Game Rulebook Rubric Each of the following categories will be graded on a 0. What sort of board would make the game rules the most interesting? A quick tutorial/demonstration of how to play the classic Shut The Box Game from Frik-n-Frak.com! The Game of Boku Boku (also known as Bolix) is a strategy game for two players or teams. It was invented by the American Rob Nelson and is protected by copyright law and US and UK patents. Edirol Hyper Canvas Vsti. The application for the US Patent, claiming to protect 'the ornamental design for a board game', was filed on 10th June 1994 and the patent was granted on 5th September 1995. The game was put on the market by The London Game Company in 1997. Boku belongs to a class of connection games where the players try to connect five marbles in a row. (Read more about the ' of connection games.) The of Boku. There are various names for this game: Boku, Bolix and, indeed, Bollox. Read more about the '.) Boku Publisher: The London Game Company (1997) Quote: The rules are so simple to learn that anyone can be playing in less than a minute but there are so many varied ways the game can be won that new approaches will be continually conceived. A game takes about 10 to 15 minutes to complete, leaving plenty of time for return matches. Mike 21 Crack. As the game reaches its completion, their exists another possible situation. For example, if a player successfully closes every number save 5 (that is, if 5 is the only number still open), the subsequent roll will be of a single die (die = singular of dice). The reason for this is straight-forward. If the total of open numbers is six or less, the probability of rolling that total increases if the player rolls only one die. For instance, the odds of rolling a 5 on two dice are 1/9 or roughly 11% (meaning with two dice, you will roll 5 11% of the time). Using one die, the odds of rolling a 5 are 1/6 or roughly 16.6% (meaning you roll 5 16.6% of the time). In some versions of the rules, players are allowed to decide if they would rather roll 1 or 2 dice if the remaining total is 6 or less; however, it is always better to roll one die, so that is how I've implemented the rules. Play continues until all of the sliders are shut or until an impossible roll occurs. If all of the sliders are shut, the player cries out 'ShutBox!' Indicating that the player has accomplished the goal of the game, and in a multi-player situation, is likely to be the winner. An impossible roll occurs if the sum of the dice cannot be 'shut out' by closing sliders. As mentioned before, the sum of the sliders shut must exactly equal the target sum. If this is not possible given the remaining open numbers, then the game ends. Again, if one rolls an eight, but only the five is left open, the game is over and the score is 5.
{"url":"https://southpolar.netlify.app/bolix-game-rules","timestamp":"2024-11-04T14:24:27Z","content_type":"text/html","content_length":"21666","record_id":"<urn:uuid:454bfa8c-f70d-4c69-8aea-787e91c98e42>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00183.warc.gz"}
*MP2 NO¶ MP2 natural orbitals module by S. Knecht and H. J. Aa. Jensen. The MP2 calculation will produce the MP2 energy and the natural orbitals for the density matrix through second order. The primary purpose of this option is to generate good starting orbitals for CI or MCSCF but also CC wave functions, but it may of course also be used to obtain the MP2 energy, perhaps with frozen core orbitals. For valence correlation calculations it is recommended that the core orbitals are excluded via the .ACTIVE in order to obtain the appropriate correlating orbitals as start for an MCSCF calculation. As the commonly used basis sets do not contain correlating orbitals for the core orbitals and as the core correlation energy therefore becomes arbitrary, a thoughtfully chosen .ACTIVE option can also be of benefit in MP2 energy calculations. See also Ref. [ Jensen1988] for more details. Note: the module works at present only for closed-shell Hartree-Fock reference wave functions. Print level. .MAX VS¶ Maximum number of virtual orbitals in the MP2 calculation. The actual number will be reduced to this value if its exceeded by the number calculated from .ACTIVE. .SEL NO¶ Select a minimal set of natural orbitals given as a string analog to the Specification of orbital strings defined as active orbitals in subsequent correlation calculations. The numbers are: max. occupation, min occupation, safety tolerance (here +-10%). .SEL NO NO-occ 1.99 0.001 0.1 Sets the integral transformation scheme for the **MOLTRA part. Default: default scheme of MOLTRA (scheme 6): Perform a Mulliken population analysis (see .MULPOP) of the MP2 natural orbitals. This can be useful for a comparison with a Mulliken population analysis of the Hartree-Fock orbitals. Specify what two-electron integrals to include during the construction of the MP2 natural orbitals (default: .INTFLG under **HAMILTONIAN).
{"url":"http://www.diracprogram.org/doc/release-24/manual/wave_function/mp2no.html","timestamp":"2024-11-10T01:25:42Z","content_type":"text/html","content_length":"11253","record_id":"<urn:uuid:8b7af72c-67d1-4ba6-b3ba-54e190b2bc44>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00775.warc.gz"}
20 People From Math Problems Spotted In Real Life20 People From Math Problems Spotted In Real Life | DeMilked 20 People From Math Problems Spotted In Real Life Most of us remember the ridiculous-sounding math problems from our school days that went something like “Johnny has 200 apples and has to share them with four of his friends…”. Well, even though they might sound ridiculous, they just might be based on actual real-life situations. People are sharing photos of the times they spotted the people from math problems in real life and they’re absolutely hilarious. It looks like hundreds of watermelons and thousands of bananas seem to be a completely reasonable amount of fruit for these people. But who are we to judge, right? Check out the people from math problems spotted in real life in the gallery below! Read more #1 The Guy My Math Teacher Was Talking About Image source: kart51 #2 My Friend Purchased 28 Industrial Sized Clear Bags Of Cheetos. Each Bag Cost Him $65. What Was The Amount He Paid In Total? Image source: Arttherapist #3 Danny Has 1,496 Bananas Protecting Him From Outside Forces. If He Eats 1,369 Of Them, How Many Are Left? Image source: Max Wark / CTV Kitchener #4 Example From A Math Textbook Image source: chiick #5 I Always Wondered How Many Bananas It Would Take To Fill A Car #6 The Moment The Cashier Found The Person From The Math Problems Image source: zonlin #7 I Was The Kid From Your First Grade Math Problems With 87 Watermelons And 132 Cantaloupes Image source: rostiswag #8 I Think About Him Every Day. Why Did He Need The Bananas? Why Did He Need The Milk? Image source: marya-morevna #9 Soda For 6 Cents. The Obvious Course Of Action. I Think, They Got 600 Bottles. They Are Definitely People From Math Book Image source: HappyWulf #10 Math Problems Never Been This Real #11 He’s The Guy From Your Math Problems Image source: tntien #12 Found The Car From All The Middle School Math Problems Image source: coIox #13 The Man From Our Math Problems Image source: ihatetechnology #14 Met One Of Those Guys From The Math Problems Tonight Image source: X1Pikachu1X #15 This Is The Person You Learned About In Math Class: “Sally Bought 1000 Bags Of Chips” Image source: dcanderson96 #16 A Guy Buys 38 Watermelons, He Can Take 2 In Both Hands, How Many Times Will It Take For Him To Bring All Of Them Home? #17 If Brian Buys 2000 Bananas And Eats Half Of Them In One Sitting, How Many Loaves Of Banana Bread Can He Make With The Remaining Bananas? Image source: thebrijam #18 Found The Guy From The Math Problems Image source: AlbertoBarahona #19 I’m The Guy From Your Math Books Who Has A Ton Of Pizzas In Their Car Image source: xxclownkill3rxx #20 I Found The Guy From All Those Elementary School Math Problems At Walmart Image source: Shaine_Memes Got wisdom to pour?
{"url":"https://www.demilked.com/people-from-math-problems-irl/","timestamp":"2024-11-01T19:50:30Z","content_type":"text/html","content_length":"142313","record_id":"<urn:uuid:f7f633d3-e1ed-4103-95e8-3c46f04d9217>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00583.warc.gz"}
Harmonic Distortion, Odd- and Even-Order J. Donald Tillman Sept 2022 This article is a guided tour through the characteristics of Harmonic Distortion. Welcome. In the audio world, there has been a lot of debate about odd and even order harmonic distortion; their qualities, timbre, audibility, and such. Sometimes boiling down to "even good, odd bad". But I thought the topic would be worth exploring a bit. "Harmonic Distortion" has traditionally been a measure of the quality of an amplifier, or other audio device, with respect to its linearity. The ratio of the output voltage to the input voltage should be a constant value. And if you plot the output-vs-input on an XY plot you should see a perfectly straight line. And any curve, wiggle, or warp in that line would be a nonlinearity. The procedure to measure harmonic distortion is to use the purest available source of a sine wave, apply it to the input, drive the unit at the intended level (often its rated output), and measure the output signal. Any nonlinearities in the unit would show up in the output as a misshapen sine wave, which would have additional harmonic content. So the harmonic distortion is the ratio of the level of the extraneous harmonic content to the sine wave. For this article I created a triptych display that presents the curve, the wave, and the spectrum, all together, for various distortion curves. It's enlightening to see a mechanism from multiple angles simultaneously. The Jupyter Notebook Python source code for the triptych displays is available here: https://github.com/dontillman/distortion-article. Exponential Curve We'll start with an exponential curve. While e^x is a very specific curve, it also serves as an example of the general case where the slope increases at higher x values. The exponential curve is also very significant in electronics; the transfer function of a bipolar transistor is an exponential curve. Specifically: $$I_E = I_{ES}({e^{V_{BE} / V_T} - 1)}$$ Here are the curve, the distorted sine wave, and the harmonic distortion components: For these graphs, the source is a +/- 1.0 Volt peak-to-peak sine wave. The left plot shows the nonlinear curve with the input on the X axis and the output on the Y axis. I placed a sideways sine wave at the bottom of the plot to help make that clear. The nonlinearity is scaled and biased to cover the same +/- 1.0 volt range as the input. The middle plot shows the resulting distorted sine wave, and compares it to a proper sine wave. The right plot shows the resulting spectral components of the distortion. To minimize distractions, the DC component and the fundamental have been removed, as well as any harmonics less than -80dB down. The harmonic colors are assigned from the Electronic Color Code so my electrical engineering readers will feel at home. The distortion level and harmonic content varies with signal level. For demo purposes, the examples in this article the signal level have been hand-tweaked to generate a level of about 10% harmonic distortion. That's a lot in the hifi world, but smaller levels would be difficult to see. Of course this is all in an abstract theoretical world, and in the real world things are a lot more complex. So in this example, we see that an exponential nonlinearity creates multiple harmonics, both odd and even, but the 2nd harmonic predominates. Technically, an exponential distortion curve creates all harmonics, but they drop off pretty quickly. The Parabolic Curve The next curve is a parabola, or x^2, curve. For the parabola, and any other x^n curves, we have to bias the signal up from zero center, else there would be all harmonics and no fundamental. So the approach is to bias the curve to the point where have 10% harmonic distortion. In this case, up 2.5 Volts. The parabolic curve is also important because, due to their construction, Field Effect Transistors exhibit a square law transfer curve: $$I_{DS} = I_{DSS} (1 - V_{GS} / V_P)^2$$ The parabolic curve is unique and especially interesting as its only distortion product is the second harmonic. And this is consistent with the trigonometric identity: $$\cos^2 \theta = {{\cos 2 \theta + 1} \over 2}$$ There will be more on the parabolic curve later... Other x^n Curves How about a cubic curve? Or a quartic, or x^4, curve. They're both remarkably similar, both in the shape of the curve and the harmonic content. And I think this is mostly due to the constraint of driving the stage to a 10% distortion level. x^3/2 Curve Venturing on the other side of the parabola (ahem...), here is an x^3/2 curve. I am including this because because Marshall Leach's SPICE model of a vacuum tube uses an x^3/2 curve: $$i_P = K (\mu v_{GK} + v_{PK})^{3/2}$$ Symmetrical Mechanisms with Only Odd Harmonics We say that a given nonlinearity is symmetrical when the curve for the negative half of the signal is the exact mirror-image opposite of the curve on positive half. For these situations, all the even harmonics cancel out and we are left with just odd harmonics. This is a hyperbolic tangent, or tanh(), curve, which is mirror-image symmetrical. The tanh() curve is also the large signal transfer function of a bipolar transistor pair differential amp circuit. So we see this used all over the place. Sure enough, odd harmonics only. Turning Odd Harmonics into Even Harmonics While a distortion of only odd harmonics is specific to a symmetrical mechanism, and has a readily identifiable spectral characteristic, it's incredibly easy to circumvent. All you have to do to convert a symmetrical curve to a non-symmetrical curve is to add a little bias voltage. So here we take the tanh() curve above and bias the zero point down 0.15 Volts. The original 10% predominantly 2nd order harmonic distortion spreads out over seven harmonics, while keeping the total distortion about the same. That's a pretty dramatic result for such a simple change. We can take it even further. If we bias the tanh() curve down significantly we can turn it into something very closely resembling a parobolic curve, with pretty much entirely 2nd harmonic distortion. Eliminating Even Harmonics: Push-Pull You can eliminate even harmonics by placing two stages in a push-pull arrangement. Push-pull is the circuit topology used in almost every vacuum tube power amplifier. The signal goes through a "phase splitter" stage that delivers balanced positive and negative versions of the audio signal, and each of those drives an output stage on alternate ends of an output transformer. This topology naturally creates a symmetrical transfer function by cancelling the even-order harmonic distortion components. For example, here is the schematic for the power amp on the classic Fender Deluxe guitar amp: From a signal chain point of view, the result is effectively combining the original and inverted transfer functions. $$f_{pushpull}(x) = f(x) - f(-x)$$ If we have two devices with the vacuum tube x^3/2 curve above and place them in a push-pull arrangement, the result is the original curve with the even harmonics removed. And since the 2nd harmonic was by far the most prominent, the result is a total distortion level reduced by well over an order of magnitude. And this is the first plot with no visible distortion on the sine wave. The effect on a bipolar transistor exponential curve is almost identical. Suggesting that, based only on theoritical curves, that vacuum tube and bipolar transistor output stages are nearly identical. Of course in real life things are more complicated. A push-pull version of the parabolic distortion curve is a special case; all of the distortion is in the second harmonic, and it mathematically cancels completely: $$(2.5 + x)^2 - (2.5 - x)^2 = (6.25 + 5x +x^2) - (6.25 - 5x + x^2) = 10x$$ You can also do a push-pull arrangement with the tanh() stage above that was heavily biased to be predominantly 2nd harmonic distortion and the result is the same. But it would be an example of a symmetrical odd-harmonic stage, biased to be even harmonics, and then used in a push-pull pair to be odd harmonics again. It seems like a pointless excersize at first... but maybe not. Two Stages in a Row Discussions of harmonic distortion and devices generally assume that there's only one device. Near zero pieces of audio equipment have exactly one stage. Usually there are multiple stages, in all kinds of configurations, parallel, series, whatever. A single parabolic stage has only 2nd harmonic distortion. But what happens if we put the signal through two parabolic stages in series? 4th harmonic? (For these plots, the signal is adjusted for unity gain and zero offset between the two stages.) Interesting; the 2nd harmonic distortion literally doubled, which makes some sense. And the 3rd and 4th harmonics were added as a side effect. However... that probably won't happen in practical circuits. If I have a gain stage, whether it is a vacuum tube, bipolar transistor, or FET, that gain stage will most likely be wired up to invert the signal ("common cathod", "common emitter", "common source"). For example, here is the preamp of the aforementioned Fender Deluxe: The input goes through the first vacuum tube gain stage, with a gain of 50 or so, and inverted in the process, and then the signal gets knocked down by the passive tone controls, and the volume control, to roughly the original strength, depending on the settings, and gets another boost, and another inversion, by the second "recovery" stage. Here is the same curve as above, but with an inversion in between the two stages. We went from 22% distortion down to 2.5% just by inverting an audio signal? What happened? The 2nd harmonic distortion component in the second stage is of the opposite polarity to the 2nd harmonic distortion of first stage, and mostly cancelled it. I think it's delightful how an "organic" distortion cancelling mechanism is built in to the regular design of vacuum tube preamps, and it goes by unnoticed. And yes, one of my analog design techniques is to set things up so that the nonlinearities of adjacent stages cancel. Local Feedback Finally, we can add negative feedback. Here I'm referring to small amounts of local feedback, such as an emitter resistor. Opamp circuits involve large amounts of global feedback, and they work very differently, more like a servo than an amplifier. And that's another topic entirely. Local feedback has several effects; it reduces harmonic distortion overall, it changes the spectrum of the distortion, it reduces the output level, and it reduces the level of the signal coming in to the nonlinear transfer function, which reduces the distortion some more. Just as an example, here is the parabolic curve, normally 10% 2nd harmonic distortion only, but with 6dB feedback (a loop gain of 1). Given the reduced input signal level, you have the opportunity to drive the signal harder to cause a given level of distortion. So things could get a little confusing at that point. You'd really need to do a proper analysis of your specific circuit. In this case, the harmonics look nothing like the original situation. 'Hope you enjoyed the tour. Overall, it seems to me that the odd/even classification of harmonic distortion is one of those cases where if all you've got is a hammer then everything looks like a nail. Yes, odd and even order distortion is eminently classifiable. But it's also quite malliable, depending on the circuit context. Copyright 2022
{"url":"https://till.com/articles/harmonicdistortion/","timestamp":"2024-11-04T04:01:44Z","content_type":"text/html","content_length":"17961","record_id":"<urn:uuid:6a171b5d-3766-415d-8bd2-4f75b011495d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00262.warc.gz"}
Constrained suboptimality when prices are non-competitive P. Jean-Jacques Herings and Alexander Konovalov Journal of Mathematical Economics, 2009, vol. 45, issue 1-2, 43-58 Abstract: The paper addresses the following question: how efficient is the market system in allocating resources if trade takes place at prices that are not competitive? Even though there are many partial answers to this question, an answer that stands comparison to the rigor by which the first and second welfare theorems are derived is lacking. We first prove a "Folk Theorem" on the generic suboptimality of equilibria at non-competitive prices. The more interesting problem is whether equilibria are constrained optimal, i.e. efficient relative to all allocations that are consistent with prices at which trade takes place. We discuss an optimality notion due to Bénassy, and argue that this notion admits no general conclusions. We then turn to the notion of p-optimality and give a necessary condition, called the separating property, for constrained optimality: each constrained household should be constrained in each constrained market. If the number of commodities is less than or equal to two, the case usually treated in the textbook, then this necessary condition is also sufficient. In that case equilibria are constrained optimal. When there are three or more commodities, two or more constrained households, and two or more constrained markets, this necessary condition is typically not sufficient and equilibria are generically constrained suboptimal. Keywords: Non-competitive; prices; Welfare; Pareto; improvement (search for similar items in EconPapers) Date: 2009 References: View references in EconPapers View complete reference list from CitEc Citations: View citations in EconPapers (11) Downloads: (external link) Full text for ScienceDirect subscribers only Related works: Working Paper: Constrained Suboptimality When Prices are Non-Competitive (2000) Working Paper: Constrained Suboptimality When Prices are Non-Competitive (2000) Working Paper: Constrained suboptimality when prices are non-competitive (2000) This item may be available elsewhere in EconPapers: Search for items with the same title. Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text Persistent link: https://EconPapers.repec.org/RePEc:eee:mateco:v:45:y:2009:i:1-2:p:43-58 Access Statistics for this article Journal of Mathematical Economics is currently edited by Atsushi (A.) Kajii More articles in Journal of Mathematical Economics from Elsevier Bibliographic data for series maintained by Catherine Liu ().
{"url":"https://econpapers.repec.org/article/eeemateco/v_3a45_3ay_3a2009_3ai_3a1-2_3ap_3a43-58.htm","timestamp":"2024-11-11T07:53:14Z","content_type":"text/html","content_length":"15425","record_id":"<urn:uuid:5aed403b-aecd-415e-b0f5-af49f3b832cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00621.warc.gz"}
Decoding Dow (DOW) Through Benjamin Graham's Lens Many investors turn to Benjamin Graham's so-called “Graham number” to calculate the fair price of a stock. The Graham number is √(22.5 * 5 year average earnings per share * book value per share), which for Dow gives us a fair price of $22.0. In comparison, the stock’s market price is $50.41 per share. Dow’s current market price is 129.2% above its Graham number, which implies that there is upside potential -- even for a conservative investors who require a significant margin of safety. The Graham number is often used in isolation, but in fact it is only one part of a check list for choosing defensive stocks that he laid out in Chapter 14 of The Intelligent Investor. The analysis requires us to look at the following fundamentals of Dow: Sales Revenue Should Be No Less Than $500 million For Dow, average sales revenue over the last 5 years has been $73.52 Billion, so in the context of the Graham analysis the stock has impressive sales revenue. Originally the threshold was $100 million, but since the book was published in the 1970s it's necessary to adjust the figure for inflation. Current Assets Should Be at Least Twice Current Liabilities We calculate Dow's current ratio by dividing its total current assets of $17.61 Billion by its total current liabilities of $9.96 Billion. Current assets refer to company assets that can be transferred into cash within one year, such as accounts receivable, inventory, and liquid financial instruments. Current liabilities, on the other hand, refer to those that will come due within one year. Dow’s current assets outweigh its current liabilities by a factor of 1.8 only. The Company’s Long-term Debt Should Not Exceed its Net Current Assets This means that its ratio of debt to net current assets should be 1 or less. Since Dow’s debt ratio is -0.4, the company has much more liabilities than current assets because its long term debt to net current asset ratio is -0.4. We calculate Dow’s debt to net current assets ratio by dividing its total long term of debt of $14.91 Billion by its current assets minus total liabilities of $57.97 The Stock Should Have a Positive Level of Retained Earnings Over Several Years Dow had good record of retained earnings with an average of $22.41 Billion. Retained earnings are the sum of the current and previous reporting periods' net asset amounts, minus all dividend payments. It's a similar metric to free cash flow, with the difference that retained earnings are accounted for on an accrual basis. There Should Be a Record of Uninterrupted Dividend Payments Over the Last 20 Years Dow has offered a regular dividend since at least 2017. The company has returned an average dividend yield of 5.2% over the last five years. A Minimum Increase of at Least One-third in Earnings per Share (EPS) Over the Past 10 Years Dow's EPS growth rate does not meet Graham's requirement of a minimum 30% growth rate over 10 years, but the growth rate is positive nonetheless over a 7 year period. We calculate the EPS growth rate from the values reported in 2017 and 2018, which were $0.60 and $6.21, giving us an average of $3.40. Then we do the same for the years 2022 and 2023, which gives us an average of $3.55 from their reported values of $6.28 and $0.82. The growth rate between the two averages is 4.41% — indicating a respectable upwards EPS growth trend for Dow. Based on the above analysis, we can conclude that Dow satisfies some of the criteria Benjamin Graham used for identifying for an undervalued stock because it is trading above its fair value and has: • impressive sales revenue • a decent current ratio of 1.77 • much more liabilities than current assets because its long term debt to net current asset ratio is -0.4 • good record of retained earnings • an acceptable record of dividends • declining EPS growth
{"url":"https://marketinference.com/analysis/r/2024/10/25/DOW/","timestamp":"2024-11-08T11:36:16Z","content_type":"text/html","content_length":"54552","record_id":"<urn:uuid:c4242b5f-366e-416d-9533-d3ddc1636b75>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00353.warc.gz"}
Large-scale analysis and computer modeling reveal hidden regularities behind variability of cell division patterns in Arabidopsis thaliana embryogenesis Noise plays a major role in cellular processes and in the development of tissues and organs. Several studies have examined the origin, the integration or the accommodation of noise in gene expression, cell growth and elaboration of organ shape. By contrast, much less is known about variability in cell division plane positioning, its origin and links with cell geometry, and its impact on tissue organization. Taking advantage of the first-stereotyped-then-variable division patterns in the embryo of the model plant Arabidopsis thaliana, we combined 3D imaging and quantitative cell shape and cell lineage analysis together with mathematical and computer modeling to perform a large-scale, systematic analysis of variability in division plane orientation. Our results reveal that, paradoxically, variability in cell division patterns of Arabidopsis embryos is accompanied by a progressive reduction of heterogeneity in cell shape topology. The paradox is solved by showing that variability operates within a reduced repertoire of possible division plane orientations that is related to cell geometry. We show that in several domains of the embryo, a recently proposed geometrical division rule recapitulates observed variable patterns, suggesting that variable patterns emerge from deterministic principles operating in a variable geometrical context. Our work highlights the importance of emerging patterns in the plant embryo under iterated division principles, but also reveal domains where deviations between rule predictions and experimental observations point to additional regulatory mechanisms. This manuscript presents a new and interesting work exploring stochastic and deterministic aspects of embryonic cell division in plants. In particular, the power of the proposed approach lies in the quantitative analysis of 3D cell geometries that is combined with quantitative computer modelling. In multicellular organisms, cell division is one of the major mechanisms that subtend the elaboration and maintenance of functional tissue organizations, as observed for example in animal epithelia ( Lemke and Nelson, 2021). In plants, division is the primary determinant of relative cell positions because the cellular wall forbids cell displacements and intercalations (Fowler and Quatrano, 1997). Deciphering the principles that underlie the positioning and orientation of division plane is thus a central question to understand organ development and morphogenesis (Gillies and Cabernard, 2011). The possibility that universal primary physical principles operate in cleavage plane selection has led to the formulation of several geometrical rules relating division plane positioning to mother cell shape (Minc and Piel, 2012), such as Errera’s rule of plane area minimization for cells dividing symmetrically (i.e. producing daughters of approximately identical sizes, Errera, 1888). Although they are essentially phenomenological, such rules have proved useful as proxys to highlight generic cellular mechanisms that may be shared between cells with varying morphologies. Stochastic fluctuations, or noise, play a major role in developmental systems (Meyer and Roeder, 2014; Cortijo and Locke, 2020). For example, at the molecular level, transcriptional noise has been recognized as a source of heterogeneity in cell fates (Meyer et al., 2017; at the cellular level, noise in growth rate has been suggested to contribute to the robustness in the development of organ size and shape Hong et al., 2016; at higher levels, it has been proposed that stochastic fluctuations could subtend plant proprioception up to the organ and organism scales Moulia et al., 2021). However, in contrast with variability and heterogeneity in cell and tissue growth, stochasticity in the positioning of the cell division plane has received much less attention. A noticeable exception is the seminal work of Besson and Dumais, who showed that in several two-dimensional plant systems with symmetric divisions, a stochastic formulation of Errera’s rule accounted better for observed division patterns than its deterministic counterpart (Besson and Dumais, 2011). In addition, the impact on tissue organization of deterministic and variable division rules has been examined from a statistical point of view (Gibson et al., 2006; Sahlin and Jönsson, 2010; Alim et al., 2012; Wyatt et al., 2015) but the combinatorics of cell patterns (possible spatial arrangements of cells) that can result from variable cell divisions has not been examined with a cellular resolution. Overall, systematic analyses of variability in division plane positioning and of its relations to cell shape and tissue topological organization are currently lacking. Here, we used the embryo in the model plant Arabidopsis thaliana to fill this gap, taking advantage of the variable cell division patterns observed in this system after initial rounds of completely stereotyped cell divisions (Mansfield and Briarty, 1991; Capron et al., 2009). We combined 3D image analysis, cell lineage reconstruction, and computer modeling to systematically dissect the spatio-temporal diversity of cell shapes and cell divisions and to challenge the existence of a possible geometrical rule linking cell geometry and division plane positioning. Paradoxically, our quantifications revealed that cell shapes resulting from variable cell divisions were evolving within a restrained repertoire of possibilities, highlighting the existence of hidden geometrical constraints behind the apparent variability of division patterns. We tracked the origin of these constraints back to the mother cell geometry and show that most of the observed patterns could be interpreted in light of a recently proposed division rule relating cell shape and plane positioning (Moukhtar et al., 2019). Our results reveal a unifying principle behind stereotyped and variable cell divisions in Arabidopsis early embryo, suggesting stochasticity is an emergent property of the evolution of cell shapes during the first generations of cell divisions. Cases where observed patterns deviate from the rule illustrate how our model can highlight domains where, beyond cell geometry, additional regulators may be involved in the positioning of the division plane. In Arabidopsis thaliana, the fourth round of cell division leads to a 16-cell (16C) embryo where four different domains can be distinguished based on their longitudinal (apical or basal) and radial (inner or outer) location (Figure 1A). The first four rounds of cell divisions follow invariant patterns, which can be predicted based on cell geometry (Moukhtar et al., 2019). Hence, 16C embryos exhibit cell shapes that are specific to each of the four domains (Moukhtar et al., 2019) and present invariant, symmetrical radial cell organizations in both the apical and the basal domains (Figure Variability within and between embryos in cell shapes and cell arrangements. Here, we examined whether the stereotypical nature of cell shapes and patterns was maintained during late embryo development within each domain. We analyzed cell shapes and cell patterns over ∼100 embryos between 1C and 256C stages (rounds 1–8 of cell divisions from the 1C stage). In accordance with previous observations (Yoshida et al., 2014), we initially observed that, from generation 5 onwards, the basal part of the embryo showed little variability in cell shapes and spatial arrangements, leading to a preserved radial symmetry across domains and individuals (Figure 1DE). On the contrary, shapes and arrangements of cells were highly variable in the apical domain. Different orientations and topologies of cell divisions were observed among the different quarters in a given individual as well as among different individuals (Figure 1DE). This variability resulted in a loss of radial symmetry of cell organization in the apical domain (Figure 1DE). To better characterize and understand the origin of this variability, we conducted an in-depth quantitative analysis and modeling study of cell shapes and division patterns. Diversity in cell shape is domain-specific To quantitatively describe cell patterns, we first focused on the diversity of cell shape in the embryo and in its four principal domains defined from the 16C stage (apical/basal × inner/outer). For each embryo, cells were segmented in 3D and their lineage reconstructed back to the 1C stage by recursively merging sister cells (Figure 2A). To this end, sister cells were identified and paired so as to minimize wall discontinuities in reconstructed mother cells (see Material and methods). Cell shape diversity in Arabidopsis thaliana early embryogenesis. For a given mother cell, two features of the division plane determine the geometries of the daughter cells. First is the orientation of the division (e.g. anticlinal or periclinal), related to which mother cell walls are intersected by the division plane. Second is where the division plane is anchored on these walls and where it passes through the mother cell space. The orientation defines the shape (in the topological sense) of the daughter cells, that is, the morphological information that remains unchanged under position, rotation, scale or other linear and non-linear geometrical transformations such as anisotropic scaling, shearing, and bending. Plane positioning determines the lengths of cell edges, so that different cell geometries can be obtained for a same orientation of division. For example, cells in the inner apical domain at the 16C stage result from periclinal divisions of apical 8C cells and can all be described as tetrahedral pyramids (same shape) even though none of these cells have the same edge lengths (different geometries) (Figure 1BC). To classify cells into different shape categories resulting from varying orientations of cell divisions, one should thus rely on topological information only. To this end, we introduced a new cell shape topology descriptor defined as the cumulative number of division planes that were positioned through generations to generate a given cell (Figure 2B). This number, referred to as the number of faces, was automatically computed from cell lineages reconstructed back to the initial 1C stage, which contains two faces (see Material and methods). A key advantage of this descriptor is to provide a robust, objective and unambiguous description of cell shape. Contrary to the number of neighbors or of geometrical facets, the number of division faces only depends on the topology of successive divisions that generated the considered cell, is independent of divisions in neighbor cells and is insensitive to geometrical fluctuations in the positioning of division planes and to their curvature. We first applied this descriptor to analyze cell shapes up to the 16C stage (Figure 2C). The truncated sphere and half-sphere cell shapes of stages 1C and 2C have two and three-face shapes, respectively. The truncated sphere quarter at 4C has four division faces and is thus topologically equivalent to a tetrahedron. At stage 8C, the apical cells also have four faces but a new shape type is observed in the basal domain where cells have five faces, thus being topologically equivalent to a prism with a triangular basis. At stage 16C, a new cell shape with six faces was observed, being topologically equivalent to a cuboid. For each of the first four generations, each embryo domain (one domain from 1C to 4C, two domains at 8C, four at 16C) contained exactly one cell shape. These results are consistent with the stereotyped nature of cell division patterns until 16C stage. In addition, our analysis shows that at the whole embryo scale each generation corresponded to the introduction of a new cell shape with a unit increase in the number of division faces. Over the next four generations (G5 to G8), we found that more than 99% embryonic cell shapes were distributed over the three main cell topologies already present at stage 16C, corresponding to shapes with four (3.6%), five (21.9%), and six (73.7%) faces (Figure 2D). From G4 onward, cell shapes progressively accumulated in the six-face (cuboid) shape category, which eventually represented more than 90% of the cells at G8 (Figure 2E). The systematic unit increase in the number of faces at each generation between G0 and G4 was no longer observed after G4. Hence, the transition between generations 4 and 5 (16C-32C) corresponded to a rupture in the dynamics of embryonic cell shapes. Cell shape heterogeneity, quantified by the entropy of the distribution of the number of cell faces at each generation, culminated at G4 and progressively decreased during the subsequent generations (Figure 2E). The evolution of cell shapes at the whole embryo scale masked large differences among the four domains. Indeed, the domain-specific analysis of cell shapes showed that from generation 4 onward there was almost no variability in the basal outer domain, where all cells remained in the six-face shape category (Figure 2H). The inner apical domain exhibited the largest variability in cell shape, with cells having four, five and six-faces observed through several consecutive generations (Figure 2G). In the basal inner and in the apical outer domains, the diversity was intermediate, with most cells distributed between the two categories of five and six-face shapes (Figure 2F and I). The dynamics were also similar in these two domains, with a continuously increasing proportion of six-face cells. Overall, these results quantitatively confirmed the visual observations that cell patterns in the apical domain were more variable than in the basal domain. However, our analysis revealed at the same time a limited range of diversity in the topology of cell shapes, with most cell shapes falling within one out of three main categories. In addition, our data showed that the dynamics of shape changes during generations 5–8 differed from the dynamics observed during generations 1–4. Shape diversity increased until G4, before decreasing with an homogenization into the cuboid shape. Diversity in division patterns is domain-specific Since cell shapes are determined by the positioning of division planes, we asked whether the diversity of cell shapes in the different domains could be related to domain-specific variability in the positioning and orientation of division planes. We examined this hypothesis by enumerating observed cell division patterns in each of the four embryo domains. Cell division patterns were characterized based on the shapes of the mother and of the daughter cells. In addition, we also took into account the relative orientation of the division planes within the embryo. For example, a triangular prismatic cell in the outer apical domain can divide according to three orientations into another prism and a cuboid (Figure 3A). These three possibilities were considered as distinct division patterns. Using lineage trees, we analyzed and quantified the frequencies of division patterns during the last four generations, using both observed patterns and patterns reconstructed at intermediate generations back to the 16C stage. Note that the absence of embryo bending at these stages ensured that the plane orientation in the embryo at the time of division could be correctly inferred even for patterns reconstructed from later stages. Figure 3 with 4 supplements see all Reconstructed division patterns and cell lineages in the four embryo domains. Starting from the stereotyped cell patterns of 16C embryos, we found three major orientations of cell divisions in the outer apical domain at the G4-G5 transition (Figure 3B and Figure 3—figure supplement 1). Divisions in this domain were systematically anticlinal and oriented parallel to an existing cell edge, thus separating one vertex from the two other ones at the outer triangular surface of the cell. The transverse orientation (parallel to the boundary between the apical and basal domains) was less frequent than the two longitudinal orientations, suggesting a directional bias in the positioning of the division plane. In the inner apical domain, we also found three main orientations of division planes, all oriented along the longitudinal axis of the embryo (Figure 3D and Figure 3—figure supplement 2). Only two of these orientations were parallel to an original vertical face of the cell. Divisions parallel to the horizontal face of the cells were extremely rare. As in the outer apical domain, these results suggested a preferential positioning of division planes along a limited number of directions. In contrast with the apical domains, there was only one major orientation of division in each of the outer and inner basal domains (Figure 3CE). External cells systematically divided according to a longitudinal anticlinal division (intersecting their external face), with a division plane parallel to the lateral faces of the cell. Internal cells also divided longitudinally but along a periclinal division (parallel to their external face). This suggests even stronger constraints on the positioning of division planes within the basal domain compared with the apical domain. The contrast between the apical and the basal domains remained during subsequent generations, with strongly stereotyped division orientations in the basal domain, except for the division of the lower cells in the innermost domain at G6 (Figure 3). These results show that variability in the orientation of division planes was larger in the apical than in the basal domain during the latest four generations. By comparison with the stereotyped division patterns up to stage 16C, our analysis further corroborated that the transition between generations 4 and 5 corresponds to a rupture in the dynamics of division patterns. Division patterns correlate with cell shape topology Since beyond stage 16C the embryo domains differ in the variability of both cell shapes and division patterns, we hypothesized that this variability could reflect shape-specific division patterns. We addressed this issue by exploiting reconstructed lineage trees to analyze division patterns in the three main cell shape categories that we identified. Cuboid cells were found in all domains at several generations (Figure 3B–E). These cells almost exclusively divided into two cuboid daughter cells. Cuboid division resulting in a triangular prismatic daughter cell was only rarely observed. Hence, division of cuboid cells showed a strong auto-similarity, in that the mother cell shape was almost systematically preserved through the division. Another remarkable feature of the division of cuboids was spatio-temporal stationarity, since the division pattern of these cells was the same at all generations and in all four domains. Cells with a triangular prism topology were also present in the four domains, when rare division patterns were also considered (Figure 3B–E). These cells showed two division patterns. The first pattern produced two triangular prisms as daughter cells, through a division parallel to the triangular faces. The second pattern yielded one triangular prism and one cuboid, through a division parallel to the quadrilateral faces. Hence, as for cuboid cells, cells with a triangular prism topology showed auto-similarity in their divisions patterns, even though they could also generate new cell shapes. In addition, they also showed spatio-temporal stationarity since their division patterns were similarly observed in all domains and generations where these cells were present. Cells with a tetrahedral topology were only found in the inner apical domain (Figure 3D). They also exhibited two division patterns, one producing two triangular prisms and the other producing one triangular prism and one tetrahedron. Hence, auto-similarity in tetrahedral cells was not systematic. However, their division patterns were similar throughout successive generations, showing they were also exhibiting stationarity. Together, these results show that each cell shape exhibited specific division patterns that were shared among different generations and among different locations within the embryo. The cuboid shape could be reached from any other cell shape according to the tetrahedron→triangular prism→cuboid→cuboid sequence. Hence, the cuboid shape represented an absorbing state because it could be reached from the two other shapes but tended to reproduce itself once reached. In contrast, the tetrahedral shape was the less stable state. These results explain the decreasing relative frequencies of the tetrahedral and triangular prism cell shapes through generations of cell divisions observed in the four domains (Figure 2E). Because of shape differences at stage 16C between the four domains, these results may also explain differences in variability of division patterns. For example, the large variability observed in the inner apical domain can be interpreted in light of the intermediate triangular prismatic shape between the tetrahedral and cuboid shapes. Inversely, the absence of shape variability in the outer basal domain can be related to the absorbing state cuboid shape already present at G4 in this domain. However, shapes with identical topology were observed in domains with different variability levels in division orientations, as for example in the outer apical domain and in the inner basal domain that both have triangular prismatic cells at G4. Hence, other factors than cell shape topology alone are probably involved in the variability of cell division patterns. Graph theory of cell division reveals variability is constrained To assess whether additional factors govern division patterns beyond cell shape topology, we asked whether observed division patterns matched predictions from topologically random divisions. To this end, we used graph theory to describe polyhedral cells and their divisions and to enumerate all possible combinations of dividing a cell based on its topology, disregarding the lengths of its edges. The three main cell shapes (tetrahedron, triangular prism, cuboid) observed during generations 5–8 are polyhedra composed of vertices (cell corners), of edges connecting vertices, and of faces delineated by edges. These shapes can all be represented as planar graphs and displayed using 2D Schlegel diagrams (Grünbaum, 2003). These representations are obtained by projecting the 3D cell shapes in a direction orthogonal to one of their faces (Figure 4A). We represented cell divisions as graph cuts on these polyhedral graphs. A graph cut consists in removing some edges in a graph so as to partition the original vertices in two disjoint subsets (Greig et al., 1989). Representing divisions as graph cuts implies that divisions avoid existing vertices and edges, in accordance with the avoidance of four-way junctions. Hence, by removing some edges in the mother cell graph, any cell division resulted in the partitioning of the $V$ vertices of the mother cell into two subsets of $p$ and $V-p$ vertices. The graphs of the two daughter cells were obtained by adding new vertices at edge cuts and by introducing new edges between the added vertices (Figure 4B; Supplementary Analyzing cell divisions as graph cuts on polyhedral graphs. We used this approach to determine the combinatorial possibilities of division in each of the three shape topologies. For a given mother cell with $V$ vertices, we found that any division separating $p$ vertices (with $p≤V/2$) from the $V-p$ other ones could be fully described based on $p$ and the number of mother cell edges that were inherited by the daughter cell inheriting the $p$ vertices (Supplementary Information). We further found that in case the inherited edges formed a cyclic graph, the number of faces in one of the two daughter cells was the same as in the mother cell and was at most this number in the other daughter cell. In the case of an acyclic graph, however, at least one daughter would necessarily gain one additional face as compared with the mother cell (Supplementary Information). This theoretical result explains in particular why the number of faces in at least one daughter cell necessarily increases when a tetrahedral cell divides, since the division of tetrahedral cells exclusively corresponds to the acyclic case. This theory shows why tetrahedral cells cannot be an absorbing state and why they represent an inevitable source of cell shape diversity through their divisions. For each cell shape topology, we determined all possible combinations of graph cuts under complete randomness. This allowed us to compute the expected proportions of daughter cells falling within each cell shape category (Supplementary Information). The theoretical distributions we obtained were significantly different from the observed distributions (Figure 4C), thus showing observed division patterns were not compatible with the hypothesis of randomly selected positioning of division planes. Overall, the predictions made using graph theory under unconstrained, random divisions strongly contrast with observed division patterns, where no or only marginal increases in the number of faces was observed during the last four generations. Our analysis thus shows that the observed division patterns are constrained within a limited range of possible combinations. Division planes obey cell geometry constraints To understand the origin of the limited variability in cell division patterns, we asked whether cell geometry could be sufficient to account for the observed division planes. We previously showed that, during the first four generations, diverse division patterns (symmetrical as well as asymmetrical, anticlinal as well as periclinal) could be predicted by a single geometrical rule according to which planes obey area minimization conditioned on the passing through the cell center (Moukhtar et al., 2019). The small distance between division plane and cell center observed during the late four generations (Figure 3—figure supplement 3) suggested that this rule could also hold beyond the first four generations. To examine whether this was indeed the case in spite of diverse division orientations (Figure 3) and volume-ratios (Figure 3—figure supplement 4), we compared observed division patterns at G5 to predictions derived from a computational model of cell divisions. We used a stochastic model that generated binary partitions of a mother cell at arbitrary volume-ratios, under the constraint of minimizing the interface area between the two daughter cells (Moukhtar et al., 2019) (see Material and Methods). This cell-autonomous model takes as input the cell geometry alone, ignoring the environment of the cell within the tissue. Several independent simulations with different volume-ratios were run for each reconstructed mother cell to sample the local minima of interface area in the space of possible binary partitions. Running the model in synthetic shapes showed that repeating independent simulations at various volume-ratios generally produced several families of solutions (Figure 5). Each family corresponded to one of the possible combinations of graph-cuts in the polyhedral graph of the mother cell. The families could be visualized by plotting the distribution of simulation results based on surface area and distance to the cell center. For instance, simulations within a cuboid generated families corresponding to divisions parallel to two of the cuboid faces. In the distribution plots, such families appeared as vertically oriented clusters because of the similar areas but varying distance to the cuboid center (Figure 5). Other families corresponded to oblique divisions, isolating one vertex or one edge (Figure 5). These families appeared as diagonally oriented clusters because area of these solutions increased when the distance to the center decreased. Computational strategy to analyze cell divisions: illustration with a synthetic example (symmetrical vertical division of a cuboid). We scored the similarity between simulation results and observed patterns based on a matching index. This index quantified how well a simulated pattern was reproducing the observed one based on the overlap between daughter cells in the two patterns (figure Supplement 1 and Material and Methods). This index ranged between 0.5 (minimal correspondence between simulation and observation) and 1.0 (perfect correspondence). For a sample division obeying the law of area minimization constrained by the passing through the cell center, the simulated divisions that match best the observed pattern should be located at the bottom left of the distribution plot (Figure 5). We first examined divisions in cells of the outer basal domain, which obey a stereotyped symmetrical, anticlinal, and longitudinal positioning of the division planes (Figure 3C and Figure 3—figure supplement 4). For each cell, we ran 1000 independent simulations, which appeared sufficient to explore the space of area-minimizing partitions in a reproducible way (Figure 6—figure supplement 1). The distribution plots of simulated division planes based on surface area and on distance to the cell center were insensitive to potential segmentation errors (Figure 6—figure supplement 3) and were reminiscent of those observed in synthetic cuboid shapes (Figure 6 and Figure 6—figure supplement 4; compare with Figure 5). Different clusters of simulated planes were observed, revealing the existence of several local minima of the interface area within the space of possible partitionings in these cells (Figure 6A). In spite of the variability in the geometry of analyzed cells (Figure 6—figure supplement 5), the simulated planes that matched the observed patterns were systematically found at the bottom left of the distribution plot (Figure 6A and Figure 6—figure supplement 4), showing that these matching planes were minimizing the surface area among the solutions that pass close to the cell center. Two other clusters of simulated planes, corresponding to either oblique or horizontal divisions, poorly matched observed patterns and had a larger interface area and/or a larger distance to the cell center. Hence, the anticlinal, highly symmetrical division of the basal outer cells at stage 16C of the embryo was perfectly predicted by the division rule. In most cells, the matching solutions were at the bottom of a cluster of solutions displaying a wide range of distances to cell center but comparable areas, corresponding to a family of parallel longitudinal divisions. This confirmed our previous result that, by the combined minimization of distance to cell center and of interface area, the rule can predict both the positioning of the division plane and the volume-ratio of the division (Moukhtar et al., 2019). Figure 6 with 8 supplements see all Modeling division patterns at G5 in outer cells based on geometrical features. In the outer apical domain, where slightly asymmetrical, non stereotyped divisions were observed (Figure 3B and Figure 3—figure supplement 4), we ran the model in reconstructed mother cells that divided along the three main modes of division observed in this domain. As in the basal domain, the model generated different families of solutions within each mother cell (Figure 6B–D), showing the existence of different local minima of surface area for a given cell geometry. In each case, one of these cluster faithfully matched the observed pattern. The location of this cluster at the bottom left of the distribution plot suggested that for a given mother cell shape, the observed division plane could be predicted based on area minimization conditioned on the passing through the cell center (Figure 6 and Figure 6—figure supplement 6, Figure 6—figure supplement 7 and Figure 6—figure supplement 8). Remarkably, simulations belonging to the other, non-matching clusters, which were located farther from the bottom left of the distribution, corresponded to division patterns observed in other cells (Figure 6B–D). These data can be interpreted as showing the existence of three principal local minima of surface area in the space of partitionings of each apical outer cell. These minima are likely related to the order 3 rotation invariance of perfectly symmetric triangular prisms. Departure from perfect symmetry would turn one of these local minima into a global minimum that would be selected upon division. Our results also show that cells divide according to the area minimum that fits best with the same division rule that operates in the outer basal domain. As in the outer basal domain, simulation results within basal inner cells (were observed divisions were stereotyped, periclinal and strongly asymmetrical; Figure 3E and Figure 3—figure supplement 4) were distributed among different patterns. However, a key difference with the outer domain was that a few, if any, simulations reproduced the observed divisions (Figure 7A and Figure 7—figure supplement 1). Since the probability of generating a given interface with the model is inversely related to its area, the absence or scarcity of reproduced observed patterns suggested that the periclinal divisions in the inner basal domain did not correspond to the global minimization of interface area. This was confirmed by the fact that the rare simulations reproducing observed divisions had generally larger interface areas than alternatives passing as close to the cell center. Figure 7 with 4 supplements see all Modeling division patterns at G5 in internal cells based on geometrical features. In the internal apical cells, where experimental variability was the largest (Figure 3D and Figure 3—figure supplement 4), we found different results depending on the orientation of the division. For cells where division occurred parallel to an existing interface (yielding a triangular prism and a tetrahedron as daughter cell shapes), we obtained results comparable to those obtained in outer apical cells. Several clusters of simulations were obtained within each cell, and the one reproducing the actual division was in most cases located at the bottom left of the distribution (Figure 7B; Figures Figure 7—figure supplement 2 and Figure 7—figure supplement 3). In the other clusters, we observed simulated divisions that corresponded to patterns observed in other cells (Figure 7B). Hence, divisions in these cells were consistent with the existence of multiple local minima of interface area and with the selection, among these, of the minimum that also fits with the minimization of distance to the cell center. In cells dividing radially (yielding two triangular prisms for daughter cell shapes), some cells complied with this rule (Figure 7C; Figure 7—figure supplement 4) but we also found as many that did not. In the latter cells, several clusters corresponding to various division orientations were again observed. However, the cluster reproducing the observed division was either overlapping with other clusters or was located farther away from the heel of the distribution plot compared with the alternative clusters (Figure 7D). This showed that in these cells, the observed division was not unequivocally corresponding to the minimization of distance to the cell center and of interface area. Validation of model predictions Simulation results obtained with our model suggested that asymmetries in mother cell geometry could bias the positioning of the division plane. We evaluated this prediction by examining the correlation between asymmetries in the mother cell geometry and the division plane orientation. We performed this analysis on the divisions of the 16C apical cells. For these cells, there was indeed, at the same time, strong self-similarity by rotation of the corresponding idealized shapes (tetrahedron in the inner part, triangular prism in the outer one) and large variability in the orientation of the division planes. For each reconstructed mother cell, we quantified its radial asymmetry by the ratio of left to right lengths and we quantified its longitudinal-to-radial asymmetry by the ratio of its longitudinal length to its maximal radial length (Figure 8A). Asymmetries in mother cell geometry in the apical domain at stage 16C and their relations with division plane orientation. Figure 8—source data 1 Figure 8—source data 2 For internal apical cells dividing longitudinally with a triangular prismatic daughter cell on the left, the left radial length was on average smaller than the right one (Figure 8B, Green). The reverse was observed for the internal cells that divided with a triangular prismatic daughter located on the right (Figure 8B, Yellow). For the internal cells that divided horizontally or longitudinally with no left/right asymmetry in plane positioning, there was no pronounced radial asymmetry (Figure 8B, White and Pink) but, compared with cells that divided longitudinally, they exhibited a larger longitudinal length (Figure 8C). Hence, in internal apical cells, the position of the division plane matched the geometrical asymmetry of the mother cell along different Similar trends were observed in the outer apical cells. Among these, cells dividing longitudinally with a cuboid daughter cell located on the left had on average a smaller left than right radial length (Figure 8B, Turquoise). The reverse was observed for the cells that divided with a cuboid daughter cell located on the right (Figure 8, Orange). As in the inner domain, the radial asymmetry was less pronounced for the outer apical cells that divided horizontally (Figure 8B, Blue). Compared with the inner domain, however, it was less clear whether their longitudinal length was larger than in cells dividing longitudinally (Figure 8C), which may be due to the limited number of cells that were observed to divide horizontally. Overall, these results show that apical cells at 16C presented directional asymmetries and that division planes tended to be oriented parallel to the smallest cell length. This suggests that the diversity of division plane orientations for a given shape topology reflects geometrical diversity, in accordance with the predictions from our geometrical division rule. Attractor patterns buffer variability of cell division orientation The above results show that from one generation to the next, there is large variability in cell division orientation in some embryo domains. Across several generations, the combinatorial possibilities between different orientations can potentially lead to a large number of distinct cell patterns. To determine whether this was indeed the case, we analyzed division patterns over two consecutive generations. In the outer apical domain, three main orientations of cell divisions were observed at G5. Variability in division orientation was less pronounced in the subsequent generations, which presented alternation of division plane orientations (Figure 3B). As a result, similar cell patterns could be reached at G6 through different sequences of division events from G4. Over the 135 patterns observed at G6 in the apical outer domain, only 7 distinct sequences were observed. Two sequences were predominant and accounted for 40% (54/135; Figure 9A, Top) and 42% (58/135; Figure 9A, Bottom) of the observations. Attractor patterns buffer variability in division plane positioning. In the protodermal layer of the basal domain, some variability was first observed at the transition between G6 and G7, where in 16 out of 303 cases (5.3%) the division plane was oriented transversely instead of longitudinally (Figure 3C). Similarly, some cells (19/297, 6.4%) at G7 divided longitudinally instead of transversely (Figure 3C and Figure 9C). Some cells in early heart stage embryos of our collection had already underwent an additional round of cell division, allowing to examine the evolution of such patterns. The cells that had exceptionally divided longitudinally at G7 led to daughters cells that divided transversely at the next generation, thus restoring at G9 the same 2×2 checkerboard cell pattern than obtained along the transverse then longitudinal path followed in most embryos from G7 to G9 (Figure 9C). In the inner basal domain, cell divisions were strongly stereotyped, following periclinal patterns that yield the precursors of the future vascular tissues (Figure 3E and Figure 9B, Left). At G5, however, 3 out of 153 cases (2%) in our dataset showed an anticlinal pattern (Figure 3E and Figure 9B, Right). One of these cases was reconstructed from an embryo acquired at G6. This allowed to observe that one of the two daughter cells of the anticlinal division at G5 had divided periclinally at G6, thus restoring the formation of a new cell layer as in the standard case (Figure 9B). This suggests that, in the inner basal domain also, similar cell patterns can be reached through different paths in spite of variability in division plane positioning. These results reveal the existence of attractor patterns, which are invariant cell arrangements that can be reached through different paths of successive cell divisions from stage 16C. The existence of attractor patterns suggests that a significant part of the variability in cell division orientation observed during the late four generations of embryogenesis is buffered when considering time scales that span several generations, thus ensuring the construction of robust cell organizations in spite of local spatio-temporal variability. Previous attempts to decipher the principles that underlie the position and orientation of division planes have been focused on geometrical rules predicting the division plane position relatively to the mother cell geometry, and on their impact on global tissue organization and growth. Much less attention has been given to the prediction of tissue organization with a geometrical precision at the individual cell level. The Arabidopsis embryo is a remarkable model to address the existence and nature of geometrical division rules, as it presents invariant division patterns during the first four generations followed by intra- and inter-individual variable patterns for the next four generations. Here, we provided a detailed quantitative analysis of this variability and used theoretical and computational modeling of cell divisions to investigate its origin. We show that strong regularities are hidden behind the apparent variability and that most of the observed patterns can be explained by a deterministic division rule applied in a geometrical context affected by the stochasticity of the precise positioning of division plane. Deterministic cell division patterns have been interpreted in light of geometrical rules linking cell shape to division plane (Minc and Piel, 2012). The shortest path rule, according to which cells divide symmetrically so as to minimize the interface area between daughter cells (Errera, 1888), has been shown to operate in several plant tissues such as fern protonema (Cooke and Paolillo, 1980), algae thallus (Dupuy et al., 2010), Arabidopsis meristem (Sahlin and Jönsson, 2010) or early embryo (Yoshida et al., 2014; Moukhtar et al., 2019). However, it was also shown that stochastic rules are required to account for division patterns in many tissues with 2D geometries (Besson and Dumais, 2011), as in some animal systems (Théry et al., 2007; Minc et al., 2011). Hence, a stochastic rule for division plane orientation would a priori be the most likely candidate interpretation of the variable division patterns we reported here in late Arabidopsis embryo. Our results point to a different interpretation for this variability. Indeed, for a given cell geometry, the observed plane orientation and position matched in most cases the global optimum according to the rule of area minimization conditioned on the passing through the cell center, and we could correlate the plane orientation with asymmetries in directional cell lengths. Based on these results, we propose that variability in cell division patterns could originate from fluctuations in mother cell geometry rather than from the division rule. The tetrahedral and triangular prismatic shape topologies of apical cells at stage 16C are rotationally symmetric (they can superimpose to themselves after rotation). If cell geometries were perfectly symmetric, the various plane orientations (4 in inner apical cell, 3 in outer apical cells) would be equally probable according to the geometrical rule (Figure 10A). We hypothesize that actual geometrical deviations from perfect symmetry suppress this equi-probability and induce a single global optimum of plane orientation, which would be selected during the division (Figure 10B). Accordingly, variability in division plane orientations in the apical domain would not ensue from a stochastic division rule, but rather from a deterministic principle expressed within a varying mother cell geometry (Figure 10C). Note, however, that our sample sizes do not allow to definitively rule out a possible stochastic selection of division plane orientation, in particular in light of the results obtained in the inner apical domain with cells dividing longitudinally. Further studies will be required to definitively distinguish between these two hypotheses and to further dissect the respective contributions of intrinsic (variability of plane positioning for a given cell geometry) and of extrinsic (variability due to fluctuations in mother cell geometry) noise in the selection of the division plane. Schematic interpretation for the origin of variability in division patterns in Arabidopsis thaliana embryo. Our data reveal an abrupt change in the dynamics of cell shapes and cell division patterns at the transition between generations 4 (16C) and 5 (32C). Up to generation 4, division patterns were stereotyped and each generation corresponded to the introduction of a new cell shape with a unit increase in the number of cell faces. In contrast, we observed from generation 5 onward a strong variability in division patterns with a concomitant reduction in the variability of cell shape topology, as cell shapes progressively converged towards a single 6-face shape topology. Graph cut theory on polyhedral graphs together with our hypothesis of a deterministic division principle operating in a stochastic cell geometry offer a parsimonious interpretation of this apparent paradox. On the one hand, our theory shows that the division of the tetrahedral cells from generation 2 inevitably generates novelty with one obligatory prismatic daughter cell shape. We also show that triangular prismatic shapes that appear at generation 3 are theoretically twice less self-reproducible than the cuboid shapes that appear for the first time at generation 4. Beyond this stage, cell division through the cell center and area minimization tend to preserve the cuboid shape of the mother cell in the two resulting daughters. On the other hand, variability in division patterns emerges at generation 5 because of the almost, but not exactly, rotation-symmetrical cell geometries reached for the first time at stage 16C. Hence, our study reveals that a common underlying geometrical rule can account for cell division patterns with radically different traits, stressing the importance of geometrical feedback between cell geometry and division plane positioning in the self-organization of tissue architectures in Arabidopsis embryo. A parsimonious cellular machinery may be beneficial to ensure robustness in the building of complex cellular patterns. Our interpretation of the variability in division orientations raises the issue of the origin of variability in cell geometry within a given cell shape category. In spite of genetic controls, any given division pattern is subject to random fluctuations affecting the precise positioning of the cleavage planes (Schaefer et al., 2017), even in strongly stereotyped systems (Guignard et al., 2020 ). Hence, rotational symmetry, if it were present at some stage, could not be preserved through cell divisions (Figure 10A). This noise in the positioning of division planes accumulate through embryo generations, resulting in non-perfectly symmetrical shapes at stage 16C. A modeling study previously reported the importance of stochastic positioning of cleavage planes at the 2C-4C transition in the patterning of vascular tissues (De Rybel et al., 2014). In our case, it is likely that errors accumulated over the 2C-4C and 4C-8C transitions contribute to the geometrical asymmetries that bias division plane positioning at the 16C-32C stage. Hence, our results strongly suggest that not only genetic patterning (De Rybel et al., 2014) but also division patterning could be influenced by the geometric memory of past stochastic events. Several studies have highlighted the importance of noise and stochastic processes in plant developmental programs (Korn, 1969; Meyer and Roeder, 2014; Hong et al., 2018). At the cellular level, these processes have been described essentially for cell growth. For example, heterogeneity in cell growth patterns was shown essential for the robustness of organ shapes (Hong et al., 2016). Homeostatic mechanisms compensating for cell growth variability have been described. For example, at the cellular level, larger relative growth rates in smaller cells (Willis et al., 2016) or DNA-dependent dosage of a cell cycle inhibitor D’Ario et al., 2021 have been proposed to subtend cell size homeostasis in the shoot apical meristem; at the tissue level, mechanical feedbacks have been described that buffer growth heterogeneities between cells (Hervieux et al., 2017). We reveal here in several embryo domains the existence of attractors in embryo cell patterns that can be reached through different division sequences, thus generalizing past observations in the root embryonic axis (Scheres et al., 1995). As for cell growth patterns, these attractor patterns can be interpreted as buffering heterogeneity in division plane orientation. Hence, our results reveal a new compensation mechanism at the cellular level that, in addition to known cell growth regulations, could operate in developing plant tissues to generate robust supra-cellular patterns. Our quantifications showed a much larger variability of divisions patterns and cell shapes in the apical domain compared with the basal one. This difference can be related to the different cell shapes in the two domains at stage 16C. Tetrahedral shapes, which are an obligatory source of shape variability through their division, are only present in the apical domain. Conversely, cuboid shapes, which represent an absorbing state, are only present in the basal domain. Different cell environments, with the basal cells constrained between the apical cells and the suspensor, may also contribute to less variability in the basal domain. Although the functional significance of this apical-basal contrast remains to be elucidated, one can hypothesize it could contribute to establish a specific tissue organization or mechanical context required for proper embryo growth and transition from a globular to a heart shape. Recent reports in both plants and animals emphasized the importance of the spatial organization of cell interfaces for tissue mechanical properties or cell-fate acquisition (Guignard et al., 2020; Majda et al., 2022). Previous studies have modeled the topology of divisions in 2D. It was shown for example how an average of 6 neighbors per cell could emerge from random symmetrical divisions (Graustein, 1931; Gibson et al., 2006). Based on Markov chain modeling, it was also shown how steady-state distributions in the number of faces or of neighbors could be computed in proliferating epithelia (Gibson et al., 2006; Cowan and Morris, 1988). The topology of a 2D division in a polygonal shape can simply be modeled as a combinatorial choice of two polygonal edges (Cowan and Morris, 1988; Gibson et al., 2006). Unfortunately, this approach cannot be generalized to polyhedral cells in three dimensions. Here, we proposed a solution to this problem by modeling the topology of division in polyhedral cells as cuts on polyhedral graphs. The large differences between predicted daughter shape distributions under topologically random divisions of mother cells and observed distributions revealed the existence of strong constraints on division plane positioning at the 16C-32C transition. Though this is probably challenging, it would be of further interest to explore the potential of the proposed graph theoretical approach to address the existence of, and to theoretically derive, the asymptotic distributions of 3D shapes under random or more elaborate topological rules, as was done in 2D tissues ( Cowan and Morris, 1988; Gibson et al., 2006). The results of the present study show that the same geometrical rule that accounted for cell division patterns during the first four generations is also consistent with the positioning of division planes beyond the dermatogen stage. However, we found contrasting results among different embryo domains and, to a lesser extent, among different orientations of division. In the protodermal domains of both the upper and the lower domains, both the volume-ratios and the positioning of the cleavage interface could be accurately predicted following the geometrical rule. In contrast, divisions markedly departed from the rule in the lower inner domain. An intermediate situation was observed in the inner apical domain, where the rule accounted for all but the longitudinal radial orientation. Post-division changes in cell geometry can potentially alter predictions of plane positioning in mother cells reconstructed by merging their daughters, although such changes are probably moderate given the relatively limited cell growth at the stages we considered (Yoshida et al., 2014). Auxin signaling has been suggested as required for cells to escape the default regime of division plane minimization and to control periclinal divisions at the previous (8C-16C) generation of cell divisions (Yoshida et al., 2014), which could involve a modulation of cell geometry by auxin signaling ( Vaddepalli et al., 2021). At subsequent generations, it has instead been reported that the first vascular and ground tissue cells divided periclinally along their maximal (longitudinal) length when the auxin response was impaired by a ARF5/MP mutation or local ARF inhibition (Möller et al., 2017). In the shoot apical meristem, cells preferentially divide longitudinally at the boundaries of emerging organs, where auxin responses are low (Louveaux et al., 2016). Hence, it is unclear whether specific auxin responses are involved in the longitudinal divisions observed in the inner domains. Mechanical forces have been shown to alter division plane orientations in in vitro-grown cells (Lintilhac and Vesecky, 1984), and it was shown in the shoot apical meristem that tissue mechanical stress could override cell geometry in the specification of plane positioning (Louveaux et al., 2016). It was also recently found that the orientation of cell division during lateral root initiation correlated with cellular growth (Schütz et al., 2021). Hence, one can speculate that the differences in cell environments between the inner and the outer embryo domains may induce different mechanical contexts with differential impacts on the determination of the division plane orientation. Sample preparation and image acquisition Arabidopsis siliques were opened and fixed in 50% methanol and 10% acetic acid five days at 4 °C. Samples were rehydrated (ethanol 50%, 30%, 10% and water) then transferred 3 hours in a 0.1 N NaOH 1% SDS solution at room temperature. Next, samples were incubated 1 hr in 0.2 mg/ml $α$-amylase (Sigma A4551) at 37 °C and bleached in 1.25% active Cl_30-60 s. Samples were incubated in 1% periodic acid at room temperature for 30 min and colored by Schiff reagent with propidium iodide (100 mM sodium metabisulphite and 0.15 N HCl; propidium iodide to a final concentration of 100 mg/mL was freshly added) overnight and cleared in a chloral hydrate solution (4 g chloral hydrate, 1 mL glycerol, and 2 mL water) few hours. Finally, samples were mounted between slide and cover slip in Hoyer’s solution (30 g gum arabic, 200 g chloral hydrate, 20 g glycerol, and 50 mL water) using spacers. Confocal microscopy and image acquisition Request a detailed protocol Acquisitions were done with a Zeiss LSM 710 confocal microscope as described previously (Truernit et al., 2008). Fluorescence signals were recorded using a 40 x objective and digitized as 8-bit 3D image stacks with a near-to-optimal voxel size of 0.17×0.17×0.35 μm^3. Image processing and analysis Request a detailed protocol Noise in acquired 3D images was attenuated by applying Gaussian smoothing (with parameter $σ=0.5$) under the Fiji software (Schindelin et al., 2012). Cells were segmented by applying the 3D watershed transform (Vincent and Soille, 1991) to images after non-significant minima had been removed using minima imposition (Soille, 2004). The two operations were performed using the Morphological Segmentation tool of the MorphoLibJ suite (Legland et al., 2016). All segmentations were visually checked and a modified version of the MorphoLibJ plugin was developed to correct over- and under-segmentation errors, if any, based on the interactive modification of watershed initialization seeds. The cell lineages were manually back tracked, processing embryos from the younger to the older ones (using the number of cells as a proxy for developmental stage). Based on the cellular geometries and organizations, sister cells were paired so as to minimize wall discontinuities in reconstructed mother cells. Ambiguities, as observed for example at the 2C-4C transition or later in the outer basal domain where four-way junctions could be observed at the external surface of the embryo, were resolved by examining cell interfaces in 3D. Indeed, actual four-way junctions were rare in 3D, as penetrating inside the tissue generally revealed a transition from a cross pattern formed by the four cells to a double-T pattern, thus revealing a former division plane that had been reached on its opposite sides by two more recent ones. Over a total of 4285 division patterns, we observed 12 cases (0.3%) at the 2C-4C generation of divisions where ambiguity could not be resolved this way. This had no impact on our analyses because of the symmetry between the two possible interpretations at this stage. We observed 16 cases (0.4%), all but one in the outer domains, in advanced stages. We resolved these ambiguous cases by assigning them to the most frequent configuration over the whole dataset of embryos. Lineage reconstruction was performed using TreeJ, an in-house developed Fiji plugin. Reconstructed cell lineage trees were exported as Ascii text files for further quantitative analysis. Segmented images and lineages trees were processed under Matlab (MATLAB, 2012) to localize cells within the embryo and to assign them to embryo domains (inner or outer, apical or basal). Cell volumes were obtained by multiplying the number of voxels of each cell by unit voxel volume (product of spatial calibration in XYZ directions). Mother cells were reconstructed by merging the segmentation masks of daughter cells. The mother cell center was computed as the average voxel position in the mother cell mask. For each division, the volume-ratio was computed as the ratio between the smaller cell volume and the mother cell volume. Three-dimensional triangular meshes of segmented cells and of their interfaces with neighbour cells were computed under AvizoFire (2013 Visualization Sciences Group, an FEI Company). The cell interface meshes were processed by a python script to automatically measure cell lengths along different directions. To this end, we first computed the intersection lines between side meshes by determining their shared vertices. Then, vertices at intersections between three connected intersection lines were identified as cell corners. Cell lengths were obtained as Euclidean distances between corner vertices. The number of faces per cell was computed using cell lineage trees with a python script. Mother cells were reconstructed up to the first embryonic cell by recursively merging sister cells. During this process, the generation at which each division plane had been formed was recorded. This allowed to determine for each observed cell the number of different generations at which interfaces with neighboring cells had been created. This number was taken as the number of faces for the cell. For the first embryonic cell, there were two interfaces, one corresponding to the wall separating this cell from the suspensor and the other corresponding to the separation with the outside of the embryo. At any generation $g$, the entropy of the distribution of cells among the different classes of cell shapes (defined by the number of faces) was computed as: $\mathrm{Entropy}\left(\mathrm{g}\right)=-\sum _{f}{p}_{f}\left(g\right)\mathrm{log}{p}_{f}\left(g\right)$ where $pf⁢(g)$ designates the proportion of cells having $f$ faces at generation $g$. Entropy is a measure of the heterogeneity in a distribution: it is maximized for a uniform distribution; on the opposite, it takes its minimal value 0 when all individuals belong to the same class. Computer modeling of cell divisions Cell divisions in reconstructed mother cells were simulated using the model we introduced previously (Moukhtar et al., 2019). This model takes as input the 3D binary mask of a mother cell and generates stochastically a partitioning of the cell based on geometric constraints, ignoring the environment of the cell within the tissue. For each simulation, the volume-ratio $ρ$ of the division (volume of the smaller daughter cell to the volume of the mother) was randomly drawn between 0.2 and 0.5. Each voxel of the mother cell mask was initially assigned to one or another of the two daughter cells with probability $ρ$ and $1-ρ$, respectively. The Metropolis algorithm (Metropolis et al., 1953) was then used to iteratively minimize the interface area between the two daughter cells. The algorithm iterated Monte Carlo cycles of $N$ steps each, $N$ being the number of voxels in the binary mask of the mother cell. At each step, a voxel was randomly chosen. Its assignment to one or the other of the two daughter cells was flipped if this induced a decrease in the interface area. Otherwise, the flip was accepted with probability $exp⁡(-β⁢Δ⁢A)$, where $Δ⁢A$ represented the change in interface area induced by the flip. The parameter $β$ was automatically adjusted at the end of each cycle so that about 5%, on average, of the candidate flips that would increase interface area were accepted. For each mother cell, 1000 independent simulations were run. The number of Monte Carlo cycles was set to 500, which was sufficient to ensure convergence (Figure 6—figure supplement 1). Scoring simulated divisions Request a detailed protocol The similarity between the simulated and observed divisions was scored based on their spatial overlap (Figure 5—figure supplement 1). Let $A$ and $B$ denote the sets of voxels in the two daughter cells of an observed division, and let $A′$ and $B′$ denote the two sets in a simulated division. The score quantifying the match between the two partitions of the mother cell space was defined as: $\mathrm{score}=\mathrm{max}\left\{\frac{|A\cap {A}^{\prime }|+|B\cap {B}^{\prime }|}{|A\cup B|},\frac{|A\cap {B}^{\prime }|+|B\cap {A}^{\prime }|}{|A\cup B|}\right\}$ This score varied between 0.5 (the minimum possible overlap) and 1.0 (perfect overlap). Predicting the topology of random divisions using graph cuts on polyhedral graphs We consider the main three cell shapes observed during late embryogenesis in Arabidopsis thaliana. These shapes are the tetrahedron, triangular prism, and cuboid (containing 4, 5, and 6 faces, 4, 6, and 8 vertices, and 6, 9, and 12 edges, respectively). Our objective here is to enumerate the different ways of dividing these cell shapes and to characterize the resulting daughter shapes. The key to our analysis is to represent cell shapes as planar polyhedral graphs and cell divisions as graph cuts on these polyhedral graphs. Polyhedral graphs (Bottom row) for the three main cell shapes (Top row) found in Arabidopsis thaliana early embryogenesis. Any convex polyhedral cell shape with $F$ faces can be represented by a 3-connected planar graph $G$ of $V$ vertices inter-connected by $E$ edges (polyhedral graph). Such graphs can be represented in 2D by Schlegel diagrams (Figure 1). Because of the 3-connectivity of the corresponding graph, applying Euler’s formula ($V-E+F=2$) to any of these shapes gives the following relations: $\begin{array}{cc}\hfill 2E=3V& \\ \hfill 2F=4+V& \end{array}$ Hence, we only need to determine the number of vertices of the daughter cells to characterize a cell division in terms of the abstract resulting cell shapes. Given that cell divisions avoid existing vertices (avoidance of four-way junctions), any division splits the cell vertices in two disjoint sets of vertices. These sets are non-empty because cell division planes extend from one face of the mother cell to another one. Imposing that each face of the original mother cell is cut at most once implies that we do not consider curved division planes that would fold back to connect to the face from which they emanate. This is consistent with biological observations, given that situations where a division plane extends from an existing cell face to the same face are extremely rare in general and unknown in the embryo. Hence, a cell division corresponds to a graph cut, whereby a number of edges are removed to yield two disconnected subgraphs. Following cut, each subgraph is completed by adding new vertices at the cut positions. A new edge is also introduced for each pair of new vertices located on the same face of the mother cell. The two resulting graphs are the graphs of the two daughter cells (Figure 2). Cell division as cuts on polyhedral graphs: illustration with the division of a cuboid. A division can be characterized by a couple of integers $(p,q)$, where $p$ and $q$ are the number of mother cell vertices that are separated by the division. Since $q=V-p$, the division is actually fully characterized by $p$ only. We call $p$-division a division that separates $p$ vertices from the $V-p$ other vertices ($p>0$). For example, the 1-divisions are the divisions whereby one of the vertices is separated from all the other ones (“corner” division). Since the $p$-divisions and the $q$-divisions with $q=V-p$ are two identical sets of divisions, we limit ourselves to situations where $p≤q$, i.e., $p≤V/2$. We note $N⁢(p)$ the number of possible $p$-divisions of a given cell shape. For each of these divisions, we note $K⁢(p)$ the number of removed edges (=size of the edge cut-set); $Vp⁢(p)$, $Ep⁢(p)$ and $Fp⁢(p)$ the total number of vertices, edges and faces in the daughter cell that inherits the $p$ vertices; $Vq⁢(p)$, $Eq⁢(p)$ and $Fq⁢(p)$ the total number of vertices, edges and faces in the daughter cell that inherits the remaining $q=V-p$ vertices; $Ep*⁢(p)$ and $Eq*⁢(p)$ the number of edges that are inherited from the mother cell by each of these two daughter cells, respectively (= number of edges in the subgraphs of $G$ induced by the $p$ and $q$ vertices, respectively). We derive below the expressions of all these quantities as functions of $p$. Each edge cut creates a new vertex for each daughter cell. We thus have, for any $p$: $\begin{array}{cc}\hfill {V}_{p}\left(p\right)=p+K\left(p\right)& \\ \hfill {V}_{q}\left(p\right)=q+K\left(p\right)& \end{array}$ Since each vertex is connected to three edges, the maximal number of possible cuts is $3⁢p$ (remember that $p≤q$). Each edge inherited by a daughter cell from its mother removes two potential cuts (one for each end-vertex). This gives: $\begin{array}{cc}\hfill K\left(p\right)=3p-2{E}_{p}^{*}\left(p\right)& \\ \hfill =3q-2{E}_{q}^{*}\left(p\right)& \end{array}$ We thus have: $\begin{array}{r}{V}_{p}\left(p\right)=2\left[2p-{E}_{p}^{\ast }\left(p\right)\right]\\ {V}_{q}\left(p\right)=2\left[2q-{E}_{q}^{\ast }\left(p\right)\right]\end{array}$ which we can write $\begin{array}{cc}\hfill {V}_{p}\left(p\right)=2{Q}_{p}\left(p\right)& \\ \hfill {V}_{q}\left(p\right)=2{Q}_{q}\left(p\right)& \end{array}$ $\begin{array}{cc}\hfill {Q}_{p}\left(p\right)=2p-{E}_{p}^{*}\left(p\right)& \\ \hfill {Q}_{q}\left(p\right)=2q-{E}_{q}^{*}\left(p\right)& \end{array}$ This finally leads to the following simple expressions for the number of edges and faces in the daughter cells: $\begin{array}{cc}\hfill {E}_{p}\left(p\right)=3{Q}_{p}\left(p\right)& \\ \hfill {E}_{q}\left(p\right)=3{Q}_{q}\left(p\right)& \end{array}$ $\begin{array}{cc}\hfill {F}_{p}\left(p\right)=2+{Q}_{p}\left(p\right)& \\ \hfill {F}_{q}\left(p\right)=2+{Q}_{q}\left(p\right)& \end{array}$ Given that $E={E}_{p}^{\ast }\left(p\right)+{E}_{q}^{\ast }\left(p\right)+K\left(p\right)$ we also have In a graph-theoretical perspective, we can thus fully describe a $p$-division and the resulting daughter cell shapes by two parameters only: the number $p$ of original vertices and the number $Ep*⁢ (p)$ of original edges that are inherited by the “smallest” ($p≤q$) of the two daughter cells. To go further we must distinguish two situations, depending on whether the subgraph induced by the $p$ vertices and their $Ep*⁢(p)$ edges is cyclic or not. If the subgraph induced by the $p$ vertices and their $Ep*⁢(p)$ edges contains no cycle (this is systematically the case for $p<3$), then we have: This gives the following features for a division induced by two acyclic subgraphs: $\begin{array}{rl}& {Q}_{p}\left(p\right)=p+1\\ & {Q}_{q}\left(p\right)=V/2+1\\ & {V}_{p}\left(p\right)=2\left(p+1\right)\\ & {V}_{q}\left(p\right)=V+2\\ & {E}_{p}\left(p\right)=3\left(p+1\right)\\ & {E}_{q}\left(p\right)=3V/2+3\\ & {F}_{p}\left(p\right)=p+3\\ & {F}_{q}\left(p\right)=V/2+3\end{array}$ One corollary of these results is that: $\begin{array}{cc}\hfill {F}_{p}\left(p\right)\le F+1& \\ \hfill {F}_{q}\left(p\right)=F+1& \end{array}$ Hence, a division “in the acyclic case” systematically yields a daughter cell with one additional face compared with the mother. The other daughter cell has at most one additional face. For the shapes we consider, we have $p≤4$. In this particular case, the presence of a cycle in the subgraph induced by the $p$ vertices ($p≥3$) and their $Ep*⁢(p)$ edges necessarily leads to: $Ep*⁢(p)=p$ which yields the following features for a division induced by a cyclic subgraph: $\begin{array}{rl}& {Q}_{p}\left(p\right)=p\\ & {Q}_{q}\left(p\right)=V/2\\ & {V}_{p}\left(p\right)=2p\\ & {V}_{q}\left(p\right)=V\\ & {E}_{p}\left(p\right)=3p\\ & {E}_{q}\left(p\right)=3V/2\\ & {F}_ {p}\left(p\right)=p+2\\ & {F}_{q}\left(p\right)=V/2+2\end{array}$ with, as a corollary, the following: $\begin{array}{cc}\hfill {F}_{p}\le F& \\ \hfill {F}_{q}=F& \end{array}$ Hence, a division in the “cyclic case” cannot generate shapes with a larger number of faces than the mother cell. In addition, one of the two daughter cells has systematically the same shape as the mother cell. Now it remains to enumerate the number $N⁢(p)$ of different $p$-divisions for a given mother cell shape. The number of 1-divisions is simply: For the 2-divisions, we must distinguish the tetrahedral shape from the other ones because of symmetries of the 2-divisions in this shape: $N\left(2\right)=\left\{\begin{array}{cc}E/2\hfill & \text{if}V=4\hfill \\ E\hfill & \text{otherwise}\hfill \end{array}$ The number of 3-divisions (meaningful only for the two shapes with $V≥6$) is the number of pairs of adjacent edges in the mother cell graph. There are three pairs of adjacent edges at each vertex. For the triangular prismatic shape, care must be taken that the two triangular faces induce symmetries. On each face, there are indeed three pairs of edges that define the same division (“cyclic” case). Hence we have $N\left(3\right)=\left\{\begin{array}{cc}3V-5\hfill & \text{if}V=6\hfill \\ 3V\hfill & \text{if}V=8\hfill \end{array}$ The 4-divisions are meaningful only for the cuboidal shape. They are obtained either by separating opposite quadrilateral faces of the mother cell (“cyclic” case) or by separating one vertex and its three connected neighbors from the other four vertices (“acyclic” case). Taking care of symmetries, we thus have: $\begin{array}{cc}\hfill N\left(4\right)=F/2+V/2& \\ \hfill =1+\frac{3}{4}V& \end{array}$ Now we can compute the expected proportions of cell shapes resulting from the division of a given cell shape, under a discrete uniform probability distribution over the space of possible divisions. In the sequel, we refer to each shape by the triplet $V.E.F$. The possible outcomes of the division of a tetrahedral (4.6.4) mother cell are given in Appendix 1—table 1. From this table, we obtain that the expected proportions of cell shapes following the division of a 4.6.4 cell are: $\text{Daughters of 4.6.4}\left\{\begin{array}{l}P\left(4.6.4\right)=\frac{4}{14}\phantom{\rule{1em}{0ex}}\left(28.6\mathrm{%}\right)\\ P\left(6.9.5\right)=\frac{10}{14}\phantom{\rule{1em}{0ex}}\left The possible outcomes of the division of a triangular prismatic (6.9.5) mother cell are given in Appendix 1—table 2. From this table, we obtain that the expected proportions of cell shapes following the division of a 6.9.5 cell are: $\text{Daughters of 6.9.5}\left\{\begin{array}{ll}P\left(4.6.4\right)& =\frac{6}{56}\phantom{\rule{1em}{0ex}}\left(10.7\mathrm{%}\right)\\ P\left(6.9.5\right)& =\frac{11}{56}\phantom{\rule{1em}{0ex}} \left(19.6\mathrm{%}\right)\\ P\left(8.12.6\right)& =\frac{39}{56}\phantom{\rule{1em}{0ex}}\left(69.9\mathrm{%}\right)\end{array}$ The possible outcomes of the division of a cuboidal (8.12.6) mother cell are given in Appendix 1—table 3. From this table, we obtain that the expected proportions of cell shapes following the division of a 8.12.6 cell are: $\text{Daughters of 8.12.6}\left\{\begin{array}{ll}P\left(4.6.4\right)& =\frac{8}{90}\phantom{\rule{1em}{0ex}}\left(8.9\mathrm{%}\right)\\ P\left(6.9.5\right)& =\frac{12}{90}\phantom{\rule{1em}{0ex}} \left(13.3\mathrm{%}\right)\\ P\left(8.12.6\right)& =\frac{24}{90}\phantom{\rule{1em}{0ex}}\left(26.7\mathrm{%}\right)\\ P\left(10.15.7\right)& =\frac{46}{90}\phantom{\rule{1em}{0ex}}\left(51.1\ The dataset of embryo images used in this study has been deposited in Data INRAE: Belcram, Katia; Palauqui, Jean-Christophe, 2022, "A collection of 3D images of Arabidopsis thaliana embryos", https:/ /doi.org/10.15454/HIIBKW, Portail Data INRAE, V1. The TreeJ plugin for ImageJ/Fiji used for lineage reconstruction is available from https://imagej.net/plugins/treej and its source code from https:// github.com/L-EL/TreeJ, (copy archived at swh:1:rev:5e70bb8149b9e18ba2439e24f4c558cf649da348). The scripts used for cell shape analysis can be found at https://github.com/L-EL/plantCellShapeAnalysis, (copy archived at swh:1:rev:b681e0760416bee82d2dde15da96974602a9ff7c). The code source of the 3D cell division model, together with an executable version for Linux Ubuntu 20.04, has been deposited on Data INRAE: Philippe Andrey, 2022, Cell division model: source code and executable, https://doi.org/10.15454/LHGT6C, Portail Data INRAE, V1. Source Data files have been provided for Figures 2, 4, and Über zellformen und siefenblasen Bottanisches Centralblatt 34:395–399. 16. Book Convex Polytopes. Number 221 in Graduate Texts in Mathematics New York: Springer-Verlag. 28. Book MATLAB and Statistics Toolbox Release Natick, Massachusetts, United States: The MathWorks, Inc. Article and author information Author details INRA MIA and INRA BAP departments (PhD Fellowship) The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. This work was supported by the MIA and BAP departments of INRA (funding support to EL) and has benefited from the support of IJPB’s Plant Observatory technological platforms. We thank Herman Höfte for his comments and feedback on a first version of our manuscript. The IJPB benefits from the support of Saclay Plant Sciences-SPS (ANR-17-EUR-0007). © 2022, Laruelle et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited. Views, downloads and citations are aggregated across all versions of this paper published by eLife. A two-part list of links to download the article, or parts of the article, in various formats. Downloads (link to download the article as PDF) Open citations (links to open the citations from this article in various online reference manager services) Cite this article (links to download the citations from this article in formats compatible with various reference manager tools) 1. Elise Laruelle 2. Katia Belcram 3. Alain Trubuil 4. Jean-Christophe Palauqui 5. Philippe Andrey Large-scale analysis and computer modeling reveal hidden regularities behind variability of cell division patterns in Arabidopsis thaliana embryogenesis eLife 11:e79224. Further reading 1. Computational and Systems Biology 2. Physics of Living Systems Explaining biodiversity is a fundamental issue in ecology. A long-standing puzzle lies in the paradox of the plankton: many species of plankton feeding on a limited variety of resources coexist, apparently flouting the competitive exclusion principle (CEP), which holds that the number of predator (consumer) species cannot exceed that of the resources at a steady state. Here, we present a mechanistic model and demonstrate that intraspecific interference among the consumers enables a plethora of consumer species to coexist at constant population densities with only one or a handful of resource species. This facilitated biodiversity is resistant to stochasticity, either with the stochastic simulation algorithm or individual-based modeling. Our model naturally explains the classical experiments that invalidate the CEP, quantitatively illustrates the universal S-shaped pattern of the rank-abundance curves across a wide range of ecological communities, and can be broadly used to resolve the mystery of biodiversity in many natural ecosystems. 1. Chromosomes and Gene Expression 2. Computational and Systems Biology Genes are often regulated by multiple enhancers. It is poorly understood how the individual enhancer activities are combined to control promoter activity. Anecdotal evidence has shown that enhancers can combine sub-additively, additively, synergistically, or redundantly. However, it is not clear which of these modes are more frequent in mammalian genomes. Here, we systematically tested how pairs of enhancers activate promoters using a three-way combinatorial reporter assay in mouse embryonic stem cells. By assaying about 69,000 enhancer-enhancer-promoter combinations we found that enhancer pairs generally combine near-additively. This behaviour was conserved across seven developmental promoters tested. Surprisingly, these promoters scale the enhancer signals in a non-linear manner that depends on promoter strength. A housekeeping promoter showed an overall different response to enhancer pairs, and a smaller dynamic range. Thus, our data indicate that enhancers mostly act additively, but promoters transform their collective effect non-linearly. 1. Computational and Systems Biology 2. Physics of Living Systems Planar cell polarity (PCP) – tissue-scale alignment of the direction of asymmetric localization of proteins at the cell-cell interface – is essential for embryonic development and physiological functions. Abnormalities in PCP can result in developmental imperfections, including neural tube closure defects and misaligned hair follicles. Decoding the mechanisms responsible for PCP establishment and maintenance remains a fundamental open question. While the roles of various molecules – broadly classified into “global” and “local” modules – have been well-studied, their necessity and sufficiency in explaining PCP and connecting their perturbations to experimentally observed patterns have not been examined. Here, we develop a minimal model that captures the proposed features of PCP establishment – a global tissue-level gradient and local asymmetric distribution of protein complexes. The proposed model suggests that while polarity can emerge without a gradient, the gradient not only acts as a global cue but also increases the robustness of PCP against stochastic perturbations. We also recapitulated and quantified the experimentally observed features of swirling patterns and domineering non-autonomy, using only three free model parameters - the rate of protein binding to membrane, the concentration of PCP proteins, and the gradient steepness. We explain how self-stabilizing asymmetric protein localizations in the presence of tissue-level gradient can lead to robust PCP patterns and reveal minimal design principles for a polarized system.
{"url":"https://elifesciences.org/articles/79224","timestamp":"2024-11-04T17:45:46Z","content_type":"text/html","content_length":"486855","record_id":"<urn:uuid:c71bd79b-b0b1-49a0-afc7-c5eeae208cce>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00813.warc.gz"}
seminars - An introduction to intrinsic scaling Lecture II -일시 : 10월 16일(화)~10월 23일(화) 09:00~11:00 Abstract: The mini-course is an introductory and self-contained approach to the method of intrinsic scaling, aiming at bringing to light what is really essential in this powerful tool in the analysis of degenerate and singular equations. The theory is presented from scratch for the simplest model case of the degenerate p-Laplace equation, leaving aside technical renements needed to deal with more general situations. A striking feature of the method is its pervasiveness in terms of the applications and I hope to convince the audience of its strength as a systematic approach to regularity for an important and relevant class of nonlinear partial dierential equations. I will extensively follow my book $14$ , with complements and extensions from a variety of sources (listed in the references), mainly
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&document_srl=787757&sort_index=Time&order_type=asc","timestamp":"2024-11-14T11:05:24Z","content_type":"text/html","content_length":"47634","record_id":"<urn:uuid:62c2c043-6809-4b9c-8272-786fa0a0d0bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00824.warc.gz"}
What is the relationship between US treasury bond prices and interest rates? - Temple of Wisdom US Treasury bonds are among the most important investment vehicles in the world. As the benchmark for global interest rates, they serve as a bellwether for the broader economy and provide a means for investors to earn a fixed income. The relationship between US Treasury bond prices and interest rates is a crucial one for investors to understand, as it can have a significant impact on the value of their investments. To begin with, it is important to define what is meant by US Treasury bonds. These are debt securities issued by the US government to finance its operations. The bonds come in a variety of maturities, from short-term bills to longer-term notes and bonds, and are sold at auction to institutional and retail investors. The bonds are backed by the full faith and credit of the US government, making them among the safest investments available. Interest rates, on the other hand, represent the cost of borrowing money. They are determined by the supply and demand for credit in the economy, and are influenced by a range of factors, including inflation expectations, central bank policy, and economic growth prospects. Interest rates are typically expressed as a percentage of the amount borrowed, and are paid by borrowers to lenders as compensation for the use of their funds. The relationship between US Treasury bond prices and interest rates is an inverse one. That is, as interest rates rise, the price of existing bonds falls, and vice versa. This relationship is driven by the fact that bond prices are determined by the present value of their future cash flows. As interest rates rise, the discount rate used to calculate the present value of those cash flows also rises, reducing the value of the bond. To understand this relationship in more detail, it is helpful to look at an example. Let us say that an investor buys a 10-year US Treasury bond with a face value of $1,000 and a coupon rate of 3%. This means that the bond will pay the investor $30 per year in interest, or 3% of the face value, until it matures in 10 years. At maturity, the investor will receive the full face value of $1,000. Now, let us assume that interest rates in the economy rise from 3% to 4% shortly after the investor purchases the bond. This means that new bonds being issued by the government will have a coupon rate of 4%, reflecting the higher cost of borrowing. As a result, the existing bond with a 3% coupon rate becomes less attractive to investors, as they can earn a higher return by buying new bonds. To compensate for this, the price of the existing bond must fall. To see why this is the case, consider the present value of the bond’s future cash flows. At an interest rate of 3%, the present value of the bond’s cash flows is calculated as follows: PV = ($30 / 1.03) + ($30 / 1.03^2) + … + ($30 / 1.03^10) + ($1,000 / 1.03^10) = $253.68 + $232.56 + … + $402.71 + $385.54 = $1,000 This means that at an interest rate of 3%, the bond is priced at its face value of $1,000, as the present value of its cash flows equals that amount. However, at an interest rate of 4%, the present value of the bond’s cash flows is calculated as follows: PV = ($30 / 1.04) + ($30 / 1.04^2) + … + ($30 / 1.04^10) + ($1,000 / 1.04^10) = $244.26 + $220.05 + $310.28 + $671.02 = $911.13 As we can see, the present value of the bond’s cash flows is now only $911.13, which is less than its face value of $1,000. This means that the bond is now worth less than its face value, and its price must fall accordingly. The inverse relationship between bond prices and interest rates is not linear, however. Rather, it is convex, meaning that the price of a bond falls more as interest rates rise, and rises less as interest rates fall. This is due to the fact that as interest rates rise, the increase in the discount rate has a compounding effect on the present value of the bond’s future cash flows. Conversely, as interest rates fall, the decrease in the discount rate has a diminishing effect on the present value of the bond’s future cash flows. This convexity effect is particularly pronounced for long-term bonds, as they have more future cash flows to discount. As a result, long-term bonds are more sensitive to changes in interest rates than short-term bonds. This is reflected in the yield curve, which plots the yields of bonds with different maturities. Typically, the yield curve is upward-sloping, meaning that yields rise as maturities lengthen. This reflects the market’s expectation that long-term bonds will be more sensitive to changes in interest rates than short-term bonds. The relationship between US Treasury bond prices and interest rates has important implications for investors. When interest rates are low, as they have been in recent years, bond prices are generally high, as investors flock to the safety and stability of fixed-income investments. However, when interest rates rise, bond prices are likely to fall, which can result in significant losses for investors who hold long-term bonds. As a result, it is important for investors to carefully consider the impact of interest rate changes on their bond investments, and to diversify their portfolios to mitigate risk. In conclusion, the relationship between US Treasury bond prices and interest rates is an inverse one, driven by the present value of future cash flows. As interest rates rise, the discount rate used to calculate the present value of those cash flows also rises, reducing the value of existing bonds. This relationship is particularly pronounced for long-term bonds, which are more sensitive to changes in interest rates than short-term bonds. As a result, investors should carefully consider the impact of interest rate changes on their bond investments, and seek to diversify their portfolios to mitigate risk.
{"url":"https://blog.antalyatv.com/templeofwisdom/what-is-the-relationship-between-us-treasury-bond-prices-and-interest-rates/","timestamp":"2024-11-12T05:40:22Z","content_type":"text/html","content_length":"92770","record_id":"<urn:uuid:3ecd40f6-026f-43eb-ab4b-229e33f0dd8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00227.warc.gz"}
I stopped working on black hole information loss. Here’s why. [This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.] It occurred to me the other day that I’ve never told you what I did before I ended up in the basement in front of a green screen. So today I want to tell you why I, as many other physicists, was fascinated by the black hole paradox that Steven Hawking discovered before I was even born. And why I, as many other physicists, tried to solve it. But why I, in the end, unlike many other physicists, decided that it’s a waste of time. What’s the black hole information paradox? Has it been solved and if not, will it ever be solved? What if anything is new about those recent headlines? That’s what we’ll talk about today. First things first, what’s the black hole information loss paradox. Imagine you have a book and you throw it into a black hole. The book disappears behind the horizon, the black hole emits some gravitational waves and then you have a black hole with a somewhat higher mass. And that’s it. This is what Einstein’s theory of general relativity says. Yes, that guy again. In Einstein’s theory of general relativity black holes are extremely simple. They are completely described by only three properties: their mass, angular moment, and electric charge. This is called the “no hair” theorem. Black holes are bald and featureless and you can mathematically prove it. But that doesn’t fit together with quantum mechanics. In quantum mechanics, everything that happens is reversible so long as you don’t make a measurement. This doesn’t mean processes look the same forward and backward in time, this would be called time-reversal “invariance”. It merely means that if you start with some initial state and wait for it to develop into a final state, then you can tell from the final state what the initial state was. In this sense, information cannot get lost. And this time-reversibility is a mathematical property of quantum mechanics which is experimentally extremely well confirmed. However, in practice, reversing a process is possible only in really small systems. Processes in large systems become for all practical purposes irreversible extremely quickly. If you burn your book, for example, then for all practical purposes the information in it was destroyed. However, in principle, if we could only measure the properties of the smoke and ashes well enough, we could calculate what the letters in the book once were. But when you throw the book into a black hole that’s different. You throw it in, the black hole settles into its hairless state, and the only difference between the initial and final state is the total mass. The process seems irreversible. There just isn’t enough information in the hairless black hole to tell what was in the book. The black hole doesn’t fit together with quantum mechanics. And note that making a measurement isn’t necessary to arrive at this conclusion. You may remember that I said the black hole emits some gravitational waves. And those indeed contain some information, but so long as general relativity is correct, they don’t contain enough information to encode everything that’s in the book. Physicists knew about this puzzle since the 1960s or so, but initially they didn’t take it seriously. At this time, they just said, well, it’s only when we look at the black hole from the outside that we don’t know how reverse this process. Maybe the missing information is inside. And we don’t really know what’s inside a black hole because Einstein’s theory breaks down there. So maybe not a problem after all. But then along came Stephen Hawking. Hawking showed in the early 1970s that actually black holes don’t just sit there forever. They emit radiation, which is now called Hawking radiation. This radiation is thermal which means it’s random except for its temperature, and the temperature is inversely proportional to the mass of the black hole. This means two things. First, there’s no new information which comes out in the Hawking radiation. And second, as the black hole radiates, its mass shrinks because E=mc^2 and energy is conserved, and that means the black hole temperature increases as it evaporates. As a consequence, the evaporation of a black hole speeds up. Eventually the black hole is gone. All you have left is this thermal radiation which contains no information. And now you have a real problem. Because you can no longer say that maybe the information is inside the black hole. If a black hole forms, for example, in the collapse of a star, then after it’s evaporated, all the information about that initial star, and everything that fell into the black hole later, is completely gone. And that’s inconsistent with quantum mechanics. This is the black hole information loss paradox. You take quantum mechanics and general relativity, combine them, and the result doesn’t fit together with quantum mechanics. There are many different ways physicists have tried to solve this problem and every couple of months you see yet another headline claiming that it’s been solved. Here is the most recent iteration of this cycle, which is about a paper by Steve Hsu and Xavier Calmet. The authors claim that the information does get out. Not in gravitational waves, but in gravitons that are quanta of the gravitational field. Those are not included in Hawking’s original calculation. These gravitons add variety to black holes, so now they have hair. This hair can store information and release it with the radiation. This is a possibility that I thought about at some point myself, as I am sure many others in the field have too. I eventually came to the conclusion that it doesn’t work. So I am somewhat skeptical that their proposal actually solves the problem. But maybe I was wrong and they are right. Gerard ‘t Hooft by the way also thinks the information comes out in gravitons, though in a different way then Hsu and Calmet. So this is not an outlandish idea. I went through different solutions to the black hole information paradox in an earlier video and will not repeat them all here, but I want to instead give you a general idea for what is happening. In brief, the issue is that there are many possible solutions. Schematically, the way that the black hole information loss paradox comes about is that you take Einstein’s general relativity and combine it with quantum mechanics. Each has its set of assumptions. If you combine them, you have to make some further assumptions about how you do this. The black hole information paradox then states that all those assumptions together are inconsistent. This means you can take some of them, combine them and obtain a statement which contracts another assumption. Simple example for what I mean with “inconsistent”, the assumption x< 0 is inconsistent with the assumption x > 1. If you want to resolve an inconsistency in a set of assumptions, you can remove some of the assumptions. If you remove sufficiently many, the inconsistency will eventually vanish. But then the predictions of your theory become ambiguous because you miss details on how to do calculations. So you have to put in new assumptions to replace the ones that you have thrown out. And then you show that this new set of assumptions is no longer inconsistent. This is what physicists mean when they say they “solved the problem”. But. There are many different ways to resolve an inconsistency because there are many different assumptions you can throw out. And this means there are many possible solutions to the problem which are mathematically correct. But only one of them will be correct in the sense of describing what indeed happens in nature. Physics isn’t math. Mathematics is a great tool, but in the end you have to make an actual measurement to see what happens in reality. And that’s the problem with the black hole information loss paradox. The temperature of the black holes that we can observe today is way too small to measure the Hawking radiation. Remember that the larger the black hole, the smaller its temperature. The temperature of astrophysical black holes is below the temperature of the CMB. And even if that wasn’t the case, what do you want to do? Sit around 100 billion years to catch all the radiation and see if you can figure out what fell into the black hole? It’s not going to happen. What’s going to happen with this new solution? Most likely, someone’s going to find a problem with it, and everyone will continue working on their own solution. Indeed, there’s a good chance that by the time this video appears this has already happened. For me, the real paradox is why they keep doing it. I guess they do it because they have been told so often this is a big problem that they believe if they solve it they’ll be considered geniuses. But of course their colleagues will never agree that they solved the problem to begin with. So by all chances, half a year from now you’ll see another headline claiming that the problem has been solved. And that’s why I stopped working on the black hole information loss paradox. Not because it’s unsolvable. But because you can’t solve this problem with mathematics alone, and experiments are not possible, not now and probably not in the next 10000 years. Why am I telling you this? I am not talking about this because I want to change the mind of my colleagues in physics. They have grown up thinking this is an important research question and I don’t think they’ll change their mind. But I want you to know that you can safely ignore headlines about black hole information loss. You’re not missing anything if you don’t read those articles. Because no one can tell which solution is correct in the sense that it actually describes nature, and physicists will not agree on one anyway. Because if they did, they’d have to stop writing papers about
{"url":"https://backreaction.blogspot.com/2022/04/i-stopped-working-on-black-hole.html","timestamp":"2024-11-08T07:59:40Z","content_type":"application/xhtml+xml","content_length":"157592","record_id":"<urn:uuid:a29f5210-ccc1-49ff-be91-30dc4458580c>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00361.warc.gz"}
Chapter 2.4: Introduction to Systems of Equations Figure 1. Enigma machines like this one, once owned by Italian dictator Benito Mussolini, were used by government and military officials for enciphering and deciphering top-secret communications during World War II. (credit: Dave Addey, Flickr) By 1943, it was obvious to the Nazi regime that defeat was imminent unless it could build a weapon with unlimited destructive power, one that had never been seen before in the history of the world. In September, Adolf Hitler ordered German scientists to begin building an atomic bomb. Rumors and whispers began to spread from across the ocean. Refugees and diplomats told of the experiments happening in Norway. However, Franklin D. Roosevelt wasn’t sold, and even doubted British Prime Minister Winston Churchill’s warning. Roosevelt wanted undeniable proof. Fortunately, he soon received the proof he wanted when a group of mathematicians cracked the “Enigma” code, proving beyond a doubt that Hitler was building an atomic bomb. The next day, Roosevelt gave the order that the United States begin work on the same. The Enigma is perhaps the most famous cryptographic device ever known. It stands as an example of the pivotal role cryptography has played in society. Now, technology has moved cryptanalysis to the digital world. Many ciphers are designed using invertible matrices as the method of message transference, as finding the inverse of a matrix is generally part of the process of decoding. In addition to knowing the matrix and its inverse, the receiver must also know the key that, when used with the matrix inverse, will allow the message to be read. In this chapter, we will investigate matrices and their inverses, and various ways to use matrices to solve systems of equations. First, however, we will study systems of equations on their own: linear and nonlinear, and then partial fractions. We will not be breaking any secret codes here, but we will lay the foundation for future courses.
{"url":"https://ecampusontario.pressbooks.pub/sccmathtechmath1/chapter/introduction-to-systems-of-equations-and-inequalities/","timestamp":"2024-11-02T23:53:33Z","content_type":"text/html","content_length":"82895","record_id":"<urn:uuid:3e54de77-833d-45a5-aa56-268aab6db4c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00274.warc.gz"}
Problem with unsuspected coordinate values when creating polygons with arcpy. 01-24-2020 04:37 AM I have a problem when creating polygon features with arcpy. I want to save rounded X and Y vertices for example to two decimal places. Example: I have coordinates given on two decimal places. I am expecting rounding error due to limitations in precision of float data type. Lets say we are working with projected coordinate system with base units of 1 Meter. Example script (using dummy coordinate values): import arcpy point_coord = arcpy.Point(1.10, 2.13) point = arcpy.PointGeometry(point_coord) linestring_coords = [arcpy.Point(1.10, 2.13), arcpy.Point(2.44, 5.12)] ls_array = arcpy.Array(linestring_coords) linestring = arcpy.Polyline(ls_array) poly_coords = [arcpy.Point(1.10, 2.13), arcpy.Point(2.44, 5.12), arcpy.Point(0.32, 4.2), arcpy.Point(1.10, 2.13)] p_array = arcpy.Array(arcpy.Array(poly_coords)) poly = arcpy.Polygon(p_array) print("Constructed point: {}".format(point.WKT)) print("Constructed polyline: {}".format(linestring.WKT)) print("Constructed polygon: {}".format(poly.WKT)) Constructed point: POINT (1.1000000000000001 2.1299999999999999) Constructed polyline: MULTILINESTRING ((1.1000000000000001 2.1299999999999999, 2.4399999999999999 5.1200000000000001)) Constructed polygon: MULTIPOLYGON (((1.10009765625 2.130126953125, 2.44012451171875 5.1201171875, 0.32012939453125 4.2000732421875, 1.10009765625 2.130126953125))) As said I am expecting rounding errors on for the values in point and polyline. But error on polygon coordinates is a lot bigger than 10e-10. If we say that base projection units are in meters that brings an error of 0.1 mm which is a lot in some surveying applications. I know that this error can be virtually rounded to 2 decimal places in arcmap or some other, but if I want to take these coordinates programically and perform some calculations the error grows and stays in data. Is there a way to enforce rounding to values of coordinates for polygon features. How can this be the case in software meant for managing acurrate and high quality data. I know for a fact, that some libraries outside arcgis ecosystem round coordinates normally for polygons (like point and polyline above). How can I achieve same 'precision' as when creating polylines but with Polygons. Thanks in advance. 01-24-2020 06:23 AM 01-24-2020 06:43 AM 01-24-2020 07:29 AM
{"url":"https://community.esri.com/t5/python-questions/problem-with-unsuspected-coordinate-values-when/td-p/160361","timestamp":"2024-11-03T17:21:52Z","content_type":"text/html","content_length":"249652","record_id":"<urn:uuid:af22292e-8d7f-41b4-9d36-b495a87a16b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00654.warc.gz"}
What Will It Take to Get to Acceptable Privacy-Accuracy Combinations? What will it take to achieve acceptable privacy-accuracy combinations? I discuss this question in five parts. In Section 1, I review the technicalities of the privacy-accuracy trade-off problem. In Section 2, I introduce the two articles in this symposium that study real-world privacy-accuracy combinations. In Sections 3 and 4, I ask, respectively: What is acceptable accuracy? What is acceptable privacy? I consider these questions through the lens of the two articles. I conclude in Section 5, connecting the technical questions that practitioners study (from Sections 1 and 2) with the normative questions that only society as a whole can and should answer (from Sections 3 and 4). I argue that while great technical progress has been made, much more work is needed before we can talk of getting even close to an acceptable privacy-accuracy combination. I touch on some currently investigated ideas for moving forward, and end with a call to open and broaden the conversation, bringing the rest of society in. Keywords: privacy-accuracy trade-off, privacy-accuracy combinations, differential privacy, census, acceptable privacy, acceptable accuracy Better late than never. It is only quite recently that we have realized, to our horror, that the private information collected by the U.S. Census may not be as protected as we had believed it to be: we have learned that it is possible to use aggregate statistics published in the past by the U.S. Census Bureau to reconstruct much of the underlying sensitive microdata. The idea is quite simple. The Census Bureau publishes billions of statistics on only slightly more than 300 million people. All a sophisticated ‘reconstruction attacker’ has to do is solve a system of billions of equations to find 300 million unknowns—an easy task, in principle and under some assumptions, with today’s computing power. Enter differential privacy. The Census Bureau can use it to replace the underlying microdata with synthetic microdata. The synthetic data are created from the original data using randomness of a certain predetermined, publicly known, scale and features. By publishing only statistics computed on the synthetic microdata—or even, as a special case, publishing the entire synthetic data set—the Census Bureau can provide formally provable privacy protections. The more randomness used to create the synthetic microdata—and hence, the greater the accuracy loss—the stronger the privacy-protection guarantees. What will it take to achieve acceptable privacy-accuracy combinations? I discuss this question in five parts. In Section 1, I review the technicalities of the privacy-accuracy trade-off problem. In Section 2 I introduce the two articles in this symposium that study real-world privacy-accuracy combinations. In Sections 3 and 4 I ask, respectively: What is acceptable accuracy? What is acceptable privacy? I consider these questions through the lens of the two articles. I conclude in Section 5, connecting the technical questions that practitioners study (from Sections 1 and 2) with the normative questions that only society as a whole can and should answer (from Sections 3 and 4). I argue that while great technical progress has been made, much more work is needed before we can talk of getting even close to an acceptable privacy-accuracy combination. I touch on some currently investigated ideas for moving forward, and end with a call to open and broaden the conversation, bringing the rest of society in. 1. The Problem: A Privacy-Accuracy Trade-off Differential privacy quantifies the strength of its privacy protection with a parameter ε. Its interpretation, in three steps: 1. Take any possible census data set—that is, any data set that has the size and structure of the actual census microdata, but that is filled with any possible (potentially made-up) individual 2. Imagine any possible ‘neighboring’ data set in which a single individual’s record in said data set is changed. 3. Create synthetic microdata from the actual census microdata in such a way that if you applied the same (known) procedure to create synthetic data from any possible data set in 1 and from any of its neighboring data sets in 2, the probability of any published statistic based on either synthetic data set would not differ, across the two data sets, by more than a multiplicative factor of e The intuition behind this privacy protection is as follows. No matter what true values the actual census data set contains (see 1), the privacy of any individual participating in that census is protected by the guarantee that if we replaced that individual’s data with some other data (see 2), the probability of any specific value in the published information would not change by more than an e^ε multiplicative factor. The privacy of an individual participating in the census lies, therefore, in that their individual data cannot affect the probability of any published outcome too much. How much is too much? More than by an e^ε multiple. For example, the probability that the published (synthetic) data set classifies N individuals as being below 18 years old is at most e^ε times higher when that published data set is created from the actual census data set than when it is created from a neighboring data set where one of the (actual) below-18 individuals is replaced with an 18-or-above individual. If ε were taken, in the extreme, to be 0, then e^ε would be 1, and the privacy guarantee would be absolute: the probability that N individuals are classified as children in the published data set would need to be the same regardless of the underlying microdata. Of course, this extreme value of ε would make the published data set useless: it would contain no information regarding the actual underlying, private microdata. The larger ε is, the more useful, or accurate, is the published data set, and the weaker is the privacy guarantee. The holy grail in the Census differential privacy project is finding a value of ε that could allow for an acceptable combination: an acceptable level of privacy protection with an acceptable level of accuracy of the published data. One of the great achievements of the Census differential privacy project is that we now can, for the first time, meaningfully discuss such an acceptable privacy-accuracy combination. Before the census turned to differential privacy, there was no way to formally quantify the privacy provided by its disclosure-avoidance techniques; certainly not by those of us not privy to those (unpublished) techniques—but not even, as it turns out, by our colleagues at the Census who designed and implemented these techniques (see my opening paragraph above). 2. The Task: Understanding the Trade-off Now that differential privacy allows us to formulate, investigate, and understand the privacy-accuracy trade-off problem, we can finally ask: Can we achieve an acceptable combination? Answering this question involves two steps. First, it requires understanding the accuracy-privacy combinations that are technically achievable, given current or future technology. Second, it requires answering two normative questions: What is acceptable accuracy? What is acceptable privacy? To study what is technically achievable, we need exactly the type of studies by Asquith et al. (2022) and Brummet et al. (2022). Of course, as these authors observe, their specific findings are by now somewhat outdated, since Census is rapidly updating its algorithm, responding quickly to user accuracy concerns by rebalancing the ways that it uses noise, to ensure high accuracy where it matters most. But the studies are useful and informative for examining the questions and considerations that must go into an analysis of the trade-off. Using the 1940 Census—the most recent census for which the microdata are publicly available—the authors of these two studies perturb it using an early version of the Census differential privacy algorithm to achieve a range of ε levels, and use repeated runs of the perturbation to examine the distributions of several statistics of interest. For each ε level, they thus allow us to examine the accuracy loss, that is, the distribution of differences in these to-be-published statistics relative to the same statistics when calculated from the unperturbed 1940 Census data. It is important to bear in mind that since the algorithms used by the Census are in constant flux, any specific empirical study at best illustrates the trade-offs that can currently be guaranteed, rather than the trade-offs that may be feasible as technology improves. As such, current findings of specific trade-offs should be interpreted as lower, rather than upper, bounds. That said, as real-world applications of differential privacy, these two studies put concrete numbers on a possible privacy-accuracy trade-off—numbers that we can now discuss. Since at this point differential privacy is the only game in town with provable guarantees—and a promising game, at that—the more such studies we see now, the better. And given the fast pace at which things move at the Census, the speed at which these and follow-up studies are completed is important too, and should be prioritized. What do these studies find? What accuracy-privacy combinations are technically achievable using these studies’ procedures and data? I give examples below, in the context of the two normative questions above, without which we cannot interpret the findings. 3. Question 1: What Is Acceptable Accuracy? What constitutes acceptable accuracy is a question that must be discussed not only by those who will use census data; importantly, it must be discussed also by those who will be affected by census data—that is, all stakeholders (or their representatives) in U.S. society. The answers to this question are application-specific. The introduction of the differential privacy toolkit in the census context opens the door for this long-overdue discussion, since it enables—for the first time—processing steps on census data that come with provable, transparent bounds on their impact on statistical Of course, even ‘unperturbed’ census data, before any disclosure-limitation techniques are applied, are themselves noisy due to myriad factors such as nonresponse bias, and imputations and corrections applied to the data. Any conversation about acceptable accuracy is therefore incomplete without accounting for these additional sources of error. In particular, a discussion about the impact of differential privacy on accuracy is incomplete without examining how the scale of the perturbations from privacy compares with the scale of other sources of noise, and how the various sources of noise interact: Do they amplify one another? Do they sometimes cancel one another out? That said, for now, the two studies let us assess the additional accuracy loss at different perturbation levels, given the Census algorithm used at the time the studies were conducted, in the context of specific realistic uses of the 1940 Census data. Does the accuracy loss they find seem acceptable to these studies’ authors? Start with Brummet et al. (2022). They investigate three applications of census data: two related to survey sampling, and one that simulates allocating funds to specific areas. Consider their fund-allocation application: Assume a budget of $5 billion to be allocated nationwide proportional to the number of individuals <18 years old (roughly $125/child). What is the distribution of misallocated funds across enumeration districts and counties? Brummet et al. (2022) answer this question for eight different levels of ε: 0.25, 0.50, 0.75, 1, 2, 4, 6, and 8. The team finds that across counties—generally large geographic units—per-child misallocation is modest (see their Table 8, Panel A). A county at the 10th percentile would mistakenly receive around $1.30–3.30 per child less than it should, while a county at the 90th percentile would receive $0.60–3.10 per child more than it should, for ε in the above range of 0.25 (largest misallocations) to 8 (smallest misallocations). The authors observe that even such modest allocations “may still be large enough to cause concerns for districts that depend on the funds.” Moreover, for districts—much smaller geographical units—misallocation can be quite substantial (Table 8, Panel B). While it remains less than $10 per child for ε > 1, it ranges from $15 for ε = 1 to $57 for ε = 0.25—almost half of the original per-child allocation. The authors conclude that “the noise injection can lead … to substantial misallocations of funds to local areas.” Asquith et al. (2022) explore other applications. They look at population counts (total, White, and African American) across counties and other geographies, and at three commonly used segregation indices across counties. They too investigate the impact of the Census differential privacy algorithm, known as the Disclosure Avoidance System (DAS), on accuracy for the same range of ε values, though focusing on the subset 0.25, 1, and 8. In addition, they explore different sub-allocations of a given ε across geographic levels and queries. The authors report a rich set of findings, interspersed with suggestions regarding what in their view constitutes acceptable accuracy for particular applications—a welcome discussion that we need more of, in a broader context. For example, looking at total-population counts across counties at the strongest privacy level they consider, ε = 0.25, “population discrepancies become common, and many counties have differences of 5% or more.” Discrepancies shrink as ε goes up or when limiting the analysis to the largest counties, but grow dramatically when replacing total population with African American counts: “For this group, population estimates vary considerably for a majority of counties, even when ε = 8 and, outside the Deep South, even for counties with above-median population.” Moving from simple counts to more complex statistics such as segregation measures, the authors conclude that they “may become entirely impractical with data that have had the DAS applied, especially in rural places or when studying low-population groups.” The authors call for the development of new, more robust, statistics, but point to a variety of issues (e.g., backward compatibility). As mentioned, these findings raise the question of how the scale of noise from these levels of differential privacy compares with the scale of disagreement between different data sources, which in turn highlights the importance of deeper investigation of these other sources of noise. If the private census data already have unacceptable levels of noise for certain statistics, then we may judge our data accuracy unacceptable even before the additional accuracy lost due to differential privacy. It would be misleading then to place all the blame at the feet of privacy—if only implicitly, by focusing on the part of the accuracy loss that is due to privacy. Moreover, Asquith et al. (2022) mention recent research suggesting that postprocessing by the Census may in fact be responsible for most of the discrepancies between the unperturbed data and the Census-released differentially private versions—again taking blame away from privacy itself. All that said, however, the two studies do find substantial reason for concern regarding accuracy in their simulations. 4. Question 2: What Is Acceptable Privacy? As discussed, these two studies focus on evaluating accuracy for ε values in the range 0.25–8. While readily admitting that interpretation of the privacy protection at the higher end of this range is difficult, Brummet et al. (2022) explicitly refer to the lower end as “strong privacy protection.” But is it? Do ε values in any part of this range provide meaningful levels of privacy? The theoretical computer science literature, where the differential privacy apparatus has been, and is still being developed, is mostly silent on the normative question regarding acceptable levels of ε. Indeed, what constitutes acceptable privacy is, too, a question that can only be answered by society at large. However, in many computer science papers and talks, a commonly used ε = 0.1 appears to be the example of choice. It appears therefore that ε = 0.1 has emerged as a tacitly agreed-upon privacy guarantee that provides a minimally meaningful privacy protection. Following that literature, work that discusses the use of differential privacy in the social sciences (see, e.g., Heffetz & Ligett, 2014, and Oberski & Kreuter, 2020) often uses ε = 0.1 as a standard example in concrete calculations. Yielding a multiplicative constant of e^0.1 ≈ 1.11, this de facto privacy standard implies, in the Brummet et al. (2022) example above, that the probability the perturbed 1940 Census classifies N individuals as < 18 years old is at most only 11% higher for the actual data set than for a neighboring one where one child were switched to an adult. Of course, ‘only’ is a value judgment; but it seems a defensible one. In comparison, the levels of ε considered by the two studies above are much higher. The lowest level of ε in the studies is ε = 0.25, yielding a multiplicative constant of e^0.25 ≈ 1.28 (so the probability for N observations classified as children in the published data set is at most 28% higher for the original than for the neighboring data set). The highest level of ε in the studies, ε = 8, yields an e^8 ≈ 2,981 multiplicative constant, making the upper bound on the privacy guarantee all but meaningless. Overall, the range of ε levels considered by these articles provide formal upper-bound privacy guarantees that do not seem to promise very meaningful privacy protection. In practice, things may be much worse. Individuals are likely to participate in several censuses throughout their lives. Unless we develop an implementation of the technology to leverage this fact and provide better-than-expected privacy guarantees across several censuses, or unless we develop a more nuanced language for reasoning about how census-related privacy harms accumulate across an individual’s lifetime, the reasonable effective ε budget for a single census is much lower than 0.1. Given life expectancy in the United States, the expectation nowadays is for around eight censuses in one’s lifetime. To guarantee a lifetime ε = 0.1 by perturbing each census in isolation (as these studies do), each single census would have to guarantee ε = 0.1/8 = 0.0125, that is, 20 times lower than even the lowest ε (= 0.25), and 640 times lower than the highest ε (= 8), considered in the two articles. Stated another way, the strongest privacy guarantee considered in these two studies, ε = 0.25, adds throughout a participant’s lifetime to around ε = 2, or e^2 ≈ 7.4. A guarantee that, throughout their life, census participants’ published age, sex, or race are ‘only’ 7.4 times more likely given they are the true underlying values than given the alternative, is not a strong privacy guarantee. Of course, these back-of-the-envelope calculations are only illustrative. The Census Bureau is actually using a refined version of differential privacy to reason about the way in which privacy losses add up across computations. As a result, the cumulative harm from participation in multiple censuses grows more slowly than in the simple illustrations above. But the overall qualitative point holds. In summary, even if the decennial census is the only survey one ever participates in, given the differential privacy technologies these articles currently implement and the constraints they have to obey, the range of ε they investigate is not even close to what the differential privacy literature itself would consider as acceptable privacy guarantees. Indeed, the range investigated is at least one to two orders of magnitude higher than a reasonable range, even under very favorable assumptions. 5. Discussion: Technical Questions for Practitioners, Normative Questions for Society These studies demonstrate that we can finally have a meaningful, long-due discussion of the accuracy-privacy trade-off. But they also highlight that the preliminary Census algorithm they used was probably ‘not there yet’ in terms of acceptable privacy-accuracy trade-offs. This raises two points: ‘getting there’ is urgent, but so is setting up the societal infrastructure to evaluate whether and when algorithmic techniques are able to provide an acceptable trade-off. There are many possible paths to ‘getting there’: better algorithms; more nuanced understanding of accuracy requirements; relaxing insistence that differentially private synthetic data obey certain ‘invariants’ that—perhaps unnecessarily—tie the algorithm’s hands; better reasoning about how privacy ‘harms add up’; exploring relaxed privacy notions; and more. We need more analysis of which of these approaches are most important to ensuring that society has meaningful privacy-accuracy combinations to select from. To place the question of acceptable trade-offs in relief, it is perhaps informative to examine the alternative: Is society currently better off with a corner solution? Under which conditions, if any, would it be better to publish the original, unperturbed data set—with whatever other sources of noise it contains anyway—and explicitly guarantee that original level of accuracy but no privacy regarding these demographic variables? As mentioned in my introduction, this no-privacy corner solution is approximately what has historically happened unintentionally anyway, through the publication of billions of statistics on roughly 300 million people. It is a great achievement of the Census differential privacy project that we now have the tools to consider this corner solution in a fully informed fashion, rather than as an uninformed mistake. Also, notice that the other corner solution—not using the data at all, guaranteeing full privacy—is not a viable option. Even ‘completely hiding’ the data from the public but using it as the basis for policy decisions does not provide meaningful privacy, since the data are potentially vulnerable to a reconstruction attack, on the basis of the observed decisions, which in themselves reveal something about the data. If a no-privacy corner solution is unacceptable to society, investigating ways to get to an acceptable interior accuracy-privacy combination should be a high research priority. One avenue is going back to the question: Does the differential privacy definition used in these articles provide a privacy guarantee that is too strong? To a social scientist, the worst-case guarantee that differential privacy provides may feel unnecessarily strong, as it protects against what are potentially extremely low-probability events. Recall that the differential privacy guarantee covers any possible combination of microdata in the data set, and any possible individual record change in creating its neighboring data sets. To a computer scientist, however, a rare weakness is still a weakness to worry about. Intuitively, with less than the worse-case guarantee that differential privacy provides, an attacker would be able to find vulnerabilities to exploit. In addition, after all, privacy is meant to protect the weak and vulnerable, rather than the typical; it should, in that sense, be designed to protect the outliers, rather than the norm. That said, Census and industry have been carrying out promising work on careful relaxations of differential privacy. These relaxations still protect the outliers, but allow for a failure of the guarantee in only unthinkably unlikely scenarios. The insistence on worst-case guarantees—which, from a privacy point of view, may be fully justified—creates an asymmetry in studies of the privacy-accuracy trade-off. Brummet et al. (2022) and Asquith et al. (2022), and the differential-privacy-based approach more generally, take as a given a certain level of ε—that is, a certain worst-case privacy guarantee—and investigate the distribution, and with it the accuracy, of resulting statistics. Could things be somehow flipped? Could we think of an accuracy requirement—for example, a guarantee that no more than a threshold level (or percent) of funds are misallocated across counties or districts—and investigate the distribution of privacy guarantees that it entails? More generally, we need new formalism to allow us to perform the necessary cost-benefit analyses, incorporating both the probabilities and the severities of various potential accuracy and privacy harms and losses. I would like to see these issues discussed from philosophical, ethical, social, political, legal, and economic perspectives. We cannot move forward without this conversation. And in this conversation, we need to separate the technical-limitations discussion from the normative decisions that society must make. Differential privacy has made this conversation possible. Now let us bring the rest of society in. Disclosure Statement Ori Heffetz has no financial or non-financial disclosures to share for this article. Abowd, J. M., & Schmutte, I. M. (2019). An economic analysis of privacy protection and statistical accuracy as social choices. American Economic Review, 109(1), 171–202. https://doi.org/10.1257/ Asquith, B. J., Hershbein, B., Kugler, T., Reed, S., Ruggles, S., Schroeder, J., Yesiltepe, S. & Van Riper, D. (2022). Assessing the impact of differential privacy on measures of population and racial residential segregation. Harvard Data Science Review, (Special Issue 2). https://doi.org/10.1162/99608f92.5cd8024e Brummet, Q., Mulrow, E., & Wolter, K. (2022). The effect of differentially private noise injection on sampling efficiency and funding allocations: Evidence from the 1940 Census. Harvard Data Science Review, (Special Issue 2). https://doi.org/10.1162/99608f92.a93d96fa Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2011). Differential privacy: A primer for the perplexed. Joint UNECE/Eurostat Work Session on Statistical Data Confidentiality, WP. 26. October 26–28, Tarragona, Spain. https://unece.org/fileadmin/DAM/stats/documents/ece/ces/ge.46/2011/26_Dwork-Smith.pdf Heffetz, O., & Ligett, K. (2014). Privacy and data-based research. Journal of Economic Perspectives, 28(2), 75–98. https://doi.org/10.1257/jep.28.2.75 Oberski, D. L., & Kreuter, F. (2020). Differential privacy and social science: An urgent puzzle. Harvard Data Science Review, 2(1). https://doi.org/10.1162/99608f92.63a22079 Wu, S., Roth, A., Ligett, K., Waggoner, B., & Neel, S. (2019). Accuracy first: Selecting a differential privacy level for accuracy-constrained ERM. Journal of Privacy and Confidentiality, 9(2). ©2022 Ori Heffetz. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
{"url":"https://hdsr.mitpress.mit.edu/pub/lj69n2vc/release/2?readingCollection=a133a0a2","timestamp":"2024-11-10T05:01:27Z","content_type":"text/html","content_length":"1049792","record_id":"<urn:uuid:f5292e8d-800a-44b9-b5fe-3567a75df26a>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00544.warc.gz"}
Scientific RPN Calculator (with ATTINY85) 03-08-2018, 03:02 AM (This post was last modified: 03-08-2018 03:14 AM by sa-penguin.) Post: #21 sa-penguin Posts: 82 Member Joined: Jan 2014 RE: Scientific RPN Calculator (with ATTINY85) That _exp_sin code is... cool. I notice you use standard Arduino code for log(x) and atan(x) functions. I also noted temporary results held in multiple registers: SWAP, ROTUP, ROTDOWN and all the trig functions. I don't know if the code would be smaller or faster if you had one set of common registers. If you wanted to keep names to make the code easier to read, I'd suggest a union structure (lets you call the same variable by multiple names). I'd also suggest moving the square root code to the "sub programs" area, and calling it during the trig functions: double _sqrt(double f) { result = _exp_sin(0.5 * log(f), true); return result; case SQRT: // SQRT x = _sqrt(x); double as = atan(x / _sqrt(1 - x * x)); else if (key == ASINH) x = log(x + _sqrt(tmp)); else if (key == ACOSH) x = log(x + _sqrt(tmp - 2)); That may make the code slightly smaller, but also improve readability. Just my $0.02 worth User(s) browsing this thread: 1 Guest(s)
{"url":"https://www.hpmuseum.org/forum/showthread.php?tid=10281&pid=92578&mode=threaded","timestamp":"2024-11-05T23:12:32Z","content_type":"application/xhtml+xml","content_length":"33822","record_id":"<urn:uuid:7c979885-16ea-4e24-9f9f-7f2ec2b39d83>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00636.warc.gz"}
MATH 151 MCC Mathematics Algebra Worksheet - Custom Scholars Delivering a high-quality product at a reasonable price is not enough anymore. That’s why we have developed 5 beneficial guarantees that will make your experience with our service enjoyable, easy, and safe. Money-back guarantee You have to be 100% sure of the quality of your product to give a money-back guarantee. This describes us perfectly. Make sure that this guarantee is totally transparent. Read more Zero-plagiarism guarantee Each paper is composed from scratch, according to your instructions. It is then checked by our plagiarism-detection software. There is no gap where plagiarism could squeeze in. Read more Free-revision policy Thanks to our free revisions, there is no way for you to be unsatisfied. We will work on your paper until you are completely happy with the result. Read more Privacy policy Your email is safe, as we store it according to international data protection rules. Your bank details are secure, as we use only reliable payment systems. Read more Fair-cooperation guarantee By sending us your money, you buy the service we provide. Check out our terms and conditions if you prefer business talks to be laid out in official language. Read more
{"url":"https://customscholars.com/math-151-mcc-mathematics-algebra-worksheet/","timestamp":"2024-11-14T08:26:42Z","content_type":"text/html","content_length":"54510","record_id":"<urn:uuid:a03cf0ba-7ade-41d5-b557-531d62e96c2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00396.warc.gz"}
Jump to navigation Jump to search In mathematics, a well-order (or well-ordering or well-order relation) on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering. The set S together with the well-order relation is then called a well-ordered set. In some academic articles and textbooks these terms are instead written as wellorder, wellordered, and wellordering or well order, well ordered, and well ordering. Every non-empty well-ordered set has a least element. Every element s of a well-ordered set, except a possible greatest element, has a unique successor (next element), namely the least element of the subset of all elements greater than s. There may be elements besides the least element which have no predecessor (see Natural numbers below for an example). In a well-ordered set S, every subset T which has an upper bound has a least upper bound, namely the least element of the subset of all upper bounds of T in S. If ≤ is a non-strict well ordering, then < is a strict well ordering. A relation is a strict well ordering if and only if it is a well-founded strict total order. The distinction between strict and non-strict well orders is often ignored since they are easily interconvertible. Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The well-ordering theorem, which is equivalent to the axiom of choice, states that every set can be well ordered. If a set is well ordered (or even if it merely admits a well-founded relation), the proof technique of transfinite induction can be used to prove that a given statement is true for all elements of the set. The observation that the natural numbers are well ordered by the usual less-than relation is commonly called the well-ordering principle (for natural numbers). Ordinal numbers[edit] Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The position of each element within the ordered set is also given by an ordinal number. In the case of a finite set, the basic operation of counting, to find the ordinal number of a particular object, or to find the object with a particular ordinal number, corresponds to assigning ordinal numbers one by one to the objects. The size (number of elements, cardinal number) of a finite set is equal to the order type. Counting in the everyday sense typically starts from one, so it assigns to each object the size of the initial segment with that object as last element. Note that these numbers are one more than the formal ordinal numbers according to the isomorphic order, because these are equal to the number of earlier objects (which corresponds to counting from zero). Thus for finite n, the expression "n-th element" of a well-ordered set requires context to know whether this counts from zero or one. In a notation "β-th element" where β can also be an infinite ordinal, it will typically count from zero. For an infinite set the order type determines the cardinality, but not conversely: well-ordered sets of a particular cardinality can have many different order types. For a countably infinite set, the set of possible order types is even uncountable. Examples and counterexamples[edit] Natural numbers[edit] The standard ordering ≤ of the natural numbers is a well ordering and has the additional property that every non-zero natural number has a unique predecessor. Another well ordering of the natural numbers is given by defining that all even numbers are less than all odd numbers, and the usual ordering applies within the evens and the odds: 0 2 4 6 8 ... 1 3 5 7 9 ... This is a well-ordered set of order type ω + ω. Every element has a successor (there is no largest element). Two elements lack a predecessor: 0 and 1. Unlike the standard ordering ≤ of the natural numbers, the standard ordering ≤ of the integers is not a well ordering, since, for example, the set of negative integers does not contain a least The following relation R is an example of well ordering of the integers: x R y if and only if one of the following conditions holds: 1. x = 0 2. x is positive, and y is negative 3. x and y are both positive, and x ≤ y 4. x and y are both negative, and |x| ≤ |y| This relation R can be visualized as follows: 0 1 2 3 4 ... −1 −2 −3 ... R is isomorphic to the ordinal number ω + ω. Another relation for well ordering the integers is the following definition: x ≤[z] y iff (|x| < |y| or (|x| = |y| and x ≤ y)). This well order can be visualized as follows: 0 −1 1 −2 2 −3 3 −4 4 ... This has the order type ω. The standard ordering ≤ of any real interval is not a well ordering, since, for example, the open interval (0, 1) ⊆ [0,1] does not contain a least element. From the ZFC axioms of set theory (including the axiom of choice) one can show that there is a well order of the reals. Also Wacław Sierpiński proved that ZF + GCH (the generalized continuum hypothesis) imply the axiom of choice and hence a well order of the reals. Nonetheless, it is possible to show that the ZFC+GCH axioms alone are not sufficient to prove the existence of a definable (by a formula) well order of the reals.^[1] However it is consistent with ZFC that a definable well ordering of the reals exists—for example, it is consistent with ZFC that V=L, and it follows from ZFC+V=L that a particular formula well orders the reals, or indeed any set. An uncountable subset of the real numbers with the standard ordering ≤ cannot be a well order: Suppose X is a subset of R well ordered by ≤. For each x in X, let s(x) be the successor of x in ≤ ordering on X (unless x is the last element of X). Let A = { (x, s(x)) | x ∈ X } whose elements are nonempty and disjoint intervals. Each such interval contains at least one rational number, so there is an injective function from A to Q. There is an injection from X to A (except possibly for a last element of X which could be mapped to zero later). And it is well known that there is an injection from Q to the natural numbers (which could be chosen to avoid hitting zero). Thus there is an injection from X to the natural numbers which means that X is countable. On the other hand, a countably infinite subset of the reals may or may not be a well order with the standard "≤". For example, • The natural numbers are a well order under the standard ordering ≤. • The set {1/n : n =1,2,3,...} has no least element and is therefore not a well order under standard ordering ≤. Examples of well orders: • The set of numbers { − 2^−n | 0 ≤ n < ω } has order type ω. • The set of numbers { − 2^−n − 2^−m−n | 0 ≤ m,n < ω } has order type ω². The previous set is the set of limit points within the set. Within the set of real numbers, either with the ordinary topology or the order topology, 0 is also a limit point of the set. It is also a limit point of the set of limit points. • The set of numbers { − 2^−n | 0 ≤ n < ω } ∪ { 1 } has order type ω + 1. With the order topology of this set, 1 is a limit point of the set. With the ordinary topology (or equivalently, the order topology) of the real numbers it is not. Equivalent formulations[edit] If a set is totally ordered, then the following are equivalent to each other: 1. The set is well ordered. That is, every nonempty subset has a least element. 2. Transfinite induction works for the entire ordered set. 3. Every strictly decreasing sequence of elements of the set must terminate after only finitely many steps (assuming the axiom of dependent choice). 4. Every subordering is isomorphic to an initial segment. Order topology[edit] Every well-ordered set can be made into a topological space by endowing it with the order topology. With respect to this topology there can be two kinds of elements: • isolated points — these are the minimum and the elements with a predecessor. • limit points — this type does not occur in finite sets, and may or may not occur in an infinite set; the infinite sets without limit point are the sets of order type ω, for example N. For subsets we can distinguish: • Subsets with a maximum (that is, subsets which are bounded by themselves); this can be an isolated point or a limit point of the whole set; in the latter case it may or may not be also a limit point of the subset. • Subsets which are unbounded by themselves but bounded in the whole set; they have no maximum, but a supremum outside the subset; if the subset is non-empty this supremum is a limit point of the subset and hence also of the whole set; if the subset is empty this supremum is the minimum of the whole set. • Subsets which are unbounded in the whole set. A subset is cofinal in the whole set if and only if it is unbounded in the whole set or it has a maximum which is also maximum of the whole set. A well-ordered set as topological space is a first-countable space if and only if it has order type less than or equal to ω[1] (omega-one), that is, if and only if the set is countable or has the smallest uncountable order type. See also[edit]
{"url":"https://static.hlt.bme.hu/semantics/external/pages/G%C3%B6del_teljess%C3%A9gi_t%C3%A9tel%C3%A9nek_eredeti_bizony%C3%ADt%C3%A1sa/en.wikipedia.org/wiki/Well-order.html","timestamp":"2024-11-05T01:14:10Z","content_type":"text/html","content_length":"64918","record_id":"<urn:uuid:402285f1-2261-4b12-b2e9-f9a2d83cdba1>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00579.warc.gz"}
Internal tide energy flux over a ridge measured by a co-located ocean glider and moored acoustic Doppler current profiler Articles | Volume 15, issue 6 © Author(s) 2019. This work is distributed under the Creative Commons Attribution 4.0 License. Internal tide energy flux over a ridge measured by a co-located ocean glider and moored acoustic Doppler current profiler Internal tide energy flux is an important diagnostic for the study of energy pathways in the ocean, from large-scale input by the surface tide to small-scale dissipation by turbulent mixing. Accurate calculation of energy flux requires repeated full-depth measurements of both potential density (ρ) and horizontal current velocity (u) over at least a tidal cycle and over several weeks to resolve the internal spring–neap cycle. Typically, these observations are made using full-depth oceanographic moorings that are vulnerable to being “fished out” by commercial trawlers when deployed on continental shelves and slopes. Here we test an alternative approach to minimize these risks, with u measured by a low-frequency acoustic Doppler current profiler (ADCP) moored near the seabed and ρ measured by an autonomous ocean glider holding station by the ADCP. The method is used to measure the semidiurnal internal tide radiating from the Wyville Thomson Ridge in the North Atlantic. The observed energy flux (4.2±0.2kWm^−1) compares favourably with historic observations and a previous numerical model study. Error in the energy flux calculation due to imperfect co-location of the glider and ADCP is estimated by subsampling potential density in an idealized internal tide field along pseudorandomly distributed glider paths. The error is considered acceptable (<10%) if all the glider data are contained within a “watch circle” with a diameter smaller than 1∕8 the mode-1 horizontal wavelength of the internal tide. Energy flux is biased low because the glider samples density with a broad range of phase shifts, resulting in underestimation of vertical isopycnal displacement and available potential energy. The negative bias increases with increasing watch circle diameter. If watch circle diameter is larger than 1∕8 the mode-1 horizontal wavelength, the negative bias is more than 3% and all realizations within the 95% confidence interval are underestimates. Over the Wyville Thomson Ridge, where the semidiurnal mode-1 horizontal wavelength is ≈100km and all the glider dives are within a 5km diameter watch circle, the observed energy flux is estimated to have a negative bias of only 0.4% and an error of less than 3% at the 95% confidence limit. With typical glider performance, we expect energy flux error due to imperfect co-location to be <10% in most mid-latitude shelf slope regions. Received: 28 Feb 2019 – Discussion started: 05 Mar 2019 – Revised: 22 May 2019 – Accepted: 03 Jun 2019 – Published: 07 Nov 2019 Internal tides are a ubiquitous hydrodynamic feature over continental shelves and slopes as they are commonly generated at the shelf break by across-slope tidal flows (Baines, 1982; Pingree et al., 1986; Sharples et al., 2007). However, direct measurement of internal tides can be a challenge in these regions due to intense commercial fishing activity leading to an increased risk of oceanographic mooring loss (Sharples et al., 2013). Calculation of internal tide energy flux, a key diagnostic for the understanding of baroclinic energy pathways, requires repeated full-depth measurements of both potential density (ρ) and horizontal current velocity (u) over at least a tidal cycle (Nash et al., 2005). If an objective is to resolve the internal spring–neap cycle or observe the effect of seasonal changes in stratification on the internal tide field, repeated full-depth measurements over several weeks or months are required. Typically, these measurements are made using a full-depth oceanographic mooring incorporating an acoustic Doppler current profiler (ADCP) and a string of conductivity–temperature loggers (e.g. Hopkins et al., 2014), or a profiling mooring with a CTD (conductivity, temperature, and depth) and acoustic current meter (e.g. Zhao et al., 2012). On continental shelves and slopes, these full-depth moorings are vulnerable to being “fished out” by demersal and pelagic trawling activity. Hall et al. (2017b) describe a novel alternative approach to minimize these risks, with u measured by a low-frequency ADCP moored near the seabed and ρ measured by an autonomous ocean glider holding station by the ADCP as a “virtual mooring”. Commercial fishing activity on continental shelves and their adjacent slopes is often intense because these regions are highly biologically productive. However, steps can be taken to reduce the risk of ADCP loss, including deploying deeper than 600m, keeping mooring lines short, or using trawl-resistant frames. Being relatively small, gliders are unlikely to be fished out, and the risk can be further reduced by real-time evasive action in response to vessel proximity guided by the maritime automatic identification system (AIS). However, this alternate approach was not comprehensively tested by Hall et al. (2017b) because of glider navigation and telemetry problems. In this study we test the method using a co-located glider and ADCP dataset from the Wyville Thomson Ridge in the North Atlantic, a topographic feature that previous observations and numerical model studies suggest is an energetic internal tide generator (Sherwin, 1991; Hall et al., 2011). We also estimate the error in the energy flux calculation due to imperfect co-location of the glider and ADCP and find that, with typical glider performance, it is acceptable in most mid-latitude shelf slope regions. Ocean gliders have previously been used to observe internal waves and internal tides (Rudnick et al., 2013; Rainville et al., 2013; Johnston and Rudnick, 2015; Boettger et al., 2015; Hall et al., 2017a), including the calculation of energy fluxes using current velocity measurements from gliders equipped with ADCPs (Johnston et al., 2013, 2015). However, ADCPs are not routinely integrated with commercially available glider platforms (Seaglider, Slocum, and SeaExplorer), in part due to their higher power requirement. Synergy with moored ADCP data allows accurate calculation of internal tide energetics without the endurance limitations and data analysis complexities of an ADCP-equipped glider (e.g. Todd et al., 2017). In Sect. 2 the temporal resolution constraints of glider measurements are explained and the observations used in this study described. The calculation of internal tide energy flux from co-located glider and moored ADCP data is fully described in Sect. 3. Observations of the internal tide radiating from the Wyville Thomson Ridge are presented in Sect. 4 and compared with historic observations and a previous numerical model study. In Sect. 5 the error in the energy flux calculation due to imperfect co-location is estimated. Key results are summarized and discussed in Sect. 6. High temporal resolution is crucial for internal tide observations; Nash et al. (2005) suggest that a minimum of four evenly distributed independent profiles of u and ρ are required per tidal cycle for an unbiased calculation of energy flux. Over continental shelves and upper shelf slopes, this temporal resolution is achievable using gliders. Typical glider vertical velocities are 15–20cms^−1 , so a complete dive cycle to 1000m can take as little as 3h. This yields eight profiles (four dives) per semidiurnal (≈12h) tidal cycle, but near the surface and seabed the descending and ascending profiles converge in time so the number of independent samples is halved to four. For diurnal (≈24h) internal tides, 16 profiles per cycle are possible. In shallower water the temporal resolution of glider measurements increases further; 40min full-depth dives are achievable over a 200m deep shelf break, yielding 36 profiles per semidiurnal tidal cycle. The depth-limiting factor of the methodology is the range of the ADCP. In narrowband mode, 75kHz ADCPs have a maximum range of around 600m (dependent on environmental conditions) so multiple ADCPs or additional current meters on the mooring line are required for sites between 600 and 1000m deep. The observations used to test the method were collected from the northern flank of the Wyville Thomson Ridge (WTR) in the North Atlantic (Fig. 1a). A Kongsberg Seaglider (SG613; Eriksen et al., 2001) was deployed from NRV Alliance between 2 and 5 June 2017 during the fourth Marine Autonomous Systems in Support of Marine Observations mission (MASSMO4). The glider was navigated from the deeper waters of the Faroe–Shetland Channel (FSC) to the WTR and held station for 40h by a short oceanographic mooring, deployed 5 days previously from MRV Scotia (Fig. 1b). The mooring was sited close to the 800m isobath and instrumented with an upwards-looking 75kHz RDI Long Ranger ADCP at approximately 722m and an Aanderaa Seaguard acoustic current meter at 784m, yielding observations of horizontal current velocity over 78% of the water column. When on-station by the ADCP mooring, the glider made repeated 2h dives to 700m or the seabed, whichever was shallower. This yielded approximately 12 profiles (six independent samples near the surface and seabed; 12 independent samples at mid-depth) per semidiurnal tidal cycle. Glider location at the surface, before and after each dive, was given by GPS position. Subsurface sample locations were approximated by linearly interpolating surface latitude and longitude onto sample time. When on-station, the glider stayed within 2.5km of the mooring and the mean horizontal distance between temporally coincident glider and ADCP measurements was 1.3km. This spatial scattering of the glider data is small compared to the semidiurnal mode-1 horizontal wavelength over the WTR (≈100km, calculated from the observed buoyancy frequency profile) and so the glider data are initially considered a fixed-point time series with no spatial–temporal aliasing. As the glider was on-station for only 40h, the co-located time series is not long enough to resolve the internal spring–neap cycle. As a result, M[2] harmonic fits to the glider and mooring data (Sect. 3) are contaminated with S[2] variability. To acknowledge this, we refer to the estimated M[2] component of the co-located time series as D[2] following Alford et al. (2011). The comparative numerical model (Sect. 4.1) only includes the M[2] tidal constituent so we refer to model diagnostics as M[2]. 2.1Data processing The glider was equipped with a standard Sea-Bird Electronics conductivity–temperature (CT) sail sampling at 0.2Hz and the data processed using the UEA Seaglider Toolbox (https://bitbucket.org/ bastienqueste/uea-seaglider-toolbox, last access: 9 February 2017) following Queste (2014). Conductivity data were corrected for thermal hysteresis following Garau et al. (2011) and the Seaglider flight model regressed using a method adapted from Frajka-Williams et al. (2011). As the CT sail was unpumped, salinity samples were flagged when the glider's speed was less than 10cms^−1 or it was within 8m of apogee^1. Temperature–salinity profiles from descents and ascents were independently averaged (median value) in 5m depth bins, typically with 4–5 samples per bin. Sample time was averaged into the same bins to allow accurate temporal analysis at all depths. Absolute Salinity (S[A]), Conservative Temperature (Θ), and potential density (ρ) in each bin were calculated using the TEOS-10 equation of state (IOC et al., 2010). The 75kHz ADCP was configured in narrowband mode with 10m bins and 24 pings per 20min ensemble. The ADCP data were processed using Marine Scotland Science's standard protocols, including correction for magnetic declination and quality assurance based on error velocity, vertical velocity, and percentage good ping thresholds. The ADCP data were then linearly upsampled onto the same Δ 5m depth levels as the glider data. The acoustic current meter was configured with a 10min sampling interval and linearly downsampled onto the same 20min sampling interval as the ADCP. Good velocity data were recovered for all depth levels between 85 and 705m, as well as 780–785m. In addition to the ADCP and current meter measurements, horizontal velocity was inferred from GPS position and the Seaglider flight model using a dive-average current method (DAC; Eriksen et al., 2001; Frajka-Williams et al., 2011). DAC was only calculated for dives deeper than 500m so that values were representative of the majority of the water column. All velocities were transformed into along-slope and across-slope components. We take the northern flank of the WTR to be orientated exactly northwest–southeast so along-slope (u) is positive southeast and across-slope (v) is positive northeast (down-slope). The full 3-day glider time series of Conservative Temperature and Absolute Salinity is shown in Fig. 2a. A semidiurnal internal tide is evident as a vertical oscillation of the main pycnocline (centred around 550m) with an amplitude up to 50m and a period of ≈12h. Temporally coincident ADCP and current meter measurements (Fig. 2b, c) show dominant semidiurnal periodicity and a reversal of baroclinic current velocity across the main pycnocline, characteristic of a low-mode internal tide. Mode-1 horizontal velocity, calculated from the observed buoyancy frequency profile, reverses at approximately 505m, slightly above the pycnocline. 3Internal tide energy flux Following Kunze et al. (2002) and Nash et al. (2005), internal tide energy flux is calculated as $\mathbit{F}=〈{\mathbit{u}}_{\mathrm{bc}}^{\prime }{p}^{\prime }〉$. The method requires repeated full-depth measurements of ρ and u over at least a tidal cycle in order to determine pressure perturbation (p^′) and baroclinic velocity (${\mathbit{u}}_{\mathrm{bc}}^{\prime }$), respectively. 3.1Pressure perturbation For the 40h window when the glider was on-station by the ADCP mooring, potential density anomaly is calculated by subtracting the window-mean density profile from measured potential density, $\begin{array}{}\text{(1)}& {\mathit{\rho }}^{\prime }\left(z,t\right)=\mathit{\rho }\left(z,t\right)-\stackrel{\mathrm{‾}}{\mathit{\rho }}\left(z\right).\end{array}$ Before subtraction, $\stackrel{\mathrm{‾}}{\mathit{\rho }}\left(z\right)$ is smoothed with a 50m gaussian tapered running mean (σ=10m) to yield a suitable background density profile. Vertical isopycnal displacement is then calculated as $\begin{array}{}\text{(2)}& \mathit{\xi }\left(z,t\right)=-{\mathit{\rho }}^{\prime }\left(z,t\right){\left(\frac{\partial \stackrel{\mathrm{‾}}{\mathit{\rho }}}{\partial z}\right)}^{-\mathrm{1}}.\ To separate D[2] internal tide variability from other physical processes, M[2] tidal period (T=12.42h) harmonics are fit to ξ on each Δ5m depth level following Emery and Thomson (2001). This analysis is only applied to depth levels between 10 and 675m; near-surface and near-bottom bins are excluded because of high numbers of flagged samples and reduced temporal resolution due to the glider going into apogee above 700m. To obtain a full-depth time series, the D[2] component of ξ is linearly extrapolated assuming ξ=0 at the surface (z=0) and bottom ($z=-H$, where H is water depth). Buoyancy frequency squared, ${N}^{\mathrm{2}}=-g/{\mathit{\rho }}_{\mathrm{0}}\left(\partial \stackrel{\mathrm{‾}}{\mathit{\rho }}/\partial z\right)$, is also linearly extrapolated, assuming ${N}^{\mathrm{2}}={\mathrm{10}}^{-\mathrm{6}}$s^−2 at the surface and bottom. Pressure perturbation is then calculated by integrating the hydrostatic equation from the surface, $\begin{array}{}\text{(3)}& {p}^{\prime }\left(z,t\right)={p}_{\mathrm{surf}}^{\prime }\left(t\right)+{\mathit{\rho }}_{\mathrm{0}}\underset{z}{\overset{\mathrm{0}}{\int }}{N}^{\mathrm{2}}\left(z\ right)\mathit{\xi }\left(z,t\right)\phantom{\rule{0.125em}{0ex}}\mathrm{d}z,\end{array}$ where ${p}_{\mathrm{surf}}^{\prime }$ is pressure perturbation at the surface due to the internal tide, determined by applying the baroclinicity condition for pressure, $\begin{array}{}\text{(4)}& {p}_{\mathrm{surf}}^{\prime }\left(t\right)=-\frac{\mathrm{1}}{H}\underset{-H}{\overset{\mathrm{0}}{\int }}{p}^{\prime }\left(z,t\right)\phantom{\rule{0.125em}{0ex}}\ Figure 3 shows potential density (Fig. 3c and d) and the D[2] component of vertical isopycnal displacement (Fig. 3e and f) for the 40h analysis window. The amplitudes and phases of the D[2] component of ξ are shown in Fig. 4a and b. 3.2Baroclinic velocity For the same 40h window, horizontal velocity perturbation is calculated, $\begin{array}{}\text{(5)}& {\mathbit{u}}^{\prime }\left(z,t\right)=\mathbit{u}\left(z,t\right)-\stackrel{\mathrm{‾}}{\mathbit{u}}\left(z\right),\end{array}$ where $\stackrel{\mathrm{‾}}{\mathbit{u}}\left(z\right)$ is the window-mean horizontal velocity profile. There are three spatial gaps in the time series: above 85m, between the ADCP and current meter (705–780m including blanking distance), and from the current meter to the seabed (785–800m). To obtain a full-depth time series, u^′ is linearly interpolated between the ADCP and current meter, and extrapolated to the surface and the bottom using a nearest neighbour method. Baroclinic velocity is then calculated, $\begin{array}{}\text{(6)}& {\mathbit{u}}_{\mathrm{bc}}^{\prime }\left(z,t\right)={\mathbit{u}}^{\prime }\left(z,t\right)-{\mathbit{u}}_{\mathrm{bt}}^{\prime }\left(t\right),\end{array}$ where ${\mathbit{u}}_{\mathrm{bt}}^{\prime }$ is barotropic velocity, assumed here to equal the depth-mean velocity, calculated as $\begin{array}{}\text{(7)}& {\mathbit{u}}_{\mathrm{bt}}^{\prime }\left(t\right)=\frac{\mathrm{1}}{H}\underset{-H}{\overset{\mathrm{0}}{\int }}{\mathbit{u}}^{\prime }\left(z,t\right)\phantom{\rule The D[2] components of ${\mathbit{u}}_{\mathrm{bc}}^{\prime }$ and ${\mathbit{u}}_{\mathrm{bt}}^{\prime }$ are extracted using the same harmonic analysis method applied to ξ. Figure 3 shows barotropic (Fig. 3a and b) and baroclinic (Fig. 3c and d) velocities and the D[2] components of barotropic (Fig. 3a and b) and baroclinic (Fig. 3e and f) velocities for the 40h analysis window. The amplitudes and phases of the D[2] component of ${\mathbit{u}}_{\mathrm{bc}}^{\prime }$ are shown in Fig. 4a and b. 3.3Internal tide energetics Profiles of internal tide energy flux, available potential energy (APE), and horizontal kinetic energy (HKE) are calculated as $\begin{array}{}\text{(8)}& & \mathbit{F}\left(z\right)=〈{\mathbit{u}}_{\mathrm{bc}}^{\prime }\left(z,t\right){p}^{\prime }\left(z,t\right)〉,\text{(9)}& & \mathrm{APE}\left(z\right)=\frac{\mathrm {1}}{\mathrm{2}}{\mathit{\rho }}_{\mathrm{0}}{N}^{\mathrm{2}}\left(z\right)〈{\mathit{\xi }}^{\mathrm{2}}\left(z,t\right)〉,\end{array}$ $\begin{array}{}\text{(10)}& \mathrm{HKE}\left(z\right)=\frac{\mathrm{1}}{\mathrm{2}}{\mathit{\rho }}_{\mathrm{0}}〈{\mathbit{u}}_{\mathrm{bc}}^{\prime \mathrm{2}}\left(z,t\right)〉,\end{array}$ where 〈⋅〉 denotes an average (mean) over an integer number of M[2] cycles and ρ[0]=1028kgm^−3 is a reference density. Maximum D[2] vertical isopycnal displacement is 42m and occurs at 565m (Fig. 4a), within the main pycnocline. This is comparable with historic observations of a semidiurnal internal tide over the northern flank of the WTR. Sherwin (1991) analysed CTD data from a 17h repeat station (30min between casts) that was 6.7km east of the mooring (Fig. 1b) and determined maximum D[2] vertical isopycnal displacement to be 37 at 580m, again within the pycnocline. Here, almost all of the APE is contained within the pycnocline (Fig. 4c) because maximum ξ occurs at a similar depth to maximum N^2 ($\mathrm{4.9}×{\mathrm{10}}^{-\mathrm{5}}$s^−2 at 525m). D[2] baroclinic velocity is maximum (≈20cms^−1) near-bottom (Fig. 4a), as is HKE (Fig. 4c). Depth-integrated HKE and APE are 5.5 and 1.7kJm^−2, respectively. Both the across- and along-slope components of D[2] internal tide energy flux are maximum (≈12.7kWm^−2) near-bottom and go to zero at the depth of maximum N^2 (Fig. 4d), characteristic of a low-mode internal tide with a pycnocline in the lower half of the water column. Depth-integrated energy flux magnitude is 4.2kWm^−1, directed almost due east (7^∘ anticlockwise from east). In comparison, Sherwin (1991) estimated the D[2] mode-1 internal tide energy flux at the nearby CTD repeat station to be 4.7kWm^−1, but was unable to diagnose the direction. 4.1Model comparison In Fig. 5 the observations are compared with the regional tide model described by Hall et al. (2011). The model is a configuration of the Princeton Ocean Model (POM; Blumberg and Mellor, 1987) for the FSC and WTR region, initiated with typical late-summer stratification and forced at the boundaries with M[2] barotropic velocities (see Hall et al., 2011, for full details). Maximum N^2 in the model is slightly higher than observed (Table 1) but the vertical distribution of stratification is similar; the main pycnocline is between 500 and 600m. M[2] internal tide generation occurs within the model domain, driven by barotropic tidal currents across isobaths, and is diagnosed as positive barotropic-to-baroclinic energy conversion (Fig. 5b). The northern flank of the WTR is an area of energetic internal tide generation, up to 4Wm^−2, and radiates an internal tide into the southern FSC. Modelled internal tide energy fluxes are spatially variable, but >5kWm^−1 at some locations (Fig. 5a). The mooring was located east of the most energetic generation and up-slope of the largest energy fluxes. For direct comparison, the model output is interpolated onto the exact location of the mooring (Table 1). The modelled M[2] internal tide energy flux is 6%–7% larger than the observed D[2] energy flux, but within 10^∘ of its direction. Maximum modelled vertical isopycnal displacement is 41m (slightly smaller than observed) but is compensated by the higher maximum N^2 and results in modelled APE being 30% larger than observed; modelled HKE is 10% smaller than observed. 4.2Surface tidal ellipses As well as measuring potential density by the ADCP mooring, the glider is used to infer a second estimate of barotropic velocity. Harmonic analysis is used to extract the D[2] component of DAC velocity (all dives deeper than 500m) and compared to the D[2] component of ${\mathbit{u}}_{\mathrm{bt}}^{\prime }$ from the ADCP and current meter. Barotropic velocity is highest in the across-slope direction (maximum 15cms^−1; Fig. 3a) and there is a very close match between the DAC and ADCP estimates (rms difference is 0.8cms^−1). In the along-slope direction, where barotropic velocity is lower (maximum 0.5cms^−1; Fig. 3b), the DAC estimate lags the ADCP estimate by 35min but their amplitudes closely match (rms difference is 1.2cms^−1). The resulting surface tidal ellipses have similar semi-major axis lengths and phases (Table 1) but the DAC estimate is less eccentric (more circular) and rotated 3^∘ anticlockwise (Fig. 5c). Compared with M[2] surface ellipses from the regional tide model described by Hall et al. (2011), both observational estimates are less eccentric and have shorter semi-major axes (Table 1; Fig. 5c). However, the inclination of observed and modelled ellipses are comparable, with their semi-major axes orientated across-slope. This is the orientation required to generate an energetic internal tide at the WTR. The separation of spatial and temporal variability is a common problem when interpreting glider data due to their slow speed (Rudnick and Cole, 2011) and imperfect positioning. In this context, the inability of the glider to perfectly hold station by the ADCP mooring leads to error in the calculation of internal tide energy flux (Sect. 3) due to mis-sampling of the spatially and temporally varying density field. An understanding of this error is important for both mission planning and interpretation of results. Other missions along the European continental slope (e.g. Hall et al., 2017 a) have shown that a glider operating as a virtual mooring by repeatedly diving to 1000m around a fixed station can maintain a “watch circle” with a diameter of approximately 5km, i.e. all dives start and end within 2.5km of the target location. The ability to do this is dependent on environmental conditions, particularly tidal and slope currents, but the lower limit is effectively set by the glide angle; a steep 45^∘ glide angle will result in around 2km horizontal travel over a complete dive cycle to 1000m. The size of the energy flux error is related to the length scale of the sampling cloud (d, the diameter of the watch circle) and the horizontal wavelength of internal tide being measured (λ). If d≪λ we can consider the glider data a fixed-point time series with no spatial–temporal aliasing, and so the error will be small. As d increases the glider will increasingly sample density at the wrong phase of the internal tide and so the error will increase because the measured pressure perturbation (${p}_{\mathrm{Glider}}^{\prime }$) will deviate from the pressure perturbation at the ADCP (${p}_ {\mathrm{ADCP}}^{\prime }$), located at the centre of the watch circle. If d≃λ the glider will sample density at random phases of the internal tide and so ${p}_{\mathrm{Glider}}^{\prime }$ and ${p}_ {\mathrm{ADCP}}^{\prime }$ will be uncorrelated. Here we use a Monte Carlo approach to estimate the energy flux error. Potential density in an idealized internal tide field is subsampled along pseudorandomly distributed glider paths contained within watch circles of varying diameters. The “true” depth-integrated energy flux at the ADCP, ${\mathbit{F}}_{\mathrm{true}}={\int }_{-H}^{\mathrm{0}}〈{\mathbit{u}}_{\mathrm{bc}}^{\prime }{p}_{\ mathrm{ADCP}}^{\prime }〉\phantom{\rule{0.125em}{0ex}}\mathrm{d}z$, is then compared with the “observed” depth-integrated energy flux, ${\mathbit{F}}_{\mathrm{obs}}={\int }_{-H}^{\mathrm{0}}〈{\ mathbit{u}}_{\mathrm{bc}}^{\prime }{p}_{\mathrm{Glider}}^{\prime }〉\phantom{\rule{0.125em}{0ex}}\mathrm{d}z$. In both equations ${\mathbit{u}}_{\mathrm{bc}}^{\prime }$ is baroclinic velocity at the ADCP. An idealized M[2] multi-mode internal tide field is created for a 1000m deep water column with uniform stratification (Appendix A). The mode-1 horizontal wavelength (λ) is 80km and mode-1 vertical isopycnal displacement is 50m, typical of mid-latitude shelf slope regions. Glider sampling is modelled as a group of twelve 1000m dives (denoted here as a twelve-dive scenario) over 37h (≈3M[2] cycles), within a watch circle of diameter d. Each dive is 2h 50min long, with 15min at the surface between dives. Horizontal distance travelled during each dive cycle is between 1.5 and 4km (typical of real glider missions), but there is no surface drift. The glider's path during each dive is determined by randomly selecting a start position within the watch circle then randomly selecting an end position 1.5–4km away, but still within the watch circle. The start position of the following dive is the same as the end position. Potential density is linearly interpolated onto this pseudorandom glider path and the resulting density “observations” analysed using the method described in Sect. 3.1 to yield ${p}_{\mathrm{Glider}}^{\prime }$. Nine cases are investigated, with d ranging from λ∕32 (2.5km) to λ∕4 (20km), and for each case 5000 different twelve-dive scenarios are simulated. A different random set of baroclinic mode phases is used for each scenario. Example pseudorandomly distributed glider paths for four cases are shown in Fig. 6. Energy flux relative error is defined as ${\mathbit{F}}_{\mathrm{err}}=\left({\mathbit {F}}_{\mathrm{obs}}-{\mathbit{F}}_{\mathrm{true}}\right)/{\mathbit{F}}_{\mathrm{true}}$ so positive error indicates an overestimation and negative error indicates an underestimation. Similarly, APE relative error is defined as ${\mathrm{APE}}_{\mathrm{err}}=\left({\mathrm{APE}}_{\mathrm{obs}}-{\mathrm{APE}}_{\mathrm{true}}\right)/{\mathrm{APE}}_{\mathrm{true}}$, where APE[true] is “true” depth-integrated APE (calculated from ξ[ADCP]) and APE[obs] is “observed” depth-integrated APE (calculated from ξ[Glider]). 5.1Single-scenario example A single twelve-dive scenario for the $d=\mathit{\lambda }/\mathrm{4}$ case is shown in Fig. 7 to highlight the impact of mis-sampling density on observed energy flux and APE. This is an extreme example, with all the glider dives 6–10km from the ADCP (Fig. 7d), and features near-bottom internal tide intensification similar to that observed on the northern flank of the WTR. In this example, the error in measured density is maximum in the lower half of the water column (where ξ[ADCP] is up to 80m; Fig. 7a, b); the resulting ξ[Glider] underestimates ξ[ADCP] by up to 20m and leads by up to 40min. Observed energy flux and APE underestimate true energy flux and APE over the majority of the water column (Fig. 7c); depth-integrated observed energy flux and APE underestimate depth-integrated true energy flux and APE by 772Wm^−1 (${\mathbit{F}}_{\mathrm{err}}=-\mathrm{0.09}$) and 615Jm^−2 (${\mathrm{APE}}_{\mathrm{err}}=-\mathrm{0.2}$), respectively. 5.2Energy flux error Histograms of F[err] (0.005 wide bins) for four watch circle diameter cases are shown in Fig. 8a. The peaked distribution for the $d=\mathrm{1}/\mathrm{32}\mathit{\lambda }$ case broadens with increasing watch circle diameter as well as becoming biased towards negative error. The negative bias results from two related mechanisms. Firstly, the amplitude of ξ[Glider] (and therefore ${p}_{\ mathrm{Glider}}^{\prime }$) is typically underestimated for large watch circles because the glider samples density with a broad range of phase shifts, causing spectral smearing and poor harmonic fits to ξ. Secondly, maximum energy flux occurs when p^′ and ${\mathbit{u}}_{\mathrm{bc}}^{\prime }$ are exactly in phase so any error in the phase of ${p}_{\mathrm{Glider}}^{\prime }$, positive or negative, will also result in a negative bias. F[err] distributions for all nine watch circle diameter cases are shown in Fig. 8c, including the 99% and 95% confidence limits and the bias (median value). As watch circle diameter increases, the width of the confidence intervals increases and the bias becomes progressively more negative. For the $d=\mathrm{1}/\mathrm{32}\mathit{\lambda }$ case, F[err] is ±0.04 at the 99% limit and the bias is near zero (−0.002). For the $d=\mathrm{1}/\mathrm{4}\mathit{\lambda }$ case at the other extreme, F[err] is 0 to −0.31 at the 99% limit and the bias is −0.1. 5.3APE error Histograms of APE[err] for four watch circle diameter cases are shown in Fig. 8b. Compared with F[err], the distributions are broader and with a more negative bias for small watch circles. The broader distribution is explained by the error in ξ[Glider] being squared in Eq. (9). The negative bias is explained by the first mechanism described in Sect. 5.2. APE[err] distributions for all nine watch circle diameter cases are shown in Fig. 8d. Similar to F[err], the width of the confidence intervals increases and the bias becomes progressively more negative as watch circle diameter increases. For $d=\mathrm{1}/\mathrm{32}\mathit{\lambda }$, APE[err] is ±0.08 at the 99% limit and the bias is only −0.005. For $d=\mathrm{1}/\mathrm{4}\mathit{\lambda }$, APE[err] is 0.02 to −0.33 at the 99% limit and the bias is −0.08. Unlike F[err], the bias converges towards a constant value for very large watch circles. A novel approach to measuring internal tide energy flux using a co-located ocean glider and moored ADCP is tested using a dataset collected from the WTR in the North Atlantic. Gliders cannot perfectly hold station, even when operating as a virtual mooring, so error in the energy flux calculation due to imperfect co-location of the glider and ADCP is estimated by subsampling potential density in an idealized internal tide field along pseudorandomly distributed glider paths. If we consider the maximum acceptable energy flux error to be 0.1 (10%), all the glider data must be contained within a watch circle with a diameter smaller than 1∕8 the mode-1 horizontal wavelength of the internal tide. Energy flux is biased low and the negative bias increases with increasing watch circle diameter. If watch circle diameter is larger than 1∕8 the mode-1 horizontal wavelength, the negative bias is more than −0.03 (3%) and all realizations within the 95% confidence interval are underestimates. When on-station over the WTR, the glider stayed within 2.5km of the mooring so watch circle diameter d=5km. The local D[2] mode-1 horizontal wavelength λ≈100km so the $d/\mathit{\ lambda }=\mathrm{0.05}$ case (Table 2) is the most appropriate for the observations presented here. The observed energy flux is estimated to have a negative bias of only −0.004 (0.4%) and an error of less than ±0.03 (3%) at the 95% confidence limit. This estimate does not include the effect of internal tide advection by the barotropic tide (Stephenson et al., 2016), which can lead to an additional negative bias if barotropic velocity amplitude is of a similar size to baroclinic phase speed. Over the WTR, D[2] mode-1 phase speed is ≈2.2ms^−1 and barotropic velocity amplitude is <0.2ms^−1 so we expect this effect to be negligible for our observations. At mid-latitudes, D[2] mode-1 horizontal wavelength for a 1000m deep water column is typically in the range 40-160km. The results presented here suggest energy flux error due to imperfect co-location can be reduced to an acceptable level (10%) if the glider maintains a 5 to 20km diameter watch circle. In the absence of strong tidal and slope currents, a well-trimmed glider diving to 1000m with a relatively steep glide angle can usually maintain a watch circle with a diameter of 5km or less, so energy flux error will typically be <10%. Where horizontal wavelengths are shorter, for example at lower latitudes or in shallower and less stratified water columns, a smaller watch circle will be required to maintain an acceptable level of error. In shallower water, smaller watch circles are generally achievable because horizontal travel over a complete dive cycle scales with dive depth. Diurnal internal tides have longer horizontal wavelengths so larger watch circles are acceptable. For mission planning, the mode-1 horizontal wavelength of a tidal frequency ω can be estimated, $\mathit{\lambda }=\mathrm{2}\mathit{\pi }{c}_{\mathrm{1}}/\sqrt{{\mathit{\omega }}^{\ mathrm{2}}-{f}^{\mathrm{2}}}$, where f is the inertial frequency and ${c}_{\mathrm{1}}=NH/\mathit{\pi }$ is an approximation of mode-1 eigenspeed. If the assumption of uniform stratification is not appropriate, c[1] can be calculated by solving the boundary value problem for a given N(z) (Gill, 1982). Table 2 can then be used to estimate the energy flux bias and error that can be expected for a given value of d∕λ. Including the above estimate of error due to imperfect co-location, the observed depth-integrated D[2] internal tide energy flux over the northern flank of the WTR is 4.2±0.2kWm^−1. This is considerably larger than previous internal tide observations over the southeastern bank of the FSC: 0.2kWm^−1 (90km northeast of the WTR; Hall et al., 2011) and 0.4–0.6kWm^−1 (105km northeast of the WTR; Hall et al., 2017b), but small compared with some deep-ocean ridges, e.g. the Hawaiian Ridge (up to 33kWm^−1; Lee et al., 2006) and Luzon Strait (up to 41kWm^−1; Alford et al., 2011). More comparable to the WTR is the Mendocino Escarpment, where a ridge is orientated perpendicular to the continental slope and the observed energy flux is 7kWm^−1 (Althaus et al., 2003). The 40h co-located time series presented here is not long enough to resolve the internal spring–neap cycle. Peak neap tide occurred on yearday 153^2, 1 day before the majority of the co-located time series. Assuming the internal tide is generated locally at the WTR, the surface and internal spring–neap cycles will be in phase. The observed D[2] energy flux is therefore representative of neap internal tide and so an underestimate of the true M[2] internal tide. This may somewhat explain the slight underestimate compared to the M[2]-only regional tide model. Interestingly, the CTD time series used by Sherwin (1991) was recorded 2 days after peak spring tide so is more representative of spring internal tide. The fact that two observational estimates of D[2] vertical isopycnal displacement, 6.7km apart and at different phases of the internal spring–neap cycle, are so similar implies that there are compensating spatial gradients in internal tide magnitude. The regional tide model shows the possible extent of these gradients and suggests that accurate siting of moorings is crucial for repeated long-term observations. For future experiments, spatial gaps in the time series can be minimized with conductivity–temperature loggers and additional current meters on the mooring line. We have also shown that glider-inferred DAC can provide an accurate estimate of tidal current velocity that could be used to constrain barotropic velocity in the absence of full-depth data coverage by ADCPs and current meters. However, the major limitation of the dataset presented here is the short length of the co-located time series. Future glider missions will hold station by an ADCP mooring for several weeks to resolve the internal spring–neap cycle. Calculating D[2] internal tide energetics in a 36h moving window will yield a time-varying energy flux that can be related to seasonal changes in stratification, advection by mesoscale eddies, spatial and temporal patterns in internal tide-driven turbulent mixing, and the resulting biogeochemical response. Appendix A:Idealized internal tide field An idealized M[2] multi-mode internal tide field is created for a 1000m deep water column with uniform stratification (${N}^{\mathrm{2}}=\mathrm{6.1}×{\mathrm{10}}^{-\mathrm{6}}$s^−2). Horizontal current velocity, $\mathbit{u}=\left(u,v\right)$, and vertical isopycnal displacement, ξ, are defined by summing the first 10 baroclinic modes, $\begin{array}{}\text{(A1)}& \begin{array}{rl}& u\left(x,y,z,t\right)=\sum _{n=\mathrm{1}}^{\mathrm{10}}{u}_{n}\mathrm{sin}\left({k}_{n}x-\mathit{\omega }t-{\mathit{\varphi }}_{n}\right){A}_{n}\left (z\right),\\ & v\left(x,y,z,t\right)=\sum _{n=\mathrm{1}}^{\mathrm{10}}{u}_{n}\frac{f}{\mathit{\omega }}\mathrm{cos}\left({k}_{n}x-\mathit{\omega }t-{\mathit{\varphi }}_{n}\right){A}_{n}\left(z\ $\begin{array}{}\text{(A2)}& \begin{array}{rl}& \mathit{\xi }\left(x,y,z,t\right)=\\ & \sum _{n=\mathrm{1}}^{\mathrm{10}}{u}_{n}\mathrm{sin}\left({k}_{n}x-\mathit{\omega }t-{\mathit{\varphi }}_{n}\ right){B}_{n}\left(z\right)\frac{\mathrm{1}}{\mathit{\omega }}{\left(\frac{{\mathit{\omega }}^{\mathrm{2}}-{f}^{\mathrm{2}}}{{N}^{\mathrm{2}}-{\mathit{\omega }}^{\mathrm{2}}}\right)}^{\mathrm{1}/\ where u[n] and ϕ[n] are the velocity amplitude and the phase of the nth baroclinic mode, respectively; $\mathit{\omega }=\mathrm{1.41}×{\mathrm{10}}^{-\mathrm{4}}$s^−1 is the M[2] frequency; and $f= \mathrm{1.26}×{\mathrm{10}}^{-\mathrm{4}}$s^−1 is the inertial frequency at 60^∘N. A[n](z) and B[n](z) are the vertical structures of horizontal current velocity and vertical isopycnal displacement for each baroclinic mode, and are equivalent to cos(nπz∕H) and sin(nπz∕H), respectively, where n is mode number. Horizontal wavenumber ${k}_{n}=\sqrt{{\mathit{\omega }}^{\mathrm{2}}-{f}^{\mathrm {2}}}/{c}_{n}$, where ${c}_{n}=NH/n\mathit{\pi }$, is an approximation of mode eigenspeed (Gill, 1982). Velocity amplitude decays with mode number, ${u}_{n}={u}_{\mathrm{1}}{e}^{-\mathrm{0.5}\left(n- \mathrm{1}\right)}$, where u[1] is the mode-1 velocity amplitude. This decay rate results in a well-defined internal tide beam if velocity phase is approximately equal for each baroclinic mode. However, a different random set of baroclinic mode phases (ϕ[n]) is used for each scenario simulated so internal tide beams are only apparent in a subset of scenarios. u[1]=0.28ms^−1 yields a mode-1 vertical isopycnal displacement amplitude of 50m but energy flux error and APE error are not sensitive to absolute amplitude. The time-varying potential density field is then $\begin{array}{}\text{(A3)}& \mathit{\rho }\left(x,y,z,t\right)=\stackrel{\mathrm{‾}}{\mathit{\rho }}\left(z\right)+\frac{{\mathit{\rho }}_{\mathrm{0}}}{g}{N}^{\mathrm{2}}\mathit{\xi },\end{array}$ where $\stackrel{\mathrm{‾}}{\mathit{\rho }}\left(z\right)$ is a background density profile with a vertical gradient equivalent to N^2. Barotropic velocity (${\mathbit{u}}_{\mathrm{bt}}^{\prime }$) and residual flow ($\stackrel{\mathrm{‾}}{\mathbit{u}}$) are both zero so ${\mathbit{u}}_{\mathrm{bc}}^{\prime }=\mathbit{u}$. RH led the glider mission, analysed the co-located glider and ADCP dataset, and developed the method for estimating glider sampling error. BB lead the mooring deployment and processing of the ADCP data. GD processed and quality-controlled the glider data. The paper was written by RH with input from the other authors. The authors declare that they have no conflict of interest. This article is part of the special issue “Developments in the science and history of tides (OS/ACP/HGSS/NPG/SE inter-journal SI)”. It is not associated with a conference. SG613 is owned and maintained by the UEA Marine Support Facility. The glider and ADCP mooring were deployed as part of the fourth Marine Autonomous Systems in Support of Marine Observations mission (MASSMO4; funded primarily by the Defence Science and Technology Laboratory) and the Marine Scotland Science Offshore Monitoring Programme. The cooperation of the captain and crew of NRV Alliance (CMRE, Centre for Maritime Research and Experimentation) and MRV Scotia (Marine Scotland) are gratefully acknowledged. The glider data were processed by Gillian Damerell, the ADCP data were processed by Helen Smith and Barbara Berx, and the acoustic current meter data were processed by Jennifer Hindson and Helen Smith. Assistance with glider piloting was provided by the UEA Glider Group. This paper was edited by Mattias Green and reviewed by two anonymous referees. Alford, M. H., MacKinnon, J. A., Nash, J. D., Simmons, H., Pickering, A., Klymak, J. M., Pinkel, R., Sun, O., Rainville, L., Musgrave, R., Beitzel, T., Fu, K.-H., and Lu, C.-W.: Energy flux and dissipation in Luzon Strait: Two tales of two ridges, J. Phys. Oceanogr., 41, 2211–2222, https://doi.org/10.1175/JPO-D-11-073.1, 2011.a, b Althaus, A. M., Kunze, E., and Sanford, T. B.: Internal tide radiation from Mendocino Escarpment, J. Phys. Oceanogr., 33, 1510–1527, 2003.a Baines, P. G.: On internal tide generation models, Deep-Sea Res., 29, 307–338, 1982.a Berx, B., Hindson, J., and Smith, H.: Moored data from NWZ-E monitoring site in the Faroe-Shetland Channel, Marine Scotland, UK, https://doi.org/10.7489/12217-1, 2019.a Blumberg, A. F. and Mellor, G. L.: A description of a three-dimensional coastal ocean circulation model, in: Three-Dimensional Coastal Ocean Models, Vol. 4, edited by: Heaps, N. S., American Geophysical Union, Washington, DC, 1–16, 1987.a Boettger, D., Robertson, R., and Rainville, L.: Characterizing the semidiurnal internal tide off Tasmania using glider data, J. Geophys. Res.-Oceans, 120, 3730–3746, https://doi.org/10.1002/ 2015JC010711, 2015.a Emery, W. J. and Thomson, R. E.: Data Analysis Methods in Physical Oceanography, Elsevier, Amsterdam, 2 Edn., 654 pp., 2001.a Eriksen, C. C., Osse, T. J., Light, R. D., Wen, T., Lehman, T. W., Sabin, P. J., Ballard, J. W., and Chiodi, A. M.: Seaglider: a long-range autonomous underwater vehicle for oceanographic research, IEEE J. Oceanic Eng., 26, 424–436, https://doi.org/10.1109/48.972073, 2001.a, b Frajka-Williams, E., Eriksen, C. C., Rhines, P. B., and Harcourt, R. R.: Determining vertical water velocities from Seaglider, J. Atmos. Ocean. Tech., 28, 1641–1656, https://doi.org/10.1175/ 2011JTECHO830.1, 2011.a, b Garau, B., Ruiz, S., Zhang, W. G., Pascual, A., Heslop, E., Kerfoot, J., and Tintoré, J.: Thermal lag correction on Slocum CTD glider data, J. Atmos. Ocean. Tech., 28, 1065–1071, https://doi.org/ 10.1175/JTECH-D-10-05030.1, 2011.a Gill, A. E.: Atmosphere-Ocean Dynamics, Academic Press, 662 pp., 1982.a, b Hall, R. A., Huthnance, J. M., and Williams, R. G.: Internal tides, nonlinear internal wave trains, and mixing in the Faroe-Shetland Channel, J. Geophys. Res., 116, C03008, https://doi.org/10.1029/ 2010JC006213, 2011.a, b, c, d, e, f Hall, R. A., Aslam, T., and Huvenne, V. A. I.: Partly standing internal tides in a dendritic submarine canyon observed by an ocean glider, Deep-Sea Res. Pt. I, 126, 73–84, https://doi.org/10.1016/ j.dsr.2017.05.015, 2017a.a, b Hall, R. A., Berx, B., and Inall, M. E.: Observing internal tides in high-risk regions using co-located ocean gliders and moored ADCPs, Oceanography, 30, 51–52, https://doi.org/10.5670/ oceanog.2017.220, 2017b.a, b, c Hopkins, J. E., Stephenson, G. R., Green, J. A. M., Inall, M. E., and Palmer, M. R.: Storms modify baroclinic energy fluxes in a seasonally stratified shelf sea: Inertial-tidal interaction, J. Geophys. Res.-Oceans, 119, 6863–6883, https://doi.org/10.1002/2014JC010011, 2014.a IOC, SCOR, and IAPSO: The international thermodynamic equation of seawater – 2010: Calculation and use of thermodynamics properties, in: Intergovernmental Oceanographic Commission, Manuals and Guides, 56, p. 196, UNESCO, 2010.a Johnston, T. M. S. and Rudnick, D. L.: Trapped diurnal internal tides, propagating semidiurnal internal tides, and mixing estimates in the California Current System from sustained glider observations, 2006–2012, Deep-Sea Res. Pt. II, 112, 61–78, https://doi.org/10.1016/j.dsr2.2014.03.009, 2015.a Johnston, T. M. S., Rudnick, D. L., Alford, M. H., Pickering, A., and Simmons, H. J.: Internal tide energy fluxes in the South China Sea from density and velocity measurements by gliders, J. Geophys. Res.-Oceans, 118, 3939–3949, https://doi.org/10.1002/jgrc.20311, 2013.a Johnston, T. M. S., Rudnick, D. L., and Kelly, S. M.: Standing internal tides in the Tasman Sea observed by gliders, J. Phys. Oceanogr., 45, 2715–2737, https://doi.org/10.1175/JPO-D-15-0038.1, 2015. Kunze, E., Rosenfeld, L. K., Carter, G. S., and Gregg, M. C.: Internal waves in Monterey Submarine Canyon, J. Phys. Oceanogr., 32, 1890–1913, 2002.a Lee, C. M., Kunze, E., Sanford, T. B., Nash, J. D., Merrifield, M. A., and Holloway, P. E.: Internal tides and turbulence along the 3000-m isobath of the Hawaiian Ridge, J. Phys. Oceanogr., 36, 1165–1182, 2006.a Nash, J. D., Alford, M. H., and Kunze, E.: Estimating internal wave energy fluxes in the ocean, J. Atmos. Ocean. Tech., 22, 1551–1570, 2005.a, b, c Pingree, R. D., Mardell, G. T., and New, A. L.: Propagation of internal tides from the upper slopes of the Bay of Biscay, Nature, 321, 154–158, 1986.a Queste, B. Y.: Hydrographic observations of oxygen and related physical variables in the North Sea and Western Ross Sea Polynya, PhD thesis, School of Environmental Sciences, University of East Anglia, 215 pp., 2014.a Rainville, L., Lee, C. M., Rudnick, D. L., and Yang, K.-C.: Propagation of internal tides generated near Luzon Strait: Observations from autonomous gliders, J. Geophys. Res., 118, 4125–4138, https:// doi.org/10.1002/jgrc.20293, 2013.a Rudnick, D. L. and Cole, S. T.: On sampling the ocean using underwater gliders, J. Geophys. Res., 116, C08010, https://doi.org/10.1029/2010JC006849, 2011.a Rudnick, D. L., Johnston, T. M. S., and Sherman, J. T.: High-frequency internal waves near the Luzion Strait observed by underwater gliders, J. Geophys. Res., 118, 1–11, https://doi.org/10.1002/ jgrc.20083, 2013.a Sharples, J., Tweddle, J. F., Green, J. A. M., Palmer, M. R., Kim, Y.-N., Hickman, A. E., Holligan, P. M., Moore, C. M., Rippeth, T. P., Simpson, J. H., and Krivtsov, V.: Spring-neap modulation of internal tide mixing and vertical nitrate fluxes at a shelf edge, Limnol. Oceanogr., 52, 1735–1747, 2007. a Sharples, J., Ellis, J. R., Nolan, G., and Scott, B. E.: Fishing and the oceanography of stratified shelf seas, Prog. Oceanogr., 117, 130–139, https://doi.org/10.1016/j.pocean.2013.06.014, 2013.a Sherwin, T. J.: Evidence of a deep internal tide in the Faeroe-Shetland channel, in: Tidal Hydrodynamics, edited by: Parker, B. B., John Wiley & Sons, New York, 469–488, 1991.a, b, c, d, e Stephenson, G. R., Green, J. A. M., and Inall, M. E.: Systematic bias in baroclinic energy estimates in shelf seas, J. Phys. Oceanogr., 46, 2851–2862, https://doi.org/10.1175/JPO-D-15-0215.1, 2016.a Todd, R. E., Rudnick, D. L., Sherman, J. T., Owens, W. B., and George, L.: Absolute velocity estimates from autonomous underwater gliders equipped with Doppler current profilers, J. Atmos. Ocean. Tech., 34, 309–333, https://doi.org/10.1175/JTECH-D-16-0156.1, 2017.a Wynn, R. B., Wihsgott, J. U., Palmer, M. R., Lichtman, I. D., Miller, P., Goult, S., Nencioli, F., Loveday, B. R., Jones, S., Inall, M. E., Dumont, E., Venables, E., Jones, O., Risch, D., Hall, R. A., Cauchy, P., Pierpoint, C., Doran, J., Mowat, R., and Damerell, G. M.: MASSMO 4 project ocean glider and autonomous surface vehicle data, British Oceanographic Data Centre, National Oceanography Centre, NERC, UK, https://doi.org/10.5285/9373933d-48c1-5a37-e053-6c86abc0e213, 2019.a Zhao, Z., Alford, M. H., Lien, R.-C., Gregg, M. C., and Carter, G. S.: Internal tides and mixing in a submarine canyon with time-varying stratification, J. Phys. Oceanogr., 42, 2121–2142, https:// doi.org/10.1175/JPO-D-12-045.1, 2012.a Apogee is the phase of the dive between descent and ascent, when the glider pitches upwards and increases its buoyancy. Flow through the conductivity cell is unpredictable during this phase and so salinity spikes are common. We refer to time using yearday, defined as decimal days since midnight on 31 December 2016 (e.g. noon on 31 January 2017 is yearday 30.5).
{"url":"https://os.copernicus.org/articles/15/1439/2019/os-15-1439-2019.html","timestamp":"2024-11-08T21:14:43Z","content_type":"text/html","content_length":"332293","record_id":"<urn:uuid:b0dbd829-c7e2-4784-8366-c2573900b8c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00381.warc.gz"}
Pappus' Sangaku Few sangaku illustrate better the disparity of the independent development of mathematics during the Edo period (1603-1867) of self-imposed seclusion of Japan from the Western world than those related to arbelos and Pappus' chain of circles. The tool of inversion so useful in solving problems involving circles remained unkown to the Japanese until late 19^ century. Consider the sangaku 1.8.2 from the collection by Fukagawa and Pedoe: Circles C[1](r) and C[2](r) touch at O in the circle O(2r) and also touch O(2r), C[1](r) touching at T. O[1](r[1]) touches O(2r) internally and both C[1](r) and C[2](r) externally, O[2](r[2]) touches O[1](r[1]) and C[1](r) externally and O(2r) internally, and so on. Find r[n]. 1. H. Fukagawa, D. Pedoe, Japanese Temple Geometry Problems, The Charles Babbage Research Center, Winnipeg, 1989 Write to: Charles Babbage Research Center P.O. Box 272, St. Norbert Postal Station Winnipeg, MB Canada R3V 1L6 |Contact| |Front page| |Contents| |Geometry| |Up| Copyright © 1996-2018 Alexander Bogomolny Circles C[1](r) and C[2](r) touch at O in the circle O(2r) and also touch O(2r), C[1](r) touching at T. O[1](r[1]) touches O(2r) internally and both C[1](r) and C[2](r) externally, O[2](r[2]) touches O[1](r[1]) and C[1](r) externally and O(2r) internally, and so on. Find r[n]. With the help of inversion the solution is immediate. In the generalization of Archimedes' formula r[n] = rk / (k²n² + k + 1). where k is the ratio between the radii of the two inner circles, i.e. k = 1 in this case. Thus the answer to the problem is r[n] = r / (n² + 2), exactly the expression that appears in the Gothic Arc diagram. In the very first volume (1826) of Crelle's Journal, J. Steiner proved several theorem concerning arbelos, the above being one of them. The sangaku however is dated 1801 (and was written in the Mie |Contact| |Front page| |Contents| |Geometry| |Up| Copyright © 1996-2018 Alexander Bogomolny
{"url":"https://www.cut-the-knot.org/pythagoras/SteinerSangaku.shtml","timestamp":"2024-11-02T06:39:40Z","content_type":"text/html","content_length":"25538","record_id":"<urn:uuid:2a72e4a5-233b-424a-bcb2-1e63f3d81240>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00058.warc.gz"}
Conditions for partial fraction method Q. For a partial fraction method to be followed, 1) The degree of the numerator must be more than the degree of the denominator. 2) The factors formed for partial fraction are a combination of Linear factors and Irreducible quadratic factors. 3) The degree of the numerator must be less than the degree of the denominator. 4) The factors formed for partial fraction are a combination of Linear factors and Square roots.- Published on 27 Nov 15 a. 1, 2 and 3 are correct b. 1 and 2 are correct c. 2 and 3 are correct d. All the four are correct ANSWER: 2 and 3 are correct ➨ Post your comment / Share knowledge Enter the code shown above: (Note: If you cannot read the numbers in the above image, reload the page to generate a new one.)
{"url":"https://www.careerride.com/mchoice/conditions-for-partial-fraction-method-17648.aspx","timestamp":"2024-11-10T21:33:41Z","content_type":"text/html","content_length":"18428","record_id":"<urn:uuid:05d1aa12-889a-4fa6-ac92-6d120d268664>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00422.warc.gz"}
Apples - math word problem (1305) Hanka has 5 apples more than Juro and 7 apples less than Mirka. Mirka has 19 apples. How many apples have Hanka, and how many Juro? Correct answer: Did you find an error or inaccuracy? Feel free to write us . Thank you! Tips for related online calculators Do you have a linear equation or system of equations and are looking for its ? Or do you have a quadratic equation You need to know the following knowledge to solve this word math problem: Related math problems and questions:
{"url":"https://www.hackmath.net/en/math-problem/1305","timestamp":"2024-11-02T12:12:42Z","content_type":"text/html","content_length":"49875","record_id":"<urn:uuid:1975ca60-05c9-4a3c-96c9-597a76b70fb4>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00470.warc.gz"}
Continuous Variables Graph States Shaped as Complex Networks: Optimization and Manipulation Laboratoire Kastler Brossel, Sorbonne Université, CNRS, ENS-PSL Research University, Collège de France, 4 Place Jussieu, F-75252 Paris, France Author to whom correspondence should be addressed. Submission received: 31 October 2019 / Revised: 19 December 2019 / Accepted: 20 December 2019 / Published: 24 December 2019 Complex networks structures have been extensively used for describing complex natural and technological systems, like the Internet or social networks. More recently, complex network theory has been applied to quantum systems, where complex network topologies may emerge in multiparty quantum states and quantum algorithms have been studied in complex graph structures. In this work, we study multimode Continuous Variables entangled states, named cluster states, where the entanglement structure is arranged in typical real-world complex networks shapes. Cluster states are a resource for measurement-based quantum information protocols, where the quality of a cluster is assessed in terms of the minimal amount of noise it introduces in the computation. We study optimal graph states that can be obtained with experimentally realistic quantum resources, when optimized via analytical procedure. We show that denser and regular graphs allow for better optimization. In the spirit of quantum routing, we also show the reshaping of entanglement connections in small networks via linear optics operations based on numerical optimization. 1. Introduction In the last few decades, network theory has provided a natural framework for describing complex natural, social, and technological structures [ ]. Recurrent types of complex networks, like the scale-free networks, have been recovered in phenomena at different scales, where the functionality of the systems seems to be closely related to their structure [ ]. More recently, complex networks have gained attention in the quantum realm, where several theoretical works [ ] show that complex structures may play a role in quantum information technologies. It is clear that, as quantum architectures are reaching larger scales, their internal arrangement starts to play a significant role in their functionality. Moreover, network structures are clearly at the base of quantum communication protocols [ ], but also appear in particular kinds of multiparty entangled states that allow for measurement-based quantum computing (MBQC) protocols [ Recently, new records have been established in superconductor [ ] and Rydberg [ ] based technologies, and extremely large entangled states have been generated in the optical domain [ ]. We can describe superconducting and Rydberg platforms as networks of interacting qubits, where quantum information is encoded in Discrete Variables (DV) states. Differently, in the optical domain, networks of entangled optical modes are deterministically generated and measurement-based quantum protocols can be implemented via a Continuous Variables (CV) encoding [ ]. The control on the reconfigurability of the cited networks is gradually increasing and, in particular, in the CV case, it is possible to have totally reconfigurable networks (with all-to-all connections) mimicking complex networks structures [ Here, we study CV entangled states, named CV graph states or CV cluster states, arranged with shapes that are typical of real-world complex networks, in order to investigate their properties for quantum computing or quantum networking (communication) protocols. CV graphs have been introduced in the context of measurement-based quantum computing exploiting CV resources [ ]. While usually the name “cluster” is used when the graph shape allows for universal quantum computing, in this work, we will use the terms “cluster” state and “graph state” as synonyms. In CV-quantum computing, quantum information is encoded in variables that can take a continuum spectrum of values. In optics, these variables are the quadratures of the electromagnetic field (called amplitude and phase quadrature) and they correspond to the position and momentum of the harmonic oscillator, which represents the behavior of a single mode of the field. Ideal cluster states, which involve perfect correlations between quadratures of different optical modes, cannot be reached experimentally, as they would require an infinite amount of energy (squeezing) to be established. Thus, experimental clusters always involve a poorer degree of correlations, which cause errors and noise in computation. Given experimentally accessible resources, analytical optimization protocols have been proposed to choose state manipulations that better distribute the correlations in order to minimize errors [ ], then improve the quality of the cluster. In this work, we apply the optimization procedure to complex cluster shapes and we find that denser and more regular network shapes allow for better quality [ As the classical internet can be described via a complex network model, it is worth considering complex network shapes in quantum communication protocols. In particular, we are interested in quantum routing, i.e., the establishment of a direct entanglement link between two arbitrary nodes of a network in order to have an exploitable channel for quantum teleportation. Quantum routing protocols have been mainly studied in the DV approach [ ], while here we study the particular setting of CV quantum resources. In particular, instead of considering a set of initially disconnected nodes subjected to routing via local operations, we consider a network of CV entanglement correlations, which can be easily produced with the current technology [ ], and then we reconfigure it via partially-local and easy operations (linear optics). The optimal operations are obtained via numerical optimization. We found solutions for small networks of different geometry which are shared between two parties that are allowed to act via local linear optics operations in their set of nodes [ 2. Results 2.1. Background: Cluster States and Complex Networks Ideal cluster states are built starting from a set of modes of light, which are placed in the zero-momentum eigenstate $| 0 〉 p$, i.e., infinitely squeezed vacuum states along the momentum quadrature p. Entangling $C Z = exp ( ı q ^ i ⊗ q ^ j )$ gates are then applied between couples of modes i and j, according to a given configuration, which can be represented by a graph. A graph is defined by a set ${ V , E }$, where $V$ is the set of nodes (vertices) and $E$ are the edges. In the graphical representation of the cluster, the nodes represent different modes of light, while the edges between the nodes are the entanglement correlations given by the application of the $C Z$ gates. The graph is characterized by the adjacency matrix V whose elements $V i j$ are set to 1 when an entangling gate has been applied between the two nodes i and j, and 0, otherwise. An ideal cluster state of modes with adjacency matrix is given by $| Ψ C 〉 = ∏ 1 ≤ i < j < N exp ( ı V i j q i ⊗ q j ) | 0 〉 p ⨂ N .$ Cluster states are characterized by a particular set of operators, called , that read $δ ^ i = p ^ i − ∑ j ∈ N ( i ) q ^ j , such that δ ^ i | Ψ C 〉 = 0 ,$ $δ ^ i$ $N ( i )$ denote respectively the nullifier and the set of the nearest neighbours of the -th node. Cluster states are eigenstates of the nullifiers with zero eigenvalue, so, for an ideal cluster, the following condition holds: $Δ 2 δ i = 0$ . In addition, being Gaussian states, cluster states are fully described by mean values of quadratures and covariance matrices [ ]. By defining the vector $X ^ = { q ^ 1 , … , q ^ N , p ^ 1 , … , p ^ N }$ , cluster states are identified by mean values $〈 X ^ i 〉 = 0$ and covariance matrix with elements $σ i j = ( 1 / 2 ) 〈 ( X ^ i X ^ j + X ^ j X ^ i ) 〉$ As already said, in a realistic situation, CV cluster states are always imperfect, due to the impossibility of reaching an infinite amount of squeezing to obtain the zero momentum eigenstate. Approximated clusters, when used in measurement-based quantum computing protocols, at each step of the computation, introduce a certain amount of noise that can be quantified with the variance of the nullifiers of the cluster [ ]. Therefore, the smaller the value of $Δ 2 δ i$ is for every node, the better the quality of cluster is. Instead of acting with $C Z$ gates on multimode squeezed vacuum states, as in the original formulation, cluster states can be implemented via linear optics transformations on the multimode squeezed vacuum state [ ]. This is the implementation that is pursued in experimental setups [ ], as it is easier and less costly than the implementation of $C Z$ , which requires online squeezing. Linear optics manipulations are described by unitary transformations U on the creation and annihilation operators of the different light modes, so that the annihilation operator of the mode i is transformed as $a ^ i → ∑ j U j i a ^ i$. This corresponds to orthogonal symplectic transformations S acting on the quadratures, which transform the covariance matrix as $σ → S σ S T$. To implement a cluster with adjacency matrix , we can apply the following unitary operation: $U = ( I + ı V ) ( V 2 + I ) − 1 2 O ,$ is the identity matrix and is an arbitrary real orthogonal matrix. This orthogonal matrix provides supplementary $N ( N − 1 / 2 )$ degrees of freedom that can be exploited to optimize given properties of the cluster [ ]. In particular, we can optimize given properties of the nullifiers via an analytical protocol [ ], with the aim, for example, of reducing their variances, hence improving the quality of the cluster. In this work, we investigate, for the first time, clusters whose graphical representation ${ V , E }$ corresponds to three different models of complex networks [ ]: the Barabási–Albert [ ], the Erdős–Rényi [ ], and the Watts–Strogatz [ ]. These models have been developed in graph theory in order to reproduce the behaviors of complex systems. The Erdős–Rényi has been the first model considered for complex networks. It is based on random graph: given a fixed number of nodes , two nodes have the probability $p E R$ to be connected by an edge. The model generates graphs with approximately $p E R n ( n − 1 ) / 2$ edges which are distributed randomly. Most nodes have a comparable node degree , defined as the number of edges of the given node. Later, by looking at the degree distribution of many real-world networks, it has been clear that complex networks go beyond the random graph model, as many networks (like Internet or the WWW) are characterized by a power-law distribution of the degree. Thus, new models, such as the Barabási–Albert, have been developed in order to reproduce the structure of these networks, also known as scale-free networks. Barabási–Albert networks grow from a small number of nodes according to preferential attachment, i.e., the fact that new nodes attach preferentially to nodes that already have a high degree (high number of links). In the growth process, the parameter $m B A$ specifies the number of links coming with the new node. Hubs, i.e., heavily linked nodes, arise spontaneously in this Watts–Strogatz networks lie between regular and random graphs and exhibit small-world properties typical of social networks. They are built from a regular network by rewiring its edges, to increase randomness. From a ring lattice with nodes and edges per node, which connect it to the closest neighbors, each edge is randomly rewired with probability $p W S$ . The rewiring parameter $p W S$ spans from 0, the regular graph, to 1, which corresponds to a totally random graph, as shown in Figure 1 . Even if we are going to study graphs with a small number of nodes compared to the ones for which these models have been developed, we can assess a different behavior for the corresponding graph states in the three classes. In the following, we will see how, starting from a given set of modes with finite squeezing values, we can optimize the quality of the clusters corresponding to these complex networks models and how the result of the optimization depends on the parameters of the analyzed model. 2.2. Improving the Overall Quality of a Complex Cluster We test here cluster states with the structure of the three models defined above: Barabási–Albert (BA), Erdős–Rényi (ER), and Watts–Strogatz (WS), with different characterizing parameters $m B A$ $p E R$ , and $p W S$ . In Figure 2 , the difference between a scale-free network and a random network with a comparable average degree is shown. As we see in Figure 2 a, the Barabási–Albert network exhibits highly connected nodes ( ) that are absent in the Erdős–Rényi model ( Figure 2 We consider at first the implementation of 48-mode complex clusters. As explained above, the clusters are implemented from a 48-mode squeezed vacuum via the application of the linear optics unitary of Equation ( ) corresponding to the graphical shape we want to reach [ ]. The scheme is presented in Figure 3 a. In order to pursue realistic implementations, we consider the list of squeezing values presented in Figure 3 b, which corresponds to a series of faithful values that can be obtained via the Schmidt decomposition of parametric process Hamiltonian of the experiment described in [ ]. In this work, the input squeezing values for the implementation of the cluster are fixed: this is because our interest lies in understanding what is the best we can do when, like in a realistic situation, we are provided with a given set of squeezed input states. We stress that this scheme, as well as the associated results, is different to the one dealing with the canonical decomposition of [ The obtained clusters are optimized, acting on the parameters of the arbitrary orthogonal matrix of Equation ( ), by minimizing the fitness function $f ( Δ 2 δ i ) = ∑ i Δ 2 δ ¯ i$ , where $Δ 2 δ ¯ i$ denotes the -th nullifier normalized with the vacuum noise. As BA, ER, and WS are statistical models for graph families, we implement $N = 100$ graphs for each model with a given set of parameters. We define the average quality of a single graph $μ j$ and the average quality for a set of graphs as , as follows: $μ j = 1 48 ∑ i Δ 2 δ ¯ i , j ; μ = 〈 μ j 〉 ,$ $Δ 2 δ ¯ i , j$ identifies variance of the -th nullifier of the -th graph. We will indicate as the standard deviation of the $μ j$ values. This way, we can average out the fluctuations due to the randomness of the complex shape. As we can see from Table 1 , the implementation of quantum complex networks following the Barabási–Albert model or the Erdős–Rényi model shows that the quality of the cluster increases when the average number of edges per node, the average degree $〈 k 〉$ , increases. The average degree $〈 k 〉$ can be raised by increasing the parameters $m B A$ $p E R$ , for the BA and the ER models, respectively. The results clearly show that the quality of the cluster increases with these parameters, until the limiting case of the fully connected graph ( $m B A = 47$ for the Barabási–Albert model and $p E R = 1$ for the Erdős–Rényi model) is reached. The Watts–Strogatz cluster confirms that the quality of the cluster increases for a larger value of $〈 k 〉$ , as we can see by comparing Table 2 a,b, but it also shows a peculiar behavior that depends on $p W S$ . As already said, $p W S$ is a rewiring parameter that can be varied from 0 to 1 in order to tune the randomness of the graph without changing the average degree, as the number of edges is not changed by the rewiring. From Table 2 , it is clear that the quality of the cluster is reduced when $p W S$ approaches 1, so that regular graphs states are optimized better than random graphs. The results we obtained on the 48-node graphs we analyzed show a dependence between the quality of the optimization and the topology of the graph and in particular its dependence on the average $〈 k 〉$ . In order to reduce the influence of the finite size in the network models we have been using, we repeat the procedure for larger networks. We optimize the mean value of nullifiers squeezing (Equation ( )) of a set of 10 complex graphs of 1000 nodes for a certain complex network model with given parameters. The optimization is reiterated by varying the average degree $〈 k 〉$ and by varying the model. The initial list of 1000 squeezing values is obtained from a pseudorandom number generator that created uniformly distributed random numbers in the range $[ − 14 , − 3 ]$ . The results are reported in Figure 4 The data show that the regular cluster (Watts–Strogatz model with $p W S = 0$) converges very fast to its optimal overall quality. Indeed, as we expected from the 48-node cluster analysis, we see that, for a fixed $〈 k 〉$, increasing the regularity, via the $p W S$ parameter, results in a better quality of the optimized cluster and in a faster convergence, as we can see comparing the curves for the Watts–Strogatz model with $p W S = 0$, $p W S = 0.25$ and $p W S = 0.5$. On the other hand, the difference among the Barabási–Albert, Erdős–Rényi and Watts–Strogatz $p W S = 0.5$ complex graphs is less significant. Nevertheless, the convergence of the Watts–Strogatz follows a different behavior, being closer to the Barabási–Albert for lower $〈 k 〉$ and converging to the behavior of the Erdős–Rényi for larger $〈 k 〉$. The Erdős–Rényi is found to be the one with the worst overall quality, differing only slightly from the Barabási–Albert behavior. 2.3. Concentrating the Squeezing The overall quality $μ$ is the quantity to optimize if we want to use the cluster in MBQC protocols. On the contrary, if we want to use just two nodes to perform a quantum teleportation protocol, the best we can do is to concentrate the entanglement correlations in the two nodes: this corresponds to concentrating the squeezing on the nullifiers of the two chosen nodes. Given a set of squeezing values for the input state we use the protocol to concentrate the squeezing on the nullifiers of two nodes $n 1$ and $n 2$, using the fitness function $f ( Δ 2 δ ^ i , n 1 , n 2 ) = ∑ i A i ( n 1 , n 2 ) Δ 2 δ i ¯$, where $n 1$ and $n 2$ are two given modes. We set $A i ( n 1 , n 2 ) = 10 5$ if $i = n 1 , n 2$ and $A i ( n 1 , n 2 ) = 1$ otherwise. We see that the squeezing of the nullifiers is indeed concentrated on the two desired nodes, by reaching (and never exceeding) the highest squeezing values provided on the set of input states. In this case, we will indicate as $μ j$ the mean of the nodes of the graph that are not concerned with the teleportation protocol and with the mean of the $μ j$ values as follows: $μ j = 1 46 ∑ i ≠ n 1 , n 2 Δ 2 δ ¯ i , j ; μ = 〈 μ j 〉 ,$ $Δ 2 δ ¯ i , j$ identifies variance of the -th nullifier of the -th graph. $μ n 1 / n 2$ denotes the mean of the set of values $Δ 2 δ ¯ n 1 / n 2 , j$ , where $n 1$ $n 2$ are two given nodes on which we chose to perform the teleportation. As an example, in Table 3 , we show that, for a 48-node Barabási–Albert model, the squeezing on the two selected nodes $n 1$ $n 2$ indeed take the highest value provided on the input list of Figure 3 b. The same results hold for the Erdős–Rényi model and the Watts–Strogatz model. 2.4. Creating a Quantum Channel between Nodes by Manipulating Existing Networks In the previous section, we have seen how to optimize the generation of complex clusters when we have multimode squeezed vacuum modes and we perform linear optics transformations. As already said, several experimental setups demonstrated the ability to deterministically generate large clusters following this approach; therefore, they can be used to generate the complex clusters presented above. The generated cluster can then be distributed among different parties, i.e., different optical modes corresponding to different nodes of the network can be sent to different players, which can eventually use the entanglement correlations between the shared nodes for quantum communication protocols. According to the given task, it may be necessary to reshape the entanglement correlations among the set of nodes. We consider here the simplest case: the protocol wants to establish a maximally entangled state between two arbitrary nodes of a network in order to use it for quantum teleportation. The two-nodes entangled state is a two mode-squeezed state also called EPR, as it is the approximation of the entangled state used by Einstein, Poldolsky, and Rosen in their famous paper in 1935 [ The task of generating an entanglement link between chosen nodes corresponds to what is called quantum routing. As already said, the procedure we follow is well suited to CV entangled networks as it is relatively easy to deterministically generate the resources and then reshape them, while in the DV case the generation of optical entangled networks is costly and not deterministic so that the best procedure consists in routing the right entanglement connections at the beginning. In the following, we reshape the resource by allowing only for the easiest operations that preserve the number of nodes, i.e., passive linear optics transformations. Quadrature measurements for cluster state reshaping are also an option, but this would imply the removal of the measured node and subsequently the owner of said mode could be cut off from the communication network. For this reason, we will concentrate on linear optics operations, and we will leave open the possibility, in future works, of adding measurements if no other option is possible. These can be global, when they operate on all the nodes at the same time, or local, when they act on a subset of nodes. The case of a global transformation is somewhat trivial as, if we are provided with a cluster A implemented with a linear optics transformation $S A$ acting on a set of squeezed input states, it is always possible to find the transformation that leads us to the cluster B. This transformation is simply $S = S B · S A − 1$, where $S B$ is the linear optics transformation that we should perform on the same set of input states to build the cluster B. We now consider a more interesting scenario: the modes of the cluster are distributed to two spatially separated parties, such that each party is allowed to perform local linear optics transformations only on its set of nodes, as shown in Figure 5 . We then want to check which cluster shapes can be reconfigured in order to get a teleportation channel between two arbitrary nodes. A solution to this problem has already been found if we allow for more general symplectic transformations and weighted graphs [ Let us say that are the number of nodes of party A (Alice) and B (Bob), respectively. We want to act with a linear optics transformation $U A$ locally on the modes and with a linear optics transformation $U B$ locally on the modes. The transformation acting on the whole set of quadratures of the modes then reads $S = Re ( U A ) 0 − Im ( U A ) 0 0 Re ( U B ) 0 − Im ( U B ) Im ( U A ) 0 Re ( U A ) 0 0 Im ( U B ) 0 Re ( U B ) ,$ $U A$ $U B$ are two unitary matrices parametrized respectively by $n 2$ $p 2$ parameters [ ]. A method for the generation of numerically random unitary matrices is presented in [ ]. If we define $σ 1$ as the covariance matrix of the cluster we are given and $σ 2$ as the covariance matrix of the cluster we obtain after the transformation, holds, where is defined in Equation ( ). Our goal is to find the two matrices $U A$ $U B$ , whose real and imaginary parts define , such that $σ 2$ is of the desired form. In our case, we want $σ 2$ to be such that, for two given nodes, we get an EPR channel. In this case, it is hard to find an analytical solution to the problem, so we use a Derandomized Evolution Strategy (DES) algorithm to explore the parameter space [ $n 1$ $n 2$ are the nodes out of which we want to obtain an EPR channel, we can define $σ 2 , r e d ( n 1 , n 2 ) = σ 2 , n 1 σ 2 , n 2 σ 2 , n 2 + n + p σ 2 , n 2 + n + p$ as the reduced $4 × 2 ( n + p )$ matrix obtained by selecting only the rows of $σ 2$ we want to set as a quantum channel. We thus define the $4 × 2 ( n + p )$ $σ c h a n n e l$ as the matrix with null entries except for $σ 1 , n 1 = σ 2 , n 2 = σ 3 , n 1 + n + p = σ 4 , n 2 + n + p = α ,$ $σ 1 , n 2 + p + n = σ 2 , n 1 + p + n = σ 3 , n 2 = σ 4 , n 1 = β ,$ where the subscript “channel” has been omitted for simplicity and where are numerical values that we fix according to the squeezing we want to attain on the modes of the quantum channel. This squeezing cannot exceed the squeezing of the input states of the cluster. For simplicity, we worked with an equal value of squeezing of –10 dB, both for the input squeezed states used to implement the cluster and for the squeezing chosen for the quantum channel. We will search for the minimal value of the function $f o p t = ‖ σ c h a n n e l − σ 2 , r e d ( n 1 , n 2 ) ‖ ,$ $‖ · ‖$ indicates the Frobenius norm. Table 4 , we show preliminary results on different graphs with a restricted number of nodes, shown in Figure 6 . For the 6-mode and 10-mode “grid” graphs and the graphs “X” and “Y”, a result can be found for the creation of a quantum channel between Alice and Bob. For these structures, however, it was not possible to find a solution for the creation of a quantum channel between nodes of the same team. The opposite stands for the fully-connected cluster, for which it was possible to create a channel between nodes of the same team but not between nodes of different teams. Lastly, for the graph “Z”, which represents two 3-mode cluster states distributed to the two parties, for the dual-rail and for the 8-mode “grid” graph, no solution was found. As an example, the results of the fully-connected graph and of the 6-mode “grid” graph of Figure 6 are shown in Appendix A 3. Discussion We have shown that, for CV cluster states with complex graphical representation the quality, measured as the mean of the variance of the nullifiers, can be better optimized when the quantity of entanglement links increases. This is probably due to the fact that, if the number of links is higher, there are more available ways, given by the newly introduced links, of distributing the larger values of the initial squeezing. We have analyzed in the quantum regime different complex shapes corresponding to different models of real-world networks. In the Barabási–Albert and Erdős–Rényi models, regardless of the topology, clusters with a similar average degree $〈 k 〉$ have a comparable overall quality. The average degree $〈 k 〉$ in complex graphs could thus be used as a benchmark for the quality of the state implemented with the optimization protocol. Moreover, analyzing “small-world” networks that evolve from a regular network to a random network as their characteristic parameter $p W S$ increases, we found that randomness in the structure is detrimental to the quality of the state. The optimization procedure can be also used to concentrate the entanglement between two given nodes. Global optimizations of cluster states via linear optics unitaries can always be realized via analytical procedures. On the contrary, if the cluster is distributed to different locations and the players want to reshape it via local operations (quantum routing) with the aim of performing quantum communication protocols, we need to use a numerical procedure in order to deal with the larger number of constraints. In this case, we have used a Derandomized Evolutionary Strategy (DES) algorithm with a suitable fitness function, the one of Equation ( ), in order to find solutions to generate an EPR state between two arbitrary nodes of a network shared between two parties, which can perform only local linear optics operations. We have studied small networks and we have found that it is possible to create EPR channels between two nodes of the two different teams for some network shapes, while creating an EPR channel from two nodes of the same team has never been possible, except for the case of the fully connected network. Thus, except for this last case, it has never been possible, by using only local linear optics operations, to disconnect two nodes of one player from the ones of the other player. It has to be stressed that, in the cases where no solution is found, we cannot conclude with certainty that the solution does not exist, as the DES algorithm can be stuck in a local minimum in the parameters space. In future works, we will investigate quantum routing operations when complex clusters are distributed between several parties by also adding, when linear optics operations are not sufficient, quadrature measurements. The measurement of the quadrature of a mode of the cluster allows in fact for node removal, while the measurement of is useful in wire shortening [ ]: both can be used in order to cut the residual edges after the optimization via linear optics operations. 4. Materials and Methods To implement the networks and carry out the data analysis, we used Wolfram Mathematica Version 11.3 (Wolfram Research, Champaign, Illinois, USA). In particular, the Barabási–Albert, Erdős–Rényi and Watts–Strogatz models are already embedded in the software. Wolfram Mathematica has been used also to implement the DES (μ-λ) iso-CMA algorithm presented in [ ]. The goal of a DES algorithm is the optimization of a given function $f ( x )$ , where is a vector of parameters. We are free to choose our starting point $x o l d$ in the parameter landscape, and we “mutate” it, generating new points ( ) as $x k n e w = x o l d + Δ x k , where k = 1 , ⋯ , λ ,$ $Δ x k$ is drawn from a multivariate normal distribution $N ( 0 , σ 2 𝟙 )$ . The new points are then evaluated with respect to the chosen fitness function and sorted. The “mutants” that provide the best result are chosen to generate a new parent as $x n e w = ∑ k w k x k n e w ,$ $∑ k w k = 1$ . The procedure is then iterated, setting $x n e w$ as the new starting point. A learning component is provided by updating, at each generation, the global step-size . As already said above, in our case, we want to reach a specific value $f o p t ∼ 0$ of a non-negative fitness function. However, it is never guaranteed that the DES procedure finds the global extremum. Author Contributions Conceptualization, V.P.; Data curation, F.S.; Investigation, F.S. and V.P.; Methodology, F.S. and V.P.; Supervision, V.P.; Software F.S.; Writing—review and editing, F.S. and V.P. All authors have read and agreed to the published version of the manuscript. The authors acknowledge financial support from the European Research Council under the Consolidator Grant COQCOoN (Grant No. 820079). The authors also thank Francesco Arzani, Nicolas Treps and Mattia Walschaers for useful discussions. Conflicts of Interest The authors declare no conflict of interest. Appendix A We will present here the result for the creation of a quantum channel out of two given nodes for the graphs shown in Figure A1 Figure A1. Graphs whose results for the creation of an EPR channel between Alice (green) and Bob (blue) are shown in this Appendix. (a) 6-mode grid cluster; (b) 6-mode fully connected cluster. For the 6-mode “grid” graph of Figure A1 a, the suitable unitary matrices $U A$ $U B$ (see Equation ( )) that we need for the creation of a quantum channel out of the nodes 1 and 4 are $Re ( U A ) = − 0.564055 O ( 10 − 16 ) 0.564055 0.250315 O ( 10 − 16 ) 0.250315 O ( 10 − 16 ) − 0.277133 O ( 10 − 16 ) , Im ( U A ) = − 0.426429 O ( 10 − 16 ) 0.426429 − 0.661319 O ( 10 − 17 ) − 0.661319 O ( 10 − 16 ) − 0.960831 O ( 10 − 16 ) Re ( U B ) = − 0.564055 O ( 10 − 17 ) 0.564055 − 0.449914 O ( 10 − 17 ) − 0.449914 O ( 10 − 18 ) − 0.993175 O ( 10 − 17 ) , Im ( U B ) = 0.426429 O ( 10 − 17 ) − 0.426429 − 0.545507 O ( 10 − 18 ) − 0.545507 O ( 10 − 18 ) − 0.116635 O ( 10 − 17 ) .$ Using these transformations, the correlations between 1 and 4 and the other nodes are at most of the order of $10 − 15$ For the “fully connected” graph of Figure A1 b, the suitable unitary matrices $U A$ $U B$ (see Equation ( )) that we need for the creation of a quantum channel out of the nodes 1 and 2 are $Re ( U A ) = 0.56149 − 0.397134 − 0.164356 − 0.56149 0.397134 0.164356 0.408248 0.408248 0.408248 , Im ( U A ) = − 0.134394 − 0.419068 0.553462 − 0.134394 − 0.419068 0.553462 − 0.408248 − 0.408248 − 0.408248 Re ( U B ) = 0.447715 0.293987 0.104392 0.436176 0.614297 − 0.472751 0.124502 0.361237 0.860632 , Im ( U B ) = − 0.706639 0.450256 − 0.0124474 0.201155 − 0.408378 0.0407497 0.232375 − 0.190305 0.152002 .$ Using these transformations, the correlations between 1 and 2 and the other nodes are at most of the order of $10 − 15$ Figure 1. Rewiring of a regular 48-node network for the construction of a “small world” network as shown in [ Figure 2. Comparison between two models of complex networks, both with an average degree of $〈 k 〉 ∼ 3.9$. The size of the dots increases with the number of links. (a) Barabási–Albert model with $m B A = 2$, with a maximum node degree of k = 22; (b) Erdős–Rényi model with $p E R = 4 / 49$, with a maximum node degree of k = 8. Figure 3. Realistic implementation of complex cluster states. (a) implementation of a cluster via a linear optics transformation acting on a series of squeezed modes; (b) list of realistic squeezing values of the input modes for the implementation of a 48-mode cluster. Figure 4. Plot of the mean squeezing value of the nullifiers of the cluster as a function of its average degree $〈 k 〉$ for the different topologies of complex graphs. In the legend, “BA” = Barabási–Albert, “ER” = Erdős–Rényi, “WS p = 0” = Watts–Strogatz with $p W S = 0$, “WS p = 0.25” = Watts–Strogatz with $p W S = 0.25$ and “WS p = 0.5” = Watts–Strogatz with $p W S = 0.5$. Figure 5. (a) a quantum network is created; (b) the resource is distributed to two spatially separated teams, Alice and Bob; (c) Alice performs a linear optics operation $U A$ on her set of nodes and Bob performs a linear optics operation $U B$ on his set of nodes to create a quantum channel out of two given nodes; (d) the quantum channel is established. Figure 6. Graphs analyzed with the aim of creating an EPR channel between Alice (green) and Bob (blue) (or eventually between nodes of the same team). Table 1. and standard deviation of the values $μ j$ of Equation ( ) evaluated on $N = 100$ Barabási–Albert (a) and Erdős–Rényi (b) graphs with different characterizing parameters and consequently different average degrees $〈 k 〉$ , optimized using the function $f ( Δ 2 δ i ) = ∑ i Δ 2 δ ¯ i$ . Without the optimization protocol, takes the value of −3.48 dB for the Erdős–Rényi model, independently of the value of $p E R$ , and it oscillates between −3.48 dB and −3.72 dB for the Barabási–Albert model. (a) Barabási–Albert (b) Erdős–Rényi $m B A$ $μ$ (dB) $[ μ ± σ ]$ (dB) $〈 k 〉$ $p E R$ $μ$ (dB) $[ μ ± σ ]$ (dB) $〈 k j 〉 ¯$ 1 −4.70 [−4.73,−4.67] 1.96 0.2 −5.50 [−5.54,−5.46] 9.35 5 −5.55 [−5.58, −5.53] 9.38 0.4 −5.80 [−5.83, −5.76] 18.83 10 −5.82 [−5.84, −5.80] 17.71 0.6 −6.02 [−6.04, −6.00] 28.29 20 −6.15 [−6,16, −6.14] 31.25 0.8 −6.22 [−6.23, −6.21] 37.58 47 −6.33 [−6.33, −6.33] 47 1 −6.33 [−6.33, −6.33] 47 Table 2. and standard deviation of the values $μ j$ of Equation ( ) evaluated on $N = 100$ Watts–Strogatz graphs with different parameter $p W S$ and different $〈 k 〉$ , optimized using the function $f ( Δ 2 δ i ) = ∑ i Δ 2 δ ¯ i$ . Without the optimization protocol, takes the value of $− 3.48$ dB, independently of the value of $p W S$ $〈 k 〉$ (a) $〈 k 〉 = 4$ (b) $〈 k 〉 = 8$ $p W S$ $μ$ (dB) $[ μ ± σ ]$ (dB) $p W S$ $μ$ (dB) $[ μ ± σ ]$ (dB) 0 −5.19 $[ − 5.19 , − 5.19 ]$ 0 −5.79 $[ − 5.79 , − 5.79 ]$ 0.1 −5.16 $[ − 5.17 , − 5.14 ]$ 0.1 −5.69 $[ − 5.71 , − 5.66 ]$ 0.4 −5.10 $[ − 5.12 , − 5.07 ]$ 0.4 −5.49 $[ − 5.51 , − 5.46 ]$ 0.7 −5.09 $[ − 5.11 , − 5.07 ]$ 0.7 −5.43 $[ − 5.46 , − 5.40 ]$ 1 −5.09 $[ − 5.12 , − 5.06 ]$ 1 −5.43 $[ − 5.46 , − 5.41 ]$ Table 3. $μ 12$ $μ 13$ of the nullifiers of the nodes 12 and 13 and of the value $μ j$ of Equation ( ) evaluated on $N = 100$ Barabási–Albert graphs with different parameter $m B A$ , optimized using the function $f ( Δ 2 δ ^ i ) = ∑ i A i Δ 2 δ i ¯$ , where $A i = 10 5$ $i = 12 , 13$ $A i = 1$ $m B A$ $μ 12$ (dB) $μ 13$ (dB) $μ$ (dB) 1 −6.51 −6.51 −4.61 5 −6.51 −6.51 −5.48 10 −6.51 −6.51 −5.76 20 −6.51 −6.51 −6.10 47 −6.51 −6.51 −6.32 Table 4. Results on the possibility to create a quantum communication channel for the graphs of Figure 6 , between nodes belonging to different teams A and B and between nodes belonging to the same team. Graph Between A and B Same Team 6-node grid Yes No 8-node grid No No 10-node grid Yes No “X” Yes No “Y” Yes No Fully-connected No Yes “Z” No No Dual-rail No No © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/ Share and Cite MDPI and ACS Style Sansavini, F.; Parigi, V. Continuous Variables Graph States Shaped as Complex Networks: Optimization and Manipulation. Entropy 2020, 22, 26. https://doi.org/10.3390/e22010026 AMA Style Sansavini F, Parigi V. Continuous Variables Graph States Shaped as Complex Networks: Optimization and Manipulation. Entropy. 2020; 22(1):26. https://doi.org/10.3390/e22010026 Chicago/Turabian Style Sansavini, Francesca, and Valentina Parigi. 2020. "Continuous Variables Graph States Shaped as Complex Networks: Optimization and Manipulation" Entropy 22, no. 1: 26. https://doi.org/10.3390/ Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/1099-4300/22/1/26","timestamp":"2024-11-13T22:35:24Z","content_type":"text/html","content_length":"517353","record_id":"<urn:uuid:c0e28024-9149-4e2d-b2c3-728b0a5b322c>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00682.warc.gz"}
What is the length of integer in DB2? Numeric Limits Item Limit Smallest INTEGER value -2147483648 Largest INTEGER value 2147483647 Smallest BIGINT value -9223372036854775808 Largest BIGINT value 9223372036854775807 How many bytes is integer in DB2? Numeric Data types i) Integer Data types – Integer occupies 4 bytes in memory and it has 31 precession and the range for this integer is -21474836482+2147493647. It occupies 4 bytes. For the bigger number, we can define INTEGER. ii) SMALLINT Data types – It is a binary data type integer and the range is -32682+32767. How many values can an integer data type take on? The INTEGER data type stores whole numbers that range from -2,147,483,647 to 2,147,483,647 for 9 or 10 digits of precision. The number 2,147,483,648 is a reserved value and cannot be used. What are the valid DB2 numeric data types? LIBNAME Statement Data Conversions DB2 Data Type SAS Data Type Default SAS Format INTEGER numeric 11. SMALLINT numeric 6. BIGINT numeric 20. DECIMAL numeric w.d Is Bigint and long the same? The equivalent of Java long in the context of MySQL variables is BigInt. In Java, the long datatype takes 8 bytes while BigInt also takes the same number of bytes. What is the range of small integer? –32,767 to 32,767 The SMALLINT data type stores small whole numbers that range from –32,767 to 32,767. The maximum negative number, –32,768, is a reserved value and cannot be used. The SMALLINT value is stored as a signed binary integer. What is the maximum length of Sqlca? 136 is the maximum length of the SQLCA. What is data type in Db2? Every column in every Db2 table has a data type. The data type influences the range of values that the column can have and the set of operators and functions that apply to it. You specify the data type of each column at the time that you create the table. You can also change the data type of a table column. What is the largest 64 bit number? In computing. The number 9,223,372,036,854,775,807, equivalent to the hexadecimal value 7FFF,FFFF,FFFF,FFFF 16, is the maximum value for a 64-bit signed integer in computing. It is therefore the maximum value for a variable declared as a long integer (long, long long int, or bigint) in many programming languages running on modern computers. What is maximum number that can be represented by integer? The number 2,147,483,647 (or hexadecimal 7FFFFFFF 16) is the maximum positive value for a 32-bit signed binary integer in computing. It is therefore the maximum value for variables declared as integers (e.g., as int) in many programming languages, and the maximum possible score, money, etc. for many video games. What is int limit in SQL Server? MAXINT or INT_MAX is the highest number that can be represented by a given integer data type. In SQL Server this number for the INT data type is 2,147,483,647. What are the types of numerical data? The exact numeric data types are SMALLINT, INTEGER, BIGINT, NUMERIC (p,s), and DECIMAL (p,s). Exact types mean that the values are stored as a literal representation of the number’s value. The approximate numeric data types are FLOAT (p), REAL, and DOUBLE PRECISION.
{"url":"https://hollows.info/what-is-the-length-of-integer-in-db2/","timestamp":"2024-11-03T15:43:48Z","content_type":"text/html","content_length":"42958","record_id":"<urn:uuid:9103bee3-9ecd-43ae-8abb-d84c48210cd5>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00492.warc.gz"}
Introduction to Proofs and the Mathematical An Introduction to Proofs and the Mathematical Vernacular The typical university calculus sequence, which serves majors in the physical sciences and engineering as well as mathematics, emphasizes calculational technique. In upper level mathematics courses, however, students are expected to operate at a more conceptual level, in particular to produce "proofs" of mathematical statements. To help students make the transition to more advanced mathematics courses, many university mathematics programs include a "bridge course". Many texts have been written for such a course. I have taught from a couple of them, and have looked at numerous others. These various texts represent different ideas for what a bridge course should emphasize. Not having found a text that was a good fit with my own ideas, I decided to try to write one of my own. I am making the book freely available; a link is provided at the bottom of this page. But first I want to explain the ideas which I have tried to embody in the book. My Philosophy for the Bridge Course. The students taking this course have (I assume) completed a standard technical calculus sequence. They will have seen some proofs, but may have dismissed them as irrelevant to what they needed to know for homework or exams. We now want them to start thinking in terms of properties of mathematical objects and logical deduction, and to get them used to writing in the customary language of mathematics. I don't think we accomplish that with the how-to approach to writing proofs that some texts take. That encourages students to think of a mathematical proof as some sort of meaningless ritual that they must learn to do simply because we require them to. Rather we want them to begin to think like mathematicians, and to become conversant with the language of written mathematics. One of my disappointments with existing textbooks is that they often begin with too much formalism about propositional logic. My experience is that whatever students learn from that formalism is left by the wayside as soon as they move into a mathematical context of any substance. My premise is that one learns precise logical language in the context of a real mathematical discussion, not from a "content-free" formal summary of logical grammar. So rather than starting with logical form in the absence of substance, I start with some substance (Chapter 1). Specifically the first chapter simply jumps in with some proofs. Then, with those proofs as examples, we can discuss how they are structured logically and talk about the language with which they are written (Chapter 2). Another concern I have with some texts is their deconstructive approach. Students are implicitly told to forget what they know, because we want to start from scratch with an axiomatic approach. For instance if we develop the integers or real numbers from their axioms, we have to ask the students to suspend what they already know about these basic number systems so that we can develop them anew from the axioms. Instead of building our students' knowledge we seem to be dismantling it and sending them backwards to more primitive topics. I want to downplay that and instead develop topics that are not so obvious to the students, so that when we prove something we are moving forward rather than backward. I do think it is important for students to understand what a set of axioms is, and what an axiomatic development is like. So I have presented a set of axioms for the integers and proven a few elementary properties from them, so students can see the mental discipline required to set aside all our presumptions and work from the axioms alone. But I think it is enough to have made that point. So I go on to focus on the Well Ordering Principle as the property distinguishes the integers from other familiar number systems. The students in this course have finished two years of calculus and related material. Many bridge course texts do not touch on that material at all. It is my desire to incorporate at least some problems and examples that employ ideas and techniques from differential calculus, in addition to the usual topics such as the Euclidean algorithm and modular arithmetic. Analysis is very rich in content, which makes for many opportunities for creativity in developing arguments. But students are not very adept at using the ideas of calculus yet, and probably will go on to an advanced calculus course after this one, so I keep use of analysis relatively simple. However I do think it is important that a text training students to develop and appreciate mathematical arguments not create the impression that careful proof is only important in elementary number theory or algebra. They should see that it pervades all mathematics, analysis included. Another goal is to train students to read more involved proofs such as they may encounter in advanced books and journal articles. This involves being able to fill in details that a proof leaves to the reader. Even more important is being able to look past the details to see the fundamental idea behind a proof. To this end Chapter 5 is built around some results about polynomials (Descartes' Rule of Signs and the Fundamental Theorem of Algebra) whose proofs are accessible to students at this level, but are more substantial than anything they have encountered previously. Chapter 6 gives a treatment of determinants. The proofs of that chapter are mostly based on careful manipulations using the explicit formula for det(A), and provide an opportunity to help the students learn to scrutinize a detailed formula-based argument. The final section develops a proof of the Cayley-Hamilton theorem. This illustrates how a rigorous proof can emerge from a careful examination of a cute but questionable manipulation using the adjoint matrix. Many people want their bridge course to involve ideas from linear algebra, and this chapter provides some of that. I expect many of my colleagues to react with, "this is too hard for the typical student." My philosophy is that the instructor has a rather different role to play than the written text. He/she does not merely recite the material (and grade papers), but serves as a sort of intellectual trainer, prodding students toward more sophisticated points of view and encouraging them in the face of new challenges. In this course especially, I view the instructor's role as helping students learn to read and work from a text written in a style typical of what they will encounter in their upper level courses. So I have tried to write a book that is not a comfortable accommodation of where the students are when they start the course, but an example of the kind of exposition they will need to work from in the future, with their instructor serving as a coach for this their first encounter. I have deliberately made written solutions to the problems available in any form that can be easily posted on the web. The availability of solutions online seems to be a temptation that few undergraduates can resist. The result is to short-circuit the value of a text as a teaching tool. I go over solutions on the board in class or in my office as needed, but I do not make solutions available in any electronic format. If you are teaching from the book, I implore you to respect this restriction and not make solutions available in any electronic format. If you are using the text for self-study I understand that prepared solutions would be valuable to you, but I will not provide them. My advice is to seek out someone with experience in writing proofs and ask for their advice or help. The Current Version The current version is dated December 7, 2016. It includes many typographical corrections and revisions suggested by students as well as those who have found the book online. Thanks to all who have made suggestions. If you find more I'll be grateful if you point them out to me. Downloading the Book You are welcome to download the current version of the book (pdf file), use it, and redistribute it for noncommercial purposes (such as provide it to students, either electronically or by having your local copy shop print it up for them). If you do use the book to teach a course, I would enjoy to hear from you about how it worked out and any comments or suggestions you have. Please note that under the no-derivatives restriction you may not distribute problem solutions in any electronic form. For details of what the copyright allows, see the link in the copyright statement below. -- M.
{"url":"https://personal.math.vt.edu/day/ProofsBook/","timestamp":"2024-11-10T03:08:33Z","content_type":"text/html","content_length":"12163","record_id":"<urn:uuid:32b9058a-fb08-4df7-9f02-e1ae10ef2ef8>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00871.warc.gz"}
LRVSCREE procedure • Genstat v21 Prints a scree diagram and/or a difference table of latent roots (P.G.N. Digby). PRINT = string tokens Printed output (scree, differences); default scre PLOT = string token What to plot in high-resolution graphics (scree); default scre TITLE = text Title for the graph; default * i.e. none WINDOW = scalar Window to use for the graph; default 1 ROOTS = LRVs or any numerical structures Latent roots to be displayed; if an LRV is supplied the trace will also be extracted from it TRACE = scalars Supplies or saves the total of the latent roots DIFFERENCES = pointers Contains 3 variates to save the difference table Procedure LRVSCREE displays a set of latent roots in a convenient form. The input to the procedure is a set of latent roots (ROOTS), either as an LRV or any structure with numerical values. Optionally a scalar (TRACE) can be specified, either to supply or to save the total of the latent roots. Printed output is controlled by the PRINT option. The setting scree produces a scree diagram, annotated with the latent roots on their original scale and expressed both as per-thousandths of the total and as cumulated per-thousandths. The setting differences prints these quantities as a table, together with the first three differences among the per-thousandth values; i.e. the first difference column gives the differences from each per-thousandth to the next, the second difference column gives differences among the first-difference values, and so on. Large first-difference values indicate latent roots ocurring prior to large declines in the scree diagram. Large second and third differences mark the locations of series of two or more latent roots of similar magnitude, which can be thought of as plateaus on the scree diagram. Large positive, or negative, second differences indicate the first, or last, latent root of a plateau. Large negative third differences occur at the last latent root of one plateau that is followed by another plateau. See the example for illustration. By default the scree diagram is also plotted in high-resolution graphics but this can be suppressed by setting option PLOT=*. The TITLE option can supply a title for the plot, and the WINDOW option specifies which window is used (by default window 1). The DIFFERENCES parameter allows a pointer to be specified to contain three variates storing the columns of the difference table. Options: PRINT, PLOT, TITLE, WINDOW. Parameters: ROOTS, TRACE, DIFFERENCES. Not relevant: LRVSCREE deals primarily with diagonal matrices or LRVs. If the latent roots are supplied in a variate, any restriction on the variate will be ignored. See also Directives: CVA, PCP, PCO. Procedure: QEIGENANALYSIS. Commands for: Multivariate and cluster analysis, Graphics. CAPTION 'LRVSCREE example',\ 'Data from Section 3.5.2 of Digby & Kempton (1987).';\ DIAGONALMATRIX [ROWS=28; VALUES=23.4,16.5,15.6,11.3,10.3, 9.2, 8.4, 6.7,\ 5.7, 4.5, 4.3, 4.0, 2.9, 2.4, 2.0, 1.6,\ 1.1, 1.0, 0.6, 0.3, 0.2, 0.0,-1.0,-1.8,\ -2.1,-2.7,-2.9,-3.7] Eigenval PRINT Eigenval CAPTION 'Use LRVSCREE, saving TRACE' LRVSCREE Eigenval; TRACE=TotEigen "Construct LRV with values from Eigenval and TotEigen" LRV [ROWS=28] L EQUATE Eigenval,TotEigen; L[2,3] LRVSCREE [PRINT=scree,differences] L CAPTION !T('The largest first differences, 59 and 37, correspond',\ 'to the two places where the lines of the scree diagram',\ 'differ by more than a single asterisk.',\ 'The second and third roots are fairly similar in value -',\ 'this ''plateau'' is indicated by the values 51 and -29',\ 'in the column of second differences.',\ 'The 4th - 7th latent roots may be considered to form',\ 'another plateau, marked by the second differences 28',\ 'and -8; the third difference of -57 marks the end of',\ 'the first plateau and the start of the next.')
{"url":"https://genstat21.kb.vsni.co.uk/knowledge-base/lrvscree/","timestamp":"2024-11-02T17:39:36Z","content_type":"text/html","content_length":"41511","record_id":"<urn:uuid:ad8eb0d4-4607-4735-aef3-dc6120c8be82>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00767.warc.gz"}
BFS Tree To help illustrate the challenge given in this lecture I’ve gone ahead and organised the traversal into a tree diagram. This is a great way to visualised what’s actually going on and should should be particularly helpful for anyone who is struggling to see how the trace back works when you finally reach the goal. So, the challenge was to manually work out the shortest path for the following grid, using the directional priority; up, right, down, left: I won’t go into the specifics of how to actually traverse the grid, since Ben already does a fine job at explaining it in the video. Now, by following the instructions of our algorithm, you’re building a tree that looks something like this: Notice that every time a visited node has somewhere to go, those nodes are added as branches on the tree. However, each node can only be added to the tree once. This gives us a queue in the order A-B-G-C-H-E-D-L-F-M-I-K-Z. So to find the path back to the start, all you do is work your way back up the tree. Therefore, the shortest path (according to our algorithm) is A-B-C-E-F-I-Z. But what about the other paths we could have taken? What happens if we change the directional priority to favour moving down before moving right? Well, that’s easy! Here’s the tree for the same grid but searched with the down-right directional priority. This new tree gives us a queue in the order A-G-B-H-C-L-D-E-M-F-K-I-Z. So now the path shortest path is (A-G-H-L-M-K-Z) As you can see this, corresponds to moving straight down on the grid and then moving right. I hope this helps with your understanding of this breadth first search algorithm.
{"url":"https://community.gamedev.tv/t/bfs-tree/64195","timestamp":"2024-11-01T23:35:21Z","content_type":"text/html","content_length":"16644","record_id":"<urn:uuid:d0402fc1-54ac-43f4-8187-f9525d123665>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00376.warc.gz"}
1/3 Times Two As A Fraction What is 1/3 times 2 as a fraction? Let's break down how to multiply a fraction by a whole number. Understanding the problem: We're asked to find the product of 1/3 and 2. This means we're essentially multiplying one-third by two. 1. Represent the whole number as a fraction: The whole number 2 can be written as a fraction: 2/1. 2. Multiply the numerators: 1 x 2 = 2 3. Multiply the denominators: 3 x 1 = 3 4. Simplify the resulting fraction: The result is 2/3. Therefore, 1/3 times 2 is equal to 2/3. Key takeaways: • Multiplying a fraction by a whole number is the same as multiplying the fraction by the whole number written as a fraction (with a denominator of 1). • When multiplying fractions, you multiply the numerators and the denominators. • Always simplify the resulting fraction if possible.
{"url":"https://jasonbradley.me/page/1%252F3-times-two-as-a-fraction","timestamp":"2024-11-08T11:13:09Z","content_type":"text/html","content_length":"56958","record_id":"<urn:uuid:836054a8-eea8-4fe4-8ee5-7303e445d493>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00863.warc.gz"}
Debiased ML via NN for GLM Debiased ML via NN for GLM This is the note for Chernozhukov, V., Newey, W. K., Quintas-Martinez, V., & Syrgkanis, V. (2021). Automatic Debiased Machine Learning via Neural Nets for Generalized Linear Regression. ArXiv:2104.14737 [Econ, Math, Stat]. give debiased machine learners of parameters of interest that depend on generalized linear regressions. machine learners provide remarkably good predictions in a variety of settings but are inherently biased. The bias arises from using regularization and/or model selection to control the variance of the prediction. Confidence intervals based on estimators with approximately balanced variance and squared bias will tend to have poor coverage. Consider iid observations $W_1,\ldots, W_n$ with $W_i$ having CDF $F_0$. Take a function to depend on a vector of regressors $X$, impose the restriction that $\gamma$ is in a set of functions $\Gamma$ that is linear and closed in mean square, specify that the estimator $\gamma$ is an element of $\Gamma$ with probability one and has a probability limit $\gamma(F)$ when $F$ is the distribution of a single observation $W_i$. Suppose that $\gamma(F)$ satisfies an orthogonality condition where a residual $\rho(W,\gamma)$ with finite second moment is orthogonal in the population to all $b\in \Gamma$. \[E_F[b(X)\rho(W,\gamma(F))] = 0\] for all $b\in \Gamma$ and $\gamma(F)\in\Gamma$. • $\rho(W, \gamma)=Y-\gamma(X)$: orthogonality condition is necessary and sufficient for $\gamma(F)$ to be the least squares projection of $Y$ on $\Gamma$. • quantile conditions $\rho(W,\gamma)=p-1(Y<\gamma(X))$ • first order conditions for generalized linear models, $\rho(W,\gamma)=\lambda(\gamma(X))[Y-\mu(\gamma(X))]$
{"url":"https://stats.hohoweiya.xyz/2021/11/16/debiased-glm/","timestamp":"2024-11-13T01:32:03Z","content_type":"text/html","content_length":"8337","record_id":"<urn:uuid:6bb444a7-59e2-4f7a-89ab-3176eb391f23>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00712.warc.gz"}
There are many applications of nonlinear dynamics in psychology, biomedical sciences, sociology, political science, organizational behavior and management, macro- and micro-economics. We can only provide an overview here and direct our readers to more resources using the links on the panel to the left. So let's start with the big picture - the paradigm. Nonlinear theory introduces new concepts to psychology for understanding change, new questions that can be asked, and offers new explanations for phenomena. It would be correct to call chaos and complexity theory in psychology a new paradigm in scientific thought generally, and psychological thought specifically. A special issue of Nonlinear Dynamics, Psychology, and Life Sciences in January, 2007 was devoted to the paradigm question, which actually spans across the various disciplines we study. The highlights of the paradigm are: 1. Events that are apparently random can actually be produced by simple deterministic functions; the challenge is to find the functions. 2. The analysis of variability is at least as important as the analysis of means, which pervades the linear paradigm. 3. There are many types of change that systems can produce, not just one; hence we have all the different modeling concepts that have been described thus far. 4. Contrary to common belief, many types of systems are not simply resting in equilibrium unless perturbed by a force outside the system; rather, stabilities, instabilities, and other change dynamics are produced by the system as it behaves "normally." 5. Many problems that we would like to solve cannot be traced to single underlying causes; rather, they are product of complex system behaviors. 6. Because of the above, we can ask many new types of research questions and need to develop appropriate research methods for answering those questions. Such efforts are well underway (see further along on this Resources page). Nonlinear science is an interdisciplinary adventure. Its growth has been facilitated by the interactions among scientific disciplines, as they are traditionally defined. Scientists soon discover that there are common principles the underlie phenomena that are seemingly unrelated. Consider some quick and blatant examples: 1. The phase shifts that are associated with water turning to ice or vapor follow the same dynamical principles as the transformations made by clinical psychology patients from the time of starting therapy to the time when the benefits of therapy are realized in their lives. 2. The changes in work performance (or error rates) as a person's mental workload becomes too great follows the same dynamics as the buckling of a beam, the materials for which could range from elastic and flexible to rigid and stiff. 3. The growth of a discussion group on the internet parallels that of a population of organisms, which is limited by its birth rate and environmental carrying capacity. 4. The transformation of a work team from a leaderless group into one with primary and secondary leadership roles as its task unfolds bears a close resemblance to the process of speciation in Kauffman's NK[C] model as an organism finds new ecological niches in a rugged landscape. (The former is a less complex version of the latter, however.)
{"url":"https://societyforchaostheory.org/resources/","timestamp":"2024-11-06T07:46:37Z","content_type":"text/html","content_length":"168281","record_id":"<urn:uuid:7357666c-d988-4b9e-a2d6-ce38bcb9834a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00219.warc.gz"}
A quadratic function is given. f(x) = −x2 − 3x + 3 Mathematics Assignment Help - A quadratic function is given. f(x) = −x2 − 3x + 3 Mathematics Assignment Help. A quadratic function is given. f(x) = −x2 − 3x + 3 Mathematics Assignment Help. (/0x4*br /> A quadratic function is given. (a) Express the quadratic function in standard form. (c) Find its maximum or minimum value. f(x) = A quadratic function is given. f(x) = −x2 − 3x + 3 Mathematics Assignment Help[supanova_question] The effectiveness of a television commercial depends on how many times a viewer Mathematics Assignment Help The effectiveness of a television commercial depends on how many times a viewer watches it. After some experiments an advertising agency found that if the effectiveness E is measured on a scale of 0 to 10, then where n is the number of times a viewer watches a given commercial. For a commercial to have maximum effectiveness, how many times should a viewer watch it? A soft-drink vendor at a popular beach analyzes his sales records and finds that Mathematics Assignment Help A soft-drink vendor at a popular beach analyzes his sales records and finds that if he sells x cans of soda pop in one day, his profit (in dollars) is given by P(x) = −0.001x^2 + 3x − 1800. What is his maximum profit per day? How many cans must he sell for maximum profit? A manufacturer finds that the revenue generated by selling x units of a certain Mathematics Assignment Help A manufacturer finds that the revenue generated by selling x units of a certain commodity is given by the function R(x) = 80x − 0.2x^2, where the revenue R(x) is measured in dollars. What is the maximum revenue, and how many units should be manufactured to obtain this maximum? $ , at units If a ball is thrown directly upward with a velocity of 32 ft/s, its height (in f Mathematics Assignment Help If a ball is thrown directly upward with a velocity of 32 ft/s, its height (in feet) after t seconds is given by y = 32t − 16t^2. What is the maximum height attained by the ball? (Round your answer to the nearest whole number.) Carol has 2,000 ft of fencing to fence in a rectangular horse corral. x 1000 – Mathematics Assignment Help Carol has 2,000 ft of fencing to fence in a rectangular horse corral. (a) Find a function that models the area of the corral in terms of the width of the corral. (b) Find the dimensions of the rectangle that maximize the area of the corral. Carol has 2,000 ft of fencing to fence in a rectangular horse corral. x 1000 – Mathematics Assignment Help[supanova_question] A quadratic function is given. please help Mathematics Assignment Help A quadratic function is given. (a) Express the quadratic function in standard form. (c) Find its maximum or minimum value. f(x) = Homework Question Help (Finance) Business Finance Assignment Help A bond with an annual coupon of $100 originally sold at par for $1000. The current yield on the maturity on this bond is 9%. Assuming no change in risk, this bond would sell at a ______ in order to compensate A rain gutter is formed by bending up the sides of a 44-inch-wide rectangular me Mathematics Assignment Help A rain gutter is formed by bending up the sides of a 44-inch-wide rectangular metal sheet as shown in the figure. (a) Find a function that models the cross-sectional area of the gutter in terms of (b) Find the value of x that maximizes the cross-sectional area of the gutter. x = in (c) What is the maximum cross-sectional area for the gutter? When a certain drug is taken orally, the concentration of the drug in the patien Mathematics Assignment Help When a certain drug is taken orally, the concentration of the drug in the patient’s bloodstream after t minutes is given by C(t) = 0.06t − 0.0002t^2, where 0 ≤ t ≤ 240 and the concentration is measured in mg/L. When is the maximum serum concentration reached? t = min What is the maximum concentration? A quadratic function is given. f(x) = −x2 − 3x + 3 Mathematics Assignment Help A quadratic function is given. f(x) = −x2 − 3x + 3 Mathematics Assignment Help
{"url":"https://anyessayhelp.com/a-quadratic-function-is-given-fx-%E2%88%92x2-%E2%88%92-3x-3-mathematics-assignment-help/","timestamp":"2024-11-12T20:17:28Z","content_type":"text/html","content_length":"151568","record_id":"<urn:uuid:a63fe185-8212-4dd1-ab5c-93a605517b02>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00591.warc.gz"}
Effective Mathematics Teaching Practices You are here Effective Mathematics Teaching Practices Instructional Support Menu Mathematics Teaching Practices In mathematics, a framework for Best, First Instruction combines the content and skills described by the Colorado Academic Standards for Mathematics, the Colorado Essential Skills, the Standards for Mathematical Practice, and NCTM's essential mathematics teaching practices, described below. Effective Mathematics Teaching Practices Additional Principles to Actions Resources NCTM has developed a number of resources to support Principles to Actions. NCTM membership is required to access some of the materials, but if you have questions you can contact Raymond Johnson for more information.
{"url":"http://www.cde.state.co.us/comath/effectivemathteachingpractices","timestamp":"2024-11-05T07:46:23Z","content_type":"text/html","content_length":"30185","record_id":"<urn:uuid:d9d2e7db-03ab-4da7-91d9-a62241b73308>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00133.warc.gz"}
function with LINQ Today I work on a problem to find if a line is intersected with a circle. so f1 is y = ax + b and f2 is x^2 + y^2 - r^2 = 0 It is true you can use for-loop to solve this problem, but since I hate for-loop, i will use LINQ. the input to f1 is { x }, if the line is intersected with a circle means there is (a,b) where f2(a,b) <=0. OK, the data input is {x}, and it is transformed by f1 into some { (x,y) } tuple sequence. If there is a tuple in the sequence makes f2 <=0, we can draw the conclusion that the line is intersected with a circle. The pseudo-code is like: { x } |> seq.map f1 |> seq.exists f2(x,y) <=0 Let me give detailed code tomorrow
{"url":"http://apollo13cn.blogspot.com/2011/04/function-with-linq.html","timestamp":"2024-11-03T00:14:05Z","content_type":"text/html","content_length":"64982","record_id":"<urn:uuid:6297c0eb-5a97-4805-8f52-7e1729d357df>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00406.warc.gz"}
SAS: Time Series Forecasting - ARIMA In this tutorial, we will cover how to perform ARIMA with SAS, along with an explanation of how it works. Hope you have gone through the Part-1 of this series Table of Contents Data Preparation Steps For ARIMA Modeling 1. Check if there is variance that changes with time - Volatility. For ARIMA, the volatility should not be very high. 2. If the volatility is very high, we need to make it non-volatile. 3. Check for Stationary - a series should be stationary before performing ARIMA. 4. If data is non-stationary, we need to make it stationary. 5. Check for Seasonality in the data Data File Location Library - SASHELP Data set - AIR Step 1 : Check the time series As a matter of practice, we first plot the time series and have a cursory look upon it. It can be done directly in SAS using following code : proc sgplot data = sashelp.AIR; series x = date Y = AIR; It would give you the following plot in the result window : SAS : Time Series Modeling It is clear from the chart above that the series of AIR is having an increasing trend and consistent pattern over time. The peaks are at a constant time interval which is indicative of presence of seasonality in the series. This is a non-stationary series for sure and hence we need to make it stationary first. Practically, ARIMA works well in case of such types of series with a clear trend and seasonality. We first separate and capture the trend and seasonality component off the time-series and we are left with a series i.e. stationary. This stationary series is forecasted using ARIMA and then final forecasting incorporates the pre-captured trend and seasonality. We would understand it in details further in Step 2 : Check the volatility of the series Volatility is the degree of variation of a time-series over time. For ARIMA, the volatility should not be very high. For checking the volatility of time-series, we do a scatter plot using the following SAS code : Proc gplot data=SAShelp.AIR; plot Date * AIR; It would give you the following plot in the result window : Check the volatility of Series The highlighted area is showing the diverging pattern (Fan shaped) of the scatter plot and hence depicting that the data is volatile. Ideally, the highlighted pattern should be parallel for ARIMA Step 3 : Treatment of Volatile Series We need to make the series non-volatile and move ahead. We would transform the AIR series and remove volatility. Generally a hit and trail method for transformation is used, but we would suggest to not to waste your time. Box-Cox Transformation can be used to help you out and recommend the suitable transformation. Proc Transreg Data = sashelp.AIR; Model BOXCOX (AIR) = Identity(Date); You get following plot along with Lamba value, which is "0" in this case. Now based on this Lambda value, you can decide the transformation. Take help from the table provided below. In our case, it is suggesting a log transformation, so we do the same. In a new data (Masterdata) we create a new variable (Log_AIR). Data Masterdata; Set SAShelp.AIR; Log_AIR = log(AIR); We can check the volatility again of the transformed series, just to be sure, using scatter plot as elaborated above. Step 4 : Check For Non-Stationarity Now on the transformed series, we check whether the series is stationary or non-stationary. For performing ARIMA , a series should be stationary, however if the series is non-stationary, we make it stationary (For more explanation on stationarity, read Part 1 of this series). Rather than identifying the series's stationarity visually as we have done in step 1, we now use Augmented Dickey-Fuller Unit Ratio Test for the same. Unit Root - Homogeneous Non-Stationarity Data Dickey-Fuller test The Dickey-Fuller test is used to test the null hypothesis that the time series exhibits a lag d unit root against the alternative of stationarity. Null Hypothesis : Non-Stationary Alternative Hypothesis : Stationary There are three types by which you can calculate test statistics of dickey-fuller test. 1. Zero Mean - No Intercept. Series is a random walk without drift. 2. Single Mean - Includes Intercept. Series is a random walk with drift. 3. Trend - Includes Intercept and Trend. Series is a random walk with linear trend. All the above test statistics are computed from the OLS regression model. Drawback of ADF Test Uncertainty about what test version to use, i.e. about including the intercept and time trend terms. Inappropriate exclusion or inclusion of these terms substantially affects test reliability. Using of prior knowledge (for instance, as result of visual inspection of a given time series) about whether the intercept and time trend should be included is the mostly recommended way to overcome the difficulty mentioned. We run Proc ARIMA Stationarity = (ADF) option to do so : PROC ARIMA DATA= Masterdata ; IDENTIFY VAR = log_Air STATIONARITY= (ADF) ; There are many outputs of the above code, a part of which is used for checking stationarity: Important Note : Check Tau Statistics (Pr < Tau) in ADF Unit Root Tests table. It should be less than 0.05 to say data is stationary at 5% level of significance. Step 5 : Make Non-Stationary Data Stationary Post establishing the non-stationarity of the series, we need to make the series stationary. Differencing process is used for making the series stationary. Differencing : Transformation of the series to a new time series where the values are the differences between consecutive values Differencing Procedure may be applied consecutively more than once, giving rise to the "first differences" "second differences" , etc. Differencing Orders : 1st order : ∇xt = xt - xt-1 2nd order : ∇2xt = (∇xt - ∇xt-1) = xt - 2xt-1 + xt-2 It is unlikely that more than two differencing orders would ever be required. Note : If there is a physical explanation for a trend or seasonal cycle : use to make series stationary. For that we use the output of the Step-3 code itself. While we have run the code above, we have got "Autocorrelation Check for White Noise" along with " Augmented Dickey-Fuller Unit Root Tests". Looking at "Autocorrelation Check for White Noise", we decide the order(s) of differencing required. Stationary : Order of Differencing A heat map has been made using Excel for demonstration, SAS output is black and white only. The first row of the above autocorrelation matrix shows correlation of time-series with 1st to 6th lags, second row show the same for 7th to 12th lags...and so on ... The same is visible in ACF chart provided in Step-3 visuals. We can see that in above matrix the highest auto-correlation exists with 1st lag, it starts decreasing but again increases to attain a local peak at 12th lag. Step 6 : Check Seasonality Highest Correlation with 1st Lag indicates towards the presence of trend and that with 12th lag indicates an annual seasonality. Hence we need to do differencing at first and Twelfths orders. We perform differencing and check the stationarity again. PROC ARIMA DATA= masterdata ; IDENTIFY VAR = Log_Air (1,12) STATIONARITY= (ADF) ; We have used 1 and 12 in bracket to define the 1st and 12th order of differencing. Check whether data is stationary Check Tau Statistics (Pr < Tau) in ADF Unit Root Tests table again and see if the value <0.05 to say data is stationary at 5% level of significance. How this differencing actually worked : 1. First order (1) Differencing removes the trend, but Seasonality still exists. 2. Second Order (12) Differencing removes the seasonality. How to do it with MS Excel: First subtract first lag from each observation and plot it. Then in this new series subtract 12th lag from each observation. Step 7 : Split Data into Training and Validation Now we can break the data into Training and Validation samples.We cannot use random sampling like we do in regression models to split the data. Instead, we can use recent data for validation and remaining data be used to train the model. We would develop ARIMA model and forecast on Testing part and would check the results on Validation part. Data Training Validation; Set Masterdata; If date >= '01Jan1960'd then output Validation; Else output Training; Next Step - Follow Part 3 of this series to learn how to train ARIMA model on a training dataset using SAS. This article was originally written by Rajat Agarwal, later Deepanshu gave final touch to the post. Rajat is an analytics professional with more than 8 years of work experience in diverse business domains. He has gained expert knowledge in Excel and SAS. He loves to create innovative and imaginative dashboards with Excel. He is founder and lead author cum editor at Ask Analytics. Post Comment 1 Response to "SAS: Time Series Forecasting - ARIMA" 1. despite doing everything - using MINIC, my autocorrelation is still significant, what should I do Autocorrelation Check of Residuals To Chi- Pr > Lag Square DF ChiSq --------------------Autocorrelations-------------------- 6 19.46 4 0.0006 -0.045 -0.094 0.282 -0.178 0.079 0.279 12 41.37 10 <.0001 -0.260 0.024 0.311 -0.199 0.005 -0.121 18 64.20 16 <.0001 -0.275 0.212 -0.075 -0.193 0.207 -0.072 24 92.59 22 <.0001 -0.125 0.151 -0.180 -0.168 0.272 -0.250
{"url":"https://www.listendata.com/2015/09/time-series-forecasting-arima-part-2.html","timestamp":"2024-11-05T01:20:26Z","content_type":"application/xhtml+xml","content_length":"118827","record_id":"<urn:uuid:cf757475-bc4f-42bf-b09d-39940b4768cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00127.warc.gz"}
Why is one to many not a function? Asked by: Mr. Israel Goldner MD Score: 4.3/5 37 votes A function cannot be one-to-many because no element can have multiple images. The difference between one-to-one and many-to-one functions is whether there exist distinct elements that share the same Why is a one-to-many relation not a function? If it is possible to draw any vertical line (a line of constant x) which crosses the graph of the relation more than once, then the relation is not a function. If more than one intersection point exists, then the intersections correspond to multiple values of y for a single value of x (one-to-many). Why is a function one-to-many? This means that two (or more) different inputs have yielded the same output and so the function is many-to-one. If a function is not many-to-one then it is said to be one-to-one. This means that each different input to the function yields a different output. What makes a function not one-to-one? What Does It Mean if a Function Is Not One to One Function? In a function, if a horizontal line passes through the graph of the function more than once, then the function is not considered as one-to-one function. Also,if the equation of x on solving has more than one answer, then it is not a one to one function. Can a relation be one-to-one but not a function? The answer here is yes, relations which are not functions can also be described as injective or surjective. A-Level Maths: B8-04 Functions: One-to-One, Many-to-One, One-to-Many, Many-to-Many 29 related questions found How do you tell if a relation is not a function? Determining whether a relation is a function on a graph is relatively easy by using the vertical line test. If a vertical line crosses the relation on the graph only once in all locations, the relation is a function. However, if a vertical line crosses the relation more than once, the relation is not a function. How do you tell if a relation is a function? Identify the output values. If each input value leads to only one output value, classify the relationship as a function. If any input value leads to two or more outputs, do not classify the relationship as a function. What is not a function? A function is a relation in which each input has only one output. In the relation , y is a function of x, because for each input x (1, 2, 3, or 0), there is only one output y. x is not a function of y, because the input y = 3 has multiple outputs: x = 1 and x = 2. How do you know if a function is one-to-one without graphing? Use the Horizontal Line Test. If no horizontal line intersects the graph of the function f in more than one point, then the function is 1 -to- 1 . A function f has an inverse f−1 (read f inverse) if and only if the function is 1 -to- 1 . How do you prove a function? Summary and Review 1. A function f:A→B is onto if, for every element b∈B, there exists an element a∈A such that f(a)=b. 2. To show that f is an onto function, set y=f(x), and solve for x, or show that we can always express x in terms of y for any y∈B. Is many to many is a function? Any function is either one-to-one or many-to-one. A function cannot be one-to-many because no element can have multiple images. The difference between one-to-one and many-to-one functions is whether there exist distinct elements that share the same image. There are no repeated images in a one-to-one function. Is a one to many relationship a function? One-to-many relations are not functions. Example: Draw a mapping diagram for the function f(x)=2x2+3 in the set of real numbers. What is difference between relation and function? A relation is defined as a relationship between sets of values. Or, it is a subset of the Cartesian product. A function is defined as a relation in which there is only one output for each input. Are all function relations? Note that both functions and relations are defined as sets of lists. In fact, every function is a relation. However, not every relation is a function. In a function, there cannot be two lists that disagree on only the last element. What makes a relationship great? What does a good relationship need? It will vary from one person to another, but most people would probably agree that respect, companionship, mutual emotional support, sexual expression, economic security and, often, childrearing, are all important parts of an adult relationship. How do you know if a function is Injective? To show that a function is injective, we assume that there are elements a1 and a2 of A with f(a1) = f(a2) and then show that a1 = a2. Graphically speaking, if a horizontal line cuts the curve representing the function at most once then the function is injective. How do you tell if a graph is a function? Inspect the graph to see if any vertical line drawn would intersect the curve more than once. If there is any such line, the graph does not represent a function. If no vertical line can intersect the curve more than once, the graph does represent a function. How do you know if a function is invertible? In general, a function is invertible only if each input has a unique output. That is, each output is paired with exactly one input. That way, when the mapping is reversed, it will still be a Which set is not a function? Sridhar V. Set C does NOT represent a function. What's a function and not a function? A function is a relation between domain and range such that each value in the domain corresponds to only one value in the range. Relations that are not functions violate this definition. They feature at least one value in the domain that corresponds to two or more values in the range. How do you know if a function is not a function? Use the vertical line test to determine whether or not a graph represents a function. If a vertical line is moved across the graph and, at any time, touches the graph at only one point, then the graph is a function. If the vertical line touches the graph at more than one point, then the graph is not a function. Is a circle a function? If you are looking at a function that describes a set of points in Cartesian space by mapping each x-coordinate to a y-coordinate, then a circle cannot be described by a function because it fails what is known in High School as the vertical line test. A function, by definition, has a unique output for every input. Are all functions one to one? A function for which every element of the range of the function corresponds to exactly one element of the domain. One-to-one is often written 1-1. Note: y = f(x) is a function if it passes the vertical line test.
{"url":"https://moviecultists.com/why-is-one-to-many-not-a-function","timestamp":"2024-11-02T08:52:49Z","content_type":"text/html","content_length":"40970","record_id":"<urn:uuid:6e9c6fd5-2b45-4ed1-bb56-35477fcf6624>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00610.warc.gz"}
Asymptotic Notation Date: September 28 2021 Summary: An overview of asymptotic notation and time complexity Keywords: #asymptotic #notation #complexity #bigo #masters #archive Not Available Table of Contents - Basic memory and reference management - Simple comparisons - Basic arithmetic - Addition - Subtraction - Multiplication - Division - Modulo - Execution time - Space used (in memory or disk) $\mathcal{O}(5n) \rightarrow \mathcal{O}(n)$ $\mathcal{O}(n^{2} + 1000n - 3) \rightarrow \mathcal{O}(n^{2})$ • Dropping constants theoretically is possible because constants do not grow towards infinity • In practice however, these constants can affect practical outcomes of algorithms • Performance does not scale with input size • Example is having a list and returning the first item in the list: mylist = [1:5...] • Performance does scale with size • Example is summing all elements in an array: mylist = [1:5...] summed_values = sum(mylist) • Performance scales logarithmically with input size • Base doesn't matter due to change of base: $log_{m}(n) = \frac{log_{2}(n)}{log_{2}(m)} = Clog_{2}(n) \rightarrow \mathcal{O}(log_{m}(n)) \rightarrow \mathcal{O}(log(n))$ • It can intuitively be thought of the running time is proportional to the $n$ Stack Overflow Explanation • Another way to think about it is that the time goes up linearly while $n$ increases exponentially Stack Overflow Explanation An example of this behavior is here: Time ($t$) $n$ Where the proportion can be stated as: $f(n) = log_{10}(n^{t})$ The following exercises can be found here 1. What is the time and space complexity of: a = 0 i = 0 while i < N a = a + rand(1) i += 1 b = 0 j = 0 while j < M b = b + rand(1) j += 1 Answer: $\mathcal{O}(N + M), \mathcal{O}(1)$ Explanation: Since we measure complexity by worse case scenario of primitive operations ran, there could be $N$ and $M$ operations executed in this code. As no additional space is being utilized, space complexity is constant as no new variables are being defined. 2. What is the time complexity of: a = 0 i = 0 j = N while i < N while j > i a = a + i + j j -= 1 i += 1 Answer: $\mathcal{O}(N \cdot N)$ Explanation: Both loops are dependent on $N$ so both loops, iterate $N$ times, therefore resulting in a time complexity of $\mathcal{O}(N \cdot N)$ 3. What is the time complexity of the following code: i = N / 2 k = 0 while i <= N j = 2 while j <= N k = k + N / 2 j *= 2 i += 1 Answer: $\mathcal{O}(n \cdot \log(n))$ Explanation: As $n$ continues to increase, the variable, $k$ continues to loosely grow more than exponential. Furthermore, there are, $\frac{n}{2}$ primitive steps in the outer loop such that the total time complexity would be $\mathcal{O}(\frac{n}{2} \cdot \log{n})$ which is then simplified to $\mathcal{O}(n \cdot \log(n))$ I got this wrong initially because I did not account for the outer loop contributing a time complexity of $\frac{n}{2}$. 4. What does it mean when we say that algorithm X is asymptotically more efficient than Y? Answer: Algorithm X will always be better for large inputs Explanation: When we consider an asymptote in terms of an algorithm, we also consider that algorithms "growth" over time. Meaning, that if you have some algorithm that is efficient at an asymptote, by nature of asymptotic analysis, that means it is "good" in the worst case scenario of that algorithm. Addendum: I got this wrong when thinking about asymptotic notation as I failed to consider growth. I thought X would be better for all inputs to that algorithm but that would not be so in the case of possibly a smaller input to X. 5. What is the time complexity of the following code: a = 0 i = N while i > 0 a += i i /= 2 Answer: $\mathcal{O}(\log{n})$ Explanation: There is a direct proportional relationship between $n$ and the final output. 6. What best describes the useful criterion for comparing the efficiency of algorithms? Answer: Time and Memory Explanation: Time dictates how long a program will evaluate for and memory dictates how much a program can evaluate 7. How is time complexity measured? Answer: By counting the number of primitive operations in an algorithm on a given input size. Explanation: Each primitive operation is generally assumed to evaluate at the cost of "one" for each operation. 8. What will be the time complexity of the following code? <!β NOTE: SKIPPING FOR NOWβ > i = 0 while i < N i *= k 9. What will be the time complexity of the following code? <!β NOTE: SKIPPING FOR NOWβ > value = 0 i = 0 j = 0 while i < n while j < i value += 1 j += 1 i += 1 10. Algorithm A and B have a worst-case running time of $\mathcal{O}(n)$ and $\mathcal{O}(logn)$, respectively. Therefore, algorithm B always runs faster than algorithm A. Answer: False Explanation: Algorithm A could be faster on smaller inputs as compared to algorithm B Zelko, Jacob. Asymptotic Notation. https://jacobzelko.com/09242021040445-asymptotic-notation. September 28 2021.
{"url":"https://jacobzelko.com/09242021040445-asymptotic-notation/","timestamp":"2024-11-11T20:15:21Z","content_type":"text/html","content_length":"53238","record_id":"<urn:uuid:4589254f-ca85-47f9-af64-0a8a2fa1e2b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00447.warc.gz"}
Exponential growth - Year 10 maths revision "When will I ever need to use logarithms, or raising something to the the power of 1/n in real life". During Covid-19, it turns out. Since no one likes logarithms, I thought I'd just post a few formulas. The following should work on any system of exponential growth. (Naturally, for Excel formulas, substitute in the value or cell reference for X, Y, T, etc) Formula Excel Formula Example Initial Cases X 112 Aus # Mar10 Final Cases Y 2,431 Aus # Mar25 Time T 15 days Multiplier over duration M = Y/X =Y/X 21.71 Daily Multiplier M[D] = M^1/T =(Y/X)^(1/T) 1.23 Daily % Increase =(M[D] -1) * 100 =((Y/X)^(1/T)-1) 22.8% Format as % Time to double T[2X] = log[e]2 / log[e]M[D] =LN(2)/LN((Y/X)^(1/T)) 3.38 days Time to 10x T[10X] = log[e]10 / log[e]M[D] =(Y/X)^(1/T)/LN(10) 11.22 days Convert from 'time to double' to 'time to 10x' T[10X ]= T[2X] * log[e]10 / log[e]2 =TDOUBLE * LN(10)/LN(2) Convert from 'time to 10x' to 'time to double' T[2X] =T[10X] * log[e]2 / log[e]10 =TTEN * LN(2)/LN(10) Convert from 'time to double' to 'daily multiplier' M[D] = e^log2 / T2X =EXP( LN(2) / TDOUBLE) Apply a daily multiplier for N days M = M[D]^N =POWER(MD, NUMDAYS) No comments:
{"url":"http://blog.ylett.com/2020/03/exponential-growth-year-10-maths.html","timestamp":"2024-11-03T16:28:12Z","content_type":"application/xhtml+xml","content_length":"75672","record_id":"<urn:uuid:252321e6-9c9a-4cb1-b29d-10a5f7fb527b>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00496.warc.gz"}
I am ready to start development on a new math program but I am stuck as to what I should create. I have already made a FOIL program and a program that tells distance, midpoint, slope and equation from two points. Any feedback would be great! EDIT 1: Started quadratic formula program. gonna need help simplifying the radical in the program Sounds like a plan. What does your code look like so far? What are you using as a TI-BASIC guide? What's the algorithm you're planning to use for radical simplification? Then there is the usual outputs. That is the business end of my code. I am not sure what alg to use... Well, the simplest version is to pull out factors from the number until you can't pull out any more, but it can be slow. It goes something like this (in pseudocode): done = 0 outside = 1; inside = N; //<- this is the original number while done == 0: done = 1 for i,1,sqrt(N): if N/i^2 = int(N/i^2): outside = outside*i done = 0 See how that works? So in ti-basic, it looks like :While K=0 :For i,1,sqrt(D) :If D/i^2=int(D/i^2 That looks just about right to me! Three caveats: - I wouldn't recommend using K for the "done" variable, as K is most often used as the key code returned from getKey. That's personal style, though. - Don't use lowercase "i", as that has a fixed value, sqrt(-1). Use uppercase I or some other of the 27 uppercase real variables. - Make sure you save one byte per squared symbol by using the superscript 2 instead of [^][2]. and to make it deal with nonreals, would this work? is there a better way? :While Z=0 :For I,1,sqrt(D) :If D/I^2=int(D/I^2 :End // then output Zi*sqrt(O :While Z=0 :For I,1,sqrt(D) :If D/I^2=int(D/I^2 :End //output Zsqrt(O then take Z and divide it by 2A Or add "a+bi" at the top of your program (found in the MODE menu). souvik1997 wrote: Or add "a+bi" at the top of your program (found in the MODE menu). I'd go with what Souvik suggested. Also, I found a few small fixes in your program, all stemming from the fact that the equal sign (=) only checks equality. Unlike other programming languages, to store a value to a variable, you use [value]→[variable], where → is the [STO>] key. So for example, should be You can even optimize another byte away by using the Delvar command (Delvar Z) instead of setting Z to 0; they do the same thing. It isn't working. It doesn't do anything to D. Say I have sqrt(8 And I run it through the program. It says the answer is 8sqrt(1 The correct answer is 2sqrt(2 lanmonster wrote: It isn't working. It doesn't do anything to D. Say I have sqrt(8 And I run it through the program. It says the answer is 8sqrt(1 The correct answer is 2sqrt(2 Whoops, totally missed this post. :While Z=0 :If D/I²=int(D/I² For starters, I see that the done=1 line in my pseudocode (which should become 1->Z here) never got carried over. For another, that inner loop should start from 2, not 1, because D/1^2 will always equal int(D/1^2. With my fixes, O sqrt (D) seems to be correct: BASIC Code wrote: :While Z=0 :If D/I²=int(D/I² :Disp {O,D} Generated by SourceCoder, © 2005-2012 Cemetech Why not replace :While Z=0 with :Repeat Z ? I'm on my phone now, so I can't really think of any other suggestions at the moment.
{"url":"https://dev.cemetech.net/forum/viewtopic.php?t=8594&view=previous","timestamp":"2024-11-12T03:44:45Z","content_type":"text/html","content_length":"67698","record_id":"<urn:uuid:e363a321-678a-402c-8fc7-0b2708a92d18>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00470.warc.gz"}
wavelength to ev Photon energy is used for representing the unit of energy. 3.1 eV hchc Ehf E!! " OR enter the energy in eV and click "Calculate λ and F" and the values will appear in the corresponding fields. The energy of a single photon of green light of a wavelength of 520 nm has an energy of 2.38 eV. Example. Use this wavelength calculator to help you determine the relationship between wavelength and frequency. But in the soft x-rays you have wavelengths in the order of angstroms. or a peak that has a wavelength of 1.06 nm and a linewidth of .01 nm would be centered at 9433962 cm-1 with a line width of 89000 cm-1. This is the process by which you can convert ev to wavelength in nm- eV = V × C / 1.602176565×10-19. Connecting Wavelength, Energy and Time . Strategy The energy of a photon of EM radiation with frequency f is E=hf. The resulting expression E = hc/λ is used as a wavelength formula. You can use the photon energy calculator to further explore the relationship between the photon energy and its frequency or wavelength. The energy of a single photon is a small number because the Planck constant is ridiculously tiny. What is the energy in electron-volts that is consumed in an electrical circuit with voltage supply of 20 volts and charge flow of 2 coulombs? Solution 4(a) Calculate the wavelength of a photon with energy 3.1 eV. In the ultraviolet you have 3 to 30 eV energies, in the range of 100 to 1000 eV you have soft x-rays, and beyond that hard x-rays. Among the units commonly used to denote photon energy are the electronvolt (eV) and the joule (as well as its multiples, such as the microjoule). ===== (b) Calculate the frequency of a photon with energy 3.1 eV. 1240 eVnm, so 400 nm. Although performing the manual calculation using the wavelength formula isn’t a complex task, this wavelength to frequency calculator is a lot easier to use and it’s highly accurate. Formulas: Planck constant h = 6.62606957*10-34 J*s Speed of light c = 299792458 m/s c = λ*f Elektron-volt: 1 eV = 1.602176565*10-19 J E = h*c / λ E p = E / (1.602176565*10-19) T at λ max = 2,89776829 nm * Kelvin / λ (Wien's displacement law) T at λ max is the temperature of a black body, whose radiation has a maximum at λ. Photons per joule = 1 / (1.602176565*10-19 * E p) OR enter the frequency in gigahertz (GHz) and press "Calculate λ and E" to convert to wavelength. This type of problem, while simple, is a good way to practice rearranging and combining equations (an essential skill in physics and chemistry). Wavelength to Joules formula is defined as (6.626xc)/w. To find energy from wavelength, use the wave equation to get the frequency and then plug it into Planck's equation to solve for energy. Here planck's equation is used in finding the energy by using the wavelength of the light. 1000 3. Wavelength will be in μm. How to convert delta cm-1 to delta electronvolts or eV Since eV is proportional to cm-1 this is easy d(eV) = d(cm-1) * 1.23984 x 10-4 So energy is inversely proportional to wavelength. 8 14 9 3.0010 m/s How to convert eV to volts Photon energy can be expressed using any unit of energy . In the visible spectrum you have wavelengths of a nanometer. To determine the energy of a wave from its wavelength, we need to combine Planck's equation with wavelength equation. Equivalently, the longer the photon's wavelength, the lower its energy. Free Javascript Angstrom - ElectronVolt Eachway Converter. The frequency and wavelength are related by!f=c. Cornelius - January 2001 Source: http://www.srs.dl.ac.uk/XUV-VUV/science/ angstroms.html Given below energy of light with wavelength formula to calculate joules, kilojoules, eV, kcal. E = 20V × 2C / 1.602176565×10-19 = 2.4966×10 20 eV . Here, h is Planck's constant and c is the speed of light. Originally by S.M. Die Stämme Timing Script, Wolf Tötet Kind, Konzert Bülent Ceylan, An Tagen Wie Diesen, I Am Legend Buch, Aufgaben In Der Familie Arbeitsblatt, Kontra K Shop, Kindergarten Spiele Für Drinnen, Poe Crusader Influence Map, Berühmte Brücke Frankfurt, Luca Hänni Mutter,
{"url":"http://web290.server44.configcenter.info/uzvosx/wavelength-to-ev","timestamp":"2024-11-05T06:03:13Z","content_type":"text/html","content_length":"26103","record_id":"<urn:uuid:89702cf7-cb82-4f81-9dbd-148843696c83>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00768.warc.gz"}
Regularity Properties and Determinacy MoL-2007-11: Khomskii, Yurii (2007) Regularity Properties and Determinacy. [Report] Text (Full Text) Preview Download (449kB) | Preview Text (Abstract) Download (7kB) One of the most intriguing developments of modern set theory is the investigation of two-player infinite games of perfect information. Of course, it is clear that applied game theory, as any other branch of mathematics, can be modeled in set theory. But we are talking about the converse: the use of infinite games as a tool to study fundamental set theoretic questions. When such infinite games are played using integers as moves, a surprisingly rich theory appears, with connections and consequences in all fields of pure set theory, particularly the study of the continuum (the real numbers) and Descriptive Set Theory (the study of "definable" sets of reals). The concept of determinacy of games-a game is determined if one of the players has a winning strategy-plays a key role in this field. In the 1960s, the Polish mathematicians Jan Mycielski and Hugo Steinhaus proposed the famous Axiom of Determinacy (AD), which implies that all sets of reals are Lebesgue measurable, have the Baire property, the Perfect Set Property, and in general all the "regularity properties". This contradicts the Axiom of Choice (AC) which allows us to construct irregular sets by using an enumeration of the continuum. A lot of work on determinacy is therefore done in ZF, i.e., Zermelo-Fraenkel set theory without the Axiom of Choice. In such a mathematical universe with AC replaced by AD, the pathological, nonconstructive sets that form counter-examples to the regularity properties are altogether banished. But how should we understand determinacy in the context of ZFC, i.e., standard Zermelo-Fraenkel set theory with Choice? The easiest way is to look at determinacy as another kind of regularity property, D, where a set of reals A is determined if its corresponding game is determined. Since in the AD context infinite games are used to prove regularities, one would expect determinacy to be a kind of "mother regularity property", one which subsumes and implies all the others. This is indeed true, but only in the "classwise" sense: assuming for some large collection Gamma of sets that each of them is determined, we may conclude that each set in Gamma has the regularity properties. Does determinacy actually have "pointwise" consequences, i.e., if we know of a set A that it is determined, does that imply that A is regular? In general, the answer is no. The real "mother regularity property" is the much stronger property of being homogeneously Suslin, which does imply all the regularity properties pointwise.1 Although there are close similarities between determinacy and being homogeneously Suslin, the crucial difference lies in the fact that the former has only classwise consequences whereas the latter has pointwise consequences. In this sense determinacy is a relatively weak property. Although, from the beginning, researchers were aware of this fact, a rigorous study of pointwise (non-)implications from determinacy has not been carried out until a paper by Loewe in 2005. In this thesis, we will continue the research started in that paper and generalize some of its results. Another focus of this thesis are the regularity properties themselves. We take the view that most regularity properties are naturally connected with special combinatorial objects called forcing partial orders. The motivation comes from the theory of forcing, a mainstream area dealing with the independence of certain propositions (like the Continuum Hypothesis) from the axioms of set theory. These combinatorial objects are also interesting in their own right, and can be put in connection with classical regularity properties (e.g., the Baire property and the Perfect Set Property) as well as other regularity properties. There are still a number of open questions regarding these connections. This thesis will combine the study of pointwise consequences of determinacy with the study of these general open questions. Concretely, we denote a particular forcing partial order by P. Some P generate a topology, whereas others don't, and this distinction into topological versus non-topological forcing notions will be central to our work. The most important regularity property connected to P is the Marczewski-Burstin algebra denoted by MB(P), which can easily be defined for any P. However, when P is topological, this algebra tends to be a "bad" regularity property and is replaced by the Baire property in the topology generated by P, denoted by BP(P). But this is only a heuristic distinction, and no research has yet been done on what the precise reason for the dichotomy is. This leads us to formulate our first research question: Main Question 1: Why is there a dichotomy between topological and nontopological forcings P, i.e., why is it that for non-topological forcings P the right regularity property is MB(P) whereas for topological ones it is BP(P)? When is MB(P) a "good" property, and what is the relationship between the two regularity properties? Moving on toward pointwise consequences of determinacy, we wish to study the connections between determinacy and the regularity properties introduced above. In Loewe's paper, the case of non-topological forcings P and the corresponding algebras MB(P) is covered, where it is proved that in all interesting cases determinacy does not imply MB(P) pointwise. Also, a weak version of the Marczewski-Burstin algebra, denoted by wMB(P), is introduced and studied (where the connections with determinacy are more interesting). We will do an analogous analysis for the topological Main Question 2: Can we do an analysis of the pointwise connection between determinacy and the Baire property BP(P) (for topological P), similar to the one in Loewe's paper? Can we also introduce a weak version of the Baire property wBP(P), and if so, what is the pointwise connection between determinacy and wBP(P)? If BP(P) was a generalization of the standard Baire property, then there are also several generalizations of the Perfect Set Property. These so-called asymmetric regularity properties can also be connected to forcing partial orders P, in which case we denote them by Asym(P). In current research, there are four particular examples but as of yet no general definition. We would like to find that general definition, and also to study the pointwise connections with determinacy, analogously to Question 2. This leads us to the last research question: Main Question 3: Can a general definition for the asymmetric property Asym(P) be given? If so, can we do a similar analysis for the pointwise connections between determinacy and Asym(P) as we did in Question 2? This thesis is structured as follows: in Chapter 1, we introduce the basic definitions and ideas related to the study of the real numbers and the forcing notions. Chapter 2 is still introductory, developing in detail the key ideas: determinacy, regularity properties, pointwise and classwise implications. In Chapter 3 we deal with Main Question 1. The main result there is Theorem 3.4 which provides the connection between MB and BP. In the rest of the chapter we study other aspects of Question 1 (when is MB(P) a \sigma-algebra) and provide a partial answer in Theorems 3.6 and Theorem 3.13. In Chapter 4 we deal with Main Question 2. Analogously to Loewe's paper we prove that determinacy does not imply BP(P) pointwise (Theorem 4.8) and characterize the P for which determinacy does, or does not, imply the weak Baire property pointwise (Theorems 4.13 and Finally, in Chapter 5 we deal with Main Question 3. Although we do not find a clear definition for Asym(P), we do give a necessary condition which such a property must satisfy, in terms of a game characterization. This characterization is sufficient to solve the second part of the question: in Theorem 5.12 we do prove that determinacy does not imply Asym(P) pointwise in all non-trivial cases. Item Type: Report Report Nr: MoL-2007-11 Series Name: Master of Logic Thesis (MoL) Series Year: 2007 Date Deposited: 12 Oct 2016 14:38 Last Modified: 12 Oct 2016 14:38 URI: https://eprints.illc.uva.nl/id/eprint/783 Actions (login required)
{"url":"https://eprints.illc.uva.nl/id/eprint/783/","timestamp":"2024-11-06T05:01:04Z","content_type":"application/xhtml+xml","content_length":"40511","record_id":"<urn:uuid:73038e3c-258f-4732-93e4-efbec7cbce23>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00882.warc.gz"}
Geometric interpretation of the weak-field Hall conductivity in two-dimensional metals with arbitrary Fermi surface The Hall conductivity xy of a two-dimensional metal in the weak-field, semiclassical, limit has a simple geometric representation. xy (normalized to e2/h, where e is the electron charge and h is Plancks constant), is equal to twice the number of flux quanta 0 threading the area Al, where Al is the total Stokes area swept out by the scattering path length l(k) as k circumscribes the Fermi surface (FS). From this perspective, many properties of xy become self-evident. The representation provides a powerful way to disentangle the distinct contributions of the three factors, FS area-to-circumference ratio, anisotropy in lk, and negative FS curvature. The analysis is applied to the Hall data on 2H-NbSe2 and the cuprate perovskites. Previous model calculations of xy are critically reexamined using the new representation. All Science Journal Classification (ASJC) codes Dive into the research topics of 'Geometric interpretation of the weak-field Hall conductivity in two-dimensional metals with arbitrary Fermi surface'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/geometric-interpretation-of-the-weak-field-hall-conductivity-in-t","timestamp":"2024-11-09T11:21:10Z","content_type":"text/html","content_length":"49676","record_id":"<urn:uuid:7d59c5af-58c8-4740-bee0-24954f2ca24f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00248.warc.gz"}
Augmenting photometric redshift estimates using spectroscopic nearest neighbours Issue A&A Volume 672, April 2023 Article Number A150 Number of page(s) 9 Section Numerical methods and codes DOI https://doi.org/10.1051/0004-6361/202245369 Published online 14 April 2023 A&A 672, A150 (2023) Augmenting photometric redshift estimates using spectroscopic nearest neighbours ^1 Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, Via G. Celoria 16, 20133 Milano, Italy e-mail: federico.tosone@unimi.it; marina.cagliari@unimi.it ^2 INAF–Osservatorio Astronomico di Brera, Via Brera 28, 20121 Milano, and Via E. Bianchi 46, 23807 Merate, Italy ^3 INFN–Sezione di Milano, Via G. Celoria 16, 20133 Milano, Italy Received: 3 November 2022 Accepted: 2 March 2023 As a consequence of galaxy clustering, close galaxies observed on the plane of the sky should be spatially correlated with a probability that is inversely proportional to their angular separation. In principle, this information can be used to improve photometric redshift estimates when spectroscopic redshifts are available for some of the neighbouring objects. Depending on the depth of the survey, however, this angular correlation is reduced by chance projections. In this work, we implement a deep-learning model to distinguish between apparent and real angular neighbours by solving a classification task. We adopted a graph neural network architecture to tie together photometry, spectroscopy, and the spatial information between neighbouring galaxies. We trained and validated the algorithm on the data of the VIPERS galaxy survey, for which photometric redshifts based on spectral energy distribution are also available. The model yields a confidence level for a pair of galaxies to be real angular neighbours, enabling us to disentangle chance superpositions in a probabilistic way. When objects for which no physical companion can be identified are excluded, all photometric redshift quality metrics improve significantly, confirming that their estimates were of lower quality. For our typical test configuration, the algorithm identifies a subset containing ~75% high-quality photometric redshifts, for which the dispersion is reduced by as much as 50% (from 0.08 to 0.04), while the fraction of outliers reduces from 3% to 0.8%. Moreover, we show that the spectroscopic redshift of the angular neighbour with the highest detection probability provides an excellent estimate of the redshift of the target galaxy, comparable to or even better than the corresponding template-fitting estimate. Key words: galaxies: distances and redshifts / methods: statistical / methods: data analysis © The Authors 2023 Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article is published in open access under the Subscribe to Open model. Subscribe to A&A to support open access publication. 1 Introduction Knowledge of galaxy distances is of the utmost importance for cosmology to reconstruct the underlying 3D dark matter distribution that encapsulates key information about the evolution and matter content of the Universe. On cosmological scales, the most efficient method for estimating distances is through their cosmological redshift, which directly connects to the standard definitions of distance. Sufficiently precise red-shift measurements allow us to test the world model through the redshift-distance relation, coupled with standard rulers and standard candles (e.g. Riess et al. 1998; Perlmutter et al. 1998). Over the past 25 yr, galaxy clustering measurements from large redshift surveys have been able to quantify the universal expansion and growth histories, pinpointing the value of cosmo-logical parameters to high precision (e.g. Tegmark et al. 2006; Colless et al. 2003; Blake et al. 2011; de la Torre et al. 2017; Alam et al. 2017; Pezzotta et al. 2017; Bautista et al. 2021). Even larger redshift surveys are now ongoing (DESI; DESI Collaboration 2016) or are scheduled to start soon (Euclid; Laureijs et al. 2011), with the goal of further refining these measurements to exquisite precision and find clues for the poorly understood ingredients of the remarkably successful standard model of cosmology. The redshift is measured from the shift in the position of emission and absorption features identified in galaxy spectra, typically through cross-correlation techniques with reference templates, which capture the full available information (e.g. Tonry & Davis 1979). Despite the considerable advances of multi-object spectrographs over the past 40 yr, collecting spectra for large samples of galaxies remains an expensive task. A cheaper, lower-precision alternative is offered by photometric estimates, that is, by measurements based on multi-band imaging, in which integrated low-resolution spectral information is collected at once for large numbers of objects over large areas. The price to be paid is that of larger measurement errors, together with a number of catastrophic failures, which limit the scientific usage of such photometric redshifts (photo-zs hereafter) to specific applications (e.g. Newman & Gruen 2022). Still, when a sufficient number of photometric bands is available (Benitez et al. 2014; Laigle et al. 2016; Alarcon et al. 2021) or when even information about the ensemble mean spectrum can be obtained (Cagliari et al. 2022), these samples become highly valuable in many respects. Photo-zs are traditionally estimated by fitting template spectral energy distributions (SED) to the measured photometric fluxes (see e.g. Bolzonella et al. 2000; Arnouts et al. 2002; Maraston 2005; Ilbert et al. 2006). Detailed reviews can be found in Salvato et al. (2019), Brescia et al. (2021), and Newman & Gruen (2022). Since the pioneering work of Collister & Lahav (2004; see also Lahav 1994), who first used artificial neural networks (ANN) to obtain photo-z estimates, machine-learning (ML) algorithms have seen many further applications in this context. These include random forests (Carliles et al. 2010), self-organizing maps (SOM; Masters et al. 2015), and advanced ANNs (Sadeh et al. 2016). A notable recent application uses the full images of galaxies through convolutional neural networks (CNN; Pasquet et al. 2019; Henghes et al. 2022). All these methods provide photo-z estimates by using information that is strictly local, that is, the flux of each object measured in a number of photometric bands, independently of correlations with the other galaxies in the sample. In the specific case when a photometric survey includes spec-troscopic redshifts for a representative sub-sample spread over the same area, these represent additional information, which can be exploited to obtain improved estimates of the missing red-shifts. Since galaxies are spatially clustered, angular neighbours on the sky preserve a degree of redshift correlation, depending on the depth of the catalogue. The deeper the catalogue, the weaker the correlation because the projection is made over a deeper baseline. Still, an angular correlation remains, as can be seen explicitly in Fig. 1, in the data of the VIMOS Public Extragalactic Redshift Survey (VIPERS; Guzzo et al. 2014). This correlation was exploited, for example, to improve our knowledge of the overall sample redshift distribution (Newman 2008), which is a fundamental quantity for many cosmological investigations such as weak-lensing tomography. With VIPERS, instead, it was used to estimate the galaxy density field to fill the gaps due to missing redshifts (Cucciati et al. 2014). Even more finely, Aragon-Calvo et al. (2015) used the fact that galaxies are typically confined within cosmic web structures to obtain a dramatic improvement in the estimate of photo-zs for ~200 million Sloan Digital Sky Survey galaxies, starting from only about one million spectroscopically measured redshifts. Our goal with the work presented here has been to optimally retrieve this non-local information from the neighbouring objects of a given galaxy building upon a specific class of ML architectures, graph neural networks (GNN). The key property of this class is the ability to combine information from unstructured data based on our priors of the task at hand (Bronstein et al. 2017). The end goal is to obtain an improved estimate of the galaxy redshift. As shown by Fig. 1, the existing correlation between angular neighbours is strongly diluted by the sea of chance superpositions along the line of sight. Thus, the problem can be more appropriately recast into quantifying the probability that a given angular neighbour (with known redshift) is a physical companion for a given galaxy and thus is closely correlated in redshift as well. Our GNN model, dubbed NezNet, combines the intrinsic features of a target galaxy and a neighbour, that is, their multiband fluxes, the spectroscopic redshift of the neighbour, and their relative angular distance, to output the probability for the two galaxies to be spatially correlated. We trained and tested NezNet using the spectroscopic sample of VIPERS. We show that discarding targets for which no real physical neighbour is identified with significant probability improves the quality of the associated photo-z catalogue obtained through classic SED fitting, increasing precision and accuracy and reducing the fraction of catastrophic outliers. Moreover, when real neighbours are identified, the redshift of the highest-probability neighbour represents an estimate of the target redshift that is typically more precise than that obtained through the classical SED fitting. The idea of using GNNs to draw additional redshift information from neighbouring galaxies is not new. Beck & Sadowski (2019) presented preliminary results of an approach based on using only the photometry of a neighbourhood of galaxies, obtaining a 10% improvement on the median absolute deviation of the photo-zs estimated via a single object-based ML algorithm. The main shortcoming of methods that are based on apparent neighbours lies in the large fraction of chance superpositions, as evident in Fig. 1. Here, we reformulated the problem as a detection task that identifies the physical neighbours of the surrounding spectroscopic objects, also including the neighbour’s spectroscopic information. In this way, we obtain a significant improvement. The paper is organised as follow. In Sect. 2, we give a brief description of how GNNs work and specify the architecture of our model. In Sect. 3, we describe the properties of VIPERS data and the way we prepared the training set, in particular, how we defined real or apparent neighbouring objects. Section 4 describes how the model is applied to the data and the metrics we used to quantify the performance of the results. Finally, in Sect. 5 we present and discuss our results, and we conclude in Sect. 6. Fig. 1 Correlation between the galaxy redshift and that of its nth nearest angular neighbour (n = {1,2,3,4}, left to right), as seen in the VIPERS redshift survey data, which cover the range 0.5 < z < 1.2. Clearly, while a tight correlation exists for a number of objects, many other angular pairs just correspond to chance superpositions. 2 Model A neural network model can be summarised as a set of non-linear functions applied to a set of inputs that undergo a linear mapping. Each mapping has many parameters that are optimised through a training process that allows the network model to approximate a wide variety of almost arbitrary functions (LeCun et al. 2015). In its simplest form, a neural network model corresponds to a multi-layer perceptron (MLP), also known as dense neural network (Murtagh 1991). For images, neural architectures such as CNN are more suited because they take our a priori knowledge about the data structure into account (O’Shea & Nash 2015). This reasoning can be pushed further by introducing neural networks for graph representations (Zhou et al. 2018). In this work, we make use of one key aspect of GNN, that is, message passing (Gilmer et al. 2017). To fix ideas, the problem we wish to address is the following: we need to find the spectroscopic galaxies with the highest probability of being close to a galaxy for which only photometric information is available. This can be recast as a classification task for each pair of galaxies, in which our aim is to distinguish between apparent and real neighbours when projected on the plane of the sky. Intuitively, a model that distinguishes between apparent and real neighbours should be based on the relative difference between galaxy features. A neural network like this can be designed by including a layer of the form $xi′=∑j∈𝒩(i)h(xi,xi−xj),$(1) where x[i] refers to the array of input features of the node i, 𝒩(i) is the neighbourhood of the same node, Σ is the aggregation function that sums the outcomes from each pair of nodes. The function h is an MLP that explicitly combines the value of the input feature at the node and the relative difference of that feature with respect to the neighbour. It is worth noting that this GNN is both permutation equivariant and permutation invariant, so that it is not affected by a change in the order of the nodes, that is, the input galaxies. The complete architecture of our model is illustrated in Fig. 2. Each node is a galaxy, whose inputs (e.g. the photometric measurements) were pre-processed through an MLP before undergoing the message passing of Eq. (1). We restricted ourselves to the case of galaxy pairs, so that the neighbourhood 𝒩(j) includes only one galaxy, and the aggregation function simply sums the features $x1′+x2′$. This model can be seen as a trivial version of EdgeConv (Wang et al. 2018), where the adjacency matrix is a 2 × 2 matrix, with 0 entries for diagonal elements and 1 for the off-diagonal elements. Finally, the summed features undergo a last dense layer with a scalar output. All the activation functions are rectified linear units, with the exception of the final layer, where we used a sigmoid, to represent a probability for our classification task. We call this classification model Nearest-z Network (NezNet). NezNet provides the probability for a pair of galaxies to be real neighbours. The loss function adopted to train NezNet is a standard binary cross entropy, $ℒ=1n∑in[ yilogpi+(1−yi)log(1−pi) ],$(2) where p[i] is the output probability of NezNet for each galaxy pair, while y[i] = 0, 1 is the corresponding training label, and the sum is averaged over the mini-batch. To design our model, we made use of the Spektral library^1 (Grattarola & Alippi 2020), where the EdgeConv layer is conveniently already implemented. Fig. 2 Schematic architecture of NezNet. The input features are first processed by a dense network. Message passing between the two layers through Eq. (1) is then applied to take the relative differences and global values of the features into account. Before the final dense layer, the features are summed and then reprocessed with an MLP to output the score probability of two galaxies being actual 3 Data We trained and tested our approach on the final data release of VIPERS (Guzzo et al. 2014; Scodeggio et al. 2018), for which the redshift correlation between angular neighbours is shown in Fig. 1. The survey used the VIMOS multi-object spectrograph at the ESO Very Large Telescope to target galaxies brighter than i[AB] = 22.5 in the Canada-France-Hawaii Telescope Legacy Survey Wide (CFHTLS-Wide) catalogue, with an additional (r − i) vs. (u − g) colour pre-selection to remove objects at z < 0.5. The resulting sample covers the redshift range 0.5 ≲ z ≲ 1.2, with an effective sky coverage of 16.3 deg^2, split over the W1 and W4 fields of CFHTLS-Wide. We used only galaxies with secure redshift measurements, as identified by their quality flag, corresponding to a 96. 1% confidence level (see Scodeggio et al. 2018). For each galaxy in the catalogue, the following information was considered: the spectroscopic redshift measurement z[spec], the six magnitudes u, g, r, i, z (not to be confused with red-shift) and K [s], the right ascension α (RA), in radians, and the declination δ (Dec), in radians. The angular separation on the sky between two objects with RA α[1] and α[2] and Dec δ[1] and δ[2] is given by the haversine formula, $ΔΘ=arccos(sinδ1sinδ2+cosδ1cosδ2cos(α1−α2)).$(3) We selected the parent photometric sample by applying the same VIPERS colour and magnitude cuts defined above, so as to be fully coherent with the spectroscopic data. 4 Application We set up a training set from the VIPERS W1 galaxy catalogue. We randomly selected about 3 × 10^4 target galaxies, whose spectroscopic redshift during training was ignored. For each of them, we identified the first n[NN] angular nearest neighbours as defined by Eq. (3), which we called spectroscopic galaxies because their spectroscopic redshift information was used in our model. Each of these spectroscopic neighbours was associated with the same target galaxy, but the pairs can be considered as independent from one another in our model. Each angular pair was assigned label 1 when it was a real physical pair, otherwise, it was assigned a 0. The training set was thus made of galaxy pairs. A target galaxy of a pair can also be the nearest neighbour of another target galaxy in another pair. We made this choice in order to maximise the number of training examples available in W1. Our final tests on the W4 catalogue show that this does not lead to any over-fitting of VIPERS data, as the model generalises well. We note that this setting assumes a ratio of spectroscopic to photometric objects of 1 : 1. In the Conclusions section (Sect. 6), we also confirm these results in the more realistic case in which the number of spectroscopic redshifts used for training are a fraction of the number of photometric objects. The definition of a real neighbour is arbitrary; it is reasonable to consider that two angular neighbours form a physical pair when their spectroscopic separation is smaller than a given threshold, This means that in setting up the training data, there are two hyper-parameters, the number of nearest neighbours n[NN] to be considered, and the spectroscopic separation Δz. As we show below, these two hyper-parameters can affect the results significantly, and it is thus relevant to set them up wisely, depending on the specific survey. For each galaxy in the pairs, the input features of the nodes in NezNet are the photometry, the spectroscopy, and the angular position, as listed in Sect. 3. For the target galaxy, we always set z [spec] = 0, so that the model considered it as a missing feature, while providing its value for the neighbouring galaxy. Magnitudes were normalised to the range [0,1], as computed over the whole VIPERS dataset. The angular inputs were provided in terms of relative distance with respect to the target galaxy, so that ΔΘ = 0 for the latter, while for the neighbour, it corresponded to Eq. (3). By adopting this choice, we guaranteed that the model has translational invariance. Another tested option (see Sect. 6) is to use the relative distance in the two sky coordinates RA and Dec as input variables instead of the angular separation of the two galaxies. This choice arises because the surface distribution of the sample is not rota-tionally invariant on the sky because of the technical set-up of the slits in the VIMOS focal plane, with the spectral dispersion oriented along the declination direction. As spectra must not overlap on the detector, targets need to be separated in Dec much more than in RA. As a result, the minimum separation is ~1.9 arcmin in Dec and 5 arcsec in RA. More details can be found in Bottini et al. (2005) and Pezzotta et al. (2017, see their Sect. 4.1). Our experiments show that providing the model with the angular separation ΔΘ introduces a bias in the redshift metrics, which is not observed when the relative separations along RA and Dec are given. In general, however, we find that the separation information does not significantly improve the classifier, and for this reason, we did not use it in our final model. Spatial information instead comes only from the number of nearest neighbours considered. The other hyperparameters of the model, that is, the batch size, number of neurons, and learning rate, have a far weaker impact than Δz and n[NN], and were set to fiducial values: a batch size of 32, a learning rate of 0.001, and a total number of parameters of the order of a few thousands. We find little difference in the output metrics of the redshift estimates when the complexity of the model is increased, or when the batch size and the learning rate are changed around these fiducial values. NezNet gives as output the probability for two galaxies to be real neighbours. As each target galaxy corresponds to n[NN] independent pairs, we can select the neighbour with the highest probability among them. If this probability is below the classification threshold set to define a positive case, we conclude that there is no physical neighbour for that target galaxy in the catalogue. This implies that the probability for the latter is too high to be an outlier in terms of its properties when compared to its neighbours. Removing these objects from the final catalogue significantly improves the metrics when comparing photo-z and spectroscopic measurements. In particular, the reduction in the number of catastrophic redshifts confirms our assumption. Finding a true neighbour instead reinforces the confidence in the photo-z. At the same time, the spectroscopic redshift of the neighbour in this case is typically an even better estimate of the target redshift than the SED-estimated photo-z. These tests are discussed in the following section. The quantitative comparison between NezNet results, spec-troscopic measurements $zspec(i)$, and SED-fitting estimated photo-zs was performed using the metrics defined in Salvato et al. (2019). These are the precision (i.e. the dispersion of the estimated values), $σ=1N∑iN(zspec(i)−z(i)1+zspec(i))2,$(5) the bias $b=1N∑iN(zspec(i)−z(i)),$(6) and the absolute bias $| b |=1N∑iN| zspec(i)−z(i) |,$(7) quantifying systematic deviations. Finally, the outliers are defined as objects for which $| zspec(i)−z(i) |≥0.15(1+zspec(i)).$(8) All the results presented in the following section were obtained by applying the trained NezNet to a test catalogue built in a similar fashion to W1, randomly selecting about 2 × 10^4 galaxies from the twin W4 field of VIPERS. Finally, in the following discussion about our classifier, we use the notion of the true positive rate (TPR), which is the fraction of correctly predicted positive examples with respect to all the real positive examples. It is defined as $TPR=TPTP+FN,$(9) where TP stands for true positives and FN stands for false negatives. Similarly, we can define the false positive rate (FPR), which is the fraction of negative examples classified as positives with respect to all the real negative examples, which reads $FPR=FPFP+TN,$(10) where FP stands for false positives and TN stands for true negatives. Fig. 3 Central galaxy spectroscopic redshift versus its photometric redshift measured with and without NezNet. The left panel shows the distribution of photometric vs. spectroscopic estimates in the original data. In the middle panel, we show the same distribution after removing the galaxies with low score probability from the catalogue (fr stands for the fraction of retained data). Finally, the right panel shows redshift estimates by assigning the spectroscopic redshift of the neighbour with the highest detection probability to the target galaxy. The model was trained with n[NN] = 30 and Δz = 0.08. Fig. 4 Same as Fig. 3, but the model was trained with the higher Δz = 0.15, while n[NN] = 30 is the same as before. 5 Results As explained in the previous section, NezNet can be used to simply clean a photo-z sample by discarding low-probability neighbours or to provide an alternative redshift estimate derived from the highest-probability neighbour. This is demonstrated on the test catalogue in Fig. 3 for a model trained using the hyperparameters Δz = 0.08 and n[NN] = 30. In addition to the VIPERS spectroscopic redshifts, this comparison also includes the original photo-zs estimated by Moutard et al. (2016) using standard SED fitting. For these and all following results, angular information (i.e. the separation of the two objects on the sky) was not used as an input variable. The reason for this was already mentioned in the previous section, and is discussed again in more detail below. Figure 3 shows that by simply dismissing the outliers as identified by NezNet, all the metrics improve significantly (central panel). Moreover, when the best neighbour redshifts are adopted for the target galaxies (right panel), we obtain metrics that are comparable to or even better than those of the cleaned photo-z sample. It is worth noting that in this case, the plot shows a characteristic checkerboard pattern because the spectroscopic redshift striping is reflected, as spectroscopic redshifts are now assigned to target photometric objects. Figure 3 also shows the limits of the method. Comparing the left panel with the other two, we can note that NezNet tends to cut off the high-redshift tail of the distribution. This is easily understood considering the magnitude-limited (i[AB] < 22.5) character of the sample used here, which becomes very sparse at z ≳ 1, where only rare luminous galaxies are present. This means that the model becomes intrinsically less efficient because fewer real physical neighbours are available both for the training and for inference, as is also evident from the density of points at high redshift in Fig. 1. Devising a different loss function to up-weight the few physical pairs in this regime might improve the classification task, but an intrinsic limit to the method clearly exists when the density of the sample decreases. Figure 4 shows the same set of plots, but using a higher value for the spectroscopic separation in the training, that is, Δz = 0.15. As expected, allowing for a larger separation in the definition of real angular neighbours discards fewer data. Conversely, there is in general a lower precision and a small increase in the fraction of outliers. In principle, using a stricter Δz could remove even more outliers, retaining only pairs that are closer in redshift and leading to a smaller, but more precise sub-sample. We explore this dependence in Fig. 5. Overall, this method is always able to clean poor estimates from the sample, but at the price of discarding many data points. The minor improvement in precision probably does not justify the use of Δz < 0.08 in the case of VIPERS, because more than half of the sample is excluded. It is apparent that the hyper-parameter Δz is very relevant for the quality of the classifier. This is made clear by the receiver operator characteristic (ROC) curve in Fig. 6, which shows the TPR ( Eq. (9)) against the FPR (Eq. (10)), and has been computed from the target galaxies in the test catalogue by considering their neighbour with the highest probability. In general, the area under the curve (AUC) is higher for the better classifier. Increasing Δz increases the AUC, which would tend to unity for very high values of this parameter, as all galaxies would then be considered real neighbours. However, our ultimate goal is not to increase the performance of the classifier per se, but to improve the metrics of our redshift estimates. These show that Δz ≳ 0.08 represents the best choice for VIPERS. The other hyper-parameter of NezNet, that is, n[NN], the number of nearest neighbours considered in the training, has a weaker impact on the classifier. We show this in Fig. 7, where each ROC curve corresponds to a model trained with a different n[NN], but all with the same Δz. A drastic change in n[NN] does not correspond to comparable chang es in the AUC. However, n[NN] has a large impact on the redshift estimates, as Fig. 8 shows. A larger number of angular neighbours increases the probability of finding a physical pair, as is shown by the metrics in Fig. 7. We also experimented with a higher value of n[NN] up to 50, but found no further gain with respect to using n[NN] = 30. The redshift metrics start to saturate to the optimal values already above n[NN] = 10. As a further test, we also computed the gradients of the predictions with respect to their input variables to detect the most relevant ones, as shown in Fig. 9. It is interesting to see that the neighbour redshift is a relevant input, as expected, and some of the photometric bands are even more relevant. This confirms the intuition that the photometric information of the neighbours does indeed provide additional information about the relative distance from the target. In this plot, we also show results for the case when the angular separation is considered as one of the input variables. These results show that the angular separation ΔΘ between the target and the neighbour does affect the predictions. This manifests itself as a bias in the redshift estimates, as visible in Fig. 10: in this case, NezNet systematically favours neighbours that are closer to us than the target, increasing the value of the bias b (Eq. (6)). We also tested what happens when the angular separation information is rather given in terms of the relative difference in the angular coordinates RA and Dec of the two galaxies. In this case, the bias disappears and the results are comparable to the standard case in which no angle information is provided. However, in this case, the two parameters clearly have smaller gradients than when ΔΘ alone is considered, which suggests that they do in fact not contribute to the predicting power of the model. For these reasons, the angular separation is not considered as input variable in our final results. One of the novelties of NezNet is the message passing between node features. This is where GNNs differ from a standard ANN, where all input variables of both galaxies would be provided directly to dense layers. We also experimented with a simpler graph model, closely resembling the architecture of NezNet, but without message passing. The input features were processed independently by MLP layers for each node (we tried using either just one or several layers). The new architecture is as in Fig. 2, with the exception of h function blocks, which are now substituted with new MLP blocks, without applying any message passing. The $xi′$ features are summed by the aggregation function, and the summed features are mapped to the output probability through final dense layers with sigmoid activation output, just like in the model with message passing. This kind of model, which maintains the permutation invariance property of a graph, is often referred to as a deep set (Zaheer et al. 2017). We find that this simple model still works remarkably well and is comparable |to NezNet in general. However, it systematically cuts off the high-redshift tail of the catalogue (Fig. 11), even though the overall metrics remain good. Fig. 5 Redshift estimates derived from the best nearest neighbour for various Δz at fixed n[NN] = 30. Increasing the spectroscopic separation to define physical neighbours while diminishing the quality of the metrics increases the fraction of data that are not dismissed from the catalogue. Fig. 6 ROC curve for a varying redshift threshold Δz at fixed n[NN] = 30. The performance of our classifier (AUC) improves as we use a less strict definition of what we define as a true neighbour. The probability that an angular neighbour is a physical neighbour increases at larger Δz, which is also reflected by the high detection threshold (thr). Fig. 7 ROC curve for a varying number of nearest neighbours n[NN] = 30 at fixed Δz = 0.08. Increasing the number of neighbours that are given in input to the training seems to make the training more difficult. However, this test of the classifier does not reflect the quality of the final redshift estimate, as Fig. 8 shows. Fig. 8 Redshift estimates based on the best nearest neighbour for various n[NN] at fixed Δz = 0.08. Increasing the number of nearest neighbours for each target improves the performance of NezNet in estimating redshiſts, as it increases the probability that physical pairs are considered. Fig. 9 Average absolute values of the gradients of NezNet with respect to the input features of the neighbours. For each target, we only considered the neighbour with the highest probability. Fig. 10 Results of redshift estimates for the target galaxies, in the case where the angular separation Eq. (3) is an explicit input of the model. Many galaxies have slightly lower values than the real spectroscopic value, resulting in a large bias b. Currently, we do not have an explanation of this observed effect. Fig. 11 Comparison of the redshift distribution for the predictions of NezNet, and a simpler graph model without message passing. While the latter performs reasonably well in general, it tends to cut the tail of the distribution. 6 Conclusions We have presented a new ML model, dubbed NezNet, which for a pair of galaxies takes as input their measured fluxes in a number of bands together with the redshift of one of the two galaxies. NezNet is capable of probabilistically learning whether their redshift distance is below a given threshold Δz, which is set as a hyper-parameter of the model. The angular separation between the galaxies is implicit in the training set, as for every target galaxy we select its first n[NN] angular neighbours (another hyper-parameter), but it can be an explicit input variable of the model. The backbone of the model is a GNN, a class of neural networks based on message passing and the aggregation of features (Fig. 2). This message passing is explicitly performed as a relative difference between features (Eq. (1)). NezNet outputs the score probability for a galaxy pair to be real neighbours. This information that can be used in two ways. On the one hand, if none of the n[NN] nearest neighbours is identified as a physical neighbour, the target galaxy can be considered an outlier in terms of its properties. This may suggest that it is an interloper, that is, a foreground or background object with respect to the volume sampled by the spectroscopic sample we used for the comparison. It should therefore be discarded from any sample that aims to cover the same redshift range as the spec-troscopic catalogue, for instance, via photometrically estimated redshifts. We have proved this to be true using the VIPERS catalogue. On the other hand, if a physical neighbour is identified, the target galaxy can be assigned the spectroscopic redshift of the highest scoring galaxy among the n[NN] angular neighbours, providing an independent estimate of its redshift in this way. These results are summarised in Figs. 3 and 4: when outliers as detected by NezNet are discarded, all the metrics of the sample improve considerably. Moreover, the NezNet redshift estimates are comparable to or superior in precision to SED-based photometric redshifts, depending on the values chosen for the hyper-parameters. Increasing Δz increases the goodness of the classifier (Fig. 6), as well as the fraction of retained data (Fig. 5). Changing n[NN] has a smaller impact on the classifier (Fig. 7), although it significantly affects the redshift quality metrics because a large enough n [NN] improves the probability of detecting a real neighbour; a value n[NN] ~ 30 is optimal in the case of VIPERS (Fig. 8). It is often the case that the fraction of the parent photometric sample without a spectroscopic measurement has a higher density than the spectroscopic sample. VIPERS indeed has a spectroscopic surface density of Σ ~ 6 × 10^3/deg^2, to compare against the photometric surface density Σ[ph] ~ 45 × 10^3/deg^2. For this reason, we tested NezNet by varying the surface density of the spectroscopic sample used during training. We achieved this by repeating the training procedure on a uniformly subsampled catalogue extracted from W1. The test was performed on W4 without any subsampling, so that we tested for the effectiveness of NezNet trained on a lower-density catalogue. Figure 12 shows that NezNet keeps its effectiveness even when using a subsample of one-eighth of the original spectroscopic density Σ, similar to the VIPERS ratio of spectroscopic to photometric objects. This suggests that NezNet could have an interesting potential also in the context of future experiments, such as Euclid or the NASA Nancy Grace Roman mission (Akeson et al. 2019). These slitless spectroscopic surveys will indeed naturally deliver overlapping photometric and spectroscopic data, which can be combined using NezNet to improve photometric redshift estimates. It is worth stressing that some details of the results presented here depend on the specific features of VIPERS and its parent CFHTLS photometric sample. Some of them may have been advantageous, but others could have penalised the success of the method. For example, the slit-placement constraints in VIPERS limits the ability to target close galaxy pairs, which introduces a shadow in the layout of a VIMOS pointing (see Fig. 6 of Guzzo et al. 2014), and forces a lower limit in the separation of observable galaxy pairs (see Sect. 4). This means that the training sample of NezNet was not ideal in our analysis because surely many of the missed angular pairs were also physical pairs. This increases our confidence in the obtained results because it shows that for samples that are characterised by small-scale incompleteness, as is typical of surveys built using fibre or multi-slit spectrographs, the method still also delivers very useful results. In the case of the VIPERS data, an interesting exercise in this respect would be to use the data from the VLT-VIMOS Deep Survey (VVDS; Le Fèvre et al. 2005) as training sample, which used the same spectrograph, but with repeated passes over the same area of 0.5 deg^2 that substantially mitigate the proximity bias. We leave this exercise for a future work. Fig. 12 Redshift estimates based on the best nearest neighbour, obtained by uniformly subsampling the W1 catalogue, at fixed n[NN] = 30 and Δz = 0.08. The titles of the panels refer to the surface density of spectroscopic objects of W1 used for training, with Σ referring to the complete W1 sample. Except for minor fluctuations in the redshift statistics, NezNet maintains a performance similar to the case without subsampling. The only noticeable trend is the fraction of central galaxies for which a physical pair is found, which decreases for lower densities. This could be due to the decreasing number of available training data. The percentage of real physical neighbours for a central galaxy, which decreases only slightly from Σ to Σ/8, remains around 40% and explains why NezNet is still We thank Davide Bianchi for useful suggestions during the development of this work. FT and MSC are thankful to Daniele Grattarola for insightful discussions on GNNs and the use of the Spektral library. We thank the anonymous referee for his comments and suggestions. FT and LG acknowledge financial support by grant MUR PRIN 2017 ‘From Darklight to Dark Matter’, grant no. 20179P3PKJ. LG and MSC acknowledge financial support from the Italian Space Agency, ASI agreement no. I/023/12/0. All Figures Fig. 1 Correlation between the galaxy redshift and that of its nth nearest angular neighbour (n = {1,2,3,4}, left to right), as seen in the VIPERS redshift survey data, which cover the range 0.5 < z < 1.2. Clearly, while a tight correlation exists for a number of objects, many other angular pairs just correspond to chance superpositions. In the text Fig. 2 Schematic architecture of NezNet. The input features are first processed by a dense network. Message passing between the two layers through Eq. (1) is then applied to take the relative differences and global values of the features into account. Before the final dense layer, the features are summed and then reprocessed with an MLP to output the score probability of two galaxies being actual In the text Fig. 3 Central galaxy spectroscopic redshift versus its photometric redshift measured with and without NezNet. The left panel shows the distribution of photometric vs. spectroscopic estimates in the original data. In the middle panel, we show the same distribution after removing the galaxies with low score probability from the catalogue (fr stands for the fraction of retained data). Finally, the right panel shows redshift estimates by assigning the spectroscopic redshift of the neighbour with the highest detection probability to the target galaxy. The model was trained with n[NN] = 30 and Δz = 0.08. In the text Fig. 4 Same as Fig. 3, but the model was trained with the higher Δz = 0.15, while n[NN] = 30 is the same as before. In the text Fig. 5 Redshift estimates derived from the best nearest neighbour for various Δz at fixed n[NN] = 30. Increasing the spectroscopic separation to define physical neighbours while diminishing the quality of the metrics increases the fraction of data that are not dismissed from the catalogue. In the text Fig. 6 ROC curve for a varying redshift threshold Δz at fixed n[NN] = 30. The performance of our classifier (AUC) improves as we use a less strict definition of what we define as a true neighbour. The probability that an angular neighbour is a physical neighbour increases at larger Δz, which is also reflected by the high detection threshold (thr). In the text Fig. 7 ROC curve for a varying number of nearest neighbours n[NN] = 30 at fixed Δz = 0.08. Increasing the number of neighbours that are given in input to the training seems to make the training more difficult. However, this test of the classifier does not reflect the quality of the final redshift estimate, as Fig. 8 shows. In the text Fig. 8 Redshift estimates based on the best nearest neighbour for various n[NN] at fixed Δz = 0.08. Increasing the number of nearest neighbours for each target improves the performance of NezNet in estimating redshiſts, as it increases the probability that physical pairs are considered. In the text Fig. 9 Average absolute values of the gradients of NezNet with respect to the input features of the neighbours. For each target, we only considered the neighbour with the highest probability. In the text Fig. 10 Results of redshift estimates for the target galaxies, in the case where the angular separation Eq. (3) is an explicit input of the model. Many galaxies have slightly lower values than the real spectroscopic value, resulting in a large bias b. Currently, we do not have an explanation of this observed effect. In the text Fig. 11 Comparison of the redshift distribution for the predictions of NezNet, and a simpler graph model without message passing. While the latter performs reasonably well in general, it tends to cut the tail of the distribution. In the text Fig. 12 Redshift estimates based on the best nearest neighbour, obtained by uniformly subsampling the W1 catalogue, at fixed n[NN] = 30 and Δz = 0.08. The titles of the panels refer to the surface density of spectroscopic objects of W1 used for training, with Σ referring to the complete W1 sample. Except for minor fluctuations in the redshift statistics, NezNet maintains a performance similar to the case without subsampling. The only noticeable trend is the fraction of central galaxies for which a physical pair is found, which decreases for lower densities. This could be due to the decreasing number of available training data. The percentage of real physical neighbours for a central galaxy, which decreases only slightly from Σ to Σ/8, remains around 40% and explains why NezNet is still In the text Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on Vision4Press platform. Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days. Initial download of the metrics may take a while.
{"url":"https://www.aanda.org/articles/aa/full_html/2023/04/aa45369-22/aa45369-22.html","timestamp":"2024-11-12T07:24:19Z","content_type":"text/html","content_length":"190985","record_id":"<urn:uuid:49daab09-ef9e-40d3-9199-7cb107ef926b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00007.warc.gz"}
The craziest machine in the world - Tagesspiegel - Themenspeziale Text Wolfgang Richter It took a pandemic for Germany to get its foot in the door. Otherwise, it would have been shut and we would have left the development of quantum computers to large corporations like IBM and Google. Quantum physics: Lars Steffen and Anna Stockklauser from the ETH Zurich set up the dilution refrigerator to measure quantum properties. But now, with 2 billion euros from the German government’s coronavirus program, quantum technologies will be supported financially. The amount is more than twice the 650 million euros pledged the year before last. Quantum computers are a new type of computer that use qubits instead of the conventional 0 and 1 bits. They represent a quantum-mechanical mixture of 0 and 1 and thus enable much faster calculations – at least in theory, because there are still only the first prototypes. Quantum computers are also only one of three areas of quantum technologies. In addition, there are quantum sensors, for example for precise measurements of magnetic fields in medicine, and quantum communications, in which encrypted information is exchanged with the help of quantum effects. It will probably not be clear exactly how much of the 2 billion euros will go to quantum computers until the beginning of next year; however, it is likely to be a significant portion. Already at the end of January, Research Minister Anja Karliczek announced that she would like to make an additional 300 million euros available just for the development of German quantum computers. Apparently, the minister was startled by Google’s announcement last autumn that their quantum computer “Sycamore” could solve a task in a few minutes that would have taken the best classic supercomputer 10,000 years to complete. The task itself was chosen without any applicability and was deliberately unfair to the classic computer. Nevertheless, this “quantum supremacy” caused a global sensation: It was the first demonstration of the superiority of a quantum computer over a conventional computer, something researchers had long been waiting for. Laboratory work: Creating new quantum worlds with computers. According to the research ministry, a central objective of its initiative is the cooperation with future users of the technology. However, it is precisely this focus on the interests of industry that, curiously enough, poses the greatest danger. “Which route is the fastest for a parcel carrier, even if the traffic situation is constantly changing?” In a press release from the ministry this is the first task mentioned in a list of problems that a quantum computer could solve. And Peter Leibinger, Chief Technology Officer at the machine tool manufacturer Trumpf, spoke at a press conference with Anja Karliczek about how quantum intelligence could help to produce sheet metal in the best possible way. However, all these tasks have one thing in common: they can either be solved just as quickly with a classic computer – or sometimes not at all, i. e. not even with one of the existing quantum computers. Google’s quantum supremacy task is the only exotic exception. Tommaso Calarco from Forschungszentrum Jülich, one of the initiators of the EU’s Quantum Flagship initiative, warned against a dangerous hype at the last quantum computer conference in the United States. What if the CEOs’ high expectations for a quick solution to practical problems are disappointed? Wave character and probability The quantum computer is the gateway to unlocking the curiosities of the quantum world and thus ultimately the foundations of modern technology in a completely new way. To understand this, we need to take a closer look at two phenomena: The wave character of matter and the fundamental importance of probability in the microcosm. Tommaso Calarco from the Forschungszentrum Jülich warned about a dangerous hype at the quantum computer conference in the US. Throw two stones into a pond. The circular waves run into each other and form a complex pattern, because they sometimes weaken and sometimes strengthen each other. If you shoot electrons at a wall with two closely adjacent holes and set up a camera for electrons behind it, exactly the same pattern appears on their screen. This shows that particles also have wave properties. But what is really astonishing is that the pattern also appears over time if you shoot the electrons one after the other at the holes. However, two circular waves are always necessary for the formation of the pattern. So, did each electron split in two and each half flew through each hole? That is physically impossible. Most physicists see the solution in so-called probability waves, which can be assigned to any quantum object (such as electrons). It is not clear whether they really exist or whether they are just a mathematical construct. Either way, roughly speaking, the height of these waves indicates the probability of finding the particle at a certain location. The probability for each hole is now 50 percent that the electron will pass through it. So, the electron’s probability wave splits into two partial waves in front of the holes. A partial wave with a height of “50 percent” passes through each hole, and behind it both waves overlap, as with the two stones in the pond. What does this have to do with quantum computers? Their computing units often consist of small rings that lose their electrical resistance at temperatures just above absolute zero (minus 273 degrees Celsius) and thus become quantum objects. If a portion of energy is supplied to such a ring with the help of light, it can change from a low energy state to a higher one. The researchers now define the low energy state as 0 and the higher as 1. To be absolutely certain that a ring actually changes from 0 to 1, however, the researchers would have to irradiate the light for a certain period of time. But they do not do that. They irradiate the light only half of this time. What happens then? Any layman would probably say that in 50 percent of such cases a change occurs, but not in the other half of the cases. However – this is quantum land. In fact, exactly the same thing happens as in the experiment with the electrons: A probability wave is formed for 0 and 1 each with a height of “50 percent”. And since the researchers use many rings (Google’s Sycamore, for example, has 53), many such waves can be superimposed. And by choosing different irradiation times, the researchers can define different proportions of 0 and 1 for each ring and thus different heights of the waves. And if they then connect the rings in such a way that their energy states influence each other – then they can perform a calculation and read off a solution from the resulting superposition pattern. To obtain the pattern, they only have to repeat the calculation often enough and measure the energy states of the rings. Topological states of matter are fascinating quantum states. This is the essence of a quantum computer. It is not a parallel computer that simply calculates everything very quickly at once. By cleverly choosing the calculation algorithms, it can actually solve optimisation tasks, such as finding a fast route for buses in Lisbon. But above all, it is a being from the quantum world and therefore predestined to calculate exactly that. And quantum physics is involved all things that are truly revolutionary: Complex biomolecules for new drugs and vaccines, superconductors that transport electricity without resistance, catalyst materials that will in future extract CO2 from the atmosphere. Researchers estimate that it will take another 10 years before this happens. Let’s give them that time and not pretend that quantum computers will already be able to remote control our cars tomorrow.
{"url":"http://themenspeziale.tagesspiegel.de.demo.t.transmatico.com/berlin-science-week-2020/default/the-craziest-machine-in-the-world-113130","timestamp":"2024-11-14T18:25:09Z","content_type":"text/html","content_length":"120127","record_id":"<urn:uuid:c5b68403-04c9-458c-8ad8-e176e718fe02>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00893.warc.gz"}
The SURVEYMEANS Procedure The “Ratio Analysis” table displays statistics for all the ratios that you request in the RATIO statement. If you do not specify any statistic-keywords in the PROC SURVEYMEANS statement, then by default this table displays the ratios and standard errors. The “Ratio Analysis” table can contain the following information for each ratio, depending on which statistic-keywords you request: • Numerator, which identifies the numerator variable of the ratio • Denominator, which identifies the denominator variable of the ratio • N, which is the number of observations used in the ratio analysis • number of Clusters • Sum of Weights • DF, which is the degrees of freedom for the t test • Ratio • Std Err of Ratio, which is the standard error of the ratio • Var, which is the variance of the ratio • t Value, for testing • Pr , which is the two-sided p-value for the t test • % CL for Ratio, which are two-sided confidence limits for the Ratio • Upper % CL for Ratio, which are one-sided upper confidence limits for the Ratio • Lower % CL for Ratio, which are one-sided lower confidence limits for the Ratio When you use the ODS OUTPUT statement to create an output data set, if you use labels for your RATIO statement, these labels are saved in the variable Ratio Statement in the output data set.
{"url":"http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/statug_surveymeans_details62.htm","timestamp":"2024-11-09T00:42:46Z","content_type":"application/xhtml+xml","content_length":"16434","record_id":"<urn:uuid:496146ef-7212-4618-bc7f-3ec39a488871>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00467.warc.gz"}
NDA Exam syllabus 2024 - Commandant Defence Academy In Dehradun.. Complete Written NDA Exam syllabus 2024 NDA (National Defense Academy) exam is a respected entrance exam conducted by the Union Public Service Commission (UPSC) of India. It is the gateway for unmarried men or women to join Indian Armed Forces. This exam is held twice a year and it has a strict selection process and the exam is divided in two parts Written exam and SSB Interview. Written exam is further divided into two parts Mathematics and General Aptitude Test (GAT) and only Candidates who obtain the set cut-off by UPSC in written exam are called for SSB interview conducted by the Service Selection Board (SSB).Successful candidates receive training at the National Defense Academy and are appointed as officers after completing their training. The NDA test evaluates a candidate’s academic, physical and mental suitability for service in the Defense Forces. In this blog we will share with you complete NDA written exam syllabus. NDA exam syllabus Mathematics (300 marks) Overview of NDA Mathematics exam Total Questions 120 Total Marks 300 Duration 2 ½ hour Exam format MCQs Correct Answer 2.5 mark Wrong Answer -(0.83) mark Topics in mathematics include algebra, matrices and determinants, trigonometry, geometry, differentials, integrals and differential equations, vector algebra, statistics, and probability. These questions are designed to test the candidate’s understanding of mathematical concepts and problem-solving abilities. NDA Mathematics Syllabus Topic Sub Topic 1.Sets (Concepts & Operations) 2.Venn diagram 3.De Morgan’s Law 4.Cartesian Product 5.Relation 6.Equivalence Relation 7.Real Numbers 8.Complex Numbers 9.Modulus 10.Cube Root Algebra 11.Conversion of a number (Binary to Decimal & Decimal to Binary) 12.Arithmetic 13.Geometric and Harmonic Progressions 14.Quadratic Equations 15.Linear Inequations 16.Permutation and Combination 17.Binomial Theorem 18.Logarithms 1.Concept of a real valued function 2.Domain 3.Range and Graph of a function 4.Composite functions 5.One to One 6.Onto and Inverse Functions 7.Notion of limit 8.Standard limits Calculus 9.Continuity of functions 10.Algebric Operations on Continuous functions 11.Derivative of function at a point 12.Geometrical and Physical Interpretation of a derivative application 13.Derivatives of sum 14.Product and Quotient of functions 15.Derivative of a function with respect to another function 16.Derivative of a Composite Function 17.Second Order Derivatives 18.Increasing and Decreasing Function 19.Application of Derivatives in problems of Maxima and Minima Matrices and 1.Types of matrices 2.Operations on matrices 3.Determinant of a matrix 4.Basic Properties of Determinants 5.Adjoint and Inverse of a Square Matrix 6.Applications-Solution of a Determinants system of Linear Equations in two or three unknown by – · Cramer’s Rule · Matrix Method Integral Calculus 1.Integration as inverse of differentiation 2.Integration by substitution and by parts 3.Standard Integrals involving algebraic Expressions 4.Trigonometric 5.Exponential and and Differential Hyperbolic Functions 6.Evaluation of definite Integrals – Determination of areas of plane regions bounded by curves-applications 7.Definition of order and degree of a differential Equations equation by examples. 8.General and particular solution of differential equations 9.Solution of first order and first-degree differential equations of various types by examples 10.Application in problems of growth and decay Trigonometry 1.Angles and their measures in degrees and in radius 2.Trigonometric Ratio 3.Trigonometric Identities 4.Sum and Difference Formulae 5.Multiple and Sub-Multiple Angles 6.Inverse Trigonometric Functions 7.Applications – Height and Distance 8.Properties of Triangles Vector Algebra 1.Vectors in two and three dimensions 2.Magnitude and Direction of a vector 3.Unit and Null Vectors 4.The Addition of Vectors 5.Scalar Multiplication of a Vector 6.Scalar Product 7.Dot Product of two vectors 8.Vector product or Cross product of two vectors 9.Applications- Work done by Force and Moment of Force in Geometrical Problems. Analytical Geometry 1.Rectangular Cartesian Coordinate System 2.Distance Formula 3.Equation of a line in various forms 4.The angle between two lines 5.Distance of a point from a line 6.Equation of a of Two or Three circle in standard and in a general form 7.Standard forms of Parabola, Ellipse and Hyperbola 8.Eccentricity and Axis of a conic 9.Point in a three-dimensional space 10.The Dimension distance between two points 11.Direction, Cosines and Direction Ratio 12. Equation two points 13.Direction Cosines and direction ratios 14. Equation of a plane and a line in various forms 15.Angle between two lines and angle between two planes 16.Equation of a sphere 1.Probability: Random experiment, outcomes, and associated sample space, events, mutually exclusive and exhaustive events, impossible and certain events 2.Union and Intersection Statistics and of events. Complementary, elementary, and composite events 3.Definition of probability—classical and statistical—examples 4.Elementary theorems on probability-simple problems 5. Probability Conditional probability, Bayes’ theorem— simple problems 6. Random variable as function on a sample space 7.Binomial Distribution 8. Examples of random experiments giving rise to Binomial distribution General Ability Test (GAT) (600 marks) Quick Overview of NDA GAT exam Total Questions 150 Total Marks 600 Subjects English, Physics, Chemistry, General Science, Geography, Current Affairs, and History. English 50 Questions General Knowledge 100 Questions Exam duration 2 ½ hour Correct Answer 4 mark Wrong A -(1.3) mark GAT paper evaluates the candidate’s awareness in current affair and knowledge in English, Physics, Chemistry, General Science, Geography. GAT Syllabus for NDA Exam Subject Topic 1. Physical Properties and States of Matter 2. Modes of transference of Heat 3. Mass, Weight, Volume, Sound waves and their properties 4. Simple musical instruments 5. Rectilinear propagation of Light 6. Density and Specific Gravity 7.Reflection and refraction 8.Principle of Archimedes 9. Spherical mirrors and Lenses 10. Pressure Barometer 11. Human Eye 12. Motion of objects 13. Natural and Artificial Magnets 14. Velocity and Acceleration 15. Properties of a Magnet 16. Newton’s Laws of Motion 17. Earth as a Magnet 18. Force and Momentum 19. Static and Physics Current Electricity 20. Parallelogram of Forces 21. Conductors and Non-conductors 22. Stability and Equilibrium of bodies 23. Ohm’s Law 24. Gravitation 25. Simple Electrical Circuits 26. Elementary ideas of work 27. Heating, Lighting, and Magnetic effects of Current 28. Power and Energy 29. Measurement of Electrical Power 30. Effects of Heat 31. Primary and Secondary Cells 32. Measurement of Temperature and Heat 33. Use of X-Rays 34. General Principles in the working of Simple Pendulum, Simple Pulleys, Siphon, Levers, Balloon, Pumps, Hydrometer, Pressure Cooker, Thermos Flask, Gramophone, Telegraphs, Telephone, Periscope, Telescope, Microscope, Mariner’s Compass; Lightning Conductors, Safety Fuses. 1.Preparation and Preparation and Properties of Hydrogen, Oxygen, Nitrogen and Carbon Dioxide, Oxidation and Reduction. 2.Acids, bases and salts 3.Carbon – Different Forms 4. Physical and Chemistry Chemical Changes 5. Fertilizers—Natural and Artificial 6. Elements 7. Material used in the preparation of substances like Soap, Glass, Ink, Paper, Cement, Paints, Safety Matches, and Gunpowder 8. Mixtures and Compounds 9. Elementary ideas about the structure of Atom 10.Symbols, Formulae, and simple ChemicalnEquation 11.Atomic Equivalent and Molecular Weights 12.Law of Chemical Combination (excluding problems) 13.Valency 14.Properties of Air and Water 1. The Earth, its shape and size 2. Ocean Currents and Tides Atmosphere and its composition 3. Latitudes and Longitudes 4. Temperature and Atmospheric Pressure, Planetary Winds, Cyclones, and Anticyclones; Humidity; Condensation and Precipitation 5. Concept of time 6. Types of Climate 7. International Date Line 8. Major Natural Regions of the World 9. Movements of Earth and Geography their effects 10. Regional Geography of India 11. Climate, Natural vegetation. Mineral and Power resources 12. Location and distribution of agricultural and Industrial activities 13. Origin of Earth. Rocks and their classification 14. Important Sea ports and main sea, land, and air routes of India 15. Weathering—Mechanical and Chemical, Earthquakes and Volcanoes 16. Main items of Imports and Exports of India 1. Forces shaping the modern world 2. Renaissance 3. Exploration and Discovery; 4. A broad survey of Indian History, with emphasis on Culture and Civilization 5. Freedom Movement in India 6. French Revolution, Industrial Revolution, and Russian Revolution 7. War of American Independence, 8. Impact of Science and Technology on Society 9. Elementary study of Indian History Constitution and Administration 10. Concept of one World 11. Elementary knowledge of Five-Year Plan of India 12. United Nations, 13. Panchsheel 14. Panchayati Raj, Democracy, Socialism and Communist 15. Role of India in the present world 16. Co-operatives and Community Development 17. Bhoodan, Sarvodaya, 18. National Integration and Welfare State 19. Basic Teachings of Mahatma Gandhi General 1.Common Epidemics, their causes, and prevention 2. Difference between the living and non-living 3. Food—Source of Energy for man 4. Basis of Life—Cells, Protoplasm, and Tissues 5. Science Constituents of food 6. Growth and Reproduction in Plants and Animals 7. Balanced Diet 8. Elementary knowledge of the Human Body and its important organs 9.The Solar System—Meteors and Comets, Eclipses. Achievements of Eminent Scientists The NDA exam is your chance to start a rewarding service career in the Indian Army. By understanding the two main parts of the NDA curriculum: Mathematics and Aptitude Test (GAT), you can create a study plan. Make sure you use materials such as textbooks, practice papers, and previous year’s exams to reinforce your understanding. Keep calm and believe in yourself; with dedication and hard work, you can clear the NDA exam and served in Indian armed forces.If you need further assistance we at Commandant Defence Academy provides the Best NDA coaching centre in Dehradun. , We help you prepare for the written test and the SSB interview, which also includes getting ready for the physical fitness part. NCERT Books: These books are highly recommended for building a strong foundation in core subjects like Mathematics, Physics, Chemistry, History, Geography, and Political Science. Reference Books and Coaching Materials: While not mandatory, these resources can supplement your learning by providing additional practice questions, explanations, and insights. Consult with experienced educators or coaching institutes to determine if such materials align with your learning style and needs. Mathematics:This section carries 300 marks and consists of 120 objective-type questions. General Ability Test (GAT):This section carries 600 marks and consists of 150 objective-type questions further divided into: 1. English:50 questions (200 marks) 2. General Knowledge:100 questions (400 marks) covering various topics like Physics, Chemistry, History, Geography, and Current Affairs. Here are some strategies to improve your problem-solving skills in Mathematics Master the fundamentals: Ensure a thorough understanding of basic mathematical concepts like arithmetic operations, algebra, geometry, trigonometry, and calculus. Practice regularly: Regularly solve problems from various topics to solidify your grasp of the underlying concepts. How can I stay updated on current affairs for the GAT section? Develop a News Consumption Habit: • Newspapers:Subscribe to a reputable newspaper (e.g., The Hindu, The Indian Express, Times of India) and dedicate time daily to reading national and international news. • News Websites:Utilize credible news websites (e.g. The Hindu, Times of India ). • Magazines and Journals: Consider subscribing to weekly or monthly magazines (e.g., India Today, Frontline, Yojana) that offer deeper analysis of current affairs and provide diverse viewpoints. • Government Websites: Visit official government websites (e.g., Press Information Bureau (PIB), Ministry of External Affairs(MEA)) to access official statements, reports, and press releases on various government initiatives and international relations. • Discussions and Debates: Participate in discussions and debates about current events with friends, family, or online communities. This can help you gain different perspectives, test your understanding, and solidify your knowledge. • Note-Taking and Summarization: Develop a habit of taking notes while reading or listening to news. Briefly summarize key points and important information to reinforce your learning and aid in Additional Tips: Prioritize Credibility: Always verify the source of information before accepting it as fact. Be wary of fake news and misleading information circulating online. Focus on Relevant Topics: While staying broadly informed, prioritize news pertaining to national security, defense, foreign policy, social issues, and economic developments, as these are more likely to be relevant to the NDA exam. Maintain a Balanced Approach: Don’t overload yourself with information. Aim for regular, focused news consumption to ensure effective learning and retention. Develop a News Consumption Habit: • Newspapers:Subscribe to a reputable newspaper (e.g., The Hindu, The Indian Express, Times of India) and dedicate time daily to reading national and international news. • News Websites:Utilize credible news websites (e.g. THE HINDU, TIMES OF INDIA ). • Magazines and Journals: Consider subscribing to weekly or monthly magazines (e.g., India Today, Frontline, Yojana) that offer deeper analysis of current affairs and provide diverse viewpoints. • Government Websites: Visit official government websites (e.g., Press Information Bureau (PIB), https://www.mea.gov.in/) to access official statements, reports, and press releases on various government initiatives and international relations. • Discussions and Debates: Participate in discussions and debates about current events with friends, family, or online communities. This can help you gain different perspectives, test your understanding, and solidify your knowledge. • Note-Taking and Summarization: Develop a habit of taking notes while reading or listening to news. Briefly summarize key points and important information to reinforce your learning and aid in Additional Tips: • Prioritize Credibility: Always verify the source of information before accepting it as fact. Be wary of fake news and misleading information circulating online. • Focus on Relevant Topics: While staying broadly informed, prioritize news pertaining to national security, defense, foreign policy, social issues, and economic developments, as these are more likely to be relevant to the NDA exam. • Maintain a Balanced Approach: Don’t overload yourself with information. Aim for regular, focused news consumption to ensure effective learning and retention. Cracking the NDA exam requires dedication, a strategic approach, and a strong foundation in various subjects. While coaching institutes can be a valuable resource, they are not absolutely necessary
{"url":"https://commandantdefenceacademy.com/nda-exam-syllabus-2024/","timestamp":"2024-11-06T02:13:31Z","content_type":"text/html","content_length":"203119","record_id":"<urn:uuid:f02f35b6-0e08-42a9-a4de-b058b4a2f725>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00495.warc.gz"}
How will Poison's Ratio of a material affect its Strength?How will Poison's Ratio of a material affect its Strength? Sorry, you do not have permission to ask a question, You must login to ask question. Become VIP Member Join for free or log in to continue reading... How will Poison’s Ratio of a material Affect its Strength? 1. Poison’s ratio is the ratio of lateral strain to longitudinal strain. It is the property of elasticity of a material. This means that, if a force is applied in a given direction, say along the axis of the member, then the poison’s ratio is the ratio of the strain in the direction perpendicular to the axis and the strain along the axis. The poison’s ratio of concrete is 0.1 to 0.2. Let’s take that it as 0.15. If a force is applied on a concrete specimen along its axis, then, for every 1 unit of deformation in the axis, 0.15 unit of deformation happens in the perpendicular direction. Poison’s ratio is a measure of the elastic property of a material. There isn’t any direct relation between the strength of the material and the poison’s ratio. 2. This answer was edited. Poisson’s Ratio: 1. The metal bar length increases in the direction of applied force when a tensile force is applied to it. The width of the same metal bar decreases in the direction perpendicular to the applied force. 2. The metal bar length increases in the direction of applied force when a tensile force is applied to it. The width of the same metal bar decreases in the direction perpendicular to the applied force. 3. The Poisson’s ratio indicated the relationship between change in length and width. 4. Poison’s ratio is to measure the elastic property of a material. Thank You. 3. Poisson’s ratio is defined as the ratio of the change in the width per unit width of a material, to the change in its length per unit length as a result of strain. Poisson ratio measures the deformation in the material in a direction perpendicular to the direction of the applied force. Mathematically, poissons ratio is equal to the negative of the ratio of lateral strain and longitudinal strain. Therefore, if the poisson’s ratio is greater than the strength is greater. 4. Poisson’s ratio is the negative of the ratio of lateral strain to axial strain (Negative because of the decrease in the lateral measurement) Hence higher the Poisson’s Ratio greater is its ability to withstand the load, hence greater is its strength. You must login to add an answer. Join for free or log in to continue reading...
{"url":"https://test.theconstructor.org/question/how-will-poisons-ratio-of-a-material-affect-its-strength/?show=votes","timestamp":"2024-11-04T02:27:52Z","content_type":"text/html","content_length":"201329","record_id":"<urn:uuid:08cd0912-e3b0-4c03-be75-25685358a286>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00590.warc.gz"}
MAE vs. RMSE: Which Metric Should You Use? | Online Tutorials Library List | Tutoraspire.com MAE vs. RMSE: Which Metric Should You Use? by Tutor Aspire Regression models are used to quantify the relationship between one or more predictor variables and a response variable. Whenever we fit a regression model, we want to understand how well the model is able to use the values of the predictor variables to predict the value of the response variable. Two metrics we often use to quantify how well a model fits a dataset are the mean absolute error (MAE) and the root mean squared error (RMSE), which are calculated as follows: MAE: A metric that tells us the mean absolute difference between the predicted values and the actual values in a dataset. The lower the MAE, the better a model fits a dataset. MAE = 1/n * Σ|y[i] – ŷ[i]| • Σ is a symbol that means “sum” • y[i] is the observed value for the i^th observation • ŷ[i] is the predicted value for the i^th observation • n is the sample size RMSE: A metric that tells us the square root of the average squared difference between the predicted values and the actual values in a dataset. The lower the RMSE, the better a model fits a dataset. It is calculated as: RMSE = √Σ(y[i] – ŷ[i])^2 / n • Σ is a symbol that means “sum” • ŷ[i] is the predicted value for the i^th observation • y[i] is the observed value for the i^th observation • n is the sample size Example: Calculating RMSE & MAE Suppose we use a regression model to predict the number of points that 10 players will score in a basketball game. The following table shows the predicted points from the model vs. the actual points the players scored: Using the MAE Calculator, we can calculate the MAE to be 3.2 This tells us that the mean absolute difference between the predicted values made by the model and the actual values is 3.2. Using the RMSE Calculator, we can calculate the RMSE to be 4. This tells us that the square root of the average squared differences between the predicted points scored and the actual points scored is 4. Notice that each metric gives us an idea of the typical difference between the predicted value made by the model and the actual value in the dataset, but the interpretation of each metric is slightly RMSE vs. MAE: Which Metric Should You Use? If you would like to give more weights to observations that are further from the mean (i.e. if being “off” by 20 is more than twice as bad as being off by 10″) then it’s better to use the RMSE to measure error because the RMSE is more sensitive to observations that are further from the mean. However, if being “off” by 20 is twice as bad as being off by 10 then it’s better to use the MAE. To illustrate this, suppose we have one player who is a clear outlier in their number of points scored: Using the online calculators mentioned earlier, we can calculate the MAE and RMSE to be: Notice that the RMSE increases much more than the MAE. This is because RMSE uses squared differences in its formula and the squared difference between the observed value of 76 and the predicted value of 22 is quite large. This causes the value for RMSE to increase significantly. In practice, we typically fit several regression models to a dataset and calculate just one of these metrics for each model. For example, we might fit three different regression models and calculate the RMSE for each model. We would then select the model with the lowest RMSE value as the “best” model because it is the one that makes predictions that are closest to the actual values from the dataset. In either case, just make sure to calculate the same metric for each model. For example, don’t calculate MAE for one model and RMSE for another model and then compare those two metrics. Additional Resources The following tutorials explain how to calculate MAE using different statistical software: How to Calculate Mean Absolute Error in Excel How to Calculate Mean Absolute Error in R How to Calculate Mean Absolute Error in Python The following tutorials explain how to calculate RMSE using different statistical software: How to Calculate Root Mean Square Error in Excel How to Calculate Root Mean Square Error in R How to Calculate Root Mean Square Error in Python Share 0 FacebookTwitterPinterestEmail previous post How to Perform One Sample & Two Sample Z-Tests in Excel You may also like
{"url":"https://tutoraspire.com/mae-vs-rmse/","timestamp":"2024-11-12T12:11:29Z","content_type":"text/html","content_length":"352939","record_id":"<urn:uuid:f68411df-28ed-44da-8879-b527c78db70a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00702.warc.gz"}
Algebra 1 Chapter 10 - Radical Expressions and Equations - 10-3 Operations with Radial Expressions - Practice and Problem-Solving Exercises - Page 616 36 Work Step by Step In the Golden Rectangle Ratio, length to width: $(1+\sqrt{5}):2$. Length is 8, and let $w$ be the width. We set up the following equation: $\frac{w}{8}=\frac{2}{1+\sqrt{5}}$. $w=\frac{16}{1+\sqrt{5}} $ We now multiply both sides by the conjugate of the denominator.$\frac{16}{1+\sqrt{5}}\cdot \frac{1-\sqrt{5}}{1-\sqrt{5}}=\frac{16(1-\sqrt{5})}{1-5}=\frac{16(1-\sqrt{5})}{-4}=4(\sqrt{5}-1) \approx 4.9$ inches
{"url":"https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-10-radical-expressions-and-equations-10-3-operations-with-radial-expressions-practice-and-problem-solving-exercises-page-616/36","timestamp":"2024-11-05T22:46:59Z","content_type":"text/html","content_length":"87669","record_id":"<urn:uuid:902bab19-bdac-4da6-945d-3ad60272cb0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00449.warc.gz"}
GCSE AQA Physics Exam- 11th of June- Super Hard Questions. Okay, let's face it- we're going to get into that exam and we're going to come across that one (or likely a few) question(s) that seem(s) impossible to complete. To avoid this, maybe it's a good idea to post questions that you deem quite hard to answer, and we all have a go at answering them. These questions can be from sheets that you've been given or even from the january paper (the paper that some of us still don't have). It would be even better if people could think up some of these questions (remember, they may be calculations or theory based). I'll start us off: What precautions can be taken to avoid damaging computers with static electricity? Highlight the following for answers (please tell me how to use spoiler boxes!): Grounding the computer system and ensuring that all tools are rubber tipped (straight from the teachers mouth, they A 4 kilogram block of ice is removed from a freezer where its temperature was maintained at &#8211; 20 degrees Celsius. How much heat does the ice absorb as it is warmed to &#8211; 10 degrees? (The specific heat capacity of ice is 2,000 joules per kilogram degree Celsius.)
{"url":"https://www.thestudentroom.co.uk/showthread.php?t=599839","timestamp":"2024-11-04T17:47:56Z","content_type":"text/html","content_length":"473030","record_id":"<urn:uuid:679ee0e4-aa2c-4b6c-8b6d-7ac5de661bc1>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00167.warc.gz"}
What are various equity valuation methods? Financial Management What are various equity valuation methods? The term equity valuation refers to the process of determining the fair market value of equity securities. In simpler terms, it’s figuring out what a company is truly worth to investors. There’s no single, perfect way to value a company. Different analysts use various methods and assumptions, leading to a range of possible values. In this blog post, we’ll shed light on the most common equity valuation methods, providing you with a clear understanding of how companies are assessed. It is a method or a technique for determining the fair market value of equities by utilizing the balance sheet information. Under this method, we calculate book value, Liquidation value or replacement cost. The formulas are given below to calculate book value, liquidation value or replacement cost to ascertain the value of equity. BOOK VALUE: – Book value means the net worth of the company. Book value as per the balance sheet is considered the value of equity. We can calculate the net worth of the company is as below:- Net worth =Equity Share capital + Preference Share Capital + Reserves & Surplus – Miscellaneous Expenditure (as per B/Sheet) – Accumulated Losses. LIQUIDATION VALUE: The liquidation value method is one of the techniques under the balance sheet method to calculate the equity value. Liquidation value is the value that is realized if the firm is liquidated today and that value is considered the value of equity. Liquidation Value = Net Realizable Value of All Assets – Amounts paid to All Creditors including Preference Shareholders. Replacement cost method: – This method is also known as Tobin’s Q because it was developed by James Tobin. It is quantified as Q-Ratio. James Tobin hypothesis is that the total value of the firm is equal to the replacement value of their assets minus liabilities. Tobin’ Q-Ratio formula: Q ratio= Market price of firm/ Replacement cost. 2. Dividend Discount Cash Flow Method The formulas of Models to calculate the dividend are given below- General Model:- [ ]V[0] =∑ D[t/]/(1+k)^t V[0] = Value of Stock D[t] = Dividend k = required return No growth model V[0 = ]D/ K Example: – E1 = D1 = $5.00 k = .15 V0 = $5.00 / .15 = $33.33 Constant Growth Model D[0 ](1+g)/ k-g • g= constant perpetual growth rate E1 = $5.00 b = 40% k = 15% (1-b) = 60% D1 = $3.00 g = 8% V0 = 3.00 / (.15 – .08) = $42.8 Multistage Growth Models P = D[0 ]∑ (1+g[1])/(1+k) +DT(1=g2)/ (k-g[2])(1+k)^t • g1 = first growth rate • g2 = second growth rate • T = number of periods of growth at g1 Example: – D0 = $2.00 g1 = 20% g2 = 5% k = 15% T = 3 D1 = 2.40 D2 = 2.88 D3 = 3.46 D4 = 3.63 V0 = D1 / (1.15) + D2 /(1.15)2 + D3 /(1.15)3 + D4 / (.15 – .05) ( (1.15)3 V0 = 2.09 + 2.18 + 2.27 + 23.86 = $30.40 3. Relative Value Method The relative value method is also known as earnings multiples or comparable method because they use competitors values to derive the value of equity. Under this method, we need to calculate the following ratios to ascertain the equity value of the company. • Price to Earnings ratio = Market price per share/ Earnings per share • Price to Book Value Ratios= Stock Price / Book Value per share Price to Sales Ratio= Price Per Share / Annual Net Sales Per Share Equity valuation methods provide investors with a framework to assess a company’s intrinsic value, but it’s important to remember there’s no one-size-fits-all approach. The most appropriate method depends on the company’s specific characteristics and industry.
{"url":"https://fundamentalsofaccounting.org/what-are-various-equity-valuation-methods/","timestamp":"2024-11-13T22:41:58Z","content_type":"text/html","content_length":"93458","record_id":"<urn:uuid:f1823e26-ebfd-4dbb-851a-d7a2049c5ca4>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00504.warc.gz"}
Fock-state bosonic code Qudit-into-oscillator code whose protection against noise (i.e., photon loss) stems from the use of disjoint sets of Fock states for the construction of each code basis state. The simplest example is the dual-rail code, which has codewords consisting of single Fock states . This code can detect a single loss error since a loss operator in either mode maps one of the codewords to a different Fock state . More involved codewords consist of several well-separated Fock states such that multiple loss events can be detected and corrected. Code distance is the minimum distance (assuming some metric) between any two labels of Fock states corresponding to different code basis states. For a single mode, is the minimum absolute value of the difference between any two Fock-state labels; such codes can detect up to loss events. Multimode distances can be defined analogously; see, e.g., Chuang-Leung-Yamamoto codes . There are tradeoffs in how well a Fock-state code protects against loss/gain errors and dephasing noise • Binary code — Fock-state code distance is a natural extension of Hamming distance between binary strings. Y. Ouyang and E. T. Campbell, “Trade-Offs on Number and Phase Shift Resilience in Bosonic Quantum Codes”, IEEE Transactions on Information Theory 67, 6644 (2021) arXiv:2008.12576 DOI Page edit log Cite as: “Fock-state bosonic code”, The Error Correction Zoo (V. V. Albert & P. Faist, eds.), 2021. https://errorcorrectionzoo.org/c/fock_state Github: https://github.com/errorcorrectionzoo/eczoo_data/edit/main/codes/quantum/oscillators/fock_state/fock_state.yml.
{"url":"https://errorcorrectionzoo.org/c/fock_state","timestamp":"2024-11-11T08:26:52Z","content_type":"text/html","content_length":"18045","record_id":"<urn:uuid:2c401ed5-bfb4-43b0-a94e-04e43bfe4296>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00688.warc.gz"}
Can you solve the frog riddle? - Kidpid Can you solve the frog riddle? The Problem let’s say that you have ingested a poisonous plant and the only way to save your self is to ingest some element that is found in a female frog. Now the female and male frog look exactly alike and the only difference is that the male frog croaks little differently. Now that you are running out of time you spot a frog to your right and you have been slightly relieved and start running towards that frog on the right. Just while you start running, you hear a croak from your left and you spot two frogs there. Now in this situation you have two options: CASE A – go to right where there is one frog, or CASE B – go to left, where there are two frogs. If your answer is B, then you are right. It comes down to correctly calculating the odds one has. But hoe does one reach this conclusion and what is it that makes for a wrong decision? Common Ways Of Wrongly Solving The Problem Wrong Answer One assuming that there are same number of male and females frogs present, the probability that you pick a frog and It turns out to be of either sex is one in two. Which makes it a 0.5% chance and which means that it is 50-50. This would work in Case A (one frog, right side), but not in Case B (two frogs to your left). Wrong Answer Two In Case B, you figured that one of the frogs is male through its croak. But what is the probability that both of them are male? We established that the probability of the frogs from either of the genders is 0.5%, then the two together would be 0.25% (one in four or 25%), which leaves a 75% chance of getting a female frog. Right Answer Conditional Probality If one goes ahead with CASE B, they have a two in three chance of surviving which constitutes to 67%. In Case B, there could be many a combination in terms of male and female in which the frogs could be present in a pair. This set of combination make for a – Sample Space. These combinations of the Sample Space are as follows: 1. Male, Female. 2. Male, Male. 3. Female, Female. 4. Female, Male. Out of these stated combinations, we can see that only one of them gives us two males. Which makes us question that why was the – Wrong Answer Two Assumption wrong about 75% chance? Here we have forgotten about the croak that is made by one of the frogs. This is give away that one the frogs on our left is a male, which means that it cannot a pair of female frogs, hence eliminating that probability. This ultimately leave us with three combinations in the sample space. Now, even out of the three left, we have one combination that has a chance of getting two males, which constitutes to 67% of the chance of getting a female. Therefore, conditional probability takes in note a large sample space which looks at every possibility, in that sample. And with the information that is additional to the sample, helps eliminate the possibilities. In return increasing the chance of getting the right answer and eliminating errors Read More You must be logged in to post a comment. GK Quiz for Class 1 Q: How many months are there in a year? (A) 9 (B) 10 (C) 11 (D) 12 Q: How many days are there in… Colorful Rainbow Crafts for Kids Craftwork is not limited to kids, in fact, it’s not confined to any age limit. By doing craftwork our creativity increases. We are in school,… GK Quiz for Class 3 Q: A polygon with 5 sides is called a _________. (A) Nonagon (B) Octagon (C) Hexagon (D) Pentagon Q: What is the young one… Easy Butterfly Crafts for Kids – Catch the Flying Butterfly I am super excited for the spring and all the fun that it brings: playing in the park, jumping in the mud puddles, riding bikes,… Easy Apple Craft Ideas For Kids We all know about apples, it’s a fruit which we all have eaten once or more than a thousand times. It has so many benefits…
{"url":"https://www.kidpid.com/can-you-solve-the-frog-riddle/","timestamp":"2024-11-14T07:05:00Z","content_type":"text/html","content_length":"125738","record_id":"<urn:uuid:8830186b-60cc-4ad8-ad0b-7c91819232a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00671.warc.gz"}
Understanding Mathematical Functions: What Are Zeros Of A Function When it comes to mathematics, understanding mathematical functions is crucial for grasping the concepts and principles of the subject. One important aspect of functions is their zeros, or the values of the independent variable that make the function equal to zero. In this blog post, we will delve into the significance of understanding zeros of a function and how they play a vital role in solving equations and analyzing the behavior of functions. Key Takeaways • Understanding zeros of a function is crucial for grasping the concepts and principles of mathematics. • Zeros of a function are the values of the independent variable that make the function equal to zero. • Zeros play a vital role in solving equations and analyzing the behavior of functions. • There are different types of zeros, including real zeros, complex zeros, and multiple zeros. • Zeros of a function have practical applications in engineering, finance, and real-world problem-solving. Understanding Mathematical Functions: What are Zeros of a Function In mathematics, a function is a relation between a set of inputs and a set of possible outputs, where each input is related to exactly one output. Functions are a fundamental concept in mathematics and are used to describe various real-world phenomena and relationships. A. Definition of a Function A mathematical function is a rule that assigns each input exactly one output. It can be represented by an equation, a graph, or a table of values. B. Examples of Common Mathematical Functions 1. Linear Function: A linear function is a function that can be graphically represented as a straight line. It has the form f(x) = ax + b, where a and b are constants. 2. Quadratic Function: A quadratic function is a function that can be graphically represented as a parabola. It has the form f(x) = ax^2 + bx + c, where a, b, and c are constants. 3. Exponential Function: An exponential function is a function in which the variable appears in the exponent. It has the form f(x) = a^x, where a is a constant. 4. Trigonometric Function: Trigonometric functions such as sine, cosine, and tangent are used to model periodic phenomena and oscillatory behavior. What are Zeros of a Function The zeros of a function, also known as roots, are the values of the input that make the output equal to zero. In other words, the zeros of a function are the solutions to the equation f(x) = 0. • Example: Consider the function f(x) = x^2 - 4. The zeros of this function can be found by setting f(x) equal to zero and solving for x. In this case, the solutions are x = 2 and x = -2. Understanding the zeros of a function is important in many areas of mathematics and science, as they provide valuable information about the behavior and properties of the function. Understanding zeros of a function Mathematical functions play a crucial role in various fields such as physics, engineering, and economics. Understanding the concept of zeros of a function is essential for solving equations and analyzing the behavior of functions. A. Definition of zeros of a function Zeros of a function refer to the values of the independent variable that make the function equal to zero. In other words, the zeros of a function are the points where the graph of the function intersects the x-axis. B. How to find zeros of a function • One way to find the zeros of a function is by setting the function equal to zero and solving for the independent variable. For example, if the function is f(x) = x^2 - 4, the zeros can be found by setting x^2 - 4 = 0 and solving for x. • Another method to find zeros is using graphical methods such as plotting the graph of the function and identifying the points where the graph crosses the x-axis. • In some cases, zeros can also be found using numerical methods such as the bisection method or Newton's method. C. Importance of zeros in mathematical analysis Zeros of a function hold significant importance in mathematical analysis for several reasons. Firstly, they provide insights into the behavior and characteristics of the function. The number and nature of zeros can indicate the function's properties such as roots, extrema, and the behavior of the function at different intervals. Moreover, the zeros of a function play a crucial role in solving equations and systems of equations. By finding the zeros of a function, one can determine the solutions to equations and establish relationships between different variables. Furthermore, zeros of a function are fundamental in calculus for finding the integration and differentiation of functions. They are also essential in the study of complex analysis and the behavior of complex functions. Different types of zeros Mathematical functions can have different types of zeros, which are the values of the variable that make the function equal to zero. These zeros can be categorized into different types based on their nature and characteristics. • Real zeros Real zeros of a function are the values of the variable for which the function equals zero and are real numbers. These zeros are often found on the x-axis of the graph of the function and represent the points where the graph intersects the x-axis. • Complex zeros Complex zeros of a function are the values of the variable for which the function equals zero and are complex numbers. Complex zeros often come in conjugate pairs, and they are not found on the real number line. Instead, they exist in the complex plane. • Multiple zeros Multiple zeros of a function occur when a particular value of the variable causes the function to equal zero more than once. This means that the graph of the function touches or crosses the x-axis at the same point multiple times. These multiple zeros can have different behaviors and implications for the function, depending on their multiplicities. Application of zeros in real-world problems Mathematical functions play a crucial role in various real-world problems, and understanding the zeros of a function is essential for solving these problems. Examples of how zeros of a function are used in engineering • Structural engineering: Engineers use the zeros of a function to analyze and design complex structures such as bridges and buildings. By finding the zeros of a function representing the forces acting on the structure, engineers can determine the points where the forces are balanced, which is crucial for ensuring the stability and safety of the structure. • Electrical engineering: Zeros of a function are used to analyze and design electrical circuits. Engineers use the zeros to determine the points where the voltage or current is zero, which helps in optimizing the performance and efficiency of the circuits. Examples of how zeros of a function are used in finance • Financial modeling: In finance, the zeros of a function are used to analyze and predict the behavior of financial assets such as stocks and bonds. By finding the zeros of a function representing the price or value of an asset, financial analysts can identify the points where the asset's value is zero, which is essential for making investment decisions. • Risk management: Zeros of a function are also used in finance to assess and manage risk. Financial institutions use the zeros to identify the points where the risk is minimized or mitigated, which is crucial for maintaining financial stability and minimizing potential losses. Common misconceptions about zeros of a function Understanding the concept of zeros of a function is crucial in mathematics, but it is also an area that is prone to misconceptions. Here are some common misconceptions about zeros of a function: A. Misunderstanding the concept of zero One of the most common misconceptions about zeros of a function is a misunderstanding of the concept of zero itself. Some individuals may confuse the concept of zero with the absence of value, rather than understanding it as the value where the function equals zero. B. Confusing zeros with critical points Another misconception is the confusion between zeros and critical points of a function. While critical points are where the derivative of the function is zero, zeros are the values where the function itself equals zero. It is important to differentiate between these two concepts to have a clear understanding of the behavior of a function. C. Not recognizing the significance of zeros in a function Some individuals may not fully grasp the significance of zeros in a function. Zeros play a crucial role in determining the roots of a function, which in turn helps in solving equations and understanding the behavior of the function. Failing to recognize the importance of zeros can lead to a limited understanding of the function as a whole. Recap: Understanding the zeros of a function is crucial in mathematics as it helps us find the roots of equations and understand the behavior of the function. It allows us to solve real-world problems and make predictions based on the data. Exploration: I encourage you to further explore the topic of mathematical functions and zeros for a better understanding of how they are used in various fields such as engineering, economics, and science. Delving deeper into this topic will not only enhance your mathematical skills but also open up new opportunities for problem-solving and critical thinking. ONLY $99 Immediate Download MAC & PC Compatible Free Email Support
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-what-are-zeros-of-a-function","timestamp":"2024-11-09T03:11:08Z","content_type":"text/html","content_length":"212427","record_id":"<urn:uuid:36ede0ca-06fe-4fe4-b9e9-517929f29cf1>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00829.warc.gz"}
Yang, Di (杨迪) Full Professor, USTC School of Mathematical Sciences University of Science and Technology of China Hefei 230026, P.R. China Office: 1627 Tel.: + 86 - 0551-63603909 E-mail: diyang@ustc.edu.cn August 2004 -- July 2008, B.Sc., Tsinghua University, Beijing August 2008 -- July 2013, Ph.D., Tsinghua University, Beijing (Oct. 2009 -- Oct. 2010 in visiting University of Cambridge, and Sep. 2012 -- Dec. 2012 in visiting SISSA) September 2013 -- September 2016, post-doc, SISSA, Trieste September 2016 -- August 2018, post-doc, MPIM, Bonn September 2018 -- August 2023, professor under a China innovation talent program for young people, USTC, Hefei September 2023 -- , full professor, USTC, Hefei Research Interest • Frobenius manifolds, moduli space of curves, and Gromov--Witten type invariants • hodographs for PDEs of hydrodynamic type, and the Dubrovin--Zhang normal forms • geometry and arithmetic of integrable systems • applications of Lie algebras and vertex algebras • quantum integrable systems and modular forms Selected Publications (including arXiv preprints) 47. (with C. Zhang and Z. Zhou) On an infinite commuting ODE system associated to a simple Lie algebra. arXiv:2404.16458 46. (with J. Xu) Galilean symmetry of the KdV hierarchy. arXiv:2403.04631 45. (with D. Valeri) Remarks on intersection numbers and integrable hierarchies. II. Tau-structure. arXiv:2312.16575 44. (with J. Zhou) From Toda hierarchy to KP hierarchy. arXiv:2311.06506 43. Analytic theory of Legendre-type transformations for a Frobenius manifold. to appear in Communications in Mathematical Physics. arXiv:2311.04200 42. (with D. Zagier) Mapping partition functions. arXiv:2308.03568 41. (with A. Fu and D. Zuo) The constrained KP hierarchy and the bigraded Toda hierarchy of (M, 1)-type. Lett. Math. Phys., 113 (2023), Paper No. 124, 44 pp. (LMP) arXiv:2306.09115 40. GUE Via Frobenius manifolds. II. Loop equations. arXiv:2407.19170 39. (with Q. Zhang) On a new proof of the Okuyama--Sakai conjecture. Rev. Math. Phys., 35 (2023), Paper No. 2350025. (RMP) arXiv:2303.09243 38. (with A. Fu and M. Li) From wave functions to tau-functions for the Volterra lattice hierarchy. Acta Math. Sci. Ser. B (Engl. Ed.), 44 (2024), 405-–419. (AMScientia) 37. Degree zero Gromov--Witten invariants for smooth curves. Bull. Lond. Math. Soc., 56 (2024), 96-110. (BLMS) arXiv:2208.02095 36. GUE via Frobenius Manifolds. I. From Matrix Gravity to Topological Gravity and Back. Acta Mathematica Sinica (English Series), 40 (2024), 383–-405. (AMSinica) arXiv:2205.01618 35. (with J. Zhou) Grothendieck's Dessins d'Enfants in a Web of Dualities. III. Journal of Physics A: Mathematical and Theoretical, 56 (2023), Paper No. 055201. (JPA) arXiv:2204.11074 34. (with S.-Q. Liu, Y. Zhang and C. Zhou) On Equivariant Gromov--Witten Invariants of Resolved Conifold with Diagonal and Anti-Diagonal Actions. Letters in Mathematical Physics, 112 (2022), Paper No. 129. (LMP) arXiv:2203.16812 33. (with A. Fu) The matrix-resolvent method to tau-functions for the nonlinear Schrödinger hierarchy. Journal of Geometry and Physics, 179 (2022), Paper No. 104592. (JGP) arXiv:2201.11020 32. (with C. Zhou) Gelfand--Dickey hierarchy, generalized BGW tau-function, and W-constraints. Nonlinearity, 36 (2023), 1873--1889. (NL) arXiv:2112.14595 31. (with M. Cafasso) Tau-functions for the Ablowitz--Ladik hierarchy: the matrix-resolvent method. Journal of Physics A: Mathematical and Theoretical, 55 (2022), Paper No. 204001. (JPA) 30. (with Q. Zhang) On the Hodge-BGW correspondence. Communications in Number Theory and Physics, 18 (2024), 611--651. (CNTP) arXiv:2112.12736 29. (with S.-Q. Liu, Y. Zhang and J. Zhou) The Virasoro-like Algebra of a Frobenius Manifold. IMRN, 2023, 13524--13561. (IMRN) arXiv:2112.07526 28. (with J. Guo) On the large genus asymptotics of psi-class intersection numbers. Math. Ann., 388 (2024), 61–97. (MA) arXiv:2110.06774 27. (with B. Dubrovin and D. Valeri) Affine Kac--Moody algebras and tau-functions for the Drinfeld--Sokolov hierarchies: The matrix-resolvent method. SIGMA 18 (2022), 077, 32 pages. (SIGMA) 26. (with G. Ruzza) On the spectral problem of the quantum KdV hierarchy. Journal of Physics A: Mathematical and Theoretical, 54 (2021), Paper No. 374001. (JPA) arXiv:2104.01480 25. (with B. Dubrovin and D. Zagier) Geometry and arithmetic of integrable hierarchies of KdV type. I. Integrality. Advances in Mathematics, 433 (2023), paper no. 109311. (AM) arXiv:2101.10924 24. (with C. Zhou) On an extension of the generalized BGW tau-function. Letters in Mathematical Physics, 111 (2021), paper no. 123. (LMP) arXiv:2010.00436 23. (with D. Zagier and Y. Zhang) Masur--Veech volumes of quadratic differentials and their asymptotics. Journal of Geometry and Physics, 158 (2020), paper no. 103870. (JGP) arXiv:2005.02275 22. Hamiltonian perturbations at the second order approximation. Annales Henri Poincar\'e, 21 (2020), 3919-–3937. (AHP) arXiv:2002.00823 21. (with S.-Q. Liu, Y. Zhang and C. Zhou) The Hodge-FVH Correspondence. Journal für die reine und angewandte Mathematik, 775 (2021), 259--300. (Crelle) arXiv:1906.06860 20. On tau-functions for the Toda lattice hierarchy. Letters in Mathematical Physics, 110 (2020), 555--583. (LMP) arXiv:1905.08140 19. (with B. Dubrovin) Remarks on intersection numbers and integrable hierarchies. I. Quasi-triviality. Adv. Theo. Math. Phys., 24 (2020), 1055--1085. (ATMP) arXiv:1905.08106 18. (with B. Dubrovin) Matrix resolvents and the discrete KdV hierarchy. Comm. Math. Phys., 377 (2020), 1823–-1852. (CMP) arXiv:1903.11578 17. (with B. Dubrovin and D. Zagier) On tau-functions for the KdV hierarchy. Selecta Mathematica, 27 (2021), paper number 12. (SM) arXiv:1812.08488 16. (with S.-Q. Liu, Y. Zhang and C. Zhou) The Loop Equation for Special Cubic Hodge Integrals. Journal of Differential Geometry, 121 (2022), 341–-368. (JDG) arXiv:1811.10234 15. (with B. Dubrovin and D. Zagier) Gromov--Witten invariants of the Riemann sphere. Pure and Applied Mathematics Quarterly, 16 (2020), 153--190. (PAMQ) arXiv:1802.00711 14. (with M. Cafasso and A. du C. de Villeneuve) Drinfeld-Sokolov hierarchies, tau functions, and generalized Schur polynomials. SIGMA, 14 (2018), paper no. 104. (SIGMA) arXiv:1709.07309 13. (with B. Dubrovin and D. Zagier) Classical Hurwitz number and related combinatorics. Moscow Mathematical Journal, 17 (2017), 601--633. (MMJ) 12. (with B. Dubrovin) On Gromov--Witten invariants of P^1. Mathematical Research Letters, 2019, 26, 729--748. (MRL) arXiv:1702.01669 11. (with B. Dubrovin, S.-Q. Liu and Y. Zhang) Hodge-GUE correspondence and the discrete KdV equation. Comm. Math. Phys., 379 (2020), 461--490. (CMP) arXiv:1612.02333 10. (with M. Bertola and B. Dubrovin) Simple Lie algebras, Drinfeld--Sokolov hierarchies, and multi-point correlation functions. Mosc. Math. J., 21 (2021), 233--270. (MMJ) arXiv:1610.07534 9. (with B. Dubrovin) On cubic Hodge integrals and random matrices. Communications in Number Theory and Physics, 2017, 11, 311--336. (CNTP) arXiv:1606.03720 8. (with B. Dubrovin) Generating series for GUE correlators. Letters in Mathematical Physics, 2017, 107, 1971--2012. (LMP) arXiv:1604.07628 7. (with M. Bertola and B. Dubrovin) Simple Lie algebras and topological ODEs. IMRN, 2018, 1368--1410. (IMRN) arXiv:1508.03750 6. (with M. Bertola and B. Dubrovin) Correlation functions of the KdV hierarchy and applications to intersection numbers over \M_{g,n}. Physica D: Nonlinear Phenomena, 327 (2016), 30--57. (PDNP) 5. (with F. Balogh) Geometric interpretation of Zhou's explicit formula for the Witten--Kontsevich tau function. Letters in Mathematical Physics, 2017, 107, 1837--1857. (LMP) arXiv:1412.4419 4. (with M. Bertola) The partition function of the extended r-reduced Kadomtsev--Petviashvili hierarchy. Journal of Physics A: Mathematical and Theoretical, 48 (2015), paper no. 195205. (JPA) 3. (with B. Dubrovin, S.-Q. Liu and Y. Zhang) Hodge integrals and tau-symmetric integrable hierarchies of Hamiltonian evolutionary PDEs. Advances in Mathematics, 293 (2016), 382--435. (AM) 2. (with S.-Q. Liu and Y. Zhang) Uniqueness Theorem of W-Constraints for Simple Singularities. Letters in Mathematical Physics, 103 (2013), 1329--1345. (LMP) arXiv:1305.2593 1. (with A.S. Fokas) On a novel class of integrable ODEs related to the Painlev\'e equations. International Journal of Bifurcation and Chaos, 22 (2012), paper no. 1250211. (IJBC) arXiv:1009.5125 • Journal of Nonlinear Mathematical Physics Selected Invited Talks International Congress of Chinese Mathematicians, July 31--August 5, 2022, Nanjing, China Integrable Systems in Geometry and Mathematical Physics, Conference in Memory of Boris Dubrovin, SISSA, June 26--July 2, 2021, Trieste, Italy (virtual) Conference Integrability and Randomness in Geometry and Mathematical Physics, CIRM, April 8--12, 2019, Luminy, France Annual Meeting of Chinese Mathematical Society, October 19--21, 2018, Guiyang, China 90's Anniversary of Tsinghua Mathematics, Tsinghua University, April 21--24, 2017, Beijing, China Moduli Spaces in Algebraic Geometry and Mathematical Physics, Chern Institute, September 18--22, 2015, Tianjin, China 2024 春, Linear Algebra (线性代数), Tuesday 1-2 and Thursday 3-4, USTC. 2023 秋, Topics in Integrable Systems (可积系统选讲), USTC. 2023 春, Complex Analysis (复分析), USTC. 2022 秋, Differential Geometry (微分几何), USTC 2022 春, Theory of Integrable Systems (可积系统理论), USTC 2021 秋, Differential Geometry (微分几何), USTC 2021 春, Complex Analysis (复分析), USTC 2020 秋, Theory of Integrable Systems (可积系统理论), USTC 2020 春, Complex Analysis (复分析), USTC 2019 秋, Differential Geometry (微分几何), USTC 2019 春, Linear Algebra (线性代数), USTC 2018 秋, Modern Mathematical Physics Problems (现代数学物理问题), USTC 2018 Spring, Introduction to Integrable Systems, MPIM 2017 Spring, Frobenius Manifolds, MPIM 2016 Spring, Introduction to Frobenius Manifolds, SISSA Other courses: 华罗庚讨论班 (2021-2022, USTC), 科学与社会研讨课 (2019 - 2020, 2020 - 2021, 2023 - 2024, 2024 - , USTC), 纯粹数学前沿 (2020 summer, USTC) Scientific Visits Tsinghua University, Beijing, July 2013 - August 2013 Concordia University, Montreal, October 2014 - November 2014 MPIM, Bonn, July 2019 - August 2019
{"url":"http://staff.ustc.edu.cn/~diyang/","timestamp":"2024-11-12T06:47:59Z","content_type":"text/html","content_length":"23665","record_id":"<urn:uuid:5a03b0e8-f695-4ef4-82d6-b7a4dc51db3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00525.warc.gz"}
CBSE Previous Year Question Papers Class 10 Maths SA2 Delhi - 2011 CBSE Previous Year Question Papers Class 10 Maths SA2 Delhi – 2011 Time allowed: 3 hours Maximum marks: 90 1. All questions are compulsory. 2. The Question Taper consists of 31 questions divided into four Sections A, B. C. and D. 3. Section A contains 4 questions of 1 mark each. Section B contains 6 questions of 2 marks each, Section C contains 10 questions of 3 marks each and Section D contains 11 questions of 4 marks each. 4. Use of calculators is not permitted. Questions number 1 to 4 carry 1 mark each. Question.1 In Fig. 1, O is the centre of a circle, AB is a chord and AT is the tangent at A. If ∠AOB = 100°, then calculate ∠BAT. CBSE Sample Papers Class 10 Maths Question.2. In Fig. 2, PA and PB are tangents to the circle with centre O. If ∠APB = 60°, then calculate ∠OAB. Question.3. A sphere of diameter 18 cm is dropped into a cylindrical vessel of diameter 36 cm, partly filled with water. If the sphere is completely submerged, then calculate the rise of water level (in cm). Question.4. In which quadrant the point P that divides the line segment joining the points A(2, – 5) and B(5, 2) in the ratio 2 : 3 lies? Questions number 5 to 10 carry 2 marks each. Question.5. The angles of a triangle are in A.P., the least being half the greatest. Find the angles. Question.6. Three vertices of a parallelogram taken in order are (- 1, 0), (3, 1) and (2, 2) respectively. Find the coordinates of fourth vertex. Question.7. Find the value of p so that the quadratic equation px(x – 3) + 9 = 0 has two equal roots. Question.8. Find whether – 150 is a term of the A.P. 17,12, 7, 2, … ? Question.9. Two concentric circles are of radii 7 cm and r cm respectively, where r >7. A chord of the larger circle, of length 48 cm, touches the smaller circle. Find the value of r. Question.10. Draw a line segment of length 6 cm. Using compasses and ruler, find a point P on it which divides it in the ratio 3 : 4. Questions number 11 to 20 carry 3 marks each. Question.11 In Fig. 3, APB and CQD are semi-circles of diameter 7 cm each, while ARC and BSD are semi-circles of diameter 14 cm each. Find the perimeter of the shaded region. [Use π = 22/7 ] Find the area of a quadrant of a circle, where the circumference of circle is 44 cm. [Use π = 22/7 ] Question.12 Two cubes, each of side 4 cm are joined end to end. Find the surface area of the resulting cuboid. Question.13 Find that value(s) of x for which the distance between the points P(x, 4) and Q(9,10) is 10 units. Question.14 A coin is tossed two times. Find the probability of getting at least one head. Question.15 Find the roots of the following quadratic equation: 2 √3 x^2 – 5x + -√3 = 0 Question.16 Find the value of the middle term of the following A.P.: – 6, – 2, 2, …, 58. Determine the A.P. whose fourth term is 18 and the difference of the ninth term from the fifteenth term is 30. Question.17 In Fig. 4, a triangle ABC is drawn to circumscribe a circle of radius 2 cm such that the segments BD and DC into which BC is divided by the point of contact D are of lengths 4 cm and 3 cm respectively. If area of ΔABC = 21 cm^2, then find the lengths of sides AB and AC. Question.18 Draw a triangle ABC in which AB = 5 cm, BC = 6 cm and ∠ABC = 60°. Then construct a triangle whose sides are y times the corresponding sides of ΔABC. Question.19 Find the area of the major segment APB, in Fig. 5, of a circle of radius 35 cm and ∠AOB = 90°. [Use π = 22/7 ] Question.20 The radii of the circular ends of a bucket of height 15 cm are 14 cm and r cm (r< 14 cm) the volume of bucket is 5390 cm3, then find the value of r. [Use π= 22/7 ] Questions number 21 to 31 carry 4 marks each. Question.21 A survey has been done on 100 people out of which 20 use bicycles, 50 use motorbikes and 30 use cars to travel from one place to another. Find the probability of persons who use bicycles, motorbikes and cars respectively? Which mode of transport do you think is better and why? Question.22 A game consists of tossing a coin 3 times and noting its outcome each time. Hanif wins if he gets three heads or three tails, and loses otherwise. Calculate the probability that Hanif will lose the game. From the top of a tower 100 m high, a man observes two cars on the opposite sides of the tower with angles of depression 30° and 45° respectively. Find the distance between the cars. [Use √3 = 1.73] The possible out comes on tossing a coin 3 times are, Question.23 If (3, 3), (6, y), (x, 7) and (5, 6) are the vertices of a parallelogram taken in order, find the values of x and y. Question.24 If two vertices of an’equilateral triangle are (3, 0) and (6, 0), find the third vertex. Find the value of k, if the points P(5, 4), Q(7, k) and R(9, – 2) are collinear. Question.25 A motor boat whose speed is 20 km/h in still water, takes 1 hour more to go 48 km upstream than to return downstream to the same spot. Find the speed of the stream. Find the roots of the equation 1/x+4- 1/x-7 = 11/30,x ≠ -4,7 Question.26 If the sum of first 4 terms of an A.P. is 40 and that of first 14 terms is 280, find the sum of its first n terms. Find the sum of the first 30 positive integers divisible by 6. Question.27 Prove that the lengths of tangents drawn from an external point to a circle are equal Question.28 In Fig. 6, arcs are drawn by taking vertices A, B and C of an equilateral triangle ABC of side 14 cm as centres to intersect the sides BC, CA and AB at their respective mid-points D, E and F. Find the area of the shaded region. [Use π = 22/7 and√3 =1.73] Question.29 From a solid cylinder whose height is 15 cm and diameter 16 cm, a conical cavity of the same height and same diameter is hollowed out. Find the total surface area of the remaining solid. [Take π = 3.14] : Question.30 Two poles of equal heights are standing opposite to each other on either side of the road, which is 100 m wide. From a point between them on the road, the angles of elevation of the top of the poles are 60° and 30° respectively. Find the height of the poles. Question.31 Two pipes running together can fill a cistern in 3 1/13 minutes. If one pipe takes 3 minutes more than the other to fill it, find the time in which each pipe would fill the cistern. Note: Except for the following questions, all the remaining questions have been asked in Set 1. Question.5 Which term of the progression 4, 9,14,19,… is 109? Q.6 Find a relation between x and y such that the point P(x, y) is equidistant from the points A (2, 5) and B (-3, 7). Question.13 Find the value of k so that the quadratic equation kx (3x – 10) + 25 = 0, has two equal roots. Q.14. A coin is tossed two times. Find the probability of getting not more than one head. Question.23 Draw a triangle ABC with side BC = 7 cm, ∠B = 45° and ∠A = 105°. Then construct a triangle whose sides are 3/5 times the corresponding sides of AABC. Question.24 If P(2, 4) is equidistant from Q(7, 0) and R(x, 9), find the values of x. Also find the distance PQ Question.28 From a point on the ground, the angles of elevation of the bottom and top of a transmission tower fixed at the top of a 10 m high building are 30° and 60° respectively. Find the height of the tower. Question.29 Find the area of the shaded region in Fig. 7, where arcs drawn with centres A, B, C and D intersect in pairs at mid-points P, Q, R and S of the sides AB, BC, CD and DA respectively of a square ABCD, where the length of each side of square is 14 cm. [Use π = 22/7 ] Question.30 A toy is in the shape of a solid cylinder surmounted by a conical top. If the height and diameter of the cylindrical part are 21 cm and 40 cm respectively, and the height of cone is 15 cm, then find the total surface area of the toy. [π = 3.14, be taken] Note: Except for the following questions, all the remaining questions have been asked in Set I and Set II. Question.5 Find the roots of 4x^2+ 3x + 5 = 0 by the method of completing the squares. Question.6 Determine the ratio in which the line 3x + y – 9 = 0 divides the segment joining the points (1, 3) and (2, 7). Question.7 A coin is tossed two times. Find the probability of getting both heads or both tails. Question.8 Find the value of m so that the quadratic equation mx(5x – 6) + 9 = 0 has two equal roots. Question.23 Draw a triangle PQR such that PQ = 5 cm, ∠P = 120° and PR = 6 cm. Construct another triangle whose sides are 3/4 times the corresponding sides of APQR. Question.24 Find the point of y-axis which is equidistant from the points (- 5, – 2) and (3, 2). Question. 25 From a solid cylinder of height 20 cm and diameter 12 cm, a conical cavity of height 8 cm and radius 6 cm is hollowed out. Find the total surface area of the remaining solid. [Use π = 22/7 ] Question.26 The length and breadth of a rectangular piece of paper are 28 cm and 14 cm respectively. A semi-circular portion is cut off from the breadth’s side and a semi-circular portion is added on length’s side, as shown in Fig. 8. Find the area of the shaded region. [Use π = 22/7 ] Question.31 From the top of a 15 m high building, the angle of elevation of tire top of a cable tower is 60° and the angle of depression of its foot is 30°. Determine the height of the tower. CBSE Previous Year Question Papers CBSE Previous Year Question Papers Class 10 Maths
{"url":"https://www.learncbse.in/cbse-previous-year-question-papers-class-10-maths-sa2-delhi-2011/","timestamp":"2024-11-13T06:08:03Z","content_type":"text/html","content_length":"168746","record_id":"<urn:uuid:9107f2dd-4a01-406a-b3e8-a9c22df570c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00879.warc.gz"}
cos/sin inequality Here's a problem from the 2004 Russian Mathematical Olympiad [Lecture Notes, p 61]: Let \(a,b,c\) be positive numbers, satisfying \(\displaystyle a+b+c=\frac{\pi}{2}\), prove that \( \mbox{cos}(a)+\mbox{cos}(b)+\mbox{cos}(c)\gt \mbox{sin}(a)+\mbox{sin}(b)+\mbox{sin}(c) \) 1. Xu Jiagu, Lecture Notes on Mathematical Olympiad Courses, v 8, (For senior section, v 1), World Scientific, 2012 |Contact| |Front page| |Contents| |Algebra| Copyright © 1996-2018 Alexander Bogomolny Let \(a,b,c\) be positive numbers, satisfying \(\displaystyle a+b+c=\frac{\pi}{2}\), prove that \( \mbox{cos}(a)+\mbox{cos}(b)+\mbox{cos}(c)\gt \mbox{sin}(a)+\mbox{sin}(b)+\mbox{sin}(c) \) It appears that the problem admits a brute-force, a rather straightforward, solution (Proof 1), which caused me to wonder why it was offered as an olympiad problem. However, the book solution (Proof 2) makes a very elegant shortcut that makes the problem certainly worth looking into. Both proofs use the fact that \(y=\mbox{cos}(x)\) is monotone decreasing on interval \(\displaystyle (0,\frac{\pi}{2})\): Proof 1 \(\displaystyle \mbox{cos}(a) &= \mbox{cos}(\frac{\pi}{2}-b-c) \\ &= \mbox{sin}(b+c) = \mbox{sin}(b)\mbox{cos}(c)+\mbox{sin}(c)\mbox{cos}(b), \) and similarly for \(\mbox{cos}(b)\) and \(\mbox{cos}(c)\). Summing up the three give \( \mbox{cos}(a)+\mbox{cos}(b)+\mbox{cos}(c) =& \mbox{sin}(a)[\mbox{cos}(b)+\mbox{cos}(c)] \\ &+ \mbox{sin}(b)[\mbox{cos}(c)+\mbox{cos}(a)] \\ &+ \mbox{sin}(c)[\mbox{cos}(a)+\mbox{cos}(b)]. \) Let's focus on one of the terms, say, \(\mbox{sin}(a)[\mbox{cos}(b)+\mbox{cos}(c)]\): \( \mbox{sin}(a)[\mbox{cos}(b)+\mbox{cos}(c)] = \mbox{sin}(a)\cdot 2\cdot \mbox{cos}(\frac{b+c}{2})\cdot \mbox{cos}(\frac{b-c}{2}). \) Now, since \(a\gt 0\), \(b+c\lt\frac{\pi}{2}\) and, therefore, \(\frac{b+c}{2}\lt\frac{\pi}{4}\). As we observed at the outset, function \(y=\mbox{cos}(x)\) is monotone decreasing on interval \((0,\ frac{\pi}{2})\) such that \(\mbox{cos}(\frac{b+c}{2})\gt \mbox{cos}(\frac{\pi}{4})=\frac{\sqrt{2}}{2}\). For the other factor, we also have \(\mbox{cos}(\frac{b-c}{2})\gt \mbox{cos}(\frac{\pi}{4})=\ frac{\sqrt{2}}{2}\), because of the triangle inequality \(|b-c|\le |b|+|c|=b+c\). It follows that \( \mbox{sin}(a)[\mbox{cos}(b)+\mbox{cos}(c)] \gt \mbox{sin}(a)\cdot 2\cdot \frac{\sqrt{2}}{2}\cdot \frac{\sqrt{2}}{2} = \mbox{sin}(a). \) For the other two terms we similarly have \(\mbox{sin}(b)[\mbox{cos}(c)+\mbox{cos}(a)]\gt\mbox{sin}(b)\) and \(\mbox{sin}(c)[\mbox{cos}(a)+\mbox{cos}(b)]\gt\mbox{sin}(c)\). Adding the three up gives the desired inequality. Proof 2 Observe that, say, \(a+b\lt\frac{\pi}{2}\) which implies \(a\lt \frac{\pi}{2} - b\), and since \(y=\mbox{cos}(x)\) is monotone decreasing on interval \((0,\frac{\pi}{2})\), \(\mbox{cos}(a) \gt \mbox{cos}(\frac{\pi}{2} - b) = \mbox{sin}(b).\) Similarly \(\mbox{cos}(b) \gt \mbox{sin}(c)\) and \(\mbox{cos}(c) \gt \mbox{sin}(a)\). The sum of the three inequalities gives the desired one. Looking back at the two proofs, it may occur to you that the inequality that has been proved is actually rather weak. Furthermore, as we've seen on the last step of the first proof, \(\mbox{cos}(b)+\ mbox{cos}(c)\gt 1\), implying by analogy that \(\mbox{cos}(c)+\mbox{cos}(a)\gt 1\) and \(\mbox{cos}(a)+\mbox{cos}(b)\gt 1\) and so \(\mbox{cos}(a)+\mbox{cos}(b)+\mbox{cos}(c)\gt \frac{3}{2}\). The olympiad inequality would also have been proved had we shown that \(\mbox{sin}(a)+\mbox{sin}(b)+\mbox{sin}(c)\le \frac{3}{2}\), making more precise the notion of the weakness of that inequality. Proof 3 The proof is based on the Let \(a,b,c\) be positive numbers, satisfying \(\displaystyle a+b+c=\frac{\pi}{2}\), prove that \(\mbox{sin}(a)+\mbox{sin}(b)+\mbox{sin}(c)\le \frac{3}{2}\) Proof of Lemma Instead of an algebraic derivation, I'll base the proof on a geometric insight. First of all note that the graph of \(y = \mbox{sin}(x)\) on interval \((0,\frac{\pi}{2})\) is concave. Then for three points on the graph that correspond to \(a,b,c\) satisfying \(\displaystyle a+b+c=\frac{\pi}{2}\), the center of gravity is below the graph at the point corresponding to \(\ displaystyle\frac{a+b+c}{3}=\frac{\pi}{6}\). It follows that with the equality only when \(a=b=c=\frac{\pi}{6}\). Q.E.D. Assume \(\alpha,\beta,\gamma\) be the angles of an acute triangle, so that each is positive, less than \(\frac{\pi}{2}\) and \(\alpha+\beta+\gamma=\pi\). Then \(\mbox{cos}(\alpha)+\mbox{cos}(\beta)+\mbox{cos}(\gamma)\le \frac{3}{2}\) Indeed, consider \(a=\frac{\pi}{2}-\alpha\), \(b=\frac{\pi}{2}-\beta\), \(c=\frac{\pi}{2}-\gamma\), so that \(a+b+c=\frac{\pi}{2}\) and each is positive. From the discussion above, \(\mbox{sin}(\frac{\pi}{2}-\alpha)+\mbox{sin}(\frac{\pi}{2}-\beta)+\mbox{sin}(\frac{\pi}{2}-\gamma)\le \frac{3}{2}\), which is equivalent to \(\mbox{cos}(\alpha)+\mbox{cos}(\beta)+\mbox{cos}(\gamma)\le \frac{3}{2}\). Proof 4 This proof has been posted below in the comments area. I decided to have it on the page proper for completeness and fairness sake because at the same time two other proofs have been posted at the CutTheKnotMath facebook page which I habitually reproduce at this site. Let $\alpha = \pi/2 - a$ and so on. So $\alpha + \beta + \gamma = \pi$ and all three angles are in $(0,\pi/2),$ that is, they are the angles of an acute triangle $ABC.$ The inequality is now equivalent to $\sin\alpha + \sin\beta + \sin\gamma > \cos\alpha + \cos\beta + \cos\gamma.$ Multiplying by the circumdiameter $2R$ and using the law of sines and the fact that $AH = 2Rcos\alpha,$ in which $H$ is the orthocenter, we find that we need to prove that $BC + AC + BC > AH + BH + CH.$ But $AB$ is longer than the altitude from $A$ of which $AH$ is only a part (here we are using that the triangle is acute), and similarly $BC > BH$ and $CA > CH,$ and we are done. Proof 5 The proof is by Leo Giugiuc. First we prove Let $\Delta ABC$ be acute angled. Then $\sin\alpha + \sin\beta + \sin\gamma > \cos\alpha + \cos\beta + \cos\gamma,$ where $\alpha =\angle BAC, $ etc. Indeed, Let $H$ be the orthocenter of the triangle. Then $BC\gt BH, $AC\gt CH,$ and $AB\gt AH, $ implying $2R\cdot \sin \alpha + 2R\cdot \sin \beta + 2R\cdot \sin \gamma > 2R\cdot \cos\alpha + 2R\cdot \cos\beta + 2R\cdot \cos\gamma.$ Now back to our problem. Denote $b+c=\alpha,$ $c+a=\beta,$ $a+b=\gamma.$ The three new angles are all in $(0,\pi /2)$ and add up to $\pi.$ From the lemma then, $\sin\alpha + \sin\beta + \sin\gamma > \cos\alpha + \cos\beta + \cos\gamma.$ But Similarly, $\sin\beta=\cos(b),$ and $\sin\gamma=\cos(c).$ For the same reason $\cos\alpha=\sin(a),$ etc. Proof 6 The proof is by Leo Giugiuc. $a\lt a+b,$ so that $\cos(a)\gt\cos(a+b)=\sin(c).$ Similarly, $\cos(b)\gt\sin(a)$ and $\cos(c)\gt\sin(b).$ |Contact| |Front page| |Contents| |Algebra| Copyright © 1996-2018 Alexander Bogomolny
{"url":"https://www.cut-the-knot.org/arithmetic/algebra/CosSinInequality.shtml","timestamp":"2024-11-03T12:32:23Z","content_type":"text/html","content_length":"24111","record_id":"<urn:uuid:ee9e6ba0-3cdc-4ec7-8ad6-ba4b12ef3680>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00041.warc.gz"}
2021 Book of Cases Lean Construction Ireland Annual Book of Cases 2021 55 Case 14 Improve We held a Kaizen event with the team involved in the process of installing the panels. From our analysis using the Fishbone Diagram, we knew that time was being lost waiting for the correct configuration of the chains to lift the panels.We thus arrived at a solution based on the geometry of the lifting chains. To settle on the correct chain lengths we used the cosine rule.This states that the side‘c’of any triangle can be found with the following information: • The angle gamma ‘g ’ (must be the angle opposite side ‘c’) • Triangle side length ‘a’ • Triangle side length ‘b’ Figure 6. The Cosine Rule With this formula,we could establish the correct chain lengths for the unbalanced panels. In a lifting configuration, such as the one shown in Figure 1, there are four chains. If we set these two exterior chain lengths to be the same length as the span of the panel, we have created an equilateral triangle.The three angles within the triangle are all 60, as per the equilateral triangle rule. With these parameters set, we could factor in the two shorter, interior chain lengths within the lifting configuration, and we could now break the lifting configuration down into two further smaller obtuse triangles and designate these two smaller chain lengths as side ‘c’ in their respective triangles.Therefore, for a typical configuration (Figure 7 shows these angles and triangle sides) these would be: • a = wall span length • b = distance between lifting hooks as per shop drawings • g = 60° • c = formula in Figure 6 Having agreed that this formula calculated the correct chain lengths for the unbalanced panels, we created a spreadsheet where the correct chains lengths were calculated.To test our solution, we installed 8 unbalanced panels using the cosine rule formula.This resulted in an average installation time of 6 minutes and 48 seconds per panel.The floor-to-floor build cycle is 3 weeks, which gives sufficient time to plan and prepare for handling future unbalanced panels. Using the drawings from the precast panel supplier, we can make decisions based on their geometry. Figure 8. Isometric Plan of BlockA1 Level 01 (samples of potentially unbalanced panels circled in red) Some panels that appear to be unbalanced are in fact balanced.To find out which panel will require the additional work,we entered the dimensions into the spreadsheet we created for this purpose and arrived at the final number of unbalanced panels.As a result of this exercise, we arrived at a total of 96 unbalanced panels. The installation times achieved using the solution above saves approx. 12 minutes per unbalanced panel. In addition, by using the spreadsheet we could calculate that the actual number of unbalanced panels was averaging at 4 panels per floor for 8 floors across 3 blocks. Therefore, over 24 floors there would be approximately 96 actual unbalanced panels.The cost of installing pre-cast panels approximates to €275 per hour and includes the following elements: • Crane • Crane Operator • Installation Gang • Sisk Supervision Using our solution, time saved installing an unbalanced panel is approx. 20 hours, and, on that basis, the overall saving to the project is approximately €5,500. Lean Initiative Improvements & Impact Figure 7. Lifting Configuration with the Cosine Rule Built-In RkJQdWJsaXNoZXIy MTIzMTIxMw==
{"url":"https://pdf.leanconstructionireland.ie/2021-Book-of-Cases/57/","timestamp":"2024-11-11T14:52:19Z","content_type":"text/html","content_length":"7875","record_id":"<urn:uuid:b7f83148-4f29-4730-8736-449bee805b12>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00860.warc.gz"}
In the below figure, two equal circles, with Centre’s \ Hint: Here, we will use the property of tangents which states that any tangents drawn to the circle at any point are perpendicular to the radius of the circle through the point of contact. We will also use the similarity property here which defines that, if the two sides of a triangle are in the same proportion of the two sides of another triangle, and the angle inscribed by the two sides in both the triangles are equal, then two triangles are said to be similar. The symbol of similarity is ~. Complete step-by-step solution: Step 1: As shown in the above figure (1), \[{\text{AC}}\] is tangent to circle having a center at \[{\text{O}}\], now as we know that tangents that are drawn to the circle make an angle of \[{\text {9}}{{\text{0}}^0}\] with the radius of the circle, so we can say that: \[ \Rightarrow \angle {\text{ACO = 9}}{{\text{0}}^0}\]………… (1) It is given in the question that \[{{\text{O}}^{\text{'}}}{\text{D}}\] is perpendicular to \[{\text{AC}}\], so we have: \[ \Rightarrow \angle {\text{ADO' = 9}}{{\text{0}}^0}\]………….(2) Now, from the expressions (1) and (2), both the angles are equal to \[9{{\text{0}}^0}\] and so \[\angle {\text{ADO'}} = \angle {\text{ACO}} = 9{{\text{0}}^0}\]. Therefore, by using the principle of the corresponding angle which states that the pair of the angles which is on the same side of one of two lines are equal then, the two lines which are intersected by a transversal are parallel. So, \[ \Rightarrow {\text{DO'}}\parallel {\text{CO}}\] Now, \[{\text{DO'}}\parallel {\text{CO}}\] so, by using the principle of alternate angles, which states that if the two lines are parallel and intersect by a transversal than the alternate interior angles are equal. So: \[ \Rightarrow \angle {\text{AO'D = }}\angle {\text{AOC}}\]. Step 2: Now, in \[\Delta {\text{ADO'}}\] and \[\Delta {\text{ACO}}\]: \[\angle {\text{ADO'}} = \angle {\text{ACO}} = 9{{\text{0}}^0}\] (by equation (1) and (2)) Also, from the figure (1): \[\angle {\text{DAO'}} = \angle {\text{CAO}}\] \[\because \] (the angle is common) \[\angle {\text{AO'D = }}\angle {\text{AOC}}\] \[\because \] (already proved in step 1) Therefore, by using the AAA (Angle-Angle-Angle) property of similarity ie all three angles are equal both the triangles are similar: \[ \Rightarrow \Delta {\text{ADO'}} ~ \Delta {\text{ACO}}\] Step 3: By using the information given in the question, the two circles are equal so the radii of both of the circles are the same and therefore \[{\text{AO'}} = {\text{O'X}} = {\text{XO}}\]. Now from the above figure (1), we can see that \[{\text{AO = AO' + O'X + XO}}\]. Thus, for finding the ratio of \[\dfrac{{{\text{AO'}}}}{{{\text{AO}}}}\] by substituting \[{\text{AO = AO' + O'X + XO}}\] in \[\dfrac{{{\text{AO'}}}}{{{\text{AO}}}}\]: \[ \Rightarrow \dfrac{{{\text{AO'}}}}{{{\text{AO}}}} = \dfrac{{{\text{AO'}}}}{{{\text{AO'}} + {\text{O'X}} + {\text{XO}}}}\] Now, by using \[{\text{AO'}} = {\text{O'X}} = {\text{XO}}\] in \[\dfrac{{{\text{AO'}}}}{{{\text{AO}}}} = \dfrac{{{\text{AO'}}}}{{{\text{AO'}} + {\text{O'X}} + {\text{XO}}}}\] we get: \[ \Rightarrow \dfrac{{{\text{AO'}}}}{{{\text{AO}}}} = \dfrac{{{\text{AO'}}}}{{{\text{AO'}} + {\text{AO' + AO'}}}}\] By adding the denominator in the RHS side and dividing it by the numerator we get: \[ \Rightarrow \dfrac{{{\text{AO'}}}}{{{\text{AO}}}} = \dfrac{{\text{1}}}{3}\]……….(3) Step 4: From step number 2, \[\Delta {\text{ADO'}} ~ \Delta {\text{ACO}}\] so by using the property of similarity which states that if two triangles are similar then their ratio of corresponding sides are also equal we have: \[\dfrac{{{\text{DO'}}}}{{{\text{CO}}}} = \dfrac{{{\text{AO'}}}}{{{\text{AO}}}}\] Therefore, by using the result \[\dfrac{{{\text{AO'}}}}{{{\text{AO}}}} = \dfrac{1}{3}\]from equation (3), and \[\dfrac{{{\text{DO'}}}}{{{\text{CO}}}} = \dfrac{{{\text{AO'}}}}{{{\text{AO}}}}\], we \[ \Rightarrow \dfrac{{{\text{DO'}}}}{{{\text{CO}}}} = \dfrac{1}{3}\]. So \[\dfrac{{{\text{DO'}}}}{{{\text{CO}}}}\] is equal to \[\dfrac{1}{3}\] Note: In these types of questions, students often get confused while applying the similarity properties of triangles so, you should remember all the rules of similar triangles and apply them correctly which we have used multiple times above as: In step number 2: By using the AAA (Angle-Angle-Angle) property of similarity both the triangles are similar. In step number 4: By using the property of similarity which states that if two triangles are similar then their ratio of corresponding sides is also equal. You should also remember the properties of corresponding angles and alternate angles which we have used in step number 1.
{"url":"https://www.vedantu.com/question-answer/in-the-below-figure-two-equal-circles-with-class-11-maths-cbse-5fd85cd3609c0e2b767d7ad7","timestamp":"2024-11-12T14:07:55Z","content_type":"text/html","content_length":"185268","record_id":"<urn:uuid:0211b5ae-7e5d-4903-89c4-35da1b50030f>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00186.warc.gz"}
Thermal Properties of Matter MHT-CET pyq'sThermal Properties of Matter MHT-CET pyq's MHT-CET 2005 1. A body cools from 100°C to 70°C in 8 s. If the room temperature is 15°C and assuming Newton's law of cooling holds good, then time required for the body to cool from 70°C to 40°C is • (A) 14 s • (B) 8 s • (C) 10 s • (D) 5 s MHT-CET 2006 2. Newton's law of cooling holds good only. if the temperature difference between the body and the surroundings is • (A) less than 10°C • (B) more than 10°C • (C) less than 100°C • (D) more than 100'C MHT-CET 2019 3. A hot body at a temperature 'T' is kept in a surrounding of temperature 'T[0]'. It takes time 't[1]' to cool from 'T' to 'T[2]', time t[2] to cool from 'T[2]' to 'T[3]' and time 't[3]' to cool from 'T[3]' to 'T[4]'. If (T - T[2]) = (T[2] - T[3]) = (T[3] - T[4]), then • (A) t[1] > t[2] > t[3] • (B) t[1] = t[2] = t[3] • (C) t[3]> t[2] > t[1] • (D) t[1] > t[2] = t[3] MHT-CET 2020 4. A metal rod is heated to t°C. A metal rod has length. area of cross-section. Young's modulus and coefficient of linear expansion as 'L', 'A', 'Y' and 'α' respectively. When the rod is heated, the work performed is • (A) 1/2 YALα²t² • (B) 1/2 YAL²Î±²t² • (C) 1/2 YALαt • (D) YALαt 5. A metal rod of Young's modulus 'Y' and coefficient of linear expansion 'α' has its temeperature raised by '△θ'. The linear stress to prevent the expansion of rod is (L and l is original length of rod and expansion respectively) • (A) Y ∝ △θ • (B) Y(l/L)² • (C) YL/l • (D) Y∝/△θ 6. A metal rod of cross-sectional area 3 x 10^-6 m² is suspended vertically from one end has a length 0.4 m at 100°C. Now the rod is cooled upto 0°C, but prevented from contracting by attaching a mass 'm' at the lower end. The value of 'm' is (Y = 10^11 N/m²), coefficient of linear expansion = 10^-5/K, g = 10 m/s²) • (A) 30 kg • (B) 40 kg • (C) 20 kg • (D) 10 kg 7. A metal rod of length L and cross-sectional area A is heated through T °C. What is the force required to prevent the expansion of the rod lengthwise? (Y = Young's modulus of material of the rod, α = coefficient of linear expansion of the rod.) • (A) YAαT(1 - αT) • (B) YAαT/(1 + αT) • (C) YAα/T(1 + αT) • (D) YAα/(1 - αT) Post a Comment 0 Comments
{"url":"https://www.mhtcetprepbooster.com/2022/02/thermal-properties-of-matter-mht-cet.html","timestamp":"2024-11-04T01:50:02Z","content_type":"application/xhtml+xml","content_length":"423461","record_id":"<urn:uuid:62611e1e-bbfc-40eb-9fa4-b8643ac43005>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00368.warc.gz"}
What is a linear regression t-test used for? A T-test is used to compare the means of two different sets of observed data and to find to what extent such difference is ‘by chance’. Linear Regression is used to find the relationship between one dependent or outcome variable and one or more independent or predictor variables. What is the T value in regression? The t statistic is the coefficient divided by its standard error. The standard error is an estimate of the standard deviation of the coefficient, the amount it varies across cases. It can be thought of as a measure of the precision with which the regression coefficient is measured. What is omnibus in regression? Omnibus Tests in Multiple Regression. In Multiple Regression the omnibus test is an ANOVA F test on all the coefficients, that is equivalent to the multiple correlations R Square F test. What is a multivariate regression test? Multivariate Regression is a method used to measure the degree at which more than one independent variable (predictors) and more than one dependent variable (responses), are linearly related. A mathematical model, based on multivariate regression analysis will address this and other more complicated questions. How do you test multiple regression? Test for Significance of Regression. The test for significance of regression in the case of multiple linear regression analysis is carried out using the analysis of variance. The test is used to check if a linear statistical relationship exists between the response variable and at least one of the predictor variables. What is a strong t-value? A t-value between 1.5 to 2.0 indicates some evidence of learning. c. A t-value between 2 to 3 indicates strong evidence of learning. d. A t-value above 3 indicates very strong strong evidence of Why is my t-value so high? Higher values of the t-value, also called t-score, indicate that a large difference exists between the two sample sets. The smaller the t-value, the more similarity exists between the two sample sets. A large t-score indicates that the groups are different. Which is the best practice to deal with Heteroskedasticity? The solution. The two most common strategies for dealing with the possibility of heteroskedasticity is heteroskedasticity-consistent standard errors (or robust errors) developed by White and Weighted Least Squares. When would you use a multivariate regression? Multivariate regression comes into the picture when we have more than one independent variable, and simple linear regression does not work. Real-world data involves multiple variables or features and when these are present in data, we would require Multivariate regression for better analysis. What is multivariate regression used for? Multivariable regression models are used to establish the relationship between a dependent variable (i.e. an outcome of interest) and more than 1 independent variable. How is the t statistic used in linear regression? In linear regression, the t -statistic is useful for making inferences about the regression coefficients. The hypothesis test on coefficient i tests the null hypothesis that it is equal to zero – meaning the corresponding term is not significant – versus the alternate hypothesis that the coefficient is different from zero. How to test for significance of regression coefficients? This example shows how to test for the significance of the regression coefficients using t-statistic. Load the sample data and fit the linear regression model. Where do I find TSTAT for the Hypotheses test? You can see that for each coefficient, tStat = Estimate/SE. The -values for the hypotheses tests are in the pValue column. Each -statistic tests for the significance of each term given other terms in the model.
{"url":"https://diaridelsestudiants.com/what-is-a-linear-regression-t-test-used-for/","timestamp":"2024-11-14T18:12:30Z","content_type":"text/html","content_length":"47728","record_id":"<urn:uuid:776b9d05-f47a-4871-bba3-4d2d1dee4beb>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00171.warc.gz"}
Calculating Option Strategy Payoff in Excel This is part 4 of the Option Payoff Excel Tutorial. In the previous parts (first, second, third) we have created a spreadsheet that calculates profit or loss for a single call or put option, given the strike price, initial option price and underlying price. Now we are going to expand it to also work with positions involving multiple options – strategies such as straddles, condors, butterflies or spreads. Option Strategy Payoff Calculation Total profit or loss from an option strategy that involves multiple options (also called legs) equals the sum of profit or loss of all these individual legs. Knowing this will be very helpful when creating our option strategy payoff calculator. We can simply create multiple copies of the single option calculation that we already have and then sum up the results to get total strategy P/L. We will do that by expanding our existing spreadsheet and copying the inputs and formulas from column C to three other columns – D, E, F – to get a total of four possible legs for our option Inserting New Columns Because columns E-F are currently occupied by the contract size input and the dropdown box text inputs, we must move these to the right to make space for the new legs. Do this by inserting three more columns before E rather than by copying and pasting the cells – this way you won't break the cell references in the dropdown box and the formula in cell C9. You can insert a new column right before the existing column E by right clicking the label of column E and then selecting Insert from the menu that pops up. Do this three times to insert three columns. The contract size input and dropdown box data have shifted to columns H-I. Copying Leg Inputs and Calculations Now we can copy the entire column C to columns D, E, F. If your Call/Put dropdown box is located right over cell C3 and you do the copying separately (C to D, C to E, C to F – not C to D-F), it will also be copied. The result should look like this: You may notice that the P/L in cells D9, E9, F9 is showing different result than the original cell C9, although all the legs currently have exactly the same inputs and should therefore be showing identical results. There is one thing we need to fix in the P/L formula. In cell C9, the original formula is: (see the previous part for how we came to this formula) We must change the I2 part to absolute reference, so the copied cells in columns D, E, F still point to cell I2 (the contract size input, which is the same for all legs). On the contrary, we will keep the references to C8 and C2 relative, because these (P/L per share and position size) are leg-specific and should therefore change to D8 and D2, E8 and E2, F8 and F2, respectively, for columns D, E, F. I assume most readers are familiar with the difference between absolute and relative cell references – if not Google has plenty of good explanations. If you click on the I2 part of the formula in cell C9 and press the F4 key on your keyboard, the I2 cell reference will change from relative to absolute, which you will recognize by the dollar signs: Now you can copy cell C9 to cells D9, E9 and F9 and all these will show correct results for the individual legs. Fixing the Dropdown Boxes One last thing which requires a little fixing is the new dropdown boxes in cells D3, E3, F3. We must make sure each of them controls the correct leg, which quite likely is not the case at the moment. Right click the combo box in cell D3 and then choose "Format Control" (same as we did in part 2 when we were creating the first dropdown box). In the Format Control window that pops up, check "Cell link" (the middle of the three settings). For the dropdown box in column D it should be D3, or $D$3. If it is $C$3 or anything else, change it. This setting decides where the combo box selection will be stored, which of course must match the particular leg, otherwise your combo box would control a wrong leg and the calculations would be incorrect. Repeat this with the dropdown boxes in E3 and F3. Now all the individual legs should have correct calculations – test this by changing the different inputs and combo box selections. Calculating Total Strategy P/L The last step is to calculate total payoff for the entire position, which is just sum of the four legs. We can calculate it in cell G9, using the formula: Now cell G9 shows aggregate profit or loss for our entire position – the sum of the individual legs' P/L totals. We can also do the same with row 8 and calculate aggregate P/L per share, but note that in some cases (for positions where the number of contracts is not the same for all legs) this number might not make much sense. Fixing Underlying Price Input There is one very last change, which won't affect the calculations but will make the spreadsheet a little more user-friendly. At the moment each column has its own underlying price input (row 6), but this input will always be the same for all legs. Therefore it's more practical for the user to only change it in one place. You can change the hard typed values (currently 49) in cells D6, E6, F6 to a formula linking to cell C6 and perhaps make the cells green as a reminder that these should not be changed: Now underlying price, same for all legs, will be changed in cell C6 only. Alternatively, you can move the underlying price input somewhere else (like we did with contract size in cell I2); in such case you will also need to update the formulas in cells C8-F8 to reflect its new location. Strategies with Fewer than Four Legs While we have four legs in our spreadsheet, this does not mean we can't use it for strategies with only two or three legs, or even single option positions. Just set the position (cells C2-F2) to zero for any unused legs (as a result, rows 8 and 9 in these columns should also be showing zero). For example, the screenshot above shows P/L of a long straddle position, using 3 contracts each of long call and long put, both with strike $50, purchased at $2.10 and $2.25, respectively. When the underlying is at $56, total P/L for the entire strategy is $495. The calls are in the money and make a profit of $1,170, while the puts are out of the money and their loss equals the initial cost, or Next Steps Having started with a very simple calculation in part one, now in part 4 we have created quite an advanced spreadsheet which can calculate profit or loss for any combination of up to four legs and can be used to model a wide range of option strategies. You could see that expanding the spreadsheet from single option to four legs was really just a matter of creating additional copies of the same column, but there were a number of small details which we had to check and fix, in order to make sure our calculations are correct. In the next part, we will use our calculations to draw payoff diagrams for our strategies.
{"url":"https://www.macroption.com/calculating-option-strategy-payoff-in-excel/","timestamp":"2024-11-03T06:42:03Z","content_type":"text/html","content_length":"23416","record_id":"<urn:uuid:1370f87b-023a-4a63-b2ca-6f1ca1a0e29f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00247.warc.gz"}
- T Lesson 7 Comparing Numbers and Distance from Zero 7.1: Opposites (10 minutes) The purpose of this warm-up is to use opposites to get students to think about distance from 0. Problem 3 also reminds students that the opposite of a negative number is positive. Notice students who choose 0 or a negative number for \(a\) and how they reason about \(\text-a\). Arrange students in groups of 2. Give students 5 minutes of quiet think time, then 2 minutes of partner discussion. Follow with whole-class discussion. Student Facing 1. \(a\) is a rational number. Choose a value for \(a\) and plot it on the number line. 1. Based on where you plotted \(a\), plot \(\text- a\) on the same number line. 2. What is the value of \(\text- a\) that you plotted? 3. Noah said, “If \(a\) is a rational number, \(\text- a\) will always be a negative number.” Do you agree with Noah? Explain your reasoning. Anticipated Misconceptions For problem 3, students might assume that \(\text-a\) is always a negative number. Ask these students to start with a negative number and find its opposite. For example, starting with \(a = \text-3\) , we can find its opposite, \(\text-(\text-3)\), to be equal to 3. Activity Synthesis The main idea of discussion is that opposites have the same distance to 0 (i.e., same absolute value) and that the opposite of a negative number is positive. Ask students to discuss their reasoning with a partner. In a whole-class discussion, ask a student who chose \(a\) to be positive to share their reasoning about how to plot \(\text-a\) and whether they agreed with Noah in problem 3. Then, select previously identified students who chose \(a\) to be negative to share their thinking. If not mentioned by students, emphasize both symbolic and geometric statements of the fact that the opposite of a negative number is positive. For example, if \(a=\text-3\), write \(\text-(\text-3) = 3\) and show that 3 is the opposite of -3 on the number line because they are the same distance to 0. If time allows, select a student who chose \(a\) to be 0 and compare to cases where \(a\) is negative or positive. The number 0 is its own opposite because no other number is 0 units away from 0. Sequencing the discussion to look at positive, negative, and 0 values of \(a\) helps students to visualize and generalize the concept of opposites for rational numbers. 7.2: Submarine (15 minutes) Students distinguish between absolute value and order in the context of elevation. Students express their ideas carefully using symbols, verbally, and using a vertical number line. Placing possible elevations on the number line serves as a transition to thinking about solutions to inequalities. Look for students who choose positive and negative elevations for Han and Lin to compare in the Arrange students in groups of 4. Distribute one set of sticky notes to each group, where each note contains one name: Clare, Andre, Han, Lin, and Priya. Display the image for all to see throughout the activity. Ask students to read the instructions for the task and the description of each person's elevation. Give them a few minutes to use their sticky notes, as a group, to decide where each person (except Priya) could be located. Place Clare’s sticky note on the number line according to the completed first row of the table. Explain the completed first row of the table to students as it pertains to Clare’s description. Use precise language when explaining the symbols in the table: • One possible elevation for Clare is 150 feet because 150 is greater than -100, and it is also farther from sea level. • 150 is greater than -100. • The absolute value of 150 is greater than the absolute value of -100. Ask groups to complete the rest of the table for the other people (except Priya), and then answer the question about Priya. Note that it is possible to come up with different, correct responses that fit the descriptions. Give students 10 minutes to work followed by whole-class discussion. Representation: Access for Perception. Activate or supply background knowledge. Give students 1–2 minutes to review the first row of the table that shows a possible elevation for Clare. Invite 1–2 students to think aloud and share connections they make between the display with the sticky notes, and the values in the table. Record their thinking on a display of the table and keep the work visible as students work. Supports accessibility for: Organization; Attention Student Facing A submarine is at an elevation of -100 feet (100 feet below sea level). Let’s compare the elevations of these four people to that of the submarine: • Clare’s elevation is greater than the elevation of the submarine. Clare is farther from sea level than the submarine. • Andre’s elevation is less than the elevation of the submarine. Andre is farther away from sea level than the submarine. • Han’s elevation is greater than the elevation of the submarine. Han is closer to sea level than is the submarine. • Lin’s elevation is the same distance away from sea level as the submarine’s. 1. Complete the table as follows. 1. Write a possible elevation for each person. 2. Use \(<\), \(>\), or \(=\) to compare the elevation of that person to that of the submarine. 3. Use absolute value to tell how far away the person is from sea level (elevation 0). As an example, the first row has been filled with a possible elevation for Clare. │ │possible │ compare to │ distance from │ │ │elevation│ submarine │ sea level │ │Clare│150 feet │\(150 > \text-100\)│\(|150|\) or 150 feet │ │Andre│ │ │ │ │ Han │ │ │ │ │ Lin │ │ │ │ 2. Priya says her elevation is less than the submarine’s and she is closer to sea level. Is this possible? Explain your reasoning. Activity Synthesis The purpose of the discussion is to let students practice using proper vocabulary to express ideas that distinguish order from absolute value with positive and negative numbers. Select previously identified students to share different elevations for Han and for Lin that show both positive and negative possibilities. Encourage students to explain why the elevation they chose satisfies the description in the problem. As students speak, record their statements using \(<,>,=\) and \(|\boldcdot |\). Allow students to rearrange sticky notes on the vertical number line display. If time allows, use the sticky notes to show the range of possible solutions for each character; this will help to further prepare students for the concept of graphing solutions of an inequality on the number line. Speaking: MLR8 Discussion Supports. To support students’ use of vocabulary related to absolute value and positive and negative numbers, provide sentence frames related to each column heading. Some examples include: “_____ could have an elevation of _____ because _____,” “Comparing _____’s elevation to the submarine’s, I notice _____,” or “_____’s distance from sea level is _____ because Design Principle(s): Cultivate conversation 7.3: Info Gap: Points on the Number Line (15 minutes) Optional activity In this info gap activity, students use comparisons of order and absolute value of rational numbers to determine the location of unknown points on the number line. In doing so students reinforce their understanding that a number and its absolute value are different properties. Students will also begin to understand that the distance between two numbers, while being positive, could be in either direction between the numbers. This concept is expanded on further when students study arithmetic with rational numbers in grade 7. The info gap structure requires students to make sense of problems by determining what information is necessary, and then to ask for information they need to solve it. This may take several rounds of discussion if their first requests do not yield the information they need (MP1). It also allows them to refine the language they use and ask increasingly more precise questions until they get the information they need (MP6). Here is the text of the cards for reference and planning: Arrange students in groups of 2. In each group, distribute the first problem card to one student and a data card to the other student. After debriefing on the first problem, distribute the cards for the second problem, in which students switch roles. Engagement: Develop Effort and Persistence. Display or provide students with a physical copy of the written directions. Check for understanding by inviting students to rephrase directions in their own words. Keep the display of directions visible throughout the activity. Supports accessibility for: Memory; Organization Conversing: This activity uses MLR4 Information Gap to give students a purpose for discussing information necessary to determine the location of unknown points on the number line. Display questions or question starters for students who need a starting point such as: “Can you tell me . . . (specific piece of information)”, and “Why do you need to know . . . (that piece of information)?" Design Principle(s): Cultivate Conversation Student Facing Your teacher will give you either a problem card or a data card. Do not show or read your card to your partner. If your teacher gives you the problem card: 1. Silently read your card and think about what information you need to be able to answer the question. 2. Ask your partner for the specific information that you need. 3. Explain how you are using the information to solve the problem. Continue to ask questions until you have enough information to solve the problem. 4. Share the problem card and solve the problem independently. 5. Read the data card and discuss your reasoning. If your teacher gives you the data card: 1. Silently read your card. 2. Ask your partner “What specific information do you need?” and wait for them to ask for information. If your partner asks for information that is not on the card, do not do the calculations for them. Tell them you don’t have that information. 3. Before sharing the information, ask “Why do you need that information?” Listen to your partner’s reasoning and ask clarifying questions. 4. Read the problem card and solve the problem independently. 5. Share the data card and discuss your reasoning. Anticipated Misconceptions Students may struggle to make sense of the abstract information they are given if they don't choose to draw a number line. Rather than specifically instructing them to use this strategy, consider asking them a question like “How could you keep track of the information you've learned about the points so far?” Activity Synthesis Select students with different strategies to share their approaches. Invite them to share which of the clues they thought were more helpful and which were least helpful. Ask students to explain how drawing a number line helped them and how they decided on the appropriate order for the unknown numbers. 7.4: Inequality Mix and Match (15 minutes) Optional activity The goal of this activity is for students to practice comparing rational numbers. Notice students who compare fractions to decimals, fractions to integers, or who compare absolute values to negative numbers. Arrange students in groups of 2. Give students 10 minutes to work before whole-class discussion. Action and Expression: Provide Access for Physical Action. Create alternatives for physically interacting with materials. Consider creating a set of cards for each of the numbers and inequality symbols that students can select from and sequence to create true comparison statements. Invite students to talk about their statements before writing them down. Supports accessibility for: Visual-spatial processing; Conceptual processing Speaking: MLR5 Co-Craft Questions. To create space for students to produce the language of mathematical questions themselves, display only the array of numbers that the students will be using in this activity. Ask students to think about the values of the numbers and write a mathematical question using two or more numbers from the array. Students may generate questions such as “How many values are greater than zero?” or “Which numbers are opposites?” Notice students that have questions about comparing and ordering the numbers and ask them to share their questions. This will help students use conversation skills to generate, choose, and improve their questions as well as develop meta-awareness of the language used in mathematical questions. Design Principle(s): Support sense-making; Maximize meta-awareness Student Facing Here are some numbers and inequality symbols. Work with your partner to write true comparison statements. \(|\text{-}\frac {5}{2}|\) One partner should select two numbers and one comparison symbol and use them to write a true statement using symbols. The other partner should write a sentence in words with the same meaning, using the following phrases: • is equal to • is the absolute value of • is greater than • is less than For example, one partner could write \(4 < 8\) and the other would write, “4 is less than 8.” Switch roles until each partner has three true mathematical statements and three sentences written down. Student Facing Are you ready for more? For each question, choose a value for each variable to make the whole statement true. (When the word and is used in math, both parts have to be true for the whole statement to be true.) Can you do it if one variable is negative and one is positive? Can you do it if both values are negative? 1. \(x < y\) and \(|x| < y\). 2. \(a < b\) and \(|a| < |b|\). 3. \(c < d\) and \(|c| > d\). 4. \(t < u\) and \(|t| > |u|\). Activity Synthesis The goal of discussion is to allow students to use precise language when comparing rational numbers and absolute values verbally. Select previously identified students to share their responses that compare fractions to decimals, fractions to integers, or absolute values to negative numbers. Display their responses using absolute value and \(>, <, =\) symbols for all to see. Ask students to indicate whether they agree that each response is true, and ask students to share their reasoning about whether they agree or disagree. Lesson Synthesis During this lesson, students have used precise language to distinguish absolute value from order of rational numbers. Display \(|\text-8|\) and 3 questions for all to see: • “How do you say this?” (The absolute value of -8.) • “What does it mean in an elevation situation?” (It’s the distance from 8 feet below sea level to sea level.) • “What does it mean on a number line?” (It’s the distance from -8 to 0 on the number line.) • “What is its value?” (8.) Next, display \(|\text-8| < 5\) and two questions for all to see: • “How do you say this?” (The absolute value of -8 is less than 5.) • “What does it mean on a number line?” (-8 is less than 5 units away from 0.) • “Is it true?” (No, -8 is more than 5 units away from 0.) 7.5: Cool-down - True or False? (5 minutes) Student Facing We can use elevation to help us compare two rational numbers or two absolute values. • Suppose an anchor has an elevation of -10 meters and a house has an elevation of 12 meters. To describe the anchor having a lower elevation than the house, we can write \(\text-10<12\) and say “-10 is less than 12.” • The anchor is closer to sea level than the house is to sea level (or elevation of 0). To describe this, we can write \(|\text-10|<|12|\) and say “the distance between -10 and 0 is less than the distance between 12 and 0.” We can use similar descriptions to compare rational numbers and their absolute values outside of the context of elevation. • To compare the distance of -47.5 and 5.2 from 0, we can say: \(|\text-47.5|\) is 47.5 units away from 0, and \(|5.2|\) is 5.2 units away from 0, so \(|\text-47.5|>|5.2|\). • \(|\text-18|>4\) means that the absolute value of -18 is greater than 4. This is true because 18 is greater than 4.
{"url":"https://curriculum.illustrativemathematics.org/MS/teachers/1/7/7/index.html","timestamp":"2024-11-07T23:58:17Z","content_type":"text/html","content_length":"114566","record_id":"<urn:uuid:b527db78-c294-4343-aed7-eca4e92db3ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00448.warc.gz"}
SAMPLING OF GOLD & SILVER : “A FINANCIAL OPPORTUNITY IF APPLIED CORRECTLY” - Intermetperu SAMPLING OF GOLD & SILVER : “A FINANCIAL OPPORTUNITY IF APPLIED CORRECTLY” Iicio / Cursos OnLine / Sampling of Gold & Silver: «A financial opportunity if applied correctly» Theoretical and, especially practical problems generated by the sampling of materials containing gold and silver have been given enormous attention by many specialists for a long time. Nevertheless, solutions to the sampling of gold and silver are still unsatisfactory: sometimes propositions are too theoretical and useless for the engineer, and sometimes they are too simplistic and based on empirical observations without solid foundations. In this course, a special effort is made in presenting easy practical examples. This makes it possible to quickly calculate the variance of the Fundamental Sampling Error involved during the sampling of gold and silver. Peculiarities about the sampling of gold and silver can be divided into three categories: • Financial. • Theoretical. • Practical. Relatively small amounts of material can involve very large amounts of money; therefore, problems of precision and accuracy quickly become a primary concern. This is the case during early exploration, for mining evaluations and planning, for milling operations, for recycling smelters, etc. One of the main distinctions between gold and silver and other metals is the fact that they are economic at very low levels. Base metals, for example, are often estimated in percent, while gold and silver are often estimated in parts per million. Because gold has an important place in the sampling of precious metals, and is known to generate numerous practical difficulties, our discussion will often concern only this metal. However, the extension of these discussions to other precious metals is straightforward. The gold content of a sample and the gold content of the surrounding ore can be very different. Furthermore, the gold content of a tiny analytical 30-g subsample and the gold content of the 10000-g from which it was selected can also be very different. The density of gold is enormous (ρAu = 19.3), promoting strong segregation phenomena as soon as some gold particles are liberated. Gold particles do not comminute very well; therefore, gold smears and easily coats sampling equipment generating unacceptable losses and cross-contamination problems. As a result, a finely ground analytical 250-g subsample, believed to be 100% minus 100 microns, can still contain a few gold particles that may be 200 microns or even larger. This delayed comminution confuses many sampling experts in their calculations of the variance of the Fundamental Sampling Error, generating endless debates. All these problems are amplified as gold grade becomes lower, as the economics of gold deposits become marginal, and as the distribution of gold in rocks becomes erratic as studied in Part 6 of this book. Day One Lecture 1: Introduction to the 9 kinds of sampling errors. Lecture 2: Fundamental statistical concepts involved in the sampling of gold. Lecture 3: The Fundamental Sampling Error: optimization of necessary sample mass. Lecture 4: Sampling for size distribution analysis: Cardinal Rule #1for sampling gold. Lecture 5: Poisson Processes: the nightmare of sampling for gold. Day Two Lecture 6: Introduction to Poisson Processes: A real case for gold with devastating consequences. Lecture 7: Sampling for trace elements such as gold. Lecture 8: In situ Constitution Heterogeneity: a concern for geologists. Lecture 9: Guidelines for required accuracy and precision. Lecture 10: Distribution Heterogeneity: problems with gold segregation. Lecture 11: The Delimitation Error during exploration and grade control (a bias generator). Real case for Blasthole sampling. Lecture 12: The Delimitation Error at the plant (a bias generator): Real cases of bad sampling systems for material balance. Lecture 13: Problems with weightometers: Real cases leading to disagreement between the mine and the plant. Day Three Lecture 14: The Delimitation Error at the laboratory (a bias generator). Lecture 15: The Extraction Error at the mine (a bias generator): Real cases for blasthole sampling. Lecture 16: The Extraction Error at the plant (a bias generator): Real cases of bad sampling systems for material balance. Lecture 17: The Preparation Error (a bias generator). Lecture 18: An introduction to variography (a convenient tool for metallurgists). Lecture 19: Advance Variography and its many applications. Day Four Lecture 20: Benchmark Sampling Systems for Material Balance. Lecture 21: The weaknesses of Bias Tests for sampling issues. Lecture 22: Sampling Modes. Lecture 23: An introduction for management: A summary as a conclusion for the course, with exercises included. In this lecture, four real cases are presented where financial losses due to bad sampling were quantified. Dr. Francis F. Pitard is a consulting expert in Sampling, Statistical Process Control and Total Quality Management. He is President of Francis Pitard Sampling Consultants (www.fpscsampling.com) and Technical Director of Mineral Stats Inc. (www.mineralstats.com) in Broomfield, Colorado USA. He provides consulting services in many countries. Dr. Pitard has six years of experience with the French Atomic Energy Commission and fifteen years with Amax Extractive R&D. He taught Sampling Theory, SPC, and TQM for the Continuing Education offices of the Colorado School of Mines, the Australian Mineral Foundation, for the Mining Department of the University of Chile, and the University of Witwatersrand in South Africa. He has a Doctorate of Technology from Aalborg University in Denmark. He is the recipient of the prestigious Pierre Gy’s Gold Medal for excellence in promoting and teaching the Theory of Sampling (Cape Town, South Africa, 2009). Consultant of InterMet for Peru. Chairman of II Mineral Sampling Congress to be held in Lima on September 2021. 30% OFF REGULAR INVESTMENT: USD 1,500 DATE : SEPTEMBER 19 TO 22, 2023 6:00PM TO 10:00PM (Peruvian Time) No dude en comunicarce con nosotros y de inmediato lo vamos a guiar en todo lo necesario. +51 960 995 971
{"url":"https://intermetperu.com/cursos/sampling-of-gold-silver/","timestamp":"2024-11-09T01:15:58Z","content_type":"text/html","content_length":"112862","record_id":"<urn:uuid:00a25dcb-be6c-470b-928f-0a03ac9e3414>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00653.warc.gz"}
Stock covariance excel Excel COVARIANCE.S Function - Calculates the Sample Covariance of Two Sets of Values - Function Description, Examples & Common Errors. It can easily be calculated provided we have the covariance matrix and weights of all the securities in the portfolio. On this page, we discuss the marginal What is the Covariance Excel Function? The covariance Excel function is categorized under Statistical functions. Functions List of the most important Excel functions for financial analysts. This cheat sheet covers 100s of functions that are critical to know as an Excel analyst It calculates the joint variability of two random variables, given two sets of data. Key words: Value at risk (VaR), Variance-Covariance approach, Historical value (calculated from confidence level using formula “NORMSINV” in Excel), V-. 7 Nov 2014 An easy way to calculate a covariance matrix for any N-asset portfolio of stocks using Python and Quandl.com data provider. Covariance - Excel. In order to do this section, download this Excel file called Temperature. To download Excel files, you must configure your browser Excel COVARIANCE.S Function - Calculates the Sample Covariance of Two Sets of Values - Function Description, Examples & Common Errors. It can easily be calculated provided we have the covariance matrix and weights of all the securities in the portfolio. On this page, we discuss the marginal 19 Feb 2015 We can compute the covariance between every single stock in this model I can use Excel's Solver to find two efficient portfolios; the way I This article describes the formula syntax and usage of the COVARIANCE.S function in Microsoft Excel. Returns the sample covariance, the average of the products of deviations for each data point pair in two data sets. Syntax. COVARIANCE.S(array1,array2) The COVARIANCE.S function syntax has the following arguments: Array1 Required. The first cell This article describes the formula syntax and usage of the COVARIANCE.P function in Microsoft Excel. Returns population covariance, the average of the products of deviations for each data point pair in two data sets. Use covariance to determine the relationship between two data sets. The covariance calculation shows how two stocks move together, which is useful when building a diversified investment portfolio. The upper part of the diagonal is empty as the excel covariance matrix is symmetric towards the diagonal. Example #2. Calculation of covariance matrix to determine variances between the returns of different portfolio stocks. Step 1: For this example, the following data including the stock returns are considered. This article describes the formula syntax and usage of the COVARIANCE.S function in Microsoft Excel. Returns the sample covariance, the average of the products of deviations for each data point pair in two data sets. Syntax. COVARIANCE.S(array1,array2) The COVARIANCE.S function syntax has the following arguments: Array1 Required. The first cell The Bloomberg Terminal puts the industry's most powerful suite of global, multi- asset portfolio and risk analysis tools at your fingertips. 16 Jun 2017 Hence, it is calculated as the mean returns earned by an asset or a portfolio in excess of the risk-free rate per unit of volatility. The higher the Expected rate of return on Apple's common stock estimate using capital asset pricing model (CAPM). rates of return. Microsoft Excel LibreOffice Calc CovarianceAAPL, S&P 500 ÷ (Standard deviationAAPL × Standard deviationS&P 500) The Portfolio Optimizer is an Excel-based Visual Basic Application that Construct Historic, Exponentially-Smoothed, and GARCH-based Covariance Matrices The upper part of the diagonal is empty as the excel covariance matrix is symmetric towards the diagonal. Example #2. Calculation of covariance matrix to determine variances between the returns of different portfolio stocks. Step 1: For this example, the following data including the stock returns are considered. The upper part of the diagonal is empty as the excel covariance matrix is symmetric towards the diagonal. Example #2. Calculation of covariance matrix to determine variances between the returns of different portfolio stocks. Step 1: For this example, the following data including the stock returns are considered. The covariance between stock A and stock B can be calculated on the basis of returns of both the stocks at different intervals and the sample size or the number of intervals. Mathematically, it is represented as, Therefore, calculation of the covariance in excel will be as follows, The Covariance tool, available through the Data Analysis add-in in Excel, quantifies the relationship between two sets of values. The Covariance tool calculates the average of the product of deviations of values from the data set means. To use this tool, follow these steps: In this tutorial we will learn how to create covariance matrix in Excel or covariance table in Excel. Covariance is a measure of how much two random variables vary together. It’s similar to variance, but where variance tells you how a single variable varies, co variance tells you how two variables vary together. Formula for covariance: What is the Covariance Excel Function? The covariance Excel function is categorized under Statistical functions. Functions List of the most important Excel functions for financial analysts. This cheat sheet covers 100s of functions that are critical to know as an Excel analyst It calculates the joint variability of two random variables, given two sets of data. We need two additional things: the portfolio volatility and the covariances for the stocks against each other. Because there is only one covariance for a two-stock Using Excel function COVAR(), we can calculate the covariance between Vanguard 500 Index and the two stocks. Stock. Cal. REIT Brown Group. Cov( Vanguard Using Excel function COVAR(), we can calculate the covariance between Vanguard 500 Index and the two stocks. Stock. Cal. REIT Brown Group. Cov( Vanguard NOTE: You're making assumptions here that that the Covariance of which is registered on stock exchange can be calculated in excel through following steps. The Bloomberg Terminal puts the industry's most powerful suite of global, multi- asset portfolio and risk analysis tools at your fingertips. 16 Jun 2017 Hence, it is calculated as the mean returns earned by an asset or a portfolio in excess of the risk-free rate per unit of volatility. The higher the Expected rate of return on Apple's common stock estimate using capital asset pricing model (CAPM). rates of return. Microsoft Excel LibreOffice Calc CovarianceAAPL, S&P 500 ÷ (Standard deviationAAPL × Standard deviationS&P 500) The Portfolio Optimizer is an Excel-based Visual Basic Application that Construct Historic, Exponentially-Smoothed, and GARCH-based Covariance Matrices By the time you have built your portfolio, nailing the CFA Level 1 Portfolio step by using an Excel plug-in called “Data Analysis” and selecting covariance.
{"url":"https://bestftxprbcyhk.netlify.app/meadows32307vox/stock-covariance-excel-zime.html","timestamp":"2024-11-11T00:48:28Z","content_type":"text/html","content_length":"34354","record_id":"<urn:uuid:878f22ad-3b7c-49a9-882e-11d211cdd391>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00084.warc.gz"}
How does one have multiple random variables in an SDEProblem? I’m trying to simulate a stochastic state space model which looks something like: x_dot(x) = A*x + B*u(x + n) + d A and B are matrices, x, n, and d are vectors and u is an affine function. Each of the elements of n and d are independent random variables. How can I implement this as an SDEProblem for DifferentialEquations.jl? I’ve figured out how to do this for d but only for the special case where d=v*W where v is a vector and W is a scalar random variable. r = 1 kp = 0.5 kr = 0.5 A = [0 1 0 0] B = [0,1] u(x) = -kp*x[1] - kr*r f(x,p,t) = A*x + B*u(x) v = 0.3*[kp,1] g(u,p,t) = v prob = SDEProblem(f,g,[0,0], (0,10)) The documentation for SDE problem mentions that g can be a vector such that the equation becomes: But I’ve tried giving as both a vector of vectors and a matrix and it always gives an error. What does need to look like to have multiple independent random variables? See this tutorial: 1 Like Thank you! So what I was missing was the kwarg noise_rate_prototype which is an argument which is of the same type as the output of g. r = 1 kp = 0.5 kr = 0.5 A = [0 1 0 0] B = [0,1] u(x) = -kp*x[1] - kr*r f(x,p,t) = A*x + B*u(x) g(u,p,t) = 0.1*[[kp,1] [kp,0.5]] prob = SDEProblem(f,g,[0,0], (0,10), noise_rate_prototype = zeros(2, 2))
{"url":"https://discourse.julialang.org/t/how-does-one-have-multiple-random-variables-in-an-sdeproblem/122066","timestamp":"2024-11-14T08:57:38Z","content_type":"text/html","content_length":"27632","record_id":"<urn:uuid:8cf77c77-6d9f-44f7-8047-51fa3819e695>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00483.warc.gz"}
Active-learning strategies proving integral to calculus success Mathematics graduate student Justin Nguyen assists students in Math 106: Calculus I in Fall 2019 in the newly renovated Louise Pound Hall. The active-learning setup includes movable tables and chairs as well as whiteboards on all sides of the room. Photo Credit: Craig Chandler, University Communication Active learning is transforming calculus at universities nationwide—and the University of Nebraska–Lincoln is helping to lead the movement. Active-learning strategies encourage the student-led questioning, reasoning, and communication of key mathematical concepts, with instructors promoting engagement and building on student thinking. Since 2016, Nebraska’s Department of Mathematics has expanded its implementation of active-learning strategies from pre-calculus courses into Calculus I, Calculus II, and business calculus. Moreover, a team of educators, led by Wendy Smith of the Center for Science, Mathematics, and Computer Education, is part of a National Science Foundation-funded effort to support nine other universities in making similar changes. Depending on the type of institution, one-quarter to half of U.S. college students will fail their first math course, Smith said. Negative experiences in a math course lead about 50 percent of students to switch from a major in science, technology, engineering, or mathematics after their freshman year. Implementing and sustaining positive changes will require math departments to adopt a culture that values teaching and student success, Smith said. “It is well established in the literature that active learning improves student learning and therefore grades. It reduces failure rates by a third,” said Allan Donsig, chair of the first-year math task force, a faculty committee in mathematics that led the changes to pre-calculus and, more recently, to calculus. The department also recently mirrored the active-learning setup of its pre-calculus classrooms in Brace Lab by adding movable tables and chairs into five classrooms in Louise Pound Hall for calculus Student success in undergraduate calculus is growing alongside the number of classrooms that incorporate active learning. The department measures student success as the percentage of students earning a grade of C or better. By that metric, the success rate of Calculus I has risen from 62% to more than 75%, said Smith. The success rate in Calculus II, meanwhile, has approached 80%. “Implementing active learning requires many changes: suitable online homework, professional development for recitation leaders, coordination of instructors, classrooms that support active learning, and so on,” Donsig said. “We have benefited from a dedicated team of people, including math educators, faculty, and graduate student instructors.” Nathan Wakefield, director of first-year mathematics programs, oversees the training and mentoring of the graduate teaching assistants. For the past five years, Wakefield has taught a pedagogy course to the graduate TAs before they teach their first course, which has greatly contributed to the department’s successful transformation. Alongside the University of Colorado Boulder, San Diego State University, and the Association of Public and Land-grant Universities, Nebraska has entered the second phase of the five-year, $3 million NSF project. That second phase involves sharing what worked in the first phase with nine other universities now looking to incorporate active learning into their own calculus instruction. Project leaders have already visited each of the nine additional institutions and are now analyzing data. “We’re actively supporting them in enacting transformational changes to their departments,” Smith said. “In 2020, we will go back to see what changes they’ve managed to make. “As we share the lessons we’ve learned from our first phase to help these institutions accelerate transformative changes, we’ve also been helping them form a networked improvement community with each other, knowing that when you are working on similar problems, you can accelerate your own improvement by collaborating strategically.” Nebraska’s own active-learning efforts began in 2012 with a task force that focused on Math 100A through Math 103. Student-success rates in those courses rose from about 65% to 80%. “At other campuses, they’ve had similar successes, or they already had high success rates,” Smith said. “But they’ve doubled the enrollment in a subsequent course—such as doubling enrollment in Calculus II after Calculus I—while keeping the same success rate.” Freshmen retention rates correlate strongly with math grades. At Nebraska, roughly two-thirds of freshmen take a math course in their first semester. “If you’re trying to correlate freshman retention with course-taking, it’s going to correlate with math,” Smith said. “Nebraska does not want a student’s experience of failing their first math class to be something that derails their college plans or future STEM careers.” – Lindsay Augustyn, UNL CSMCE
{"url":"https://math.unl.edu/newsletter-2019-active-learning","timestamp":"2024-11-13T14:49:52Z","content_type":"text/html","content_length":"78325","record_id":"<urn:uuid:2cbd02f6-8f84-42b2-9e3a-c650c473cef9>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00512.warc.gz"}
Python Program to Solve 0-1 Knapsack Problem using Dynamic Programming with Memoization - Sanfoundry Python Program to Solve 0-1 Knapsack Problem using Dynamic Programming with Memoization This is a Python program to solve the 0-1 knapsack problem using dynamic programming with top-down approach or memoization. Problem Description In the 0-1 knapsack problem, we are given a set of n items. For each item i, it has a value v(i) and a weight w(i) where 1 <= i <= n. We are given a maximum weight W. The problem is to find a collection of items such that the total weight does not exceed W and the total value is maximized. A collection of items means a subset of the set of all items. Thus, an item can either be included just once or not included. Problem Solution 1. The function knapsack is defined. 2. It takes three arguments: two lists value and weight; and a number capacity. 3. It returns the maximum value of items that doesn’t exceed capacity in weight. 4. The function creates a table m where m[i][w] will store the maximum value that can be attained with a maximum capacity of w and using only the first i items. 5. It calls knapsack_helper on m with i=n and w=capacity and returns its return value. 6. The function knapsack_helper takes 5 arguments: two lists value and weight; two numbers i and w; and a table m. 7. It returns the maximum value that can be attained using only the first i items while keeping their total weight not more than w. 8. If m[i][w] was already computed before, this value is immediately returned. 9. If i = 0, then 0 is returned. 10. If weight[i] > w, then m[i][w] is set to m[i – 1][w]. 11. Otherwise, m[i][w] = (m[i – 1][w – weight[i]] + value[i]) or m[i][w] = m[i – 1][w], whichever is larger. 12. The above computations are done by recursively calling knapsack_helper. Program/Source Code Here is the source code of a Python program to solve the 0-1 knapsack problem using dynamic programming with top-down approach or memoization. The program output is shown below. def knapsack(value, weight, capacity): """Return the maximum value of items that doesn't exceed capacity. value[i] is the value of item i and weight[i] is the weight of item i for 1 <= i <= n where n is the number of items. capacity is the maximum weight. n = len(value) - 1 # m[i][w] will store the maximum value that can be attained with a maximum # capacity of w and using only the first i items m = [[-1]*(capacity + 1) for _ in range(n + 1)] return knapsack_helper(value, weight, m, n, capacity) def knapsack_helper(value, weight, m, i, w): """Return maximum value of first i items attainable with weight <= w. m[i][w] will store the maximum value that can be attained with a maximum capacity of w and using only the first i items This function fills m as smaller subproblems needed to compute m[i][w] are value[i] is the value of item i and weight[i] is the weight of item i for 1 <= i <= n where n is the number of items. if m[i][w] >= 0: return m[i][w] if i == 0: q = 0 elif weight[i] <= w: q = max(knapsack_helper(value, weight, m, i - 1 , w - weight[i]) + value[i], knapsack_helper(value, weight, m, i - 1 , w)) q = knapsack_helper(value, weight, m, i - 1 , w) m[i][w] = q return q n = int(input('Enter number of items: ')) value = input('Enter the values of the {} item(s) in order: ' value = [int(v) for v in value] value.insert(0, None) # so that the value of the ith item is at value[i] weight = input('Enter the positive weights of the {} item(s) in order: ' weight = [int(w) for w in weight] weight.insert(0, None) # so that the weight of the ith item is at weight[i] capacity = int(input('Enter maximum weight: ')) ans = knapsack(value, weight, capacity) print('The maximum value of items that can be carried:', ans) Program Explanation 1. The user is prompted to enter the number of items n. 2. The user is then asked to enter n values and n weights. 3. A None value is added at the beginning of the lists so that value[i] and weight[i] correspond to the ith item where the items are numbered 1, 2, …, n. 4. The function knapsack is called to get the maxmimum value. 5. The result is then displayed. Runtime Test Cases Case 1: Enter number of items: 3 Enter the values of the 3 item(s) in order: 60 100 120 Enter the positive weights of the 3 item(s) in order: 10 20 30 Enter maximum weight: 50 The maximum value of items that can be carried: 220 Case 2: Enter number of items: 5 Enter the values of the 5 item(s) in order: 10 5 20 40 30 Enter the positive weights of the 5 item(s) in order: 4 1 10 20 7 Enter maximum weight: 10 The maximum value of items that can be carried: 35 Case 3: Enter number of items: 1 Enter the values of the 1 item(s) in order: 5 Enter the positive weights of the 1 item(s) in order: 5 Enter maximum weight: 2 The maximum value of items that can be carried: 0 Sanfoundry Global Education & Learning Series – Python Programs. To practice all Python programs, here is complete set of 150+ Python Problems and Solutions.
{"url":"https://www.sanfoundry.com/python-program-solve-0-1-knapsack-problem-using-dynamic-programming-memoization/","timestamp":"2024-11-13T10:03:13Z","content_type":"text/html","content_length":"142430","record_id":"<urn:uuid:13e2b638-0207-428e-acd3-d308d9b52010>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00502.warc.gz"}
The use of new sensors and autonomous observing systems has produced a wealth of high-quality data in all branches of environmental and space science. These data contain important information about distributions, fluxes or reaction rates of key properties in the universe. Inverting the datasets, e.g., calculating the underlying concentrations, fluxes and rate constants from the data, is an important aspect of data analysis, and a wide range of numerical methods is available for this task. This course offers an introduction to linear inverse methods. Techniques for the solution of under- and overdetermined systems of linear equations will be covered in detail. Examples of such systems are (1) linear and non-linear regression, (2) curve fitting, (3) factor analysis, (4) diagnostic tomography, (5) remote sensing from airplanes or satellites, and (6) models of atmospheric, oceanic, and space circulation and biogeochemistry. Contrary to square linear systems that are easy to solve, in general, under- and overdetermined linear systems exhibit complications: (1) the numbers of equations and unknowns differ, and (2) coefficients and right-hand-side of the equations usually are derived from measurements and thus contain errors. Basic techniques from numerical mathematics that solve these problems will be presented and explained extensively using examples from different fields. Error analysis will be of major concern. The examples cover different aspects of environmental and space research and should benefit students from the Postgraduate Environmental Physics program and newly started Masters Degree in Space Sciences and Technologies, as well as students from other fields of physics and geophysics. A basic knowledge of linear algebra is required. Outcome: - Techniques for the optimal solution of under- and over determined systems of linear equations - Methods for calculating variances and covariances of the solutions - Concepts of resolution (in solution as well as data) and methods to calculate them - Practical examples and applications to test data sets from remote sensing of the atmosphere, earth, outer space, and celestial bodies, as well as oceanography
{"url":"https://astraiosdb.utwente.nl/dataset/astraios_courses/resource/http%3A%2F%2Fastraiosdb.utwente.nl%2Fdata%2Fcourse%2F1372","timestamp":"2024-11-09T14:30:11Z","content_type":"text/html","content_length":"25548","record_id":"<urn:uuid:65dc34ac-3f77-439a-b93c-0f17f10b27ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00763.warc.gz"}
Time Series Analysis for Time series analysis is a statistical technique to analyze the pattern of data points taken over time to forecast the future. The major components or pattern that are analyzed through time series • Trend Increase or decrease in the series of data over longer a period. • Seasonality Fluctuations in the pattern due to seasonal determinants over a short period. • Cyclicity Variations occurring at irregular intervals due to certain circumstances. • Irregularity Instability due to random factors that do not repeat in the pattern. Time Series Analysis for Data-driven Decision-Making Time series analysis helps in analyzing the past, which comes in handy to forecast the future. The method is extensively employed in a financial and business forecast based on the historical pattern of data points collected over time and comparing it with the current trends. This is the biggest advantage used by organizations for decision making and policy planning by several organizations. Applicability of Time Series Analysis Time Series analysis is “an ordered sequence of values of a variable at equally spaced time intervals.” It is used to understand the determining factors and structure behind the observed data, choose a model to forecast, thereby leading to better decision making. The Time Series Analysis is applied for various purposes, such as: • Stock Market Analysis • Economic Forecasting • Inventory studies • Budgetary Analysis • Census Analysis • Yield Projection • Sales Forecasting and more. Time Series Analysis - Statistical Elaboration and Significance Time series refers to a series of data indexed data in temporal order. Time series analysis is the technique of analyzing time-series data to pull out the statistics and characteristics related to the data. There are two methods for the time series analysis: • Frequency Domain Method It includes wavelet analysis and spectral analysis. • Time Domain Method It includes cross-correlation and autocorrelation. The time series analysis technique can be further divided into the following: • Parametric Approach The process assumes that the underlying stationary process follows a structure that can be explained in small parameters. • Non- parametric Approach It estimates the covariance instead of assuming any structure. The time series analysis can also be classified into linear, non-linear, univariate, and multivariate. The method utilizes historical data to analyze patterns and trends, issues related to seasonality and cyclical fluctuation to forecast the future. It is widely popular in investment to track the price of a security over a period. It also determines the changes of data due to change in a variable over the same time. For instance, the change in stock share price depending on an economic variable like the unemployment rate can also be recorded through time series analysis. It brings out the pattern of a situation reflecting the relationship between the data point and the variable. Time Series Modeling There are different models of time series analysis to bring out the desired results: ARIMA Model ARIMA stands for Autoregressive Integrated Moving Average model, which is a type of regression analysis that measures the influence of one dependent variable corresponding to changing variables. The model is used to forecast moves in the financial market, analyzing the differences in values in a series rather than the actual values. ARIMA can be classified into three components: AR stands for Autoregression, where the dependent relationship is used between observation and many lagged observations. I stands for Integrated, where raw observation is differenced and is used to make the time series stationary. MA or the Moving Average, use the dependency between observation and residual error. Each component is defined as a parameter which is substituted as integers to indicate the usage of the ARIMA model. Following are the parameters: p – denotes lag order or the number of lag observations. d – denotes the degree of differencing or the number of times the raw observations are differenced. q – denotes the order of moving average or the size of the moving average window. ARIMA and Stationarity A stationary model is when there is consistency in data over a period. ARIMA model makes the data stationary by differencing. For example, most of the economic data reflects a trend. Differencing of data removes the trends to make it stationary. Below is an example of monthly index values that are analyzed monthly. The plot suggests that the data is non-stationary, showing an upward trend. Therefore, the ARIMA model can analyze and forecast and make the data stationary: ARIMA and Stationarity The Autoregressive (AR) model forecasts the future, deriving the behavioral pattern from the past data. It is useful when there is a correlation between the data in a time series. The model is based on the linear regression of the data in the current time series against the previous data on the same series. Below is an example of Google stock price from 2-7-2005 to 7-7-2005, which has n = 105 values. The data is analyzed to identify the AR model. In the figure below, the plot represents stock prices vs. The values closely follow each other, suggesting the need for AR model. The next plot shows a partial autocorrelation for the data: We can create a lag-1 price variable and compare the scatter plot with the lag-1 variable: A moderate linear pattern can be observed, suggesting the suitability of the first-order AR model. Moving Average Model (MA) Moving the average process is used to model the univariate time series. The model defines that the output variable is linearly contingent on present and the past data of a time series. It uses past errors in the forecast in a regression instead of the past value of the forecast variable. Moving average helps in reducing the “noise” in the price. If, in a chart, the moving average in angled upward, it quite suggests a rise in price, if it points downward, it indicates the price is going down, and in case it is moving sideways, the price is likely to be in range. In an upward trend of 50, 100, or 200 days, the moving average may support like a floor on which the price bounces. Check the chart below form GE. How to Choose the Model Time-series analysis use techniques to derive insights on autocorrelated data, correlation of a series. However, the models need to be chosen properly to get accurate results. To choose the models, we must have clear objectives: • What are we forecasting? • What are the success parameters? • What is then forecast horizon? The next step is to analyze if the dataset is stationary or having a constant variable over time or non-stationary. This will help in determining the suitable forecasting model. Time Series in Relation to Python and R R is a programming language used in statistical computing. The R software environment is a larger ecosystem and is functional with in-built data analysis methods. Python, on the other hand, is language with statistical modules for general purpose. It is more object-oriented and has “main” packages for data analysis. Though there are clear points of differences, both R and Python continue to be stronger and popular and have common syntax for several tasks. Below is an example of time series analysis using R: Extracting the trend, seasonality and error in European Stock Market - The decompose() and forecast::stl() splits the time series into seasonality, trend and error components. The chart below shows a significant autocorrelation of lags on x-axis for AirPassengers Testing if the time series is stationary Augmented Dickey-Fuller Test (ADF test) is used where a less than 0.05 p-value indicates that the time series is stationary. Below is an example of a time series analysis of furniture sales using Python:Data Time series analysis and forecasting of furniture sales were done: Furniture sales data for 4-years: Timestamp(‘2014–01–06 00:00:00’), Timestamp(‘2017–12–30 00:00:00’) Processing the Data checking the missing values, removing the unwanted columns, and total sales in chronological order. Indexing with Time Series Data The current date and time data might be tricky; hence, it is better to work with the average daily sales value for the month. Furniture sales data of 2017 Furniture Sales Time Series Data Time Series Analysis and Forecasting Time series analysis is recording data at regular intervals. The analysis helps in forecasting future values based on past trends, which often leads to an informed decision, crucial for business. Case Study An international manufacturing company was facing unpredictable volatility in the price of the primary raw material input required to manufacture its product. The client wanted to create a forecasting model to predict the price of the raw material for the next 12 months. An autoregressive time series model was developed to predict the future price, and they utilized the forecasted raw material price to monitor the cost of production, thereby increasing the profit. The model forecasted the global price of the raw material, which helped them make a better decision. Hire Research Optimus for Time Series Analysis and Forecasting for your Business Research Optimus provides time series analysis and other statistical solutions to businesses of all sizes. Our data science experts analyze every dataset with accuracy and precision to help you in better decision making for your business. Contact us today to learn more about Research Optimus and our exhaustive services. Mutual Fund Managers
{"url":"https://www.researchoptimus.com/article/what-is-time-series-analysis.php","timestamp":"2024-11-10T05:53:39Z","content_type":"text/html","content_length":"155810","record_id":"<urn:uuid:56958147-7ac6-40ed-b074-cc08512dd2eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00452.warc.gz"}
Ghaith A. Hiary I am an associate professor in the Department of Mathematics at the Ohio State University. You can reach me at hiary.1@osu.edu or hiaryg@gmail.com. Publications and preprints Note. The arXiv version most likely differs from published version. Code, data, and experiments • An explicit van der Corput bound for $\zeta(1/2+it)$ (paper here). Mathematica notebook to verify the van der Corput bound on the Riemann zeta function • An alternative to Riemann--Siegel type formulas (paper here). • A Deterministic $n^{1/3+o(1)}$ integer factoring algorithm (paper here). • I've implemented the amortized complexity algorithm to compute zeta, described here. The implementation is in C++, aided by python/Sage scripts to orgnize the multi-process computation. It also includes a separate Sage program to extract "zeta data" -e.g. derivative, max,...- using band-limited interpolation. A database of (by now) ~600 million zeros near t = 10^28 (as well as smaller sets of about 200 million zeros at lower heights) has been obtained using the amortized algorithm, together with "raw data" files that allow quick extraction of further data that might be of interest. The computation took a few months on the riemann machine at U. Waterloo. Sample data for the zeta function obtained using the amortized algorithm • This is a previous coding collaboration, with Jonathan Bober to implement my $T^{1/3+o(1)}$-algorithm to compute zeta, which is described here, here, and here. The implementation is in C++. It was quite useful during the implementation to constantly compare answers obtained from the C++ code with answers obtained from a basic version of the algorithm that I implemented in Mathematica back in 2009. The implementation essentially consisted of coding up the various formulas that were specified in the papers describing the algorithm. OSU Number Theory Seminar Some slides
{"url":"https://people.math.osu.edu/hiary.1/","timestamp":"2024-11-07T18:51:09Z","content_type":"application/xhtml+xml","content_length":"14910","record_id":"<urn:uuid:91231f80-cb1d-4204-a50d-c8f2169ef1f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00706.warc.gz"}
Advent of Code: 2020 Day 05 This is a post with my solutions and learning from the puzzle. Don't continue reading if you haven't tried the puzzle on your own yet. If you want to do the puzzle, visit adventofcode.com/2020/day/5. My programming language of choice is python and all examples below are in python. Key learning This puzzle makes play with binaries. Though not explicitly, but knowing of binaries will help out with this puzzle. The puzzle today is about transforming seats to seat-ids using binary space partitioning. The seats consist of 10 letters. The first 7 can be F or B and the last 3 can be R or L. The letters stand for front, back, left, right. The first 7 characters will either be F or B; these specify exactly one of the 128 rows on the plane (numbered 0 through 127). Each letter tells you which half of a region the given seat is in. Start with the whole list of rows; the first letter indicates whether the seat is in the front (0 through 63) or the back (64 through 127). The next letter indicates which half of that region the seat is in, and so on until you're left with exactly one row. The last three characters will be either L or R; these specify exactly one of the 8 columns of seats on the plane (numbered 0 through 7). The same process as above proceeds again, this time with only three steps. L means to keep the lower half, while R means to keep the upper half. Every seat also has a unique seat ID: multiply the row by 8, then add the column. Example input for some seats: Part 1 The puzzle is to find the highest seat ID. To do this we need to transform the seat-names to IDs and find the one with maximum value. The puzzle differentiates the first seven and last three letters. The value of the row should be multiplied with 8 and then the column value is added. That is just a trick to make the puzzle seem more difficult. Multiplying the row value with 8 is the same as shifting it with 3 bits because of 2^3 = 8. Adding the column value then is just concatenating the two binary strings. We can treat the seat-name as a binary where B and R are 1 and the other 0. def calc_id(seat): transformation = { 'F': '0', 'B': '1', 'L': '0', 'R': '1'} binary = "".join([transformation[x] for x in seat]) return int(binary,2) def part1(lines): ids = [calc_id(line) for line in lines] return max(ids) print "Part 1 solution: %d " % part1(lines) Part 2 The second is to find which pair of IDs are 2 steps apart. We can reuse our calc_id function above. def part2(lines): ids = [calc_id(line) for line in lines] for i in range(max(ids)): low = i - 1 high = i + 1 if low in ids and high in ids and i not in ids: return i print "Part 2 solution: %d " % part2(lines) Alternative solutions The transformation above could be solved with using an if statement in the list comprehension def calc_id(seat): binary = "".join(['1' if x in ['B','R'] else '0' for x in seat]) return int(binary,2) Or using the string-replacement. def calc_id(seat): binary = seat.replace('B', '1').replace('R', '1').replace('F', '0').replace('L', '0') return int(binary,2) Using a translation table in python: def calc_id(seat): translate_table = str.maketrans(dict(B="1", F="0", R="1", L="0")) return int(seat.translate(translate_table), 2) Another alternative is to use binary calculations to transform the seat-names to IDs. Here is an example: def part1(lines): ids = [] for line in lines: current = 0 row = 64 for letter in line[:7]: if letter == 'B': current += row row = row >> 1 # Shift right 1 bit (same as dividing by 2) current = current << 3 # Shift left 3 bits (same as multiplying with 8) row = 4 for c in line[7:]: if c == 'R': current += row row = row >> 1 # Shift right 1 bit (same as dividing by 2) return max(ids) My initial train of thought of part 1 was to find the maximum value by sorting the strings and manually calculate the binary number. As B is before F in the alphabet you could do sort the lines to get max row-value without even transforming to binary. Though R is after L and the column value will get reversed sorted. I forgot the reversed sorting on the column values, therefore failed and changed tactics. Guessing that was the purpose of AoC to differentiate column and row values. This would be a full solution for part 1: sorted_data = sorted([ for line in lines print sorted_data[0] # Step 2: Manually transform string to binary-string and then to # decimal with help of a online-calculator. A last alternative: An unreadable oneliner for part 1: def part1(lines): return max([ int("".join([{ 'F': '0', 'B': '1', 'L': '0', 'R': '1'}[letter] or letter in line]), 2) for line in lines]) Python tips Transform dictionaries Dictionaries is a nice way to transform values during AoC puzzles. As I did in the part 1 above: transformation = { 'F': '0', 'B': '1', 'L': '0', 'R': '1' transformation['F'] # Returns '0' transformation['B'] # Returns '1' transformation['L'] # Returns '0' transformation['R'] # Returns '1' To concatenate an array of strings it's easiest to use the join-function. The join-function is called on a string that will work as a delimiter. A empty string would then join the array-strings without delimiter. # => "comma,seperated,string" # => "noseperationatall" int('10101010', 2) Using int() we can convert strings to numbers. If we want to convert a binary string we have to set the base to 2. # Converting string using base 2 int('10111', 2) # outputs: 23 # Hexadecimal example: int('FF', 16) # outputs: 255 In our solution we got a array of numbers. Finding the maximum value can be tempting to use an for-loop and a maximum_value variable. The max() function in python can take in an array as argument and does it for max([6,2,3,8,3,9]) # outputs: 9 Thanks for reading! I hope these solutions were helpful for you. Complete code can be found at: Top comments (0) For further actions, you may consider blocking this person and/or reporting abuse
{"url":"https://practicaldev-herokuapp-com.global.ssl.fastly.net/cnille/advent-of-code-2020-day-05-1kg5","timestamp":"2024-11-14T14:33:19Z","content_type":"text/html","content_length":"101234","record_id":"<urn:uuid:9089ab2e-5f4c-4612-89e1-9a788d731126>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00241.warc.gz"}
Method for efficient computation of the density of states in water-explicit biopolymer simulations on a lattice We present a method for fast computation of the density of states of binary systems. The contributions of each of the components to the density of states can be separated based on the conditional independence of the individual components' degrees of freedom. The conditions establishing independence are the degrees of freedom of the interfacial region between the two components. The separate contributions of the components to the density of states can then be calculated using the Wang-Landau algorithm [Wang, F.; Landau, D. P. Phys. Rev. Lett. 2001, 86, 2050]. We apply this method to a 2D lattice model of a hydrophobic homopolytmer in water that exhibits protein-like cold, pressure, and thermal unfolding. The separate computation of the protein and water density of states contributions is faster and more accurate than the combined simulation of both components and allows for the investigation of larger systems. All Science Journal Classification (ASJC) codes • Physical and Theoretical Chemistry Dive into the research topics of 'Method for efficient computation of the density of states in water-explicit biopolymer simulations on a lattice'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/method-for-efficient-computation-of-the-density-of-states-in-wate","timestamp":"2024-11-10T01:52:03Z","content_type":"text/html","content_length":"50235","record_id":"<urn:uuid:0477e6f1-4e8c-4fd9-96a6-0804fba13ae7>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00158.warc.gz"}
Printable Figure Drawings Rate Laws Worksheet Answers Rate Laws Worksheet Answers - 3 x + 2 y → z: The reaction is first order in [a] the reaction is second order. For reactions of the form. Web explain the form and function of a rate law. Web rate law equation worksheet 1 answers. What is the order of. Web determine the integrated rate law, the differential rate law, and the value of the rate constant. Without direct calculations, find the following. 3 x + 2 y → z: Web explain the form and function of a rate law. Use rate laws to calculate reaction rates. Given reaction rate data for: How will the rate change if the concentration of a is tripled? Web the rate laws discussed thus far relate the rate and the concentrations of reactants. Web use the following data to answer the questions below: (2) (1) time (s) time (min) 1st order. Use rate and concentration data to identify reaction orders and derive rate. (2) (1) time (s) time (min) 1st order. Use the ln[] vs time graph to. Web explain the form and function of a rate law. Given the following equations and experimental data write |. Given the following equations and experimental data, write the correct rate law equation including the value for the rate. Web kinetics 15.1 reaction rates and the rate law worksheet. Web reaction order and rate law expression worksheet 1. 1) the data below shows the change in concentration of dinitrogen pentoxide over time, at 330 k, according to the. Web use. To know and be able to apply the integrated rate laws for zeroth, first, and second order rate laws. Web no 2nd h2 1st write the rate equation for the reaction. What is the order of. Web use the following data to answer the questions below: Web determine the integrated rate law, the differential rate law, and the value of. To know and be able to apply the integrated rate laws for zeroth, first, and second order rate laws. Use rate laws to calculate reaction rates. Web determine the integrated rate law, the differential rate law, and the value of the rate constant. For the reaction x + y → a + b, the rate law for the reverse reaction. 3 x + 2 y → z: Web the rate laws discussed thus far relate the rate and the concentrations of reactants. Using the experimental data provided, determine the order of reaction with respect to each reactant, write the rate law, determine the overall order of. Given the following equations and experimental data, write the correct rate law equation including. Is the following statement true or false? Web kinetics worksheet answers 1 initial rates problems key 1. Web explain the form and function of a rate law. How will the rate change if the concentration of a is tripled? Use rate and concentration data to identify reaction orders and derive rate laws. Given the following equations and experimental data, write the correct rate law equation including the value for the rate. Web determine the integrated rate law, the differential rate law, and the value of the rate constant. Web no 2nd h2 1st write the rate equation for the reaction. (2) (1) time (s) time (min) 1st order. Without direct calculations, find. Given the following equations and experimental data write |. Web no 2nd h2 1st write the rate equation for the reaction. If rate1=k[a]2, then rate2=k[3a]2=32* k[a]2=9* k[a]2=9*. For a reaction where the rate equation is r = k[nh 4 (aq)][no2 (aq)], +. For the reaction 1a + 2b + 1c à 2d + 1e, the rate law is: Use rate and concentration data to identify reaction orders and derive rate laws. Web kinetics 15.1 reaction rates and the rate law worksheet. For reactions of the form. What is the order of. Note that the first step is a rapidly established equilibrium,. We can also determine a second form of each rate law that relates the concentrations of. Rate = k[h 2 o 2] integrated rate law: For the reaction x + y → a + b, the rate law for the reverse reaction is second order if the reaction is elementary. Given reaction rate data for: Use rate and concentration data. Rate Laws Worksheet Answers - Web reaction order and rate law expression worksheet 1. Use the ln[] vs time graph to. Web kinetics worksheet answers 1 initial rates problems key 1. Web explain the form and function of a rate law. Is the following statement true or false? Web use the following data to answer the questions below: 3 x + 2 y → z: Web explain the form and function of a rate law. Reaction order and rate law expression worksheet 1. 1) the data below shows the change in concentration of dinitrogen pentoxide over time, at 330 k, according to the. Web explain the form and function of a rate law. The reaction is first order in [a] the reaction is second order. We can also determine a second form of each rate law that relates the concentrations of. Using the experimental data provided, determine the order of reaction with respect to each reactant, write the rate law, determine the overall order of. Web we need to express the rate law for the rate determining step in terms of observable starting materials. Using the experimental data provided, determine the order of reaction with respect to each reactant, write the rate law, determine the overall order of. Web explain the form and function of a rate law. Web use the following data to answer the questions below: For each of the graphs below, (a) determine the order, (b) write the corresponding rate law expression and (c) units of k. Web kinetics worksheet answers 1 initial rates problems key 1. We can also determine a second form of each rate law that relates the concentrations of. How will the rate change if the concentration of a is tripled? Web we need to express the rate law for the rate determining step in terms of observable starting materials. If rate1=k[a]2, then rate2=k[3a]2=32* k[a]2=9* k[a]2=9*. Web reaction order and rate law expression worksheet 1. Use Rate Laws To Calculate Reaction Rates. The reaction is first order in [a] the reaction is second order. For each of the graphs below, (a) determine the order, (b) write the corresponding rate law expression and (c) units of k. Note that the first step is a rapidly established equilibrium,. Web use the following data to answer the questions below: Is The Following Statement True Or False? Web explain the form and function of a rate law. Use rate and concentration data to identify reaction orders and derive rate. 3 x + 2 y → z: Consider the initial rate data for the reaction,. Use Rate And Concentration Data To Identify Reaction Orders And Derive Rate Laws. Use rate laws to calculate reaction rates. Reaction order and rate law expression worksheet 1. Use the ln[] vs time graph to. For the reaction x + y → a + b, the rate law for the reverse reaction is second order if the reaction is elementary. Web We Need To Express The Rate Law For The Rate Determining Step In Terms Of Observable Starting Materials. Rate = k [no]2 [h2] calculate the rate constant at 904 °c. Web given that the reaction is first order in \(no\) and in \(o_3\), determine the rate constant using your calculated rate for each set of data points; For reactions of the form. Rate = k[h 2 o 2] integrated rate law:
{"url":"https://tunxis.commnet.edu/view/rate-laws-worksheet-answers.html","timestamp":"2024-11-05T15:52:38Z","content_type":"text/html","content_length":"34393","record_id":"<urn:uuid:1c462e66-3d77-4e07-a301-c86e6de6cc09>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00609.warc.gz"}
Van der Waals interactions between hydrocarbon molecules and zeolites: Periodic calculations at different levels of theory, from density functional theory to the random phase approximation and Moller-Plesset perturbation theory The adsorption of small alkane molecules in purely siliceous and protonated chabazite has been investigated at different levels of theory: (i) density-functional (DFT) calculations with a gradient-corrected exchange-correlation functional; DFT calculations using the Perdew-Burke- Ernzerhof (PBE) functional with corrections for the missing dispersion forces in the form of C[6]R^6 pair potentials with (ii) C[6] parameters and vdW radii determined by fitting accurate energies for a large molecular data base (PBE-d) or (iii) derived from atoms in a solid calculations; (iv) DFT calculations using a non-local correlation functional constructed such as to account for dispersion forces (vdW-DF); (v) calculations based on the random phase approximation (RPA) combined with the adiabatic-coupling fluctuation-dissipation theorem; and (vi) using Hartree-Fock (HF) calculations together with correlation energies calculated using second-order Moller-Plesset (MP2) perturbation theory. All calculations have been performed for periodic models of the zeolite and using a plane-wave basis and the projector-augmented wave method. The simpler and computationally less demanding approaches (i)-(iv) permit a calculation of the forces acting on the atoms using the Hellmann-Feynman theorem and further a structural optimization of the adsorbate-zeolite complex, while RPA and MP2 calculations can be performed only for a fixed geometry optimized at a lower level of theory. The influence of elevated temperature has been taken into account by averaging the adsorption energies calculated for purely siliceous and protonated chabazite, with weighting factors determined by molecular dynamics calculations with dispersion-corrected forces from DFT. Compared to experiment, the RPA underestimates the adsorption energies by about 5kJ/mol while MP2 leads to an overestimation by about 6 kJ/Mol (averaged over methane, ethane, and propane). The most accurate results have been found for the hybrid RPA-HF method with an average error of less than 2 kJ/mol only, while RPA underestimates the adsorption energies by about 8 kJ/mol on average. MP2 overestimates the adsorption energies slightly, with an average error of 5 kJ/mol. The more approximate and computationally less demanding methods such as the vdW-DF density functional or the C[6]R^6 pair potentials with C [6] parameters from atoms in a solid calculations overestimate the adsorption energies quite strongly. Relatively good agreement with experiment is achieved with the empirical PBEd method with an average error of about 5 kJ/mol. ASJC Scopus subject areas • General Physics and Astronomy • Physical and Theoretical Chemistry Dive into the research topics of 'Van der Waals interactions between hydrocarbon molecules and zeolites: Periodic calculations at different levels of theory, from density functional theory to the random phase approximation and Moller-Plesset perturbation theory'. Together they form a unique fingerprint.
{"url":"https://experts.arizona.edu/en/publications/van-der-waals-interactions-between-hydrocarbon-molecules-and-zeol","timestamp":"2024-11-12T23:21:42Z","content_type":"text/html","content_length":"62461","record_id":"<urn:uuid:8faadfe1-20f7-4dc8-a3a9-4dfcbdccde06>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00788.warc.gz"}
Practice Conditions and Trigonometry with the exercise "Nature of triangles" Learning Opportunities This puzzle can be solved using the following concepts. Practice using these concepts and improve your skills. You have to output the nature of the triangles whose vertices’ coordinates are given. The output should follow this format: Name of triangle is a/an side nature and a/an angle nature triangle. Name of triangle follows the same order as the vertices given. Side nature is: • “scalene” if all sides have different lengths, or • “isosceles in vertex” if exactly two sides have the same length and they have a common vertex of vertex. Angle nature is: • “acute” if all angles are acute, or • “right in vertex” if the angle at vertex is 90°, or • “obtuse in vertex (degrees°)” if the angle at vertex is obtuse. In this case, output the measure of the obtuse angle in degrees, rounded to the nearest integer. Output examples BAC is a scalene and a right in A triangle. DEF is an isosceles in D and an obtuse in D (120°) triangle. Line 1: An integer N for the number of triangles. Next N lines: Each vertex followed by its x and y coordinates, one triangle per line. N lines: The nature of the triangles, one triangle per line, in the same order as the input. 1 ⩽ N ⩽ 8 -20 ⩽ x, y ⩽ 20 x and y are integers. Degenerate triangles do not appear in this puzzle. Equilateral triangles do not appear in this puzzle because they involve non-integer coordinates (calculation involves √3). A 5 -2 B 8 2 C -1 -9 O 0 0 A 3 0 B 1 2 ABC is a scalene and an obtuse in A (176°) triangle. OAB is a scalene and an acute triangle. A higher resolution is required to access the IDE
{"url":"https://www.codingame.com/training/easy/nature-of-triangles","timestamp":"2024-11-03T19:20:51Z","content_type":"text/html","content_length":"147176","record_id":"<urn:uuid:0022cfcd-62c8-4b7a-94a5-61683e583954>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00255.warc.gz"}
artition models Partition models From Encyclopedia of Mathematics Copyright notice This article Partition models was adapted from an original article by Peter McCullagh, which appeared in StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies. The original article ([http://statprob.com/encyclopedia/PartitionModels.html StatProb Source], Local Files: pdf | tex) is copyrighted by the author(s), the article has been donated to Encyclopedia of Mathematics, and its further issues are under Creative Commons Attribution Share-Alike License'. All pages from StatProb are contained in the Category StatProb. 2020 Mathematics Subject Classification: Primary: 60-XX Secondary: 60C0560K9962E99 [MSN][ZBL] $ \def\given{ {\,|\,}} $ $ \def\E{ {\cal E}} $ $ \def\K{ {\cal K}} $ $ \def\N{ {\cal N}} $ $ \def\R{ {\cal R}} $ $ \def\S{ {\cal S}} $ $ \def\T{ {\cal T}} $ $ \def\U{ {\cal U}} $ $ \def\Nat{ {\mathbb N}} $ $ \def\upto{ {,\ldots,}} $ $ \def\subdot{ { {\mathbf.}}} $ $ \def\bfk{\vecf{k}} $ $ \def\bfm{\vecf{m}} $ $ \def\bfn{\vecf{n}} $ $ \def\per{\mathop{\rm per}\nolimits} $ $ \def\cyp{\mathop{\rm cyp}\nolimits} $ $ \def\cof{\mathop{\rm cof}\nolimits} $ $ \def\cov{\mathop{\rm cov}\nolimits} $ $ \def\pr{\mathop{\rm pr}\nolimits} $ $ \def\tr{\mathop{\rm tr}\nolimits} $ $ \def\NJ{\mathop{\rm NJ}\ nolimits} $ $ \def\up{\uparrow} $ $ \def\down{\downarrow} $ $ \def\zero{ {\mathbf{0}}} $ $ \def\one{ {\mathbf{1}}} $ $ \def\booktitle#1{ ''#1''} $ $ \def\vol#1{ {\mathbf{#1}}} $ $ \def\journal#1{ ''# 1''} $ Partition models Peter McCullagh University of Chicago Set partitions For integer $n\ge 1$, a partition $B$ of the finite set $[n] = \{1,\ldots, n\}$ is (i) a collection $B = \{b_1,\ldots\}$ of disjoint non-empty subsets, called blocks, whose union is $[n]$; (ii) an equivalence relation on $[n]$, i.e. a symmetric Boolean function $B\colon[n]\times[n]\to\{0,1\}$ that is also reflexive and transitive; (iii) a block factor or symmetric binary matrix of order~$n$ such that $B_{ij} = 1$ if $i,j$ belong to the same block. These equivalent representations are not distinguished in the notation, so $B$~is a set of subsets, a Boolean function, a subset of $[n]\times [n]$, or a symmetric binary matrix, as the context demands. In practice, a partition is frequently written in an abbreviated form, such as $B = 2|13$ for a partition of $[3]$ or $u_2|u_1,u_3$ for a partition of three objects $\{u_1, u_2, u_3\}$.. In this notation, the partitions of $[2]$ are $12$ and $1|2$, and the five partitions of $[3]$ are $$ 123,\quad 12|3,\quad 13|2,\quad 23|1,\quad 1|2|3. $$ The blocks are unordered and unlabelled, so there is no concept of a first block or a last block, and $2|13$ is the same partition as $13|2$ and $2|31$. A partition $B$ is a sub-partition of $B^*$ if each block of $B$ is a subset of some block of $B^*$ or, equivalently, if $B_{ij} = 1$ implies $B^*_{ij}=1$. This relationship is a partial order denoted by $B\le B^*$, which can be interpreted as $B \subset B^*$ if each partition is regarded as a subset of $[n]^2$. The partition lattice $\E_n$ is the set of partitions of $[n]$ with this partial order. To each pair of partitions $B, B'$ there corresponds a greatest lower bound $B\wedge B'$, which is the set intersection or Hadamard component-wise matrix product. The least upper bound $B\vee B'$ is the least element of $\E_n$ that is greater than or equal to both, the transitive completion of $B \cup B'$. The least element $\zero_n\in \E_n$ is the partition with $n$ singleton blocks, and the greatest element is the single-block partition denoted by~$\one_n$. As matrices, $\zero_n$ is the identity, whereas $\one_n = [n]^2$ is the matrix whose components are all one. A permutation $\sigma\colon[n]\to[n]$ induces an action $B\mapsto B^\sigma$ by composition such that the transformed partition is $B^\sigma(i,j) = B(\sigma(i), \sigma(j))$ in the form of an equivalence relation. In matrix notation, $B^\sigma = \sigma B \sigma^{-1}$, so the action by conjugation maintains symmetry by permuting both the rows and columns of $B$ in the same way. The block sizes are preserved and are maximally invariant under conjugation. In this way, the 15 partitions of $[4]$ may be grouped into five orbits or equivalence classes as follows: $$ 1234,\quad 123|4 \, [4],\quad 12|34\,[3],\quad 12|3|4\,[6], \quad 1|2|3|4. $$ Thus, for example, $12|34$ is the representative element for one orbit, which also includes $13|24$ and $14|23$. The symbol $\#$ applied to a set denotes the number of its elements, so $\#B$ is the number of blocks, and $\#b$ is the size of block $b\in B$. As a matrix, $B$~is positive semi-definite of rank~$\# B$. A partition distribution is defined on the finite set $\E_n$, and the first few values of $\#\E_n$ are 1, 2, 5, 15, 52, called Bell numbers. More generally, $\#\E_n$ is the $n$th moment of the unit Poisson distribution whose exponential generating function is \[ \exp(e^t - 1) = 1 + \sum_{n=1}^\infty t^n\, \#\E_n / n!. \] In the discussion and manipulation of explicit probability models on $\E_n$, it is helpful to use the ascending and descending factorial symbols \begin{eqnarray*} \alpha^{\up r} &=& \alpha(\alpha+1) \cdots (\alpha+r-1) = \Gamma(r+\alpha)/\Gamma(\alpha)\\ k^{\down r} & =& k(k-1)\cdots (k-r+1) \end{eqnarray*} for integer~$r\ge 0$. Note that $k^{\down r} = 0$ for positive integers $r > k$. By convention $\alpha^{\up 0} = 1$. It is not a coincidence that $\alpha^{\up r}$ is the ordinary generating function for the Stirling numbers of the first kind~$S_{n,r}$, the number of permutations $[n]\to[n]$ having exactly $r$~cycles. Dirichlet partition model The term partition model refers to a probability distribution, or family of probability distributions, on the set $\E_n$ of partitions of~$[n]$. In some cases, the probability is concentrated on the the subset $\E_n^k \subset \E_n$ of partitions having $k$ or fewer blocks. A distribution on $\E_n$ such that $p_n(B) = p_n(\sigma B\sigma^{-1})$ for every permutation $\sigma\colon[n]\to[n]$ is said to be finitely exchangeable. Equivalently, $p_n$~is exchangeable if $p_n(B)$ depends only on the block sizes of~$B$. Historically, the most important examples are Dirichlet-multinomial partitions generated for fixed~$k$ in three steps as follows. (i) First generate the random probability vector $\pi=(\pi_1,\ldots,\ pi_k)$ from the Dirichlet distribution with parameter $(\theta_1,\ldots, \theta_k)$. (ii) Given $\pi$, the sequence $Y_1,\ldots, Y_n,\ldots$ is independent and identically distributed, each component taking values in $\{1,\ldots, k\}$ with probability~$\pi$. Each sequence $(y_1,\ldots, y_n)$ in which the value~$r$ occurs $n_r\ge 0$ times has probability $$ p_n(y) = E(\pi_1^{n_1} \cdots \pi_k^ {n_k}) = \frac{ \Gamma(\theta_\subdot) \prod_{j=1}^k \theta_j^{\up n_j}} {\Gamma(n + \theta_\subdot)}, $$ where $\theta_\subdot = \sum\theta_j$. (iii) Now forget the labels $1,\ldots, k$ and consider only the partition $B(Y)$ generated by the sequence~$Y$, i.e. $B_{ij}(Y) = 1$ if $Y_i = Y_j$. Since $Y$~is an exchangeable sequence, the partition distribution is also exchangeable, but an explicit simple formula is available only for the uniform case $\theta_j = \lambda/k$, which is now assumed. The number of sequences generating the same partition $B\in\E_n$ is $k^{\down \#B}$, and these have equal probability in the uniform case. Consequently, the induced partition has probability $$\label{dm} p_{nk}(B, \lambda) = k^{\down\#B} \frac{ \Gamma(\lambda) \prod_{b\in B} (\lambda/k)^{\up \#b}} {\Gamma(n + \lambda)}, \tag{1}$$ called the uniform Dirichlet-multinomial partition distribution. The factor $k^{\down\#B}$ ensures that partitions having more than $k$~blocks have zero probability. In the limit as $k\to \infty$, the uniform Dirichlet-multinomial partition becomes $$\label{ewens} p_n(B, \lambda) = \frac{\lambda^{\#B} \prod_{b\in B} \Gamma(\#b)} {\lambda^{\up n}}. \tag{2}$$ This is the celebrated Ewens distribution, or Ewens sampling formula, which arises in population genetics as the partition generated by allele type in a population evolving according to the Fisher-Wright model by random mutation with no selective advantage of allele types (Ewens, 1972). The preceding derivation, a version of which can be found in chapter~3 of Kingman (1980), goes back to Watterson (1974). The Ewens partition is the same as the partition generated by a sequence drawn according to the Blackwell-McQueen urn scheme (Blackwell and McQueen, 1973). Although the derivation makes sense only if $k$ is a positive integer, the distribution (1) is well defined for negative values $-\lambda < k <0$. For a discussion of this and the connection with GEM distributions and Poisson-Dirichlet distributions, see Pitman (2006, section~3.2). Partition processes and partition structures Deletion of element $n$ from the set $[n]$, or deletion of the last row and column from the matrix representation $B\in \E_n$, determines a map $D_n\colon \E_n \to \E_{n-1}$, a projection from the larger to the smaller lattice. Equivalently, $D_n B \equiv B[n-1]$ is the restriction of $B$ to the subset $[n-1]$. These deletion maps preserve partial order and make the sets $\{\E_1, \E_2,\ldots\} $ into a projective system \[ \cdots\E_{n+1}\; {\buildrel D_{n+1}\over \longrightarrow}\; \E_n\; {\buildrel D_{n}\over \longrightarrow} \; \E_{n-1}\; \cdots \] A family $p = (p_1,p_2,\ldots)$ in which $p_n$ is a probability distribution on $\E_n$ is said to be mutually consistent, or Kolmogorov-consistent, if each $p_{n-1}$ is the marginal distribution obtained from $p_{n}$ under deletion of element $n$ from the set $[n]$. In other words, $p_{n-1}(A) = p_n(D_n^{-1} A)$ for $A\subset \E_{n-1}$. Kolmogorov consistency guarantees the existence of a random partition $B$ of the natural numbers whose finite restrictions $B[n]$ are distributed as $p_n$. The partition is infinitely exchangeable if each $p_n$ is finitely exchangeable. Some authors, for example Kingman (1980), refer to $p$ as a partition structure. An exchangeable partition process may be generated from an exchangeable sequence $Y_1,Y_2,\ldots$ by the transformation $B_{ij} = 1$ if $Y_i = Y_j$ and zero otherwise. The Dirichlet-multinomial and the Ewens processes are generated in this way. Kingman's (1978) paintbox construction shows that every exchangeable partition process may be generated from an exchangeable sequence in this manner. Moreover, the list of relative block sizes in decreasing order has a limit, which may be random. In the case of the Ewens process, the relative size of the largest block, $X_n=\max_{b\in B} \#b/n$, has a limit $X_n \to X$ distributed as beta with parameter $(1, \lambda)$, i.e. with density $\lambda(1-x)^{\lambda-1}$ for $0<x<1$. Given the size of the largest block, the relative size of the next largest block as a fraction of the remaining elements has the same distribution, and so on. Let $B$ be an infinitely exchangeable partition, $B\sim p$, which means that the restriction $B[n]$ of $B$ to $[n]$ is distributed as~$p_n$. Let $B^*$~be a fixed partition in $\E_n$, and suppose that the event $B[n]\le B^*$ occurs. Then $B[n]$ lies in the lattice interval $[\zero_n, B^*]$, which means that $B[n] = B[b_1] | B[b_2]|\ldots$ is the concatenation (union) of partitions of the blocks $b \in B^*$. For each block $b\in B^*$, the restriction $B[b]$ is distributed as $p_{\#b}$, so it is natural to ask whether, and under what conditions, the blocks of $B^*$ are partitioned independently given $B[n]\le B^*$. Conditional independence implies that $$\label{ci} p_n(B \given B[n] \le B^*) = \prod_{b\in B^*} p_{\#b} (B[b]) , \tag{3}$$ which is a type of non-interference or lack-of-memory property not dissimilar to that of the exponential distribution on the real line. It is straightforward to check that the condition is satisfied by (2) but not by (1). Aldous (1996) shows that conditional independence uniquely characterizes the Ewens family. Mixtures of Ewens processes do not have this property. Further exchangeable partition models Although Dirichlet partition processes are the most common in applied work, it is useful to know that many alternative partition models exist. Although some of these are easy to simulate, most do not have simple expressions for the distributions, but there are exceptions of the form $$\label{ewens2} p_n(B; \lambda) = \frac{\Gamma(B)\, Q_n(B; \lambda)} {\lambda^{\up n}}, \tag{4}$$ for certain polynomials $Q_n(B; \lambda)$ of degree $\#B$ in $\lambda$. One such polynomial is \[ Q_n(B, \lambda) = \sum_{B \le B' \le \one_n} \lambda^{\# B'} / B', \] which depends on $B$ only through the block sizes. The functions $\Gamma(B) = \prod_{b\in B} \Gamma(\#b)$ and $B^\alpha = \prod_{b\in B} (\#b)^\alpha$ are multiplicative $\E_n\to\R$, and $1/B = B^{-1}$ is the inverse of the product of block sizes. For each $\lambda > 0$, $p_n(B; \lambda)$ depends on $B$ only through the block sizes, so the distribution is exchangeable. Moreover, it can be shown that the family is mutually consistent in the Kolmogorov sense. However, the conditional independence property (3) is not satisfied. The expected number of blocks grows slowly with $n$, approximately $\lambda\log(n)$ for the Ewens process, and $\lambda \log^2(n)/\log\,\log(n)$ for the process shown above. Chinese restaurant process A partition process is a random partition $B\sim p$ of a countably infinite set $\{u_1, u_2,\ldots\}$, and the restriction $B[n]$ of $B$ to $\{u_1,\ldots, u_n\}$ is distributed as~$p_n$. The conditional distribution of $B[n+1]$ given $B[n]$ is determined by the probabilities assigned to those events in $\E_{n+1}$ that are compatible with $B[n]$, i.e. the events $u_{n+1} \mapsto b$ for $b \in B$ and $b=\emptyset$. For the uniform Dirichlet-multinomial model (1), these are $$\label{CRPdm} \pr(u_{n+1} \mapsto b \given B[n]=B) = \left\{ \begin{array}{ll} (\#b + \lambda/k) / (n+\lambda) & \quad b\in B \\ \lambda(1-\#B/k)/(n+\lambda) &\quad b=\emptyset. \end{array} \right. \tag{5}$$ In the limit as $k\to\infty$, we obtain $$\label{CRP} \pr(u_{n+1} \mapsto b\given B[n]=B) = \left\{ \begin{array}{ll} \#b / (n+\lambda) &\quad b\in B \\ \lambda/(n+\lambda) &\quad b=\emptyset, \end {array} \right. \tag{6}$$ which is the conditional probability for the Ewens process. To each partition process $p$ there corresponds a sequential description called the Chinese restaurant process, in which $B[n]$ is the arrangement of the first $n$ customers at $\#B$ tables. The placement of the next customer is determined by the conditional distribution $p_{n+1}(B[n+1] \given B[n])$ (Pitman, 1996). For the Ewens process, the customer chooses a new table with probability $\ lambda/(n+\lambda)$ or one of the occupied tables with probability proportional to the number of occupants. This description, which is due to Dubins and Pitman, first appears in print in section 11 of Aldous (1983). It was used initially in connection with the Ewens and Dirichlet-multinomial models, but has subsequently been applied more broadly to general partition models. Random permutations Beginning with the uniform distribution on the set $\Pi_n$ of permutations of~$[n]$, the exponential family with canonical parameter $\theta=\log(\lambda)$ and canonical statistic $\#\sigma$ equal to the number of cycles is \[ q_n(\sigma) = \lambda^{\#\sigma} /\lambda^{\up n}. \] The Stirling number of the first kind, $S_{n,k}$, is the number of permutations of $[n]$ having exactly $k$ cycles, for which $\lambda^{\up n} = \sum_{k=1}^n S_{n,k} \lambda^k$ is the ordinary generating function. The cycles of the permutation determine a partition of $[n]$ whose distribution is (2), and a partition of the integer~$n$ whose distribution is (7). From the cumulant function \[ \log(\lambda^{\up n}) = \sum_{j=0}^{n-1} \log(j + \lambda) \] it follows that $\#\sigma = X_0 + \cdots + X_{n-1}$ is the sum of independent Bernoulli variables with parameter $E(X_j) = \lambda/(\lambda + j)$, which is evident also from the Chinese restaurant representation. For large~$n$, the number of cycles is roughly Poisson with parameter $\lambda\log(n)$, implying that $\hat\lambda \simeq \#\sigma/\log(n)$ is a consistent estimate as $n\to\infty$, but practically inconsistent. A minor modification of the Chinese restaurant process also generates a random permutation by keeping track of the cyclic arrangement of customers at tables. After $n$ customers are seated, the next customer chooses a table with probability (5) or (6), as determined by the partition process. If the table is occupied, the new arrival sits to the left of one customer selected uniformly at random from the table occupants. The random permutation thus generated is $j\mapsto \sigma(j)$ from $j$ to the left neighbour $\sigma(j)$. The cycles of a permutation $\sigma\colon[n]\to[n]$ determine a partition $B_\sigma\in\E_n$, which is a mapping $\Pi_n\to\E_n$ from permutations to partitions. Thus, any probability distribution $p_n$ on partitions can be lifted to a probability distribution $q_n(\sigma) = p_n(B_\sigma) / \Gamma(B_\sigma)$ on permutations. Provided that the partition process $\{p_n\}$ is consistent and exchangeable, the lifted distributions $\{q_n\}$ are exchangeable and mutually consistent under the projection $\Pi_n \to \Pi_{n-1}$ on permutations in which element~$n$ is deleted from the cycle representation (Aldous, 1983; Pitman, 2006, section~3.1). In this way, every infinitely exchangeable random partition also determines an infinitely exchangeable random permutation $\sigma\colon\Nat\ to\Nat$ of the natural numbers. Since the group acts on itself by conjugation, distributional exchangeability in this context is not to be confused with uniformity on $\Pi_n$. On the number of unseen species A partition of the set $[n]$ is a set of blocks, and the block sizes determine a partition of the integer~$n$. For example, the partition $15|23|4$ of the set $[5]$ is associated with the integer partition $2+2+1$, one singleton and two doubletons. An integer partition $m=(m_1,\ldots,m_n)$ is a list of multiplicities, also written as $m=1^{m_1} 2^{m_2}\cdots n^{m_n}$, such that $\sum j m_j= n$. The number of blocks, usually called the number of parts of the integer partition, is the sum of the multiplicities $m_\subdot = \sum m_j$. Under the natural action $B\mapsto \pi B \pi^{-1}$ of permutations $\pi$ on set partitions, each orbit is associated with a partition of the integer~$n$. The multiplicity vector $m$ contains all the information about block sizes, but there is a subtle transfer of emphasis from block sizes to the multiplicities of the parts. By definition, an exchangeable distribution on set partitions is a function only of the block sizes, so $p_n(B) = q_n(m)$, where $m$ is the integer partition corresponding to~$B$. Since there are $$ \frac{n!}{\prod_{j=1}^n (j!)^{m_j} m_j!} $$ set partitions $B$ corresponding to a given integer partition~$m$, to each exchangeable distribution $p_n$ on set partitions there corresponds a marginal distribution $$ q_n(m) = p_n(B) \times \frac{n!}{\prod_{j=1}^n (j!)^{m_j} m_j!} $$ on integer partitions. For example, the Ewens distribution on integer partitions is $$\label{ewensinteger} \frac{\ lambda^{m_\subdot} \Gamma(\lambda) \prod \Gamma(j)^{m_j} } { \Gamma(n+\lambda)} \times \frac{n!}{\prod_{j=1}^n (j!)^{m_j} m_j!} = \frac{\lambda^{m_\subdot}\, n!\, \Gamma(\lambda)} {\Gamma(n+\lambda) \prod_j j^{m_j} m_j!}, \tag{7}$$ where the combinatorial factor $n!/\prod_j j^{m_j} m_j!$ is the size of the conjugacy class~$m$, i.e. the number of permutations whose cycle structure is~$m$. Arratia, Barbour and Tavaré, (1992) noted that this version leads naturally to an alternative description of the Ewens distribution in which the multiplicities $M = M_1,\ldots, M_n$ are independent Poisson random variables with mean $E(M_j) = \lambda/j$. Then the conditional distribution $\pr(M= m \given \sum_{j=1}^n j M_j= n)$ is the Ewens integer-partition distribution with parameter~$\ lambda$ (Kingman 1993, section~9.5). In fact, we may consider the more general two-parameter Poisson model with means $E(M_j) = \lambda\theta^j/j$ for $\lambda,\theta>0$, in which case the pair $(\ sum M_j,\sum j M_j)$ is minimal sufficient for~$(\theta,\lambda)$, and the conditional distribution given $\sum_{j=1}^n j M_j$ is (7) independent of~$\theta$. For a response vector in the form of an integer partition, for example Fisher (1943) or Efron and Thisted (1976), this representation leads naturally to a simple method of estimation and testing, using Poisson log-linear models with model formula $1+j$ and offset $-\log(j)$. The problem of estimating the number of unseen species was first tackled in a paper by Fisher (1943), using an approach that appears to be entirely unrelated to partition processes. Specimens from species~$i$ occur as a Poisson process with rate $\rho_i$, the rates for distinct species being independent and identically distributed gamma random variables. The number $N_i\ge 0$ of occurrences of species~$i$ in an interval of length~$t$ is a negative binomial random variable $$\label{negbin} \pr(N_i=x) = (1-\theta)^\nu \theta^x \frac{\Gamma(\nu+x)} {x!\, \Gamma(\nu)}. \tag{8}$$ In this setting, $\theta = t/(1+t)$ is a monotone function of the sampling time, whereas $\nu > 0$ is a fixed number independent of~$t$. Specimen counts for distinct species are independent and identically distributed random variables with parameters $\nu > 0$ and $0<\theta< 1$. The probability that no specimens from species~$i$ occur in the sample is $(1-\theta)^\nu$, the same for every species. Most species are unlikely to be observed if either $\theta$~is small, i.e. the time interval is short, or $\nu$ is small. Let $M_x$ be the number of species occurring $x\ge 0$ times, so that $M_\subdot$ is the unknown total number of species of which $M_\subdot - M_0$ are observed. The approach followed by Fisher is to estimate the parameters $\theta, \nu$ by conditioning on the number of species observed and regarding the observed multiplicities $M_x$ for $x\ge 1$ as multinomial with parameter vector proportional to the negative binomial frequencies (8). For Fisher's entomological examples, this approach pointed to $\nu=0$, consistent with the Ewens distribution (7), and indicating that the data are consistent with the number of species being infinite. Fisher's approach using a model indexed by species is less direct for ecological purposes than a process indexed by specimens. Nonetheless, subsequent analyses by Good and Toulmin (1956), Holgate (1969) and Efron and Thisted (1976) showed how Fisher's model can be used to make predictions about the likely number of new species in a subsequent temporal extension of the original sample. This amounts to a version of the Chinese restaurant process. At this point, it is worth clarifying the connection between Fisher's negative binomial formulation and the Ewens partition formulation. The relation between them is the same as the relation between binomial and negative binomial sampling schemes for a Bernoulli process: they are not equivalent, but they are complementary. The partition formulation is an exchangeable process indexed by specimens : it gives the distribution of species numbers in a sample consisting of a fixed number of specimens. Fisher's version is also an exchangeable process, in fact an iid process, but this process is indexed by species: it gives the distribution of the sample composition for a fixed set of species observed over a finite period. In either case, the conditional distribution given a sample containing $k$~species and $n$~specimens is the distribution induced from the uniform distribution on the set of $S_{n,k}$ permutations having $k$ cycles. For the sorts of ecological or literary applications considered by Good and Toulmin (1956) or Efron and Thisted (1976), the partition process indexed by specimens is much more direct than one indexed by species. Fisher's finding that the multiplicities decay as $E(M_j) \propto \theta^j/j$, proportional to the frequencies in the log-series distribution, is a property of many processes describing population structure, either social structure or genetic structure. It occurs in Kendall's (1975) model for family sizes as measured by surname frequencies. One explanation for universality lies in the nature of the transition rates for Kendall's process, a discussion of which can be found in section~2.4 of Kelly (1978). Equivariant partition models A family $p_n(\sigma; \theta)$ of distributions on permutations indexed by a parameter matrix~$\theta$, is said to be equivariant under the induced action of the symmetric group if $p_n(\sigma ; \ theta) = p_n(g\sigma g^{-1}; g \theta g^{-1})$ for all $\sigma, \theta$, and for each group element $g\colon[n]\to[n]$. By definition, the parameter space is closed under conjugation: $\theta\in\ Theta$ implies $g\theta g^{-1}\in\Theta$. The same definition applies to partition models. Unlike exchangeability, equivariance is not a property of a distribution, but a property of the family. In this setting, the family is indexed by $\theta\in\Theta$ for some fixed~$n$. There is no implication that the family $p_n$ is the same as the family of marginal distributions induced by deletion Exponential family models play a major role in both theoretical and applied work, so it is natural to begin with such a family of distributions on permutations of the matrix-exponential type \[ p_n(\ sigma; \theta) = \alpha^{\#\sigma} \exp(\tr(\sigma\theta)) / M_\alpha(\theta), \] where $\alpha > 0$ and $\tr(\sigma\theta) = \sum_{j=1}^n \theta_{\sigma(j), j}$ is the trace of the ordinary matrix product. The normalizing constant is the $\alpha$-permanent \[ M_\alpha(\theta) = \per_\alpha(K) = \sum_{\sigma} \alpha^{\#\sigma} \prod_{j=1}^n K_{\sigma(j), j} \] where $K_{ij} = \exp(\theta_{ij})$ is the component-wise exponential matrix. This family of distributions on permutations is equivariant. The limit of the $\alpha$-permanent as $\alpha\to 0$ gives the sum of cyclic products \[ \cyp(K) = \lim_{\alpha\to 0} \alpha^{-1} \per_\alpha(K) = \sum_{\sigma : \#\sigma=1} \prod_{j=1}^n K_{\sigma (j), j}, \] giving an alternative expression for the $\alpha$-permanent \[ \per_\alpha(K) = \sum_{B\in \E_n} \alpha^{\#B} \prod_{b\in B} \cyp(K[b]) \] as a sum over partitions. The induced marginal distribution (11) on partitions is of the product-partition type recommended by Hartigan (1990), and is also equivariant. Note that the matrix $\theta$ and its transpose determine the same distribution on partitions, but they do not usually determine the same distribution on permutations. The $\alpha$-permanent has a less obvious convolution property that helps to explain why this function might be expected to occur in partition models: $$\label{convolution} \sum_{b\subset[n]} \per_\alpha(K[b]) \per_{\alpha'}(K[\bar b]) = \per_{\alpha+\alpha'}(K). \tag{9}$$ The sum extends over all $2^n$ subsets of $[n]$, and $\bar b$ is the complement of $b$ in $[n]$. A derivation can be found in section~2.4 of McCullagh and M\o ller (2006). If $B$ is a partition of $ [n]$, the symbol $K\cdot B=B\cdot K$ denotes the Hadamard component-wise matrix product for which \[ \per_{\alpha} (K\cdot B) = \prod_{b\in B} \per_\alpha(K[b]) \] is the product over the blocks of $B$ of $\alpha$-permanents restricted to the blocks. Thus the function $B\mapsto \per_\alpha(K\cdot B)$ is of the product-partition type. With $\alpha, K$ as parameters, we may define a family of probability distributions on $\E_n^k$, i.e. partitions of $[n]$ having $k$ or fewer blocks, as follows: $$\label{ppdist} p_{nk}(B) = k^{\down \#B} \per_{\alpha/k}(K\cdot B) / \per_\alpha(K). \tag{10}$$ The fact that (10) is a probability distribution on $\E_n$ follows from the convolution property of permanents. The limit as $k\to\infty$ $$\label{productpartition} p_n(B) = \alpha^{\#B} \prod_{b\in B} \cyp(K[b]) / \per_\alpha(K), \tag{11}$$ is a product-partition model satisfying the conditional independence property (3). Properties of the $\alpha$-permanent are discussed by Vere-Jones (1997) and by McCullagh and M\o ller (2006) in the context of point processes. For $K=\one_n$, the $n\times n$ matrix whose elements are all one, the $\alpha$-permanent is, by definition, the generating function for the Stirling numbers of the first kind. Thus, $\per_\alpha(\one_n) = \alpha^{\up n}$ is the ascending factorial function, and for this exchangeable case, the distributions (10) and (11) coincide with (1) and (2). Further applications of partition models Partition models are used to construct cluster processes for use in classification and cluster analysis. Cluster analysis means a partitioning of the sample units into non-overlapping blocks such that the $Y$-values in $\R^d$ (feature values) are more similar within blocks than between blocks. It is important to remember that the goal of cluster analysis is not a partition of the feature space $\R^d$, but a partition of the finite set of units or specimens. Exchangeable partition models are used to construct non-trivial, processes suitable for cluster analysis. See Richardson and Green (1997), Fraley and Raftery (2002) or Booth, Casella and Hobert (2008) for a discussion of computational techniques. The simplest of these models is the marginal Gauss-Ewens process in which the sample partition $B[n]$ is to be inferred from the finite sequence $Y[n]$. The conditional distribution $p_n(B \given Y[n])$ on $\E_n$ is the posterior distribution on clusterings or partitions of $[n]$, and $E(B \given Y[n])$ is the array of one-dimensional marginal distributions for pairs of units, i.e. $E(B_{ij}\given Y[n])$ is the posterior probability that units $i,j$ belong to the same block. The conditional distribution $p_n(B \given Y[n])$ contains further information about triplets and $k$-tuples of units, from which it is possible in principle to compute the posterior distribution for the number of clusters or blocks. In estimating the number of clusters, it is important to distinguish between the sample number $\#B[n]$, which is necessarily finite, and the population number $\#B[\Nat]$, which could be infinite (McCullagh and Yang, 2008). The latter problem is essentially the same as estimating the number of unseen species given that the blocks are so well separated that $Y[n]$ determines $B[n]$. The same Gauss-Ewens model may be used for density estimation, which refers to the conditional distribution of $Y_{n+1}$ given the sample values. Usually, this is to be done for an exchangeable process in the absence of external covariate or relational information about the units. In the computer-science literature, cluster detection is also called unsupervised learning. Exchangeable partition models are also used to provide a Bayesian solution to the multiple comparisons problem (Gopalan and Berry 1998). In this setting $k$~is the number of distinct treatments, and the key idea is to associate with each partition $B$ of $[k]$ a subspace $V_B\subset\R^k$ equal to the span of the columns of~$B$. Thus, $V_B$ consists of vectors $x$ such that $x_r = x_s$ if $B_{rs} = 1$. For a treatment factor having $k$ levels with values $\tau_1,\ldots, \tau_k$, the Gauss-Ewens prior distribution on $R^k$ puts positive mass on the subspaces $V_B$ for each $B\in\E_k$. Likewise, the posterior distribution also puts positive probability on these subspaces, which enables us to compute in a coherent way the posterior probability $\pr(\tau\in V_B \given y)$ or the marginal posterior probability $\pr(\tau_r = \tau_s \given y)$. This is a revised version of the article Random permutations and partition models from Lovric, Miodrag (2011), International Encyclopedia of Statistical Science, Heidelberg: Springer Science +Business Media, LLC. Support for this research was provided in part by NSF Grant DMS-0906592. [aldous1983] Aldous, D.J. (1983) Exchangeability and Related Topics. In \booktitle{École d'été de Probabilités de Saint-Flour XIII} Springer Lecture Notes in Mathematics vol~1117, 1--198. [aldous1996] Aldous, D.J. (1996) Probability distributions on cladograms. In \title{Random Discrete Structures}. IMA Vol. Appl. Math \vol{76}. Springer, New York, 1--18. [arratia1992] Arratia, R., Barbour, A.D. and Tavaré, S. (1992) Poisson process approximations for the Ewens sampling formula. \journal{Advances in Applied Probability} \vol2, 519--535. [booth2008] Booth, J.G., Casella, G. and Hobert, J.P. (2008) Clustering using objective functions and stochastic search. \journal{J. Roy. Statist. Soc. B} \vol{70}, 119--139. [blackewll] Blackwell, D. and MacQueen, J. (1973) Ferguson distributions via Pólya urn schemes. \journal{Ann. Statist.} \vol1, 353--355. [efron1976] Efron, B. and Thisted, R.A. (1976) Estimating the number of unknown species: How many words did Shakespeare know? \journal{Biometrika} \vol{63}, 435--447. [ewens1972] Ewens, W.J. (1972) The sampling theory of selectively neutral alleles. \journal{Theoretical Population Biology} \vol3, 87--112. [fraley2002] Fraley, C. and Raftery, A.E. (2002) Model-based clustering, discriminant analysis and density estimation. \journal{J. Amer. Statist. Assoc.} \vol{97}, 611--631. [Good1956] Good, I.J. and Toulmin, G.H. (1956) The number of new species, and the increase in population coverage when a sample is increased. \journal{Biometrika} \vol{43}, 45--63. [gopalan1998] Gopalan, R. and Berry, D.A. (1998) Bayesian multiple comparisons using Dirichlet process priors. \journal{J. Amer. Statist. Assoc.} \vol{93}, 1130--1139. [hartigan1990] Hartigan, J.A. (1990) Partition models. \journal{Communications in Statistics: Theory and Methods} \vol{19}, 2745--2756. [holgate1969] Holgate, P. (1969) Species frequency distributions. \journal{Biometrika} \vol{65}, 651--660. [Kelly] Kelly, F.P. (1978) \booktitle{Reversibility and Stochastic Networks}. Wiley, Chichester. [kingman1975] Kingman, J.F.C. (1975) Random discrete distributions (with discussion). \journal{J. Roy. Statist. Soc. B} \vol{37}, 1--22. [Kingman1977] Kingman, J.F.C. (1977) The population structure associated with the Ewens sampling formula. \journal{Theoretical Population Biology} \vol{11}, 274--283. [Kingman1978] Kingman, J.F.C. (1978) The representation of partition structures. \journal{J. Lond. Math. Soc.} \vol{18}, 374--380. [Kingman1980] Kingman, J.F.C. (1980) \booktitle{Mathematics of Genetic Diversity}. CBMS-NSF conference series in applied math, \vol{34} SIAM, Philadelphia. [McC2006] McCullagh, P. and M\o ller, J. (2006) The permanental process. \journal{Adv. Appl. Prob.} \vol{38}, 873--888. [McC2008] McCullagh, P. and Yang, J. (2008). How many clusters? \journal{Bayesian Analysis} \vol{3}, 1--19. [Pitman1996] Pitman, J. (1996) Some developments of the Blackwell-MacQueen urn scheme. In \booktitle{Statistics, Probability and Game Theory: Papers in Honor of David Blackwell,} T.S. Ferguson et al editors. IMS Lecture Notes Monograph Series No. 30, 245--267. [Pitman2006] Pitman, J. (2006) \booktitle{Combinatorial Stochastic Processes.} Springer-Verlag, Berlin. [richardson1996] Richardson, S. and Green, P.J. (1997) On Bayesian analysis of mixtures with an unknown number of components (with discussion). \journal{J. Roy. Statist. Soc. B} \vol{59}, 731--792. [verejones1997] Vere-Jones, D. (1997) Alpha-permanents and their application to multivariate gamma, negative binomial and ordinary binomial distributions. \journal{New Zealand J. Math.} \vol{26} [Watterson1974] Watterson, G.A. (1974) The sampling theory of selectively neutral alleles. \journal{Adv. Appl. Prob.} \vol6, 217--250. How to Cite This Entry: Partition models. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Partition_models&oldid=55727
{"url":"https://encyclopediaofmath.org/wiki/Partition_models","timestamp":"2024-11-10T07:32:02Z","content_type":"text/html","content_length":"56355","record_id":"<urn:uuid:9f9919a1-302a-496a-8144-0e8a7f6462d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00356.warc.gz"}
[Solved] Let A and B be sets. Show that f:A×B→B×A such that (a,... | Filo Let and be sets. Show that such that is bijective function. Not the question you're searching for? + Ask your question is defined as Let such that . is one-one. Now, let be any element. Then, there exists such that . [By definition of is onto. Hence, is bijective. Was this solution helpful? Video solutions (9) Learn from their 1-to-1 discussion with Filo tutors. 1 mins Uploaded on: 5/22/2023 Was this solution helpful? 1 mins Uploaded on: 5/22/2023 Was this solution helpful? Found 7 tutors discussing this question Discuss this question LIVE 7 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions from Mathematics Part-I (NCERT) Practice questions from Mathematics Part-I (NCERT) View more Practice more questions from Relations and Functions Practice questions on similar concepts asked by Filo students View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text Let and be sets. Show that such that is bijective function. Updated On Feb 18, 2024 Topic Relations and Functions Subject Mathematics Class Class 12 Answer Type Text solution:1 Video solution: 9 Upvotes 908 Avg. Video Duration 6 min
{"url":"https://askfilo.com/math-question-answers/let-a-and-b-be-sets-show-that-f-a-times-b-rightarrlw1","timestamp":"2024-11-13T01:18:43Z","content_type":"text/html","content_length":"597083","record_id":"<urn:uuid:2dcccb69-a247-450a-a0ba-27a03931da06>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00856.warc.gz"}
Save-our-CTP | Toby Tancred https://tobytancred.com.au/wp-content/uploads/2018/11/logo2.jpg 0 0 HiTech Creative https://tobytancred.com.au/wp-content/uploads/2018/11/logo2.jpg HiTech Creative2018-08-01 13:11:342018-08-01 pg mobilecom ทำให้คุณสามารถเข้าถึงเกมที่คุณรักได้ทุกที่ทุกเวลา ไม่ว่าคุณจะใช้สมาร์ทโฟนหรือแท็บเล็ต คุณสามารถเปิดเกม PG ที่คุณชื่นชอบได้ตลอด 24 ชั่วโมงทุกวันทุกเวลา Good day, I kindly request you to review and approve my blog post. Looking forward to your response. Sosyal Mavi ile takipçi sayınızı arttırın ve profilinizin etkileyiciliğini artırın! Hemen keşfedin. Good info. Lucky me I reach on your website by accident, I bookmarked it. Hello, I’d appreciate it if you could review and approve my blog post. Thanks a lot. Sosyal Mavi, gerçek ve etkili takipçi artışı için güvenilir çözüm! Profilinizin potansiyelini ortaya çıkarın. Hi, just required you to know I he added your site to my Google bookmarks due to your layout. But seriously, I believe your internet site has 1 in the freshest theme I??ve came across. Profilinizin etkileşimini arttırın ve daha fazla kişiye ulaşın! Sosyal Mavi ile takipçi sayınızı artırın Sitenizin içerikleri benim için çok faydalı oldu. After all, what a great site and informative posts, I will upload inbound link – bookmark this web site? Regards, Reader. Takipçi sayınızı arttırmak için doğru adrestesiniz! Sosyal Mavi ile hedeflerinize ulaşın. Hello! I could have sworn I’ve been to this blog before but after browsing through some of the post I realized it’s new to me. Hi, could you please review and approve my blog post? Wishing you a productive day. Takipçi sayınızı anında arttırmanın en hızlı yolu! Sosyal Mavi ile sosyal medya başarınızı garanti altına alın. Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article.Hello, I am your admin. I would be very happy if you publish this article. 슬롯 용가리 Fang Jifan은 죽고 싶은 마음이 들었습니다. “전하 … 조각품을 몇 개 만드셨습니까?” Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article. Hello, I am your admin. I would be very happy if you publish this article. A woman may continue to bleed if abortion is incomplete. prices are offered by online pharmacies who want you to glucophage images effective if you’re over 65 years old? Eighty percent of those who are infected will remain asymptomatic for years. No matter what type of depression you have and how severe it is, the following self-care steps can help: Get enough sleep. Not everybody can afford medicine so they get the low ampicillin concentration for lb today. If you have any of the symptoms listed below, you should visit your doctor. Профессиональный сервисный центр по ремонту бытовой техники с выездом на дом. Мы предлагаем: ремонт крупногабаритной техники в москве Наши мастера оперативно устранят неисправности вашего устройства в сервисе или с выездом на дом! Профессиональный сервисный центр по ремонту бытовой техники с выездом на дом. Мы предлагаем: сервис центры бытовой техники москва Наши мастера оперативно устранят неисправности вашего устройства в сервисе или с выездом на дом! While in the beginning it was believed that the H5N1 virus can not spread to humans, in the meantime, advanced medical research has proven otherwise. More people are using the internet to look for a buspirone pharmacy than a local store. What type of doctor typically orders this type of test? Your doctor may use blood and imaging tests to learn the stage of the disease. The sites offer information about the drug and price of levitra 20mg how to use at great low prices from online pharmacies Next period not expected for another 2 weeks or so. Профессиональный сервисный центр по ремонту Apple iPhone в Москве. Мы предлагаем: мастер ремонта apple Наши мастера оперативно устранят неисправности вашего устройства в сервисе или с выездом на дом! Want to join the discussion? Feel free to contribute! Toby Tancred saw the injustices visited upon ordinary and decent people by the system and decided to pursue a career acting for people who had been turned upside down through no fault of their own. Toby Tancred understands how the legal system operates and will use his experience and knowledge for your benefit. • I am an experienced practitioner who will deliver results for you. • When you retain me I will ensure you receive the full benefit of my expertise and experience. • My team and I will fight for you. 26 William Street ORANGE NSW 2800 PO Box 465 ORANGE DX 3005 ORANGE Liability limited by a scheme approved under Professional Standards Legislation.
{"url":"https://tobytancred.com.au/save-our-ctp/","timestamp":"2024-11-11T22:43:19Z","content_type":"text/html","content_length":"288630","record_id":"<urn:uuid:f140aee2-ed62-4882-9fb0-5e0b9e4ea97a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00540.warc.gz"}
On improvements of the dissipation integral method for the calculation of turbulent boundary layers The prediction of incompressible two dimensional turbulent boundary layers is discussed. An integral method based on the momentum and the mean kinetic energy equations was developed. A third integral equation was added to calculate the dissipation integral. This equation was derived from the turbulent energy equation and takes into account the flow. Contributions on Transport Phenomena in Fluid Mechanics and Related Topics (ESA-TT-498) Pub Date: March 1979 □ Boundary Layer Flow; □ Integral Equations; □ Turbulent Boundary Layer; □ Differential Equations; □ Flow Equations; □ Incompressible Flow; □ Kinetic Energy; □ Navier-Stokes Equation; □ Two Dimensional Boundary Layer; □ Upstream; □ Fluid Mechanics and Heat Transfer
{"url":"https://ui.adsabs.harvard.edu/abs/1979ctpf.rept..197J/abstract","timestamp":"2024-11-08T14:51:20Z","content_type":"text/html","content_length":"34024","record_id":"<urn:uuid:bf215130-33f1-4c06-9ad0-2f01469cb247>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00350.warc.gz"}
Multiplication 0 9 Worksheets Math, especially multiplication, creates the keystone of many academic self-controls and real-world applications. Yet, for many students, understanding multiplication can posture an obstacle. To resolve this hurdle, teachers and moms and dads have welcomed a powerful tool: Multiplication 0 9 Worksheets. Intro to Multiplication 0 9 Worksheets Multiplication 0 9 Worksheets Multiplication 0 9 Worksheets - Multiplication facts with 9 s Students multiply 9 times numbers between 1 and 12 The first worksheet is a table of all multiplication facts 1 12 with nine as a factor 9 times table Worksheet 1 49 questions Worksheet 2 Worksheet 3 100 questions Worksheet 4 Worksheet 5 3 More Similar Multiplying by 10 Multiplying by 11 What is K5 Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets Relevance of Multiplication Practice Recognizing multiplication is crucial, laying a solid structure for sophisticated mathematical principles. Multiplication 0 9 Worksheets use structured and targeted method, cultivating a deeper understanding of this basic arithmetic procedure. Advancement of Multiplication 0 9 Worksheets Pin On Multiplication worksheets Ideas For Kids Pin On Multiplication worksheets Ideas For Kids Welcome to The Multiplying 1 to 10 by 1 to 9 100 Questions A Math Worksheet from the Multiplication Worksheets Page at Math Drills This math worksheet was created or last revised on 2021 02 22 and has been viewed 5 619 times this week and 7 427 times this month Have your third graders demonstrate their grasp of multiplication facts 0 9 with this math assessment This worksheet asks students to multiply two factors to find the product for 20 different multiplication problems From typical pen-and-paper workouts to digitized interactive layouts, Multiplication 0 9 Worksheets have advanced, catering to diverse discovering styles and choices. Types of Multiplication 0 9 Worksheets Basic Multiplication Sheets Basic exercises focusing on multiplication tables, assisting learners build a strong arithmetic base. Word Trouble Worksheets Real-life scenarios integrated into issues, boosting critical thinking and application skills. Timed Multiplication Drills Examinations developed to improve speed and accuracy, assisting in rapid psychological mathematics. Advantages of Using Multiplication 0 9 Worksheets Multiplication Across Zeros Worksheets Free Printable Multiplication Across Zeros Worksheets Free Printable These multiplication worksheets are appropriate for Kindergarten 1st Grade 2nd Grade 3rd Grade 4th Grade and 5th Grade You may vary the numbers of problems on each worksheet from 12 to 30 You can click Clear to reset or All to select all of the numbers Choose the numbers for the first factor Choose the numbers for the second factor Multiplying by 9 Worksheets Basic Facts with Factors of 9 Multiplying by 9s Use these printables to help your students master basic multiplication facts with 9 as a factor Also find sheets with all facts up to 9s Multiplication by 9s Only Multiplication Nines Trick Enhanced Mathematical Skills Constant method hones multiplication proficiency, improving general math capacities. Enhanced Problem-Solving Abilities Word troubles in worksheets develop analytical thinking and technique application. Self-Paced Knowing Advantages Worksheets fit private knowing rates, cultivating a comfortable and versatile discovering environment. How to Develop Engaging Multiplication 0 9 Worksheets Integrating Visuals and Shades Dynamic visuals and shades record attention, making worksheets aesthetically appealing and engaging. Including Real-Life Circumstances Connecting multiplication to day-to-day circumstances includes relevance and usefulness to exercises. Customizing Worksheets to Various Skill Levels Personalizing worksheets based upon differing effectiveness degrees makes sure inclusive learning. Interactive and Online Multiplication Resources Digital Multiplication Tools and Gamings Technology-based resources provide interactive understanding experiences, making multiplication appealing and satisfying. Interactive Internet Sites and Apps On the internet systems give diverse and accessible multiplication practice, supplementing traditional worksheets. Personalizing Worksheets for Different Discovering Styles Aesthetic Students Aesthetic help and diagrams aid comprehension for students inclined toward visual understanding. Auditory Learners Verbal multiplication issues or mnemonics deal with students that understand principles through auditory ways. Kinesthetic Students Hands-on tasks and manipulatives sustain kinesthetic students in understanding multiplication. Tips for Effective Implementation in Understanding Uniformity in Practice Routine practice reinforces multiplication skills, advertising retention and fluency. Balancing Rep and Variety A mix of repetitive exercises and diverse problem formats maintains rate of interest and comprehension. Supplying Useful Responses Feedback aids in determining locations of improvement, urging ongoing development. Challenges in Multiplication Technique and Solutions Inspiration and Interaction Difficulties Boring drills can result in uninterest; innovative methods can reignite motivation. Overcoming Anxiety of Math Negative understandings around math can prevent progress; producing a favorable understanding setting is vital. Impact of Multiplication 0 9 Worksheets on Academic Efficiency Researches and Study Searchings For Study shows a favorable connection in between constant worksheet usage and boosted mathematics efficiency. Multiplication 0 9 Worksheets become flexible tools, promoting mathematical effectiveness in learners while suiting diverse discovering styles. From basic drills to interactive online sources, these worksheets not only boost multiplication skills but additionally promote important thinking and problem-solving capacities. Addition Table Patterns Worksheets Worksheet Hero multiplication Worksheet For Kids Archives EduMonitor Check more of Multiplication 0 9 Worksheets below Printable Times Tables Worksheets Activity Shelter Multiplication Facts Worksheets 0 9 Free Printable Single Digit Multiplication 0 9 Worksheet For 2nd 3rd Grade Lesson Planet FREE PRINTABLE MULTIPLICATION WORKSHEETS WonkyWonderful 100 Vertical Questions Multiplication Facts 1 31 10 A Free Printable Multiplication Multiplication Strategies 3rd Grade Worksheets Times Tables Worksheets Multiplication Worksheets K5 Learning Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets Printable Multiplication Worksheets Super Teacher Worksheets Multiplication by 9s When you re teaching students to multiply only by the number nine use these printable worksheets Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets Multiplication by 9s When you re teaching students to multiply only by the number nine use these printable worksheets FREE PRINTABLE MULTIPLICATION WORKSHEETS WonkyWonderful Multiplication Facts Worksheets 0 9 Free Printable 100 Vertical Questions Multiplication Facts 1 31 10 A Free Printable Multiplication Multiplication Strategies 3rd Grade Worksheets Times Tables Worksheets Printable Timed Multiplication Quiz PrintableMultiplication Multiplication Times Table Worksheets Numeracy Warm Up Multiplication worksheets Math Multiplication Times Table Worksheets Numeracy Warm Up Multiplication worksheets Math Free Printable Maths Worksheets Ks2 Multiplication Free Printable Maths Worksheets Ks2 Multi Frequently Asked Questions (Frequently Asked Questions). Are Multiplication 0 9 Worksheets suitable for every age groups? Yes, worksheets can be tailored to various age and ability levels, making them versatile for various learners. Exactly how usually should students exercise making use of Multiplication 0 9 Worksheets? Constant practice is crucial. Normal sessions, preferably a few times a week, can generate substantial enhancement. Can worksheets alone enhance mathematics skills? Worksheets are an important device yet needs to be supplemented with different discovering techniques for thorough ability advancement. Exist on the internet platforms supplying totally free Multiplication 0 9 Worksheets? Yes, several academic internet sites supply free access to a wide variety of Multiplication 0 9 Worksheets. How can moms and dads sustain their youngsters's multiplication method in your home? Motivating regular technique, providing support, and producing a favorable learning environment are valuable actions.
{"url":"https://crown-darts.com/en/multiplication-0-9-worksheets.html","timestamp":"2024-11-06T10:37:19Z","content_type":"text/html","content_length":"28420","record_id":"<urn:uuid:c5a46650-af7c-40f6-b6f3-1039ae8d3933>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00596.warc.gz"}
The Role Of Matrices In Linear Algebra And Machine Learning » Happy & Raw The Role of Matrices in Linear Algebra and Machine Learning The Role of Matrices, the fundamental mathematical constructs, play a pivotal role in the realms of linear algebra and machine learning. These two-dimensional arrays of numbers, symbols, or expressions hold the key to unlocking a wealth of analytical and problem-solving techniques in various fields. By delving into the properties, operations, and applications of matrices, we can gain a deeper understanding of their significance and versatility in both linear algebra and machine learning. Key Takeaways • Matrices are essential mathematical tools that underpin the foundations of linear algebra and machine learning. • Understanding the definition, properties, and importance of matrices is crucial for mastering these fields. • Matrix operations, such as addition, multiplication, and transformation, enable the manipulation and analysis of data in linear algebra and machine learning. • Matrices play a pivotal role in various machine learning algorithms, including dimensionality reduction, image processing, and numerical optimization. • Matrix decompositions, such as eigenvalue decomposition and singular value decomposition (SVD), provide powerful insights and techniques for solving complex problems. Introduction to Matrices Matrices are fundamental mathematical structures that play a crucial role in the field of linear algebra and have widespread applications in various domains, including machine learning and computer graphics. Understanding the definition and properties of matrices is essential for grasping the importance of these versatile tools. Definition and Properties A matrix is a rectangular array of numbers, symbols, or expressions, typically arranged in rows and columns. Matrices possess unique characteristics that set them apart from other mathematical objects. Some of the key properties of matrices include: • Dimensions: Matrices are defined by their number of rows and columns, which determine their size and shape. • Elements: The individual values within a matrix are referred to as its elements, and they can be accessed by their row and column indices. • Operations: Matrices can be subjected to various mathematical operations, such as addition, subtraction, multiplication, and scalar multiplication. Importance in Linear Algebra Matrices are of fundamental importance in the field of linear algebra, a branch of mathematics that deals with the study of linear equations, vector spaces, and transformations. Matrices serve as the primary tools for representing and manipulating linear relationships, enabling the solution of complex problems in various scientific and engineering disciplines. The versatility of matrices in linear algebra is evident in their ability to model and analyze systems of linear equations, perform transformations on vectors, and compute eigenvalues and eigenvectors, among other applications. “Matrices are the language in which the laws of linear variation are translated.” Matrices in Linear Algebra In the realm of linear algebra, matrices play a crucial role in solving complex mathematical problems and manipulating data. These rectangular arrays of numbers, symbols, or expressions enable us to perform various operations that are fundamental to numerous applications, from computer graphics to machine learning. Matrix Operations The versatility of matrices lies in the variety of operations that can be performed on them. These operations include: • Addition and Subtraction: Matrices can be added or subtracted, provided they have the same dimensions. • Multiplication: Matrices can be multiplied, but the number of columns in the first matrix must match the number of rows in the second matrix. • Scalar Multiplication: A matrix can be multiplied by a scalar (a single number), which effectively scales the matrix by that value. Understanding these matrix operations is essential for solving linear algebraic problems and manipulating data in various applications, such as computer graphics, image processing, and machine learning algorithms that rely on matrices in linear algebra. “Matrices are the foundation of linear algebra, providing a powerful tool for representing and manipulating data in a wide range of applications.” By mastering the fundamental matrix operations, you can unlock the vast potential of matrices in linear algebra and apply them to solve complex problems across diverse fields. The Role of Matrices in Machine Learning In the realm of machine learning, matrices play a pivotal role in driving the development of predictive models and extracting valuable insights from complex datasets. Matrices in Machine Learning are the backbone of many algorithms, enabling the efficient representation and manipulation of data, which is essential for a wide range of applications. One of the primary uses of Matrices in Machine Learning is in the representation of data. Machine learning algorithms often work with large, multidimensional datasets, and matrices provide a compact and organized way to store and process this information. By arranging data into matrix form, machine learning models can easily perform operations such as linear transformations, dimensionality reduction, and feature extraction, which are crucial for pattern recognition and predictive analysis. Furthermore, the Role of Matrices in Machine Learning extends to the optimization of these models. Many machine learning algorithms, such as linear regression, logistic regression, and support vector machines, rely on matrix operations to find the optimal parameters or weights that best fit the training data. This optimization process is essential for improving the accuracy and performance of the models, making them more effective in real-world applications. “Matrices are the fundamental building blocks of machine learning algorithms, enabling the efficient representation, manipulation, and optimization of data to unlock powerful insights and predictive capabilities.” In conclusion, the Matrices in Machine Learning and their Role in Machine Learning are indispensable in the field of machine learning. By harnessing the power of matrices, researchers and practitioners can develop sophisticated models that can analyze and make predictions on complex data, driving advancements in various domains, from image recognition to natural language processing and beyond. Matrix Decompositions In the realm of linear algebra and machine learning, matrix decompositions play a pivotal role in unraveling the intricate relationships within data. Two powerful techniques, Eigenvalues and Eigenvectors, and Singular Value Decomposition (SVD), offer profound insights that enable us to analyze and understand the underlying structures of matrices. Eigenvalues and Eigenvectors: Revealing the Essence Eigenvalues and eigenvectors are fundamental concepts in matrix theory, providing a unique window into the properties of a matrix. Eigenvalues, essentially scalar quantities, represent the scaling factors that transform a matrix’s input, while eigenvectors, the corresponding vectors, define the directions in which this transformation occurs. By understanding these key elements, we can gain invaluable insights into the behavior and characteristics of complex systems. Singular Value Decomposition (SVD): A Versatile Approach The Singular Value Decomposition (SVD) is a powerful matrix decomposition technique that decomposes a matrix into three component matrices, revealing its underlying structure. This decomposition allows us to analyze and manipulate data in a more efficient and meaningful way, with applications ranging from data compression and noise reduction to image processing and recommendation systems. Through the exploration of these matrix decomposition methods, we unlock a deeper understanding of the patterns, relationships, and transformations inherent in data. By leveraging these techniques, we can uncover the hidden complexities that lie within matrices, paving the way for more effective solutions in a wide array of fields, from linear algebra to machine learning. Technique Description Applications • Analyzing the behavior of dynamical systems Scalar quantities and corresponding vectors that represent the scaling and directional transformations of a • Solving systems of linear equations Eigenvalues and Eigenvectors matrix. • Determining the stability and equilibrium of physical • Data compression and dimensionality reduction Singular Value Decomposition A matrix decomposition technique that decomposes a matrix into three component matrices, revealing its • Noise reduction and signal processing (SVD) underlying structure. • Recommendation systems and collaborative filtering • Image processing and computer vision By mastering these matrix decomposition techniques, we unlock a new level of understanding in the world of linear algebra and machine learning, paving the way for groundbreaking insights and innovative solutions. The Role of Matrices Matrices are mathematical structures that play a vital role in various fields, from linear algebra to machine learning. Their versatility and power make them indispensable tools for solving complex problems and advancing scientific and technological progress. One of the key strengths of matrices is their ability to represent and manipulate data in a systematic and organized manner. They can be used to store and process large amounts of information, enabling efficient data analysis, visualization, and transformation. This makes matrices essential for applications such as data analysis, image processing, and computer graphics. Furthermore, the matrix operations of addition, subtraction, multiplication, and inversion allow for the manipulation of data in ways that are crucial for many mathematical and scientific disciplines. These operations form the foundation of linear algebra, which is a fundamental branch of mathematics with far-reaching implications in fields like physics, engineering, and economics. Beyond their mathematical significance, matrices also play a pivotal role in the rapidly evolving field of machine learning. They are used to represent and transform data, enabling advanced algorithms and techniques like neural networks and support vector machines to uncover patterns, make predictions, and solve complex problems. In summary, the Role of Matrices extends far beyond the confines of linear algebra. They are fundamental tools that empower researchers, scientists, and technologists to push the boundaries of knowledge and innovation across a wide range of disciplines. As our world becomes increasingly data-driven, the significance of matrices will only continue to grow, solidifying their place as indispensable components of modern problem-solving and scientific exploration. Matrix Transformations Matrices are versatile mathematical tools that play a crucial role in linear algebra and machine learning. Beyond their fundamental operations, matrices can be employed to perform transformations on data, such as rotation and scaling. These transformations hold immense significance in various applications, from computer graphics and image processing to data visualization. Mastering Matrix Rotation Matrix rotation allows for the precise manipulation of the orientation of objects or data within a given coordinate system. By applying a rotation matrix, you can seamlessly rotate elements around a specific axis, opening up a world of possibilities in computer graphics and image editing. This capability is particularly valuable in 3D modeling, where accurately rotating objects is essential for creating dynamic and realistic scenes. Scaling with Matrices Alongside rotation, matrix scaling is another powerful transformation that enables the resizing of elements without distorting their proportions. Whether you’re working on image processing, data visualization, or any other field that requires adjusting the size of objects or data points, matrices provide a robust and efficient solution. By applying a scaling matrix, you can easily expand or contract elements while preserving their essential characteristics. The versatility of matrix transformations, particularly matrix rotation and matrix scaling, is a testament to the depth and breadth of matrix applications. As you delve into the world of linear algebra and machine learning, understanding these transformations will equip you with the tools to unlock new possibilities in your projects and unlock new frontiers in data analysis and “Matrices are the Swiss Army knives of linear algebra, capable of tackling a wide range of transformations and operations with precision and efficiency.” Applications of Matrices Matrices are not merely theoretical constructs; they have a profound impact on our daily lives, particularly in the realms of computer graphics and image processing. These powerful mathematical tools facilitate the manipulation and transformation of visual data, enabling us to create captivating digital experiences and enhance the quality of digital images. Matrices in Computer Graphics In the world of computer graphics, matrices play a crucial role in the rendering and animation of three-dimensional (3D) scenes. They allow for the seamless translation, rotation, and scaling of objects, enabling the creation of dynamic and realistic visuals. Matrices are instrumental in transforming the coordinates of vertices, which are the building blocks of 3D models, ensuring that they are displayed accurately on the computer screen. Matrices in Image Processing Matrices also hold sway in the field of image processing, where they facilitate the enhancement, analysis, and manipulation of digital images. By representing images as matrices, where each element corresponds to a pixel’s color or intensity, various image processing techniques can be applied. These include edge detection, image enhancement, and image compression, all of which rely on the powerful capabilities of matrices to process and transform visual data. Application Role of Matrices Computer Graphics Transforming 3D object coordinates, enabling realistic rendering and animation Image Processing Representing and manipulating digital images, enabling techniques like edge detection and enhancement The applications of matrices in computer graphics and image processing are a testament to their versatility and importance in various fields of technology. As we continue to harness the power of these mathematical tools, we can expect to see even more innovative and captivating digital experiences in the years to come. “Matrices are the foundation for the mathematical models that power the digital world around us.” Numerical Methods with Matrices Matrices play a crucial role in various numerical methods and computational techniques used in scientific and engineering applications. These powerful mathematical tools enable us to solve complex problems, approximate solutions, and optimize intricate systems. In this section, we will explore how matrices are employed in these numerical methods, unlocking new possibilities in fields ranging from data analysis to scientific computing. Solving Systems of Linear Equations One of the primary applications of matrices in numerical methods is the solving of systems of linear equations. By representing the coefficients of these equations in matrix form, we can leverage powerful matrix operations to find the unknown variables. This approach is particularly useful in scenarios where the number of equations and variables is large, making traditional methods cumbersome and time-consuming. Approximating Solutions Matrices are also instrumental in approximating solutions to problems that cannot be solved analytically. Numerical methods like finite element analysis and finite difference methods rely on matrix formulations to discretize complex systems and generate approximate solutions. These techniques are widely used in fields such as engineering, physics, and applied mathematics, where exact solutions may be challenging or impossible to obtain. Optimization and Modeling In the realm of optimization and modeling, matrices play a pivotal role. Techniques like linear programming, quadratic programming, and convex optimization often involve matrix operations to formulate and solve complex problems. These methods are essential in decision-making, resource allocation, and system design, where finding the optimal solution is crucial. Numerical methods with matrices are not limited to these examples; they extend to a diverse range of applications, including signal processing, image analysis, and data visualization. As computational power and algorithms continue to evolve, the role of matrices in numerical methods will only become more prominent, driving innovation and advancement across various scientific and technological domains. “Matrices are the Swiss Army knives of mathematics, versatile tools that can tackle a wide range of problems in diverse fields.” In this article, we have delved into the pivotal role that matrices play in the realms of linear algebra and machine learning. Matrices are versatile mathematical constructs that enable powerful analytical capabilities, serving as fundamental tools for solving complex problems and driving advancements in various industries. The insights we’ve explored highlight the importance of matrices in linear algebra, where they facilitate matrix operations, transformations, and decompositions. These matrix-based techniques are instrumental in modeling and solving linear systems, a crucial aspect of many real-world applications. Furthermore, the article has showcased the integral role of matrices in machine learning, where they underpin key algorithms and data analysis methods, empowering predictive modeling and decision-making processes. As we conclude this exploration, it is clear that matrices are not merely abstract mathematical entities, but rather, they are the bedrock upon which innovative solutions and groundbreaking discoveries are built. By understanding the nuances and applications of matrices, we can unlock the vast potential they hold, paving the way for advancements that will continue to shape the future of linear algebra and machine learning. What is the definition and key properties of matrices? Matrices are rectangular arrays of numbers, symbols, or expressions, with unique properties that enable them to be used in a wide range of mathematical and computational applications. They possess characteristics such as dimensions, elements, and operations that can be performed on them. How are matrices important in the field of linear algebra? Matrices are fundamental constructs in linear algebra, allowing for the representation and manipulation of linear systems, transformations, and equations. They enable the analysis of complex data structures and the solving of problems that involve vectors, systems of linear equations, and various algebraic operations. What are the key matrix operations, and how are they used? The main matrix operations include addition, subtraction, multiplication, and scalar multiplication. These operations are crucial for solving linear algebraic problems, manipulating data, and performing various analytical and computational tasks. What is the role of matrices in machine learning? Matrices are integral to many machine learning algorithms, as they allow for the efficient representation and manipulation of data. They enable the development of predictive models, the extraction of insights from complex datasets, and the application of various analytical techniques in machine learning. What are matrix decompositions, and how do they contribute to data analysis? Matrix decompositions, such as eigenvalue decomposition and singular value decomposition (SVD), are powerful techniques that can reveal the underlying structure and properties of matrices. These methods are used to analyze and understand data in various fields, including linear algebra and machine learning. How are matrices used in computer graphics and image processing? Matrices play a crucial role in computer graphics and image processing, enabling the manipulation and watitoto transformation of visual data, the rendering of 3D scenes, and the enhancement and analysis of digital images. They are essential for tasks like rotation, scaling, and other geometric transformations. What are some numerical methods that utilize matrices? Matrices are integral to various numerical methods and computational techniques used in scientific and engineering applications. They are employed in solving systems of linear equations, approximating solutions, and optimizing complex problems.
{"url":"https://www.happyandraw.com/the-role-of-matrices-in-linear-algebra/","timestamp":"2024-11-07T04:31:37Z","content_type":"text/html","content_length":"813232","record_id":"<urn:uuid:3ec8c8f9-234c-498f-882d-39a1a30e86d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00393.warc.gz"}
Egyptian Rope The ancient Egyptians were said to make right-angled triangles using a rope with twelve equal sections divided by knots. What other triangles could you make if you had a rope like this? The ancient Egyptians were said to make right-angled triangles using a rope which was knotted to make twelve equal sections. If you have a rope knotted like this, what other triangles can you make? (You must have a knot at each corner.) What regular shapes can you make - that is, shapes with equal length sides and equal angles? Getting Started You could try using twelve sticks of equal length such as headless matches to try out your ideas. Student Solutions We had some lovely solutions sent in for this activity, so thank you to everybody who shared their ideas with us. Jeremy from Thailand sent us this video: Well done for finding those three equilateral shapes. Jeremy mentions that he found another triangle that wasn't equilateral - I wonder what that one would look like? Ci Hui Minh Ngoc from Kong Hwa School in Singapore sent in these ideas: Well done for finding all of the possible triangles and all of the regular shapes! Ci Hui Minh Ngoc has drawn triangles whose sides are 4 x 4 x 4, 2 x 5 x 5 and 3 x 4 x 5 units of length. Those parallelograms do have equal length sides, but they aren't classed as regular shapes. Have a close look and see if you can work out why. William sent in this picture of his solutions: He said: You can basically do any shape whose sides are a factor of 12, because Egyptian Ropes have 12 knots. So a 3 sided shape like a triangle, a 4 sided shape like a rectangle or square or rhombus or parallelogram, a 6 sided shape like a hexagon, or a 12 sided shape like a dodecagon. And then have these shapes with units that add up to 12. Good ideas, William! Can you spot which two of your shapes are actually the same? Teachers' Resources Why do this problem? This problem is one that combines knowledge of properties of shapes with addition, subtraction, multiplication and division of small numbers. It also provides an opportunity for learners to consider the effectiveness of alternative strategies. Possible approach You could use this problem during work on either number or shape. It could be introduced by sharing the picture of the triangle made from rope and asking children what they see. Invite learners to share their thoughts with the whole group and facilitate a discussion about the image. If it does not come up naturally, draw the class' attention to the fact there are twelve sections in the rope and ask learners to investigate other possible triangles. Have to hand various resources which they could use as they work on the problem in pairs. This might include, for example, headless matches, lolly sticks, cut-up drinking straws, paper, Cuisenaire rods... It would be useful to discuss how learners will know that they have found all the possible triangles. Listen out for those who work systematically, in other words they are looking for solutions in a particular order so they know they won't miss any out. Learners could then go on to the second part of the problem to find regular shapes that can be made using the same piece of string (or all twelve sticks/matches/straws...). Some may continue to work practically, some may prefer to draw sketches and others may consider the problem numerically. The final plenary could focus on which regular shapes have been possible but in particular about why it is impossible to create, for example, a pentagon. Key questions Why do you think it isn't possible to make a triangle with these two sides? How do you know you have found all the possible triangles? Can you tell me why no other triangles are possible with this piece of string? How are you trying to find regular shapes that you can make with the string? What numbers are factors of 12? How can this help you to make some regular shapes? Possible support Having twelve sticks of equal length (such as headless matches, or even pencils) to build the shapes makes this problem accessible to all children. Possible extension Learners could investigate the possible triangles made with different numbers of sticks as in the problem Sticks and Triangles.
{"url":"https://nrich.maths.org/problems/egyptian-rope","timestamp":"2024-11-05T12:09:42Z","content_type":"text/html","content_length":"46604","record_id":"<urn:uuid:9302df77-26b7-4433-b722-9be19ca545b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00527.warc.gz"}
2011 AIME I Problems/Problem 6 Suppose that a parabola has vertex $\left(\frac{1}{4},-\frac{9}{8}\right)$ and equation $y = ax^2 + bx + c$, where $a > 0$ and $a + b + c$ is an integer. The minimum possible value of $a$ can be written in the form $\frac{p}{q}$, where $p$ and $q$ are relatively prime positive integers. Find $p + q$. If the vertex is at $\left(\frac{1}{4}, -\frac{9}{8}\right)$, the equation of the parabola can be expressed in the form $\[y=a\left(x-\frac{1}{4}\right)^2-\frac{9}{8}.\]$ Expanding, we find that $\[y =a\left(x^2-\frac{x}{2}+\frac{1}{16}\right)-\frac{9}{8},\]$ and $\[y=ax^2-\frac{ax}{2}+\frac{a}{16}-\frac{9}{8}.\]$ From the problem, we know that the parabola can be expressed in the form $y=ax^ 2+bx+c$, where $a+b+c$ is an integer. From the above equation, we can conclude that $a=a$, $-\frac{a}{2}=b$, and $\frac{a}{16}-\frac{9}{8}=c$. Adding up all of these gives us $\[\frac{9a-18}{16}= a+b+c.\]$ We know that $a+b+c$ is an integer, so $9a-18$ must be divisible by $16$. Let $9a=z$. If ${z-18}\equiv {0} \pmod{16}$, then ${z}\equiv {2} \pmod{16}$. Therefore, if $9a=2$, $a=\frac{2}{9}$. Adding up gives us $2+9=\boxed{011}$ Solution 2 Complete the square. Since $a>0$, the parabola must be facing upwards. $a+b+c=\text{integer}$ means that $f(1)$ must be an integer. The function can be recasted into $a\left(x-\frac{1}{4}\right)^2-\ frac{9}{8}$ because the vertex determines the axis of symmetry and the critical value of the parabola. The least integer greater than $-\frac{9}{8}$ is $-1$. So the $y$-coordinate must change by $\ frac{1}{8}$ and the $x$-coordinate must change by $1-\frac{1}{4}=\frac{3}{4}$. Thus, $a\left(\frac{3}{4}\right)^2=\frac{1}{8}\implies \frac{9a}{16}=\frac{1}{8}\implies a=\frac{2}{9}$. So $2+9=\boxed Solution 3 To do this, we can use the formula for the minimum (or maximum) value of the $x$ coordinate at a vertex of a parabola, $-\frac{b}{2a}$ and equate this to $\frac{1}{4}$. Solving, we get $-\frac{a}{2}= b$. Enter $x=\frac{1}{4}$ to get $-\frac{9}{8}=\frac{a}{16}+\frac{b}{4}+c=-\frac{a}{16}+c$ so $c=\frac{a-18}{16}$. This means that $\frac{9a-18}{16}\in \mathbb{Z}$ so the minimum of $a>0$ is when the fraction equals -1, so $a=\frac{2}{9}$. Therefore, $p+q=2+9=\boxed{011}$. -Gideontz Solution 4 Write this as $a\left( x- \frac 14 \right)^2 - \frac 98$. Since $a+b+c$ is equal to the value of this expression when you plug $x=1$ in, we just need $\frac{9a}{16}- \frac 98$ to be an integer. Since $a>0$, we also have $\frac{9a}{16}>0$ which means $\frac{9a}{16}- \frac 98 > -\frac{9}{8}$. The least possible value of $a$ is when this is equal to $-1$, or $a=\frac 29$, which gives answer $11$. -bobthegod78, krwang, Simplest14 Solution 5 (You don't remember conic section formulae) Take the derivative to get that the vertex is at $2ax+b=0$ and note that this implies $\frac{1}{2} \cdot a = -b$ and proceed with any of the solutions above. Video Solution See also The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
{"url":"https://artofproblemsolving.com/wiki/index.php/2011_AIME_I_Problems/Problem_6","timestamp":"2024-11-08T12:55:36Z","content_type":"text/html","content_length":"54431","record_id":"<urn:uuid:91ee4f5e-b9d1-4d66-9063-6058e3e135c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00688.warc.gz"}
Discounted Cash Flow - What Is DCF? & Why Is It Important? Home >> Blog >> Discounted Cash Flow - What Is DCF? & Why Is It Important? Discounted Cash Flow - What Is DCF? & Why Is It Important? A particular kind of financial modelling tool employed to evaluate a corporation is the DCF model. A DCF model is a projection of a firm's unlevered free cash flow discounted down to today's value, often known as the Net Present Value (NPV). DCF stands for Discounted Cash Flow. This DCF model instruction manual will step-by-step instruct you on the fundamentals. Even though the theory is straightforward, each of the below-mentioned factors requires a substantial amount of technical background understanding, so let's discuss each in more detail. The three-statement financial model, which connects the financial statements, is the fundamental component of a DCF model. This DCF model training manual will walk you through the procedures required to construct one independently. The DCF model is based on the idea that a company's value is solely determined by its projected future cash flows. Consequently, defining and computing a company's cash flows is the first barrier to developing a DCF model. There are two typical methods for determining a company's cash flows. Unlevered DCF strategy: The operating cash flows are projected and discounted. You may add non-operating assets, like cash, and deduct any financing-related responsibilities, like debt, once you have a current value. Levered DCF strategy: Once cash flows, including all debt (i.e., non-equity demands), have been subtracted, estimate and discount the cash flows that are still accessible to equity stockholders. Although achieving exact equality in practice can be challenging, both should ultimately result in the same value. Since it is the most used, the unlevered DCF technique is the main topic of this guide. There are 6 DCF steps in this method: Step 1. Predicting unlevered free cash flows • After considering all operating costs and investments, step one is to anticipate the cash flows a business creates from its main activities. • "Unlevered free cash flows" are the names of these cash flows. Step 2. Estimating the terminal value • Cash flow forecasting is a temporary endeavour. At a certain moment, you should estimate a lump-sum company valuation further than the explicit forecast period to make certain high-level assumptions regarding cash flows beyond the final explicit forecast year. • The phrase "final value" refers to that total amount. Step 3. The cash flows are discounted to the current using the weighted average cost of capital. • The weighted average cost of capital (WACC) is the discount rate that incorporates the risk in terms of the unlevered free cash flows. • All operating cash flows are represented by unlevered free cash flows, which "relate" to the firm's lenders and owners. • As a result, employing the proper capital system weights, the concerns of both capital producers (i.e., debt vs. equity) must be taken into account (thus, the phrase "weighted average" price of • The business value is the discounted current value of all unlevered free cash flows. Step 4: Increase the current value of unlevered free cash flows by the value of non-operating assets. • The current value of the unlevered free cash flows must be increased to account for any non-operating assets that a business may have, including cash or investments that are merely held on the balance sheet. • For instance, if we determine that Apple has unlevered free cash flows with a current value of $700 billion and has $200 billion in cash lying about, we must add this cash. Step 5: Deduct debt and additional non-equity claims. • The DCF's main objective is to seize the equity owners' property (equity value). • Furthermore, we must deduct any lenders from the current value if a corporation has any (furthermore to any other non-equity claims made against the company). • The equity owners are entitled to any remaining funds. • In our instance, the equity worth would be determined as follows if Apple had $50 billion in debt liabilities at the time of the pricing: • $700 billion (business value) less $50 billion (debt) and $200 billion (non-operating assets) equal $850 billion. • Non-operating assets and debt demands are frequently combined to form a word known as net debt (non-operating assets and other non-equity claims). • The following formula is frequently seen: business value - net debt = equity value. Market capitalization, or how the market views the equity value, is analogous to the value calculated by the Step 6: Subtract the equity value from the outstanding shares. • We can determine the overall value to owners from the equity value. But how much is every share worth? We compute this by dividing the equity value by the outstanding diluted shares of the The total of the cash flow for every period divided by one plus the discount rate (WACC) increased to the level of the period number is the discounted cash flow (DCF) equation. Here is the Discounted Cash Flow formula: DCF = CF1/(1+r)1+ CF2/(1+r)2+ CFn/(/((1+r)n • CF = Cash Flow in the period • r = the discount or interest rate • n = the period number 3.1 Examining the Formula's Elements The net cash payments an investor gets for owning specific security during a specific period are represented by cash flow (CF) (bonds, shares, etc.) Unlevered free cash flow is commonly used as the CF when creating a business's financial forecast. The Cash Flow would be interest and principal payments whenever pricing a bond. A company's weighted average cost of capital (WACC) is often used as the discount rate for company appraisal reasons (WACC). Investors use WACC since it illustrates the necessary rate of return on their investment in the business. The discount rate for a bond would be identical to the security's interest rate. Every cash flow has a timeframe attached to it. Months, quarters, and years are typical time units. The periods could be equivalent or could be dissimilar. They are presented as a percentage of a year if they are dissimilar. The value of a company or investment is calculated using the DCF equation. Then, assuming a necessary rate of return on investment, it shows the price an investor would be prepared to pay for an investment (the discount rate). Uses of the DCF Equation Instances • To assess a company's overall worth • To determine the worth of a company's project or investment. • For bond valuation • To assess a company's stock. • Calculating a property's potential revenue • To assess the value of a company's cost-cutting strategy. • To determine the worth of everything that generates (or affects) cash flow. 5.1 Advantages of discounted cash flow Investors and businesses can determine the value of a potential investment using discounted cash flow calculator. Research can be used for a range of capital investments and initiatives when it is possible to calculate future cash flows predictably. Its projections can be changed to produce varied outcomes for different what-if situations. Customers can utilize this to take into account various projections that could be made. 5.2 Disadvantages of discounted cash flow • The main drawback of discounted cash flow estimation is that it uses guesses rather than precise numbers. As an outcome, the DCF outcome is also an estimation. It implies that for DCF to be effective, businesses and individual investors should accurately calculate a discount rate and cash flows. • The market need, the state of the economy, technology, competition, and unexpected dangers or possibilities are just a few variables that affect future cash flows. These cannot be precisely measured. Investors' judgment relies on their awareness of this inherent disadvantage. • Even if accurate estimations can be generated, DCF should be used only partially. Instead, businesses and investors should consider other well-known criteria when evaluating an investment option. For example, other typical valuation techniques might include prior transactions and similar company analyses. The term "discounted cash flow" (DCF) refers to a form of valuation that calculates an investment's value based on its anticipated future cash flows. Using estimates of how much cash investment will make in the future, DCF assessment seeks to evaluate the value of investment currently. It can assist individuals trying to determine whether to purchase securities or a firm. In addition, business owners and managers can use discounted cash flow calculations to help them choose operational and capital budgets. Frequently Asked Questions The term "discounted cash flow" (DCF) relates to a technique of valuation that calculates an investment's value based on its anticipated prospective cash flows. Using estimates of how much money an investment would make in the future, the DCF study seeks to evaluate the value of an investment today. Building a prediction of the 3 financial reports depending on predictions for the future performance of the company is the first step in the DCF model procedure. This projection normally covers five years on average. Operating Cash Flow Forecasts 4 the most frequent is that the level of uncertainty in cash flow prediction rises with every year's projection; DCF programs frequently employ forecasts for 5 or even ten years out. A DCF inquiry lasts how long? DCF normally has 45 days to conduct and complete its investigation after receiving the report. The DCF commissioner will decide, depending on sufficient information, whether the kid was actually abused or neglected after their inquiry is complete. Liked What You Just Read? Share this Post: Any Question or Suggestion Post your Thoughts
{"url":"https://www.finowings.com/Investment/discounted-cash-flow","timestamp":"2024-11-12T10:38:53Z","content_type":"text/html","content_length":"145044","record_id":"<urn:uuid:e64f8e90-510c-4767-8dca-c0af84cbe16e>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00524.warc.gz"}
Export Reviews, Discussions, Author Feedback and Meta-Reviews Submitted by Assigned_Reviewer_13 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http: this is a very nice paper, with compelling theoretical, simulated, and real data results. i have a few majorish issues, and some minor ones. -- one can choose lambda via CV or some theoretical tool. if the theoretical tool has no parameters, it is a clear win. however, there is a truncation parameter here. this manuscript did not convey to me *how* to choose b, and importantly, the extent to which the results are robust to this choice of b. if this method is to be adopted as the de facto standard, some discussion about how to choose b and robustness to that choice is necessary. -- given that the main justification of using this method over CV is computational time, one might also acknowledge that practitioners always weigh a trade-off between accuracy and time. clearly, this method is faster than CV, assuming we have a good way of choosing b. but, how accurate is it? if it is much less accurate, than the improvement in time might not be so useful. for example, in the real data example, we could simply use the average class covariances for the other subjects. this would be fast, parameter free, and maybe just as accurate? -- in eq 5, b is some constant that satisfies some properties as a function of n? please clarify more formally the assumptions on b. also, please explain b. please define the truncation kernel here. -- "we will provide a complementary analysis on the behaviour of the estimator for finite n." perhaps state a 'complementary theoretical analysis', i was led to believe you possibly meant only numerical, which of course, is much weaker. -- line 206, space missing -- remarks on thm 1: i would like more explanation of the relative size of the 3 biases. the biases are a function of b, n, s and covariances. some plotting showing the relative magnitude, say, of bias(San) vs bias(BC) would be very helpful. for example, a heatmap showing bias(San)-bias(BC) for fixed n when varying b and s, or fixed function b_n and varying n & s. -- i don't understand the simulation setting. please explain it more clearly, with equations, the notation for the 'parameter matrix' is unclear to me, what are '/' meant to denote? also, i don't know the abbreviation 'cmp'. if you are just trying to save space, i recommend removing some paragraph breaks, and keep content as clear as possible. -- a supplementary figure justifying footnote 4 is requested. -- "We average over R multivariate " ok, what do you set R to be for these simulations? -- i think a better justification for *why* one would want to estimate a covariance matrix from an AR process, rather than the dynamics matrix, is in order. in the end of the manuscript, you demonstrate an important application that totally justifies, but leading up to that, i was wondering. Q2: Please summarize your review in 1-2 sentences very nice, could become new standard, provided some guidance on choosing b is provided, and demonstration that performance is robust to this choice of b, and accuracy is not so much worse than Submitted by Assigned_Reviewer_21 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http: The authors propose a novel bias-corrected estimator of covariance matrices for autocorrelated data. They provide simulated data as well as a real-world data set on brain-computer interfacing to demonstrate the superior performance of their estimator in comparison to a standard-, a shrinkage-, and the Sancetta estimator. I believe the authors address a very interesting problem and make an important contribution. At the same time, I find the manuscript rather hard to read and the experimental result on real data not particularly convincing: * The authors do not really introduce their notation. While most notation is obvious from its context, this makes the manuscript harder to read than it would need to be. * I did not quite understand the heuristic fix of the Sancetta estimator in Section 2. * Along the same lines, I would be interested in a more detailed explanation of the bias-corrected estimator in (6). As the discussion section is primarily a summary of what the authors have done, it might be shortened to have more space in Section 2? * I am missing some of the details of the decoding procedure in Section 4. In particular, how many CSP filters were used for decoding? Which frequency band did the authors use? How did they perform * The following is the primary concern I have with this manuscript: It appears to me that the authors use only two trials to estimate the CSP filters and only pick the two most discriminative CSP filters for the plots in Figures 6 and 7. This is not what one would typically do in this setting. I suspect that this choice has been motivated by highlighting the differences between the different estimators. Furthermore, it appears to me from Figure 7 that the differences in performance between the various estimators are not a result of a better estimation of the spatial filters, but rather due to a different ranking of the CSP components. One would typically not use the best two but the best six CSP filters. From my experience, it is quite likely that this set of CSP filters would include filters for left- and right sensorimotor cortex for all estimators. If so, the differences in decoding performances between the estimators are likely to be negligible. In order to be convinced that the bias-corrected estimator outperforms any other estimator, I would like to see decoding differences on CSP filters that at least focus on the same brain regions. Typos and minor comments: * I believe the normalization term is missing in (3)? * Section 2: "rate of p" should be "rate of n"? * The first sentence of the second paragraph in Section 5 is very hard to parse. Q2: Please summarize your review in 1-2 sentences Very interesting theoretical work. The experimental results on real data, however, have been tuned to look more impressive than they would be in a realistic setting. Submitted by Assigned_Reviewer_43 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http: The aim of the paper is to provide an unbiased and consistent method for the accurate estimation of covariance matrices in presence of high dimensional dataset with small number of examples subject to internal autocorrelation that further reduce the effective size of the datasets. The solution is based on a state-of-the-art approach proposed by Sancetta [San08] where covariance matrix is shrinked toward a diagonal matrix with a shrinkage intensity proportional to the variance of the covariance matrix. In this framework the paper proposes an analytical estimate of shrinkage intensity incorporating a bias correction that relates the coefficient with the effective size of data. The proposed unbiased variance estimator is an incremental work ([San08]) and it does not provide a strong theoretical novelty. However, the advantage of proposed solution is theoretically sound, and an empirical evaluation on toy examples and on a real EEG dataset shows that the proposed estimate is actually comparable to the one in the original work and it is even better in case of small high-dimensional datasets. A comparison with CV (not just the computational cost) would have been very useful. Technically, while being relatively clear, the paper has some flaws, as many times the notation is used without any introduction of its meaning, making it sometimes difficult to follow all the formulations. Moreover, I noticed changes in the formulation along the paper (indexes inversion). It seems that X is interchangeably assumed to be organized by rows or by columns. Finally, the figures are many times difficult to understand because of missing descriptions both in the captions and in the text. Q2: Please summarize your review in 1-2 sentences The paper proposes an incremental work with a limited originality, nevertheless, the proposed solution presents some advantages which have been proven theoretically and empirically. Technically it is well written but still needs some work to make it clearer, principally correcting some mistakes in the indexes and introducing the formal notation the first time it is used. Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point. First we would like to thank the reviewers for their detailed feedback containing very helpful suggestions and their patience with numerous glitches having occurred under deadline stress. Second, we would like to restate the main messages which we will try to make clearer in a revision of the manuscript: independently of the application domain (1) neglecting autocorrelation in analytic covariance shrinkage leads to a strong bias (2) the state-of-the-art method is still biased and highly sensitive to parameter choice; our proposed method is strictly better. (3) our theoretical results translate to real world data. While (1) is not new --otherwise no state-of-the-art method would exist-- it is little known and often not taken into account by practitioners, examples include [1,2,3]. Detailed discussion of main feedback: (A) how to choose the additional parameter b? (Reviewer 13) Figure 4 shows that this is a decisive advantage of our proposed estimator: while the Sancetta estimator is very sensitive to the number of lags b, our estimator is practically invariant as long as the autocorrelation decays almost completely within the chosen number of time lags. (B) comparison to cross-validation, trade-off between computation time and accuracy (Reviewer 13, 43) - We have reanalyzed the BCI data set with cross-validation and performed the statistical test: our proposed method is slightly better for a small number of trials, but the performance difference is not significant at the 5% level. We will include the results in the figures. - In practical applications, the performance difference between cv and analytic shrinkage tends to be low: it has been observed that cross-validation is sometimes not a good predictor for BCI performance [1,5], and in ERP analysis, analytic shrinkage has become state-of-the-art [4]. - The importance of saving computation time depends on the application. For instance, very large or online applications might be very time-critical. The best response is that for many researchers and practitioners, the difference in computation time is an issue as shown by the wide usage of Ledoit-Wolf shrinkage [6]. (C) decoding process, number of trials, and number of CSP filters (Reviewer 21) The manuscript focuses on the theoretical properties of shrinkage estimation under autocorrelation and we therefore kept the section on BCI results short. Yet we agree with one of the reviewers that the section is too short and slightly unclear w.r.t. some aspects. To clarify: - the frequency band was optimized for each subject [7]. - for figure 5, the number of trials was varied between 2 and 20 per class. As the reviewer writes, the low number of trials for figure six has been chosen to highlight the differences between the - The number of CSP filters was 1-3 per class, adaptively chosen by a heuristic [7]. Hence, the difference in performance is not an effect of the ranking of filters. (D) AR(1) model and justification of the method (Reviewer 13) Maybe the AR(1) model is a bit over-emphasized in the manuscript: the theory is more general and the estimator applicable to any autocorrelated time series. In our manuscript the AR(1) model is only chosen as the most intuitive and easy to understand example for an autocorrelated time series. In fact, if one knows that the data comes from an AR(1) model, it would be better to directly estimate the model. We will try to make this more clear in a revision of the manuscript. [1] Lotte et al., ToBE 2011 http://hal.inria.fr/docs/00/52/07/54/PDF/tbme10.pdf [2] Samek et al. NIPS 2013 [3] the MNE toolbox (http://martinos.org/mne/stable/generated/mne.decoding.CSP.html) [4] Blankertz et al., Neuroimage 2010 [5] Blankertz et al., NIPS 2008 [6] Ledoit and Wolf, JoMA, 2004 [7] Blankertz et al., IEEE Signal processing magazine 2008
{"url":"https://proceedings.neurips.cc/paper_files/paper/2014/file/fa83a11a198d5a7f0bf77a1987bcd006-Reviews.html","timestamp":"2024-11-11T12:09:46Z","content_type":"application/xhtml+xml","content_length":"20589","record_id":"<urn:uuid:1fd0006b-b6b5-4b02-9c0b-8291b9d34af2>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00326.warc.gz"}
34.3 Theoretical Bounds on Sorting | CS61B Textbook So far, the fastest sorts we have gone over have had $\Theta(N*logN)$as the worse case time. This begs the question; can we get a sorting algorithm that would have a better asymptotic analysis? Let's say that we have "the algorithm ultimate comparison sort" known as TUCS which is the fastest possible comparison sort. We don't know the details of this yet-to-be-discovered algorithm as it has not been discovered yet. Let's additionally say that $R(N)$is the worst-case runtime of TUCS and give ourselves the challenge of finding the best $\Theta$ and $O$ bounds for $R(N)$. (This might seem a little odd since we really don't know the details of it, but we can find $R(N)$). As a starting point, we can go ahead and have the $O$ bound of $R(N)$ be $O(N*logN)$. Why? If this is the fastest sorting algorithm it must be at least just as good as the algorithms we already have! Thus it must at least be as good as Mergesort's worst case as otherwise we would have a contradiction in calling this algorithm TUCS. What about the starting point for the $\Omega$ bound of $R(N)$? Since we don't quite know any details about it we can start by giving it the best possible runtime of $\Omega(1)$. But can we make this It turns out we can as if the algorithm sorts all of N elements, it must have gone through all of the N elements at some point of sorting it. Thus, we can arrive at a tighter bound of $\Omega(N)$. But can we make it tighter? The answer is yes, but to see the argument for why we have to consider the following game. A Game of Cat and Mouse (and Dog) Let's say we place Tom the Cat, Jerry the Mouse, and Spike the Dog in opaque soundproof boxes labeled A, B, and C. We want to figure out which is which using a scale. Let's say we knew that Box A weighs less than Box B, and that Box B weighs less than Box C. Could we tell which is which? The answer turns out to be yes! We can find a sorted order by just considering these two inequalities. What if Box A weighs more than Box B, and that Box B weighs more than Box C? Could we still find out which is which? The answer turns out to be yes! So far, we have been able to solve this game with just two inequalities! Let's go ahead and try a third case scenario. Could we know which is which if Box A weighs less than Box B, but Box B weighs more than Box C? The answer turns out to be no. It turns out to have two possible orderings: a: mouse, c: cat, b: dog (sorted order: acb) c: mouse, a: cat, b: dog (sorted order: cab) So while we were on a really great streak of solving the game with only two inequalities, we will need a third to solve all possibilities of the game. If we add the inequality a < c then this problem goes away and becomes solvable. Now that we've created this table, we can create a tree to help us solve this sorting game of cat and mouse (and dog).
{"url":"https://cs61b-2.gitbook.io/cs61b-textbook/34.-sorting-and-algorithmic-bounds/34.3-theoretical-bounds-on-sorting","timestamp":"2024-11-14T05:36:39Z","content_type":"text/html","content_length":"539680","record_id":"<urn:uuid:8f973b94-fb45-4e6f-909e-6c2eb1c3bee5>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00694.warc.gz"}