Equations You Should Know: The Ones That Changed the Future of Physics

Physics is often called the backbone of science: nearly every great invention is linked to it in some way. Physics is broadly divided into two branches, classical and modern. Classical physics is the older branch, associated with figures such as Galileo and Archimedes; here we will sketch only modern physics and the theories that developed and reshaped the whole subject. Many laws could be listed, but we will cover the fundamental ones: those behind the instruments and devices we use in our daily life, and the theories that broadened our thinking about the world beyond the Earth.

1. Newton's Law of Universal Gravitation: The law was developed by the great scientist Isaac Newton and first published in the Principia in July 1687. It is special because it describes the attractive force between any two masses, and so explains how the planets revolve around the Sun in their orbits and the factors affecting that revolution. It is the foundation of aerospace education and of the calculations needed to launch satellites into space.

2. Theory of Relativity: This theory was proposed by Albert Einstein in 1905. The equation taught to every high-school physics student is E = mc^2, the mass-energy equivalence of special relativity. Why is it so important? Relativity is the most widely accepted theory of the relationship between space and time, arriving roughly two hundred years after Newton's law of gravitation. Much of the knowledge we have about space and the universe rests on Einstein's great relation, and many later theories were developed on top of relativity.

3. Maxwell's Equations: These were formulated by James Clerk Maxwell in around the 1860s. They describe how charged particles give rise to electric and magnetic fields, and by now you can see why the topic is so important: it is the basic theory behind electric trains, motors of all kinds, and magnetic instruments. Maxwell's equations are core syllabus for electrical and electronics graduates throughout their curriculum. A familiar application is induction cooking; if you have ever wondered what principle it runs on, it is electromagnetic induction.

4. Laws of Thermodynamics: Of the laws of thermodynamics, the second plays a key role. It was developed by Rudolf Clausius in 1865 and describes how heat flows spontaneously from hotter bodies to colder ones. This law underlies the instruments and appliances used in industry, in the household, and everywhere we look: the refrigerator in your house, motors, and compressors all run on this principle. In terms of study, it is especially important for students of mechanical engineering.

5. Schrödinger's Equation: This equation was proposed by the Austrian physicist Erwin Schrödinger in 1926. It is the fundamental equation of quantum physics, describing how the quantum state of a quantum system evolves with time. It helped in developing various instruments, including the electron microscope and the microchip, and in separating the facts from the myths about the atom and its constituents.

6. Newton's Laws of Motion: Everyone knows the incident that led Newton to the discovery of gravity: yes, the falling apple. These laws are usually classed as classical physics, but they are just as useful in modern physics. They are three in number: the first concerns inertia, the second relates force to the change of momentum, and the third states that every action has an equal and opposite reaction. The laws are fundamental for all engineers.

7. Logarithms: Logarithms are the classic technique for taming large calculations. They were developed by the Scottish laird John Napier of Merchiston. Solving a multiplication of large numbers by hand, for example 3456 × 56789, is time-consuming and needs good skill; to overcome this, Napier came up with logarithms, which turn such multiplications into additions, a boon for calculation. With the advent of computers and calculators, logarithm tables themselves are less used because the arithmetic is built in, yet scientists, astronomers, and engineers still use logarithms in their day-to-day work: in exponential equations, compound interest, and radioactive decay.

8. Coulomb's Law of Electrostatics: When it comes to electricity, electrostatics is the branch dealing with charges at rest, and Coulomb's law is its basic law. It explains how the force between two charges depends on their magnitudes and the distance between them. Interestingly, it helped in developing many of the electrical devices we use today, from fans and their capacitors to generators and more.

Last but not least, many laws are proposed by scientists day by day, and each has its own importance, its own dominance, and its own applications; but the major laws that led to the greatest transformations in our society are the ones discussed above. If you are a science student, you probably know all of these laws already. Even a general reader, though, should know at least the basic terminology, because every appliance we use works on some principle, and since we use it, we ought to know the basic law governing that appliance or instrument.
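As a rough illustration of the first two equations, here is a minimal Python sketch; the constants and the Earth-Moon figures are standard textbook values, not taken from this article:

```python
# Illustrative sketch: evaluating Newton's gravitation law and
# Einstein's mass-energy equivalence with textbook constants.

G = 6.674e-11      # gravitational constant, N·m²/kg²
c = 2.998e8        # speed of light, m/s

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G·m1·m2 / r²."""
    return G * m1 * m2 / r**2

def rest_energy(m):
    """Einstein's mass-energy equivalence: E = m·c²."""
    return m * c**2

# Force between the Earth (5.972e24 kg) and the Moon (7.348e22 kg)
# at an average separation of about 3.844e8 m:
f = gravitational_force(5.972e24, 7.348e22, 3.844e8)
print(f"Earth-Moon force ≈ {f:.3e} N")       # on the order of 2e20 N

# Rest energy locked in 1 kg of mass:
print(f"E(1 kg) ≈ {rest_energy(1.0):.3e} J")  # ≈ 9.0e16 J
```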
{"url":"https://www.isrgrajan.com/equations-you-should-know-that-changed-the-future-of-physics.html","timestamp":"2024-11-12T19:58:19Z","content_type":"text/html","content_length":"110205","record_id":"<urn:uuid:67877140-d141-41bb-8937-a0434138da8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00715.warc.gz"}
LPC Microcontrollers Knowledge Base

Recently, customers reported that the number of PWM channels generated by the SCTimer module is inconsistent between the LPC55S06 user manual and the data sheet. There are several PWM generation formats, so the maximum number of PWM channels the SCTimer can generate differs between them; since neither document makes this very clear, this article gives a specific analysis. The limit depends mainly on SCTimer resources, namely the number of events and output channels. For all LPC series, the mechanism by which the SCTimer generates PWM is the same, so this article takes the LPC55S06 as an example.

From the LPC55S06 user manual, the SCTimer/PWM supports:
– Eight inputs.
– Ten outputs.
– Sixteen match/capture registers.
– Sixteen events.
– Thirty-two states.

According to the control mode used to generate the PWM wave, the analysis is divided into single-edge PWM control, dual-edge PWM control, and center-aligned PWM control.

1. Single-edge PWM control

The figure below shows two single-edge PWM waves with different duty cycles and the same PWM cycle length. As the figure shows, the two PWM waves require three events: when the counter reaches 41, 65, and 100 respectively. Because the cycle length is the same for all outputs, only one period event is needed.

Summary: All PWM waves share the same cycle length, so only one period event is required. The duty cycle of each PWM is different, so each PWM needs one additional event. The SCTimer of the LPC55S06 has 16 events; one is used as the PWM period event, leaving 15, so in theory 15 PWM channels could be generated. However, the LPC55S06 has only 10 outputs, so it can generate at most 10 single-edge PWM waves.

2. Dual-edge PWM control

The figure below shows three dual-edge PWM waves with different duty cycles and the same PWM cycle length. As the figure shows, the three PWM waves require seven events: when the counter reaches 1, 27, 41, 53, 65, 78, and 100.

Summary: The PWM cycle length needs one event, and each PWM duty cycle needs two events to trigger (one per edge). The SCTimer of the LPC55S06 has 16 events; one serves as the PWM period event, leaving 15, so it can generate at most 7 dual-edge PWM waves.

3. Center-aligned PWM control

Center-aligned PWM control is a special case of dual-edge PWM control. The figure below shows two center-aligned PWM waves with different duty cycles and the same cycle length. As the figure shows, the two center-aligned PWM waves need three events in total: one for the PWM cycle length and one duty-cycle trigger for each wave. Because the waveform is left-right symmetric, only one event is needed to control the duty cycle of each PWM.

Summary: All PWM waves share the same cycle length, so one event is required. The duty cycle of each PWM channel is different, but because of the symmetry each channel needs only one event trigger. The SCTimer of the LPC55S06 has 16 events; one is used for the cycle length, leaving 15, so in theory 15 PWM channels could be generated, but the LPC55S06 has only 10 outputs, so it can generate at most 10 center-aligned PWM waves.

Summary: Maximum number of PWM channels generated by the LPC55S06 SCTimer:
Single-edge PWM control: 10
Dual-edge PWM control: 7
Center-aligned PWM control: 10

The number of SCTimer events and output channels differs from chip to chip, but the analysis method is the same, so customers can check whether the SCTimer in a given chip meets their requirements.
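The event-budget arithmetic above can be condensed into a small sketch (a hypothetical helper for illustration, not NXP code): one event is reserved for the shared period, each channel consumes one or two further events depending on the mode, and the result is capped by the number of output pins.

```python
def max_pwm_channels(num_events, num_outputs, events_per_channel):
    """Events needed: 1 shared period event + N events per PWM channel.
    The result is also capped by the number of output pins."""
    by_events = (num_events - 1) // events_per_channel
    return min(by_events, num_outputs)

# LPC55S06 SCTimer: 16 events, 10 outputs
print(max_pwm_channels(16, 10, 1))  # single-edge:    10
print(max_pwm_channels(16, 10, 2))  # dual-edge:       7
print(max_pwm_channels(16, 10, 1))  # center-aligned: 10
```

The same function can be reused for other LPC parts by substituting their event and output counts.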
{"url":"https://community.nxp.com/t5/LPC-Microcontrollers-Knowledge/tkb-p/lpc%40tkb/label-name/lpc51uxx?labels=lpc51uxx","timestamp":"2024-11-02T08:08:44Z","content_type":"text/html","content_length":"183588","record_id":"<urn:uuid:ee71dcc1-f4d8-473e-ab7f-d2ab1a55941b>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00045.warc.gz"}
Maximum Likelihood Estimation with Missing Data

Suppose that a portion of the sample data is missing, where missing values are represented as NaNs. If the missing values are missing-at-random and ignorable (Little and Rubin [7] give precise definitions for these terms), it is possible to use a version of the Expectation Maximization (EM) algorithm of Dempster, Laird, and Rubin [3] to estimate the parameters of the multivariate normal regression model. The algorithm used in Financial Toolbox™ software is the ECM (Expectation Conditional Maximization) algorithm of Meng and Rubin [8] with enhancements by Sexton and Swensen [9].

Each sample $z_k$, for k = 1, ..., m, is either complete with no missing values, empty with no observed values, or incomplete with both observed and missing values. Empty samples are ignored, since they contribute no information. To understand the missing-at-random and ignorable conditions, consider the example of stock price data before an IPO. For a counterexample, censored data, in which all values greater than some cutoff are replaced with NaNs, does not satisfy these conditions.

In sample k, let $x_k$ represent the missing values in $z_k$ and $y_k$ the observed values. Define a permutation matrix $P_k$ so that

$z_k = P_k \begin{bmatrix} x_k \\ y_k \end{bmatrix}$

for k = 1, ..., m.

ECM Algorithm

The ECM algorithm has two steps: an E, or expectation, step and a CM, or conditional maximization, step. As with maximum likelihood estimation, the parameter estimates evolve through an iterative process, where the estimates after t iterations are denoted $b^{(t)}$ and $C^{(t)}$.

The E step forms conditional expectations for the elements of missing data:

$E\left[X_k \mid Y_k = y_k;\, b^{(t)}, C^{(t)}\right] \quad \text{and} \quad \operatorname{cov}\left[X_k \mid Y_k = y_k;\, b^{(t)}, C^{(t)}\right]$

for each sample $k \in \{1, \dots, m\}$ that has missing data.

The CM step proceeds in the same manner as the maximum likelihood procedure without missing data. The main difference is that the moments involving missing data are imputed from the conditional expectations obtained in the E step. The E and CM steps are repeated until the log-likelihood function ceases to increase. One of the important properties of the ECM algorithm is that it is guaranteed to find a maximum of the log-likelihood function and, under suitable conditions, this maximum can be a global maximum.

Standard Errors

The negative of the expected Hessian of the log-likelihood function and the Fisher information matrix are identical if no data is missing. However, if data is missing, the Hessian, which is computed over the available samples, accounts for the loss of information due to missing data. So the Fisher information matrix provides standard errors that are a Cramér-Rao lower bound, whereas the Hessian matrix provides standard errors that may be greater if data is missing.

Data Augmentation

The ECM functions do not “fill in” missing values as they estimate model parameters, but in some cases you may want to fill them in. Although you could fill in the missing values in your data with their conditional expectations, you would get optimistic and unrealistic estimates, because conditional expectations are not random realizations. Several approaches are possible, including resampling methods and multiple imputation (see Little and Rubin [7] and Shafer [10] for details).

A somewhat informal sampling method for data augmentation is to form random samples for the missing values from the conditional distribution of the missing values. Given the parameter estimates $\hat{b}$ and $\hat{C}$, each observation has moments $E[Z_k]$ and $\operatorname{cov}(Z_k)$ for k = 1, ..., m, where the parameter dependence has been dropped on the left-hand sides for notational convenience. For observations with missing values partitioned into missing values $X_k$ and observed values $Y_k = y_k$, you can form conditional estimates for any subcollection of random variables within a given observation. Thus, given the estimates $E[Z_k]$ and $\operatorname{cov}(Z_k)$ based on the parameter estimates, you can create conditional estimates using standard multivariate normal distribution theory. Given these conditional estimates, you can simulate random samples for the missing values from the conditional distribution

$X_k \sim N\left(E\left[X_k \mid y_k\right],\ \operatorname{cov}\left(X_k \mid y_k\right)\right).$

The samples from this distribution reflect the pattern of missing and nonmissing values for observations k = 1, ..., m. You must sample from the conditional distribution for each observation to preserve the correlation structure with the nonmissing values at each observation. If you follow this procedure, the resultant filled-in values are random and generate mean and covariance estimates that are asymptotically equivalent to the ECM-derived mean and covariance estimates. Note, however, that the filled-in values are random draws from the distribution estimated over all the data and may not reflect the “true” values for a particular observation.

See Also
mvnrmle | mvnrstd | mvnrfish | mvnrobj | ecmmvnrmle | ecmmvnrstd | ecmmvnrfish | ecmmvnrobj | ecmlsrmle | ecmlsrobj | ecmnmle | ecmnstd | ecmnfish | ecmnhess | ecmnobj | convert2sur | ecmninit

Related Topics
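The conditional sampling step can be sketched in NumPy. This is an illustrative sketch of standard multivariate normal conditioning, not code from Financial Toolbox; the function name `sample_missing` and the toy parameter values are made up for the example:

```python
import numpy as np

def sample_missing(z, mu, Sigma, rng):
    """Draw one random fill-in for the NaN entries of observation z
    from N(E[X|Y=y], cov(X|Y=y)) under a fitted model N(mu, Sigma)."""
    z = np.asarray(z, dtype=float)
    miss = np.isnan(z)
    if not miss.any():
        return z
    obs = ~miss
    S_oo = Sigma[np.ix_(obs, obs)]   # cov of observed block
    S_mo = Sigma[np.ix_(miss, obs)]  # cross-covariance
    S_mm = Sigma[np.ix_(miss, miss)] # cov of missing block
    # Standard conditioning formulas for the multivariate normal:
    w = np.linalg.solve(S_oo, z[obs] - mu[obs])
    cond_mean = mu[miss] + S_mo @ w
    cond_cov = S_mm - S_mo @ np.linalg.solve(S_oo, S_mo.T)
    filled = z.copy()
    filled[miss] = rng.multivariate_normal(cond_mean, cond_cov)
    return filled

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
z = np.array([np.nan, 1.0])   # first component missing
print(sample_missing(z, mu, Sigma, rng))
```

Averaging many such draws for this toy example recovers the conditional mean 0.8, while any single draw remains a random realization, which is exactly the property the text contrasts with plain conditional-expectation imputation.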
{"url":"https://kr.mathworks.com/help/finance/maximum-likelihood-estimation-with-missing-data.html","timestamp":"2024-11-03T10:55:51Z","content_type":"text/html","content_length":"82151","record_id":"<urn:uuid:c6e08879-80b6-469d-b7f3-dbdefb00fe61>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00729.warc.gz"}
Addition and Subtraction of Measuring Length | How Do You Add and Subtract Lengths?

The addition and subtraction of measured lengths can be done very easily: it works just like ordinary addition and subtraction, including the usual carrying and borrowing. To add and subtract lengths, we first have to know the relationships between the units of length. This page explains the definition of length, the relations between the units, and how to add and subtract lengths. You can also check the solved examples on adding and subtracting lengths for a better understanding of the concept.

Length – Definition

Length is defined as the size of an object from its starting point to its endpoint. Length is measured in m, cm, dm, km, etc. For smaller lengths we use mm, cm, dm, and m; for measuring larger lengths we use km. For example, in day-to-day life we measure the length of cloth, the height of a person, or the length of a road. The metre is the standard unit of length. If a metre is divided into 100 equal parts, each part is called a centimetre. In short notation, metres and centimetres are written m and cm. We can measure metres and centimetres using a scale or measuring tape.

The relations between metre, centimetre, and kilometre are as follows:
1 m = 100 cm
100 cm = 1 m
1 km = 1000 m

Adding and Subtracting Lengths – Examples

Here we explain how to add and subtract measured lengths with the help of examples.

Example 1: A room has length 30 m and breadth 15 m. Find the sum of the length and breadth, and the difference between them.
Sum of length and breadth = 30 + 15 = 45 m.
Difference between length and breadth = 30 − 15 = 15 m.

Example 2: If the length of a bed is 425 cm and the breadth is 210 cm, by how much does the length exceed the breadth? Also find the sum of the length and breadth in m and cm.
Length of the bed = 425 cm; breadth of the bed = 210 cm.
The length exceeds the breadth by 425 − 210 = 215 cm, i.e. 2 m 15 cm.
The sum of the length and breadth is 425 + 210 = 635 cm, i.e. 6 m 35 cm.

Example 3: Sita has a rope 8 m long and Gita has a rope 10 m long. What is the total length of the two ropes, and what is the difference between them?
Sita's rope = 8 m; Gita's rope = 10 m.
Total length of the two ropes = 8 + 10 = 18 m.
Difference between the two ropes = 10 − 8 = 2 m.

Example 4: Sindhu has a piece of material of length 80 m and breadth 100 m. Find the sum and the difference of the length and breadth.
Sum of length and breadth = 100 + 80 = 180 m.
Difference of length and breadth = 100 − 80 = 20 m.

Example 5: Raju has a bat of length 40 m and breadth 10 m. Find the sum of its length and breadth.
Sum of length and breadth = 40 + 10 = 50 m.

Example 6: Sita draws two line segments. One is 8 cm long and the other is 5 cm long.
(i) Total length of both line segments = 8 + 5 = 13 cm.
(ii) Difference between the lengths of the segments = 8 − 5 = 3 cm.

Example 7: Add 7 m 20 cm and 9 m 30 cm.
First arrange the metres and centimetres in columns. Add the centimetres column: 20 + 30 = 50; place 50 under the centimetres column. Add the metres column: 7 + 9 = 16; place 16 under the metres column.
Therefore, 7 m 20 cm + 9 m 30 cm = 16 m 50 cm.

Example 8: Add 42 cm and 76 cm.
42 + 76 = 118, so the sum is 118 cm.

Example 9: Subtract 12 m 18 cm from 18 m 38 cm.
First arrange the metres and centimetres in columns. Subtract 18 cm from 38 cm and write 20 in the cm column; subtract 12 from 18 and write 6 in the m column.
Therefore, the difference is 6 m 20 cm.

Example 10:
(i) Subtract 5 m 35 cm from 6 m 89 cm.
First arrange the metres and centimetres in columns. Subtract 35 cm from 89 cm and write 54 in the cm column; subtract 5 from 6 and write 1 in the m column.
Therefore, the difference is 1 m 54 cm.
(ii) Subtract 1 m 42 cm from 3 m 63 cm.
First arrange the metres and centimetres in columns. Subtract 42 cm from 63 cm and write 21 in the cm column; subtract 1 from 3 and write 2 in the m column.
Therefore, the difference is 2 m 21 cm.
(iii) Sita bought a rope 25 m long. If 6 m of the rope is cut off, find the length of the remaining rope.
Length of the rope = 25 m; length cut off = 6 m.
Length of the rope after the cut = 25 − 6 = 19 m.
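The carrying rule used in these examples (100 cm carries into 1 m) can be sketched in a few lines of Python; these are illustrative helpers, not part of the lesson:

```python
def add_lengths(m1, cm1, m2, cm2):
    """Add two lengths given in metres and centimetres,
    carrying 100 cm into 1 m as in the worked examples."""
    total_cm = (m1 * 100 + cm1) + (m2 * 100 + cm2)
    return divmod(total_cm, 100)   # -> (metres, centimetres)

def subtract_lengths(m1, cm1, m2, cm2):
    """Subtract the second length from the first (assumed larger),
    borrowing 1 m as 100 cm when needed."""
    diff_cm = (m1 * 100 + cm1) - (m2 * 100 + cm2)
    return divmod(diff_cm, 100)

print(add_lengths(7, 20, 9, 30))         # Example 7: (16, 50)
print(subtract_lengths(18, 38, 12, 18))  # Example 9: (6, 20)
```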
{"url":"https://eurekamathanswerkeys.com/addition-and-subtraction-of-measuring-length/","timestamp":"2024-11-03T15:31:16Z","content_type":"text/html","content_length":"42837","record_id":"<urn:uuid:6911b625-5b3d-4575-961d-109f8f1013d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00032.warc.gz"}
DeepSource | The Modern Static Analysis Platform

Cyclomatic Complexity

Cyclomatic complexity is a metric for measuring the complexity of a program. It indicates the maximum number of linearly independent paths through which control may flow in a program during its execution. It has many applications in understanding how well structured a program is and in ensuring thorough testing of source code. A program that has no branching statements, like if-else or switch-case, has a cyclomatic complexity of 1. A single if statement (or an if-else statement) increases the complexity to 2, as there are now two paths through which control may flow.

How is cyclomatic complexity calculated?

Cyclomatic complexity is computed on a directed graph that maps each statement in the program to a node. A directed edge connects two nodes if control may flow from the first node to the second. Each exit node is connected back to the entry node, and the cycles this creates give the metric its name. The complexity, denoted by M, is defined as M = E − N + 2, where E is the number of edges in the graph and N is the number of nodes. Here we see that if a program has no branching statements, E will be one less than N, as there is exactly one edge between consecutive nodes, which gives a complexity of 1.

What is the significance of cyclomatic complexity?

• Cyclomatic complexity helps visualize the flow of control and thus understand the inherent complexity of an algorithm. If the complexity of a module exceeds a threshold such as 10, it should be broken into simpler modules.
• Cyclomatic complexity analysis determines the number of test cases sufficient to test all paths of a program thoroughly: code with a complexity of M requires at most M test cases to test every possible branch.
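The formula M = E − N + 2 can be checked with a small sketch (illustrative only; the node names and graphs are made up):

```python
def cyclomatic_complexity(edges):
    """M = E - N + 2 for a control-flow graph given as a list of
    (from_node, to_node) pairs; N is the set of distinct nodes."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Straight-line code A -> B -> C (no branches): E = 2, N = 3, M = 1
print(cyclomatic_complexity([("A", "B"), ("B", "C")]))

# One if-else: A branches to B or C, both rejoin at D:
# E = 4, N = 4, M = 2
print(cyclomatic_complexity(
    [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]))
```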
{"url":"https://deepsource.com/glossary/cyclomatic-complexity","timestamp":"2024-11-13T05:16:02Z","content_type":"text/html","content_length":"68760","record_id":"<urn:uuid:d2a77c9b-3a37-40f1-9685-8a7c461125b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00673.warc.gz"}
Elimination Method – Steps, Techniques, and Examples – The Story of Mathematics

The elimination method is an important technique widely used when working with systems of linear equations, and it is essential to add it to your toolkit of algebra techniques for tackling word problems that involve such systems. The elimination method allows us to solve a system of linear equations by “eliminating” variables: we eliminate variables by manipulating the given system of equations. Knowing the elimination method by heart lets you work through mixture, work, and number problems with ease. In this article, we’ll break down the process of solving a system of equations using the elimination method, and we’ll also show you applications of this method when solving word problems.

What Is the Elimination Method?

The elimination method is a process that uses elimination to reduce simultaneous equations to one equation with a single variable. The system of linear equations is thereby reduced to a single-variable equation, which is much easier to solve. This is one of the most helpful tools when solving systems of linear equations.

\begin{aligned}\begin{matrix}&\underline{\begin{array}{cccc}&{\color{red} \cancel{-40x}} &+ 12 y&=-400\phantom{x}\\+&{\color{red} \cancel{40x}}&+ 2y&=-300\phantom{1}\end{array}}\\ &\begin{array} {cccc}\phantom{+xx} &\phantom{7xxx}&14y&=-700\\&&y&=\phantom{}-50\end{array}\end{matrix}\end{aligned}

Take a look at the equations shown above. By adding the equations, we’ve managed to eliminate $x$ and leave a simpler linear equation, $14y = -700$. From this, it is easy to find the value of $y$ and then the value of $x$. This example shows how straightforward it is to solve a system of equations by manipulating the equations.
The elimination method is possible thanks to the following algebraic properties:

• Multiplication Properties
• Addition and Subtraction Properties

In the next section, we’ll show you how these properties are applied. We’ll also break down the process of solving a system of equations using the elimination method.

How To Solve a System of Equations by Elimination

To solve a system of equations, rewrite the equations so that when the two equations are added or subtracted, one of the variables is eliminated. The goal is to rewrite the equations so that it is easy for us to eliminate a term. These steps will help you rewrite the equations and apply the elimination method:

1. Multiply one or both of the equations by a strategic factor.
□ Focus on making one of the terms the negative equivalent of, or identical to, the corresponding term in the other equation.
□ Our goal is to eliminate the terms sharing the same variable.
2. Add or subtract the two equations, depending on the result of the previous step.
□ If the terms we want to eliminate are negative equivalents of each other, add the two equations.
□ If the terms we want to eliminate are identical, subtract the two equations.
3. Now that we’re working with a linear equation, solve for the remaining variable’s value.
4. Substitute the known value into either of the original equations.
□ This results in another equation with one unknown.
□ Use this equation to solve for the remaining unknown variable.

Why don’t we apply these steps to solve the system of linear equations $ \begin{array}{ccc}x&+\phantom{x}y&=5\\-4x&+3y&= -13 \end{array} $? We’ll highlight the steps applied to help you understand the process:

1. Multiply both sides of the first equation by $4$ so that we end up with $4x$.
\begin{aligned}\begin{array}{ccc}{\color{Teal}4}x&+{\color{Teal}4}y&={\color{Teal}4}(5)\\-4x&+3y&= -13 \\&\downarrow\phantom{x}\\4x&+ 4y&= 20\\ -4x&+3y&= -13\end{array} \end{aligned}

We want $4x$ in the first equation so that we can eliminate $x$. We could also eliminate $y$ first by multiplying both sides of the first equation by $3$; that’s for you to work out on your own, but for now, let’s continue by eliminating $x$.

2. Since we’re working with $4x$ and $-4x$, add the equations to eliminate $x$ and obtain one equation in terms of $y$.

\begin{aligned}\begin{matrix}&\underline{\begin{array}{cccc}\phantom{+xxx}\bcancel{\color{Teal}4x}&+4y &=\phantom{+}20\\+\phantom{xx}\bcancel{\color{Teal}-4x} &+ 3y&= -13\end{array}}\\ &\begin{array} {cccc}\phantom{+} & \phantom{xxxx}&7y&=\phantom{+}7\end{array}\end{matrix} \end{aligned}

3. Solve for $y$ from the resulting equation.

\begin{aligned}7y &= 7\\y &= 1\end{aligned}

4. Substitute $y =1$ into either of the equations from $\begin{array}{ccc}x&+\phantom{x}y&=5\\-4x&+3y&= -13 \end{array} $. Use the resulting equation to solve for $x$.

\begin{aligned}x + y&= 5\\ x+ {\color{Teal} 1} &= 5\\x& =4\end{aligned}

This means that the given system of linear equations is true when $x = 4$ and $y = 1$. We can also write its solution as $(4, 1)$. To double-check the solution, you can substitute these values into the remaining equation.

\begin{aligned}-4x + 3y&= -13\\-4(4) + 3(1)&= -13\\-13&= -13 \checkmark\end{aligned}

Since the equation holds true when $x = 4$ and $y =1$, this further confirms that the solution to the system of equations is indeed $(4, 1)$. When working on a system of linear equations, apply a similar process to the one in this example. The level of difficulty may change, but the fundamental concepts needed to use the elimination method remain constant. In the next section, we’ll cover more examples to help you master the elimination method.
We’ll also include word problems involving systems of linear equations to make you appreciate this technique even more.

Example 1

Use the elimination method to solve the system of equations, $\begin{array}{ccc}4x- 6y&= \phantom{x}26 \,\,(1)\\12x+8y&= -12 \,\,(2)\end{array}$.

Inspect the two equations to see which would be easier for us to manipulate.

\begin{aligned} \begin{array}{ccc}4x- 6y&= \phantom{x}26\,\,(1)\\12x+8y&= -12\,\,(2)\end{array} \end{aligned}

Since $12x$ is a multiple of $4x$, we can multiply both sides of Equation (1) by $3$ so that the resulting equation also has a $12x$ term. This gives us $12x$ in both equations, making it possible to eliminate $x$ later.

\begin{aligned} \begin{array}{ccc}{\color{DarkOrange}3}(4x)& -{\color{DarkOrange}3}(6y)&={\color{DarkOrange}3}(26)\\12x&+8y&= -12\phantom{(26)} \\&\downarrow\phantom{x}\\12x&- 18y&= 78\phantom{(26)} \\ 12x&+8y&= -12\phantom{(26)}\end{array}\end{aligned}

Since the two resulting equations both have $12x$, subtract them to eliminate $12x$. This leaves a single equation with one variable.

\begin{aligned}\begin{matrix}&\underline{\begin{array}{cccc}\phantom{+xxx}\bcancel{\color{DarkOrange}12x}& -18y &=\phantom{+}78\\-\phantom{xx}\bcancel{\color{DarkOrange}12x} &+ 8y&= -12\end{array}}\\ &\begin{array}{cccc}\phantom{+} & \phantom{xxxx}&-26y&=\phantom{+}90\end{array}\end{matrix}\end{aligned}

Find the value of $y$ by dividing both sides of the resulting equation by $-26$.

\begin{aligned}-26y&= 90\\y&= -\dfrac{90}{26}\\&= -\dfrac{45}{13}\end{aligned}

Now, substitute $y = -\dfrac{45}{13}$ into one of the equations from $\begin{array}{ccc}4x- 6y&= \phantom{x}26 \,\,(1)\\12x+8y&= -12 \,\,(2)\end{array}$.

\begin{aligned}4x - 6y&= 26\\4x -6\left(-\dfrac{45}{13}\right)&= 26\\4x + \dfrac{270}{13}&= 26\end{aligned}

Use the resulting equation to solve for $x$, then write down the solution to our system of linear equations.
\begin{aligned}4x + \dfrac{270}{13}&= 26\\52x + 270&= 338\\52x&=68\\x&= \dfrac{17}{13}\end{aligned}

Hence, we have $x = \dfrac{17}{13}$ and $y = -\dfrac{45}{13}$. We can double-check our solution by substituting these values into the remaining equation and seeing whether the equation still holds true.

\begin{aligned}12x+8y&= -12\\ 12\left({\color{DarkOrange}\dfrac{17}{13}}\right)+ 8\left({\color{DarkOrange}-\dfrac{45}{13}}\right)&= -12\\-12 &= -12 \checkmark\end{aligned}

This confirms that the solution to our system of equations is $\left(\dfrac{17}{13}, -\dfrac{45}{13}\right)$.

We’ve shown you examples where we only manipulate one equation to eliminate one term. Let’s now try an example where we’re required to multiply both equations by different factors.

Example 2

Use the elimination method to solve the system of equations $ \begin{array}{ccc}3x- 4y&= \phantom{x}12\,\,(1)\\4x+3y&= \phantom{x}16\,\,(2)\end{array}$.

This example shows that we sometimes need to work on both linear equations before we can eliminate either $x$ or $y$. Since our first two examples showed how to eliminate the terms with $x$, let’s make it our goal to eliminate $y$ first this time. Rewrite the terms with $y$ in both equations by multiplying both sides of Equation (1) by $3$ and both sides of Equation (2) by $4$.

\begin{aligned} \begin{array}{ccc}{\color{Orchid}3}(3x)& -{\color{Orchid}3}(4y)&={\color{Orchid}3}(12)\\{\color{Orchid}4}(4x)& +{\color{Orchid}4}(3y)&={\color{Orchid}4}(16)\,\, \\&\downarrow\phantom {x}\\9x&- 12y&= 36\,\, \\ 16x&+ 12y&= 64\,\,\end{array}\end{aligned}

Now that we have $-12y$ and $12y$ in the two resulting equations, add them to eliminate $y$.
\begin{aligned} \begin{matrix}&\underline{\begin{array}{cccc}\phantom{+xxx}9x& -\bcancel{\color{Orchid}12y} &=\phantom{+}36\\+\phantom{xx}16x &+ \bcancel{\color{Orchid}12y} &= \phantom{x}64\end{array}}\\ &\begin{array}{cccc}\phantom{+} &25x&\phantom{xxxxx}&=100\end{array}\end{matrix}\end{aligned} The system of equations has now been reduced to a linear equation with $x$ as the only unknown. Divide both sides of the equation by $25$ to solve for $x$. \begin{aligned}25x &= 100\\x&= \dfrac{100}{25}\\&= 4\end{aligned} Substitute $x = 4$ into either equation of the system to solve for $y$. In our case, let’s use Equation (1). \begin{aligned}3x-4y&= 12\\3(4) -4y&= 12\\-4y&= 0\\y &=0\end{aligned} Hence, the solution to our system of linear equations is $(4, 0)$. Feel free to substitute these values into either Equation (1) or Equation (2) to double-check the solution. For now, let’s try out a word problem involving systems of linear equations to help you appreciate this topic even more! Example 3 Amy has a favorite pastry shop where she often buys donuts and coffee. On Tuesday, she paid $\$12$ for two boxes of donuts and one cup of coffee. On Thursday, she purchased one box of donuts and two cups of coffee. She paid $\$9$ this time. How much does each box of donuts cost? How about one cup of coffee? First, let’s set up the system of linear equations that represents the situation. • Let $d$ represent the cost of one box of donuts. • Let $c$ represent the cost of one cup of coffee. Each equation’s left-hand side expresses a purchase in terms of $d$ and $c$, and the right-hand side is the total cost. Hence, we have $ \begin{array}{ccc}2d+ c&= \phantom{x}12\,\,(1)\\d+2c&= \phantom{xc}9\,\,(2)\end{array}$. Now that we have a system of linear equations, apply the elimination method to solve for $c$ and $d$.
\begin{aligned} \begin{array}{ccc}2d& + c\phantom{xxx}&= 12\phantom{xx}\\{\color{Green}2}(d)& +{\color{Green}2}(2c)&={\color{Green}2}(9)\,\, \\&\downarrow\phantom{x}\\2d&+ c\,\,&= 12\,\, \\ 2d&+ 4c&= 18\,\,\end{array}\end{aligned} Once we’ve eliminated one of the variables (in our case, it’s $d$), solve the resulting equation to find $c$. \begin{matrix}&\underline{\begin{array}{cccc}\phantom{+xxx}\bcancel{\color{Green}2d} & + c&=\phantom{+}12\\-\phantom{xx}\bcancel{\color{Green}2d} &+ 4c&= \phantom{x}18\end{array}}\\ &\begin{array}{cccc}\phantom{+} &\phantom{xxxx}&-3c&=-6\\&\phantom{xx}&c&= 2\end{array}\end{matrix} Substitute $c = 2$ into either equation of the system to solve for $d$. \begin{aligned}2d + c &= 12\\2d + 2&= 12\\2d&= 10\\d&= 5\end{aligned} This means that one box of donuts costs $\$5$ while a cup of coffee costs $\$2$ at Amy’s favorite pastry shop. Practice Question 1. Which of the following shows the solution to the system of equations $\begin{array}{ccc}3a - 4b&= \phantom{x}18\\3a - 8b&= \phantom{x}26\end{array}$? B. $a=\dfrac{10}{3},b=-2$ C. $a=-2,b=-\dfrac{10}{3}$ D. $a=\dfrac{10}{3},b=2$ 2. Which of the following shows the solution to the system of equations $\begin{array}{ccc}4x + 5y&= \phantom{x}4\\5x- 4y&= -2\end{array}$? A. $\left(-\dfrac{28}{41},-\dfrac{6}{41}\right)$ B. $\left(-\dfrac{6}{41},-\dfrac{28}{41}\right)$ C. $\left(\dfrac{28}{41},\dfrac{6}{41}\right)$ D. $\left(\dfrac{6}{41},\dfrac{28}{41}\right)$ Answer Key 1. B 2. D
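Elimination generalizes to any 2×2 system with a unique solution. As a quick sketch (our own illustration, not part of the original lesson; the function name is hypothetical), the following Python code mirrors the method: scale both equations so the $x$-terms match, subtract to find $y$, then back-substitute.

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.

    Assumes a unique solution exists (a2*b1 - a1*b2 != 0) and a1 != 0.
    """
    # Scale Eq (1) by a2 and Eq (2) by a1 so both contain a1*a2*x,
    # then subtract to eliminate x and solve for y.
    y = Fraction(a2 * c1 - a1 * c2, a2 * b1 - a1 * b2)
    # Back-substitute y into Eq (1) to find x.
    x = (Fraction(c1) - b1 * y) / a1
    return x, y

# Example 3 (donuts and coffee): 2d + c = 12, d + 2c = 9
print(solve_2x2(2, 1, 12, 1, 2, 9))  # (Fraction(5, 1), Fraction(2, 1))
```

Using exact `Fraction` arithmetic avoids the rounding issues that floating-point division would introduce for answers like $\dfrac{17}{13}$.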
What is the difference between a Critical value and a Test statistic, and between a Z-score and a T-statistic? Critical values for a test of hypothesis depend upon the test statistic, which is specific to the type of test, and the significance level, α, which defines the sensitivity of the test. A value of α = 0.05 implies that the null hypothesis is rejected 5% of the time when it is in fact true. The choice of α is somewhat arbitrary, although in practice values of 0.1, 0.05, and 0.01 are common. Critical values are essentially cut-off values that define regions where the test statistic is unlikely to lie; for example, a region where the critical value is exceeded with probability α if the null hypothesis is true. The null hypothesis is rejected if the test statistic lies within this region, which is often referred to as the rejection region. The Z-score allows you to decide whether your sample is different from the population mean. In order to use z, you must know four things: the population mean, the population standard deviation, the sample mean, and the sample size. Usually in statistics you don’t know everything about a population, so instead of a Z-score you use a T-test with a T-statistic. The major difference is that the T-statistic is used when you have to estimate the population standard deviation from the sample. The T-test is also preferred when you have a small sample size (e.g., fewer than 30 observations).
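As a concrete sketch (the numbers below are hypothetical, chosen purely for illustration), the two statistics differ only in which standard deviation goes in the denominator:

```python
import math

def z_statistic(sample_mean, pop_mean, pop_sd, n):
    # Z: the population standard deviation is known.
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

def t_statistic(sample_mean, pop_mean, sample_sd, n):
    # T: the population standard deviation is unknown and is
    # estimated by the sample standard deviation.
    return (sample_mean - pop_mean) / (sample_sd / math.sqrt(n))

print(round(z_statistic(103, 100, 15, 36), 2))  # 1.2
print(round(t_statistic(103, 100, 14, 25), 2))  # 1.07
```

The z value would then be compared against a critical value from the standard normal distribution, while the t value is compared against one from the t-distribution with n − 1 degrees of freedom.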
Cerón-Rojas, J.J. • Chapter 11. RIndSel: selection Indices with R (Springer, 2018) Alvarado Beltrán, G.; Pacheco Gil, Rosa Angela; Pérez-Elizalde, S.; Burgueño, J.; Rodríguez, F.M.; Cerón-Rojas, J.J.; Crossa, J. RIndSel is a graphical user interface that uses selection index theory to select individual candidates as parents for the next selection cycle. The index can be a linear combination of phenotypic values, genomic estimated breeding values, or a linear combination of phenotypic values and marker scores. Based on the restriction imposed on the expected genetic gain per trait, the index can be unrestricted, null restricted, or a predetermined proportional gain index. RIndSel is compatible with any of the following versions of Windows: XP, 7, 8, and 10. Furthermore, it can be installed on 32-bit and 64-bit computers. In the context of fixed and mixed models, RIndSel estimates the phenotypic and genetic covariance using two main experimental designs: the randomized complete block design and the lattice or alpha lattice design. In the following, we explain how RIndSel can be used to determine individual candidates as parents for the next cycle of improvement. • Chapter 9. Multistage linear selection indices (Springer, 2018) Cerón-Rojas, J.J.; Crossa, J. Multistage linear selection indices select individual traits available at different times or stages and are applied mainly in animal and tree breeding, where the traits under consideration become evident at different ages. The main indices are: the unrestricted, the restricted, and the predetermined proportional gain selection index. The restricted and predetermined proportional gain indices allow null and predetermined restrictions to be imposed on the trait expected genetic gain (or multi-trait selection response) values, whereas the rest of the traits remain without any restriction.
The three indices can use phenotypic, genomic, or both sets of information to predict the unobservable net genetic merit values of the candidates for selection; all of them maximize the selection response and the expected genetic gain for each trait, have maximum accuracy, are the best predictors of the net genetic merit, and provide the breeder with an objective rule for evaluating and selecting several traits simultaneously. The theory of the foregoing indices is based on the independent culling method and on the linear phenotypic selection index, and is described in this chapter in the phenotypic and genomic selection context. Their theoretical results are validated in a two-stage breeding selection scheme using real and simulated data. • Chapter 6. Constrained linear genomic selection indices (Springer, 2018) Cerón-Rojas, J.J.; Crossa, J. The constrained linear genomic selection indices are the null restricted and predetermined proportional gain linear genomic selection indices (RLGSI and PPG-LGSI respectively), which are linear combinations of genomic estimated breeding values (GEBVs) used to predict the net genetic merit. They are the result of a direct application of the restricted and the predetermined proportional gain linear phenotypic selection index theory to the genomic selection context. The RLGSI can be extended to a combined RLGSI (CRLGSI) and the PPG-LGSI can be extended to a combined PPG-LGSI (CPPG-LGSI); the latter indices use phenotypic and GEBV information jointly in the prediction of net genetic merit. The main difference between the RLGSI and PPG-LGSI on the one hand and the CRLGSI and CPPG-LGSI on the other is that although the RLGSI and PPG-LGSI are useful in a testing population where there is only marker information, the CRLGSI and CPPG-LGSI can be used only in training populations where there is joint phenotypic and marker information.
The RLGSI and CRLGSI allow restrictions equal to zero to be imposed on the expected genetic advance of some traits, whereas the PPG-LGSI and CPPG-LGSI allow predetermined proportional restriction values to be imposed on the expected trait genetic gains to make some traits change their mean values based on a predetermined level. We describe the foregoing four indices and validate their theoretical results using real and simulated data. • Chapter 7. Linear phenotypic eigen selection index methods (Springer, 2018) Cerón-Rojas, J.J.; Crossa, J. Based on the canonical correlation, on the singular value decomposition (SVD), and on linear phenotypic selection index theory, we describe the eigen selection index method (ESIM), the restricted ESIM (RESIM), and the predetermined proportional gain ESIM (PPG-ESIM), which use only phenotypic information to predict the net genetic merit. The ESIM is an unrestricted linear selection index, but the RESIM and PPG-ESIM are linear selection indices that allow null and predetermined restrictions respectively to be imposed on the expected genetic gains of some traits, whereas the rest remain without any restrictions. The aims of the three indices are to predict the unobservable net genetic merit values of the candidates for selection, maximize the selection response and the accuracy, and provide the breeder with an objective rule for evaluating and selecting several traits simultaneously. Their main characteristics are: they do not require the economic weights to be known; the first multi-trait heritability eigenvector is used as the vector of coefficients; and because of the properties associated with eigen analysis, it is possible to use the theory of similar matrices to change the direction and proportion of the expected genetic gain values without affecting the accuracy. We describe the foregoing three indices and validate their theoretical results using real and simulated data. • Chapter 10.
Stochastic simulation of four linear phenotypic selection indices (Springer, 2018) Crossa, J.; Burgueño, J.; Toledo, F.H.; Cerón-Rojas, J.J. Stochastic simulation can contribute to a better understanding of the problem, and has already been successfully applied to evaluate other breeding scenarios. Despite all the theories developed in this book concerning different types of indices, including phenotypic data and/or data on molecular markers, no examples have been presented showing the long-term behavior of different indices. The objective of this chapter is to present some results and insights into the in silico (computer simulation) performance comparison of over 50 selection cycles of a recurrent and generic population breeding program with different selection indices, restricted and unrestricted. The selection indices included in this stochastic simulation were the linear phenotypic selection index (LPSI), the eigen selection index method (ESIM), the restrictive LPSI, and the restrictive ESIM. • Chapter 5. Linear genomic selection indices (Springer, 2018) Cerón-Rojas, J.J.; Crossa, J. The linear genomic selection index (LGSI) is a linear combination of genomic estimated breeding values (GEBVs) used to predict the individual net genetic merit and select individual candidates from a nonphenotyped testing population as parents of the next selection cycle. In the LGSI, phenotypic and marker data from the training population are fitted into a statistical model to estimate all individual available genome marker effects; these estimates can then be used in subsequent selection cycles to obtain GEBVs that are predictors of breeding values in a testing population for which there is only marker information. The GEBVs are obtained by multiplying the estimated marker effects in the training population by the coded marker values obtained in the testing population in each selection cycle. 
Applying the LGSI in plant or animal breeding requires genotyping the candidates for selection to obtain their GEBVs, and then predicting and ranking the net genetic merit of the candidates using the LGSI. We describe the LGSI and show that it is a direct application of linear phenotypic selection index theory in the genomic selection context; next, we present the combined LGSI (CLGSI), which uses phenotypic and GEBV information jointly to predict the net genetic merit. The CLGSI can be used only in training populations where there is phenotypic and marker information, whereas the LGSI is used in testing populations where there is only marker information. We validate the theoretical results of the LGSI and CLGSI using real and simulated data. • Chapter 8. Linear molecular and genomic eigen selection index methods (Springer, 2018) Cerón-Rojas, J.J.; Crossa, J. The three main linear phenotypic eigen selection index methods are the eigen selection index method (ESIM), the restricted ESIM (RESIM), and the predetermined proportional gain ESIM (PPG-ESIM). The ESIM is an unrestricted index, but the RESIM and PPG-ESIM allow null and predetermined restrictions respectively to be imposed on the expected genetic gains of some traits, whereas the rest remain without any restrictions. These indices are based on the canonical correlation, on the singular value decomposition, and on linear phenotypic selection index theory. We extended the ESIM theory to the marker-assisted and genomic selection context to develop a molecular ESIM (MESIM), a genomic ESIM (GESIM), and a genome-wide ESIM (GW-ESIM). Also, we extend the RESIM and PPG-ESIM theory to the restricted genomic ESIM (RGESIM) and to the predetermined proportional gain genomic ESIM (PPG-GESIM) respectively.
The latter five indices use marker and phenotypic information jointly to predict the net genetic merit of the candidates for selection, but although MESIM uses only statistically significant markers linked to quantitative trait loci, the GW-ESIM uses all genome markers and phenotypic information and the GESIM, RGESIM, and PPG-GESIM use the genomic estimated breeding values and the phenotypic values to predict the net genetic merit. Using real and simulated data, we validated the theoretical results of all five indices. • Chapter 4. Linear marker and genome-wide selection indices (Springer, 2018) Cerón-Rojas, J.J.; Crossa, J. There are two main linear marker selection indices employed in marker-assisted selection (MAS) to predict the net genetic merit and to select individual candidates as parents for the next generation: the linear marker selection index (LMSI) and the genome-wide LMSI (GW-LMSI). Both indices maximize the selection response, the expected genetic gain per trait, and the correlation with the net genetic merit; however, applying the LMSI in plant or animal breeding requires genotyping the candidates for selection; performing a linear regression of phenotypic values on the coded values of the markers such that the selected markers are statistically linked to quantitative trait loci that explain most of the variability in the regression model; constructing the marker score, and combining the marker score with phenotypic information to predict and rank the net genetic merit of the candidates for selection. On the other hand, the GW-LMSI is a single-stage procedure that treats information at each individual marker as a separate trait. Thus, all marker information can be entered together with phenotypic information into the GW-LMSI, which is then used to predict the net genetic merit and select candidates. We describe the LMSI and GW-LMSI theory and show that both indices are direct applications of the linear phenotypic selection index theory to MAS. 
Using real and simulated data we validated the theory of both indices. • Chapter 2. The linear phenotypic selection index theory (Springer, 2018) Cerón-Rojas, J.J.; Crossa, J. The main distinction in the linear phenotypic selection index (LPSI) theory is between the net genetic merit and the LPSI. The net genetic merit is a linear combination of the true unobservable breeding values of the traits weighted by their respective economic values, whereas the LPSI is a linear combination of several observable and optimally weighted phenotypic trait values. It is assumed that the net genetic merit and the LPSI have bivariate normal distribution; thus, the regression of the net genetic merit on the LPSI is linear. The aims of the LPSI theory are to predict the net genetic merit, maximize the selection response and the expected genetic gains per trait (or multi-trait selection response), and provide the breeder with an objective rule for evaluating and selecting parents for the next selection cycle based on several traits. The selection response is the mean of the progeny of the selected parents, whereas the expected genetic gain per trait, or multi-trait selection response, is the population means of each trait under selection of the progeny of the selected parents. The LPSI allows extra merit in one trait to offset slight defects in another; thus, with its use, individuals with very high merit in one trait are saved for breeding even when they are slightly inferior in other traits. This chapter describes the LPSI theory and practice. We illustrate the theoretical results of the LPSI using real and simulated data. We end this chapter with a brief description of the quadratic selection index and its relationship with the LPSI.
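To make the idea of "optimally weighted phenotypic trait values" concrete, the classic Smith–Hazel result gives the LPSI coefficient vector as b = P⁻¹Gw, where P is the phenotypic covariance matrix, G the genetic covariance matrix, and w the vector of economic weights. The sketch below hand-codes this for two traits; the formula is the standard Smith–Hazel result rather than code from the chapter, and the covariance values and function name are made up for illustration.

```python
from fractions import Fraction

def lpsi_weights_2traits(P, G, w):
    """Smith-Hazel LPSI weights b = P^{-1} G w for a two-trait index.

    P: 2x2 phenotypic covariance matrix, G: 2x2 genetic covariance
    matrix, w: economic weights (entries are integers or Fractions).
    Assumes P is invertible.
    """
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    # Inverse of a 2x2 matrix: swap diagonal, negate off-diagonal, divide by det.
    Pinv = [[Fraction(P[1][1], det), Fraction(-P[0][1], det)],
            [Fraction(-P[1][0], det), Fraction(P[0][0], det)]]
    # G w: weight the genetic covariances by the economic weights.
    Gw = [G[0][0] * w[0] + G[0][1] * w[1],
          G[1][0] * w[0] + G[1][1] * w[1]]
    # b = P^{-1} (G w)
    return [Pinv[0][0] * Gw[0] + Pinv[0][1] * Gw[1],
            Pinv[1][0] * Gw[0] + Pinv[1][1] * Gw[1]]

# Hypothetical covariances with equal economic weights:
b = lpsi_weights_2traits([[4, 1], [1, 2]], [[2, 1], [1, 1]], [1, 1])
print(b)  # [Fraction(4, 7), Fraction(5, 7)]
```

The candidates would then be ranked by the index value I = b₁x₁ + b₂x₂ computed from their phenotypic records.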
New Research Compendium Addresses Productivity & Transformation When Applying Technology in Learning Math – Digital Promise Recently, the 60,000-member National Council of Teachers of Mathematics (NCTM) released a new “Research Compendium,” a handbook with 38 chapters, each summarizing the best research on an important aspect of teaching and learning mathematics. For the past three years, I was honored to work on the team that developed the chapter on “Technology for Mathematics Learning.” The aspects addressed in other chapters included summaries of how students learn specific mathematics content and practices, research on teaching, and much more. Writing the technology chapter gave my team the opportunity (and challenge) to comprehensively organize the vast research knowledge about technology in learning mathematics. So what did we discover? First, humility! As my mentor Jim Kaput vividly wrote in a 1992 NCTM research handbook “anyone who presumes to describe the roles of technology in mathematics education faces challenges akin to describing a newly active volcano.” That volcano is still erupting; the geography of digital possibilities for math learning is constantly expanding. With co-authors Richard Noss, Nicholas Jackiw, and Paulo Blikstein, we quickly realized that if we tried to pen a consumer guide to research on each of today’s available mathematics technologies, our findings would be obsolete before the ink dried. Indeed, the chapter was finalized before the results of my own most recent efficacy trial, on the use of ASSISTments for online mathematics homework, could be included. So this is not a listing of product-by-product research, but rather an organization of research about categories of digital mathematics tools. Second, the value of a well-chosen framework. We chose to organize around a framework developed by Paul Drijvers at the Freudenthal Institute in the Netherlands. 
Drijvers identified three main purposes for technology in mathematics learning: 1. Doing Mathematics. Tools for this purpose are used both in and out of school, and include calculators and spreadsheets. Such tools can handle distracting details for a learner and enable a learner to focus cognitive effort on mathematical actions that are closely related to their current learning objective. For example, if you have already mastered the skills of plotting points to draw a graph, it can be beneficial to let a graphing calculator do that. Then the learner can focus on an important mathematical concept, such as how the shape of the graph changes with changing function parameters. 2. Practicing Mathematics. Tools for this purpose organize the sequence of a student’s practice sessions, help students when they struggle, and provide feedback. This purpose can have a positive impact on learning because working many practice problems really is an important aspect of learning mathematics. Yet in school, mindlessly filling out worksheets is too often what actually happens, and mindless mathematical grunt work is not beneficial. Consider an analogy to sports training, where science is making athletes better by shaping more effective practice sessions. As in sports, learning sciences research has uncovered much about the optimal way to organize and support mathematics practice. Today’s technologies can help to optimize the practice sequence, support students when they need help, and give well-timed and targeted feedback. 3. Understanding Mathematics. Tools for this purpose help students make coherent connections among mathematical ideas that otherwise can appear to students as arcane, arbitrary and disconnected. A central learning sciences insight is that using spatial visualizations that vary in time to engage students with mathematical relationships is a powerful aid to forming appropriate mathematical connections.
For example, imagine two situations in which a student is trying to understand what a geometry proof means. In one situation, the student tries to infer the meaning by looking at a printed geometric figure. That’s hard. In the other, the student moves their finger and experiences how a displayed figure changes in time according to the rules of the proof. Research shows that “dynamic representations” (rule-driven visualizations that move) can lead students to “a-ha!” moments. Thus, technology can enable new ways to represent mathematics dynamically, and this can help students make sense of math. In preparing the chapter, we found that for each of these three purposes there is a wide selection of high-quality tools. Further, each purpose can also be organized along relevant learning science principles, empirical research findings, and other insights that can help educators to use the technologies well. Importantly, we were able to find strong research support for all three purposes — it is NOT the case that one purpose is best. Thus, we’d recommend that educators start with a refined sense of their own purpose for bringing technology into their students’ mathematical learning process. Once they select a purpose, we’d recommend careful attention to the learning principles and empirical findings specific to that purpose. In our view, technology is a valuable enabling infrastructure, but it takes an educator’s concerted attention to many factors outside the technology to make a difference for their learners. Interest-Driven Mathematics Emerges. In addition, the research literature pointed us to a fourth, emerging purpose — one with a smaller but growing scientific literature. Increasingly, research is studying what happens as students become engaged in technology-rich activities in robotics, fabrication labs, maker activities, electronic crafting activities, coding activities and more.
In part due to the mathematics intrinsic to the technologies in these activities, there is increasing potential for Interest-Driven Mathematics. Interest-driven mathematics is different from the other purposes, because it emerges in extra-curricular settings, rather than being driven by workplace or school prerogatives. In our chapter, we recognized the need for more research to guide educators who seek to develop students’ mathematics learning through technology-driven extracurricular interests. Complementary Factors: Productivity and Transformation. The chapter enabled us to see an important pair of factors that were essential to the best research-based uses of technology — and to realize that these factors were complementary and not in opposition. One factor is productivity. Technology in mathematics learning either scales or fails depending on whether it makes teachers’ and students’ efforts more productive. The productivity dimension can rise to the surface in a graphing tool that enables students to focus on the shape of a function’s slope instead of the tedious process of plotting points. It can also arise in a practice tool that makes homework sessions more fruitful. Alas, too often this factor can also arise negatively when teachers find that technology runs out of batteries, internet connections fail, or servers are down. When technology wastes precious classroom time, it’s less likely that use will continue. Conceptual understanding tools often attract teachers with their potential outcomes — for example, a shift to deeper learning objectives or a flipped classroom model. And yet an eventual “scale or fail” outcome often depends on how attentive the program is to teachers’ and students’ productivity as they pursue these aspirational outcomes. The other factor is transformation.
Technology in mathematics learning either attracts zeal or suffers declining appeal not only due to productivity, but also according to the degree it enables a shift to a much more desirable mathematical learning experience. The transformational factor is often close to the surface when educators apply technology for the purpose of conceptual understanding, where many initiatives have the purpose of “not only learning mathematics better, but also learning better mathematics.” In one example covered in the chapter, MiGen aims to help students participate in mathematical generalization — a fundamental aspect of professional mathematics that is vanishingly rare in conventional classroom experiences. Transformation is also inherent in the new category of interest-driven and technology-rich mathematics — it proposes the radical idea of locating math in activities that students love, rather than trying to make school mathematics “relevant” to students’ lives after-the-fact. We also found in writing the chapter that the attracting-zeal-or-fading-appeal factor is important to the doing-math and practicing-math purposes as well. With regard to tools for doing math, a division-of-labor view of graphing calculators and spreadsheets is shallow. It’s not JUST that a student can offload calculations to a calculator or spreadsheet and focus their effort on a mathematical strategy. The zeal comes as learners grow into a mathematical tool, and become aware of how its advanced capabilities make it an indispensable partner in their reasoning process — as the relationship of person and tool doing mathematics becomes transformed (a process of co-evolution called “instrumental genesis” in the literature). Tools that build a zealous following don’t just save effort, but also offer a trajectory of growth in mathematical capability distributed across the mind and machine.
Likewise, we noted that a cognitive tutor not only manages students’ mathematical practice, but also creates programmatic opportunities to re-imagine the teacher’s role (for example, as intensively working with individual students) and to allow for more time and focus in non-technology activities, for example, collaborative learning among students. Thus, practice tools that survive for longer times in the marketplace also develop their potential for transformation. Hence our recommendation to educators is to embark on programs of applying technology in learning mathematics with attention to both productivity and transformation, as both the scale-or-fail and the attracting-zeal-or-fading-appeal dimensions were strongly evident in all long-lasting research programs regarding technology in mathematics learning that we were able to identify. To close, I heartily recommend the entire Compendium to all who are committed to improving mathematics teaching and learning (disclosure: my only financial compensation as an author was to get a free copy — which I definitely will be using regularly for all the great information in the other chapters). Although the Compendium is a bit pricey, the one-stop shopping it offers on such a full range of research topics in mathematics education will be invaluable. And realize that 60,000 mathematics teachers have already contributed to the production of this volume through their NCTM membership dues. It’s an opportunity to stand beside these mathematics teachers’ commitment to pulling together high-quality research. Further information on the Compendium is available on the NCTM site.
The Maya civilization, also called a Meso-American civilization, was present in Central America (which extended from Mexico to Cuba). The Maya civilization started around 2000 BC and ended around 1000 AD, but even today millions of people still speak Mayan languages. One team has found Mayan scripts as old as 400 BC, which are considered the oldest Mayan scripts. The Maya contributed a good deal to mathematics, which in turn helped them greatly in astrology and astronomy: calculating the number of days in a year, developing the calendar, predicting solar eclipses, and so on. Mayan Numerals: The Maya used a vigesimal number system, i.e. base 20 (sometimes they used base 5). Base 20 was probably used because they counted on the fingers of their hands and their toes. The numerals consist of only 3 symbols: a shell (representing 0), a dash, and a dot. Thus, like the Hindu numerals, the Mayan numeral system is considered very efficient, as any number can be written while remembering very few symbols, unlike the Roman system. The Mayan concept of 0 is thought to have originated around 36 BC. Their system was positional in nature but followed pure base 20 only up to the tens place. Example: [9; 8; 9; 13; 2] = 2 + 13 × 20 + 9 × 18 × 20 + 8 × 18 × 20^2 + 9 × 18 × 20^3 = 1357102. [8; 14; 3; 1; 12] = 12 + 1 × 20 + 3 × 18 × 20 + 14 × 18 × 20^2 + 8 × 18 × 20^3 = 1253912. Thus, as observed, up to the tens place the base-20 system works perfectly, but from the third place onward each place value carries a factor of 18 × 20 rather than 20^2. It is assumed this is because the Maya initially considered 1 year = 360 days, so they used 18 × 20. Although they had no concept of fractions and their division process was not up to the mark, they could still obtain very accurate astronomical readings. They could accurately understand the cycles of celestial bodies like the sun and moon. Without using any astronomical instruments, just by using sticks, they were able to calculate almost exact astronomical figures.
They could measure the length of the solar …
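The modified base-20 place values described above (1, 20, 18×20, 18×20², …) can be sketched in a few lines of Python; the function name is ours, purely for illustration:

```python
def mayan_to_decimal(digits):
    """Convert a list of Mayan digits (most significant first) to decimal.

    Place values are 1, 20, 18*20, 18*20**2, ...: only the step from the
    second to the third place uses 18, reflecting the 360-day year.
    """
    value, place = 0, 1
    for i, d in enumerate(reversed(digits)):
        value += d * place
        place *= 18 if i == 1 else 20
    return value

print(mayan_to_decimal([8, 14, 3, 1, 12]))  # matches the worked example: 1253912
```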
{"url":"http://mathlearners.com/tag/mayans/","timestamp":"2024-11-02T17:59:45Z","content_type":"text/html","content_length":"44685","record_id":"<urn:uuid:954615bd-afbc-442a-b7b9-9e0fa94361bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00344.warc.gz"}
(398al) Modeling Dust Explosions AIChE Annual Meeting Tuesday, November 5, 2013 - 6:00pm to 8:00pm Dust explosions are a major safety topic on every plant that handles flammable granular or pulverized substances. Commonly, dusts are characterized by two scalars, the maximum explosion pressure and the maximum rate of pressure rise. With that method it is not possible to know how the flame front and the pressure behave in a given geometry. When a description of the behavior of a dust explosion in the geometry is needed, a CFD (Computational Fluid Dynamics) model that can describe the physics of a dust explosion has to be used. For the modeling of gas explosions, two types of combustion models are common: on the one hand, chemistry- and mixing-limited reaction models such as the eddy dissipation concept; on the other hand, flame speed approaches. To model a dust explosion with a chemistry- and mixing-limited reaction model, the pyrolysis of the dust has to be characterized. The calculation of the pyrolysis in the dust particles needs kinetics and enormous computing resources. If a flame speed approach is used, the dust can be characterized by the laminar flame speed and the burnable fuel fraction as functions of the dust concentration, together with the heat of combustion. The flame speed approach is also relatively cheap in terms of computing resources. Therefore a CFD code to model dust explosions, based on a flame speed approach, was developed. The solver is based on the XiFOAM solver of the software package OpenFOAM. Description of the model Basic concept The key parameter is the turbulent flame speed, which is highly dependent on the turbulence and also depends on the laminar flame speed. The laminar flame speed is a function of the dust concentration. These values are based on measurements with a Siwek 20-liter vessel. With the pressure-time curve from the experiment, the laminar flame speed can be calculated, as shown by Skjold [1].
It is also possible to calculate the burnable fuel fraction by energy balancing. Figure 1 shows the laminar flame speed and burnable fuel fraction of lycopodium as functions of the concentration. The influence of the turbulence on the flame speed follows the description of Dahoe [2]. Figure 1: laminar flame speed and burnable fuel fraction of lycopodium as function of the concentration Particle and gas motion To describe the movement of the dust particles, an Eulerian approach was chosen because of the high particle count; in bigger geometries a Lagrangian approach is too expensive in terms of computing resources. The exchange coefficient for momentum K is based on the work of Syamlal [3], and the drag coefficient it requires is described by the Schiller-Naumann approach [4]. The granular pressure ps and the granular temperature are computed by an analytical expression published by Syamlal [5]. Equation 1 describes the momentum equation of the gaseous phase and equation 2 the momentum equation of the particle phase. Equations 3 and 4 show the continuity equations; the source term on the right side describes the mass transfer between the phases through the combustion process. A comparison between the Eulerian model used here and the Fluent DPM model is shown in figure 3. In a simple 2D geometry, shown in figure 2, particles were injected. The curves in figure 3 show the dust concentration on the Y axis after different times. Figure 2: Geometry for comparison between the used Eulerian and the Fluent DPM model Figure 3: Comparison between the used Eulerian and the Fluent DPM model Turbulence modeling Simulations had shown that the RANS models implemented in OpenFOAM fail here. The reason is the high relative velocity in front of the flame front: the high relative velocity generates turbulent kinetic energy behind the particles. To model this effect, an additional source term, based on the consistent approach of Crowe [6], was implemented in the k equation of the standard k-epsilon model. Equation 5 shows the modified k equation.
Combustion model Like in every flame speed approach, the state of combustion is defined by a progress variable, in this case called C. If C has a value of 1 the dust-air mixture is unburned; for a burned mixture C has a value of 0. The source term of the conservation equation for C, shown in equation 6, includes the laminar flame speed, the variable Bsh (which describes the ratio between the laminar and the turbulent flame speed) and the magnitude of the gradient of C. That source term is also used in other equations to describe changes caused by the combustion process. Energy equations To be able to consider the high relative velocities in front of the flame front, a separate energy equation was implemented for each phase. The energy equation for the gaseous phase, shown in equation 7, has two source terms. The first one describes the energy of combustion, based on the heat of combustion, the dust concentration, the burnable fuel fraction and the combustion source term. The second term describes the energy exchange with the particle phase, based on the heat exchange coefficient of Ranz-Marshall [4]. The energy equation for the particle phase, shown in equation 8, has just a storage function; its source term describes only the energy transfer to the gaseous phase. Species equation The density of the gas is a very important parameter for the expansion. It is based on pressure, temperature and the molar mass. To describe the molar mass, the variable b was introduced; it describes the off-gas/air mixture. The conservation equation for b is shown in equation 9. Its source term is based on a coefficient, the dust concentration, the burnable fuel fraction and the combustion source term. Results and comparison to the experiment To evaluate the model, in-house experiments from Kern [7] were used. Figure 4 shows the experimental set-up.
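As a rough illustration of the progress-variable source term described above (laminar flame speed times the laminar-to-turbulent ratio Bsh times the gradient magnitude of C), here is a minimal 1-D Python sketch; the function name and the uniform-grid assumption are ours, not taken from the paper:

```python
import numpy as np

def combustion_source(c, u_lam, b_sh, dx):
    """Source-term magnitude for the progress variable C on a uniform 1-D grid:
    S = u_lam * Bsh * |dC/dx| (illustrative sketch, not the full solver term)."""
    grad_c = np.gradient(c, dx)           # finite-difference gradient of C
    return u_lam * b_sh * np.abs(grad_c)  # nonzero only across the flame front
```

In a fully burned or fully unburned region C is constant, so the gradient and hence the source term vanish, which is exactly the intended behavior of a flame-front-localized source.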
A screw conveyor delivers a defined amount of dust, in this case lycopodium, into a vertical duct. When the dust reaches a constant, defined concentration, the mixture is ignited by an electric spark. Figure 4: Experimental set-up Figure 5 shows the comparison of the flame front between experiment and model for a concentration of 300 g/m³ lycopodium. The model fits the experiment well in the first 200 ms; after that the results diverge. The reason is that the momentum out of the duct influences the turbulence after 200 ms. In the experiment, a flame arrester at the top of the duct and an off-gas system have to be used; these parts are not implemented in the model. In the future, experiments with the duct at another location are planned, where no flame arrester or off-gas system is needed. Figure 5: comparison between experiment and model
List of abbreviations
a[Eff]: effective thermal diffusivity
t: time
b: progress variable, off-gas/air mixture
T[g]: temperature, gaseous phase
B[aus]: burnable fuel fraction
T[p]: temperature, particle phase
Bsh: ratio between laminar and turbulent flame speed
(velocity vector, gaseous phase)
C: progress variable, flame front
U[lam]: laminar flame speed
D: dust concentration
(velocity vector, particle phase)
eps: dissipation of the turbulent kinetic energy
a[g,p]: heat exchange coefficient
(gravity vector)
(volume fraction, gaseous phase)
H[Vbr]: heat of combustion
(stress tensor, gaseous phase)
K: momentum exchange coefficient
(stress tensor, particle phase)
p: pressure
ρ: density, gaseous phase
ps: granular pressure
ρ[unburned]: density, gaseous phase, unburned
Q[Vbr]: chemical coefficient
[1] T. Skjold, B.J. Arntzen, O.R. Hansen, O.J. Taraldset, I.E. Storvik, R.K. Eckhoff, "Simulating Dust Explosions with the First Version of DESC", Process Safety and Environmental Protection, Volume 83, Issue 2, March 2005, 151-160
[2] A.E. Dahoe, 2000, "Dust explosions: a study of flame propagation", PhD thesis, Delft University of Technology, Delft, The Netherlands
[3] M.
Syamlal, J. O'Brien, (1989), "Computer Simulation of Bubbles in a Fluidized Bed", AIChE Symp. Series 85, 22-31
[4] FLUENT (2010), "Ansys Fluent Theory Guide", Fluent Inc.
[5] M. Syamlal, J. O'Brien, W. Rogers, (1993), "MFIX Documentation", National Technical Information Service, Springfield, US.
[6] C. T. Crowe, "On models for turbulence modulation in fluid-particle flows", International Journal of Multiphase Flow 26 (2000) 719-727
[7] H. Kern, K. Held, H. Raupenstrauch, "Investigations on the influence of the oxygen concentration on the flame propagation in lycopodium/air mixtures", ACHEMA 2012, Frankfurt, June 21st 2012
{"url":"https://www.aiche.org/conferences/aiche-annual-meeting/2013/proceeding/paper/398al-modeling-dust-explosions-1","timestamp":"2024-11-05T09:23:54Z","content_type":"text/html","content_length":"109625","record_id":"<urn:uuid:3c1da593-7500-4a70-a422-ea0459dece35>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00769.warc.gz"}
Perfectoid Spaces This book contains selected chapters on perfectoid spaces, their introduction and applications, as invented by Peter Scholze in his Fields Medal-winning work. These contributions were presented at the conference "Perfectoid Spaces" held at the International Centre for Theoretical Sciences, Bengaluru, India, from 9–20 September 2019. The objective of the book is to give an advanced introduction to Scholze's theory and to explain the relation between perfectoid spaces and some aspects of the arithmetic of modular (or, more generally, automorphic) forms, such as representations mod p, lifting of modular forms, completed cohomology, the local Langlands program, and special values of L-functions. All chapters are contributed by experts in the area of arithmetic geometry, which will facilitate future research in this direction. Publication series: Infosys Science Foundation Series, Springer. ISSN (Print) 2363-6149; ISSN (Electronic) 2363-6157. Keywords: • Perfectoid spaces • Arithmetic geometry • Representation theory • Algebraic geometry • Modular forms • p-adic Hodge theory
{"url":"https://cris.huji.ac.il/en/publications/perfectoid-spaces","timestamp":"2024-11-03T18:50:22Z","content_type":"text/html","content_length":"45763","record_id":"<urn:uuid:824e0fe7-c6b5-4aa8-889f-9d578ce04451>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00758.warc.gz"}
Automation Single select - Change cell value Hi, I have a single select with Group A, Group B and Group C. The goal is that when the person filling out the form picks one of these groups, the price per unit fills out on the sheet. In an automation, can I set it up so that when a new row is added, the cell value in the price per unit column fills out with the correct price, depending on which single select group was picked? I can't find a way for this to happen. Is it even possible? Thank you Best Answer • You can do this more easily with a formula than with automation. In your Price Per Unit column (change column names and values to fit your sheet): =IF([Group]@row = "Group A", 1.50, IF([Group]@row = "Group B", 2.25, IF([Group]@row = "Group C", 3.75, ""))) This formula uses Nested IFs. The basic IF formula syntax is: IF(logical expression, value if true, value if false). See IF Function | Smartsheet Learning Center. We can nest these, so that the value if false is another IF. The formula above says in English: if the Group column is Group A, put a 1.50 value in this column; if it's not, then consider if the Group is Group B, and if it is, set the value to 2.25; if it's not, consider if it's Group C, and if so, set the value to 3.75; if it's any other value, leave this column blank (empty quotes, ""). Jeff Reisman Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks! • Thank you, works perfectly. • Hi, I'm using =IF([Group]@row ="Unit 1", 200, IF([Group]@row = "Unit 2", 300, IF([Group]@row = "Blank", "", ""))) with Group being the single select field and putting the price in the PPU field. I want a single select option "Blank" that will clear the formula so a custom price can be added. Is that possible? • You can't manually place values in a field that ordinarily contains a formula without overwriting that formula. The way to do this would be to have a separate column for adding a custom price. Then, if the Group selection is "Blank", we reference the value in the custom price field. So let's call that column "Custom Price": =IF([Group]@row ="Unit 1", 200, IF([Group]@row = "Unit 2", 300, IF([Group]@row = "Blank", [Custom Price]@row, ""))) Jeff Reisman Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks! • @Jeff Reisman I came up with =IF([Group]@row = "Unit 1", 200, IF([Group]@row = "Unit 2", 300, IF([Group]@row, = "Custom", [Custom Price]@row, ""))) It says Incorrect Argument set.
• I then tried =IF([Group]@row ="Unit 1", 200, IF([Group]@row = "Unit 2", 300, IF([Group]@row = "Custom", [Custom Price]@row, "600"))) This one doesn't send it to another column; it works, but keeps the price in the same column as the formula. I also tried something that puts all the prices in another column beside the one the formula is in: =IF([Group]@row ="Unit 1", [Custom Price]@row, "200", IF([Group]@row = "Unit 2", [Custom Price]@row, "300" IF([Group]@row = "Custom", [Custom Price]@row, ""))) Thank you so much for the help • This is a good approach, but having your price per unit values in quotes will make them text values, and it's hard to calculate a cost from a text value. Leave the quotes off if you want to perform calculations with the 200 and 300 values. Jeff Reisman Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
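For readers outside Smartsheet, the accepted nested-IF pattern maps onto a simple lookup. This Python sketch (the names are ours, not part of Smartsheet) shows the same logic, including the custom-price fallback and numeric rather than quoted text values:

```python
def price_per_unit(group, custom_price=None):
    """Same logic as the nested IFs: fixed prices for known groups,
    the custom price when 'Custom' is selected, and blank otherwise."""
    prices = {"Unit 1": 200, "Unit 2": 300}  # numeric, not quoted text values
    if group in prices:
        return prices[group]
    if group == "Custom":
        return custom_price
    return None

print(price_per_unit("Unit 2"))       # prints 300
print(price_per_unit("Custom", 450))  # prints 450
```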
{"url":"https://community.smartsheet.com/discussion/88464/automation-single-select-change-cell-value","timestamp":"2024-11-14T04:25:46Z","content_type":"text/html","content_length":"434323","record_id":"<urn:uuid:0c474cab-141d-4d64-a885-0b0979a01c1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00796.warc.gz"}
Stability based ammonium control in waste water treatment Ammonium control In an activated sludge process, oxygen is supplied to allow microorganisms to reduce nitrogen and organic carbon in the water. The oxygen concentration affects the respiration rate of the microorganisms [6,7] and thereby affects the rate by which they contribute to the removal of unwanted material from the water. Ammonium control is a popular strategy for activated sludge processes designed for nitrification. The input to this process is the aeration, where the air flow rate is controlled by a valve. The valve position is controlled by inner dissolved oxygen (DO) controllers, thereby ensuring that the DO concentration is maintained at the right level. Too low a level leads to inefficient ammonium control, while too high a level is costly since the aeration consumes much energy. The output of the process is the ammonium concentration, which is delayed due to the settler dynamics. The process is depicted in Fig. 1, which also appears in [2]. Figure 1. Block diagram of a three tank ammonium control system in a waste water treatment plant. In the present work, the inner DO control loop is treated as a static nonlinearity using a saturation. The decreasing effect of an increasing DO concentration is also incorporated in the static nonlinearity. The remaining biochemical dynamics is treated as a linear dynamic process. Together with the delayed output signal, the resulting process model has a Hammerstein structure, as noted in [5]. The Hammerstein model is much simpler than e.g. the activated sludge model no. 1 (ASM1) developed by the International Water Association, and the accuracy is therefore not as high. However, the simple structure of the Hammerstein model simplifies controller design, and this is the topic here. More precisely, the intention is to design controllers that are guaranteed to be globally stable (for the Hammerstein model), despite the saturation and the delayed output measurement.
The controllers so obtained are then validated using simulation on the ASM1 model. Hammerstein model based identification of the ammonium dynamics As a basis for the controller design, a dynamic Hammerstein model is needed. The paper [4] and the report [2] treat two methods to identify such models from data. The report [2] applied and compared black-box Hammerstein methods in a single tank setup. Discrete time transfer function operators were combined with piecewise linear parameterizations. The compared methods were shown to provide fairly accurate input-output descriptions of simulated data, which was generated by the benchmark simulation model no. 1 (BSM1). The methods of [2] do, however, not utilize the available prior information about the static nonlinearity. Therefore, in the paper [4], a grey-box Hammerstein model was introduced where the static nonlinearity is modeled as a Monod function. This reduces the number of free parameters and forces the model to behave more like the waste-water plant. Again, a single tank setup was used in [4]. As in [2], the grey-box Hammerstein model was shown to provide reasonably accurate models of the plant, simulated with the BSM1. Finally, to provide data for the controller design, the Monod based Hammerstein model was extended to a three tank version in [3]. The result for the validation data set is shown in Fig. 2. Figure 2. The performance of the Monod based Hammerstein model when identified from data generated by a three tank BSM1 run. L_2 stable ammonium control The main idea of the controller design is to use the Hammerstein model to pre-compute the stability region of the control loop, in terms of selected controller parameters and the delay of the system. Given this information, a robustly stable (with respect to the loop delay) controller is selected. The Hammerstein based model of the ammonium control system is depicted in Fig. 3. Since there is a delay in the loop, stability needs to be treated by input-output methods.
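The Monod-type static nonlinearity used in the grey-box Hammerstein model above can be sketched as follows; the parameter names are illustrative, not taken from the cited papers:

```python
def monod(u, gain_max, half_sat):
    """Monod-type static nonlinearity: increases with the input u and
    saturates toward gain_max; half_sat is the half-saturation constant."""
    return gain_max * u / (half_sat + u)
```

At u = half_sat the output is exactly gain_max / 2, and for large u it approaches gain_max, which is what makes the block behave like a saturation.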
Due to the saturation type static nonlinearity, the relevant result to assess the stability of the ammonium control system is the infinite-dimensional version of the Popov criterion. This leads to a conclusion on L_2 stability of the control loop, cf. [3]. Figure 3. Block diagram of the Hammerstein model based ammonium control loop. Note that the delay, which is a part of G_p,2, has been moved to the feedback path to be consistent with the framework of T. Wigren, "Low-frequency limitations in saturated and delayed networked control", Proc. IEEE CCA 2015, Sydney, Australia, pp. 564-571, Sep. 21-23, 2015. DOI: 10.1109/CCA.2015.7320690. That framework is used in the L_2 stability analysis of [2]. The pre-computation of the stability region applies gridding of a selected number of controller parameters and the loop delay. Then, with a process model and a selected controller, the linear loop gain follows for each grid point. The Popov inequality, which depends on the maximum slope of the static nonlinearity, can then be evaluated for each grid point. This leads to a conclusion on L_2 stability of the control loop, for each grid point, cf. [3]. These evaluations allow the maximum allowed delay that results in L_2 stability of the loop to be computed for each value of the controller parameters. The resulting surface provides the description of the stability region. An example appears in Fig. 4, where the maximum delay is plotted as a function of the proportional and integrator gains of the leaky PI controller. Figure 4. The subset of the stability region predicted by the Popov criterion. As can be expected, Fig. 4 reveals that the stability margin is reduced when the amount of integration gain is increased. For fixed integration gain, there is a proportional gain that optimizes the stability margin. This is easy to understand, since a very high loop gain together with a delay is known to be harmful for stability.
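The pre-computation described above (grid the controller gains and the loop delay, test the Popov inequality at each grid point, and record the largest stable delay) can be sketched generically in Python. The stability test itself is passed in as a function, since the actual Popov evaluation depends on the process model; everything else here is plain gridding:

```python
import numpy as np

def stability_surface(kp_grid, ki_grid, delays, is_stable):
    """For each (Kp, Ki) pair, return the largest delay in `delays` for
    which is_stable(kp, ki, delay) holds (0 if none). `is_stable` stands
    in for the Popov-criterion check described in the text."""
    surface = np.zeros((len(kp_grid), len(ki_grid)))
    for i, kp in enumerate(kp_grid):
        for j, ki in enumerate(ki_grid):
            stable = [d for d in delays if is_stable(kp, ki, d)]
            surface[i, j] = max(stable) if stable else 0.0
    return surface
```

Plotting the returned surface against (Kp, Ki) gives a picture of the kind shown in Fig. 4, from which a robustly stable controller can be read off.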
At the same time, a too low proportional gain leads to a dominating contribution from the integral part of the controller, which is also well known to be negative for stability. The controller is then easily selected from the graphical representation of the stability region. To evaluate the stability and performance of the controller for the plant, the controller was simulated using BSM1. The result in Fig. 5 indicates that the stability analysis is indeed relevant for the more complicated plant. Figure 5. Step response for the Hammerstein model based control loop (green) and the BSM1 control loop (red). The paper [1] augments the L_2-stable controller design approach with feedforward. This feedforward from the measurable inflowing waste water was found to significantly improve the system performance, cf. Fig. 6. Figure 6. L_2-stable ammonium control with and without feedforward from the inflow.
1. T. Chistiakova, T. Wigren and B. Carlsson, "Combined L_2-stable feedback and feedforward aeration control in a wastewater treatment plant", IEEE Trans. Contr. Sys. Tech., vol. 28, no. 3, pp. 1017-1024, 2020. DOI: 10.1109/TCST.2019.2891410. Available: https://ieeexplore.ieee.org/document/8618330
2. T. Chistiakova, P. Mattsson, B. Carlsson and T. Wigren, "Nonlinear system identification of the dissolved oxygen to effluent ammonium dynamics in an activated sludge process", Technical Reports from the Department of Information Technology, 2018-011, Uppsala University, Uppsala, Sweden, August 2018.
3. T. Chistiakova, T. Wigren and B. Carlsson, "Input-Output Stability Based Controller Design for a Nonlinear Wastewater Treatment Process", Proc. ACC 2018, Milwaukee, USA, pp. 2964-2971, June 27-29, 2018.
4. T. Chistiakova, B. Carlsson and T. Wigren, "Non-linear modelling of the dissolved oxygen to ammonium dynamics in a nitrifying activated sludge process", Proc. ICA 2017, Quebec City, Quebec, Canada, pp. 85-93, June 11-14, 2017.
5. T. Chistiakova, B. Carlsson and T.
Wigren, "Nonlinear stability analysis of an ammonium feedback control system", Abstract (unpublished), Reglermöte 2016, Gothenburgh, Sweden, June 8-9, 2016. 6. B. Carlsson and T. Wigren, "On-line identification of the dissolved oxygen dynamics in an activated sludge process", Preprints 12th World Congress of IFAC, Sydney, Australia, vol. 7, pp. 421-426, 7. B. Carlsson and T. Wigren, "On-line identification of the dissolved oxygen dynamics in an activated sludge process", UPTEC 92111R, Department of Technology, Uppsala University, Uppsala, Sweden, September, 1992.
{"url":"https://www2.it.uu.se/katalog/tw/research/WasteWater","timestamp":"2024-11-03T22:10:39Z","content_type":"text/html","content_length":"26713","record_id":"<urn:uuid:c124e5fe-c103-433d-a8d5-e5614e0a5a22>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00594.warc.gz"}
What are hyperparameters in neural networks and how are they determined? Hyperparameters in neural networks are configuration settings that are not learned during the training process but are set before training starts. They control the behavior of the neural network and affect how it learns and performs. Common hyperparameters include the number of hidden layers, the number of neurons in each layer, the learning rate, the batch size, the activation functions, the dropout rate, etc. Determining the hyperparameters is typically done through trial and error or systematic hyperparameter tuning. There are several techniques for hyperparameter tuning, such as: 1. Grid Search: defines a grid of possible hyperparameter combinations and evaluates the performance of the model for each combination. 2. Random Search: randomly samples hyperparameter combinations and evaluates the model for each. This helps in exploring a wider range of hyperparameters at lower cost. 3. Bayesian Optimization: uses a probabilistic model to estimate the performance of different hyperparameter configurations, selects the most promising hyperparameters to try next, and updates the probabilistic model with each result. 4. Evolutionary Algorithms: inspired by natural evolution, they generate a population of hyperparameter combinations and iteratively evolve the population by selecting, recombining, and mutating hyperparameter sets based on their performance. These techniques help in finding optimal or near-optimal hyperparameter values for a neural network model, which can greatly impact its performance and generalization ability.
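Random search, the second technique above, is simple enough to sketch in a few lines of Python; the function names and the toy search space here are illustrative:

```python
import random

def random_search(train_and_score, space, n_trials=20, seed=0):
    """Sample hyperparameter combinations from `space` (a dict of
    name -> list of candidate values) and keep the best-scoring one.
    `train_and_score` is any user-supplied function returning a score
    to maximize (e.g. validation accuracy)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy example: the "score" here just favors more layers and a low learning rate.
space = {"learning_rate": [0.1, 0.01, 0.001], "hidden_layers": [1, 2, 3]}
params, score = random_search(lambda p: p["hidden_layers"] - p["learning_rate"], space)
```

In practice `train_and_score` would train the network with the sampled hyperparameters and return a validation metric.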
{"url":"https://devhubby.com/thread/what-are-hyperparameters-in-neural-networks-and-how","timestamp":"2024-11-09T03:18:37Z","content_type":"text/html","content_length":"98205","record_id":"<urn:uuid:08d4ce61-5b4f-4086-a387-1e3d51a1c47d>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00441.warc.gz"}
Discrete Mathematics Homework Help | Software Engineering Mathematics Introduction And Propositions In this tutorial, we will study the heart of discrete mathematics: • propositional logic: making statements • set theory: describing collections of objects • predicate logic: making statements about objects • relations, functions, sequences: describing relationships between objects • recursion and induction: reasoning about repeated application and recursive definitions If you need any help related to software engineering mathematics (discrete mathematics), then contact us or send your requirement details so we can help you. • Discrete mathematics • The Z notation • Propositions • Tautologies • Equivalences The syntax and semantics that we choose for discrete mathematics are those of the Z notation: • The logic is typed: every identifier in our mathematical document is associated with a unique basic set • Functions are partial by default: the result of applying a function to a particular object may be undefined • The various sub-languages are precisely defined: a Z document is easily parsed and type-checked A proposition is a statement that must be either true or false. Note that we deal with a two-valued logic (cf. SQL, whose logic is three-valued). Propositions may be combined using logical connectives; the meaning of a combination is determined by the meanings of the propositions involved. Examples: • 2 is even • 2 + 2 = 5 • tomorrow = tuesday • she is rich • he is tall • 2 / 0 = 0 • ¬ (2 is an even number) • she is rich ∧ he is tall • the map is wrong ∨ you are a poor navigator • (2 + 2 = 5) ⇒ (unemployment < 2 million) • (tomorrow = tuesday) ⇔ (today = monday) We use truth tables to give a precise meaning to each logical connective. Contact us and send your requirement details at
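The truth-table idea mentioned above can be sketched in Python; the function and variable names are our own, not part of the Z notation:

```python
from itertools import product

def truth_table(connective):
    """Return the truth table of a binary logical connective as a list of
    (p, q, result) rows, e.g. implication: lambda p, q: (not p) or q."""
    return [(p, q, connective(p, q)) for p, q in product([True, False], repeat=2)]

# Conjunction: true only when both propositions are true.
for row in truth_table(lambda p, q: p and q):
    print(row)
```

The same helper works for any of the connectives listed above, including ⇒ and ⇔ expressed as Python lambdas.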
{"url":"https://www.realcode4you.com/post/discrete-mathematics-homework-help-software-engineering-mathematics","timestamp":"2024-11-06T06:30:00Z","content_type":"text/html","content_length":"1050483","record_id":"<urn:uuid:68d572dc-2e2c-4c50-9afa-340cb3d18b66>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00828.warc.gz"}
Top 10 Hardest A-Level Maths Questions in 2024 - Think Student We've looked at the hardest GCSE Maths questions, so it only makes sense to look at the next level up and assess the hardest A-Level Maths questions. A-Level Maths is one of the most popular subject choices for students, as suitable maths qualifications are useful for many future degrees and careers. However, it is also notorious for being one of the most difficult A-Level subjects available. There are plenty of past paper questions that students have struggled with, and some of the hardest have been collated here. Whether you want some challenging practice questions to aim for the highest grades, or are simply curious about the difficulty of the current A-Level Maths, this article contains ten of the hardest past exam questions we could find. Disclaimer: Of course, everyone's mathematical ability is different. Some people will find a particular topic really easy, even if others struggle with it. Therefore, not every single question on this list will be hard for everyone. Instead, this list uses information from examiner reports to find out which questions the cohort as a whole found really difficult to score highly on. If you have come across a particular A-Level Maths question you found really difficult that you think should be included on this list, feel free to vote for it in the poll below! The three most common exam boards are AQA, Pearson Edexcel and OCR. All the questions included here are from one of these exam boards, from the most recent specification (which first came into use in 2017). Keep reading for the most difficult A-Level Maths questions we found! 1. OCR June 2018 – Paper 2, Question 13 This question, found on OCR's Pure Mathematics and Statistics paper in 2018, proved really challenging for students. Click here for the full paper that contained this question, but a screenshot is included below.
The main reason this was seen to be a difficult question was the combination of topics involved. Although the paper as a whole contained both pure and statistical work, they were both included in the same question here. Many people found it difficult to combine different topics involved – probability, as well as binomial expansion. According to the examiners’ report, which can be found here, very few students were able to link the different parts of the question together. In fact, for the last part of the question, it states that ‘Almost no candidates gave a correct solution based on their answer to part (iii)(a)’. This question was so contentious that there is a long forum thread discussing it, which can be found here on The Student Room. Alternatively, check out this link for the mark scheme to the paper. This has the correct answers, as well as a step-by-step demonstration of different methods that can be used to complete this question. 2. Edexcel June 2019 – Paper 1, Question 10 Although there were not a huge number of marks available for this next question, the students taking the exam found it difficult to score highly. Have a look at the question, shown below, or in the exam paper here. According to the examiners’ report, the mean mark for this question was just 1.7 out of 6. For this statistic, as well as other official comments on the question, you can find the examiners’ report on this link. The main difficulty with this problem was that many students were unfamiliar with methods of proof, which is essential for the first part of the question. The official report says that a surprising number of candidates left this question blank or found it difficult to make a significant attempt at it. Potentially, A-Level Maths students think that proof is a smaller topic, and they would rather focus on more common areas of the specification, such as calculus. 
This would suggest that it is not the question itself that is hard, but that students put less focus on this topic area. Click here for the mark scheme to this particular question. Alternatively, for some useful revision resources on proof, have a look at this link from Physics and Maths Tutor. It is designed specifically for Edexcel – the exam board this question was taken from. 3. Edexcel June 2019 – Paper 2, Question 8 This next question, from this Edexcel exam paper, focuses on the A-Level Maths topic of sequences and series. However, it also incorporates aspects of other topics, such as logarithms. While both parts of this question may seem daunting at first glance, they are actually not worth very many marks. This suggests there is not as much working out needed as you might think. In fact, the examiners’ report seems to find that many students were able to attempt this question, or start promisingly. However, it was more difficult to arrive at the final answer, with various common mistakes made. Nevertheless, this emphasises the importance of having a go at a question, even if you aren’t sure how to finish it. There are almost always marks available for the first steps to the right answer. For example, in this question, the report says more successful candidates began by writing out the terms of the series. For more information, this is all explained further in the examiners’ report here. You can also click on this link for the mark scheme, which shows various ways to solve this question. 4. Edexcel June 2019 – Paper 2, Question 10 Have a look at this next question, which is centred on the topic of vectors, found in this Edexcel paper from 2019. A range of marks were achieved for this question. The examiners’ report tells us that this question was able to distinguish between students of different abilities, because it began with more straightforward maths, and gradually got more difficult. Overall, very few students managed to get full marks. 
This report can be found here. It also contains explanations of common mistakes made when candidates attempted this question. It is helpful to look at this alongside the mark scheme, linked here. You can see the mistakes to avoid, as well as the correct answer and methods to reach it. Vectors is actually a topic at A-Level that uses many skills from GCSE. However, many students still struggled with this question. Have a look at this Think Student article for plenty more information on how hard A-Level Maths is in comparison to the GCSE. 5. AQA June 2019 – Paper 2, Question 6 Just 12% of students were able to get all the marks available for this next question, according to the examiners’ report, which can be found here. The main problem students had with this question was getting started. The trick is to notice that the equation of the curve can be rewritten in the form R cos ( x ± α ) or R sin ( x ± α ). If you want to have a go at this particular question, knowing this first step, it can be found in this AQA exam paper. The mark scheme to go along with it is available at this link. This conversion is usually taught as part of the A-Level specification. Often, it can be difficult to take a step back from a question and make connections to things you have learned in class. This is particularly hard in the heat of an exam situation. Have a look at this article from Think Student for a range of tips on time management in exams. This can help to make sure you are focussed on the question itself and what you can do to solve it, rather than the stress of the exam setting. 6. AQA June 2019 – Paper 3, Question 15 The next question, from this AQA paper, is part of the statistics section of the A-Level Maths course. It mainly requires skills and knowledge about hypothesis testing, but also needs students to have an awareness of correlation. The examiners’ report suggests that it was not the content of this question that students struggled with. 
Instead, they found it hard to know exactly what the question was asking, and therefore lost marks or had working out that gained no credit. Check out the mark scheme here for a guide to what, exactly, the marks were awarded for. For example, although they could compare the necessary values, very few students gained the final mark for the actual inference statement. You can read more about this in the report here. This is more of an issue with exam technique, and how familiar you are with question styles, than a difficulty with the maths skills involved to answer this question and get full marks. Therefore, it emphasises how important it is to practise exam questions, under exam conditions if you can. This makes sure that when the real exams come, you are not held back because you do not know what to expect from the questions’ format or wording. 7. AQA November 2021 – Paper 1, Question 11 According to the examiners’ report for this paper, less than a third of students were able to make any progress on this next question, from this 2021 paper. With 8 marks available, this is clearly a long question that will require a lot of working and mathematical skills. While many other questions are broken down step by step, you do not get the same guidance through this problem. The general topic tested here is calculus, specifically, forming and solving differential equations. This is an area many students find difficult when learning it in lessons, so it is no wonder this lengthy question proved so difficult in the exam itself. Indeed, it was the fact that a differential equation is involved that tripped up many students. Without this key step, they found it difficult to make a good attempt at this question. For more on this, have a look at the examiners’ report here. However, it is also worth noting that this was from November 2021 exams. Autumn exams are usually resits, so fewer people are taking them. 
In addition, disruption to education from COVID-19 that year may mean students found the exams harder than they would have otherwise. The solution to this question, along with the steps along the way that would get you credit in an exam, can be found in the mark scheme here. 8. AQA November 2021 – Paper 2, Question 9 The next question posed a real challenge for students sitting the exam. The examiners’ report, which you can have a look at here, says that many students left parts of the question blank, unable to attempt to solve the problem. The full question can be seen below. Alternatively, the full exam paper is available here. This is clearly a long question, with lots of information to read at the beginning, and lots of different stages. However, it has been broken down into different sections, which can be helpful to guide you through, rather than one large question worth all 9 marks. If you want to have a go at this question, check out the mark scheme here for step by step solutions to each part. One bit of advice the examiners’ report had was for students to make use of any diagrams given to them. Diagrams are really helpful in A-Level Maths. They often help you to visualise what the question is asking, and it can be helpful to annotate them as you work through the question. For more tips specific to A-Level Maths to help you get the top grades, have a look at this article from Think Student. 9. AQA November 2021 – Paper 2, Question 18 Next on the list is a question about the mechanics content of the A-Level Maths specification. The examiners’ report, which you can access here, shows that candidates struggled with all three parts of this question. It can be daunting to see a lot of information to read, followed by questions worth a lot of marks. As with the previous question, it can be helpful to use a diagram to make sense of what is involved in the problem. 
Ultimately, as mentioned, it is always useful to have a go at the question, even if you don’t understand it. The examiners’ report for this question said that many students left parts blank, so could not get any marks. Making a start might be worth credit in the mark scheme, but it can also make next steps for solving the question clearer. If you would like to try this question for yourself, the full past paper it is from can be found here, along with the mark scheme here. 10. Edexcel June 2022 – Paper 1, Question 16 The final question on this list was the final question on this Edexcel paper from 2022. The examiners’ report says that the mean mark was just 1.9, out of 9 total marks available. As the last question on the exam, you may expect it to be difficult. Indeed, students struggled with this, many leaving it blank. There were also a range of common mistakes made, which are outlined in the examiners’ report here. It requires calculus skills of both differentiation and integration, as well as knowledge of how parametric equations fit into this. This is not the only calculus-based question featured on this list, reinforcing the idea that this is one of the hardest areas of A-Level Maths. These pages of Physics and Maths Tutor may be useful to revise A-Level calculus: click here for differentiation, and here for integration. Alternatively, if you are ready to attempt the question above, you can find the mark scheme here. Where can you find more difficult A-Level Maths questions? Hopefully, this article has given you an idea of some of the hardest A-Level Maths questions available. If you think these are incredibly difficult, you are not alone. If you are looking to improve your maths skills to prepare for the final A-Level exams, the best way to do this is to practise. There are a huge number of past papers available on official exam board websites. 
Make sure to enter the correct qualification – you don’t want to accidentally practise with nothing but GCSE Maths papers! The following pages of exam board websites have a search tool to find the past papers you need: click on the exam board to be taken to their page: AQA, Edexcel, and OCR. You may also want exams categorised by topic. In this case, some useful websites include A-Level Maths Revision, Physics and Maths Tutor and Mr Barton Maths. While doing past questions isn’t the most fun thing to do, it really works! The more practice you get, the better prepared you will be for your final exams, and to tackle questions as hard as the ones in this article.

Comments:
- fahad (2 years ago): light work. took me 2 minutes
- Reply to fahad (2 years ago): Took me 1
- (2 years ago): These were the easiest sets of questions I’ve seen in my entire life. How could they ever be considered remotely hard. What has A-Level maths become.
Computer Architecture Reference

ALU: Arithmetic Logic Unit
- On 32 bits (MIPS architecture)
- Hardware building blocks are the AND gate, OR gate, inverter, and multiplexor

Single-Bit Full Adder
- operand a (input)
- operand b (input)
- CarryIn (= CarryOut from the previous adder) (input)
- CarryOut (output)
- Sum (output)
- To subtract, add a multiplexor to choose between b and ~b for the adder

Single-Bit Half Adder
- Has CarryOut but no CarryIn
- Only two inputs and two outputs

Carry Lookahead
- For a 32-bit ALU, a ripple adder connects multiple 1-bit adders together
- It takes too long to wait for the sequential evaluation of the adders, so anticipate the carry
- Use abstraction to cope with complexity

Multiplication
- Multiplicand = first operand
- Multiplier = second operand
- Product = final result
- An n x m multiplication produces n + m bits

1st Multiplication Algorithm
- Implements traditional pencil-and-paper multiplication
- 32-bit multiplier register and 64-bit multiplicand register
- The multiplicand register holds the 32-bit multiplicand in its right half; the left half is initialized to 0
- The multiplicand register is shifted left 1 bit each step to align with the sum
- The sum is accumulated in a 64-bit product register
- Each step: if the least significant bit of the multiplier is 1, add the multiplicand to the product (else don't); shift the multiplicand register left 1 bit; shift the multiplier register right 1 bit; repeat 32 times (on 32 bits)

2nd Multiplication Algorithm
- The 1st algorithm is inefficient because half the multiplicand bits were always 0
- Instead of shifting the multiplicand left, shift the product right
- Each step: if the least significant bit of the multiplier is 1, add the multiplicand to the left half of the product and place the result in the left half of the product register (else don't); shift the product register right 1 bit; shift the multiplier register right 1 bit; repeat 32 times (on 32 bits)

3rd Multiplication Algorithm
- The 2nd version is still not optimized enough: the product register had the same amount of wasted space as the size of the multiplier
- The 3rd algorithm combines the rightmost half of the product with the multiplier
- There is no multiplier register because the multiplier is placed in the right half of the product register
- Each step: if the least significant bit is 1, add the multiplicand to the left half of the product and place the result in the left half of the product register (else don't); shift the product register right 1 bit; repeat 32 times (on 32 bits)

Booth's Algorithm
- Classify groups of bits into the beginning, middle, or end of a run of 1s
- Four cases depending on the value of the multiplier bits:
  - 10 = beginning of a run of 1s: subtract the multiplicand from the left half of the product
  - 01 = end of a run of 1s: add the multiplicand to the left half of the product
  - 11 = middle of a run of 1s: no operation
  - 00 = middle of a run of 0s: no operation
- Shift the product register right 1 bit
- Extend the sign when the product is shifted right (an arithmetic shift, since we are dealing with signed numbers, as opposed to a logical shift)

Floating Point Addition
1. Shift the smaller number right until its exponent matches the larger exponent
2. Add the significands
3. Normalize the sum
4. Round the significand to the appropriate number of bits
5. Normalize and round again if necessary

Floating Point Multiplication
1. Add the biased exponents and subtract the bias from the sum so it is only counted once
2. Multiply the significands
3. Normalize the product
4. Round the significand to the appropriate number of bits
5. Normalize and round again if necessary

Assembly Programming for MIPS
- $s0, $s1, ... are used for registers that correspond to variables in C/C++ programs
- $t0, $t1, ... are used for temporary registers needed to compile the program into MIPS instructions
- Memory addresses are multiples of 4
- All MIPS instructions / words / etc. are 32 bits long

- add: add $s1, $s2, $s3 -- s1 = s2 + s3
- subtract: sub $s1, $s2, $s3 -- s1 = s2 - s3
- add immediate (constant): addi $s1, $s2, 100 -- s1 = s2 + 100
- load word: lw $s1, 100($s2) -- s1 = A[s2 + 100]
- store word: sw $s1, 100($s2) -- A[s2 + 100] = s1
- branch on equal: beq $s1, $s2, SOMEWHERE -- if (s1 == s2) go to SOMEWHERE
- branch on not equal: bne $s1, $s2, SOMEWHERE -- if (s1 != s2) go to SOMEWHERE
- set on less than: slt $s1, $s2, $s3 -- if (s2 < s3) s1 = 1 else s1 = 0
- unconditional jump: j SOMEWHERE -- goto SOMEWHERE

Procedures in Assembly
1. Place parameters in a place where the procedure can access them
2. Transfer control to the procedure
3. Acquire the storage resources needed for the procedure
4. Perform the desired task
5. Place the result in a place where the calling program can access it
6. Return control to the point of origin

could someone please give me an explanation of n's complement? he he, much more tutorials needed

We are working on an ALU architecture in VHDL including Multiply and Divide instructions. We are confused about how exactly the results from multiplication or division are sent outside the ALU. The adder/subtractor has 8-bit results, but multiplication will have 16 bits, and division will have an 8-bit quotient and an 8-bit remainder. So how exactly do we configure the outputs to send the whole information when we have a C as output from the ALU of size 8 bits? Do we use 2 statements, one after a specific time delay, assigning the second half of the result, or is it some other way?

For what it's worth, how about splitting the result between two memory locations?
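The Booth's algorithm cases described in these notes can be sketched in software. The following Python model is an illustration of my own (not from the original notes): it keeps one extra low-order bit in the product register to hold the previously shifted-out multiplier bit, so the two bits inspected each step are exactly the "10 / 01 / 11 / 00" pairs above, and the right shift is arithmetic (sign-extending).

```python
def booth_multiply(multiplicand, multiplier, bits=8):
    """Booth's algorithm sketch: multiply two `bits`-wide two's-complement
    integers using a (2*bits + 1)-bit product register."""
    mask = (1 << bits) - 1
    reg_mask = (1 << (2 * bits + 1)) - 1
    # Product register layout: [upper half | multiplier | extra bit].
    # The extra bit (initially 0) remembers the bit shifted out last step.
    p = (multiplier & mask) << 1
    for _ in range(bits):
        pair = p & 0b11                       # current LSB + previous bit
        if pair == 0b10:                      # beginning of a run of 1s
            p -= (multiplicand & mask) << (bits + 1)
        elif pair == 0b01:                    # end of a run of 1s
            p += (multiplicand & mask) << (bits + 1)
        p &= reg_mask
        # Arithmetic shift right: replicate the register's sign bit.
        sign = p >> (2 * bits)
        p = (p >> 1) | (sign << (2 * bits))
    result = p >> 1                           # drop the extra bit
    # Reinterpret the 2*bits-wide result as a signed value.
    if result >= 1 << (2 * bits - 1):
        result -= 1 << (2 * bits)
    return result
```

The arithmetic shift is the detail the notes stress: because the partial product is signed, a logical shift would corrupt negative intermediate values.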
What you need to know about Determinant of a Matrix - Class 12 Mathematics

I am Suman Mathews, online Math tutor and educator. I can tell you how exactly you should be learning Determinants for Class 12 Mathematics CBSE and ISC. Reading this post beforehand will ensure that you don't miss out on any topic before the exams. Determinants and Matrices is one of the easiest topics in Class 12. Almost all of my students get this topic right. Just ensure that you have a knowledge of Matrices before you start learning.

What is a Determinant of a Matrix?

To start with, a determinant is a number attached to a square matrix. You will learn how to expand a determinant by any row or column, and, if A is a square matrix and k is any scalar, what the determinant of kA is. You can access all the formulas in the YouTube link provided here. You can use determinants to find the area of a triangle. This is an important concept. Note that if the area of the triangle is zero, the points are collinear.

Minors and Cofactors

The minor of any element in a determinant of order n is a determinant of order n-1. Access the formula for minors and cofactors in this video. Once you learn how to calculate the cofactors of a matrix, you can progress to calculating the adjoint of a matrix. Access more properties of the adjoint of a matrix in the video link. A square matrix is said to be singular if its determinant is equal to zero; if a matrix is non-singular, its inverse exists. There are umpteen problems on this concept. Again, note that this is an important topic which carries considerable weightage in your exams.

Do you need extra help?

Though this topic of Determinants is incredibly easy, you may still need some extra help in studying this. Feel free to register for my webinar on Determinants. The entire topic may require 4 to 5 classes of one hour each. I will be teaching MCQ-based questions, questions based on logical reasoning and case-study-based questions, together with long answer questions.
These are in tune with the NEP model as framed by ISC, NCERT. For further queries, you can contact me on mathews.suman@gmail.com. I hope this was useful to you. You can access the formulas given in the links. Would you like to help other students who need that little extra help in Mathematics? If so, kindly share this with your friends and relatives. All my students have scored well in their exams. You can also visit my website www.sumanmathews.com for more free resources.
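To make the facts from this post concrete, here is a short NumPy sketch (my own illustration, not part of the course material) checking three of the results discussed above: det(kA) = k^n det(A) for an n x n matrix, the existence of the inverse of a non-singular matrix, and the determinant formula for the area of a triangle.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

det_A = np.linalg.det(A)          # 2*4 - 3*1 = 5, so A is non-singular
assert abs(det_A - 5.0) < 1e-9

# det(kA) = k^n * det(A) for an n x n matrix
k, n = 3.0, 2
assert abs(np.linalg.det(k * A) - k**n * det_A) < 1e-9

# A non-singular matrix has an inverse: A @ inv(A) = I
assert np.allclose(A @ np.linalg.inv(A), np.eye(n))

def triangle_area(p1, p2, p3):
    """Area of the triangle with the given (x, y) vertices,
    via the standard 3x3 determinant formula."""
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return abs(np.linalg.det(M)) / 2.0

assert abs(triangle_area((0, 0), (4, 0), (0, 3)) - 6.0) < 1e-9
# Collinear points give zero area, as noted in the post
assert triangle_area((0, 0), (1, 1), (2, 2)) < 1e-9
```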
Verification of the Kirchhoff's laws
Fares Al Rwaihy, April 15, 2023

This series of blog posts aims to compare hand calculation and numerical simulation techniques through technical software.

Kirchhoff's laws are fundamental principles of electrical circuit analysis that are widely used in various fields of engineering. These laws are essential for designing and analyzing complex electrical circuits. Kirchhoff's current law (node) and Kirchhoff's voltage law (mesh) are two of the most important laws in circuit theory, which govern the behavior of electric circuits. The main objective of this project is to verify the validity of Kirchhoff's laws by comparing the results of a hand calculation with an NI Multisim simulation.

NI Multisim results:

In this experiment we would like to show the verification of Kirchhoff's junction (node) and loop (mesh) laws by calculation and measurement. The first experiment shows that the supply voltage splits across each resistance: the two voltage differences together equal the supply voltage. We set a voltmeter to show the voltage difference across each resistance. The second experiment shows that the current splits at each node, so the sum of the split currents equals the incoming current. We set an ammeter to show how the current changes as it passes through the nodes.

Hand calculation results:

First we need to state the equations for series and parallel electric circuits.
In series: $Ieq\cdot Req\:=\:I1\cdot R1\:+\:I2\cdot R2$

In parallel: $\frac{1}{Req}\:=\:\frac{1}{R1}\:+\:\frac{1}{R2}$

For the first experiment results:

$V1\:=\:2.2\cdot 3.125\:=\:6.875\left[V\right]$

$V2\:=\:1\cdot 3.125\:=\:3.125\left[V\right]$

For the second experiment results, using the parallel-circuit relation above:

$Ieq\:=\:\frac{Veq}{Req}\:=\:\frac{Veq}{\frac{R1\cdot R2}{R1+\:R2}}$

$Ieq\:=\:\frac{Veq}{\frac{1\cdot 2.2}{1+\:2.2}}=\:14.5454\:\left[mA\right]$
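The hand calculation above can be cross-checked in plain Python. Note an assumption: the supply voltage (10 V) and the resistor values (1 kOhm and 2.2 kOhm) are inferred from the numbers in the results, not stated explicitly in the original post.

```python
# Inferred circuit values: volts and kilo-ohms, so currents come out in mA.
V, R1, R2 = 10.0, 1.0, 2.2

# Series circuit (KVL): one loop current; the voltage drops sum to the supply.
I_series = V / (R1 + R2)             # 3.125 mA
V1 = I_series * R2                   # drop across the 2.2 kOhm resistor, 6.875 V
V2 = I_series * R1                   # drop across the 1 kOhm resistor, 3.125 V
assert abs((V1 + V2) - V) < 1e-9     # Kirchhoff's voltage law

# Parallel circuit (KCL): the branch currents sum to the total current.
Req = R1 * R2 / (R1 + R2)
I_total = V / Req                    # ~14.545 mA, matching Ieq above
assert abs((V / R1 + V / R2) - I_total) < 1e-9   # Kirchhoff's current law
```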
Some theoretical results for the photochemical decomposition of large molecules The expressions describing photochemical dissociation derived by Rice, McLaughlin, and Jortner are evaluated analytically for molecules satisfying Γ≫(1+r) and for times t≪h/ε, where Γ is the width of the initial state due to interaction with an intermediate manifold, ε is the level spacing of the manifold, and r is essentially the ratio of the manifold level widths to ε. Excitation is found to decay exponentially from the initial state with rate Γ/[(1+r)h]. In contrast to the behavior originally predicted, the decay is found to be nonsequential, and a constant ratio, equal to r, is maintained between the populations of the continuum and the manifold. The results should be applicable to large molecules having a single decomposition mode.
Algebra 1 volume 1 answers

Related topics: solving with elimination, non standard form calculations, intermediate algebra final review, factor trinomial calculator, how to calculate roots of polynomials ti-83, how to convert e to decimal, how to translate square feet to linear feet, Radicalcomplex Number Expressions Calculator, free online module 8 papers maths, Online Math Worksheets.com, algebra 2: an integrated approach, quadratic simultaneous equation solver online, utilities to solve rationalize the denominator, algebra help software

Bronstil (Posted: Wednesday 03rd of Jan 21:57):
I've always wanted to excel in algebra 1 volume 1 answers, it seems like there's a lot that can be done with it that I can't do otherwise. I've searched the internet for some good learning resources, and checked the local library for some books, but all the data seems to be targeted at people who already know the subject. Is there any resource that can help new people as well?

kfir (Posted: Friday 05th of Jan 14:37):
I understand your situation because I had the same issues when I went to high school. I was very weak in math, especially in algebra 1 volume 1 answers, and my grades were terrible. I started using Algebrator to help me solve problems as well as with my homework, and eventually I started getting A's in math. This is a remarkably good product because it explains the problems in a step-by-step manner so we understand them well. I am absolutely confident that you will find it useful too.

Koem (Posted: Sunday 07th of Jan 14:16):
Yes I agree, Algebrator is a really useful product. I bought it a few months back and I can say that it is the main reason I am passing my math class. I have recommended it to my friends and they too find it very useful. I strongly recommend it to help you with your math homework.

nikitian2 (Posted: Monday 08th of Jan 11:29):
Is it really true that a software can perform like that? I don't really know much anything about this Algebrator, but I am really seeking for some help, so would you mind suggesting me where could I find that software? Is it downloadable over the net? I'm hoping for your fast reply because I really need help desperately.

Techei-Mechial (Posted: Tuesday 09th of Jan 17:20):
Visit https://softmath.com/links-to-algebra.html and hopefully all your problems will be resolved.

TC (Posted: Wednesday 10th of Jan 18:11):
I remember having difficulties with algebraic signs, function domain and simplifying expressions. Algebrator is a truly great piece of algebra software. I have used it through several algebra classes - Algebra 2, College Algebra and College Algebra. I would simply type in the problem and by clicking on Solve, step by step solution would appear. The program is highly recommended.
2115 -- C Looooops

Time Limit: 1000MS Memory Limit: 65536K
Total Submissions: 42529 Accepted: 12542

A Compiler Mystery: We are given a C-language style for loop of type

for (variable = A; variable != B; variable += C)

I.e., a loop which starts by setting variable to value A and, while variable is not equal to B, repeats statement followed by increasing the variable by C. We want to know how many times the statement gets executed for particular values of A, B and C, assuming that all arithmetic is calculated in a k-bit unsigned integer type (with values 0 <= x < 2^k) modulo 2^k.

The input consists of several instances. Each instance is described by a single line with four integers A, B, C, k separated by a single space. The integer k (1 <= k <= 32) is the number of bits of the control variable of the loop and A, B, C (0 <= A, B, C < 2^k) are the parameters of the loop. The input is finished by a line containing four zeros.

The output consists of several lines corresponding to the instances on the input. The i-th line contains either the number of executions of the statement in the i-th instance (a single integer number) or the word FOREVER if the loop does not terminate.

Sample Input

Sample Output

CTU Open 2004
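The task reduces to solving the linear congruence C * n ≡ (B - A) (mod 2^k) for the smallest non-negative n, which is a standard application of the extended Euclidean algorithm. Here is one possible Python sketch (my own, not an official solution to the problem):

```python
import math

def loop_count(A, B, C, k):
    """Number of times the loop body runs, or "FOREVER" if it never stops.

    The loop is: for (x = A; x != B; x += C) with k-bit unsigned wraparound,
    i.e. solve C * n = (B - A) (mod 2^k) for the smallest n >= 0.
    """
    M = 1 << k
    a, b = C % M, (B - A) % M
    g = math.gcd(a, M)
    if b % g != 0:
        return "FOREVER"          # the congruence has no solution
    a, b, M = a // g, b // g, M // g
    # a and M are now coprime, so a is invertible modulo M (when M > 1),
    # and the smallest solution is b * a^-1 mod M.
    return (b * pow(a, -1, M)) % M if M > 1 else 0
```

For example, loop_count(3, 7, 2, 16) returns 2, while loop_count(3, 4, 2, 16) returns "FOREVER" because an even step can never close an odd gap modulo a power of two. (pow with a negative exponent and a modulus requires Python 3.8+.)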
How do you find the domain of #g(x) = x² - 4#? | HIX Tutor

Answer 1

The key realization here is that our function, #g(x)#, is a polynomial, so it is defined for all real numbers.

Domain: #x in RR#
A History of Gram-Schmidt Orthogonalization

Department of Mathematics, University of California San Diego
Center for Computational Mathematics Seminar

Steven Leon, UMass Dartmouth

A History of Gram-Schmidt Orthogonalization

It has been more than a hundred years since the appearance of the landmark 1907 paper by Erhard Schmidt where he introduced a method for finding an orthonormal basis for the span of a set of linearly independent vectors. This method has since become known as the classical Gram-Schmidt Process (CGS). In this talk we present a survey of the research on Gram-Schmidt orthogonalization, its related QR factorization, and the algebraic least squares problem. We begin by reviewing the two main versions of the Gram-Schmidt process and the related QR factorization and we briefly discuss the application of these concepts to least squares problems. This is followed by a short survey of eighteenth and nineteenth century papers on overdetermined linear systems and least squares problems. We then examine the original orthogonality papers of both Gram and Schmidt. The second part of the talk focuses on such issues as the use of Gram-Schmidt orthogonalization for stably solving least squares problems, loss of orthogonality, and reorthogonalization. In particular, we focus on noteworthy work by Ake Bjorck and Heinz Rutishauser and discuss later results by a host of contemporary authors.

*S. J. Leon, Ake Bjorck and Walter Gander are co-authors of the paper Gram-Schmidt Orthogonalization: 100 years and more, Numer. Linear Algebra Appl. (2013). This talk is to a large part based on that paper.

January 5, 2016, 10:00 AM, AP&M 2402
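For readers who want to see the method itself, this is a minimal Python sketch of the classical Gram-Schmidt process the announcement describes (my own illustration, not code from the talk). The comment marks the one line where the modified variant, relevant to the loss-of-orthogonality issues mentioned above, would differ.

```python
import numpy as np

def classical_gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    Q = np.zeros(A.shape)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        # CGS projects the *original* column A[:, j] onto each earlier basis
        # vector; modified Gram-Schmidt would project the updated v instead,
        # which behaves much better in floating point.
        for i in range(j):
            v = v - (Q[:, i] @ A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q = classical_gram_schmidt(A)
assert np.allclose(Q.T @ Q, np.eye(2))   # columns are orthonormal
```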
Quantum magic squares

In a new paper in the Journal of Mathematical Physics, Tim Netzer and Tom Drescher from the Department of Mathematics and Gemma De las Cuevas from the Department of Theoretical Physics have introduced the notion of the quantum magic square, which is a magic square where instead of numbers one puts in matrices. This is a non-commutative, and thus quantum, generalization of a magic square. The authors show that quantum magic squares cannot be as easily characterized as their "classical" cousins. More precisely, quantum magic squares are not convex combinations of quantum permutation matrices. "They are richer and more complicated to understand," explains Tom Drescher. "This is the general theme when generalizations to the non-commutative case are studied."

Check out the paper! Quantum magic squares: Dilations and their limitations: Journal of Mathematical Physics, Vol 61, No 11 — Read on aip.scitation.org/doi/10.1063/5.0022344
In Python, how to create a NumPy array of random integers?

Question #370, submitted by Answiki on 01/10/2021 at 12:38:25 PM UTC

Answer, submitted by Answiki on 01/10/2021 at 12:38:01 PM UTC

The NumPy function numpy.random.randint(low, high=None, size=None, dtype='l') returns random integers. By setting the size option, the function can return a numpy.ndarray of random integers:

>>> import numpy as np
>>> np.random.randint(low = 0, high = 10, size=(3,2))
array([[8, 7],
       [8, 2],
       [6, 2]])
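Worth noting: `numpy.random.randint` belongs to NumPy's legacy interface. Since NumPy 1.17 the recommended approach is the `Generator` API, which behaves the same way for this task (`high` is exclusive by default). A quick sketch; the seed value is arbitrary and shown only for reproducibility:

```python
import numpy as np

# Newer Generator API (NumPy >= 1.17); replaces the legacy np.random.randint
rng = np.random.default_rng(seed=42)
arr = rng.integers(low=0, high=10, size=(3, 2))
print(arr.shape)  # (3, 2), with every entry in the range 0..9
```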
Loyola University Chicago AY 23-24

Dr. Shuwen Lou is the local organizer for the international conference "Recent Progress in Stochastic Analysis and its Applications". The conference will be held here at Loyola July 15-19, 2024. Click below to learn more!

Congratulations to our 2024 graduates! The Department of Mathematics and Statistics is proud of you and all of your efforts these past years. Click to learn more about the awards and honors given to our graduates on May 1st.

The Department of Mathematics and Statistics mourns the passing of Dr. Joseph Mayne. Joe was a member of this department for 48 years before retiring in 2020. During his time at Loyola he served two terms as department chair and was a member of Faculty Council. He also taught countless students. His contributions were not limited to math; he also founded the Loyola Chamber Orchestra and was its conductor for 27 years.

Join the Math & Stat Department for our Colloquium on Thursday, April 18, which will feature a talk by Rigoberto Florez (The Citadel). His topic will be "The strong divisibility property and the resultant of generalized Fibonacci polynomials". Click to learn more details about the talk.

Join the Math & Stat Department for our Colloquium on Thursday, April 11, which will feature a talk by Frank Baginski (George Washington University). His topic will be "The Shape of a High Altitude Balloon". Click to learn more about the details!

Please consider registering for the 2024 Chicago Symposium Series, Excellence in Teaching Mathematics and Science: Research and Practice, hosted at Loyola on Friday, April 12, 10am-5pm, check-in starting at 9:30am in McCormick Lounge. The Chicago Symposium Series is a twice-a-year symposium bringing together higher education faculty, staff, graduate students, and undergraduates who are interested in a discussion of issues and innovations in STEM education. Click to learn more about the details.
DataFest is a data "hackathon" for students to motivate a data-analysis class project, making data analysis more fun and meaningful while incentivizing good scientific practice and presentation. Now sponsored by the American Statistical Association, ASA DataFest is run through several host institutions across the country, including right here at Loyola University Chicago (April 5-7). Sign up!

The Math/Stat DEI committee is hosting a movie and discussion for Women's History Month on March 20 (4:15pm in IES 110) and is inviting you to join. The movie is "Secrets of the Surface: The Mathematical Vision of Maryam Mirzakhani". A brief discussion of the film will follow the viewing. There will be pizza!

The Department of Mathematics and Statistics would like to congratulate one of its own on receiving an honorary degree. The degree was awarded by Thai King Maha Vajiralongkorn for Tim's impactful service at the University of Phayao, Thailand. Well done, Tim!

Join Data Science and the Math & Stat Department for their joint Colloquium on Thursday, February 29th, which will feature a talk by Claudia Solís-Lemus (Wisconsin Institute for Discovery). Her topic will be "Inferring Biological Networks" and pizza will be provided. Click to learn more about the details!

The math student team won the Loyola intramural volleyball championship. Congratulations!

Join the Math and Stats Club for a paper football tournament! We will begin with Dr. Gregory Matthews of the Stats department giving a short talk on statistical analysis in football. Click to see more details!

The LUC Association of Women in Mathematics Student Chapter is happy to announce a talk by Loyola alum Dr. Emma Zajdela. Join us as she discusses her topic "Catalyzing collaborations: Modeling scientific team formation for global impacts". Click to see more details!
In the lead-up to Finals Week, the Committee for Diversity, Equity, and Inclusion in the Department of Mathematics and Statistics sponsored a Math Review Session. Dozens of Precalculus and Calculus students collaborated with many of the department's math teachers. Thanks to all involved for this successful event!

The Department of Mathematics and Statistics is happy to announce our next Colloquium on Thursday, December 7th, which will feature a talk by Elizabeth Gross (University of Hawai`i at Mānoa). Join us as she discusses the topic "The Algebra and Geometry of Evolutionary Biology".

Five Loyola students from the math department attended the Field of Dreams conference in Atlanta, GA. There they attended several workshops and learned about REUs and many post-grad opportunities such as grad school and professions in industry. (l-r, Dr. Darius Wheeler, Maria Redle, Kellie Wijas, Isabel Renteria, Anurathi Madasi, and Joey Dingillo). Click the title above to learn more about the conference!

On November 14th (4 to 5:30pm) several faculty members from Math and Stats will be on hand to give information, advice, and help to anyone interested in applying to do research as an undergrad. Doing research gives you a very different view of what you study and can be a great experience. Please come by Cuneo 312 Tuesday afternoon if you are interested!

Our next Mathematics and Statistics Colloquium on Thursday, November 16th will feature a talk by Albert S. Berahas (University of Michigan). Join us as he discusses the topic "Next Generation Algorithms for Stochastic Optimization with Constraints".

Join us as we have three talks on exciting research done by our undergraduate students over the past year. The speakers are Cecily Bartsch, Cole Fleming, and Kathryn Cantrell. Click above to learn more!

Join us on October 12th as we have three talks on exciting research done by our undergraduate students over the past year.
The speakers will be Isabel Rentería, Amanda Newton, and Anurathi Madasi. Click above to learn more!

Loyola's Math & Stat Club is offering drop-in tutoring this year on Mondays from 5pm - 7pm and Thursdays from 7pm - 9pm in IES 123 (Institute of Environmental Sustainability). If you are a student looking for assistance with your math or stat class, be sure to take advantage of this fantastic resource.

Congratulations to Miriam Kabagorobya and Xiang Wan on being named fellows in the MATCH program, where they will be doing exciting work with middle school math students. Learn more about this amazing program by clicking above!
Feed Chickens, Not Landfills | BioCycle

Maureen Breen

When Austin, Texas implemented a policy to pay residents $75 to purchase a chicken coop so that people could feed household food scraps to backyard chickens, they did not track the amount of diverted food waste. My research in Philadelphia is measuring the amount of food scraps consumed by a typical backyard chicken. Backyard chickens are omnivorous and can eat most of the same foods that humans eat or discard. Three criteria were used to evaluate the value of backyard chickens in reducing municipal solid waste (MSW) — financial cost, environmental impact, and the EPA's food waste recovery hierarchy.

Before performing any evaluations, I needed to know how much a typical backyard chicken eats. Eleven people volunteered to measure the food scraps they fed to their backyard chickens. Since Austin is the only municipality that currently has such a program, I used data from Austin and other financial data for the analysis. Eleven households in the Philadelphia metropolitan area weighed the food scraps fed to their 48 backyard chickens every day for one week a month from September 2018 to January 2019. Flocks ranged from two to nine chickens. The total weight of household food scraps fed to the backyard chickens for the five weeks studied was 349 pounds (lbs). When extrapolated to a full year of consumption, a backyard chicken is expected to consume approximately 1.6 lbs/week of food scraps, or 83.2 lbs/year.

Given Austin's 2018 cost of $163/ton to collect and compost or dispose of organic MSW, each backyard chicken saves $6.76/year in MSW management costs. A common-size flock of four chickens would save $27.04/year. In addition to data from Austin, I used 3.4%, the average municipal bond rate in 2018, as the discount interest rate, and 10 years as a chicken's expected life. I also performed sensitivity analysis for these values, including payback period, net present value (NPV), internal rate of return (IRR), and profitability index (PI).
The payback period is the amount of time required for the cash inflows to equal the initial cost. In my research, the payback period is 2.80 years. That is, given the assumed input values, a municipality can expect to recover the cost of a $75 coop in 2.8 years. The highest initial cost that would still achieve a payback period of no more than five years is $135, and the minimum annual cost savings necessary to achieve a payback period of five years is $15. The backyard chickens could eat approximately 55% of what they ate in my study and still yield a payback period of five years.

Net Present Value: NPV is the sum of all the cash flows discounted at the assumed discount rate over the length of the project. Based on the estimated values, the NPV is $150.90. That is, a municipality that pays citizens $75 to purchase a chicken coop can expect to gain $150.90 in present-value dollars. Given the assumed input values, the highest initial cost that would yield a positive NPV is $225; a flock of four backyard chickens could eat approximately one-third of what they ate in my study and still yield a positive NPV.

Internal Rate of Return: The IRR is the discount rate at which the project breaks even. The IRR in my study is 34.1%. A municipality that pays citizens $75 to purchase a chicken coop can expect to earn a return of 34.1%. Given the assumed values, the minimum annual cost savings necessary to achieve an IRR of 3.4% is $9.00; a flock of four backyard chickens could eat one-third of what they ate in my study and yield a project with an IRR of 3.4%.

Profitability Index: The PI is the ratio of the present value of the program cash flows to the initial investment. Based on the assumed values, the PI is approximately 3.0. That is, a municipality that pays citizens $75 to purchase a chicken coop can expect to earn three times the initial investment in positive cash flow, in present-value dollars, over the project. Backyard chickens are cost-effective at reducing the cost of MSW disposal.
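These three measures are easy to reproduce. A minimal sketch using the article's stated inputs ($75 coop cost, $27.04/year savings for a four-hen flock, 3.4% discount rate, 10-year life); the function names are mine, and small differences from the article's figures likely reflect rounding in its inputs:

```python
def npv(rate, initial_cost, annual_saving, years):
    # present value of a level annuity of savings, minus the up-front cost
    pv = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
    return pv - initial_cost

def payback_years(initial_cost, annual_saving):
    # simple (undiscounted) payback period
    return initial_cost / annual_saving

def irr(initial_cost, annual_saving, years, lo=0.0, hi=10.0):
    # bisection on the discount rate that drives NPV to zero
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, initial_cost, annual_saving, years) > 0:
            lo = mid
        else:
            hi = mid
    return lo

print(round(payback_years(75, 27.04), 2))   # ~2.8 years
print(round(npv(0.034, 75, 27.04, 10), 2))  # ~$151 (article reports $150.90)
print(round(irr(75, 27.04, 10), 3))         # ~0.341, i.e. 34.1%
```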
The sensitivity analysis revealed that the initial cost can be as high as $225, the annual savings can be as low as $2.25/backyard chicken or $9.00/flock, the discount rate can be as high as 33.7%, or the project length can be as short as 3.0 years without changing the results.

Climate Benefits

The impact on climate change was determined using a life cycle assessment based on data available in the Ecoinvent 3.0 database. The Ecoinvent 3.0 data on the carbon dioxide equivalent (CO2-e) for production of meat chickens was applied to estimate the CO2-e for backyard chickens. I used the weight of 3 kgs or 6.5 pounds for a Rhode Island Red hen, a very popular backyard chicken breed, and estimated that a backyard chicken will produce approximately 7 CO2-e kgs in its lifetime. The CO2-e production of manure for a backyard chicken is 0.56 CO2-e kgs. Thus, the total expected CO2-e from a backyard chicken is 7.56 CO2-e kgs. With 24 chickens needed to consume a ton/year of MSW food waste, these 24 chickens would produce 181.4 kgs of CO2-e. The CO2-e from placing a ton of food waste into a landfill, where it decomposes anaerobically, is 399 kgs. Chickens produce less than half of the CO2-e produced from the decomposition of the food waste. The backyard chickens are environmentally preferable to placing the food in landfills.

Finally, on the EPA's Food Recovery Hierarchy, feeding animals is a use that ranks third of the six listed (prevention is at the top, and landfill is at the bottom of the pyramid). The backyard chickens produce eggs and fertilizer that can be used at the household where they reside, completing the cycle from household food scraps to food with no consumption of transportation resources. Clearly, it is better to put food scraps in a chicken coop than in a landfill.

Maureen Breen is an accounting professor at Drexel University in Philadelphia, PA, where she lives with her flock of five hens.
She is also the President of Philadelphia Backyard Chickens and teaches chicken-related classes in the Philadelphia metropolitan area.
Cantilever Beam Slope and Deflection Calculator

It is important for civil engineers to calculate beam deflection. The cantilever beam slope and deflection calculator helps you calculate the slope and deflection of a cantilever beam under a concentrated load at any point. Just enter the inputs to find the slope at the free end and the deflection of the beam. This cantilever beam slope and deflection calculator is based on the formula provided above, and it will make your calculations easy.
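The page does not reproduce the calculator's formula, but the standard Euler-Bernoulli textbook results for a cantilever of length L with a point load P applied a distance a from the fixed end are a free-end slope of Pa²/(2EI) and a free-end deflection of Pa²(3L − a)/(6EI). A sketch using those standard formulas (function name mine; not necessarily the site's exact implementation):

```python
def cantilever_point_load(P, a, L, E, I):
    """Free-end slope (rad) and deflection for a cantilever of length L
    with point load P applied a distance a from the fixed end.
    Standard Euler-Bernoulli beam formulas; consistent SI units assumed."""
    theta = P * a**2 / (2 * E * I)              # slope at the free end
    delta = P * a**2 * (3 * L - a) / (6 * E * I)  # deflection at the free end
    return theta, delta

# A load at the tip (a = L) reduces to the familiar PL^3 / (3EI)
theta, delta = cantilever_point_load(P=1000.0, a=2.0, L=2.0, E=200e9, I=1e-6)
print(round(theta, 4), round(delta, 6))  # 0.01 0.013333
```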
1023 - The perfect hamilton path

There are N (2 <= N <= 13) cities and M bidirectional roads among the cities. There is at most one road between any pair of cities. Along every road there are G pretty girls and B pretty boys (1 <= G, B <= 1000). You want to visit every city exactly once, and you can start from any city you want. The degree of satisfaction is the ratio of the number of pretty girls to the number of pretty boys. You want to know the highest degree of satisfaction.

Input: There are multiple test cases. First line: two integers N, M. The following M lines: each line has four integers i, j, G, B, denoting that there is a road between i and j with G girls and B boys.

Output: The highest degree of satisfaction, rounded to the third place after the decimal point.

sample input sample output
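One standard way to attack this (not necessarily the intended judge solution) is 0/1 fractional programming: binary-search the answer λ and use a bitmask DP to test whether some Hamilton path achieves Σ(G − λB) ≥ 0. With N ≤ 13 the DP has 2^N × N states. A sketch, with input handling omitted and names of my own choosing:

```python
def best_ratio(n, roads):
    """roads: list of (i, j, G, B) with 1-based city indices.
    Returns the maximum of sum(G)/sum(B) over Hamilton paths."""
    NEG = float("-inf")
    adj = [[None] * n for _ in range(n)]
    for i, j, g, b in roads:
        adj[i - 1][j - 1] = adj[j - 1][i - 1] = (g, b)

    def reachable(lam):
        # best value of sum(g - lam*b) over Hamilton paths, bitmask DP
        dp = [[NEG] * n for _ in range(1 << n)]
        for v in range(n):
            dp[1 << v][v] = 0.0
        for mask in range(1 << n):
            for v in range(n):
                cur = dp[mask][v]
                if cur == NEG:
                    continue
                for w in range(n):
                    if mask >> w & 1 or adj[v][w] is None:
                        continue
                    g, b = adj[v][w]
                    val = cur + g - lam * b
                    if val > dp[mask | (1 << w)][w]:
                        dp[mask | (1 << w)][w] = val
        return max(dp[(1 << n) - 1]) >= 0.0

    lo, hi = 0.0, 1000.0  # each edge's G/B is at most 1000, bounding the ratio
    for _ in range(50):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if reachable(mid) else (lo, mid)
    return lo

# Tiny made-up example: the best path 1-2-3 gives (2+3)/(1+1) = 2.500
print("%.3f" % best_ratio(3, [(1, 2, 2, 1), (2, 3, 3, 1), (1, 3, 1, 2)]))
```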
How to Calculate and do an Anova Test in Excel - Spreadsheeto (2024)

What is ANOVA

ANOVA stands for Analysis of Variance. This statistical test in Excel is used to check whether the means of three or more independent data groups are statistically and significantly different, or not. It helps to determine if the differences between the group means represent actual differences (between the population means) or if they have occurred by chance 📝

The ANOVA test is the appropriate choice when you want to compare the variance between three or more groups. For example, when you want to compare sample data of student heights from three different schools.

There are two kinds of ANOVA tests that you can perform in Excel.

• One-way ANOVA (or one-variable ANOVA test): The one-way ANOVA test is used when you have one independent variable defining multiple groups, and you want to see if the dependent variable differs across those groups. For example, if we compare the test scores of students from three different schools (Schools X, Y, and Z), the school is an independent variable, and the test scores are the dependent variable (dependent on how good each school is). Performing a one-way ANOVA test on the sample of scores of these students will help you know if the performance of each school is significantly different from the others (analysis of variance in scores) 🏫

• Two-way ANOVA (or ANOVA: two-factor test): The two-way ANOVA test is to be used when you have two independent variables, and you want to test how these variables interact and cause differences in a dependent variable. Taking the same schools-and-scores example above: if you want to see how three different schools (first independent variable) and their faculty (second independent variable) affect the test scores of students, you can run the two-way ANOVA analysis 👀

One-way ANOVA test in Excel

Performing a one-way or one-variable ANOVA test in Excel is quite straightforward.
Let me show it to you through an example here. Here I have data on test scores from three different schools, say School X, School Y, and School Z 🎓 To run an analysis of variance to see if the mean of test scores from these three schools is significantly different, follow the steps below.

The ANOVA test is part of the Analysis ToolPak of Excel, which won't be on your ribbon by default. To add it to the Excel ribbon:

Step 1) Go to the File tab.
Step 2) Go to Excel Options.
Step 3) From the pane on the left, click on Add-ins.
Step 4) From the bottom of the window, select Excel Add-ins and click on Go.
Step 5) Check the option for Analysis ToolPak and click OK.

The Analysis group will be added to the Data tab 👇

Step 6) Go to the Data tab > Analysis group > Data Analysis.
Step 7) Select Anova: Single Factor and click OK.
Step 8) As the input range, select the data on which the ANOVA test is to be run (including the headers).
Step 9) Our data is grouped as three different columns, so I am checking 'Columns' as Grouped By.
Step 10) Check 'Labels in first row' as our data has labels/headers in the first row.
Step 11) The Alpha value is set to 0.05 (we'll let it be that for now).
Step 12) For the output range, define the cell range where you want the results of the ANOVA test populated.
Step 13) All details are done; now click OK.

Excel will run the one-way ANOVA test for you and return the results as follows. Running a one-way ANOVA test is this simple in Excel. The results might seem like too much to digest now. But keep your calm, we will break them down to understand what they mean in the next section 🥂

Interpreting the results of the One-way ANOVA test in Excel

Here are the results for the one-way ANOVA test that we ran for the test scores of three different schools above. What do they tell you about the dataset?
Let's see that here 😎

The Summary Table

• Count: This simply counts the number of data points (test scores) in each group (school).
• Sum: The sum tells the total of all test scores in each group.
• Average: The average (mean) is calculated by dividing the sum of the test scores by the count of students.
• Variance: Variance tells how much the test scores of each school deviate from the mean. It is the sum of squared differences between each score and the group mean, divided by the count minus one.

Stats from the Summary table are easy to decode — more like some basic statistical figures for your dataset put together.

ANOVA Table

• Source of Variation: The variation in data can take two forms: variation between the different data groups and variation within the same data group.
• Between Groups: The between-groups stats represent the variation in the means of different groups due to the independent variable (as discussed above).
• SS (Sum of Squares): Before I explain what it represents, let me show you how it is calculated 🔎
1. Find the mean for each group, already calculated in the Summary table as 87.3, 78, and 91.3.
2. Calculate the overall mean for all three groups. It is 85.53.
3. Now calculate the sum of squares for each group: multiply the group's count by the squared difference between the group mean and the overall mean.
This will give you 31.21, 567.51, and 332.54 for School X, School Y, and School Z, respectively. Sum all these numbers up to get the sum of squares as 931.267. The sum of squares is the total variation between the group means.
• Df (Degrees of Freedom): It is simply the number of groups less 1. We are analyzing three independent groups, so the df is 2 (3 less 1). The degrees of freedom tell the number of independent comparisons that you can make between the groups.
• MS (Mean Square): Just like the Sum of Squares is the total variation between all three groups, the Mean Square represents the average variation between the three groups. To calculate it, you divide the SS (the Sum of Squares, 931.267) by the df (the Degrees of Freedom, 2).

The F-statistic is the between-groups Mean Square divided by the within-groups Mean Square, and once you see that formula, half the story behind it automatically begins to make sense. The F-statistic is a measure that compares the variance between the group means with the variance within the groups. The higher this number is, the more significantly different the means are. The P-value comes from the F-distribution based on the F-statistic and the df 👩‍🏫

Remember we set the Alpha value to 0.05 when we were defining the input data to run the ANOVA test.

Kasper Langmann, co-founder of Spreadsheeto

The P-value is indicative of the probability of the null hypothesis (that the means of all the groups are the same) being true. A P-value equal to or less than 0.05 rejects the null hypothesis. This might mean that at least one of the groups' means is significantly different.

• F crit (F critical value): F crit is the critical value from the F-distribution table for a given level of significance (usually set at 5%) and the degrees of freedom 💹

A 5% significance level is the probability threshold for rejecting the null hypothesis. This means there is a 5% chance that we will incorrectly reject the null hypothesis, and that is acceptable to us.

The same calculations (based on the same logic) continue within groups.

Bottom Line for our ANOVA test

For the ANOVA test we've just run above, we have the following key stats.
• F-value: 71.35
• P-value: 1.66961E-11 (which is approximately 0.0000000000167)
• F critical value (F crit): 3.354131

Since the P-value is far smaller than 0.05 (it's not even close to 0.01), we can safely reject the null hypothesis (that all the group means are the same). As for the F-value, 71.35 is significantly greater than the F crit of 3.354, which further indicates that the group means are significantly different from each other 🚀

These ANOVA results show that the mean test scores for students from the three schools X, Y, and Z are very different. The independent variable (schools) makes a great difference to the dependent variable (test scores). That is how the ANOVA table supports hypothesis testing and summarizes the total variability in the given datasets: it shows variability between different groups and within groups.

Following this step-by-step guide, you can run one-way ANOVA tests in Excel like a pro. ANOVA tests help you determine whether the differences in the means of three or more independent groups are significant 💪 Running an ANOVA test in Excel might not be as big a challenge as interpreting its results. Rightly understanding the results of the ANOVA test, such as the P-value and F-statistic, will help you make informed decisions based on your data. We have covered all of this in the guide above. Hope you enjoyed reading this guide, and if you did, do not forget to check out other similar Excel tutorials by Spreadsheeto:

• How to Add Line of Best Fit in Excel (Easy Method)
• How to Find Coefficient of Variation in Excel
• How to Perform Linear Interpolation in Excel (Easy)
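The table arithmetic explained above is easy to reproduce outside Excel. A dependency-free sketch of the same one-way ANOVA computations (the function name is mine; it returns the F-statistic and its ingredients, but not the P-value, which requires an F-distribution):

```python
def one_way_anova(groups):
    """groups: list of lists of observations. Returns the between/within
    sums of squares, degrees of freedom, mean squares, and F-statistic,
    mirroring Excel's Anova: Single Factor table."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    ms_b, ms_w = ss_between / df_b, ss_within / df_w
    return {"SSB": ss_between, "SSW": ss_within, "dfB": df_b,
            "dfW": df_w, "MSB": ms_b, "MSW": ms_w, "F": ms_b / ms_w}

# Toy data: three groups whose means are 2, 3 and 4
result = one_way_anova([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(result["F"])  # 3.0
```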
Classifying Solids using Angle Deficiency

Toni Beardon has chosen this article introducing a rich area for practical exploration and discovery in 3D geometry. Like most NRICH articles, the reader will get most from it by doing the mathematics before moving on from one paragraph to the next. (This article is based on a mathematics workshop for 12 year olds given by Warwick Evans and Alison Clark Jeavons in Cambridge in February 2000. The photos were taken at the ATE / NRICH Mathematics Superweek holiday for 10 to 13 year olds at Southam in August 2000.)

In this article we shall find out why there are only five regular polyhedra, that is, solids where all the faces are regular polygons (triangles, squares, pentagons and hexagons). A shape is called 'regular' when all the sides are the same length, all the angles are the same, all the faces are the same and the pattern in which the faces meet at each vertex (the vertex form) is the same. We shall also explore the properties of the semi-regular polyhedra, the solids where the faces are two or more different regular polygons coming together in the same pattern at each vertex. In the pictures Ellie is holding a dodecahedron, which is one of the five regular polyhedra, and Vicky is holding a great rhombicosidodecahedron, which is a semi-regular polyhedron with square, hexagonal and octagonal faces.

All you need to follow this article is very simple arithmetic, to know what an angle is, and to use simple logical thinking. Though it is not essential, you will be able to visualise the solids much more easily if you can use a construction kit (such as the snap-together plastic shapes made by Polydron, shown in the photos) to make the solids while reading this article. The regular polyhedra are called the Platonic solids and the semi-regular ones the Archimedean solids, after two famous Greek mathematicians.
Using the plastic polygons made by Polydron, we can discover the three regular tessellations made by triangles, squares and hexagons. Next we can introduce a notation to describe the vertex formed by each tessellation. This notation, describing the number of edges of each polygon meeting at a vertex of a regular or semi-regular tessellation or solid, was devised by the Swiss mathematician Ludwig Schlafli (1814-1895). He was a schoolteacher who did mathematical research in his spare time. Each vertex in the hexagonal tessellation is surrounded by three 6-sided polygons and we say that the vertex form is 666. The square tessellation has vertex form 4444. In a similar way, each of the vertices in the triangular tessellation is surrounded by six 3-sided polygons and has the vertex form 333333. The interior angles of the triangle, square and hexagon are $60^{\circ}$, $90^{\circ}$ and $120^{\circ}$ respectively and it can be seen that the sum of the angles at a vertex in any of the three tessellations is $360^{\circ}$. We can see that these are the only possible regular plane tessellations because pentagons, with interior angles of $108^{\circ}$, cannot fit together around a vertex and for polygons with more than six sides the interior angles are more than $120^{\circ}$ so it is impossible to fit three or more together around a vertex. In any solid, the number of faces at a vertex must be more than two. We begin with triangles. Using only triangles, each vertex must have fewer than 6 faces at a vertex (otherwise we end up with a plane tessellation). We can join three 3-gons (i.e. triangles) to make a vertex with vertex form 333. Carry on constructing a solid so that each vertex has form 333 and we arrive at the tetrahedron with 4 triangular faces. We can join four 3-gons to make a vertex with vertex form 3333 and this solid is the octahedron with 8 triangular faces. We can join five 3-gons (i.e. 
triangles) to make a vertex with form 33333 and this solid is the icosahedron with 20 triangular faces.

Now move on to squares. To make a solid, we must have more than 2 faces at a vertex and fewer than 4 [Why fewer than 4?]. This leaves us with the vertex form 444 and we construct the cube (or hexahedron) with 6 square faces. Next comes the pentagon. Each vertex of any solid must have more than 2 faces. Once you convince yourself that we cannot have a vertex of form 5555, we are left with the regular solid with vertex form 555 called the dodecahedron with 12 pentagonal faces. We cannot make a regular solid with any polygon with six or more sides [Why?].

We have shown that there are five and only five regular solids, and we can begin to complete the following table. In the photo you will see that Ross has constructed the five regular polyhedra (the Platonic solids, named after Plato).

| Name | Vertex Form | n(Faces) = F | n(Vertices) = V | n(Edges) = E | Angle Deficiency | Total Angle Deficiency |
| Tetrahedron | 333 | 4 | | | | |
| Octahedron | 3333 | 8 | | | | |
| Icosahedron | 33333 | 20 | | | | |
| Cube | 444 | 6 | | | | |
| Dodecahedron | 555 | 12 | | | | |

After careful counting of vertices and edges - it isn't as easy as it sounds - we can complete the next two columns. Try to do this for yourself, then check your results. The conjecture that F + V - E = 2 should come from examining the numbers in the F, V and E columns. This is the famous (and so useful) Euler's Theorem. Here it is only a conjecture made from looking at our table, but it is in fact true - finding a proof is left to the reader.

The next column - Angle Deficiency - supplies the core theme of this activity. Consider the tetrahedron. Its vertex form is 333 and so the sum of the angles at each of its vertices is $60^{\circ}+60^{\circ}+60^{\circ}=180^{\circ}$ and we say that the angle deficiency is $360^{\circ}-180^{\circ}$, that is $180^{\circ}$.
It is what you get if you flatten the polyhedron at a vertex and measure the missing angle. Can you see that the angle sum at the vertex of any solid is bound to be less than $360^{\circ}$? [Why?] Definition: The angle deficiency at a vertex is $360^{\circ}$ minus (the angle sum at the vertex). We can fill in the next column in the table with the angle deficiency for each Platonic solid provided we know the interior angle of each $n$-gon. A little diversion here takes us back to the plane: it links ideas of angle deficiency with curvature, and it proves the formula for the interior angle of a regular $n$-gon, which is $(180-360/n)^{\circ}$. From this formula we see that the interior angles of 3-, 4-, and 5-gons are 60$^{\circ}$, 90$^{\circ}$ and 108$^{\circ}$ respectively. If you walk all the way around a circle back to your starting point you turn through a total angle of $360^{\circ}$ or (using another measure) $2\pi$ radians. Imagine walking around a regular $n$-gon starting from one of the vertices. Now the curvature, instead of being evenly spread around the edge, is concentrated at the vertices. At each vertex you make the same turn to walk along the next edge. When you get back to your starting point you turn through the same angle to face the direction you were facing at the start and altogether you have turned through a total angle of $360^{\circ}$. Each of these turns is therefore $(360/n)^{\circ}$, the exterior angle at the vertex. The interior angle is therefore $(180-360/n)^{\circ}$. The final column in the table - total angle deficiency - is just the sum of the angle deficiencies at every vertex of a particular solid. Since, in each case, the vertex form is the same at each vertex, we can just multiply the number of vertices (column V) by the angle deficiency we have just calculated. We end up with the following table.
│ Name         │ Vertex Form │ n(Faces) = F │ n(Vertices) = V │ n(Edges) = E │ Angle Deficiency │ Total Angle Deficiency │
│ Tetrahedron  │ 333         │ 4            │ 4               │ 6            │ 180              │ 720                    │
│ Octahedron   │ 3333        │ 8            │ 6               │ 12           │ 120              │ 720                    │
│ Icosahedron  │ 33333       │ 20           │ 12              │ 30           │ 60               │ 720                    │
│ Cube         │ 444         │ 6            │ 8               │ 12           │ 90               │ 720                    │
│ Dodecahedron │ 555         │ 12           │ 20              │ 30           │ 36               │ 720                    │

We have ended up with the result that the total angle deficiency for each of the five Platonic Solids is $720^{\circ}$. But is the result generally true? Is the total angle deficiency for any solid $720^{\circ}$? Well, yes it is and you can take it 'on trust' for now - but you had better check it with some other solids and you might like to seek a proof on the internet or in a book. We are generalising ideas from two dimensions (the plane) to three dimensions. In the plane the sum of the exterior angles of any polygon is always $360^{\circ}$ (or $2\pi$ radians); this is linked to the concept of curvature and the fact that, in the limit as $n$ tends to infinity, we get a circle of length $2\pi$ times its radius. In the plane the constant $360^{\circ}$ or $2\pi$ radians plays the same role as the total angle deficiency plays in 3-D. The total angle deficiency for any solid is $720^{\circ}$ or $4\pi$ radians. This is linked to the concept of curvature and the fact that, in the limit as $n$ tends to infinity, we get a sphere with surface area of $4\pi$ times the square of its radius. Now let us relax the condition of regularity. A semi-regular solid is one which is made up of more than one type of polygon but in which all vertices have the same vertex form. Suppose we choose the vertex form 366 so that each vertex is surrounded by one 3-gon and two 6-gons. Will this choice generate a semi-regular solid and, if so, how many hexagons and how many triangles do we need? We aim to use Euler's theorem and the Total Angle Deficiency theorem to help us fill in another row in our table. We already have the second and the last columns.
│ Name │ Vertex Form │ n(Faces) = F │ n(Vertices) = V │ n(Edges) = E │ Angle Deficiency │ Total Angle Deficiency │
│      │ 366         │              │                 │              │                  │ 720                    │

Now we go through the following steps:

• angle deficiency - since the vertex form is 366, the angle deficiency must be $360^{\circ}-300^{\circ}=60^{\circ}$
• number of vertices - since all vertices have the same form, we can divide 720 by 60 to get the number of vertices, which is 12
• number of edges - since each vertex is surrounded by 3 faces (and therefore by 3 edges too) we can argue that the number of edges counted is $12\times 3=36$, but in doing so we have counted the edges twice [why? well, once at each end], so the number of edges must be half of 36, that is 18
• number of faces - use Euler's Theorem F + V - E = 2 to get F = 8

│ Name │ Vertex Form │ n(Faces) = F │ n(Vertices) = V │ n(Edges) = E │ Angle Deficiency │ Total Angle Deficiency │
│      │ 366         │ 8            │ 12              │ 18           │ 60               │ 720                    │

But how many of the 8 faces are triangles and how many are hexagons? What is the shape called? Count the triangles first. Since each vertex has one 3-gon and there are 12 vertices, we can argue that the number of triangles must be $12\times 1=12$ but in doing so we have counted each triangle 3 times over, once at each of its vertices, since each triangle has 3 vertices. So the number of triangles must be $12/3=4$. Count the hexagons next. We could just say $8-4=4$ but let's double check. Since each vertex has two 6-gons and there are 12 vertices, we can argue that the number of hexagons must be $12\times 2=24$ but in doing so we have counted each hexagon 6 times, since each hexagon has 6 vertices. So the number of hexagons must be $24/6=4$.
│ Name                  │ Vertex Form │ n(Faces) = F       │ n(Vertices) = V │ n(Edges) = E │ Angle Deficiency │ Total Angle Deficiency │
│ Truncated Tetrahedron │ 663         │ 4 3-gons, 4 6-gons │ 12              │ 18           │ 60               │ 720                    │

In this photo Ross is holding one of these solids and if you look at it carefully you might be able to see that it could be obtained by "chopping off" the vertices of a tetrahedron. It is called a truncated tetrahedron. Why not just choose a possible vertex form, and see what happens? Your vertex form must be such that the angle sum is less than $360^{\circ}$. Things may go wrong. For example, if you were to choose the vertex form 335 then the angle deficiency is $360-60-60-108=132$, which does not divide 720, so no semi-regular solid with such a vertex form exists. Some vertex forms produce prisms, such as 344, the triangular prism, which has three square faces and two triangular faces. Other vertex forms produce anti-prisms. In addition to the prisms and anti-prisms you will produce all the Archimedean solids (named after Archimedes) whose details are on the completed table below. You may like to fill in your own table and then check your results with this table at the end of this article. Ben has made a truncated cube with vertex form 388. Can you see that it could be made by cutting off a tetrahedron from each vertex of a cube? Duncan and Suzanne have each made a rhombicuboctahedron with vertex form 3444. By using 'skeleton' pieces for the faces rather than solid plastic polygons you can look inside the polyhedra to study their structure.

The first known mention of the thirteen "Archimedean solids" is in a manuscript from the fifth book of the "Collection" of the Greek mathematician Pappus of Alexandria, who lived at the beginning of the fourth century AD.
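Choosing a vertex form and seeing what happens can also be done mechanically. The sketch below (in Python, with names of my own choosing, not from the article) applies exactly the arguments used here: the angle deficiency, the 720-degree total, the edge-counting argument, and Euler's Theorem. Note that the arithmetic alone does not guarantee that a solid with a given form can actually be built; the check can only rule forms out.

```python
from collections import Counter

def interior_angle(n: int) -> float:
    """Interior angle of a regular n-gon, in degrees: (180 - 360/n)."""
    return 180 - 360 / n

def solid_from_vertex_form(form):
    """Given a vertex form such as (3, 6, 6), return (V, E, F, faces-by-type),
    or None if the angle-deficiency arithmetic rules the form out."""
    deficiency = 360 - sum(interior_angle(n) for n in form)
    if deficiency <= 0:          # 0 gives a plane tessellation, negative is impossible
        return None
    vertices = 720 / deficiency  # total angle deficiency theorem
    if abs(vertices - round(vertices)) > 1e-9:
        return None              # e.g. form (3, 3, 5): 720/132 is not a whole number
    V = round(vertices)
    E = V * len(form) // 2       # each edge is counted once at each of its two ends
    F = 2 + E - V                # Euler's Theorem: F + V - E = 2
    # each n-gon is counted n times over, once at each of its vertices
    faces = {n: V * k // n for n, k in Counter(form).items()}
    return V, E, F, faces
```

For example, `(3, 6, 6)` returns `(12, 18, 8, {3: 4, 6: 4})`, matching the truncated tetrahedron row; `(3, 3, 5)` returns `None` because 132 does not divide 720; and `(3, 4, 4)` gives the triangular prism's two triangles and three squares.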
You will find illustrations of the polyhedral solids and much interesting information about these solids and their geometrical and practical construction on the World Wide Web site maintained by Tom Gettys, and further information at http://mathworld.wolfram.com/ArchimedeanSolid.html

The St Andrew's web site is a good starting point for finding more about Plato and about Archimedes.

The 5 Platonic and 13 Archimedean Solids

│ Name of Solid                │ Vertex Form │ Number of Faces │ Number of Vertices │ Number of Edges │ Angle Deficiency │ Total Angle Deficiency │
│ Tetrahedron                  │ 3 3 3       │ 4               │ 4                  │ 6               │ 180              │ 720                    │
│ Cube                         │ 4 4 4       │ 6               │ 8                  │ 12              │ 90               │ 720                    │
│ Octahedron                   │ 3 3 3 3     │ 8               │ 6                  │ 12              │ 120              │ 720                    │
│ Dodecahedron                 │ 5 5 5       │ 12              │ 20                 │ 30              │ 36               │ 720                    │
│ Icosahedron                  │ 3 3 3 3 3   │ 20              │ 12                 │ 30              │ 60               │ 720                    │
│ Truncated Tetrahedron        │ 3 6 6       │ 8=4+4           │ 12                 │ 18              │ 60               │ 720                    │
│ Truncated Cube               │ 3 8 8       │ 14=8+6          │ 24                 │ 36              │ 30               │ 720                    │
│ Truncated Octahedron         │ 4 6 6       │ 14=6+8          │ 24                 │ 36              │ 30               │ 720                    │
│ Truncated Dodecahedron       │ 3 10 10     │ 32=20+12        │ 60                 │ 90              │ 12               │ 720                    │
│ Truncated Icosahedron        │ 5 6 6       │ 32=12+20        │ 60                 │ 90              │ 12               │ 720                    │
│ Cuboctahedron                │ 3 4 3 4     │ 14=8+6          │ 12                 │ 24              │ 60               │ 720                    │
│ Icosidodecahedron            │ 3 5 3 5     │ 32=20+12        │ 30                 │ 60              │ 24               │ 720                    │
│ Snub Dodecahedron            │ 3 3 3 3 5   │ 92=80+12        │ 60                 │ 150             │ 12               │ 720                    │
│ Rhombicuboctahedron          │ 3 4 4 4     │ 26=8+18         │ 24                 │ 48              │ 30               │ 720                    │
│ Great Rhombicosidodecahedron │ 4 6 10      │ 62=30+20+12     │ 120                │ 180             │ 6                │ 720                    │
│ Rhombicosidodecahedron       │ 3 4 5 4     │ 62=20+30+12     │ 60                 │ 120             │ 12               │ 720                    │
│ Great Rhombicuboctahedron    │ 6 4 8       │ 26=8+12+6       │ 48                 │ 72              │ 15               │ 720                    │
│ Snub Cube                    │ 3 3 3 3 4   │ 38=32+6         │ 24                 │ 60              │ 30               │ 720                    │

Sadly Warwick died in July 2000. He was much loved by his colleagues and students as an inspiring teacher with original ideas and a wonderful way of helping people to understand and to enjoy mathematics. He had a great zest for life.

Teachers' Resources

This article takes you through the classification of the Platonic (regular) and Archimedean (semi-regular) solids, to find all of them and prove that there are no more.
I found it difficult to make a choice of article for the NRICH Tenth Anniversary Celebration. I chose this one because it has been the basis of so many enjoyable sessions that I have had both with young learners and also with teachers. With any age group, from 10 upwards, I find the best approach is to explain the Schlafli code and demonstrate it with an actual Archimedean solid, and then to give individuals different codes with the task of making their solid for that code. According to how much time you have, and how much your group already know, you can structure the session so that they discover the Euler Relation and/or the total Angle Deficiency, you can arrive at a proof that there are only 5 Platonic Solids and you can fill in the tables for all the Archimedean Solids, deducing the number of each shape of face as described in the article. There are many possibilities. In the ATE Maths Superweek where the photos were taken we had a great Holiday Director called Ian Johnston and a very good cook called Mrs Higgins, and the children never knew that these were one and the same person. Just before lunch on the morning we spent on this topic Ian (an engineer) joined us and the children, having decided to test him, excitedly showed him their models: "Ian look at mine, it's a 466", "Ian mine is a 3434, can you explain that?" ... and so on. It did not take long for Ian to work it out and the children were impressed. Perhaps your class can make the models out of card and hang them from the ceiling.
Is Mathematics in the Fabric of the Universe?

Adrian Brockless unravels the threads.

Is mathematics invented or discovered? Do numbers exist? Many mathematicians tend towards versions of mathematical Platonism, broadly construed. This means that in one way or another, occasionally explicitly, but mostly implicitly, they presuppose that mathematics is in some sense inherent in reality, existing independently of the human mind. Eminent figures holding the view include Kurt Gödel, Roger Penrose, Alain Connes, Albert Lautman, and Hugh Woodin, to name but a few. This article aims to show them to be (mostly) wrong about this. I will discuss the idea by concentrating on the simplistic popular version of mathematical Platonism beloved of school teachers – the idea that ‘mathematics is contained within the fabric of the universe’. One popular way in which this idea is supported is through the argument that plants and animals are physical instantiations of the Fibonacci Sequence. Cruder versions of the argument often appeal to the Golden Ratio appearing in nature. The Fibonacci Sequence is a series of numbers built through the rule that in order to find the next term in the sequence you must add the previous two terms. This yields 1, 1, 2, 3, 5, 8, 13, 21, 34… and so on. It is a sequence said to appear widely in nature – in ammonites, flowers, pine cones, and more – and in this sense is supposed to show that mathematics is contained within the fabric of the universe. Using the Golden Ratio to support this same conclusion is perhaps less compelling. The relative proportions of physical objects can be relatively pleasing or displeasing to the eye, and one obvious question to ask is why this might be.
Mathematicians, along with philosophers and scientists, have made varying attempts to answer it. Euclid is generally accepted to have provided an answer by saying that the Golden Ratio is the perfect harmonious ratio. It’s the ratio where, in a line made of length A plus length B, the ratio of these lengths is (A+B)/A=A/B. It comes out to about 1.618. Since the heyday of Classical Greece this ratio has been routinely expressed in architecture and art, and its importance in mathematics has led to it being given its own Greek letter, ϕ (phi). There are even apps which use it to let you know how attractive you are! The relationship between these aspects of mathematics and the empirical world is well known, and often believed to be a justification for the thought that mathematics is inherent in the fabric of the universe. I hope to show this justification to be hollow.

Pattern image © Paul Gregory 2021

Stable & Unstable Meanings

I will concentrate on two dimensions of the logic of mathematics. The first dimension is that convention determines which concepts are stable – for example, numbers – and which are not – for example, political concepts such as ‘right wing’, ‘left wing’ and ‘undemocratic’. Stipulations of the meaning of concepts result from linguistic rule-following and practice, and even the most stable of the meanings of concepts is still a matter of convention. The second dimension is the distinction between logical determination and causal determination – that is, the distinction between a concept being necessary or it being contingent. So ‘a triangle has three sides’ is necessarily true because of the meanings of the terms of the proposition, and highly stable precisely because there is no dispute concerning the meanings of those terms. The same is true of other necessary propositions such as 2+3=5, or that 34 succeeds 21 in the Fibonacci Sequence. In an important sense, however, the necessary natures of these propositions are dependent upon context.
For example, were I teaching a child the meaning of the term ‘triangle’, then the predicate ‘has three sides’ would not be internal to or contained within the subject, that is, necessarily implied by the word ‘triangle’. Rather, in this kind of context, the predicate tells the child something new about the subject. Similarly, the number and other terms need to be understood before 2+3=5 can be understood as a necessary proposition. Once understanding of the meaning of the terms is achieved, however, no recourse to experience is required to verify such pure mathematical propositions. In this definitional sense, even necessary mathematical propositions are conventions of geometry and mathematics. However, there is stability of meaning in relation to the terms used. When I talk about a triangle or use number terms, I do not confer meaning on them each time I use them. Rather, I show their conventional meaning in my use of mathematics and geometry. Put another way: the giving of a definition for any term, or the stipulation of a rule (such as for the Fibonacci Sequence), is itself neither true nor false, but purely contingent: the same signs didn’t have to mean what they conventionally actually do mean. However, once the meaning of the terms has been established, the tautologous, necessary truth of the proposition ‘a triangle has three sides’, or of ‘2+3=5’ becomes evident. That I do not give the meanings to the terms myself shows an interconnectedness between public practice and truth and falsity. The ‘public practice’ aspect is that when I say ‘‘A triangle has three sides’’ I am speaking English, and using terms that have clear and accepted meanings and uses in geometry and mathematics. The proposition is, therefore, rendered true by English language practices in a way that ensures one does not need further recourse to experience in order to establish its truth-value. 
Rather, the proposition is true simply because the terms have been publicly defined to mean what they do. This means that there is a logical connection between our concepts and our practices, including the stipulations we provide to govern the use of these concepts - these are also practices. So one is only doing mathematics if one fulfils certain criteria in practice – for instance, accepting the meanings of the terms involved in ‘2+3=5’. The proposition would be false or nonsense if one or more of its terms meant something other than it does. It is the stability of the established meanings of the terms which ensures this is not the case – and such stability is dependent upon our linguistic practices. By contrast, the arguments which surround what is deemed ‘undemocratic politics’ are wrapped up with a lack of agreement over the meaning of the term. In other words, it is because the meaning of ‘undemocratic’ is not universally agreed upon that many arguments occur. Stipulations of meaning (the creation of definitions) cannot be true or false, but nevertheless need to be generally accepted and stable if the propositions in which they are contained are to be truth-valued. But there are many varieties of political set-up in which there is an electorate which are on occasion termed ‘undemocratic’, some of which are incompatible with one another; for example, ‘first past the post’ versus ‘proportional representation’. For this reason there exists no way in which propositions that contain the term ‘undemocratic’ can be absolutely truth-valued. After all, there exists ‘The Democratic People's Republic of Korea’ – which shows both that the term ‘democratic’ does not have a clearly-accepted universal meaning, and that what counts as democratic or undemocratic practice is often not settled. The relationship between linguistic meaning and human practices is also clear here and obviously interdependent.
But unlike the term ‘triangle’ or the number 2, the term ‘undemocratic’ cannot be used with anything like mathematical accuracy. Instead, within political arguments there are differing definitions and uses of the same term taking place simultaneously, amongst a variety of political set-ups. What is ‘democratic’ or ‘undemocratic’ cannot, therefore, refer to one particular kind. The Language of Mathematics Fibonacci Flower © Anna Benczur 2015 For those who maintain that mathematics is in the fabric of the universe, the stability that exists in mathematical propositions is understood to exist in the nature of things - in features such as ideal shapes or ratios: perfect circles and so on. Empirical justification has been attempted for this kind of mathematical Platonism: we can see these perfect forms expressed in imperfect copies in nature – in circles, plant cells, etc. As mentioned, it is also thought that there’s a necessary relationship between certain shapes in nature and such pure mathematical concepts as the Fibonacci Sequence or the Golden Ratio. Genuine mathematical knowledge is held to be absolute and beyond any empirical justification, which is vulnerable to error. Thus, it is believed, the stringency of justification needed to attain mathematical knowledge needs to be higher than that for knowledge gained through observation. Nevertheless, mathematical perfection is visible within nature, the argument goes. So whilst mathematical knowledge requires a chain of error-free justification, it is subsequently quite possible to observe that mathematical perfection in nature, and to understand it as such. So – this thought continues – it is demonstrable that mathematics is inherent in the fabric of the universe. 
However, an important observation needs to be made here in relation to mathematical propositions (for example, 2+3=5) and empirical propositions which can be expressed mathematically (for example, Newton’s second law of motion, which can be expressed as F=ma). The observation is that an expression such as 2+3=7 is not a mathematical proposition. Granted, it bears a resemblance to such a proposition because of its use of numbers and other mathematical terms, but it is necessarily false. Accordingly, there is never a possibility that it can be true. This is not the case with empirical propositions. It is logically possible for the structure of the universe to have been (or be) quite different from the way it is with different physical laws and so on. In this sense, it is, at least in principle, possible for Newton's second law of motion to be false if the structure of the universe were other than the way it is. So it seems that an empirical proposition can be false but remain an empirical proposition, whereas a mathematical proposition cannot be false and remain a mathematical proposition. But why can 2+3=5 not be otherwise? What makes it immutably, necessarily true? In order to correctly say that we are doing mathematics we need to follow mathematical rules. Much as we are required to follow certain rules in order for it to be correctly said that we are playing chess, so the same is true of mathematics. If we say ‘2+3=7’, then, clearly, the mathematical rules are not being followed (this is a further reason why it is not a mathematical proposition). We can therefore say two things. Firstly, the rules required in order to say that we’re doing mathematics or playing chess express the essence or the fundamental dimensions of that activity. Secondly, following and developing rules (of any kind) is a human activity. These rules do not exist independently of humanity.
Art © Dror Rosenski 2021 Consider both of these claims in relation to rules that change – such as the rules of cricket, which have seen regular changes since the first test match was played back in 1877. We now have ‘50 over cricket’, ‘20-20 cricket’, and the brand new ‘the hundred’. Had these new formats been shown to the cricket-playing nations of the late nineteenth and early twentieth centuries, they would doubtless have said that they were ‘not cricket’. The developments and changes that have characterized the game are human activities, and, whatever one’s opinion of those changes, there needs to be agreement in practice between the players. This requires an understanding of the rules. So the rules, firstly, express the essence of the game of cricket, and, secondly, show it to be an entirely human practice, entirely the result of human stipulations. When we share rules, we share practices. If we did not agree upon the rules of chess, then we would not be able to play chess. Yet, as Ludwig Wittgenstein put it, this is not a question of an agreement of opinion – it is not merely my opinion that these are the rules of chess or cricket. Rather, playing cricket or chess is a shared practice and the shared practice itself provides the criteria for saying whether or not the rules have been followed correctly. One can only be right or wrong about the rules of cricket, chess, or mathematics because they’ve already been defined. So the practices themselves are not opinions, even though opinions can be expressed in relation to them (for example, I might object to 20-20 cricket). What we have when we follow rules is not agreement in opinion, then, but rather, agreement in judgement, or, to use Wittgenstein’s language again, agreement in form of life or practice. 
The uniformity of the rules of mathematics – the fact that they express a form of human practice that is universally shared – provides a stable framework through which we are able to describe reality; the Fibonacci Sequence and Golden Ratio being two examples where we do so. The necessity of mathematical propositions is based on the agreed rules as to what mathematical terms mean in much the same way that rules in chess require public agreement in order for the game of chess to exist and be played at all. In this way the essence of mathematics is determined by agreement in judgement concerning mathematical terms. Put another way: the conceptual structure which we use to describe the world is dependent on the criteria we use for saying we have applied our concepts correctly or incorrectly. The rules that govern the application of mathematical concepts may be used to express the essence of the world; but that does not mean that such rules are contained within the fabric of the universe itself. Rules require certain things of us, but that is no reason to infer that the relationship between rule and practice is causal. If such a relationship were causal it would make our mathematical practices contingent, as it is always possible for a causal relationship not to hold in any given instance (the relationship is not necessary). Were the relationship between mathematical rules and practices contingent, it would be possible for the relationship between a mathematical rule and its application not to hold in any given instance, as this would be an empirical rather than a logical matter. For example, a contingent mathematics would mean that it was possible that 2+3 not equal 5 in a given instance; that sometimes 2+3 might equal 7. However, the fact that one only follows a rule in mathematics if one fulfils certain practices instead shows there to be a logical connection between our mathematical rules, practices, and concepts. 
If our practices did not exist – if human beings did not exist – then mathematical practices would not exist, and, accordingly, neither would the mathematical rules which are the criteria for saying whether or not mathematics is being done. The rules of mathematics are not, therefore, in the fabric of the universe, existing independently of human beings. So, since the rules of an activity are the essence of that activity, mathematics cannot be held to be in the fabric of the universe. But here’s a final thought: human beings do exist: we are creatures in the universe, and we have developed forms of life. These forms of life are interdependent with our practices, which practices, in turn, provide the criteria for determining both the necessity of mathematical propositions and whether we have applied mathematical rules correctly. Certainly, it is a contingency that human beings exist – much as it is a contingency that the rest of the universe exists; but that we do exist with our mathematical practices perhaps shows that mathematics is inherent in the fabric of the universe after all! © Adrian Brockless 2021 Adrian Brockless has taught at Heythrop College, London, and the University of Hertfordshire. He was Head of Philosophy at Sutton Grammar School from 2012 to 2015, and currently teaches philosophy at Woking College. Email: a.brockless@gmail.com.
Analysis and Design of Unconstrained Nonlinear MPC Schemes for Finite and Infinite Dimensional Systems

Grüne, Lars: Analysis and Design of Unconstrained Nonlinear MPC Schemes for Finite and Infinite Dimensional Systems. In: SIAM Journal on Control and Optimization. Vol. 48 (2009), Issue 2, pp. 1206-1228. ISSN 1095-7138. DOI: https://doi.org/10.1137/070707853

Abstract

We present a technique for computing stability and performance bounds for unconstrained nonlinear model predictive control (MPC) schemes. The technique relies on controllability properties of the system under consideration and the computation can be formulated as an optimization problem whose complexity is independent of the state space dimension. Based on the insight obtained from the numerical solution of this problem we derive design guidelines.
What does a -> b mean in Haskell?

In Haskell, a -> b represents a function type that takes an input of type a and returns an output of type b. It can be read as "a function that maps values of type a to values of type b".

For example, the type signature Int -> Bool describes a function that takes an Int as input and returns a Bool as output. Similarly, String -> Int describes a function that takes a String and returns an Int.

The -> symbol is right associative, meaning that a -> b -> c is equivalent to a -> (b -> c). This indicates that a function can also take multiple arguments.
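Because -> is right associative, a "two-argument" Haskell function is really a function that returns a function (currying). For readers more comfortable in another language, here is a rough Python analogy (my own illustration, not part of the original answer):

```python
# Rough Python analogy (not Haskell): a curried "add" whose Haskell type
# would be written Int -> Int -> Int, i.e. Int -> (Int -> Int).
def add(a):
    def add_a(b):
        return a + b
    return add_a

add_two = add(2)     # partially applied: this is now an Int -> Int function
result = add_two(3)  # 5
```

Applying one argument yields another function, just as the parenthesised type a -> (b -> c) suggests.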
MathSciDoc: An Archive for Mathematician

The modified Camassa-Holm (also called FORQ) equation is one of numerous *cousins* of the Camassa-Holm equation possessing non-smooth solitons (*peakons*) as special solutions. The peakon sector of solutions is not uniquely defined: in one peakon sector (dissipative$^a$) the Sobolev $H^1$ norm is not preserved, in the other sector (conservative), introduced in [2], the time evolution of peakons leaves the $H^1$ norm invariant. In this Letter, it is shown that the conservative peakon equations of the modified Camassa-Holm can be given an appropriate Poisson structure relative to which the equations are Hamiltonian and, in fact, Liouville integrable. The latter is proved directly by exploiting the inverse spectral techniques, especially asymptotic analysis of solutions, developed elsewhere [3].
More on that custom widget – more about data analytics and researching stuff

Let’s talk about lay research, one of the cornerstones of science, and reproducibility, another, and doing one’s research and homework before putting something out in the world when developing even trivial things like iOS Shortcuts-driven Widgets.

Problem/Position statement (or, how did I get into this mess?): So previously, I wrote up how I was displaying temperature from a local PurpleAir sensor into a widget on the home screen of my iPhone, recently upgraded to iOS 14 (14.1 now), and what my product search journey was to get that done to my liking. After I proudly showed my result to my partner, she challenged me to also display the AQI in the same widget, “perhaps with a gradient”. Which is cool. I had to figure out how to do more complex visual stuff, and I had to get the AQI. “Easy,” thought I, innocently. I mean it only took a day. Or two.

Complexity: You see, the sensors only output particulate readings measured in micrograms per cubic meter, and the US EPA’s AQI index is a unit-less number, so how do you get from one to the other? My task was to figure this out. It’s all public information, right? Because the US EPA is a Federal department. So to Google I went. This turned out to yield a non-trivial answer. I found: Armed with this information, I tried to replicate what figures I was currently seeing with PurpleAir. I took the values I could from the local sensor’s realtime JSON output, and I compared trial output I generated in Excel with those formulas to PurpleAir’s visual mapping utility. AND I DID NOT GET ANYWHERE NEAR the values PurpleAir was generating. So… back to the drawing board. It occurred to me to look at PurpleAir’s map and see if there was any info in their Conversion help topic on the map itself. It was perfect. They linked to the updated version of the EPA write up (Version 4?) which actually has the correct formula for converting. How do I know?
I took that version and reproduced it in Excel, and I took real-time sensor data and calculated what I should see on PurpleAir's sensor map, and I did that 5 times – every time the data or the map changed – to make sure the conversion was working reasonably. The missing link here was that while there was a correction algorithm for the micrograms per cubic meter, there was still an implied algorithm in the AQI "breakpoint" charts – converting each segment of micrograms per cubic meter to the unit-less Air Quality Index value. Though it was pretty easy to come up with the algorithm, I'm still proud of the careful work behind both. Once I had vetted that and knew with reasonable certainty that I had the right conversion formula, I wrote the algorithm in the Shortcuts interpreted language.

This is the sort of basic minimum, for my work, that I see a lot of people new to programming, new to statistics and number- and calculation-based science, and new to data analytics fail to do. And it's a shame. But seriously, folks. If you want to do something math-y, and you want to put it in your widget or your app or your website, PLEASE:

1. Find out what the official math is, or what you think it is.
2. Model it in a spreadsheet or some basic workbook like a Jupyter Notebook.
3. Find an official source to vet your results against.
4. Run tests for 5 or 10 or 100 or 1000 sample values that you can vet against, to be sure you either have 100% fidelity or a close approximation to 100% fidelity.
5. If everything checks out, and you think you do have a good approximation of the official math, THEN publish your widget or app or whatever.

Don't use lay-researchers as your beta testers for not doing your homework and releasing crap math out into the world. We have enough global strategic reserves of crappy math and crappy science. We don't also need yours to gum up the works.
If you want to drill down into the algorithm in the Shortcut: Shortcuts is a high-level programming language, but it doesn't have deluxe programming features. While other languages might have a case/switch structure to do complex comparison-driven algorithms, here you have to do nested If-then/else statements. This Shortcut is currently at 119 steps, or actions. Here's a sample of the logic from the AQI correction algorithm and the beginning of the conversion from the micrograms per cubic meter value to the AQI within the Shortcuts app.

The primary steps handling data in the app are:

1. Gather the temperature reading from the sensor and correct it by subtracting 8F from the value read, for the heating factor PurpleAir publishes about the plastic housing for the sensor for outdoor use.
2. Figure out the color for the corrected temperature.
3. Gather the appropriate readings from the A and B sensors for the PM2.5 value.
4. Gather the humidity reading from the sensor.
5. Calculate the corrected value for the micrograms per cubic meter using the formula: 0.52*(Average of A and B readings for the PM2.5 value) - 0.085*(Humidity) + 5.71
6. Use the EPA AQI breakpoint chart to algorithmically convert the micrograms per cubic meter to the AQI value:
   1. Less than 12 ug/m^3: CorrectedValue/12*50
   2. Between 12 and 35.4: (CorrectedValue - 12)/23.4*50 + 50. Note: the subtracted 12 is the high value of the next lower breakpoint, 23.4 is the difference between 35.4 and 12 (the range of this breakpoint section), the first 50 is the range of the AQI bracket being figured, and the second 50 is the high AQI value of the next lower bracket.
   3. This goes on for values of ug/m^3 up to and over 500.4.
   4. Take the AQI and round it to the nearest integer.
7. Figure out the EPA index color from the AQI value, ranging from green to maroon.
8. Take all of this together with WidgetPack to present the data and the graphics.
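Steps 5 and 6 above are easier to see outside of Shortcuts. Here is a minimal Python sketch of the same idea, using the EPA's published PM2.5 breakpoint table; the function and table names are mine, not from the Shortcut, and the correction coefficients are the ones quoted in step 5:

```python
# Sketch of steps 5-6 in Python rather than Shortcuts actions.
PM25_BREAKPOINTS = [
    # (conc_low, conc_high, aqi_low, aqi_high)
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 350.4, 301, 400),
    (350.5, 500.4, 401, 500),
]

def corrected_pm25(ab_average, humidity):
    """Step 5: the quoted correction applied to the A/B sensor average."""
    return 0.52 * ab_average - 0.085 * humidity + 5.71

def aqi_from_pm25(conc):
    """Step 6: linear interpolation inside the matching bracket."""
    conc = round(conc, 1)          # the table is defined to 0.1 ug/m^3
    for c_lo, c_hi, a_lo, a_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((a_hi - a_lo) / (c_hi - c_lo) * (conc - c_lo) + a_lo)
    return 500                     # clamp readings beyond the table

print(aqi_from_pm25(corrected_pm25(10.0, 50.0)))   # prints 28
```

The post's per-bracket arithmetic (divide by 12 or 23.4, then add the bracket offset) is this same interpolation written out bracket by bracket, with slightly coarser bracket edges.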
Understanding Fractions: A Comprehensive Guide for Kids

Fractions are an essential part of mathematics and are used to represent parts of a whole or a collection of objects. Understanding fractions is crucial for kids as it helps them grasp the concept of sharing, dividing, and comparing quantities.

What are Fractions? A fraction represents a part of a whole or a collection and consists of a numerator (the top number) and a denominator (the bottom number). The numerator indicates the number of parts being considered, while the denominator represents the total number of equal parts that make up the whole or the collection.

Equal and Unequal Parts Fractions describe equal parts of a whole object or group of objects; if the parts are unequal, they do not form a fraction.

Ways to Represent a Fraction Fractions can be represented in various ways, including the fractional form (e.g., 3/4), decimal form (e.g., 0.75), and percentage form (e.g., 75%). Each representation provides a different way of understanding and visualizing the fraction.

Fractional Representation Fractions are commonly expressed as \frac{a}{b}, in which 'a' is the numerator and 'b' is the denominator. It represents 'a' parts of a whole that has been divided into 'b' equal parts.

Fractions as Decimal Numbers Fractions can also be represented in decimal form. To express a fraction in decimal format, we divide the numerator by the denominator.

Percentage Representation A fraction can also be represented in percentage form. We multiply the fraction by 100 to convert it into a percentage.

Fractions on a Number Line A number line is a visual representation of numbers, and fractions can be plotted on a number line to show their position between whole numbers. This helps kids understand the relative size and value of different fractions.
Understanding Different Types of Fractions

Fractions can be classified into various types based on their properties and relationships. These include unit fractions, proper fractions, improper fractions, mixed fractions, like fractions, unlike fractions, and equivalent fractions.

Unit fraction: A fraction where the numerator is 1. Example: \frac{1}{4} (one-fourth).
Proper fraction: A fraction where the numerator is less than the denominator. Example: \frac{3}{5} (three-fifths).
Improper fraction: A fraction where the numerator is greater than or equal to the denominator. Example: \frac{7}{4} (seven-fourths).
Mixed fraction: A whole number combined with a proper fraction. Example: 2\frac{1}{3} (two and one-third).
Like fraction: Fractions that have the same denominator. Example: \frac{2}{5} and \frac{3}{5}.
Unlike fraction: Fractions that have different denominators. Example: \frac{1}{3} and \frac{2}{5}.
Equivalent fraction: Fractions that represent the same value but may have different numerators and denominators. Example: \frac{1}{2} and \frac{2}{4} are equivalent fractions.

Understanding Fractions for Kids For kids, learning about fractions can be made fun and engaging by using relatable examples. They can understand fractions as a part of a whole, such as a fraction of a pizza or a fraction of a cake. Additionally, they can explore fractions as a part of a collection of objects, such as a fraction of a set of toys.

Examples of Fractions for Kids
Fraction of a Whole: If a pizza is divided into 8 equal slices and 3 slices are eaten, the fraction of the pizza eaten is 3/8.
Fraction of a Collection of Objects: If there are 10 marbles, and 4 of them are blue, the fraction of blue marbles is 4/10.

1. Understanding Fractions as Parts of a Whole Scenario: Explaining fractions using relatable examples for kids. Tip: Use everyday objects such as pizzas, cakes, or candies to demonstrate fractions as parts of a whole. This helps kids visualize and understand the concept better.
2.
Visualizing Fractions on a Number Line Scenario: Helping kids understand the relative size of fractions. Tip: Use a number line to visually represent fractions and show their position between whole numbers. This helps kids compare and order fractions. 3. Exploring Equivalent Fractions Scenario: Introducing the concept of equivalent fractions to kids. Tip: Show kids how different fractions can represent the same part of a whole or a collection. Use visual aids and manipulatives to demonstrate equivalent fractions. 4. Identifying Types of Fractions Scenario: Teaching kids about the different types of fractions. Tip: Engage kids in activities that involve identifying and categorizing fractions based on their properties. Use examples to illustrate each type of fraction. 5. Converting Fractions to Decimals and Percentages Scenario: Introducing kids to different ways of representing fractions. Tip: Show kids how fractions can be converted to decimals and percentages to provide alternative ways of understanding and expressing the same quantity. Story: “The Fraction Adventure of Emma and Noah” Emma and Noah were two curious kids who loved exploring the world around them. As they ventured through various real-life scenarios, they encountered fractions in everyday situations, learning how fractions are used in practical ways. Scenario 1: Baking with Fractions Emma and Noah decided to bake cookies and followed a recipe that required 3/4 cup of flour. They used measuring cups to accurately measure the fraction of flour needed for the recipe, understanding the importance of precise measurements in baking. Scenario 2: Sharing Treats Equally During a playdate, Emma and Noah had a box of chocolates to share with their friends. They divided the chocolates into equal parts, ensuring that each friend received a fair fraction of the treats. This helped them understand the concept of fair sharing and fractions as parts of a whole. 
Scenario 3: Understanding Discounts While shopping with their parents, Emma and Noah noticed a store offering a 25% discount on toys. They realized that the discount represented a fraction of the original price, and they calculated the discounted amount using their knowledge of percentages and fractions. Scenario 4: Measuring Ingredients In the kitchen, Emma and Noah helped their parents measure ingredients for a recipe. They used measuring spoons to add 1/2 teaspoon of salt, understanding how fractions are used in cooking and baking to ensure the right balance of flavors. Scenario 5: Planning a Garden Emma and Noah decided to plant a garden with different types of flowers. They used a plan that divided the garden into sections, each representing a fraction of the total area. This helped them visualize and plan the layout of the garden effectively. Unit fractions represent the quantity of one part of a whole or a collection. They have a numerator of 1, such as 1/2, 1/3, 1/4, and so on, where the numerator is always 1. Proper fractions have a numerator smaller than the denominator, representing a value less than 1. Improper fractions have a numerator greater than or equal to the denominator, representing a value equal to or greater than 1. Kids can understand equivalent fractions by recognizing that different fractions can represent the same part of a whole or a collection. They can use visual aids and manipulatives to compare and identify equivalent fractions. Representing fractions on a number line helps kids visualize the relative size and position of different fractions. It allows them to compare and order fractions, providing a clear understanding of their value in relation to whole numbers. Fractions can be converted to decimals by dividing the numerator by the denominator. To convert a fraction to a percentage, the fraction can be multiplied by 100. 
These conversions provide alternative ways of representing fractions and understanding their value in different contexts.
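The conversions described above are easy to check mechanically. Here is a minimal Python sketch using the standard library's `fractions` module (the variable names are just illustrative):

```python
from fractions import Fraction

f = Fraction(3, 4)                       # fractional form: 3/4

decimal = f.numerator / f.denominator    # decimal form: divide top by bottom
percent = float(f * 100)                 # percentage form: multiply by 100

# Equivalent fractions reduce to the same value.
assert Fraction(1, 2) == Fraction(2, 4)

print(f, decimal, f"{percent:g}%")       # prints: 3/4 0.75 75%
```

The same three lines mirror the article's pizza example: 3 slices eaten out of 8 is `Fraction(3, 8)`, or 0.375, or 37.5%.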
[tlaplus] TLAPS proof of increment and update

I have been trying different proofs with TLAPS. In the spec attached in this conversation, I tried a simple example of increment and update of two variables. That is, increment the first variable at a given time and then update the second variable with the incremented value of the first variable at a different time. This spec has two variables - valX and valY. valX and valY are represented as records with two fields: val, which can take natural numbers, and ts, a timestamp associated with it. We use a global clock for time. valX is incremented by 1 and valY is updated with the new value of valX. This increment and update pattern is an abstraction that can be used during server/worker zero-downtime updates.

I was able to use TLAPS to prove the safety property of the spec. But it required two extra enabling conditions in the Inc action for the proof to work:

/\ valX.ts <= clock \* <- this is required only for proof
/\ valY.ts <= clock \* <- this is required only for proof

I am not clear as to why we would need these two conditions. It should follow from the induction hypothesis. I would appreciate it if someone can provide me some more insights into the workings of the TLAPS proof. I have attached the spec with this conversation.

PS: I am unable to post any attached TLA+ spec to this group in my conversation, so I renamed the file with .txt.
----------------------- MODULE IncrementUpdatePattern -----------------------

This spec demonstrates an example of the increment and update pattern for two variables - increment the first variable at one time and then update the second variable with the incremented value of the first variable. This spec has two variables - valX and valY. valX and valY are represented as a record with two fields: val that can take Natural numbers and ts as the time stamp associated with it. valX is incremented by 1 and valY is updated with the new value of valX. This increment and update pattern is an abstraction that can be used during server/worker zero-downtime updates. We use TLAPS to prove the safety property of the spec.

EXTENDS Naturals, TLAPS

CONSTANTS MaxNum \* Maximum number X can take

ASSUME SpecAssumption ==
    /\ MaxNum \in (Nat \ {0}) \* MaxNum can not be zero

VARIABLES valX, valY, clock, n

vars == <<valX, valY, clock, n>>

Invariants and Temporal Properties

\* An invariant: ensures all the variables maintain the proper types.
TypeInvariant ==
    /\ valX \in [val: Nat, ts: Nat]
    /\ valY \in [val: Nat, ts: Nat]
    /\ clock \in Nat
    /\ n \in Nat

(* If val of valX is greater than the val of valY, then the time of update
   of valX is greater than or equal to that of valY *)
SafetyProperty == (valX.val > valY.val) => valX.ts >= valY.ts

Init ==
    /\ valX = [val |-> 1, ts |-> 0]
    /\ valY = [val |-> 0, ts |-> 0]
    /\ clock = 0
    /\ n = 0

Inc ==
    /\ n < MaxNum
    /\ valX.ts <= clock \* <- this is required only for proof
    /\ valY.ts <= clock \* <- this is required only for proof
    /\ n' = n + 1
    /\ clock' = clock + 1
    /\ valX' = [val |-> valX.val + 1, ts |-> clock']
    /\ UNCHANGED <<valY>>

Update ==
    /\ valY.val < valX.val
    /\ clock' = clock + 1
    /\ valY' = [val |-> valX.val, ts |-> clock']
    /\ UNCHANGED <<valX, n>>

Next == Inc \/ Update

Spec == Init /\ [][Next]_vars

IInv == TypeInvariant

THEOREM TypeCorrect == Spec => []IInv
<1>1. Init => IInv
  BY SpecAssumption DEF Init, IInv, TypeInvariant
<1>2. IInv /\ [Next]_vars => IInv'
  <2> SUFFICES ASSUME IInv, [Next]_vars PROVE IInv'
  <2>.
USE SpecAssumption DEF Init, IInv, TypeInvariant
  <2>1. CASE Inc
    BY <2>1 DEF Inc
  <2>2. CASE Update
    BY <2>2 DEF Update
  <2>3. CASE UNCHANGED vars
    BY SpecAssumption, <2>3 DEF vars
  <2>4. QED
    BY <2>1, <2>2, <2>3 DEF Next
<1>. QED
  BY <1>1, <1>2, PTL DEF Spec

THEOREM Spec => []SafetyProperty
<1>1. Init => SafetyProperty
  <2> SUFFICES ASSUME Init PROVE SafetyProperty
  <2>. USE SpecAssumption DEF Init, IInv, TypeInvariant, SafetyProperty
  <2>1. CASE Inc
    BY <2>1 DEF Inc
  <2>2. CASE Update
    BY <2>2 DEF Update
  <2>3. CASE UNCHANGED vars
    BY SpecAssumption, <2>3 DEF vars
  <2>4. QED
    BY <2>1, <2>2, <2>3 DEF SafetyProperty
<1>2. IInv /\ SafetyProperty /\ [Next]_vars => SafetyProperty'
  <2> SUFFICES ASSUME IInv, SafetyProperty, [Next]_vars PROVE SafetyProperty'
  <2>. USE SpecAssumption DEF Init, IInv, TypeInvariant, SafetyProperty
  <2>1. CASE Inc
    BY <2>1 DEF Inc
  <2>2. CASE Update
    BY <2>2 DEF Update
  <2>3. CASE UNCHANGED vars
    BY <2>3 DEF vars
  <2>4. QED
    BY <2>1, <2>2, <2>3 DEF Next
<1>. QED
  BY <1>1, <1>2, TypeCorrect, PTL DEF Spec

=============================================================================
\* Modification History
\* Last modified Mon Apr 12 12:32:29 CDT 2021 by spadhy
\* Created Tue Mar 30 13:58:07 CDT 2021 by spadhy
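Not a TLAPS answer, but as a sanity check one can brute-force the reachable states of this spec for small bounds. The Python sketch below is my own model of Init/Inc/Update, not part of the attached module; the clock bound is added only so the search terminates. It re-checks SafetyProperty, and also checks that valX.ts <= clock and valY.ts <= clock hold in every reachable state even with those conjuncts dropped from Inc, which fits the intuition that they are invariants of the reachable states that TypeInvariant alone does not imply:

```python
# Toy exhaustive check of the spec outside TLAPS (my own sketch).
# State is ((valX.val, valX.ts), (valY.val, valY.ts), clock, n).
MAX_NUM, MAX_CLOCK = 3, 6          # small bounds so the search terminates

def next_states(s):
    (xv, xt), (yv, yt), clock, n = s
    succ = []
    if n < MAX_NUM and clock < MAX_CLOCK:      # Inc, without the proof-only conjuncts
        succ.append(((xv + 1, clock + 1), (yv, yt), clock + 1, n + 1))
    if yv < xv and clock < MAX_CLOCK:          # Update
        succ.append(((xv, xt), (xv, clock + 1), clock + 1, n))
    return succ

init = ((1, 0), (0, 0), 0, 0)
seen, todo = {init}, [init]
while todo:
    s = todo.pop()
    (xv, xt), (yv, yt), clock, n = s
    assert xv <= yv or xt >= yt                # SafetyProperty
    assert xt <= clock and yt <= clock         # the two "proof-only" conditions
    for t in next_states(s):
        if t not in seen:
            seen.add(t)
            todo.append(t)
print(f"{len(seen)} reachable states checked")
```

Because both actions set the updated timestamp to clock' and the clock never decreases, the timestamps can never run ahead of the clock in any reachable state; the search above confirms that for the bounded model.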
Statisticians vs Mathematicians salary - Salaries Info Based on data from the US Bureau of Labor Statistics, on average, Statisticians make $99,450 annually, while Mathematicians make $112,430 per year. As a result, Mathematicians earn a wage that is higher than Statisticians. However, it's worth mentioning that wages can vary depending on factors like location, experience, and the specific setting in which the employee works. As an example, Statisticians who work in New York (with salary averaging $127,380) may earn more than those who work in Oklahoma ($57,690). And Mathematicians in Washington DC earn 84% more on average compared to those in Nevada. Statisticians vs Mathematicians overview Statisticians and Mathematicians are crucial to the Professional, Scientific, and Technical Services industry. People are often interested in learning about the distinctions between these job titles, including the average earnings for each of them. Statistician job description Statisticians develop or apply mathematical or statistical theory and methods to collect, organize, interpret, and summarize numerical data to provide usable information. May specialize in fields such as biostatistics, agricultural statistics, business statistics, or economic statistics. Includes mathematical and survey statisticians. Statistician education and experience The majority of Statisticians (65%) hold a Master's Degree. However this occupation also includes some employees who have a Doctoral Degree (20%) and a Bachelor's Degree (15%). Regarding experience, about a quarter of Statistician jobs require no previous experience. A smaller part of jobs (25%) require a previous experience of 1 to 2 years. Statistician average salary According to data from the US Bureau of Labor Statistics, the number of Statisticians employed by the United States in 2021 was 31,370, and their average annual salary was $99,450. 
The bottom 10 percent had a salary of $49,350 or less, and the top 10 percent had a salary of $157,300 or more. The average salary has grown by 2.3% compared to the previous year.

Do Statisticians make good money?
Statisticians are usually paid well, as their average salary is 71% higher than the average salary in the United States ($58,260). Moreover, they earn 9% more than the average pay of the Professional, Scientific, and Technical Services industry ($91,150).

Statisticians job growth
There has been a decrease in the employment of Statisticians in the last two years. In 2021, there have been 7,490 fewer roles than the previous year nationwide, which marks a decrease of 19.3%. Job growth has averaged -7.3% for the last 3 years.

Mathematician job description
Mathematicians conduct research in fundamental mathematics or in the application of mathematical techniques to science, management, and other fields. They solve problems in various fields using mathematical methods.

Mathematician education and experience
Most Mathematicians (50%) have completed a Doctoral Degree. But additionally, among employees with this job title, there are also some with a Master's Degree (25%) and a Bachelor's Degree (8%). With regard to experience, about a third of Mathematician occupations do not require any previous experience. A smaller number of roles (29%) require a previous experience of 1 to 2 years.

Mathematician average salary
As reported by the US Bureau of Labor Statistics, in 2021, the number of Mathematicians employed in the United States was 1,770, and they earned an average of $112,430 per year. The bottom 10 percent had earnings of $61,760 or less, and the top 10 percent had earnings of $169,500 or more.

Do Mathematicians make good money?
Mathematicians are typically paid well, since their mean salary is 93% above the average wage in the United States ($58,260).
Moreover, they make 66% more than the mean earnings of the Federal, State, and Local Government industry ($67,800).

Mathematicians job growth
Mathematicians have seen a decrease in employment over the past two years. In 2021, there have been 690 fewer positions than the previous year across the nation, and that marks a decrease of 28%. Over the past 3 years, job growth has averaged -10.9%.

Do Statisticians or Mathematicians make more?
Mathematicians make 13% more than Statisticians. The average annual salary for Statisticians is $99,450, while Mathematicians earn $112,430 per year.

How long does it take to become a Statistician vs Mathematician?
Becoming a Statistician typically requires a Master's Degree. It usually takes between 2 and 3 years to complete a Master's Degree. And, given that a Bachelor's Degree is a prerequisite, the full educational process could take around 7 years to complete. On the other hand, becoming a Mathematician typically requires a Doctoral Degree. It usually takes between 4 and 6 years to complete a Doctoral Degree. And, given that a Master's Degree is a prerequisite, the full educational process could take around 13 years to complete.

Is it harder to become a Statistician vs Mathematician?
It is more difficult to become a Mathematician than a Statistician, since it takes around 6 more years of education.
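The comparison figures above follow from simple arithmetic on the quoted BLS averages; a quick illustrative Python check:

```python
# Mean annual wages quoted above (BLS, 2021).
statistician, mathematician = 99_450, 112_430

gap_pct = (mathematician - statistician) / statistician * 100
print(f"Mathematicians earn {gap_pct:.0f}% more")   # prints: Mathematicians earn 13% more

# Typical schooling: about 7 years through a Master's for a Statistician
# vs about 13 years through a Doctorate for a Mathematician.
print(f"{13 - 7} more years of education")          # prints: 6 more years of education
```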
Reduced designs constructed by Key-Moori Method $2$ and their connection with Method $3$

Document Type: Original Article

School of Mathematical and Computer Sciences, University of Limpopo (Turfloop) Sovenga, South Africa

For a 1-$(v,k,\lambda)$ design $\mathcal{D}$ containing a point $x$, we study the set $I_x$, the intersection of all blocks of $\mathcal{D}$ containing $x$. We use the set $I_x$ together with the Key-Moori Method 2 to construct reduced designs invariant under some families of finite simple groups. We also show that there is a connection between reduced designs constructed by Method 2 and the new Moori Method 3.
Director - Kerala School of Mathematics (KSoM)

Prof. Ratnakumar P K

My area of specialisation is broadly Harmonic Analysis. I am more interested in the applications of Harmonic Analysis methods in PDE, hence my work is mostly in Euclidean Harmonic Analysis. I also have some work on the Heisenberg group, which is in the non-commutative setting, and which actually inspired my works related to the twisted Laplacian on C^n.

1. Dr. Vijay Kumar Sohani finished his Ph.D. under my supervision in 2014. Thesis title: Non-Linear Schrödinger Equation and the Twisted Laplacian. Dr. Sohani received the best thesis award of HBNI, and is currently employed as an Assistant Professor in the Dept. of Mathematics at IIT Indore.
2. Dr. Divyang G. Bhimani finished his Ph.D. under my supervision in 2016. Thesis title: Modulation Spaces and Nonlinear Evolution Equations. Dr. Bhimani received the outstanding doctoral student award in the mathematical sciences discipline from HBNI and is currently working as an Assistant Professor in the Dept. of Mathematics at IISER, Pune.
3. Dr. Ramesh Manna finished his Ph.D. under my supervision in 2017. Thesis title: Fourier Integral Operators, Wave Equation and Maximal Operators. Dr. Manna received the C V Raman Post Doctoral Fellowship from IISc and is currently working as an Assistant Professor in the Dept. of Mathematics at NISER, Bhubaneswar.
4. Mr. Arup Maity finished his Ph.D. under my supervision in 2023. Thesis title: On Fourier and Weyl Multipliers.
5. Currently Mr. Uday Patel is working with me for his Ph.D. degree on the project "Oscillatory Integrals and Fourier Multipliers".

1. Saurabh Shrivastava (2012). Currently a Professor at IISER Bhopal.
2. Anupam Gumber (2018). Currently a PDF in Math. at Univ. of Genova.

1. A localisation theorem for Laguerre expansions, Proc. Indian Acad. Sci., Vol 105, Aug 1995, 303-314.
2. Analogues of Besicovitch - Wiener theorem for the Heisenberg group, jointly with S. Thangavelu, J. Fourier Anal.
and Appl. Vol 2, No.4, 1996, 407-414.
3. A restriction theorem for the Heisenberg motion group, jointly with Rama Rawat and S. Thangavelu, Studia Math., Vol 126, No.1, 1997, 1-12.
4. Spherical means, wave equations and Hermite - Laguerre expansions, jointly with S. Thangavelu, J. Funct. Anal., Vol 154, No.2, 1998, 253-290.
5. Gelfand pairs, K-spherical means and injectivity on the Heisenberg group, jointly with G. Sajith, J. d'Analyse Math., Vol 78, 1999, 245-262.
6. Spherical maximal operator on constant curvature spaces, jointly with Amos Nevo, Trans. Amer. Math. Soc., Vol 355, No.3, 2002, 1167-1182.
7. Spherical maximal operator on symmetric spaces, an end point estimate, Indag. Math., Vol 14, No.1, 2003, 63-79.
8. Schrödinger equation and the oscillatory semi-group for the Hermite operator, jointly with A.K. Nandakumaran, J. Funct. Anal., Vol 224, 2005, 371-385. Corrigendum: Schrödinger equation and the oscillatory semi-group for the Hermite operator, jointly with A.K. Nandakumaran, J. Funct. Anal., Vol 224, (2006) 719-720.
9. On Schrödinger propagator for the special Hermite operator, J. Fourier Anal. and Appl., Vol 14, No.2, 2008, 286-300.
10. Benedick's theorem for the Heisenberg group, jointly with E.K. Narayanan, Proc. Amer. Math. Soc., Vol 138, No.6, (2010), 2135-2140.
11. Schrödinger equation, a survey on regularity questions, J. Analysis, Vol 17, (2009), 47-59. Proc. of the Symposium on Fourier Analysis and its Applications, Ramanujan Institute, Chennai, 2009 (Published by Forum D'Analysis, Chennai).
12. On Bilinear Littlewood-Paley Square functions, jointly with Saurabh Shrivastava, Proc. Amer. Math. Soc., Vol 140, No.2 (2012), 4285-4293.
13. Analyticity of the Schrödinger propagator on the Heisenberg group, jointly with S. Parui and S. Thangavelu, Monatsh. Math. Vol 168, No.2 (2012), 279-303.
14. Non linear Schrödinger equation for the twisted Laplacian, jointly with Vijay Kumar Sohani, J. Funct. Anal., Vol 265, (2013), 1-27.
15.
A remark on bilinear Littlewood-Paley square functions, jointly with Saurabh Shrivastava, Monatsh. Math. Vol 176, No.4 (2015), 615-622.
16. Non linear Schrödinger equation and the twisted Laplacian - Global well-posedness, jointly with Vijay Kumar Sohani, Math. Z. Vol 280, No.1-2, (2015) 583-605.
17. Functions operating on M^{p,1} and nonlinear dispersive equations, jointly with Divyang Bhimani, J. Funct. Anal., Vol 270, (2016), 621-648.
18. A Hardy-Sobolev inequality for the twisted Laplacian, jointly with Adimurthi and Vijay Kumar Sohani, Proceedings of the Royal Society of Edinburgh Section A: Mathematics, Vol. 147, No.1 (2017).
19. Maximal functions along hyper-surfaces, jointly with Ramesh Manna, J. Ramanujan Math. Soc. 33, No.3 (2018) 283-296.
20. Local smoothing of Fourier integral operators and Hermite functions, jointly with Ramesh Manna, Advances in Harmonic Analysis and Partial Differential Equations, Trends in Mathematics, Birkhäuser, Cham (2020).
21. Translation and Modulation invariant Hilbert spaces, jointly with Joachim Toft, Anupam Gumber and Ramesh Manna, Monatshefte für Mathematik, Vol.196, (2021) 389-398.
22. Global Fourier integral operators in the plane and the square function, jointly with Ramesh Manna, J. Fourier Anal. Appl., Vol. 28, 25 (2022).
23. On L^p → L^q boundedness of the twisted convolution operators, jointly with Arup Kumar Maity, J. Anal. 31, 945-950 (2023).
24. On Young's inequality for the twisted convolution, J. Fourier Anal. Appl. 29, 69 (2023).
25. Fourier multipliers via twisted convolution, jointly with Arup Kumar Maity (submitted).
26. Weyl multipliers for (L^p, L^q), jointly with Arup Kumar Maity (submitted).

Courses Taught

M.Sc./Ph.D. Coursework
1. Real Analysis
2. Measure and Integration
3. Real Variable Methods in Harmonic Analysis
4. Ordinary Differential Equations
5. Partial Differential Equations
6. Mathematical Methods
7. Classical Mechanics
8. Algebraic Topology
9. Differential Manifolds

Mini Courses in Workshops
1.
Fourier Series and Applications (QIP Short Term Course on Algebra, Analysis and Applications, IIT BHU, July 2017).
2. Lectures in Fourier Analysis (HRI Summer Programme in Mathematics, 2017).
3. Heisenberg Group from the Geometrical View Point (Workshop on Geometry and Analysis on CR Manifolds, held at HRI in October 2016).
4. Lectures on Weyl Quantisation (Discussion Meeting on Geometry and Analysis held at HRI in March 2012).
5. Basic Differential Geometry (HRI Summer Programme in Mathematics, 2007 and 2012).
6. Elementary Fourier Analysis (HRI Summer Programme in Mathematics 2008, and also at Kerala School of Mathematics in 2010).
7. Lectures in Measure Theory (HRI Summer Programme in Mathematics, 2009).
8. Fourier Multipliers - Basics (HRI Workshop on Harmonic Analysis, 2024).

KSCSTE-Kerala School of Mathematics
P.O. Kunnamangalam
Kozhikode, Kerala, India
PIN: 673571
Phone Number:
PA to Director: +91 495 2809007
Director: +91 495 2809001
Email: director@ksom.res.in
Everything in the Universe Is Made of Math – Including You It’s a Friday morning in Princeton when I find this gem in my inbox from a senior professor I know: Subject: Not an easy e-mail to write ... Dear Max, Your crackpot papers are not helping you. First, by submitting them to good journals and being unlucky so that they get published, you remove the "funny" side of them. ... I am the Editor of the leading journal...and your paper would have never passed. This might not be that important except that colleagues perceive this side of your personality as a bad omen on future development. ... You must realize that, if you do not fully separate these activities from your serious research, perhaps eliminating them altogether, and relegate them to the pub or similar places you may find your future in jeopardy. I’ve had cold water poured on me before, but this was one of those great moments when I realized I’d set a new personal record, the new high score to try to top. When I forwarded this email to my dad, who’s greatly inspired my scientific pursuits, he referenced Dante: Segui il tuo corso et lascia dir le genti! “Follow your own path, and let people talk!” I’d fallen in love with physics precisely because I was fascinated with the biggest questions, yet it seemed clear that if I just followed my heart, then my next job would be at McDonald’s. I developed a secret strategy that I called my Dr. Jekyll/Mr. Hyde Strategy, and it exploited a sociological loophole: What you do after work is your own business and won’t be held against you as long as it doesn’t distract from your day job. So whenever authority figures asked what I worked on, I transformed into the respectable Dr. Jekyll and told them I worked on mainstream topics in cosmology. But secretly, when nobody was watching, I’d transform into the evil Mr. Hyde and do what I really wanted to do. 
This devious strategy worked beyond my wildest expectations, and I’m extremely grateful that I get to work without having to stop thinking about my greatest interests. But now, as a physics professor at MIT, I feel that I have a debt to pay to the science community. I have a moral obligation to more junior scientists to bring Mr. Hyde out of the academic closet and do my part to push the boundary a little. So what paper of mine triggered that “stop or you’ll ruin your career” email above? It was about the core idea that I’m about to discuss: that our physical world is a giant mathematical object. Math, Math Everywhere! What’s the answer to the ultimate question of life, the universe and everything? In Douglas Adams’ science fiction spoof The Hitchhiker’s Guide to the Galaxy, the answer was 42; the hardest part turned out to be finding the real question. I find it very appropriate that Adams joked about 42 because mathematics has played a striking role in our growing understanding of the universe. The idea that everything is, in some sense, mathematical goes back at least to the Pythagoreans of ancient Greece and has spawned centuries of discussion among physicists and philosophers. In the 17th century, Galileo famously stated that our universe is a “grand book” written in the language of mathematics. More recently, the Nobel laureate Eugene Wigner argued in the 1960s that “the unreasonable effectiveness of mathematics in the natural sciences” demanded an explanation. Soon, we’ll explore a really extreme explanation. However, first we need to clear up exactly what we’re trying to explain. Please stop reading for a few moments and look around you. Where’s all this math that we’re going on about? Isn’t math all about numbers? You can probably spot a few numbers here and there — for example the page numbers of this magazine — but these are just symbols invented and printed by people, so they can hardly be said to reflect our universe being mathematical in any deep way. 
When you look around you, do you see any geometric patterns or shapes? Here again, human-made designs like the rectangular shape of this magazine don’t count. But try throwing a pebble, and watch the beautiful shape that nature makes for its trajectory! The trajectories of anything you throw have the same shape, called an upside-down parabola. When we observe how things move around in orbits in space, we discover another recurring shape: the ellipse. Moreover, these two shapes are related: The tip of a very elongated ellipse is shaped almost exactly like a parabola. So, in fact, all of these trajectories are simply parts of ellipses. We humans have gradually discovered many additional recurring shapes and patterns in nature, involving not only motion and gravity, but also electricity, magnetism, light, heat, chemistry, radioactivity and subatomic particles. These patterns are summarized by what we call our laws of physics. Just like the shape of an ellipse, all these laws can be described using mathematical equations.

Equations aren’t the only hints of mathematics that are built into nature: There are also numbers. As opposed to human creations like the page numbers in this magazine, I’m now talking about numbers that are basic properties of our physical reality. For example, how many pencils can you arrange so that they’re all perpendicular (at 90 degrees) to each other? The answer is 3, by placing them along the three edges emanating from a corner of your room. Where did that number 3 come sailing in from? We call this number the dimensionality of our space, but why are there three dimensions rather than four or two or 42? There’s something very mathematical about our universe, and the more carefully we look, the more math we seem to find. So what do we make of all these hints of mathematics in our physical world? Most of my physics colleagues take it to mean that nature is for some reason described by mathematics, at least approximately, and leave it at that.
But I’m convinced that there’s more to it, and let’s see if it makes more sense to you than to that professor who said it would ruin my career.
{"url":"https://www.discovermagazine.com/the-sciences/everything-in-the-universe-is-made-of-math-including-you","timestamp":"2024-11-05T07:30:36Z","content_type":"text/html","content_length":"103423","record_id":"<urn:uuid:a37acf7d-9b4a-4449-bbb2-b67be36e67f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00721.warc.gz"}
mhurdle 1.3-1

• For robust estimations (the default), the model is now internally updated with robust = FALSE and iterlim = 0 so that the gradient and the hessian of the structural model are computed.
• Interface to sandwich (estfun and bread methods) and to nonnest2 (llcont method).

mhurdle 1.3-0

• predict(object, newdata = data), where data is the data frame used to fit the model, now returns the same as fitted(object). Bug fixed thanks to Achim Zeileis and Rebekka Topp.
• Interface with foreign packages (prediction, margins, and modelsummary).
• Unit tests added.

mhurdle 1.1-6

• The EV formula is fixed for the log-normal model.
• Improved version of the texreg method.

mhurdle 1.0-1

• Major revision of the code and the vignette; bc and ihs transformations and heteroscedasticity are introduced.

mhurdle 0.1-3

• Minor changes of the vignette.

mhurdle 0.1-2

• Major update of the whole code.
• Much improved vignette.

mhurdle 0.1-0

• Some encoding problems in the vignette are fixed.
{"url":"https://cran.rstudio.org/web/packages/mhurdle/news/news.html","timestamp":"2024-11-02T05:05:08Z","content_type":"application/xhtml+xml","content_length":"2825","record_id":"<urn:uuid:00bf881f-9ad7-4e9c-93bf-9b2282a7bead>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00036.warc.gz"}
Electrical Simulator for Solving Linear and Non-Linear Partial Differential Equations with an Iterative Method for Refining Approximate Solutions

We have explored the use of DC network analyzers for the solution of steady state partial differential equations. These circuits can be used as computers where the topology of the computer is equivalent to the topology of the problem being solved. Network analyzers were used extensively in the past as modelling devices in Physics, Chemistry and Engineering Design. Recent advances in circuit design and packaging allow us to build network analyzer circuits that were not possible at that time. In particular, we have investigated the stability properties of circuits which serve as analogs for the linear wave equation where the operator is indefinite. We show that this stability behavior is similar to the behavior of iterative solutions for similar equations in numerical analysis. In addition, we have been investigating a method that incorporates analog and digital components in an iterative scheme similar to Newton's method and makes use of the best features of both.

Original language: English (US)
Pages: 309-312
Number of pages: 4
State: Published - Apr 1984
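The analogy between a DC network analyzer and an iterative numerical solver can be made concrete with a small sketch. This is my own illustration, not code from the paper: a resistor network solving Laplace's equation physically settles each interior node voltage at the average of its four neighbours, which is exactly the fixed point that Jacobi relaxation computes.

```python
def jacobi_laplace(grid, iterations):
    """Jacobi relaxation for the 2-D Laplace equation on a grid whose
    boundary values stay fixed -- the numerical analogue of letting a
    DC resistor network settle to its steady state."""
    for _ in range(iterations):
        new = [row[:] for row in grid]
        for i in range(1, len(grid) - 1):
            for j in range(1, len(grid[0]) - 1):
                # Each interior node relaxes to the average of its neighbours
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid

# A 3x3 grid whose boundary is held at 1 V: the single interior node
# relaxes to the neighbour average after one sweep.
result = jacobi_laplace([[1.0, 1.0, 1.0],
                         [1.0, 0.0, 1.0],
                         [1.0, 1.0, 1.0]], iterations=1)
print(result[1][1])  # 1.0
```

The paper's hybrid scheme goes further by refining such approximate solutions iteratively, but the averaging step above is the core of the analog/numerical correspondence.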
{"url":"https://collaborate.princeton.edu/en/publications/electrical-simulator-for-solving-linear-and-non-linear-partial-di","timestamp":"2024-11-10T19:51:54Z","content_type":"text/html","content_length":"47826","record_id":"<urn:uuid:8105d6ff-7dfe-4ed8-94bd-61507f2b3165>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00575.warc.gz"}
The next function divides a 16-bit number in MODREG1:MODREG0 by any N<=16 bit number in TEMP1:TEMP0. The quotient will be stored in DIVREG1:DIVREG0 and the modulo in MODREG1:MODREG0. (In this notation, MODREG1 is the HI byte, etc.) It is optimized for dividing numbers by a constant.

- The divisor in TEMP1:TEMP0 should be left-aligned (eg, padded with zeroes on the LSB side). This is easy if you divide by 8 bits.
- The high bit of the divisor (TEMP1.7) should be 1. If the high bit of the divisor is not 1 then the result may overflow. This will cause the (17-N)-bit quotient in DIVREG1:DIVREG0 to contain all ones and can be further detected by comparing MODREG1:MODREG0 with the used divisor (which was btw destroyed by this function). Bottom line, it is better to avoid this.
- No detection for a divisor of zero (which would mean a divisor of 0 bits long, which is nonsense).

Worst case timing: 6+(17-N)*23 (where N is the number of bits in the divisor)
N=4:  T=305
N=8:  T=213
N=12: T=121

Here's the universal version:

;;DIVIDE A 16-BIT NUMBER BY AN N-BIT NUMBER
;; MODREG1:MODREG0 = 16-BIT DIVIDEND
;; TEMP1:TEMP0     = N-BIT DIVISOR, LEFT ALIGNED, WITH TEMP1.7 = 1
;; DIVREG1:DIVREG0 = (17-N)-BIT WIDE QUOTIENT
;; MODREG1:MODREG0 = N-BIT REMAINDER
;; COUNT = 0
;; TEMP1:TEMP0 = ORIGINAL TEMP1:TEMP0 << (17-N)
;; THAT MEANS TEMP1:TEMP0 = 0 IF INPUT IS CORRECT
        MOVLW   17-N
        MOVWF   COUNT
        CLRF    DIVREG0
        CLRF    DIVREG1
DIV16_LOOP
        MOVF    TEMP0,W       ;W=MOD-DIVISOR
        SUBWF   MODREG0,W
        MOVF    TEMP1,W
        BTFSS   STATUS,C      ;PROCESS BORROW
        ADDLW   1
        SUBWF   MODREG1,W
        BTFSS   STATUS,C      ;IF W<0
        GOTO    DIV16_NOSUB
        MOVF    TEMP0,W       ;MOD=MOD-DIVISOR
        SUBWF   MODREG0,F
        BTFSS   STATUS,C
        DECF    MODREG1,F
        MOVF    TEMP1,W
        SUBWF   MODREG1,F
        BSF     STATUS,C
DIV16_NOSUB
        RLF     DIVREG0,F     ;DIV << 1 + CARRY
        RLF     DIVREG1,F
        BCF     STATUS,C      ;DIVISOR>>=1
        RRF     TEMP1,F
        RRF     TEMP0,F
        DECFSZ  COUNT,F
        GOTO    DIV16_LOOP

In the next example, it has been configured to divide a 16-bit number by a 6-bit number in WREG. It will put it in TEMP1 (and clear TEMP0) and shift it to the left by 2 bits to align it left.
The result is 11 bits wide in DIVREG1:DIVREG0 and the remainder is of course 6 bits wide (same width as the divisor) and still occupies MODREG1:MODREG0.

;;DIVIDE A 16-BIT NUMBER BY A 6-BIT WREG. RESULT IS 11-BIT WIDE
        ;SHIFT LEFT BY 10 BITS
        MOVWF   TEMP1
        CLRF    TEMP0
        BCF     STATUS,C
        RLF     TEMP1,F
        RLF     TEMP1,F
        MOVLW   11
        MOVWF   COUNT
        CLRF    DIVREG0
        CLRF    DIVREG1
DIV16_6_LOOP
        MOVF    TEMP0,W       ;W=MOD-DIVISOR
        SUBWF   MODREG0,W
        MOVF    TEMP1,W
        BTFSS   STATUS,C      ;PROCESS BORROW
        ADDLW   1
        SUBWF   MODREG1,W
        BTFSS   STATUS,C      ;IF W<0
        GOTO    DIV16_6_NOSUB
        MOVF    TEMP0,W       ;MOD=MOD-DIVISOR
        SUBWF   MODREG0,F
        BTFSS   STATUS,C
        DECF    MODREG1,F
        MOVF    TEMP1,W
        SUBWF   MODREG1,F
        BSF     STATUS,C
DIV16_6_NOSUB
        RLF     DIVREG0,F     ;DIV << 1 + CARRY
        RLF     DIVREG1,F
        BCF     STATUS,C      ;DIVISOR>>=1
        RRF     TEMP1,F
        RRF     TEMP0,F
        DECFSZ  COUNT,F
        GOTO    DIV16_6_LOOP
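For readers who prefer to see the algorithm outside PIC assembly, here is a minimal Python sketch of the same restoring shift-and-subtract division (the function name and style are mine, not part of the original routine):

```python
def divide_16_by_n(dividend, divisor, n):
    """Divide a 16-bit dividend by an n-bit divisor whose top bit is set,
    mirroring the routine above: the divisor starts left-aligned in a
    16-bit register and is shifted right once per iteration."""
    d = divisor << (16 - n)          # left-align, so bit 15 of d is 1
    quotient = 0
    remainder = dividend
    for _ in range(17 - n):          # same 17-N iteration count
        if remainder >= d:           # the SUBWF/BTFSS comparison
            remainder -= d
            quotient = (quotient << 1) | 1   # shift in a 1 (BSF STATUS,C)
        else:
            quotient <<= 1                   # shift in a 0
        d >>= 1                      # the RRF TEMP1/TEMP0 pair
    return quotient, remainder

print(divide_16_by_n(1000, 7, 3))    # (142, 6)
```

As in the assembly version, the quotient fits in 17-N bits precisely because the 16-bit dividend divided by an N-bit divisor with its top bit set can never exceed that width.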
{"url":"http://www.piclist.com/techref/microchip/math/div/16by8lz.htm","timestamp":"2024-11-06T01:35:36Z","content_type":"text/html","content_length":"26152","record_id":"<urn:uuid:8a86ea59-0530-4cb1-bcf9-71440a039685>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00041.warc.gz"}
A certain gluon scattering amplitude

I am stuck with this process of calculating the tree-level scattering amplitude of two positive helicity (+) gluons of momentum say $p_1$ and $p_2$ scattering into two gluons of negative (-) helicity with momentum $p_3$ and $p_4$. This is apparently $0$ for the diagram where one sees this process as two 3-gluon amplitudes with a propagating gluon (of say momentum $p$) and $p_1$ and $p_2$ are attached one each to the two 3-gluon amplitudes. I want to be able to prove this vanishing. So let $p_2^+$ be with $p$ and $p_3^-$ and the rest on the other 3-gluon vertex. I am working in the colour-stripped formalism. Let the Lorentz indices be $\rho$, $\sigma$ for the propagating gluon. And for the external gluons $p_1^+$, $p_2^+$, $p_3^-$, $p_4^-$ let $\nu, \lambda, \beta, \mu$ respectively be their Lorentz indices. Let the auxiliary vectors chosen to specify the polarizations of these external gluons be $p_4, p_4, p_1, p_1$ respectively. Let the "wave-functions" of these four gluons be denoted as $\epsilon^{+/-}(p,n)$, where $p$ stands for its momentum and $n$ its auxiliary vector; in the spinor-helicity formalism one would write,

1. $\epsilon^{+}_\mu(p,n) = \frac{<n|\gamma_\mu|p]}{\sqrt{2}<n|p>}$

2. $\epsilon^{-}_\mu(p,n) = \frac{[n|\gamma_\mu|p>}{\sqrt{2}[p|n]}$

Hence I would think that this amplitude is given by,

$\epsilon^{-}_{\mu}(p_4,p_1)\epsilon_{\nu}^{+}(p_1,p_4)\epsilon_\lambda^+(p_2,p_4)\epsilon_\beta^-(p_3,p_1)\left( \frac{ig}{\sqrt{2}} \right)^2 \times \{ \eta^{\mu \nu}(p_4-p_1)^\rho + \eta^{\nu \rho}(p_1-p)^\mu + \eta^{\rho \mu}(p - p_4)^\nu\} \left( \frac{-i\eta_{\rho \sigma}}{p^2}\right)\{ \eta^{\lambda \beta}(p_2-p_3)^\sigma + \eta^{\beta \sigma}(p_3-p)^\lambda + \eta^{\sigma \lambda}(p - p_2)^\beta \}$

One observes the following,

1. $\epsilon^{-}_\mu(k_1,n). \epsilon^{- \mu}(k_2,n) = \epsilon^{+}_\mu(k_1,n).\epsilon^{+\mu} (k_2,n) = 0$

2. $\epsilon^{+}_\mu(k_1,n_1).\epsilon^{-\mu}(k_2,n_2) \propto (1-\delta_{k_2,n_1})(1-\delta_{k_1,n_2})$

Using the above one sees that in the given amplitude the only non-vanishing term that remains is (up to some prefactors),

$\epsilon^{-}_{\mu} (p_4,p_1) \epsilon_{\nu}^{+}(p_1,p_4) \epsilon_{\lambda}^{+}(p_2,p_4)\epsilon_{\beta}^{-}(p_3,p_1) \left\{ \eta^{\nu}_{\sigma}(p_1-p)^\mu + \eta_\sigma^\mu(p - p_4)^\nu\right\} \times \{ \eta^{\lambda \beta}(p_2-p_3)^\sigma\}$

(..the one that is the product of the last two terms of the first vertex factor (contracted with the index of the propagator) and the first term from the second vertex factor..)

• Why is this above term zero? (..the only way the whole diagram can vanish..)

This post has been migrated from (A51.SE)

I can't see why my last equation has gotten garbled! It would be great if someone can rectify that.

It's not physical to talk about a diagram vanishing. Diagrams aren't gauge invariant, amplitudes are. It only makes sense when you specify a gauge, i.e. a choice of polarization vectors. (I haven't read any of your details, but maybe thinking about the general fact will be useful to you.)

@Matt Reece Thanks for your comment. I guess the point you are raising is now clarified with me specifying the polarization vectors ($\epsilon^{+/-}(p,n)$) that I had in mind. I have added their explicit expressions. There are $3$ colour ordered diagrams contributing to this scattering process; the one whose expression I have written above is one of the two of those 3 which vanish. It would be very helpful if you can explain as to why the final expression of mine is zero (or if there is something wrong about the expression itself!)

Choosing a gauge means choosing a reference spinor $\left|n\right>$ (or $\left|n\right]$, depending on helicity).
Often you want to choose different particles to have the same reference spinor, or maybe the reference spinor to come from another momentum in the process, to make as many terms as possible zero.

@Matt Reece That's exactly what I have clarified in the third paragraph. My choice for the reference spinors corresponds to momenta $p_4, p_4, p_1, p_1$ for particles with momentum-and-helicity $p_1^+$, $p_2^+$, $p_3^-$, $p_4^-$ respectively. I guess that clarifies the complete meaning of the amplitude that I have written down.

I was trying to nudge you in the right direction, but here's the explicit calculation. Focus on the vertex where gluons 1 and 4 meet. There you have a factor $\epsilon(p_1)_\nu \epsilon(p_4)_\mu \left(\eta^{\mu \nu} (p_4 - p_1)^\rho + \eta^{\nu\rho} (p_1 - p)^\mu + \eta^{\rho\mu} (p - p_4)^\nu\right)$. But, by construction, $\epsilon(p_1) \cdot \epsilon(p_4) = 0$, so the $\eta^{\mu \nu}$ term vanishes. In the second term, we use $p = -p_1 - p_4$ to note that we have a factor $(2 p_1 - p_4) \cdot \epsilon(p_4)$. But $\epsilon(p_4) \cdot p_4 = 0$ because gauge bosons are transverse, whereas $p_1 \cdot \epsilon(p_4) = 0$ by your choice of the reference spinor for gluon 4. So the second term is zero. The last term is zero in analogous fashion. So this vertex is zero and you don't need to think about the rest of the diagram.
{"url":"https://www.physicsoverflow.org/944/a-certain-gluon-scattering-amplitude","timestamp":"2024-11-03T07:58:48Z","content_type":"text/html","content_length":"136048","record_id":"<urn:uuid:ba2bacdd-7f4f-4b51-8788-bb91686bc564>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00404.warc.gz"}
Running cross_validate from cvms in parallel - Ludvig Renbo Olsen

I have spent the last couple of days adding functionality for performing repeated cross-validation to cvms and groupdata2. In this quick post I will show an example. In cross-validation, we split our training set into a number (often denoted “k”) of groups called folds. We repeatedly train our machine learning model on k-1 folds and test it on the last fold, such that each fold becomes test set once. Then we average the results and celebrate with food and music. The benefits of using groupdata2 to create the folds are 1) that it allows us to balance the ratios of our output classes (or simply a categorical column, if we are working with linear regression instead of classification), and 2) that it allows us to keep all observations with a specific ID (e.g. participant/user ID) in the same fold to avoid leakage between the folds. The benefit of cvms is that it trains all the models and outputs a tibble (data frame) with results, predictions, model coefficients, and other sweet stuff, which is easy to add to a report or do further analyses on. It even allows us to cross-validate multiple model formulas at once to quickly compare them and select the best model.

Repeated Cross-validation

In repeated cross-validation we simply repeat this process a couple of times, training the model on more combinations of our training set observations. The more combinations, the less one bad split of the data would impact our evaluation of the model. For each repetition, we evaluate our model as we would have in regular cross-validation. Then we average the results from the repetitions and go back to food and music. As stated, the role of groupdata2 is to create the folds. Normally it creates one column in the dataset called ".folds", which contains a fold identifier for each observation (e.g. 1,1,2,2,3,3,1,1,3,3,2,2).
In repeated cross-validation it simply creates multiple such fold columns (".folds_1", ".folds_2", etc.). It also makes sure they are unique, so we actually train on different subsets.

# Install groupdata2 and cvms from github

# Attach packages
library(cvms)        # cross_validate()
library(groupdata2)  # fold()
library(knitr)       # kable()
library(dplyr)       # %>%

# Set seed for reproducibility

# Load data
data <- participant.scores

# Fold data
# Create 3 fold columns
# cat_col is the categorical column to balance between folds
# id_col is the column with IDs.
# Observations with the same ID will be put in the same fold.
# num_fold_cols determines the number of fold columns,
# and thereby the number of repetitions.
data <- fold(data, k = 4, cat_col = 'diagnosis',
             id_col = 'participant', num_fold_cols = 3)

# Show first 10 rows of data
data %>% head(10) %>% kable()

Data Subset with 3 Fold Columns

In the cross_validate function, we specify our model formula for a logistic regression that classifies diagnosis. cvms currently supports linear regression and logistic regression, including mixed effects modelling. In the fold_cols (previously called folds_col), we specify the fold column names.

CV <- cross_validate(data, "diagnosis~score",
                     fold_cols = c('.folds_1', '.folds_2', '.folds_3'),
                     family = 'binomial',
                     REML = FALSE)

# Show results

Output tibble

Due to the number of metrics and useful information, it helps to break up the output into parts:

CV %>% select(1:7) %>% kable()

Evaluation metrics (subset 1)

CV %>% select(8:14) %>% kable()

Evaluation metrics (subset 2)

CV$Predictions[[1]] %>% head() %>% kable()

Nested predictions (subset)

CV$`Confusion Matrix`[[1]] %>% head() %>% kable()

Nested confusion matrices (subset)

CV$Coefficients[[1]] %>% head() %>% kable()

Nested model coefficients (subset)

CV$Results[[1]] %>% select(1:8) %>% kable()

Nested results per fold column (subset)
That would add rows to the output, making it easy to compare the models. The linear regression version has different evaluation metrics. These are listed in the help page at ?cross_validate. cvms and groupdata2 now have the functionality for performing repeated cross-validation. We have briefly talked about this technique and gone through a short example. Check out cvms for more 🙂
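Conceptually, fold() with num_fold_cols and cross_validate() together amount to generating several independent k-fold partitions and averaging the evaluations over all of them. A minimal, language-agnostic Python sketch of that idea (the names are mine, not part of cvms, and it ignores the class balancing and ID grouping that groupdata2 adds):

```python
import random

def repeated_kfold(n_rows, k, repeats, seed=1):
    """Yield (train, test) index lists for each fold of each repetition."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_rows))
        rng.shuffle(idx)                       # a fresh shuffle per repetition
        folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
        for f in range(k):
            test = folds[f]
            train = [i for g in range(k) if g != f for i in folds[g]]
            yield train, test

# With 3 repetitions of 4-fold CV we get 12 train/test evaluations,
# just as with 3 fold columns and k = 4 above.
splits = list(repeated_kfold(n_rows=30, k=4, repeats=3))
print(len(splits))  # 12
```

Each repetition is evaluated exactly like ordinary cross-validation; only the final averaging spans all k * repeats results.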
{"url":"https://www.ludvigolsen.dk/2019/11/running-cross_validate-from-cvms-in-parallel/","timestamp":"2024-11-07T23:29:19Z","content_type":"text/html","content_length":"75901","record_id":"<urn:uuid:be3b8d6f-1879-4b50-92ac-5ac1f159a50b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00436.warc.gz"}
How to add zero before number in Javascript?

To add a leading zero before a number in JavaScript, you can convert the number to a string and then prepend "0" characters until the string reaches the desired length. For example:

function addLeadingZero(number, totalDigits) {
  let numberString = number.toString();
  while (numberString.length < totalDigits) {
    numberString = "0" + numberString;
  }
  return numberString;
}

In this function, number is the input number that you want to add a leading zero to, and totalDigits is the total number of digits you want the result to have. The function converts the number to a string, and then uses a while loop to concatenate zeros at the beginning of the string until it reaches the desired length. The final result is the updated string, which will have leading zeros if the original number was not long enough.
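In environments supporting ES2017 or later, the built-in String.prototype.padStart method performs the same padding in one line:

```javascript
// One-line alternative using the standard padStart method (ES2017+).
// Note: padStart never truncates, so numbers already longer than
// totalDigits are returned unchanged.
function addLeadingZeroPad(number, totalDigits) {
  return String(number).padStart(totalDigits, "0");
}

console.log(addLeadingZeroPad(123, 5)); // "00123"
```

The loop version remains useful if you need to support very old runtimes without a polyfill.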
{"url":"https://devhubby.com/thread/how-to-add-zero-before-number-in-javascript","timestamp":"2024-11-12T04:21:58Z","content_type":"text/html","content_length":"128340","record_id":"<urn:uuid:8f725f59-c60e-4c54-87d9-b36a5e4b85f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00035.warc.gz"}
Coefficient Definition

Coefficient means the number that is written with a variable or multiplied with a variable. A coefficient can be attached to a single variable, but it may also multiply several variables at once. It is a purely mathematical term. For example, you have three terms 3x, 4y, and 7xy. In these terms, 3, 4, and 7 are coefficients while x and y are variables.

Operations that Involve Coefficient

A coefficient of any variable can take part in all basic mathematical operations. You need to use them in addition, multiplication, subtraction, and division. In short, you have to get aid from them while using basic operations on variables. Normally, a coefficient is an integer, but it can also take other forms like a decimal, a fraction, an imaginary number, or another letter. In the most general form of equations, we will see coefficients represented by other letters. For example, the standard form of a linear equation is represented as ax + b = 0. In this equation, "a" and "b" are coefficients while "x" is a variable.

Facts about coefficient

• A coefficient can be a positive or negative number
• When no coefficient is written with a variable, the value of the coefficient is 1
• A variable with a coefficient of 0 has no value, because multiplying by 0 makes the whole term 0
• A number without a variable is just an integer, not a coefficient. For example, +3 on its own is an integer, but in the expression 3x, 3 is a coefficient.

Can we find the coefficient from an expression?

Yes, it is pretty simple to find a coefficient from any mathematical expression. You only have to find the variable and highlight it along with its power if it has one. The remaining part of the expression will be the coefficient.

Is the constant and coefficient same in the expression?

No, the coefficient will always be placed with the variable while a constant will not have a variable. You can read about Constant on this page in detail.
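To make the "find the coefficient" recipe concrete, here is a small, hypothetical helper (my own illustration, not from this page) that extracts the numeric coefficient from a simple term written as a string:

```python
def coefficient(term):
    """Return the numeric coefficient of a simple term such as '3x',
    '-4y', '7xy', or 'x'. An unwritten coefficient counts as 1, and a
    leading minus sign flips the sign."""
    i = 1 if term and term[0] in '+-' else 0   # skip an optional sign
    j = i
    while j < len(term) and (term[j].isdigit() or term[j] == '.'):
        j += 1                                  # consume the digits
    digits = term[i:j]
    value = float(digits) if digits else 1.0    # no digits -> coefficient 1
    return -value if term.startswith('-') else value

print(coefficient('3x'))   # 3.0
print(coefficient('x'))    # 1.0
print(coefficient('-4y'))  # -4.0
```

It mirrors the rule above: strip the variable part (and its power) and whatever remains is the coefficient.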
{"url":"https://calculatorsbag.com/definitions/coefficient","timestamp":"2024-11-12T20:10:20Z","content_type":"text/html","content_length":"38692","record_id":"<urn:uuid:b55f1d72-08f8-4db8-ae3c-81469d05d21a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00401.warc.gz"}
Estimation or Calculation of Errors

Absolute Error

The absolute error in each measured value is the difference between the true value (mean value) and that measured value. Let the measured values in an experiment be: a1, a2, a3, ..., an

Then the mean or true value is

a_mean = (a1 + a2 + ... + an) / n

Here, a_mean is considered to be the true value. Hence the absolute error in each measured value is

∆a1 = a_mean - a1
∆a2 = a_mean - a2
...
∆an = a_mean - an

So the mean or average of the absolute errors is given by

∆a_mean = (|∆a1| + |∆a2| + ... + |∆an|) / n

Hence the true value can be written as

a = a_mean ± ∆a_mean

Relative error

It is the ratio of the average absolute error to the true value of the measured quantity. It is also known as fractional error.

Relative error = ∆a_mean / a_mean

Percentage error

When the relative error is expressed in terms of percentage, it is known as the percentage error.

Percentage error = (∆a_mean / a_mean) × 100%
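The recipe above is easy to express in code; a short Python sketch (my own illustration) computes all four quantities for a set of measured values:

```python
def error_analysis(measurements):
    """Return the mean (true) value, mean absolute error,
    relative error, and percentage error of a list of measurements."""
    n = len(measurements)
    mean = sum(measurements) / n                      # taken as the true value
    mean_abs_err = sum(abs(mean - a) for a in measurements) / n
    relative = mean_abs_err / mean                    # fractional error
    return mean, mean_abs_err, relative, relative * 100

# Example: three measurements of the same quantity
mean, dmean, rel, pct = error_analysis([2.0, 2.2, 1.8])
```

For these values the mean is 2.0, the mean absolute error is 0.4/3 ≈ 0.133, and the percentage error is about 6.7%.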
{"url":"https://www.educationhub.tech/2020/09/estimation-or-calculation-of-errors.html","timestamp":"2024-11-13T03:03:57Z","content_type":"application/xhtml+xml","content_length":"215919","record_id":"<urn:uuid:9df4cb72-5d53-4c28-9374-f8f4832bb173>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00842.warc.gz"}
Radial Vis Gadgets

The RadialVisGadgets package provides interactive Shiny gadgets for interactive radial visualizations. By interacting with the gadgets, Exploratory Data Analysis can be performed. The gadgets can be used at any time during the analysis. They allow the exploration of the underlying nature of the data in tasks related to cluster analysis, outlier detection, and exploratory data analysis, e.g., by investigating the effect of specific dimensions on the separation of the data.

Star Coordinates

Star Coordinates' (SC) goal is to generate a configuration of the dimensional vectors which reveals the underlying nature of the data. Let's look at the well known Iris dataset [1].

Sepal.Length  Sepal.Width  Petal.Length  Petal.Width  Species
         5.1          3.5           1.4          0.2   setosa
         4.9          3.0           1.4          0.2   setosa
         4.7          3.2           1.3          0.2   setosa
         4.6          3.1           1.5          0.2   setosa
         5.0          3.6           1.4          0.2   setosa
         5.4          3.9           1.7          0.4   setosa

One can observe four numerical attributes and one factor. The traditional Star Coordinates approach is defined for numerical attributes only. Therefore, by default we attempt the conversion of all factors to numerical attributes. This can be disabled with numericRepresentation = FALSE, as described below. Following the traditional approach [2], the five attributes are placed at equal angle steps from each other. You can move your mouse towards the endings of the dimensional vectors. The circle at the end will be highlighted, as you can see in the figure below. You can move these axes in order to create a configuration that you believe suitable and brush a selection of points.

Orthographic Star Coordinates Approach

Orthographic Star Coordinates are supported by the Star Coordinates gadget by adding the approach="OSC" parameter. The axes are reconditioned with every movement as described by Lehmann & Theisel [3]. The interaction is kept the same as before. With this approach, the dimensional vectors are constrained under conditions described in [3].

Numeric Representation = FALSE

The traditional approach [2] was defined for numerical attributes only.
However, [4] extended the approach to mixed datasets. The axes for the factor dimensions are divided according to the frequency of each categorical value within the categorical dimension. Given that the 3 species labels are uniformly distributed, 2 ticks appear separating the 3 blocks for each categorical dimension. By clicking at the axis, you can activate it. The categorical value blocks are now visible on the selected factor. By double-clicking on a categorical block, the value the block represents is highlighted. If another categorical block is selected by double-clicking, then those two blocks will swap with each other, allowing you to shift categorical values in one dimension. You can disable a categorical selection by double-clicking a second time in the same categorical block.

Labels in analysis

By sending a factor dimension name in colorVar, the analysis can be performed on labeled data. The points are then coloured according to the selected dimension. The "Standard" and "OSC" approaches are available for both analyses. Hints are used to describe possible movements if a label and a function are provided. A button named Hint will appear. An increase in the evaluation of the function defines an increase in projection quality, i.e., larger values are better. Details on the hints usage are defined in [4]. The thickness of the segments represents an increase in quality. In the figure below, it would imply that interacting with Petal.Width by moving it down will result in an increase in quality. The absolute maximum increase in quality is shown in the Hint button, allowing for early termination. The hints are computed on demand only and are based on the current vector configuration. Once a movement is performed, the hints will disappear.

func <- function(points, labels){
  dunn(Data=points, clusters=labels)
}
StarCoordinates(iris, colorVar="Species", clusterFunc = func)

Notes On Data Processing

• Missing Data: Only complete cases are used, i.e. rows where data is missing are removed.
• Zero Variance: Zero or close to zero variance dimensions are removed.
• Scaling: If the values are not mean centered, then each dimension is scaled from 0..1.
• Mean Centered: A normalization step as described in [5] is performed if meanCentered = TRUE (default).

1. Fisher, R. A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2), 179-188.
2. Kandogan, E. (2001, August). Visualizing multi-dimensional clusters, trends, and outliers using star coordinates. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 107-116).
3. Lehmann, D. J., & Theisel, H. (2013). Orthographic star coordinates. IEEE Transactions on Visualization and Computer Graphics, 19(12), 2615-2624.
4. Matute, J., & Linsen, L. (2020). Hinted Star Coordinates for Mixed Data. In Computer Graphics Forum (Vol. 39, No. 1, pp. 117-133).
5. Rubio-Sánchez, M., & Sanchez, A. (2014). Axis calibration for improving data attribute estimation in star coordinates plots. IEEE Transactions on Visualization and Computer Graphics, 20(12).

RadViz

RadViz's goal is to generate a configuration which reveals the underlying nature of the data for cluster analysis, outlier detection, and exploratory data analysis, e.g., by investigating the effect of specific dimensions on the separation of the data. Each dimension is assigned to a point known as a dimensional anchor across a unit circle. Each sample is projected according to the relative attraction to each of the anchors. We will follow with the iris dataset. RadViz is not defined for non-numerical dimensions and, given its non-linear behavior for the projection generation, it would be "even more" misleading to convert the factors to numeric. As with Star Coordinates, we can interact in order to change the projection. The anchors, represented by the circles, can be moved around the unit circle.
However, even a factor dimension can be used for coloring the points according to a label. This can be done by supplying the name of the column as a color.

Notes On Data Processing

• Missing Data: Only complete cases are used, i.e. rows where data is missing are removed.
• Zero Variance: Zero or close to zero variance dimensions are removed.
• Scaling: The values are scaled from 0..1.

1. Sharko, J., Grinstein, G., & Marx, K. A. (2008). Vectorized Radviz and its application to multiple cluster datasets. IEEE Transactions on Visualization and Computer Graphics, 14(6), 1444-1427.
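As a rough illustration of the projection underlying these gadgets, here is my own sketch of the classic Star Coordinates mapping [2] (not code from the package): each dimension contributes its value along a unit axis vector, and a data point lands at the vector sum of those contributions.

```python
import math

def star_coordinates(rows, angles=None):
    """Classic Star Coordinates projection: a d-dimensional point x maps to
    sum_j x[j] * (cos(theta_j), sin(theta_j)), where theta_j is the angle
    of dimension j's axis vector."""
    d = len(rows[0])
    if angles is None:                          # equal angle steps by default
        angles = [2 * math.pi * j / d for j in range(d)]
    axes = [(math.cos(t), math.sin(t)) for t in angles]
    return [(sum(x[j] * axes[j][0] for j in range(d)),
             sum(x[j] * axes[j][1] for j in range(d))) for x in rows]

# With two dimensions placed on the x- and y-axes, a point maps to itself:
print(star_coordinates([[1.0, 2.0]], [0.0, math.pi / 2]))
```

Dragging an axis endpoint in the gadget corresponds to changing one theta_j (and the axis length), which is why the scatter of points reshapes interactively.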
How to Add Zeros before Any Figure in Excel

Sometimes it is required to add zeros before a number to make it a constant length. How can it be done in Excel? It can be done by using the simple TEXT formula. For example, if you want to convert a number to a 7-digit number by adding zeros before it, you can do this with the TEXT formula. If the figures are written in column A, then paste the formula in the next column. Put as many zeros between the inverted commas as the number of digits you want to get. If you want to convert 123 → 00123, that is a 5-digit figure, so the formula becomes =TEXT(A1,"00000").
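For readers who want to sanity-check the padding outside Excel, Python's `zfill` does the same job as a run of zeros in TEXT's format string (a quick cross-check, not part of the Excel workflow):

```python
# Pad numbers with leading zeros to a fixed width, like Excel's
# =TEXT(A1,"00000") custom format.
def pad(number, width):
    return str(number).zfill(width)

print(pad(123, 5))   # 00123
print(pad(123, 7))   # 0000123
```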
Financial Maths, Part 5 - David The Maths Tutor

Last time we saw that our $1000 invested at 3% compounded annually will result in $1343.92 in 10 years. To get even better results, the interest rate can be compounded more frequently than annually. Let's say the interest is applied mid-year. Then the interest earned during the first half of the year will be added to the principal amount, and that total will be used to apply the second-half interest to. This will change the formula A[n] = A[0](1 + r)^n a bit though. First of all, we can't apply the full per annum interest rate to the initial investment as only half a year has passed, so only half the interest rate will be used. Also, the period length is now half a year, so the number of periods refers to how many half-years have passed. So in our example, the annual interest rate is 3%, so the 6-month interest rate, r, is 3/2 = 1.5% since there are 2 six-month periods in a year. If we want to know how much we will have after 10 years, the number of periods, n, is now 10×2 = 20 since there are 20 half-year periods in 10 years. So we can use the same formula with r = 1.5% = 0.015 and n = 20: A[20] = A[0](1 + r)^20 = 1000(1 + 0.015)^20 = $1346.86. So the extra compounding has made us a bit more money. You might ask (OK, I'll ask for you), would we make more money by compounding more frequently? Yes we would! Let's compound every quarter-year. This means the interest rate we apply each quarter is 3/4 = 0.75% and the number of periods after 10 years is 10×4 = 40: A[40] = A[0](1 + r)^40 = 1000(1 + 0.0075)^40 = $1348.35. Looks like we want more yet. What about monthly? Here r = 3/12 = 0.25% = 0.0025 and n = 10×12 = 120: A[120] = A[0](1 + r)^120 = 1000(1 + 0.0025)^120 = $1349.35. Better, but notice that this is not much better than quarterly. It appears that we will reach a limit as to how much we can make. Let's try compounding daily.
Here r = 3/365 = 0.0082% = 0.000082 and n = 10×365 = 3650: A[3650] = A[0](1 + r)^3650 = 1000(1 + 0.000082)^3650 = $1349.84. Well that's disappointing. There is only a $0.49 difference between compounding monthly and daily after 10 years. There is a maths formula that computes the amount of interest if the investment is compounded continuously. This is the limit of what you can make by compounding. For our example, the most that compounding will get us after 10 years is $1349.86, just 2 cents more than compounding daily.
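The figures above are easy to reproduce in a few lines (a quick check of the arithmetic; the function and variable names are mine, and the continuous-compounding limit is the well-known A[0]·e^(rt)):

```python
import math

def compound(principal, annual_rate, years, periods_per_year):
    """Future value with the annual rate split across equal periods."""
    r = annual_rate / periods_per_year
    n = years * periods_per_year
    return principal * (1 + r) ** n

for m, label in [(1, "annually"), (2, "half-yearly"), (4, "quarterly"),
                 (12, "monthly"), (365, "daily")]:
    print(f"{label:>11}: ${compound(1000, 0.03, 10, m):.2f}")

# The limit of ever-more-frequent compounding is continuous compounding:
continuous = 1000 * math.exp(0.03 * 10)
print(f" continuous: ${continuous:.2f}")
```

The loop reproduces the $1343.92, $1346.86, $1348.35, $1349.35, and $1349.84 figures, and the last line gives the $1349.86 ceiling.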
The RAM Model

The RAM modeling language is adapted from the basic RAM model developed by McArdle (1980). For brevity, models specified by the RAM modeling language are called RAM models. You can also specify these so-called RAM models by other general modeling languages that are supported in PROC CALIS.

Types of Variables in the RAM Model

A variable in the RAM model is manifest if it is observed and is defined in the input data set. A variable in the RAM model is latent if it is not manifest. Because error variables are not explicitly named in the RAM model, all latent variables in the RAM model are considered as factors (non-error-type latent variables). A variable in the RAM model is endogenous if it ever serves as an outcome variable in the RAM model. That is, an endogenous variable has at least one path (or an effect) from another variable in the model. A variable is exogenous if it is not endogenous. Endogenous variables are also referred to as dependent variables, while exogenous variables are also referred to as independent variables. In the RAM model, distinctions between exogenous and endogenous variables and between latent and manifest variables are not essential to the definitions of model matrices, although they are useful for conceptual understanding when the model matrices are partitioned.

Naming Variables in the RAM Model

Manifest variables in the RAM model are referenced in the input data set. Their names must not be longer than 32 characters. There are no further restrictions beyond those required by the SAS System. Latent variables in the RAM model are those not being referenced in the input data set. Their names must not be longer than 32 characters. Unlike the LINEQS model, you do not need to use any specific prefix (for example, 'F' or 'f') for the latent factor names. The reason is that error or disturbance variables in the RAM model are not named explicitly in the RAM model.
Thus, any variable names that are not referenced in the input data set are for latent factors. As a general naming convention, you should not use Intercept as either a manifest or latent variable name.

Model Matrices in the RAM Model

In terms of the number of model matrices involved, the RAM model is the simplest among all the general structural equation models that are supported by PROC CALIS. Essentially, there are only three model matrices in the RAM model: the _A_ matrix for the interrelationships among variables, the _P_ matrix for the variances and covariances, and the _W_ vector for the means and intercepts. These matrices are discussed in the following subsections.

The row and column variables of matrix _A_ are the variables in the model. An element of _A_ represents the effect of the column variable on the row variable, that is, a path from the column variable to the row variable. Mathematically, you do not need to arrange the set of variables for matrix _A_ in any particular order; see the section Partitions of the RAM Model Matrices and Some Restrictions for details.

The row and column variables of matrix _P_ are also the variables in the model. The off-diagonal elements of _P_ represent the partial covariance between the two variables. This partial covariance is unsystematic, in the sense that it is not explained by the interrelationships of variables in the model. In most cases, you can interpret a partial covariance as the error covariance between the two endogenous variables involved. The row variables of vector _W_ are again the variables in the model; its elements are the means and intercepts.

Covariance and Mean Structures

Assuming that the variables satisfy the model equation v = _A_v + u, where the covariance matrix of u is _P_, the structured covariance matrix of all variables is (I − _A_)⁻¹ _P_ (I − _A_)⁻ᵀ, and the structured mean vector of all variables is (I − _A_)⁻¹ _W_. The covariance and mean structures of all manifest variables are obtained by selecting the elements in these structured matrices that correspond to the manifest variables.

Partitions of the RAM Model Matrices and Some Restrictions

There are some model restrictions in the RAM model matrices. Although these restrictions do not affect the derivation of the covariance and mean structures, they are enforced in the RAM model specification. For convenience, it is useful to assume that the endogenous variables precede the exogenous variables in the arrangement of the rows and columns.

Model Restrictions on the _A_ Matrix

As shown in the matrix partitions, there are four submatrices.
The two submatrices at the lower part of _A_ are seemingly structured to zeros. However, this should not be interpreted as restrictions imposed by the model. The zero submatrices are artifacts created by the exogenous-endogenous arrangement of the row and column variables. The only restriction on the _A_ matrix is that its diagonal elements are fixed zeros. It is useful to denote the lower partitions of the _A_ matrix as _RAMA_LL_ and _RAMA_LR_. Although they are zero matrices in the initial model specification, their entries could become non-zero (paths) in an improved model when you modify your model by using the Lagrange multiplier statistics (see the section Modification Indices or the MODIFICATION option). Hence, you might need to reference these two submatrices when you apply the customized LM tests on them during the model modification process (see the LMTESTS statement). For the purposes of defining specific parameter regions in customized LM tests, you might also partition the _A_ matrix in other ways. In your initial model, because of the arrangement of the endogenous and exogenous variables, freeing entries in the lower partitions would create new endogenous variables, which is exactly what the NEWENDO region means in the LMTESTS statement.

Partition of the _P_ Matrix

There are virtually no model restrictions placed on these submatrices. However, in most statistical applications, errors for endogenous variables represent unsystematic sources of effects, and therefore they are not to be correlated with other systematic sources such as the exogenous variables in the RAM model.
This means that in most practical applications the covariances between the exogenous variables and the error terms (_RAMP21_) are fixed zeros.

Partition of the _W_ Vector

The upper partition of _W_ contains the intercepts for endogenous variables (_RAMALPHA_), and the lower partition contains the means for exogenous variables (_RAMNU_).

Summary of Matrices and Submatrices in the RAM Model

_A_ or _RAMA_: Effects of column variables on row variables, or paths from the column variables to the row variables
_P_ or _RAMP_: (Partial) variances and covariances
_W_ or _RAMW_: Intercepts and means
_RAMBETA_: Effects of endogenous variables on endogenous variables
_RAMGAMMA_: Effects of exogenous variables on endogenous variables
_RAMA_LL_: The null submatrix at the lower left portion of _A_
_RAMA_LR_: The null submatrix at the lower right portion of _A_
_RAMA_LEFT_: The left portion of _A_, including _RAMBETA_ and _RAMA_LL_
_RAMA_RIGHT_: The right portion of _A_, including _RAMGAMMA_ and _RAMA_LR_
_RAMA_UPPER_: The upper portion of _A_, including _RAMBETA_ and _RAMGAMMA_
_RAMA_LOWER_: The lower portion of _A_, including _RAMA_LL_ and _RAMA_LR_
_RAMP11_: Error variances and covariances for endogenous variables
_RAMP21_: Covariances between exogenous variables and error terms for endogenous variables
_RAMP22_: Variances and covariances for exogenous variables
_RAMALPHA_: Intercepts for endogenous variables
_RAMNU_: Means for exogenous variables

Specification of the RAM Model

In PROC CALIS, the RAM model specification is a matrix-oriented modeling language. That is, you have to define the row and column variables for the model matrices and specify the parameters in terms of matrix entries. The VAR= option specifies the variables (including manifest and latent) in the model. The order of variables in the VAR= option is important: the same order is used for the row and column variables in the model matrices. After you specify the variables in the model, you can specify three types of parameters, which correspond to the elements in the three model matrices. The three types of ram-entries are described in the following.
(1) Specification of Effects or Paths in Model Matrix _A_

If there is a path from v2 to v1 in your model and the associated effect parameter is named parm1 with an initial value of 0.5, you can specify it with the following RAM statement: var= v1 v2 v3, _A_ 1 2 parm1(0.5); The ram-entry that starts with _A_ means that an element of the RAM matrix _A_ is being specified: the row number 1 refers to variable v1, and the column number 2 refers to variable v2. Therefore, the effect of v2 on v1 is a parameter named parm1, with an initial value of 0.5. You can specify fixed values in the ram-entries too. Suppose the effect of v3 on v1 is fixed at 1.0. You can use the following specification: var= v1 v2 v3, _A_ 1 2 parm1(0.5), _A_ 1 3 1.0;

(2) Specification of the Latent Factors in the Model

In the RAM model, you specify the list of variables in the VAR= list of the RAM statement. The list of variables can include the latent variables in the model. Because observed variables have references in the input data sets, those variables that do not have references in the data sets are treated as latent factors automatically. Unlike the LINEQS model, you do not need to use an 'F' or 'f' prefix to denote latent factors in the RAM model. It is recommended that you use meaningful names for the latent factors. See the section Naming Variables and Parameters for the general rules about naming variables and parameters. For example, suppose that SES_Factor and Education_Factor are names that are not used as variable names in the input data set. These two names represent two latent factors in the model, as shown in the following specification: var= v1 v2 v3 SES_Factor Education_Factor, _A_ 1 4 b1, _A_ 2 5 b2, _A_ 3 5 1.0; This specification shows that the effect of SES_Factor on v1 is a free parameter named b1, and the effects of Education_Factor on v2 and v3 are a free parameter named b2 and a fixed value of 1.0, respectively. However, naming latent factors is not compulsory.
The preceding specification is equivalent to the following specification: var= v1 v2 v3, _A_ 1 4 b1, _A_ 2 5 b2, _A_ 3 5 1.0; Although you do not name the fourth and the fifth variables in the VAR= list, PROC CALIS generates the names for these two latent variables. In this case, the fourth variable is named _Factor1 and the fifth variable is named _Factor2.

(3) Specification of (Partial) Variances and (Partial) Covariances in Model Matrix _P_

Suppose now you want to specify the variance of v2 as a free parameter named parm2. You can add a new ram-entry for this variance parameter, as shown in the following statement: var= v1 v2 v3, _A_ 1 2 parm1(0.5), _A_ 1 3 1.0, _P_ 2 2 parm2; The ram-entry that starts with _P_ means that an element of the RAM matrix _P_, here the (2,2) element for the variance of v2, is a parameter named parm2. You do not specify an initial value for this parameter. You can also specify the error variance of v1 similarly, as shown in the following statement: var= v1 v2 v3, _A_ 1 2 parm1(0.5), _A_ 1 3 1.0, _P_ 2 2 parm2, _P_ 1 1; In the last ram-entry, the (1,1) element of _P_, the error variance of v1, is an unnamed free parameter. Covariance parameters are specified in the same manner. For example, the following specification adds a ram-entry for the covariance parameter between v2 and v3: var= v1 v2 v3, _A_ 1 2 parm1(0.5), _A_ 1 3 1.0, _P_ 2 2 parm2, _P_ 1 1, _P_ 2 3 (.5); The covariance between v2 and v3 is an unnamed parameter with an initial value of 0.5.

(4) Specification of Means and Intercepts in Model Matrix _W_

To specify means or intercepts, you need to start the ram-entries with the _W_ keyword. For example, the last two entries of the following statement specify the intercept of v1 and the mean of v2: var= v1 v2 v3, _A_ 1 2 parm1(0.5), _A_ 1 3 1.0, _P_ 2 2 parm2, _P_ 1 1, _P_ 2 3 (.5), _W_ 1 1 int_v1, _W_ 2 1 mean_v2; The intercept of v1 is a free parameter named int_v1, and the mean of v2 is a free parameter named mean_v2.
Default Parameters in the RAM Model

There are two types of default parameters of the RAM model in PROC CALIS. One is the free parameters; the other is the fixed zeros. By default, certain sets of model matrix elements in the RAM model are free parameters. These parameters are set automatically by PROC CALIS, although you can also specify them explicitly in the ram-entries. In general, default free parameters enable you to specify only what is absolutely necessary for defining your model. PROC CALIS automatically sets those commonly assumed free parameters so that you do not need to specify them routinely. The sets of default free parameters of the RAM model are as follows:
• Diagonal elements of the _P_ matrix; this includes the variances of exogenous variables (latent or observed) and the error variances of all endogenous variables (latent or observed)
• The off-diagonal elements that pertain to the exogenous variables of the _P_ matrix; this includes all the covariances among exogenous variables, latent or observed
• If the mean structures are modeled, the elements that pertain to the observed variables (but not the latent variables) in the _W_ vector; this includes all the means of exogenous observed variables and the intercepts of all endogenous observed variables
For example, suppose you are fitting a RAM model with three observed variables x1, x2, and y3, and you specify a simple multiple-regression model with x1 and x2 predicting y3 by the following statements: proc calis meanstr; ram var= x1-x2 y3, _A_ 3 1 , _A_ 3 2 ; In the RAM statement, you specify that the path coefficients represented by _A_[3,1] and _A_[3,2] are free parameters in the model. In addition to these free parameters, PROC CALIS sets several other free parameters by default. _P_[1,1], _P_[2,2], and _P_[3,3] are set as free parameters for the variance of x1, the variance of x2, and the error variance of y3, respectively.
_P_[2,1] (and hence _P_[1,2]) is set as a free parameter for the covariance between the exogenous variables x1 and x2. Because the mean structures are also analyzed by the MEANSTR option in the PROC CALIS statement, _W_[1,1], _W_[2,1], and _W_[3,1] are also set as free parameters for the mean of x1, the mean of x2, and the intercept of y3, respectively. In the current situation, this default parameterization is consistent with using PROC REG for multiple regression analysis, where you only need to specify the functional relationships among variables. If a matrix element is not a default free parameter in the RAM model, then it is a fixed zero by default. You can override almost all default fixed zeros in the RAM model matrices by specifying the ram-entries. The diagonal elements of the _A_ matrix are exceptions: these elements are always fixed zeros. You cannot set these elements to free parameters or other fixed values. This reflects a model restriction that prevents a variable from having a direct effect on itself.
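Although the RAM statement itself is SAS syntax, the covariance structure implied by the _A_ and _P_ matrices can be sketched numerically. The snippet below illustrates the standard RAM formulation for the regression example above (all variables manifest, so no row/column selection is needed); the path coefficients and variance values are invented for illustration.

```python
import numpy as np

# Variables ordered as in VAR=: x1, x2, y3 (all manifest).
b1, b2 = 0.5, 0.3          # hypothetical path coefficients _A_[3,1], _A_[3,2]

A = np.zeros((3, 3))       # effects of column variables on row variables
A[2, 0], A[2, 1] = b1, b2

P = np.array([[1.0, 0.2, 0.0],   # variances/covariance of x1, x2
              [0.2, 1.0, 0.0],   # (default free parameters in PROC CALIS)
              [0.0, 0.0, 0.4]])  # error variance of y3

I = np.eye(3)
B = np.linalg.inv(I - A)         # total-effect matrix (I - A)^-1
Sigma = B @ P @ B.T              # model-implied covariance matrix
```

For this model, Sigma[2, 2] reproduces var(y3) = b1²·var(x1) + b2²·var(x2) + 2·b1·b2·cov(x1, x2) + error variance, which is a handy way to check a hand-specified RAM model against the path-analysis algebra.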
Student teams earn prizes for their ADC circuit designs in EECS 511

Students in the graduate-level course, Integrated Analog/Digital Interface Circuits (EECS 511), competed for cash prizes while presenting their final design projects thanks to the support of Analog Devices, Inc. The winning projects were designed for battery-operated mobile applications as well as instrumentation and measurement applications. EECS 511 is taught by Prof. Michael Flynn, an expert in analog and mixed-signal circuits, analog-to-digital conversion, and other interface circuits. The top four projects were sent to Analog Devices, where a group of senior circuit designers selected the following two winning teams:

First Place ($1,500): A 50MS/s, 10.5-Bits, 21.3fJ/conv.steps Pipeline ADC using Ring Amplifier, by Yong Lim, Mehmet Batuhan Dayanik, and David Moore

"The power efficiency of the analog-to-digital converter (ADC) is one of the most important features for battery-operated mobile applications," stated Yong. "This work is based on the concept of the ring amplifier introduced at ISSCC 2012. We improve the power efficiency of the pipeline ADC by a factor of about two compared to the state of the art for pipeline ADCs by improving the ring amplifier's power efficiency. We were able to reduce power consumption without sacrificing performance. A 1.5b-per-stage pipeline ADC with a total of 10.5 bits at 50MS/s is implemented in 0.13 µm CMOS as a proof-of-concept prototype. The simulated effective number of bits is 10.17 bits for a Nyquist-frequency input. The power consumption is only 1.22mW."

Second Place ($500): 18b Incremental Zoom-ADC in 0.13µm CMOS, by Seok-hyeon Jeong, Wanyeong Jung, and Sechang Oh

In this project, the students described their design as an energy-efficient, high-resolution incremental analog-to-digital converter (ADC) for instrumentation and measurement applications.
Instead of using a conventional sigma-delta architecture, a two-step conversion that combines a 6-bit coarse SAR and a 13-bit fine sigma-delta achieves high resolution as well as energy efficiency. The complete ADC is implemented in 0.13µm CMOS. Simulation results give an effective number of bits (ENOB) of 18 bits and ±1.6ppm integral nonlinearity (INL).
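The "21.3fJ/conv.steps" figure in the first-place title is consistent with the other numbers quoted for that design. Assuming the commonly used Walden figure of merit, FOM = P / (2^ENOB × f_s), the reported 1.22 mW, 10.17 effective bits, and 50 MS/s give roughly that value:

```python
# Walden figure of merit for an ADC: energy per conversion step.
power_w = 1.22e-3       # 1.22 mW, as reported
enob_bits = 10.17       # simulated effective number of bits
sample_rate = 50e6      # 50 MS/s

fom_joules = power_w / (2 ** enob_bits * sample_rate)
fom_fj = fom_joules * 1e15
print(f"{fom_fj:.1f} fJ/conversion-step")
```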
Linear Arrangement – Reasoning Study Material & Notes

Linear Arrangement is essentially ordering the items given in a sequence (in a single line). Questions of this type are also referred to as "Seating Arrangement". The term "seating arrangement" should not be misconstrued as covering only questions about persons sitting as per specified conditions. Essentially, these questions involve arranging subjects or people or things while satisfying the given conditions. The arrangement is done only on one "axis" and hence the position of the subjects relative to the others assumes importance, such as first position, second position, etc. Let us look at an example to understand the concept.

Seven persons Paul, Queen, Sam, Tom, Rax, Unif, and Vali are sitting in a row facing us. Rax and Sam sit next to each other. There must be exactly four persons between Queen and Vali. Sam sits to the immediate right of Queen.

Q1. If Paul and Tom are separated by exactly two persons, then who sits to the immediate left of Vali?

A1. Let us write down the conditions given in short form and then represent them pictorially. Since the persons are facing us, let us treat their own left and right as "left" and "right" when interpreting the conditions.

Rax and Sam sit next to each other, which means it is either [R-S] or [S-R].
There are exactly 4 persons between Queen and Vali, which means [Q----V] or [V----Q].
Sam sits to the immediate right of Queen, which means [SQ].

Now let us analyse the data and conditions given and then put the three conditions together. Let us number the seats from our left to the right as Seat 1 to 7. Since S is to the right of Q, and since R and S have to be next to each other, R can come only to the immediate right of S. Thus, R, S and Q will be in the order RSQ. Since there are four persons between Q and V, Q can be placed in seats 1, 2, 6 or 7.
But if Q is in seat 1 or 2, then there are no seats for R and S. Hence, there are only two seats available for Q. Let us now fix the positions of R, S and V in each of these two positions of Q and write them down.

Arrangement I: _ – V – _ – _ – R – S – Q
Arrangement II: V – _ – _ – R – S – Q – _

These are the only two arrangements possible for the four persons V, R, S and Q. The other three persons Paul, Tom, and Unif can sit in the three vacant seats in any order, as no information is given about them. Now let us look at the question: Paul and Tom are separated by exactly two persons. Arrangement I is the only one possible, as in Arrangement II Paul and Tom cannot have exactly two persons between them. So, we have the arrangement as follows: T/P – V – U – P/T – R – S – Q. So, Unif must be sitting to the immediate left of Vali. The answer is Unif.
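The deduction above can be cross-checked by brute force. This sketch (my own, not part of the study material) enumerates all 7! seatings, keeps the ones satisfying the given conditions, and confirms that Unif is always immediately to Vali's left. Seats are indexed from our left; since the row faces us, a person's own right neighbour sits one seat lower in our numbering.

```python
from itertools import permutations

people = ["Paul", "Queen", "Sam", "Tom", "Rax", "Unif", "Vali"]

solutions = []
for row in permutations(people):               # row[0] is seat 1 (our left)
    pos = {p: i for i, p in enumerate(row)}
    if abs(pos["Rax"] - pos["Sam"]) != 1:      # R and S adjacent
        continue
    if abs(pos["Queen"] - pos["Vali"]) != 5:   # exactly 4 persons between Q and V
        continue
    if pos["Sam"] != pos["Queen"] - 1:         # S immediately to Q's own right
        continue
    if abs(pos["Paul"] - pos["Tom"]) != 3:     # exactly 2 persons between P and T
        continue
    solutions.append(row)

# A person's own left is one seat higher in our numbering.
left_of_vali = {row[row.index("Vali") + 1] for row in solutions}
print(len(solutions), left_of_vali)
```

The search finds exactly the two Paul/Tom orderings of the arrangement derived above, and in both of them Unif sits to Vali's immediate left.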
Can You Join the World’s Biggest Zoom Call?

This puzzle comes to us courtesy of the Riddler Classic at fivethirtyeight.com. I think I can't improve upon their wording, so will copy and paste it:

Quote: fivethirtyeight.com
From Jim Crimmins comes a puzzle about what would presumably be the largest Zoom meeting of all time: One Friday morning, suppose everyone in the U.S. (about 330 million people) joins a single Zoom meeting between 8 a.m. and 9 a.m. — to discuss the latest Riddler column, of course. This being a virtual meeting, many people will join late and leave early. In fact, the attendees all follow the same steps in determining when to join and leave the meeting. Each person independently picks two random times between 8 a.m. and 9 a.m. — not rounded to the nearest minute, mind you, but any time within that range. They then join the meeting at the earlier time and leave the meeting at the later time. What is the probability that at least one attendee is on the call with everyone else (i.e., the attendee’s time on the call overlaps with every other person’s time on the call)?

I have what I think is the correct answer, but it involves some hand-waving logic that I don't know would get me full credit for a proper solution.
"For with much wisdom comes much sorrow." -- Ecclesiastes 1:18 (NIV)

(I think) I didn't answer the question as asked (so it is probably wrong or only a "partial answer" at best):
1. "At least one person" would have to be in the "Zoom Meeting" for the whole hour (8 to 9) to guarantee "that the attendee’s time on the call overlaps with every other person’s time on the call".
2. The "unit of time" I used was 100 ms (0.1 of a second): see the bottom of the first answer on the link, where it says "...somewhere in the range of 80 ms to 125 ms..."
Using the above (probably wrong, lol) assumptions, I get ~22.48% as the chance that "at least one attendee is on the call with everyone else".

Between 8 am and 9 am there are 36,000 "1/10's of a second". There is a 1/36,000 chance that a person will be there at 8 am, and there is a 1/36,000 chance that a person will leave at 9 am. 1/36,000 x 1/36,000 = 1/1,296,000,000 = the chance that a person will be there at 8 am and leave at 9 am. The chance that a person will NOT "be there at 8 am and leave at 9 am" is therefore 1,295,999,999/1,296,000,000. There are 330 million people expected to be in the "Zoom meeting". The chance that at least one person will be there at 8 am and leave at 9 am = 1 - (1,295,999,999/1,296,000,000)^330,000,000 = 1 - 0.7752... = 0.2247... Therefore the chance of at least one person being there for the meeting (whole hour###) is ~22.48%.

###: I know this was not what was asked in the OP, which is why I think my answer is wrong, but it may still be helpful. At least I didn't try to use "Planck time" or "zeptoseconds" in my attempt (the chance figure would probably have been practically zero if I used those).
Last edited by: ksdjdj on Jun 3, 2020

Is "hand-waving logic" similar to magic wand waving? 🤪 But seriously, there's always going to be those people that want to be on the call for the entire duration. Towards that end, they'll get on before 8 and get off after 9. So, 100%...? So I'd change the parameters of the puzzle slightly to allow for this situation, by stating that people can get on and off anytime they wish, but they can be on the call for a random duration not to exceed one hour. So now the question is, what's the chance one of those max-duration people nails it and gets on at 8:00:00? My response: dunno. 🤔 I invented a few casino games.
Info: http://www.DaveMillerGaming.com/ Superstitions are silly, childish, irrational rituals, born out of fear of the unknown. But how much does it cost to knock on wood? 😁

Quote: Wizard
...I have what I think is the correct answer, but it involves some hand-waving logic that I don't know would get me full credit for a proper solution.

a very small number: 1.26 x 10^-6992
EDIT: I found an error. Now, I get: 1.4 x 10^-13984
Last edited by: ChesterDog on Jun 3, 2020

I haven't looked at it in great detail, but if someone was able to join at 8am+a and leave at 9am-b, where a and b were sufficiently small, then that person would cover everyone unless someone else managed to get in and out within a or b. (e.g. Person X gets in at 8:00:05 and leaves at 8:59:54; it would need someone, a spoiler, to either join and leave before 8:00:05 or join and leave after 8:59:54 for X not to cover everyone.) The chance of person X joining at or before 8+a is about a; similarly for 9-b it is about b (where a and b are in hours). So the chances are ab. Similarly, the chances of a spoiler would be about a^2+b^2. I guess you could then integrate the chances of various times for person X, and then look at the chances of them not having a spoiler.

This is a mathematical question employing randomness, not a question involving real people. Let's also recognize that time is almost infinitely divisible down to something like 10E-64 seconds (the Planck limit on measurable time due to the Heisenberg uncertainty principle of quantum mechanics). Now let's restate the question: Given 330 million people, we must estimate the expected values of the earliest departure time and the latest arrival time. And then estimate what the probability is that one of the other 330 million people gets on the Zoom call before the earliest departure time and also randomly draws a departure time later than the latest arrival time?
Given 330 million people, we must estimate the expected values of the earliest departure time and the latest arrival time. If you divide an hour into 26,000 intervals, the odds of randomly getting both an arrival and departure time within a single given interval (of 1/26,000 hour) is about 1 in 337,000,000. So, I expect that the earliest departure time will roughly be 8:00 a.m. + 1/26,000 of an hour. Similarly, the latest arrival time should be 9:00 a.m. - 1/26,000 of an hour. So, what are the odds of any given person getting an arrival time in the first 1/26,000 of an hour AND a departure time in the last 1/26,000 of an hour? About 1 in 337 million! So, as an answer, I estimate a probability of 50%. Because, even though there is one heroic person who is expected to have arrival and departure times in the same intervals as the earliest departer and the latest arriver, we must consider when in the intervals our heroic person has arrived and departed, and whether it is before the earliest departer and after the latest arriver. And by a quick analysis I get that 50% is about the correct answer.
So many better men, a few of them friends, are dead. And a thousand thousand slimy things live on, and so do I.

I expect the answer to be very close to 1. The probability that a given person arrives between 8:00:00 and 8:00:36 and leaves between 8:59:24 and 9:00:00 is 1 / 500,000. The probability that nobody in a group of 330,000,000 does is (499,999 / 500,000)^330,000,000; I calculate this as 1 in 10^304. Therefore, almost certainly, at least one person will arrive before 8:00:36 and leave after 8:59:24. The probability that anybody arrives and leaves before 8:00:36, or arrives and leaves after 8:59:24, is rather small. If you change it to arriving between 8:00:00 and 8:00:03.6 and leaving between 8:59:56.4 and 9:00:00, I get a 734/735 probability that at least one person out of 330 million does this.

Quote: ThatDonGuy
I expect the answer to be very close to 1.
The probability that a given person arrives between 8:00:00 and 8:00:36 and leaves between 8:59:24 and 9:00:00 is 1/500,000. The probability that nobody in a group of 330,000,000 does is (499,999/500,000)^330,000,000; I calculate this as 1 in 10^304. Therefore, almost certainly, at least one person will arrive before 8:00:36 and leave after 8:59:24. The probability that anybody arrives and leaves before 8:00:36, or arrives and leaves after 8:59:24, is rather small. If you change it to arriving between 8:00:00 and 8:00:03.6 and leaving between 8:59:56.4 and 9:00:00, I get a 734/735 probability that at least one person out of 330 million does this.

I estimate that the chances are about 1 in 330,000,000 that a person will arrive and then depart within 0.2 seconds. Bin an hour into 18,000 intervals of 0.2 seconds. The probability of arriving and departing within the first interval of 0.2 seconds is 1/(18,000^2) = 1/324,000,000 (approx). Given 330 million people, we expect one person to have arrived and departed within the first 0.2 seconds of the Zoom video call. The same applies to the last 0.2-second interval of the Zoom call. So what is the probability of another individual arriving within the first 0.2 seconds and departing in the last 0.2 seconds? About 1 in 324 million. However, we need a person to arrive (on average) within the first 0.14 seconds and depart within the last 0.14 seconds to arrive earlier than the first departer and to depart later than the last arriver. So - without doing integration - I have a rough estimate that the probability is about 50%.

Quote: ChesterDog a very small number: 1.26 x 10^-6992 EDIT: I found an error. Now, I get: 1.4 x 10^-13984

"For with much wisdom comes much sorrow." -- Ecclesiastes 1:18 (NIV)

Quote: gordonm888 So - without doing integration - I have a rough estimate that the probability is about 50%.
This is my general area too.

Quote: Wizard Quote: ChesterDog a very small number: 1.26 x 10^-6992 EDIT: I found an error. Now, I get: 1.4 x 10^-13984

I agree that I wasn't even close. Now, I get about 0.596354 (edited from 0.697). I had to do a little hand waving, too. I also tried the calculation with 50 people instead of 330 million and got about 0.605. EDIT: I caught another error, and my new probability is 0.596354 for 330,000,000 people. Last edited by: ChesterDog on Jun 4, 2020

Running a few simulations I get the following, but the variance is too high to imply there's a trend as the number of people grows.
100 people: 66.72%
1 000 people: 66.68%
10 000 people: 66.94%
100 000 people: 66.87%
1 000 000 people: 65.80%
Not surprisingly, the simulations show the answer is around ln(2) or 1-1/e. 1-1/e = 0.632 would be the answer for (not a) derangement; ln(2) = 0.693 for the alternating harmonic series. Last edited by: Ace2 on Jun 4, 2020

It’s all about making that GTA

Here is the answer from

This is my answer, but I got it by doing an exact solution for two and three people and noticed the answer was the same for both. With 330 million being an arbitrary number, I figured the answer was probably the same for any number of people, much like the previous puzzle about the probability that the last passenger to board a plane would sit in his correct seat.

I went back to this problem for the four-person case. Of course, the answer was still 2/3. I will probably make an Ask the Wizard question out of this. Here is my . I welcome all comments.
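The simulations reported above are easy to reproduce. Below is a sketch (my own code, not the poster's) of the event the thread is discussing: some participant's interval overlaps every other participant's, with each person's arrival and departure drawn as two uniform times within the hour.

```python
import random

def someone_meets_everyone(n):
    """One trial: n people each draw two uniform times in [0, 1);
    the earlier is the arrival, the later the departure.  Returns
    True if some person's interval overlaps everyone else's."""
    people = [tuple(sorted((random.random(), random.random())))
              for _ in range(n)]
    by_dep = sorted(people, key=lambda p: p[1])   # ascending departures
    by_arr = sorted(people, key=lambda p: p[0])   # ascending arrivals
    for person in people:
        a, d = person
        # Earliest departure / latest arrival among the *other* people.
        earliest_dep = by_dep[1][1] if by_dep[0] is person else by_dep[0][1]
        latest_arr = by_arr[-2][0] if by_arr[-1] is person else by_arr[-1][0]
        if a < earliest_dep and d > latest_arr:
            return True
    return False

def estimate(n, trials):
    """Monte Carlo estimate of the probability over the given trials."""
    return sum(someone_meets_everyone(n) for _ in range(trials)) / trials
```

For small n and enough trials this hovers around 0.667, consistent with the 66-67% simulation figures and the exact answer of 2/3 reported in the thread.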
ball mill throughput

Oct 1, 2018 · Radhakrishnan developed a model-based controller for ball mill and hydrocyclone circuit to serve as a supervisory optimizing controller on the regulatory PID loops. ... Installation of the FECS as a supervisory controller on the Sungun DCS control system showed an increase of % in mill throughput and decrease in feeding.

WhatsApp: +86 18838072829

A Slice Mill is the same diameter as the production mill but shorter in length. Request Price Quote. Click to request a ball mill quote online or call to speak with an expert at Paul O. Abbe® to help you determine which design and size ball mill would be best for your process. See our Size Reduction Options.

Sep 23, 2021 · This study proposed the use of an instrumented grinding media to assess solid loading inside a ball mill, with size and density of the instrumented ball comparable to that of the ordinary grinding media. ... A comparative study of prediction methods for semiautogenous grinding mill throughput. Minerals Engineering (IF ). Pub Date: 2023-10.

Aug 4, 2023 · High throughput: SAG mills are capable of processing large amounts of ore, making them ideal for operations that require high production rates. They can handle both coarse and fine grinding, resulting in improved overall efficiency. Energy savings: Compared to traditional ball mills, SAG mills consume less energy, leading to significant cost savings.

Modifying blasting practices to achieve a more suitable mill feed size – which varies according to the crushing/grinding circuit – can achieve up to a 30% increase in throughput. Following an initial benchmarking of an operation's practices, SRK can advise on how value-added blasting will deliver improvements in both mill capacity and ...
Feb 17, 2023 · Operators will need to decide how to take the dividend of increased ball mill efficiency, which could be seen as an opportunity either to drive throughput or to reduce grind size and increase recoveries at a constant throughput. The optimal choice will depend on the properties of the ore body and the existing configuration of the circuit.

Feb 13, 2017 · CERAMIC LINED BALL MILL. Ball Mills can be supplied with either ceramic or rubber linings for wet or dry grinding, for continuous or batch type operation, in sizes from 15″ x 21″ to 8′ x 12′. High density ceramic linings of uniform hardness make possible thinner linings and greater and more effective grinding volume.

Nov 12, 2021 · ... ball mill throughput due to its proportional relationship to TPH in Bond's law (Equation (1)) and its strong correlation in the present dataset shown in Table 2. Figure 7a shows a ...

Jan 1, 2024 · A case study at the Tropicana Gold mining complex is shown that utilizes penetration rates from blasthole drilling and measurements of the comminution circuit to construct a data-driven, geometallurgical throughput prediction model of the ball mill, underlining the importance of compositional approaches for nonadditive geometric ...

Sep 1, 2013 · d min: smallest ball size in the mill [mm]; d max: largest ball size in the mill [mm]; f j: weight fraction in size interval i in the feed to the mill [–]; f 90: 90% passing size of the feed [L]; F: optimum weight fraction of mm balls in the make-up [–]; g i: weight fraction in size interval i in the feed to the circuit [–]; J: volume ...

Sep 20, 2022 · Bunker Hill Secures Ball Mill Capable of Increasing Annual Production Throughput Capacity to 2,100 tpd. Bunker Hill Mining Corp.
Tue, Sep 20, 2022, 4:00 AM · 3 min read

Oct 11, 2021 · The optimization of comminution circuits has traditionally relied on well-accepted comminution laws and ore hardness and grindability indices for ball/rod mills [9,10] and SAG mills [11,12,13]. These comminution models are routinely used for optimized grinding circuit design, using averages or ranges of ore hardness tests of the ...

Aug 6, 2015 · Rod mill grinding efficiencies have been shown to increase in the range of 5 to 15% with more dilute discharge slurry (i.e., increased feed water), at least down to the range of 45 to 50% solids by volume. The higher the discharge % solids in the plant rod mill, the better the candidate for feed water addition rate test work.

Download. The Mixer Mill MM 400 is a true multipurpose mill designed for dry, wet and cryogenic grinding of small volumes up to 2 x 20 ml. It mixes and homogenizes powders and suspensions with a frequency of 30 Hz within seconds – unbeatably fast and easy to operate. The compact benchtop unit is suitable for classic homogenization processes ...

Jun 29, 2018 · Results from these analyses are presented below. SAG mill water addition, as characterized by the percent solids content of the mill discharge, exerts a very strong influence on SAG mill performance. Each line of constant volumetric loading is made up of data of similar ore type and SAG ball charge levels. The beneficial effects of increased ...

The ball charge and ore charge volume is a variable, subject to what is the target for that operation. The type of mill also is a factor: if it is an overflow mill (subject to the diameter of the discharge port), the charge is usually up to about 40–45%. If it is a grate discharge you will have more flexibility of the total charge.
Jan 1, 1991 · Controlled variables: (i) mill throughput rate; (ii) mill discharge density; (iii) cyclone feed density; (iv) cyclone mass feed rate; (v) sump level; (vi) overflow product particle size; (vii) mill power. Manipulated variables: (i) fresh feed solids rate; (ii) fresh water rate; (iii) sump water rate; (iv) pumping rate. Chemical Engineering Science, Vol. 46, No. 3, pp. ...

Oct 28, 2021 · In addition, the value of the BWi determines the energy required for crushing and milling (depending on the BWi type, e.g., crushing work index, Bond ball mill work index, and Bond rod mill work index), and this measure can be used to optimize processing throughput. Bond Ball/Rod Mill Work Index.

Jan 1, 2005 · Once operational, the selected SAG mill size and operating conditions primarily control circuit throughput, while the ball-mill circuit installed power controls the final grind size. Particularly at mines where ore has become more competent during the life of the mine (and ore rarely gets softer as the mines go deeper) or at operations where ...

Mar 15, 2015 · Prediction of ball filling effects on mill throughput for the TIS model. Simulation conditions: d = 40 mm and ϕ c = 70% of critical. The trend that could be observed was that a high ball filling leads to faster production of m 2, which agrees with accepted industrial practice (Austin et al., 1984).

Oct 9, 2015 · The circulating loads generated in a typical ball mill cyclone circuit contain a small fraction of bypassed fines. The concept that high circulating loads will result in overgrinding can be refuted by regarding increases in circulating load in the same vein as multistage grinding. That is, for every incremental increase in circulating load of ...

The maximum throughput may be SAG or Ball Mill limited, depending on ore conditions.
Figure 12. Conceptual interaction of SAG and Ball Mill circuits (plot of A*b, BMWi, pebble crusher tph, F80, P80, SAG kW, T80 and BM kW; the figure itself does not reproduce in text). The sensitivity of the Excel model to A*b was fine-tuned using JKSimMet (Wiseman and Richardson, 1991) to better capture ...

If the power draw on a ball mill is 1,180 kW, what is the Bond work index for the following charge: throughput = 150 t/h; 80% passing size of the feed = 450 µm; 80% passing size of the output from the mill = 75 µm?

Jul 1, 2003 · The lack of constraints in ball mill capacity in the published ball mill models may result in unrealistic predictions of mill throughput. This paper presents an overfilling indicator for wet overflow discharge ball mills. The overfilling indicator is based on the slurry residence time in a given mill and given operational conditions.

May 1, 2013 · This model structure is proposed for multi-compartment ball mills for the calculation of breakage (r) and discharge rate (d) functions in different segments of the mill using the experimentally measured mill inside size distributions (p 1 and p 2) and calculated mill feed and discharge size distributions (p 3), in addition to segment hold-up (s 1, s 2, ...

Mar 6, 2023 · The throughput of a cement grinding ball mill depends on several factors, including the size and hardness of the feed material, the size and speed of the mill, the filling level of the grinding ...
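The work-index question quoted above can be answered with the standard Bond equation, W = 10 Wi (1/√P80 − 1/√F80), where W is the specific grinding energy in kWh/t and the 80% passing sizes are in µm. A minimal sketch of the back-calculation, using only the figures given in the excerpt:

```python
import math

def bond_work_index(power_kw, throughput_tph, f80_um, p80_um):
    """Back-calculate the Bond work index Wi (kWh/t) from
    W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), with W = power / throughput."""
    specific_energy = power_kw / throughput_tph            # kWh per tonne
    size_term = 10.0 * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))
    return specific_energy / size_term

# Values from the excerpt: 1,180 kW, 150 t/h, F80 = 450 um, P80 = 75 um
wi = bond_work_index(1180, 150, 450, 75)
```

This gives Wi ≈ 11.5 kWh/t for the stated charge.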
Continuity and Infinitesimals First published Wed Jul 27, 2005; substantive revision Mon Jul 20, 2009 The usual meaning of the word continuous is “unbroken” or “uninterrupted”: thus a continuous entity—a continuum—has no “gaps.” We commonly suppose that space and time are continuous, and certain philosophers have maintained that all natural processes occur continuously: witness, for example, Leibniz's famous apothegm natura non facit saltus—“nature makes no jump.” In mathematics the word is used in the same general sense, but has had to be furnished with increasingly precise definitions. So, for instance, in the later 18th century continuity of a function was taken to mean that infinitesimal changes in the value of the argument induced infinitesimal changes in the value of the function. With the abandonment of infinitesimals in the 19th century this definition came to be replaced by one employing the more precise concept of limit. Traditionally, an infinitesimal quantity is one which, while not necessarily coinciding with zero, is in some sense smaller than any finite quantity. For engineers, an infinitesimal is a quantity so small that its square and all higher powers can be neglected. In the theory of limits the term “infinitesimal” is sometimes applied to any sequence whose limit is zero. An infinitesimal magnitude may be regarded as what remains after a continuum has been subjected to an exhaustive analysis, in other words, as a continuum “viewed in the small.” It is in this sense that continuous curves have sometimes been held to be “composed” of infinitesimal straight lines. Infinitesimals have a long and colourful history. They make an early appearance in the mathematics of the Greek atomist philosopher Democritus (c. 450 B.C.E.), only to be banished by the mathematician Eudoxus (c. 350 B.C.E.) in what was to become official “Euclidean” mathematics. 
Taking the somewhat obscure form of “indivisibles,” they reappear in the mathematics of the late middle ages and later played an important role in the development of the calculus. Their doubtful logical status led in the nineteenth century to their abandonment and replacement by the limit concept. In recent years, however, the concept of infinitesimal has been refounded on a rigorous basis. We are all familiar with the idea of continuity. To be continuous^[1] is to constitute an unbroken or uninterrupted whole, like the ocean or the sky. A continuous entity—a continuum—has no “gaps”. Opposed to continuity is discreteness: to be discrete^[2] is to be separated, like the scattered pebbles on a beach or the leaves on a tree. Continuity connotes unity; discreteness, plurality. While it is the fundamental nature of a continuum to be undivided, it is nevertheless generally (although not invariably) held that any continuum admits of repeated or successive division without limit. This means that the process of dividing it into ever smaller parts will never terminate in an indivisible or an atom—that is, a part which, lacking proper parts itself, cannot be further divided. In a word, continua are divisible without limit or infinitely divisible. The unity of a continuum thus conceals a potentially infinite plurality. In antiquity this claim met with the objection that, were one to carry out completely—if only in imagination—the process of dividing an extended magnitude, such as a continuous line, then the magnitude would be reduced to a multitude of atoms—in this case, extensionless points—or even, possibly, to nothing at all. But then, it was held, no matter how many such points there may be—even if infinitely many—they cannot be “reassembled” to form the original magnitude, for surely a sum of extensionless elements still lacks extension^[3]. 
Moreover, if indeed (as seems unavoidable) infinitely many points remain after the division, then, following Zeno, the magnitude may be taken to be a (finite) motion, leading to the seemingly absurd conclusion that infinitely many points can be “touched” in a finite time. Such difficulties attended the birth, in the 5^th century B.C.E., of the school of atomism. The founders of this school, Leucippus and Democritus, claimed that matter, and, more generally, extension, is not infinitely divisible. Not only would the successive division of matter ultimately terminate in atoms, that is, in discrete particles incapable of being further divided, but matter had in actuality to be conceived as being compounded from such atoms. In attacking infinite divisibility the atomists were at the same time mounting a claim that the continuous is ultimately reducible to the discrete, whether it be at the physical, theoretical, or perceptual level. The eventual triumph of the atomic theory in physics and chemistry in the 19^th century paved the way for the idea of “atomism”, as applying to matter, at least, to become widely familiar: it might well be said, to adapt Sir William Harcourt's famous observation in respect of the socialists of his day, “We are all atomists now.” Nevertheless, only a minority of philosophers of the past espoused atomism at a metaphysical level, a fact which may explain why the analogous doctrine upholding continuity lacks a familiar name: that which is unconsciously acknowledged requires no name. Peirce coined the term synechism (from Greek syneche, “continuous”) for his own philosophy—a philosophy permeated by the idea of continuity in its sense of “being connected”^[4]. In this article I shall appropriate Peirce's term and use it in a sense shorn of its Peircean overtones, simply as a contrary to atomism. I shall also use the term “divisionism” for the more specific doctrine that continua are infinitely divisible. 
Closely associated with the concept of a continuum is that of infinitesimal.^[5] An infinitesimal magnitude has been somewhat hazily conceived as a continuum “viewed in the small,” an “ultimate part” of a continuum. In something like the same sense as a discrete entity is made up of its individual units, its “indivisibles”, so, it was maintained, a continuum is “composed” of infinitesimal magnitudes, its ultimate parts. (It is in this sense, for example, that mathematicians of the 17^th century held that continuous curves are “composed” of infinitesimal straight lines.) Now the “coherence” of a continuum entails that each of its (connected) parts is also a continuum, and, accordingly, divisible. Since points are indivisible, it follows that no point can be part of a continuum. Infinitesimal magnitudes, as parts of continua, cannot, of necessity, be points: they are, in a word, nonpunctiform. Magnitudes are normally taken as being extensive quantities, like mass or volume, which are defined over extended regions of space. By contrast, infinitesimal magnitudes have been construed as intensive magnitudes resembling locally defined intensive quantities such as temperature or density. The effect of “distributing” or “integrating” an intensive quantity over such an intensive magnitude is to convert the former into an infinitesimal extensive quantity: thus temperature is transformed into infinitesimal heat and density into infinitesimal mass. When the continuum is the trace of a motion, the associated infinitesimal/intensive magnitudes have been identified as potential magnitudes—entities which, while not possessing true magnitude themselves, possess a tendency to generate magnitude through motion, so manifesting “becoming” as opposed to “being”. An infinitesimal number is one which, while not coinciding with zero, is in some sense smaller than any finite number. 
This sense has often been taken to be the failure to satisfy the Principle of Archimedes, which amounts to saying that an infinitesimal number is one that, no matter how many times it is added to itself, the result remains less than any finite number. In the engineer's practical treatment of the differential calculus, an infinitesimal is a number so small that its square and all higher powers can be neglected. In the theory of limits the term “infinitesimal” is sometimes applied to any sequence whose limit is zero. The concept of an indivisible is closely allied to, but to be distinguished from, that of an infinitesimal. An indivisible is, by definition, something that cannot be divided, which is usually understood to mean that it has no proper parts. Now a partless, or indivisible entity does not necessarily have to be infinitesimal: souls, individual consciousnesses, and Leibnizian monads all supposedly lack parts but are surely not infinitesimal. But these have in common the feature of being unextended; extended entities such as lines, surfaces, and volumes prove a much richer source of “indivisibles”. Indeed, if the process of dividing such entities were to terminate, as the atomists maintained, it would necessarily issue in indivisibles of a qualitatively different nature. In the case of a straight line, such indivisibles would, plausibly, be points; in the case of a circle, straight lines; and in the case of a cylinder divided by sections parallel to its base, circles. In each case the indivisible in question is infinitesimal in the sense of possessing one fewer dimension than the figure from which it is generated. In the 16^th and 17^th centuries indivisibles in this sense were used in the calculation of areas and volumes of curvilinear figures, a surface or volume being thought of as a collection, or sum, of linear, or planar indivisibles respectively. The concept of infinitesimal was beset by controversy from its beginnings. 
The idea makes an early appearance in the mathematics of the Greek atomist philosopher Democritus c. 450 B.C.E., only to be banished c. 350 B.C.E. by Eudoxus in what was to become official “Euclidean” mathematics. We have noted their reappearance as indivisibles in the sixteenth and seventeenth centuries: in this form they were systematically employed by Kepler, Galileo's student Cavalieri, the Bernoulli clan, and a number of other mathematicians. In the guise of the beguilingly named “linelets” and “timelets”, infinitesimals played an essential role in Barrow's “method for finding tangents by calculation”, which appears in his Lectiones Geometricae of 1670. As “evanescent quantities” infinitesimals were instrumental (although later abandoned) in Newton's development of the calculus, and, as “inassignable quantities”, in Leibniz's. The Marquis de l'Hôpital, who in 1696 published the first treatise on the differential calculus (entitled Analyse des Infiniments Petits pour l'Intelligence des Lignes Courbes), invokes the concept in postulating that “a curved line may be regarded as being made up of infinitely small straight line segments,” and that “one can take as equal two quantities differing by an infinitely small quantity.” However useful it may have been in practice, the concept of infinitesimal could scarcely withstand logical scrutiny. Derided by Berkeley in the 18^th century as “ghosts of departed quantities”, in the 19^th century execrated by Cantor as “cholera-bacilli” infecting mathematics, and in the 20^th roundly condemned by Bertrand Russell as “unnecessary, erroneous, and self-contradictory”, these useful, but logically dubious entities were believed to have been finally supplanted in the foundations of analysis by the limit concept which took rigorous and final form in the latter half of the 19^th century. By the beginning of the 20^th century, the concept of infinitesimal had become, in analysis at least, a virtual “unconcept”. 
Nevertheless the proscription of infinitesimals did not succeed in extirpating them; they were, rather, driven further underground. Physicists and engineers, for example, never abandoned their use as a heuristic device for the derivation of correct results in the application of the calculus to physical problems. Differential geometers of the stature of Lie and Cartan relied on their use in the formulation of concepts which would later be put on a “rigorous” footing. And, in a technical sense, they lived on in the algebraists' investigations of nonarchimedean fields. A new phase in the long contest between the continuous and the discrete has opened in the past few decades with the refounding of the concept of infinitesimal on a solid basis. This has been achieved in two essentially different ways, the one providing a rigorous formulation of the idea of infinitesimal number, the other of infinitesimal magnitude. First, in the nineteen sixties Abraham Robinson, using methods of mathematical logic, created nonstandard analysis, an extension of mathematical analysis embracing both “infinitely large” and infinitesimal numbers in which the usual laws of the arithmetic of real numbers continue to hold, an idea which, in essence, goes back to Leibniz. Here by an infinitely large number is meant one which exceeds every positive integer; the reciprocal of any one of these is infinitesimal in the sense that, while being nonzero, it is smaller than every positive fraction 1/n. Much of the usefulness of nonstandard analysis stems from the fact that within it every statement of ordinary analysis involving limits has a succinct and highly intuitive translation into the language of infinitesimals.

The second development in the refounding of the concept of infinitesimal took place in the nineteen seventies with the emergence of synthetic differential geometry, also known as smooth infinitesimal analysis. Based on the ideas of the American mathematician F. W.
Lawvere, and employing the methods of category theory, smooth infinitesimal analysis provides an image of the world in which the continuous is an autonomous notion, not explicable in terms of the discrete. It provides a rigorous framework for mathematical analysis in which every function between spaces is smooth (i.e., differentiable arbitrarily many times, and so in particular continuous) and in which the use of limits in defining the basic notions of the calculus is replaced by nilpotent infinitesimals, that is, quantities so small (but not actually zero) that some power—most usefully, the square—vanishes. Smooth infinitesimal analysis embodies a concept of intensive magnitude in the form of infinitesimal tangent vectors to curves. A tangent vector to a curve at a point p on it is a short straight line segment l passing through the point and pointing along the curve. In fact we may take l actually to be an infinitesimal part of the curve. Curves in smooth infinitesimal analysis are “locally straight” and accordingly may be conceived as being “composed of” infinitesimal straight lines in de l'Hôpital's sense, or as being “generated” by an infinitesimal tangent vector. The development of nonstandard and smooth infinitesimal analysis has breathed new life into the concept of infinitesimal, and—especially in connection with smooth infinitesimal analysis—supplied novel insights into the nature of the continuum.

The opposition between Continuity and Discreteness played a significant role in ancient Greek philosophy. This probably derived from the still more fundamental question concerning the One and the Many, an antithesis lying at the heart of early Greek thought (see Stokes [1971]). The Greek debate over the continuous and the discrete seems to have been ignited by the efforts of Eleatic philosophers such as Parmenides (c. 515 B.C.E.), and Zeno (c. 460 B.C.E.) to establish their doctrine of absolute monism^[6].
They were concerned to show that the divisibility of Being into parts leads to contradiction, so forcing the conclusion that the apparently diverse world is a static, changeless unity.^[7] In his Way of Truth Parmenides asserts that Being is homogeneous and continuous. However in asserting the continuity of Being Parmenides is likely no more than underscoring its essential unity. Parmenides seems to be claiming that Being is more than merely continuous—that it is, in fact, a single whole, indeed an indivisible whole. The single Parmenidean existent is a continuum without parts, at once a continuum and an atom. If Parmenides was a synechist, his absolute monism precluded his being at the same time a divisionist. In support of Parmenides' doctrine of changelessness Zeno formulated his famous paradoxes of motion. (see entry on Zeno's paradoxes) The Dichotomy and Achilles paradoxes both rest explicitly on the limitless divisibility of space and time. The doctrine of Atomism,^[8] which seems to have arisen as an attempt at escaping the Eleatic dilemma, was first and foremost a physical theory. It was mounted by Leucippus (fl. 440 B.C.E.) and Democritus (b. 460–457 B.C.E.) who maintained that matter was not divisible without limit, but composed of indivisible, solid, homogeneous, spatially extended corpuscles, all below the level of perceptibility.

Atomism was challenged by Aristotle (384–322 B.C.E.), who was the first to undertake the systematic analysis of continuity and discreteness. A thoroughgoing synechist, he maintained that physical reality is a continuous plenum, and that the structure of a continuum, common to space, time and motion, is not reducible to anything else. His answer to the Eleatic problem was that continuous magnitudes are potentially divisible to infinity, in the sense that they may be divided anywhere, though they cannot be divided everywhere at the same time.
Aristotle identifies continuity and discreteness as attributes applying to the category of Quantity^[9]. As examples of continuous quantities, or continua, he offers lines, planes, solids (i.e., solid bodies), extensions, movement, time and space; among discrete quantities he includes number^[10] and speech^[11]. He also lays down definitions of a number of terms, including continuity. In effect, Aristotle defines continuity as a relation between entities rather than as an attribute appertaining to a single entity; that is to say, he does not provide an explicit definition of the concept of continuum. He observes that a single continuous whole can be brought into existence by “gluing together” two things which have been brought into contact, which suggests that the continuity of a whole should derive from the way its parts “join up”. Accordingly for Aristotle quantities such as lines and planes, space and time are continuous by virtue of the fact that their constituent parts “join together at some common boundary”. By contrast no constituent parts of a discrete quantity can possess a common boundary. One of the central theses Aristotle is at pains to defend is the irreducibility of the continuum to discreteness—that a continuum cannot be “composed” of indivisibles or atoms, parts which cannot themselves be further divided. Aristotle sometimes recognizes infinite divisibility—the property of being divisible into parts which can themselves be further divided, the process never terminating in an indivisible—as a consequence of continuity as he characterizes the notion. But on occasion he takes the property of infinite divisibility as defining continuity. It is this definition of continuity that figures in Aristotle's demonstration of what has come to be known as the isomorphism thesis, which asserts that either magnitude, time and motion are all continuous, or they are all discrete. 
The question of whether magnitude is perpetually divisible into smaller units, or divisible only down to some atomic magnitude, leads to the dilemma of divisibility (see Miller [1982]), a difficulty that Aristotle necessarily had to face in connection with his analysis of the continuum. In the dilemma's first, or nihilistic, horn, it is argued that, were magnitude everywhere divisible, the process of carrying out this division completely would reduce a magnitude to extensionless points, or perhaps even to nothingness. The second, or atomistic, horn starts from the assumption that magnitude is not everywhere divisible and leads to the equally unpalatable conclusion (for Aristotle, at least) that indivisible magnitudes must exist. As a thoroughgoing materialist, Epicurus^[12] (341–271 B.C.E.) could not accept the notion of potentiality on which Aristotle's theory of continuity rested, and so was propelled towards atomism in both its conceptual and physical senses. Like Leucippus and Democritus, Epicurus felt it necessary to postulate the existence of physical atoms, but to avoid Aristotle's strictures he proposed that these should not be themselves conceptually indivisible, but should contain conceptually indivisible parts. Aristotle had shown that a continuous magnitude could not be composed of points, that is, indivisible units lacking extension, but he had not shown that an indivisible unit must necessarily lack extension. Epicurus met Aristotle's argument that a continuum could not be composed of such indivisibles by taking indivisibles to be partless units of magnitude possessing extension. In opposition to the atomists, the Stoic philosophers Zeno of Citium (fl. 250 B.C.E.) and Chrysippus (280–206 B.C.E.) upheld the Aristotelian position that space, time, matter and motion are all continuous (see Sambursky [1963], [1971]; White [1992]). And, like Aristotle, they explicitly rejected any possible existence of void within the cosmos.
The cosmos is pervaded by a continuous invisible substance which they called pneuma (Greek: “breath”). This pneuma—which was regarded as a kind of synthesis of air and fire, two of the four basic elements, the others being earth and water—was conceived as being an elastic medium through which impulses are transmitted by wave motion. All physical occurrences were viewed as being linked through tensile forces in the pneuma, and matter itself was held to derive its qualities from the “binding” properties of the pneuma it contains. The scholastic philosophers of Medieval Europe, in thrall to the massive authority of Aristotle, mostly subscribed in one form or another to the thesis, argued with great effectiveness by the Master in Book VI of the Physics, that continua cannot be composed of indivisibles. On the other hand, the avowed infinitude of the Deity of scholastic theology, which ran counter to Aristotle's thesis that the infinite existed only in a potential sense, emboldened certain of the Schoolmen to speculate that the actual infinite might be found even outside the Godhead, for instance in the assemblage of points on a continuous line. A few scholars of the time, for example Henry of Harclay (c. 1275–1317) and Nicholas of Autrecourt (c. 1300–69), chose to follow Epicurus in upholding atomism and attempted to circumvent Aristotle's counterarguments (see Pyle [1997]). This incipient atomism met with a determined synechist rebuttal, initiated by John Duns Scotus (c. 1266–1308). In his analysis of the problem of “whether an angel can move from place to place with a continuous motion” he offers a pair of purely geometrical arguments against the composition of a continuum out of indivisibles. One of these arguments is that if the diagonal and the side of a square were both composed of points, then not only would the two be commensurable in violation of Book X of Euclid, they would even be equal.
In the other, two unequal circles are constructed about a common centre, and from the supposition that the larger circle is composed of points, part of an angle is shown to be equal to the whole, in violation of Euclid's axiom V. William of Ockham (c. 1280–1349) brought a considerable degree of dialectical subtlety^[13] to his analysis of continuity, an analysis that has been the subject of much scholarly dispute^[14]. For Ockham the principal difficulty presented by the continuous is the infinite divisibility of space, and in general, that of any continuum. The treatment of continuity in the first book of his Quodlibet of 1322–7 rests on the idea that between any two points on a line there is a third—perhaps the first explicit formulation of the property of density—and on the distinction between a continuum “whose parts form a unity” and a contiguum of juxtaposed things. Ockham recognizes that it follows from the property of density that on arbitrarily small stretches of a line infinitely many points must lie, but resists the conclusion that lines, or indeed any continuum, consist of points. Concerned, rather, to determine “the sense in which the line may be said to consist or to be made up of anything”, Ockham claims that “no part of the line is indivisible, nor is any part of a continuum indivisible.” While Ockham does not assert that a line is actually “composed” of points, he had the insight, startling in its prescience, that a punctate and yet continuous line becomes a possibility when conceived as a dense array of points, rather than as an assemblage of points in contiguous succession. The most ambitious and systematic attempt at refuting atomism in the 14^th century was mounted by Thomas Bradwardine (c. 1290–1349). The purpose of his Tractatus de Continuo (c.
1330) was to “prove that the opinion which maintains continua to be composed of indivisibles is false.” This was to be achieved by setting forth a number of “first principles” concerning the continuum—akin to the axioms and postulates of Euclid's Elements—and then demonstrating that the further assumption that a continuum is composed of indivisibles leads to absurdities (see Murdoch [1957]). The views on the continuum of Nicolaus Cusanus (1401–64), a champion of the actual infinite, are of considerable interest. In his De Mente Idiotae of 1450, he asserts that any continuum, be it geometric, perceptual, or physical, is divisible in two senses, the one ideal, the other actual. Ideal division “progresses to infinity”; actual division terminates in atoms after finitely many steps. Cusanus's realist conception of the actual infinite is reflected in his quadrature of the circle (see Boyer [1959], p. 91). He took the circle to be an infinilateral regular polygon, that is, a regular polygon with an infinite number of (infinitesimally short) sides. By dividing it up into a correspondingly infinite number of triangles, its area, as for any regular polygon, can be computed as half the product of the apothem (in this case identical with the radius of the circle) and the perimeter. The idea of considering a curve as an infinilateral polygon was employed by a number of later thinkers, for instance, Kepler, Galileo and Leibniz. The early modern period saw the spread of knowledge in Europe of ancient geometry, particularly that of Archimedes, and a loosening of the Aristotelian grip on thinking. In regard to the problem of the continuum, the focus shifted away from metaphysics to technique, from the problem of “what indivisibles were, or whether they composed magnitudes” to “the new marvels one could accomplish with them” (see Murdoch [1957], p. 325) through the emerging calculus and mathematical analysis.
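Cusanus's quadrature via the infinilateral polygon can be illustrated with a finite computation. The area of a regular n-gon inscribed in a unit circle equals half the product of its apothem and perimeter, and as n grows this tends to π. The sketch below is a modern illustration of the convergence, not anything found in Cusanus.

```python
import math

def ngon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r:
    n congruent triangles, each of area (1/2) r^2 sin(2*pi/n)."""
    return 0.5 * n * r**2 * math.sin(2 * math.pi / n)

def half_apothem_times_perimeter(n, r=1.0):
    """Cusanus-style formula: half the apothem times the perimeter."""
    apothem = r * math.cos(math.pi / n)
    side = 2 * r * math.sin(math.pi / n)
    return 0.5 * apothem * (n * side)

# Both expressions agree exactly, and as n grows they tend to pi * r^2.
for n in (6, 60, 600, 6000):
    print(n, ngon_area(n), half_apothem_times_perimeter(n))
```

For the "infinilateral" polygon the apothem coincides with the radius and the perimeter with the circumference, recovering the familiar πr².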
Indeed, tracing the development of the continuum concept during this period is tantamount to charting the rise of the calculus. Traditionally, geometry is the branch of mathematics concerned with the continuous and arithmetic (or algebra) with the discrete. The infinitesimal calculus that took form in the 16^th and 17^th centuries, which had as its primary subject matter continuous variation, may be seen as a kind of synthesis of the continuous and the discrete, with infinitesimals bridging the gap between the two. The widespread use of indivisibles and infinitesimals in the analysis of continuous variation by the mathematicians of the time testifies to the affirmation of a kind of mathematical atomism which, while logically questionable, made possible the spectacular mathematical advances with which the calculus is associated. It was thus to be the infinitesimal, rather than the infinite, that served as the mathematical stepping stone between the continuous and the discrete. Johann Kepler (1571–1630) made abundant use of infinitesimals in his calculations. In his Nova Stereometria of 1615, a work actually written as an aid in calculating the volumes of wine casks, he regards curves as being infinilateral polygons, and solid bodies as being made up of infinitesimal cones or infinitesimally thin discs (see Baron [1987], pp. 108–116; Boyer [1969], pp. 106–110). Such uses are in keeping with Kepler's customary use of infinitesimals of the same dimension as the figures they constitute; but he also used indivisibles on occasion. He spoke, for example, of a cone as being composed of circles, and in his Astronomia Nova of 1609, the work in which he states his famous laws of planetary motion, he takes the area of an ellipse to be the “sum of the radii” drawn from the focus. 
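Kepler's device of treating a solid as made up of infinitesimally thin discs can be mimicked with finitely many thin slabs. The sketch below is a modern illustration rather than Kepler's own computation; the example solid (a sphere) and the slab count are choices of convenience.

```python
import math

def sphere_volume_by_discs(r=1.0, n=100000):
    """Approximate the volume of a sphere of radius r by summing n thin
    discs of thickness dx, the disc at abscissa x having radius
    sqrt(r^2 - x^2), in the spirit of Kepler's stereometry."""
    dx = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx          # midpoint of the i-th slab
        total += math.pi * (r**2 - x**2) * dx
    return total

print(sphere_volume_by_discs())   # close to (4/3) * pi
```

As the discs are made thinner the sum approaches the exact volume (4/3)πr³, which is the intuition behind Kepler's use of infinitesimal elements.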
It seems to have been Kepler who first introduced the idea, which was later to become a reigning principle in geometry, of continuous change of a mathematical object, in this case, of a geometric figure. In his Astronomiae pars Optica of 1604 Kepler notes that all the conic sections are continuously derivable from one another both through focal motion and by variation of the angle with the cone of the cutting plane. Galileo Galilei (1564–1642) advocated a form of mathematical atomism in which the influence of both the Democritean atomists and the Aristotelian scholastics can be discerned. This emerges when one turns to the First Day of Galileo's Dialogues Concerning Two New Sciences (1638). Salviati, Galileo's spokesman, maintains, contrary to Bradwardine and the Aristotelians, that continuous magnitude is made up of indivisibles, indeed an infinite number of them. Salviati/Galileo recognizes that this infinity of indivisibles will never be produced by successive subdivision, but claims to have a method for generating it all at once, thereby removing it from the realm of the potential into actual realization: this “method for separating and resolving, at a single stroke, the whole of infinity” turns out simply to be the act of bending a straight line into a circle. Here Galileo finds an ingenious “metaphysical” application of the idea of regarding the circle as an infinilateral polygon. When the straight line has been bent into a circle Galileo seems to take it that the line has thereby been rendered into indivisible parts, that is, points. But if one considers that these parts are the sides of the infinilateral polygon, they are better characterized not as indivisible points, but rather as unbendable straight lines, each at once part of and tangent to the circle^[15].
Galileo does not mention this possibility, but nevertheless it does not seem fanciful to detect the germ here of the idea of considering a curve as an assemblage of infinitesimal “unbendable” straight lines.^[16] It was Galileo's pupil and colleague Bonaventura Cavalieri (1598–1647) who refined the use of indivisibles into a reliable mathematical tool (see Boyer [1959]); indeed the “method of indivisibles” remains associated with his name to the present day. Cavalieri nowhere explains precisely what he understands by the word “indivisible”, but it is apparent that he conceived of a surface as composed of a multitude of equispaced parallel lines and of a volume as composed of equispaced parallel planes, these being termed the indivisibles of the surface and the volume respectively. While Cavalieri recognized that these “multitudes” of indivisibles must be unboundedly large, indeed was prepared to regard them as being actually infinite, he avoided following Galileo into ensnarement in the coils of infinity by grasping that, for the “method of indivisibles” to work, the precise “number” of indivisibles involved did not matter. Indeed, the essence of Cavalieri's method was the establishing of a correspondence between the indivisibles of two “similar” configurations, and in the cases Cavalieri considers it is evident that the correspondence is suggested on solely geometric grounds, rendering it quite independent of number. The very statement of Cavalieri's principle embodies this idea: if plane figures are included between a pair of parallel lines, and if their intercepts on any line parallel to the including lines are in a fixed ratio, then the areas of the figures are in the same ratio. (An analogous principle holds for solids.) Cavalieri's method is in essence that of reduction of dimension: solids are reduced to planes with comparable areas and planes to lines with comparable lengths.
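Cavalieri's principle lends itself to a simple numerical check: if every intercept of one plane figure is a fixed multiple of the corresponding intercept of another, the computed areas stand in the same ratio. The sketch below sums equispaced intercepts for two triangles of my own choosing; it is an illustration, not Cavalieri's procedure.

```python
def area_by_intercepts(width, n=100000, height=1.0):
    """Sum n equispaced 'indivisible' intercepts, each weighted by a small
    thickness dy, to recover an area in the spirit of Cavalieri's method."""
    dy = height / n
    return sum(width(i * dy) * dy for i in range(n))

# Two triangles between the parallels y = 0 and y = 1: at every height y
# the second figure's intercept is 3 times the first's.
w1 = lambda y: 1.0 - y
w2 = lambda y: 3.0 * (1.0 - y)
a1 = area_by_intercepts(w1)
a2 = area_by_intercepts(w2)
print(a1, a2, a2 / a1)   # areas in ratio 3, as the principle asserts
```

The ratio of the two sums is exactly the ratio of the intercepts, independently of how many "indivisibles" are used, which is precisely the point Cavalieri grasped.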
While this method suffices for the computation of areas or volumes, it cannot be applied to rectify curves, since the reduction in this case would be to points, and no meaning can be attached to the “ratio” of two points. For rectification a curve has, it was later realized, to be regarded as the sum, not of indivisibles, that is, points, but rather of infinitesimal straight lines, its microsegments. René Descartes (1596–1650) employed infinitesimalist techniques, including Cavalieri's method of indivisibles, in his mathematical work. But he avoided the use of infinitesimals in the determination of tangents to curves, instead developing purely algebraic methods for the purpose. Some of his sharpest criticism was directed at those mathematicians, such as Fermat, who used infinitesimals in the construction of tangents. As a philosopher Descartes may be broadly characterized as a synechist. His philosophical system rests on two fundamental principles: the celebrated Cartesian dualism—the division between mind and matter—and the less familiar identification of matter and spatial extension. In the Meditations Descartes distinguishes mind and matter on the grounds that the corporeal, being spatially extended, is divisible, while the mental is partless. The identification of matter and spatial extension has the consequence that matter is continuous and divisible without limit. Since extension is the sole essential property of matter and, conversely, matter always accompanies extension, matter must be ubiquitous. Descartes' space is accordingly, as it was for the Stoics, a plenum pervaded by a continuous medium. The concept of infinitesimal had arisen with problems of a geometric character and infinitesimals were originally conceived as belonging solely to the realm of continuous magnitude as opposed to that of discrete number. But from the algebra and analytic geometry of the 16^th and 17^th centuries there issued the concept of infinitesimal number. 
The idea first appears in the work of Pierre de Fermat (1601–65) on the determination of maximum and minimum (extreme) values, published in 1638 (see Boyer [1959]). Fermat's treatment of maxima and minima contains the germ of the fertile technique of “infinitesimal variation”, that is, the investigation of the behaviour of a function by subjecting its variables to small changes. Fermat applied this method in determining tangents to curves and centres of gravity. Isaac Barrow^[17] (1630–77) was one of the first mathematicians to grasp the reciprocal relation between the problem of quadrature and that of finding tangents to curves—in modern parlance, between integration and differentiation. In his Lectiones Geometricae of 1670, Barrow observes, in essence, that if the quadrature of a curve y = f(x) is known, with the area up to x given by F(x), then the subtangent to the curve y = F(x) is measured by the ratio of its ordinate to the ordinate of the original curve. Barrow, a thoroughgoing synechist, regarded the conflict between divisionism and atomism as a live issue, and presented a number of arguments against mathematical atomism, the strongest of which is that atomism contradicts many of the basic propositions of Euclidean geometry. Barrow conceived of continuous magnitudes as being generated by motions, and so necessarily dependent on time, a view that seems to have had a strong influence on the thinking of his illustrious pupil Isaac Newton^[18] (1642–1727). Newton's meditations during the plague year 1665–66 issued in the invention of what he called the “Calculus of Fluxions”, the principles and methods of which were presented in three tracts published many years after they were written^[19]: De analysi per aequationes numero terminorum infinitas; Methodus fluxionum et serierum infinitarum; and De quadratura curvarum. Newton's approach to the calculus rests, even more firmly than did Barrow's, on the conception of continua as being generated by motion.
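Barrow's observation can be tested numerically: if F(x) measures the area under y = f(x) up to x, the subtangent of y = F(x), that is, its ordinate divided by its slope, equals the ratio F(x)/f(x) of the two ordinates. The sketch below uses the hypothetical pair f(x) = x², F(x) = x³/3, chosen purely for illustration.

```python
def f(x):            # the original curve
    return x * x

def F(x):            # its quadrature: the area under f from 0 to x
    return x**3 / 3

def subtangent(g, x, h=1e-6):
    """Subtangent of y = g(x): the ordinate divided by the slope, the
    slope taken as a symmetric difference quotient."""
    slope = (g(x + h) - g(x - h)) / (2 * h)
    return g(x) / slope

x = 2.0
print(subtangent(F, x))      # numerically equal to ...
print(F(x) / f(x))           # ... Barrow's ratio of ordinates, (8/3)/4
```

In modern terms this is just F′ = f: the slope of the quadrature curve is the ordinate of the original one.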
But Newton's exploitation of the kinematic conception went much deeper than had Barrow's. In De Analysi, for example, Newton introduces a notation for the “momentary increment” (moment)—evidently meant to represent a moment or instant of time—of the abscissa or the area of a curve, with the abscissa itself representing time. This “moment”—effectively the same as the infinitesimal quantities previously introduced by Fermat and Barrow—Newton denotes by o in the case of the abscissa, and by ov in the case of the area. From the fact that Newton uses the letter v for the ordinate, it may be inferred that Newton is thinking of the curve as being a graph of velocity against time. By considering the moving line, or ordinate, as the moment of the area Newton established the generality of and reciprocal relationship between the operations of differentiation and integration, a fact that Barrow had grasped but had not put to systematic use. Before Newton, quadrature or integration had rested ultimately “on some process through which elemental triangles or rectangles were added together”, that is, on the method of indivisibles. Newton's explicit treatment of integration as inverse differentiation was the key to the integral calculus. In the Methodus fluxionum Newton makes explicit his conception of variable quantities as generated by motions, and introduces his characteristic notation. He calls the quantity generated by a motion a fluent, and its rate of generation a fluxion. The fluxion of a fluent x is denoted by x·, and its moment, or “infinitely small increment accruing in an infinitely short time o”, by x·o. The problem of determining a tangent to a curve is transformed into the problem of finding the relationship between the fluxions x· and z· when presented with an equation representing the relationship between the fluents x and z. (A quadrature is the inverse problem, that of determining the fluents when the fluxions are given.) 
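Newton's procedure of dividing by the moment o and then neglecting the terms still containing it can be mirrored numerically: for a fluent x(t) and z = x^n, the difference quotient (z(t+o) − z(t))/o approaches n x^(n−1) x· as o shrinks. The fluent x(t) = t² below is a hypothetical choice made for the illustration.

```python
def x(t):            # a fluent: the hypothetical choice x = t^2
    return t * t

def xdot(t):         # its fluxion
    return 2 * t

n = 3
def z(t):            # the derived fluent z = x^n
    return x(t) ** n

t = 1.5
# The ratio of moments approaches the fluxion as o shrinks.
for o in (1e-2, 1e-4, 1e-6):
    print(o, (z(t + o) - z(t)) / o)

# Newton's result: the fluxion of z is n * x^(n-1) * xdot.
print(n * x(t) ** (n - 1) * xdot(t))   # 45.5625
```

The shrinking residue in o is exactly what Newton "neglects"; in the later doctrine of prime and ultimate ratios it is absorbed into a limit.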
Thus, for example, in the case of the fluent z = x^n, Newton first forms z + z·o = (x + x·o)^n, expands the right-hand side using the binomial theorem, subtracts z = x^n, divides through by o, neglects all terms still containing o, and so obtains z· = nx^(n−1)x·. Newton later became discontented with the undeniable presence of infinitesimals in his calculus, and dissatisfied with the dubious procedure of “neglecting” them. In the preface to the De quadratura curvarum he remarks that there is no necessity to introduce into the method of fluxions any argument about infinitely small quantities. In their place he proposes to employ what he calls the method of prime and ultimate ratios. This method, in many respects an anticipation of the limit concept, receives a number of allusions in Newton's celebrated Principia mathematica philosophiae naturalis of 1687. Newton developed three approaches for his calculus, all of which he regarded as leading to equivalent results, but which varied in their degree of rigour. The first employed infinitesimal quantities which, while not finite, are at the same time not exactly zero. Finding that these eluded precise formulation, Newton focussed instead on their ratio, which is in general a finite number. If this ratio is known, the infinitesimal quantities forming it may be replaced by any suitable finite magnitudes—such as velocities or fluxions—having the same ratio. This is the method of fluxions. Recognizing that this method itself required a foundation, Newton supplied it with one in the form of the doctrine of prime and ultimate ratios, a kinematic form of the theory of limits. The philosopher-mathematician G. W. Leibniz^[20] (1646–1716) was greatly preoccupied with the problem of the composition of the continuum—the “labyrinth of the continuum”, as he called it.
Indeed we have it on his own testimony that his philosophical system—monadism—grew from his struggle with the problem of just how, or whether, a continuum can be built from indivisible elements. Leibniz asked himself: if we grant that each real entity is either a simple unity or a multiplicity, and that a multiplicity is necessarily an aggregation of unities, then under what head should a geometric continuum such as a line be classified? Now a line is extended and Leibniz held that extension is a form of repetition, so, a line, being divisible into parts, cannot be a (true) unity. It is then a multiplicity, and accordingly an aggregation of unities. But of what sort of unities? Seemingly, the only candidates for geometric unities are points, but points are no more than extremities of the extended, and in any case, as Leibniz knew, solid arguments going back to Aristotle establish that no continuum can be constituted from points. It follows that a continuum is neither a unity nor an aggregation of unities. Leibniz concluded that continua are not real entities at all; as “wholes preceding their parts” they have instead a purely ideal character. In this way he freed the continuum from the requirement that, as something intelligible, it must itself be simple or a compound of simples. Leibniz held that space and time, as continua, are ideal, and anything real, in particular matter, is discrete, compounded of simple unit substances he termed monads. Among the best known of Leibniz's doctrines is the Principle or Law of Continuity. In a somewhat nebulous form this principle had been employed on occasion by a number of Leibniz's predecessors, including Cusanus and Kepler, but it was Leibniz who gave to the principle “a clarity of formulation which had previously been lacking and perhaps for this reason regarded it as his own discovery” (Boyer 1959, p. 217). 
In a letter to Bayle of 1687, Leibniz gave the following formulation of the principle: “in any supposed transition, ending in any terminus, it is permissible to institute a general reasoning in which the final terminus may be included.” This would seem to indicate that Leibniz considered “transitions” of any kind as continuous. Certainly he held this to be the case in geometry and for natural processes, where it appears as the principle Natura non facit saltus. According to Leibniz, it is the Law of Continuity that allows geometry and the evolving methods of the infinitesimal calculus to be applicable in physics. The Principle of Continuity furnished the chief grounds for Leibniz's rejection of material atomism. It also played an important underlying role in Leibniz's mathematical work, especially in his development of the infinitesimal calculus. Leibniz's essays Nova Methodus of 1684 and De Geometria Recondita of 1686 may be said to represent the official births of the differential and integral calculi, respectively. His approach to the calculus, in which the use of infinitesimals plays a central role, has combinatorial roots, traceable to his early work on derived sequences of numbers. Given a curve determined by correlated variables x, y, he wrote dx and dy for infinitesimal differences, or differentials, between the values of x and y, and dy/dx for the ratio of the two, which he then took to represent the slope of the curve at the corresponding point. This suggestive, if highly formal, procedure led Leibniz to evolve rules for calculating with differentials, which was achieved by appropriate modification of the rules of calculation for ordinary numbers. Although the use of infinitesimals was instrumental in Leibniz's approach to the calculus, in 1684 he introduced the concept of differential without mentioning infinitely small quantities, almost certainly in order to avoid foundational difficulties.
He states without proof the following rules of differentiation:

da = 0, for constant a
d(ax) = a dx
d(x + y − z) = dx + dy − dz
d(xy) = x dy + y dx
d(x/y) = (y dx − x dy)/y^2
d(x^p) = px^(p−1) dx, also for fractional p

But behind the formal beauty of these rules—an early manifestation of what was later to flower into differential algebra—the presence of infinitesimals makes itself felt, since Leibniz's definition of tangent employs both infinitely small distances and the conception of a curve as an infinilateral polygon. Leibniz conceived of differentials dx, dy as variables ranging over differences. This enabled him to take the important step of regarding the symbol d as an operator acting on variables, so paving the way for the iterated application of d, leading to the higher differentials d^2x = ddx, d^3x = d(d^2x), and in general d^(n+1)x = d(d^nx). Leibniz supposed that the first-order differentials dx, dy, … were incomparably smaller than, or infinitesimal with respect to, the finite quantities x, y, …, and, in general, that an analogous relation obtained between the (n+1)^th-order differentials d^(n+1)x and the n^th-order differentials d^nx. He also assumed that the n^th power (dx)^n of a first-order differential was of the same order of magnitude as an n^th-order differential d^nx, in the sense that the quotient d^nx/(dx)^n is a finite quantity. For Leibniz the incomparable smallness of infinitesimals derived from their failure to satisfy Archimedes' principle; and quantities differing only by an infinitesimal were to be considered equal. But while infinitesimals were conceived by Leibniz to be incomparably smaller than ordinary numbers, the Law of Continuity ensured that they were governed by the same laws as the latter. Leibniz's attitude toward infinitesimals and differentials seems to have been that they furnished the elements from which to fashion a formal grammar, an algebra, of the continuous.
Since he regarded continua as purely ideal entities, it was then perfectly consistent for him to maintain, as he did, that infinitesimal quantities themselves are no less ideal—simply useful fictions, introduced to shorten arguments and aid insight. Although Leibniz himself did not credit the infinitesimal or the (mathematical) infinite with objective existence, a number of his followers did not hesitate to do so. Among the most prominent of these was Johann Bernoulli (1667–1748). A letter of his to Leibniz written in 1698 contains the forthright assertion that “inasmuch as the number of terms in nature is infinite, the infinitesimal exists ipso facto.” One of his arguments for the existence of actual infinitesimals begins with the positing of the infinite sequence 1/2, 1/3, 1/4, …. If there are ten terms, one tenth exists; if a hundred, then a hundredth exists, etc.; and so if, as postulated, the number of terms is infinite, then the infinitesimal exists. Leibniz's calculus gained a wide audience through the publication in 1696, by Guillaume de L'Hôpital (1661–1704), of the first expository book on the subject, the Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes. This is based on two definitions:

1. Variable quantities are those that continually increase or decrease; and constant or standing quantities are those that continue the same while others vary.
2. The infinitely small part whereby a variable quantity is continually increased or decreased is called the differential of that quantity.

And two postulates:

1. Grant that two quantities, whose difference is an infinitely small quantity, may be taken (or used) indifferently for each other: or (what is the same thing) that a quantity, which is increased or decreased only by an infinitely small quantity, may be considered as remaining the same.
2.
Grant that a curve line may be considered as the assemblage of an infinite number of infinitely small right lines: or (what is the same thing) as a polygon with an infinite number of sides, each of an infinitely small length, which determine the curvature of the line by the angles they make with each other. Following Leibniz, L'Hôpital writes dx for the differential of a variable quantity x. A typical application of these definitions and postulates is the determination of the differential of a product: d(xy) = (x + dx)(y + dy) − xy = y dx + x dy + dx dy = y dx + x dy. Here the last step is justified by Postulate 1, since dx dy is infinitely small in comparison with y dx + x dy. Leibniz's calculus of differentials, resting as it did on somewhat insecure foundations, soon attracted criticism. The attack mounted by the Dutch physician Bernard Nieuwentijdt^[21] (1654–1718) in works of 1694–6 is of particular interest, since Nieuwentijdt offered his own account of infinitesimals, which conflicts with that of Leibniz and has striking features of its own. Nieuwentijdt postulates a domain of quantities, or numbers, subject to an ordering relation of greater or less. This domain includes the ordinary finite quantities, but it is also presumed to contain infinitesimal and infinite quantities—a quantity being infinitesimal, or infinite, when it is smaller, or, respectively, greater, than any arbitrarily given finite quantity. The whole domain is governed by a version of the Archimedean principle to the effect that zero is the only quantity incapable of being multiplied sufficiently many times to equal any given quantity. Infinitesimal quantities may be characterized as quotients b/m of a finite quantity b by an infinite quantity m. In contrast with Leibniz's differentials, Nieuwentijdt's infinitesimals have the property that the product of any pair of them vanishes; in particular each infinitesimal is “nilsquare” in that its square and all higher powers are zero.
This fact enables Nieuwentijdt to show that, for any curve given by an algebraic equation, the hypotenuse of the differential triangle generated by an infinitesimal abscissal increment e coincides with the segment of the curve between x and x + e. That is, a curve truly is an infinilateral polygon. The major differences between Nieuwentijdt's and Leibniz's calculi of infinitesimals may be summed up as follows. For Leibniz, infinitesimals are variables; for Nieuwentijdt, they are constants. For Leibniz, higher-order infinitesimals exist; for Nieuwentijdt, they do not. For Leibniz, products of infinitesimals are not absolute zeros; for Nieuwentijdt, they are. And for Leibniz, infinitesimals can be neglected when infinitely small with respect to other quantities, whereas for Nieuwentijdt (first-order) infinitesimals can never be neglected. In responding to Nieuwentijdt's assertion that squares and higher powers of infinitesimals vanish, Leibniz objected that it is rather strange to posit that a segment dx is different from zero and at the same time that the area of a square with side dx is equal to zero (Mancosu 1996, 161). Yet this oddity may be regarded as a consequence — apparently unremarked by Leibniz himself — of one of his own key principles, namely that curves may be considered as infinilateral polygons. Consider, for instance, the curve y = x^2. Given that the curve is an infinilateral polygon, the infinitesimal straight stretch of the curve between the abscissae 0 and dx must coincide with the tangent to the curve at the origin — in this case, the axis of abscissae — between these two points. But then the point (dx, dx^2) must lie on the axis of abscissae, which means that dx^2 = 0. Now Leibniz could retort that this argument depends crucially on the assumption that the portion of the curve between abscissae 0 and dx is indeed straight. If this be denied, then of course it does not follow that dx^2 = 0.
But if one grants, as Leibniz does, that there is an infinitesimal straight stretch of the curve (a side, that is, of an infinilateral polygon coinciding with the curve) between abscissae 0 and e, say, which does not reduce to a single point, then e cannot be equated to 0, and yet the above argument shows that e^2 = 0. It follows that, if curves are infinilateral polygons, then the “lengths” of the sides of these latter must be nilsquare infinitesimals. Accordingly, to do full justice to Leibniz's (as well as Nieuwentijdt's) conception, two sorts of infinitesimals are required: first, “differentials” obeying, as laid down by Leibniz, the same algebraic laws as finite quantities; and second, the (necessarily smaller) nilsquare infinitesimals which measure the lengths of the sides of infinilateral polygons. It may be said that Leibniz recognized the need for the first, but not the second, type of infinitesimal, and Nieuwentijdt vice versa. It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis (for both types of analysis see below). In fact it has been shown to be possible to combine the two approaches, so creating an analytic framework realizing both Leibniz's and Nieuwentijdt's conceptions of the infinitesimal. The insistence that infinitesimals obey precisely the same algebraic rules as finite quantities forced Leibniz and the defenders of his differential calculus into treating infinitesimals, in the presence of finite quantities, as if they were zeros, so that, for example, x + dx is treated as if it were the same as x. This was justified on the grounds that differentials are to be taken as variable, not fixed quantities, decreasing continually until reaching zero. Considered only in the “moment of their evanescence”, they were accordingly neither something nor absolute zeros.
Thus differentials (or infinitesimals) dx were ascribed variously the four following properties:

1. dx ≈ 0
2. neither dx = 0 nor dx ≠ 0
3. dx^2 = 0
4. dx → 0

where “≈” stands for “indistinguishable from”, and “→ 0” stands for “becomes vanishingly small”. Of these properties only the last, in which a differential is considered to be a variable quantity tending to 0, survived the 19^th century refounding of the calculus in terms of the limit concept^[22]. The leading practitioner of the calculus, indeed the leading mathematician of the 18^th century, was Leonhard Euler^[23] (1707–83). Philosophically Euler was a thoroughgoing synechist. Rejecting Leibnizian monadism, he favoured the Cartesian doctrine that the universe is filled with a continuous ethereal fluid, and upheld the wave theory of light over the corpuscular theory propounded by Newton. Euler rejected the concept of infinitesimal in its sense as a quantity less than any assignable magnitude and yet unequal to 0, arguing that differentials must be zeros, and dy/dx the quotient 0/0. Since for any number α, α · 0 = 0, Euler maintained that the quotient 0/0 could represent any number whatsoever^[24]. For Euler qua formalist the calculus was essentially a procedure for determining the value of the expression 0/0 in the manifold situations in which it arises as the ratio of evanescent increments. But in the mathematical analysis of natural phenomena, Euler, along with a number of his contemporaries, did employ what amount to infinitesimals in the form of minute, but more or less concrete “elements” of continua, treating them not as atoms or monads in the strict sense—as parts of a continuum they must of necessity be divisible—but as being of sufficient minuteness to preserve their rectilinear shape under infinitesimal flow, yet allowing their volume to undergo infinitesimal change. This idea was to become fundamental in continuum mechanics.
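Property 3 in the list above, dx^2 = 0, is exactly the defining rule of the modern ring of dual numbers, and a few lines of code show how nilsquare infinitesimals mechanize Leibniz's neglect of the dx dy term. The class and function names below are my own illustrative choices, not anything drawn from the historical sources:

```python
# A minimal sketch of "nilsquare" infinitesimal arithmetic using dual
# numbers a + b*eps with eps**2 = 0 -- an illustrative modern analogue,
# not a reconstruction of Nieuwentijdt's or Leibniz's own formalism.

class Dual:
    def __init__(self, re, eps=0.0):
        self.re = re      # finite part
        self.eps = eps    # infinitesimal coefficient

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.re + other.re, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)

    __rmul__ = __mul__

def derivative(f, x):
    """f(x + eps) = f(x) + f'(x) eps, so the eps-coefficient is f'(x)."""
    return f(Dual(x, 1.0)).eps

# d(x^2)/dx = 2x: the dx^2 term vanishes automatically
print(derivative(lambda t: t * t, 3.0))        # 6.0
print(derivative(lambda t: t * t * t, 2.0))    # 12.0
```

Computing with Dual(x, 1.0) reproduces, step for step, the calculation d(xy) = y dx + x dy quoted earlier: the dx dy contribution is annihilated by eps^2 = 0 rather than discarded by fiat.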
While Euler treated infinitesimals as formal zeros, that is, as fixed quantities, his contemporary Jean le Rond d'Alembert (1717–83) took a different view of the matter. Following Newton's lead, he conceived of infinitesimals or differentials in terms of the limit concept, which he formulated by the assertion that one varying quantity is the limit of another if the second can approach the first more closely than by any given quantity. D'Alembert firmly rejected the idea of infinitesimals as fixed quantities, and saw the idea of limit as supplying the methodological root of the differential calculus. For d'Alembert the language of infinitesimals or differentials was just a convenient shorthand for avoiding the cumbrousness of expression required by the use of the limit concept. Infinitesimals, differentials, evanescent quantities and the like coursed through the veins of the calculus throughout the 18^th century. Although nebulous—even logically suspect—these concepts provided, faute de mieux, the tools for deriving the great wealth of results the calculus had made possible. And while, with the notable exception of Euler, many 18^th century mathematicians were ill-at-ease with the infinitesimal, they would not risk killing the goose laying such a wealth of golden mathematical eggs. Accordingly they refrained, in the main, from destructive criticism of the ideas underlying the calculus. Philosophers, however, were not fettered by such constraints. The philosopher George Berkeley (1685–1753), noted both for his subjective idealist doctrine of esse est percipi and his denial of general ideas, was a persistent critic of the presuppositions underlying the mathematical practice of his day (see Jesseph [1993]). His most celebrated broadsides were directed at the calculus, but in fact his conflict with the mathematicians went deeper.
For his denial of the existence of abstract ideas of any kind stood in direct opposition to the abstractionist account of mathematical concepts held by the majority of mathematicians and philosophers of the day. The central tenet of this doctrine, which goes back to Aristotle, is that the mind creates mathematical concepts by abstraction, that is, by the mental suppression of extraneous features of perceived objects so as to focus on properties singled out for attention. Berkeley rejected this, asserting that mathematics as a science is ultimately concerned with objects of sense, its admitted generality stemming from the capacity of percepts to serve as signs for all percepts of a similar form. At first Berkeley poured scorn on those who adhered to the concept of infinitesimal, maintaining that the use of infinitesimals in deriving mathematical results is illusory, and is in fact eliminable. But later he came to adopt a more tolerant attitude towards infinitesimals, regarding them as useful fictions in somewhat the same way as did Leibniz. In The Analyst of 1734 Berkeley launched his most sustained and sophisticated critique of infinitesimals and the whole metaphysics of the calculus. Addressed To an Infidel Mathematician^[25], the tract was written with the avowed purpose of defending theology against the scepticism shared by many of the mathematicians and scientists of the day. Berkeley's defense of religion amounts to the claim that the reasoning of mathematicians in respect of the calculus is no less flawed than that of theologians in respect of the mysteries of the divine. Berkeley's arguments are directed chiefly against the Newtonian fluxional calculus.
Typical of his objections is that in attempting to avoid infinitesimals by the employment of such devices as evanescent quantities and prime and ultimate ratios Newton has in fact violated the law of noncontradiction by first subjecting a quantity to an increment and then setting the increment to 0, that is, denying that an increment had ever been present. As for fluxions and evanescent increments themselves, Berkeley has this to say: “And what are these fluxions? The velocities of evanescent increments? And what are these same evanescent increments? They are neither finite quantities nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?” Nor did the Leibnizian method of differentials escape Berkeley's strictures.

The opposition between continuity and discreteness plays a significant role in the philosophical thought of Immanuel Kant (1724–1804). His mature philosophy, transcendental idealism, rests on the division of reality into two realms. The first, the phenomenal realm, consists of appearances or objects of possible experience, configured by the forms of sensibility and the epistemic categories. The second, the noumenal realm, consists of “entities of the understanding to which no objects of experience can ever correspond”, that is, things-in-themselves. Regarded as magnitudes, appearances are spatiotemporally extended and continuous, that is infinitely, or at least limitlessly, divisible. Space and time constitute the underlying order of phenomena, so are ultimately phenomenal themselves, and hence also continuous. As objects of knowledge, appearances are continuous extensive magnitudes, but as objects of sensation or perception they are, according to Kant, intensive magnitudes. By an intensive magnitude Kant means a magnitude possessing a degree and so capable of being apprehended by the senses: for example brightness or temperature.
Intensive magnitudes are entirely free of the intuitions of space or time, and “can only be presented as unities”. But, like extensive magnitudes, they are continuous. Moreover, appearances are always presented to the senses as intensive magnitudes. In the Critique of Pure Reason (1781) Kant brings a new subtlety (and, it must be said, tortuosity) to the analysis of the opposition between continuity and discreteness. This may be seen in the second of the celebrated Antinomies in that work, which concerns the question of the mereological composition of matter, or extended substance. Is it (a) discrete, that is, composed of simple or indivisible parts, or (b) continuous, that is, containing parts within parts ad infinitum? Although (a), which Kant calls the Thesis, and (b), the Antithesis, would seem to contradict one another, Kant offers proofs of both assertions. The resulting contradiction may be resolved, he asserts, by observing that while the antinomy “relates to the division of appearances”, the arguments for (a) and (b) implicitly treat matter or substance as things-in-themselves. Kant concludes that both Thesis and Antithesis “presuppose an inadmissible condition” and accordingly “both fall to the ground, inasmuch as the condition, under which alone either of them can be maintained, itself falls.” Kant identifies the inadmissible condition as the implicit taking of matter as a thing-in-itself, which in turn leads to the mistake of taking the division of matter into parts to subsist independently of the act of dividing. In that case, the Thesis implies that the sequence of divisions is finite; the Antithesis, that it is infinite. These cannot both be true of the completed (or at least completable) sequence of divisions which would result from taking matter or substance as a thing-in-itself.^[26] Now since the truth of both assertions has been shown to follow from that assumption, it must be false, that is, matter and extended substance are appearances only.
And for appearances, Kant maintains, divisions into parts are not completable in experience, with the result that such divisions can be considered, in a startling phrase, “neither finite nor infinite”. It follows that, for appearances, both Thesis and Antithesis are false. Later in the Critique Kant enlarges on the issue of divisibility, asserting that, while each part generated by a sequence of divisions of an intuited whole is given with the whole, the sequence's incompletability prevents it from forming a whole; a fortiori no such sequence can be claimed to be actually infinite. The rapid development of mathematical analysis in the 18^th century had not concealed the fact that its underlying concepts not only lacked rigorous definition, but were even (e.g., in the case of differentials and infinitesimals) of doubtful logical character. The lack of precision in the notion of continuous function—still vaguely understood as one which could be represented by a formula and whose associated curve could be smoothly drawn—had led to doubts concerning the validity of a number of procedures in which that concept figured. For example it was often assumed that every continuous function could be expressed as an infinite series by means of Taylor's theorem. Early in the 19^th century this and other assumptions began to be questioned, thereby initiating an inquiry into what was meant by a function in general and by a continuous function in particular. A pioneer in the matter of clarifying the concept of continuous function was the Bohemian priest, philosopher and mathematician Bernard Bolzano (1781–1848). In his Rein analytischer Beweis of 1817 he defines a (real-valued) function f to be continuous at a point x if the difference f(x + ω) − f(x) can be made smaller than any preselected quantity once we are permitted to take ω as small as we please. This is essentially the same as the definition of continuity in terms of the limit concept given a little later by Cauchy.
Bolzano also formulated a definition of the derivative of a function free of the notion of infinitesimal (see Bolzano [1950]). Bolzano repudiated Euler's treatment of differentials as formal zeros in expressions such as dy/dx, suggesting instead that in determining the derivative of a function, increments Δx, Δy, …, be finally set to zero. For Bolzano differentials have the status of “ideal elements”, purely formal entities such as points and lines at infinity in projective geometry, or (as Bolzano himself mentions) imaginary numbers, whose use will never lead to false assertions concerning “real” quantities. Although Bolzano anticipated the form that the rigorous formulation of the concepts of the calculus would assume, his work was largely ignored in his lifetime. The cornerstone for the rigorous development of the calculus was supplied by the ideas—essentially similar to Bolzano's—of the great French mathematician Augustin-Louis Cauchy (1789–1857). In Cauchy's work, as in Bolzano's, a central role is played by a purely arithmetical concept of limit freed of all geometric and temporal intuition. Cauchy also formulates the condition for a sequence of real numbers to converge to a limit, and states his familiar criterion for convergence^[27], namely, that a sequence <s[n]> is convergent if and only if s[n+r] − s[n] can be made less in absolute value than any preassigned quantity for all r and sufficiently large n. Cauchy proves that this is necessary for convergence, but as to the sufficiency of the condition he merely remarks “when the various conditions are fulfilled, the convergence of the series is assured.” In making this latter assertion he is implicitly appealing to geometric intuition, since he makes no attempt to define real numbers, observing only that irrational numbers are to be regarded as the limits of sequences of rational numbers.
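Cauchy's criterion can be illustrated numerically with a concrete sequence. The sketch below, with helper names of my own choosing, uses the partial sums s_n = 1/2 + 1/4 + … + 1/2^n, for which |s[n+r] − s[n]| < 1/2^n for every r:

```python
# Illustration of Cauchy's convergence criterion for the partial sums
# s_n = sum_{k=1}^{n} 1/2^k: the differences |s_{n+r} - s_n| fall below
# any preassigned eps, for all r, once n is sufficiently large.

def s(n):
    return sum(1.0 / 2**k for k in range(1, n + 1))

def cauchy_index(eps):
    """Smallest n with |s(n+r) - s(n)| < eps for every r >= 1.
    Since |s(n+r) - s(n)| < 1/2^n, it suffices that 1/2^n < eps."""
    n = 1
    while 1.0 / 2**n >= eps:
        n += 1
    return n

for eps in (0.1, 0.01, 0.001):
    n = cauchy_index(eps)
    # spot-check the criterion over a range of r
    assert all(abs(s(n + r) - s(n)) < eps for r in range(1, 50))
    print(eps, n)
```

Cauchy's unproved sufficiency claim is exactly what the later constructions of the real numbers were designed to secure: the criterion guarantees a limit only once the limiting value is known to exist.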
Cauchy chose to characterize the continuity of functions in terms of a rigorized notion of infinitesimal, which he defines in the Cours d'analyse as “a variable quantity [whose value] decreases indefinitely in such a way as to converge to the limit 0.” His definition of the continuity of f(x) in the neighbourhood of a value a amounts to the condition, in modern notation, that lim[x→a] f(x) = f(a). Cauchy defines the derivative f′(x) of a function f(x) in a manner essentially identical to that of Bolzano. The work of Cauchy (as well as that of Bolzano) represents a crucial stage in the renunciation by mathematicians—adumbrated in the work of d'Alembert—of (fixed) infinitesimals and the intuitive ideas of continuity and motion. Certain mathematicians of the day, such as Poisson and Cournot, who regarded the limit concept as no more than a circuitous substitute for the use of infinitesimally small magnitudes—which in any case (they claimed) had a real existence—felt that Cauchy's reforms had been carried too far. But traces of the traditional ideas did in fact remain in Cauchy's formulations, as evidenced by his use of such expressions as “variable quantities”, “infinitesimal quantities”, “approach indefinitely”, “as little as one wishes” and the like^[28]. Meanwhile the German mathematician Karl Weierstrass (1815–97) was completing the banishment of spatiotemporal intuition, and the infinitesimal, from the foundations of analysis. To instill complete logical rigour Weierstrass proposed to establish mathematical analysis on the basis of number alone, to “arithmetize”^[29] it—in effect, to replace the continuous by the discrete. “Arithmetization” may be seen as a form of mathematical atomism. In pursuit of this goal Weierstrass had first to formulate a rigorous “arithmetical” definition of real number.
He did this by defining a (positive) real number to be a countable set of positive rational numbers for which the sum of any finite subset always remains below some preassigned bound, and then specifying the conditions under which two such “real numbers” are to be considered equal, or strictly less than one another. Weierstrass was concerned to purge the foundations of analysis of all traces of the intuition of continuous motion—in a word, to replace the variable by the static. For Weierstrass a variable x was simply a symbol designating an arbitrary member of a given set of numbers, and a continuous variable one whose corresponding set S has the property that any interval around any member x of S contains members of S other than x. Weierstrass also formulated the familiar (ε, δ) definition of continuous function^[30]: a function f(x) is continuous at a if for any ε > 0 there is a δ > 0 such that |f(x) − f(a)| < ε for all x such that |x − a| < δ.^[31] Following Weierstrass's efforts, another attack on the problem of formulating rigorous definitions of continuity and the real numbers was mounted by Richard Dedekind (1831–1916). Dedekind focussed attention on the question: exactly what is it that distinguishes a continuous domain from a discontinuous one? He seems to have been the first to recognize that the property of density, possessed by the ordered set of rational numbers, is insufficient to guarantee continuity.
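Weierstrass's (ε, δ) clause can be verified mechanically for a particular function. The sketch below uses the standard textbook choice δ = min(1, ε/(2|a| + 1)) for f(x) = x^2; the function names are my own illustrative inventions:

```python
# The (eps, delta) test, illustrated for f(x) = x^2 at a point a.
# With delta = min(1, eps / (2|a| + 1)):
# |x - a| < delta  implies  |x^2 - a^2| = |x - a||x + a|
#                           < delta * (2|a| + 1) <= eps.

def delta_for(a, eps):
    return min(1.0, eps / (2 * abs(a) + 1))

def witness_continuity(f, a, eps, samples=1000):
    """Check |f(x) - f(a)| < eps on a grid of x with |x - a| < delta."""
    d = delta_for(a, eps)
    return all(
        abs(f(a + t * d) - f(a)) < eps
        # t ranges over (-1, 1) exclusive, so |x - a| < d strictly
        for t in (k / samples for k in range(-samples + 1, samples))
    )

f = lambda x: x * x
assert witness_continuity(f, 3.0, 0.01)
assert witness_continuity(f, -2.0, 0.001)
print("f(x) = x^2 passes the (eps, delta) test at the sampled points")
```

Note the direction of the quantifiers: δ is produced after ε and the point a are given, which is precisely the static, motion-free form Weierstrass sought.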
In Continuity and Irrational Numbers (1872) Dedekind remarks that when the rational numbers are associated to points on a straight line, “there are infinitely many points [on the line] to which no rational number corresponds” so that the rational numbers manifest “a gappiness, incompleteness, discontinuity”, in contrast with the straight line's “absence of gaps, completeness, continuity.” Wherein does the line's continuity consist? Dedekind locates it in the principle that if all points of the straight line fall into two classes such that every point of the first class lies to the left of every point of the second class, then there exists one and only one point which produces this division. Dedekind regards this principle as being essentially indemonstrable; he ascribes to it, rather, the status of an axiom “by which we attribute to the line its continuity, by which we think continuity into the line.” It is not, Dedekind stresses, necessary for space to be continuous in this sense, for “many of its properties would remain the same even if it were discontinuous.” The filling-up of gaps in the rational numbers through the “creation of new point-individuals” is the key idea underlying Dedekind's construction of the domain of real numbers. He first defines a cut to be a partition (A[1], A[2]) of the rational numbers such that every member of A[1] is less than every member of A[2]. After noting that each rational number corresponds, in an evident way, to a cut, he observes that infinitely many cuts fail to be engendered by rational numbers. The discontinuity or incompleteness of the domain of rational numbers consists precisely in this latter fact. It is to be noted that Dedekind does not identify irrational numbers with cuts; rather, each irrational number is newly “created” by a mental act, and remains quite distinct from its associated cut. Dedekind goes on to show how the domain of cuts, and thereby the associated domain of real numbers, can be ordered in such a way as to possess the property of continuity, viz.
“if the system ℜ of all real numbers divides into two classes A[1], A[2] such that every number a[1] of the class A[1] is less than every number a[2] of the class A[2], then there exists one and only one number by which this separation is produced.” The most visionary “arithmetizer” of all was Georg Cantor^[32] (1845–1918). Cantor's analysis of the continuum in terms of infinite point sets led to his theory of transfinite numbers and to the eventual freeing of the concept of set from its geometric origins as a collection of points, so paving the way for the emergence of the concept of general abstract set central to today's mathematics. Like Weierstrass and Dedekind, Cantor aimed to formulate an adequate definition of the real numbers which avoided the presupposition of their prior existence, and he follows them in basing his definition on the rational numbers. Following Cauchy, he calls a sequence a[1], a[2],…, a[n],… of rational numbers a fundamental sequence if, for any positive rational ε, there exists an integer N such that |a[n+m] − a[n]| < ε for all m and all n > N. Any sequence <a[n]> satisfying this condition is said to have a definite limit b. Just as Dedekind had taken irrational numbers to be “mental objects” associated with cuts, so, analogously, Cantor regards these definite limits as nothing more than formal symbols associated with fundamental sequences. The domain B of such symbols may be considered an enlargement of the domain A of rational numbers. After imposing an arithmetical structure on the domain B, Cantor is emboldened to refer to its elements as (real) numbers. Nevertheless, he still insists that these “numbers” have no existence except as representatives of fundamental sequences. Cantor then shows that each point on the line corresponds to a definite element of B. Conversely, each element of B should determine a definite point on the line.
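Dedekind's cut construction can be probed with exact rational arithmetic: the cut determined by √2 separates the rationals into the classes A[1] = {q : q ≤ 0 or q^2 < 2} and A[2] = {q : q > 0 and q^2 ≥ 2}, and no rational engenders it. The following sketch (with names of my own choosing; Dedekind of course gives no algorithm) squeezes the gap by bisection:

```python
from fractions import Fraction

def in_lower(q):
    """Membership in A[1] for the cut determining sqrt(2)."""
    return q <= 0 or q * q < 2

def refine(lo, hi, steps=30):
    """Bisect between a member of A[1] and a member of A[2],
    producing rationals on each side of the gap the cut creates."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if in_lower(mid):
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = refine(Fraction(1), Fraction(2))
assert in_lower(lo) and not in_lower(hi)
# no rational squares to 2, so the cut has no rational producer
assert lo * lo < 2 < hi * hi
print(float(lo), float(hi))
```

However many steps are taken, A[1] acquires no greatest member and A[2] no least; it is precisely this gap that Dedekind's newly created irrational number fills.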
Realizing that the intuitive nature of the linear continuum precludes a rigorous proof that each element of B determines a definite point on the line, Cantor simply assumes this as an axiom, just as Dedekind had done in regard to his principle of continuity. For Cantor, who began as a number-theorist, and throughout his career cleaved to the discrete, it was numbers, rather than geometric points, that possessed objective significance. Indeed the isomorphism between the discrete numerical domain B and the linear continuum was regarded by Cantor essentially as a device for facilitating the manipulation of numbers. Cantor's arithmetization of the continuum had the following important consequence. It had long been recognized that the sets of points of any pair of line segments, even if one of them is infinite in length, can be placed in one-one correspondence. This fact was taken to show that such sets of points have no well-defined “size”. But Cantor's identification of the set of points on a linear continuum with a domain of numbers enabled the sizes of point sets to be compared in a definite way, using the well-grounded idea of one-one correspondence between sets of numbers. Cantor's investigations into the properties of subsets of the linear continuum are presented in six masterly papers published during 1879–84, Über unendliche lineare Punktmannigfaltigkeiten (“On infinite, linear point manifolds”). Remarkable in their richness of ideas, these papers provide the first accounts of Cantor's revolutionary theory of infinite sets and its application to the classification of subsets of the linear continuum. In the fifth of these papers, the Grundlagen of 1883,^[33] are to be found some of Cantor's most searching observations on the nature of the continuum. Cantor begins his examination of the continuum with a tart summary of the controversies that have traditionally surrounded the notion, remarking that the continuum has until recently been regarded as an essentially unanalyzable concept.
It is Cantor's concern to “develop the concept of the continuum as soberly and briefly as possible, and only with regard to the mathematical theory of sets”. This opens the way, he believes, to the formulation of an exact concept of the continuum. Cantor points out that the idea of the continuum has heretofore merely been presupposed by mathematicians concerned with the analysis of continuous functions and the like, and has “not been subjected to any more thorough inspection.” Repudiating any use of spatial or temporal intuition in an exact determination of the continuum, Cantor undertakes its precise arithmetical definition. Making reference to the definition of real number he has already provided (i.e., in terms of fundamental sequences), he introduces the n-dimensional arithmetical space G[n] as the set of all n-tuples of real numbers <x[1], x[2], …, x[n]>, calling each such n-tuple an arithmetical point of G[n]. The distance between two such points <x[1], x[2], …, x[n]> and <x′[1], x′[2], …, x′[n]> is given by √((x′[1] − x[1])^2 + (x′[2] − x[2])^2 + … + (x′[n] − x[n])^2). Cantor defines an arithmetical point-set in G[n] to be any “aggregate of points of the space G[n] that is given in a lawlike way”. After remarking that he has previously shown that all spaces G[n] have the same power as the set of real numbers in the interval (0,1), and reiterating his conviction that any infinite point set has either the power of the set of natural numbers or that of (0,1),^[34] Cantor turns to the definition of the general concept of a continuum within G[n]. For this he employs the concept of derivative or derived set of a point set introduced in a paper of 1872 on trigonometric series. Cantor had defined the derived set of a point set P to be the set of limit points of P, where a limit point of P is a point (not necessarily belonging to P) with infinitely many points of P arbitrarily close to it. A point set is called perfect if it coincides with its derived set^[35].
Cantor observes that this condition does not suffice to characterize a continuum, since perfect sets can be constructed in the linear continuum which are dense in no interval, however small: as an example of such a set he offers the set^[36] consisting of all real numbers in (0,1) whose ternary expansion does not contain a “1”. Accordingly an additional condition is needed to define a continuum. Cantor supplies this by introducing the concept of a connected set. A point set T is connected in Cantor's sense if for any pair of its points t, t′ and any arbitrarily small number ε there is a finite sequence of points t[1], t[2],…, t[n] of T for which the distances [tt[1]], [t[1]t[2]], [t[2]t[3]], …, [t[n]t′] are all less than ε. Cantor now defines a continuum to be a perfect connected point set. Cantor has advanced beyond his predecessors in formulating what is in essence a topological definition of continuum, one that, while still dependent on metric notions, does not involve an order relation^[37]. It is interesting to compare Cantor's definition with the definition of continuum in modern general topology. In a well-known textbook (see Hocking and Young [1961]) on the subject we find a continuum defined as a compact connected subset of a topological space. Now within any bounded region of Euclidean space it can be shown that Cantor's continua coincide with continua in the sense of the modern definition. While Cantor lacked the definition of compactness, his requirement that continua be “complete” (which led to his rejecting as continua such noncompact sets as open intervals or discs) is not far away from the idea. Throughout Cantor's mathematical career he maintained an unwavering, even dogmatic opposition to infinitesimals, attacking the efforts of mathematicians such as du Bois-Reymond and Veronese^[38] to formulate rigorous theories of actual infinitesimals.
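Cantor's example of a perfect set dense in no interval, the reals in (0,1) whose ternary expansion avoids the digit 1 (today's Cantor set), lends itself to an exact digit-by-digit membership test. The sketch below (function name mine) adopts the usual convention that a terminating expansion …1000… may be rewritten …0222…:

```python
from fractions import Fraction

def in_cantor_set(x, digits=60):
    """True if x (a Fraction in [0, 1]) has, within the first `digits`
    ternary places, an expansion avoiding the digit 1 (identifying a
    terminating ...1000... with ...0222...)."""
    for _ in range(digits):
        x *= 3
        d = int(x)          # next ternary digit (x is non-negative)
        if d == 1:
            return x == 1   # admissible only as 0.1 = 0.0222...
        x -= d
        if x == 0:          # expansion terminates: no 1 can appear later
            return True
    return True

# 1/4 = 0.020202..._3 lies in the set although it is an endpoint of no
# deleted middle third; 1/2 = 0.111..._3 does not lie in it.
assert in_cantor_set(Fraction(1, 4))
assert not in_cantor_set(Fraction(1, 2))
assert in_cantor_set(Fraction(1, 3)) and in_cantor_set(Fraction(2, 3))
print("membership checks pass")
```

Every point of the set is a limit of other points of the set (perfection), yet between any two of its points lies an excluded middle-third interval, so the set is dense in no interval; this is why Cantor needed connectedness as a further condition.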
As far as Cantor was concerned, the infinitesimal was beyond the realm of the possible; infinitesimals were no more than “castles in the air, or rather just nonsense”, to be classed “with circular squares and square circles”. His abhorrence of infinitesimals went so deep as to move him to outright vilification, branding them as “Cholera-bacilli of mathematics.” Cantor's rejection of infinitesimals stemmed from his conviction that his own theory of transfinite ordinal and cardinal numbers exhausted the realm of the numerable, so that no further generalization of the concept of number, in particular any which embraced infinitesimals, was admissible. Despite the great success of Weierstrass, Dedekind and Cantor in constructing the continuum from arithmetical materials, a number of thinkers of the late 19^th and early 20^th centuries remained opposed, in varying degrees, to the idea of explicating the continuum concept entirely in discrete terms. These include the philosophers Brentano and Peirce and the mathematicians Poincaré, Brouwer and Weyl. In his later years the Austrian philosopher Franz Brentano (1838–1917) became preoccupied with the nature of the continuous (see Brentano [1988]). In its fundamentals Brentano's account of the continuous is akin to Aristotle's. Brentano regards continuity as something given in perception, primordial in nature, rather than a mathematical construction. He held that the idea of the continuous is abstracted from sensible intuition. Brentano suggests that the continuous is brought to appearance by sensible intuition in three phases. First, sensation presents us with objects having parts that coincide. From such objects the concept of boundary is abstracted in turn, and then one grasps that these objects actually contain coincident boundaries. Finally one sees that this is all that is required in order to have grasped the concept of a continuum.
For Brentano the essential feature of a continuum is its inherent capacity to engender boundaries, and the fact that such boundaries can be grasped as coincident. Boundaries themselves possess a quality which Brentano calls plerosis (“fullness”). Plerosis is the measure of the number of directions in which the given boundary actually bounds. Thus, for example, within a temporal continuum the endpoint of a past episode or the starting point of a future one bounds in a single direction, while the point marking the end of one episode and the beginning of another may be said to bound doubly. In the case of a spatial continuum there are numerous additional possibilities: here a boundary may bound in all the directions of which it is capable of bounding, or it may bound in only some of these directions. In the former case, the boundary is said to exist in full plerosis; in the latter, in partial plerosis. Brentano believed that the concept of plerosis enabled sense to be made of the idea that a boundary possesses “parts”, even when the boundary lacks dimensions altogether, as in the case of a point. Thus, while the present or “now” is, according to Brentano, temporally unextended and exists only as a boundary between past and future, it still possesses two “parts” or aspects: it is both the end of the past and the beginning of the future. It is worth mentioning that for Brentano it was not just the “now” that existed only as a boundary; since, like Aristotle, he held that “existence” in the strict sense means “existence now”, it necessarily followed that existing things exist only as boundaries of what has existed or of what will exist, or both. Brentano took a somewhat dim view of the efforts of mathematicians to construct the continuum from numbers. His attitude varied from rejecting such attempts as inadequate to according them the status of “fictions”^[39].
This is not surprising given his Aristotelian inclination to take mathematical and physical theories to be genuine descriptions of empirical phenomena rather than idealizations: in his view, if the mathematicians' constructions were taken as literal descriptions of experience, they would amount to nothing better than “misrepresentations”. Brentano's analysis of the continuum centred on its phenomenological and qualitative aspects, which are by their very nature incapable of reduction to the discrete; his rejection of the mathematicians' attempts to construct the continuum in discrete terms follows naturally. The view of the continuum^[40] held by the American philosopher-mathematician Charles Sanders Peirce (1839–1914) was, in a sense, intermediate between that of Brentano and that of the arithmetizers. Like Brentano, he held that the cohesiveness of a continuum rules out the possibility of its being a mere collection of discrete individuals, or points, in the usual sense. And even before Brouwer, Peirce seems to have been aware that a faithful account of the continuum will involve questioning the law of excluded middle. Peirce also held that any continuum harbours an unboundedly large collection of points—in his colourful terminology, a supermultitudinous collection—what we would today call a proper class. Peirce maintained that if “enough” points were to be crowded together by carrying the insertion of new points between old ones to its ultimate limit they would—through a logical “transformation of quantity into quality”—lose their individual identity and become fused into a true continuum.
Peirce's conception of the number continuum is also notable for the presence in it of an abundance of infinitesimals. Peirce championed the retention of the infinitesimal concept in the foundations of the calculus, both because of what he saw as the efficiency of infinitesimal methods, and because he regarded infinitesimals as constituting the “glue” causing points on a continuous line to lose their individual identity. The idea of continuity played a central role in the thought of the great French mathematician Henri Poincaré^[41] (1854–1912). While accepting the arithmetic definition of the continuum, he objected that (as with Dedekind's and Cantor's formulations) the (irrational) numbers so produced are mere symbols, detached from their origins in intuition. Unlike Cantor, Poincaré accepted the infinitesimal, even if he did not regard all of the concept's manifestations as useful. The Dutch mathematician L. E. J. Brouwer (1881–1966) is best known as the founder of the philosophy of (neo)intuitionism (see Brouwer [1975]; van Dalen [1998]). Brouwer's highly idealist views on mathematics bore some resemblance to Kant's. For Brouwer, mathematical concepts are admissible only if they are adequately grounded in intuition, mathematical theories are significant only if they concern entities which are constructed out of something given immediately in intuition, and mathematical demonstration is a form of construction in intuition. While admitting that the emergence of noneuclidean geometry had discredited Kant's view of space, Brouwer held, in opposition to the logicists (whom he called “formalists”), that arithmetic, and so all mathematics, must derive from temporal intuition. Initially Brouwer held without qualification that the continuum is not constructible from discrete points, but was later to modify this doctrine.
In his mature thought, he radically transformed the concept of point, endowing points with sufficient fluidity to enable them to serve as generators of a “true” continuum. This fluidity was achieved by admitting as “points”, not only fully defined discrete numbers such as √2, π, e, and the like—which have, so to speak, already achieved “being”—but also “numbers” in a perpetual state of becoming, in that the entries in their decimal (or dyadic) expansions are the result of free acts of choice by a subject operating throughout an indefinitely extended time. The resulting choice sequences cannot be conceived as finished, completed objects: at any moment only an initial segment is known. In this way Brouwer obtained the mathematical continuum in a way compatible with his belief in the primordial intuition of time—that is, as an unfinished, indeed unfinishable entity in a perpetual state of growth, a “medium of free development”. In this conception, the mathematical continuum is indeed “constructed”, not, however, by initially shattering, as did Cantor and Dedekind, an intuitive continuum into isolated points, but rather by assembling it from a complex of continually changing overlapping parts. The mathematical continuum as conceived by Brouwer displays a number of features that seem bizarre to the classical eye. For example, in the Brouwerian continuum the usual law of comparability, namely that for any real numbers a, b either a < b or a = b or a > b, fails. Even more fundamental is the failure of the law of excluded middle in the form that for any real numbers a, b, either a = b or a ≠ b. The failure of these seemingly unquestionable principles in turn vitiates the proofs of a number of basic results of classical analysis, for example the Bolzano-Weierstrass theorem, as well as the theorems of monotone convergence, intermediate value, least upper bound, and maximum value for continuous functions^[42].
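The defining feature of a choice sequence—that at any moment only a finite initial segment is available—lends itself to a simple computational sketch. The following is purely illustrative (a pseudo-random generator stands in for the subject's free acts of choice, and all names here are ours, not Brouwer's):

```python
import random
from itertools import islice

def choice_sequence(seed=None):
    """A 'number in a state of becoming': successive entries appear one
    act at a time (randomness here merely simulates free choice)."""
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 9)

def initial_segment(alpha, n):
    # all that is ever in hand: a finite initial segment of the sequence
    return list(islice(alpha, n))

segment = initial_segment(choice_sequence(seed=42), 8)  # eight entries now chosen; the rest not yet
```

The generator never yields a finished object; any procedure operating on such a sequence can consult only finitely many of its terms at any stage.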
While the Brouwerian continuum may possess a number of negative features from the standpoint of the classical mathematician, it has the merit of corresponding more closely to the continuum of intuition than does its classical counterpart. Far from being bizarre, the failure of the law of excluded middle for points in the intuitionistic continuum may be seen as fitting in well with the character of the intuitive continuum. In 1924 Brouwer showed that every function defined on a closed interval of his continuum is uniformly continuous. As a consequence the intuitionistic continuum is indecomposable, that is, cannot be split into two disjoint parts in any way whatsoever. In contrast with a discrete entity, the indecomposable Brouwerian continuum cannot be composed of its parts. Brouwer's vision of the continuum has in recent years become the subject of intensive mathematical investigation. Hermann Weyl (1885–1955), one of the most versatile mathematicians of the 20^th century, was preoccupied with the nature of the continuum (see Bell [2000]). In his Das Kontinuum of 1918 he attempts to provide the continuum with an exact mathematical formulation free of the set-theoretic assumptions he had come to regard as objectionable. As he saw it, there is an unbridgeable gap between intuitively given continua (e.g., those of space, time and motion) on the one hand, and the discrete exact concepts of mathematics (e.g., that of real number) on the other. For Weyl the presence of this split meant that the construction of the mathematical continuum could not simply be “read off” from intuition. Rather, he believed that the mathematical continuum must be treated and, in the end, justified in the same way as a physical theory. However much he may have wished it, in Das Kontinuum Weyl did not aim to provide a mathematical formulation of the continuum as it is presented to intuition, which, as the quotations above show, he regarded as an impossibility (at that time at least).
Rather, his goal was first to achieve consistency by putting the arithmetical notion of real number on a firm logical basis, and then to show that the resulting theory is reasonable by employing it as the foundation for a plausible account of continuous process in the objective physical world. Later Weyl came to repudiate atomistic theories of the continuum, including that of his own Das Kontinuum. He accordingly welcomed Brouwer's construction of the continuum by means of sequences generated by free acts of choice, thus identifying it as a “medium of free Becoming” which “does not dissolve into a set of real numbers as finished entities”. Weyl felt that Brouwer, through his doctrine of intuitionism, had come closer than anyone else to bridging that “unbridgeable chasm” between the intuitive and mathematical continua. In particular, he found compelling the fact that the Brouwerian continuum is not the union of two disjoint nonempty parts—that it is indecomposable. “A genuine continuum,” Weyl says, “cannot be divided into separate fragments.” In later publications he expresses this more colourfully by quoting Anaxagoras to the effect that a continuum “defies the chopping off of its parts with a hatchet.” Once the continuum had been provided with a set-theoretic foundation, the use of the infinitesimal in mathematical analysis was largely abandoned. And so the situation remained for a number of years. The first signs of a revival of the infinitesimal approach to analysis surfaced in 1958 with a paper by C. Schmieden and D. Laugwitz^[43]. But the major breakthrough came in 1960 when it occurred to the mathematical logician Abraham Robinson (1918–1974) that “the concepts and methods of contemporary Mathematical Logic are capable of providing a suitable framework for the development of the Differential and Integral Calculus by means of infinitely small and infinitely large numbers.” (see Robinson [1996], p.
xiii) This insight led to the creation of nonstandard analysis,^[44] which Robinson regarded as realizing Leibniz's conception of infinitesimals and infinities as ideal numbers possessing the same properties as ordinary real numbers. After Robinson's initial insight, a number of ways of presenting nonstandard analysis were developed. Here is a sketch of one of them. Starting with the classical real line ℜ, a set-theoretic universe—the standard universe—is first constructed over it: here by such a universe is meant a set U containing ℜ which is closed under the usual set-theoretic operations of union, power set, Cartesian products and subsets. Now write U for the structure (U, ∈), where ∈ is the usual membership relation on U: associated with this is the extension L(U) of the first-order language of set theory to include a name u for each element u of U. Now, using the well-known compactness theorem for first-order logic, one constructs a proper extension *U = (*U, *∈) of U, called a nonstandard universe, satisfying the following key principle:

Saturation Principle. Let Φ be a collection of L(U)-formulas with exactly one free variable. If Φ is finitely satisfiable in U—that is, if for each finite subcollection Φ′ of Φ there is an element of U which satisfies all the formulas of Φ′ in U—then there is an element of *U which satisfies all the formulas of Φ in *U.

The saturation property expresses the intuitive idea that the nonstandard universe is very rich in comparison to the standard one. Indeed, while there may exist, for each finite subcollection F of a given collection of properties P, an element of U satisfying the members of F in U, there may be no element of U satisfying all the members of P. The saturation of *U guarantees that there is an element of *U which satisfies, in *U, all the members of P. For example, suppose the set ℕ of natural numbers is a member of U; for each n ∈ ℕ let P[n](x) be the property x ∈ ℕ & n < x. Then clearly, while each finite subcollection of the collection P = {P[n]: n ∈ ℕ} is satisfiable in U, P itself is not. Saturation yields an element of *U satisfying P in *U: such an element is an infinite number. From the saturation property it follows that *U also satisfies the

Transfer Principle. If σ is any sentence of L(U), then σ holds in U if and only if it holds in *U.

The transfer principle may be seen as a version of Leibniz's continuity principle: it asserts that all first-order properties are preserved in the passage to or “transfer” from the standard to the nonstandard universe. The members of U are called standard sets, or standard objects; those in *U − U nonstandard sets or nonstandard objects: *U thus consists of both standard and nonstandard objects. The members of *U will also be referred to as *-sets or *-objects. Since U ⊆ *U, under this convention every set (object) is also a *-set (object). The *-members of a *-set A are the *-objects x for which x *∈ A. If A is a standard set, we may consider the collection Â—the inflate of A—consisting of all the *-members of A: this is not necessarily a set nor even a *-set. The inflate  of a standard set A may be regarded as the same set A viewed from a nonstandard vantage point. While clearly A ⊆ Â,  may contain “nonstandard” elements not in A. It can in fact be shown that infinite standard sets always get “inflated” in this way. Using the transfer principle, any function f between standard sets automatically extends to a function—also written f—between their inflates. If A = (A, R, …) is a mathematical structure, we may consider the structure  = (Â, Rˆ, …). From the transfer principle it follows that  has precisely the same first-order properties as A. Now suppose that the set ℕ of natural numbers is a member of U. Then so is the set ℜ of real numbers, since each real number may be identified with a set of natural numbers. ℜ may be regarded as an ordered field, and the same is therefore true of its inflate ℜˆ, since the latter has precisely the same first-order properties as ℜ. ℜˆ is called the hyperreal line, and its members hyperreals. A standard hyperreal is then just a real, to which we shall refer for emphasis as a standard real. Since ℜ is infinite, nonstandard hyperreals must exist.
The saturation principle implies that there must be an infinite (nonstandard) hyperreal,^[45] that is, a hyperreal a such that a > n for every n ∈ ℕ. In that case its reciprocal 1/a is infinitesimal in the sense of exceeding 0 and yet being smaller than 1/(n+1) for every n ∈ ℕ. In general, we call a hyperreal a infinitesimal if its absolute value |a| is less than 1/(n+1) for every n ∈ ℕ. In that case the set I of infinitesimals contains not just 0 but infinitely many other elements. Clearly I is an additive subgroup of ℜˆ, that is, if a, b ∈ I, then a − b ∈ I. The members of the inflate ℕˆ of ℕ are called hypernatural numbers. As for the hyperreals, it can be shown that ℕˆ also contains nonstandard elements which must exceed every member of ℕ; these are called infinite hypernatural numbers. For hyperreals a, b we define a ≈ b and say that a and b are infinitesimally close if a − b ∈ I. This is an equivalence relation on the hyperreal line: for each hyperreal a we write μ(a) for the equivalence class of a under this relation and call it the monad of a. The monad of a hyperreal a thus consists of all the hyperreals that are infinitesimally close to a: it may be thought of as a small cloud centred at a. Note also that μ(0) = I. A hyperreal a is finite if it is not infinite; this means that |a| < n for some n ∈ ℕ. It is not difficult to show that finiteness is equivalent to the condition of near-standardness; here a hyperreal a is near-standard if a ≈ r for some standard real r. Much of the usefulness of nonstandard analysis stems from the fact that statements of classical analysis involving limits or the (ε, δ) criterion admit succinct, intuitive translations into statements involving infinitesimals or infinite numbers, in turn enabling comparatively straightforward proofs to be given of classical theorems.
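As a small worked example of this style of reasoning (a routine textbook computation, included here only for illustration): for a standard real x and any nonzero infinitesimal ε, the difference quotient of f(x) = x² satisfies

```latex
\frac{(x+\varepsilon)^2 - x^2}{\varepsilon}
  \;=\; \frac{2x\varepsilon + \varepsilon^2}{\varepsilon}
  \;=\; 2x + \varepsilon \;\approx\; 2x,
```

so the quotient lies in the monad of 2x, and the unique standard real infinitesimally close to it—its standard part—is the classical derivative 2x.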
Here are some examples of such translations:^[46]

• Let <s[n]> be a standard infinite sequence of real numbers and let s be a standard real number. Then s is the limit of <s[n]> within ℜ, i.e., lim[n→∞] s[n] = s in the classical sense, if and only if s[n] ≈ s for all infinite subscripts n.
• A standard sequence <s[n]> converges if and only if s[n] ≈ s[m] for all infinite n and m. (Cauchy's criterion for convergence.)

Now suppose that f is a real-valued function defined on some open interval (a, b). We have remarked above that f automatically extends to a function—also written f—on the inflate of (a, b).

• In order that the standard real number c be the limit of f(x) as x approaches x[0], i.e., lim[x→x[0]] f(x) = c, with x[0] a standard real number in (a, b), it is necessary and sufficient that f(x) ≈ c for all x ≈ x[0] with x ≠ x[0].
• The function f is continuous at a standard real number x[0] in (a, b) if and only if f(x) ≈ f(x[0]) for all x ≈ x[0]. (This is equivalent to saying that f maps the monad of x[0] into the monad of f(x[0]).)
• In order that the standard number c be the derivative of f at x[0] it is necessary and sufficient that (f(x) − f(x[0]))/(x − x[0]) ≈ c for all x ≠ x[0] in the monad of x[0].

Many other branches of mathematics admit neat and fruitful nonstandard formulations. The original motivation for the development of constructive mathematics was to put the idea of mathematical existence on a constructive or computable basis. While there are a number of varieties of constructive mathematics (see Bridges and Richman [1987]), here we shall focus on Bishop's constructive analysis (see Bishop and Bridges [1985]; Bridges [1994], [1999]; and Bridges and Richman [1987]) and Brouwer's intuitionistic analysis (see Dummett [1977]). In constructive mathematics a problem is counted as solved only if an explicit solution can, in principle at least, be produced. Thus, for example, “There is an x such that P(x)” means that, in principle at least, we can explicitly produce an x such that P(x).
This fact led to the questioning of certain principles of classical logic, in particular, the law of excluded middle, and the creation of a new logic, intuitionistic logic (see entry on intuitionistic logic). It also led to the introduction of a sharpened definition of real numbers—the constructive real numbers. A constructive real number is a sequence of rationals (r[n]) = r[1], r[2], … such that, for any k, a number n can be computed in such a way that |r[n+p] − r[n]| ≤ 1/k for all p. Each rational number a may be regarded as a real number by identifying it with the real number (a, a, …). The set R of all constructive real numbers is the constructive real line. Now of course, for any “given” real number there are a variety of ways of giving explicit approximating sequences for it. Thus it is necessary to define an equivalence relation, “equality on the reals”. The correct definition here is: r =[ℜ] s iff for any k, a number n can be found so that |r[n+p] − s[n+p]| ≤ 1/k, for all p. To say that two real numbers are equal is to say that they are equivalent in this sense. The real number line can be furnished with an axiomatic description. We begin by assuming the existence of a set R with

• a binary relation > (greater than)
• a corresponding apartness relation # defined by x # y ⇔ x > y or y > x
• a unary operation x ↦ −x (negation)
• binary operations (x, y) ↦ x + y (addition) and (x, y) ↦ xy (multiplication)
• distinguished elements 0 (zero) and 1 (one) with 0 ≠ 1
• a unary operation x ↦ x^−1 on the set of elements # 0.

The elements of R are called real numbers. A real number x is positive if x > 0 and negative if −x > 0. The relation ≥ (greater than or equal to) is defined by x ≥ y ⇔ ∀z(y > z ⇒ x > z). The relations < and ≤ are defined in the usual way; x is nonnegative if 0 ≤ x. Two real numbers x and y are equal if x ≥ y and y ≥ x, in which case we write x = y.
The sets N of natural numbers, N^+ of positive integers, Z of integers and Q of rational numbers are identified with the usual subsets of R; for instance N^+ is identified with the set of elements of R of the form 1 + 1 + … + 1. These relations and operations are subject to the following three groups of axioms, which, taken together, form the system CA of axioms for constructive analysis, or the constructive real numbers (see Bridges [1999]).

Field Axioms
• x + y = y + x
• (x + y) + z = x + (y + z)
• 0 + x = x
• x + (−x) = 0
• xy = yx
• (xy)z = x(yz)
• 1x = x
• xx^−1 = 1 if x # 0
• x(y + z) = xy + xz

Order Axioms
• ¬(x > y ∧ y > x)
• x > y ⇒ ∀z(x > z ∨ z > y)
• ¬(x # y) ⇒ x = y
• x > y ⇒ ∀z(x + z > y + z)
• (x > 0 ∧ y > 0) ⇒ xy > 0.

Special Properties of >
The last two axioms introduce special properties of > and ≥. In the second of these the notions bounded above, bounded below, and bounded are defined as in classical mathematics, and the least upper bound, if it exists, of a nonempty^[47] set S of real numbers is the unique real number b such that
• b is an upper bound for S, and
• for each c < b there exists s ∈ S with s > c.

Archimedean axiom. For each x ∈ R such that x ≥ 0 there exists n ∈ N such that x < n.

The least upper bound principle. Let S be a nonempty subset of R that is bounded above relative to the relation ≥, such that for all real numbers a, b with a < b, either b is an upper bound for S or else there exists s ∈ S with s > a. Then S has a least upper bound.

The following basic properties of > and ≥ can then be established.
• ¬(x > x)
• x ≥ x
• x > y ∧ y > z ⇒ x > z
• ¬(x > y ∧ y ≥ x)
• (x > y ≥ z) ⇒ x > z
• ¬(x > y) ⇔ y ≥ x
• ¬¬(x ≥ y) ⇔ ¬¬(y > x)
• (x ≥ y ≥ z) ⇒ x ≥ z
• (x ≥ y ∧ y ≥ x) ⇒ x = y
• ¬(x > y ∧ x = y)
• x ≥ 0 ⇔ ∀ε>0(x > −ε)
• x + y > 0 ⇒ (x > 0 ∨ y > 0)
• x > 0 ⇒ −x < 0
• (x > y ∧ z < 0) ⇒ yz > xz
• x # 0 ⇒ x^2 > 0
• 1 > 0
• x^2 ≥ 0
• 0 < x < 1 ⇒ x > x^2
• x^2 > 0 ⇒ x # 0
• n ∈ N^+ ⇒ n^−1 > 0
• if x > 0 and y ≥ 0, then ∃n∈Z(nx > y)
• x > 0 ⇒ x^−1 > 0
• xy > 0 ⇒ (x ≠ 0 ∨ y ≠ 0)
• if a < b, then ∃r∈Q(a < r < b)

The constructive real line R as introduced above is a model of CA. Are there any other models, that is, models not isomorphic to R? If classical logic is assumed, CA is a categorical theory and so the answer is no. But this is not the case within intuitionistic logic, for there it is possible for the Dedekind and Cantor reals to fail to be isomorphic, despite the fact that they are both models of CA. In constructive analysis, a real number is an infinite (convergent) sequence of rational numbers generated by an effective rule, so that the constructive real line is essentially just a restriction of its classical counterpart. Brouwerian intuitionism takes a more liberal view of the matter, resulting in a considerable enrichment of the arithmetical continuum over the version offered by strict constructivism. As conceived by intuitionism, the arithmetical continuum admits as real numbers not only infinite sequences determined in advance by an effective rule for computing their terms, but also ones in whose generation free selection plays a part. The latter are called (free) choice sequences. Without loss of generality we may and shall assume that the entries in choice sequences are natural numbers.
While constructive analysis does not formally contradict classical analysis, and may in fact be regarded as a subtheory of the latter, a number of intuitionistically plausible principles have been proposed for the theory of choice sequences which render intuitionistic analysis divergent from its classical counterpart. One such principle is Brouwer's Continuity Principle: given a relation Q(α, n) between choice sequences α and numbers n, if for each α a number n may be determined for which Q(α, n) holds, then n can already be determined on the basis of the knowledge of a finite number of terms of α.^[48] From this one can prove a weak version of the Continuity Theorem, namely, that every function from R to R is continuous. Another such principle is Bar Induction, a certain form of induction for well-founded sets of finite sequences^[49]. Brouwer used Bar Induction and the Continuity Principle in proving his Continuity Theorem that every real-valued function defined on a closed interval is uniformly continuous, from which, as has already been observed, it follows that the intuitionistic continuum is indecomposable. Brouwer gave the intuitionistic conception of mathematics an explicitly subjective twist by introducing the creative subject. The creative subject was conceived as a kind of idealized mathematician for whom time is divided into discrete sequential stages, during each of which he may test various propositions, attempt to construct proofs, and so on. In particular, it can always be determined whether or not at stage n the creative subject has a proof of a particular mathematical proposition p. While the theory of the creative subject remains controversial, its purely mathematical consequences can be obtained by a simple postulate which is entirely free of subjective and temporal elements. 
The creative subject allows us to define, for a given proposition p, a binary sequence <a[n]> by a[n] = 1 if the creative subject has a proof of p at stage n; a[n] = 0 otherwise. Now if the construction of these sequences is the only use made of the creative subject, then references to the latter may be avoided by postulating the principle known as Kripke's Scheme: for each proposition p there exists an increasing binary sequence <a[n]> such that p holds if and only if a[n] = 1 for some n. Taken together, these principles have been shown to have remarkable consequences for the indecomposability of subsets of the continuum. Not only is the intuitionistic continuum indecomposable (that is, cannot be partitioned into two nonempty disjoint parts), but, assuming the Continuity Principle and Kripke's Scheme, it remains indecomposable even if one pricks it with a pin. The intuitionistic continuum has, as it were, a syrupy nature, so that one cannot simply take away one point. If in addition Bar Induction is assumed, then, still more surprisingly, indecomposability is maintained even when all the rational points are removed from the continuum. Finally, it has been shown that a natural notion of infinitesimal can be developed within intuitionistic mathematics (see Vesley [1981]), the idea being that an infinitesimal should be a “very small” real number in the sense of not being known to be distinguishable from zero—that is, not known to be strictly greater than or less than zero. A major development in the refounding of the concept of infinitesimal took place in the nineteen seventies with the emergence of synthetic differential geometry, also known as smooth infinitesimal analysis (SIA)^[50]. Based on the ideas of the American mathematician F. W. Lawvere, and employing the methods of category theory, smooth infinitesimal analysis provides an image of the world in which the continuous is an autonomous notion, not explicable in terms of the discrete.
It provides a rigorous framework for mathematical analysis in which every function between spaces is smooth (i.e., differentiable arbitrarily many times, and so in particular continuous) and in which the use of limits in defining the basic notions of the calculus is replaced by nilpotent infinitesimals, that is, by quantities so small (but not actually zero) that some power—most usefully, the square—vanishes. Since in SIA all functions are continuous, it embodies in a striking way Leibniz's principle of continuity Natura non facit saltus. In what follows, we use bold R to distinguish the real line in SIA from its counterparts in classical and constructive analysis. In the usual development of the calculus, for any differentiable function f on the real line R, y = f(x), it follows from Taylor's theorem that the increment δy = f(x + δx) − f(x) in y attendant upon an increment δx in x is determined by an equation of the form

δy = f ′(x)δx + A(δx)^2, (1)

where f ′(x) is the derivative of f(x) and A is a quantity whose value depends on both x and δx. Now if it were possible to take δx so small (but not demonstrably identical with 0) that (δx)^2 = 0, then (1) would assume the simple form

f(x + δx) − f(x) = δy = f ′(x) δx. (2)

We shall call a quantity having the property that its square is zero a nilsquare infinitesimal or simply a microquantity. In SIA “enough” microquantities are present to ensure that equation (2) holds nontrivially for arbitrary functions f: R → R. (Of course (2) holds trivially in standard mathematical analysis because there 0 is the sole microquantity in this sense.) The meaning of the term “nontrivial” here may be explicated in the following way. If we replace δx by the letter ε standing for an arbitrary microquantity, (2) assumes the form

f(x + ε) − f(x) = εf ′(x). (3)

Ideally, we want the validity of this equation to be independent of ε, that is, given x, for it to hold for all microquantities ε.
In that case the derivative f ′(x) may be defined as the unique quantity D such that the equation f(x + ε) − f(x) = εD holds for all microquantities ε. Setting x = 0 in this equation, we get in particular

f(ε) = f(0) + εD, (4)

for all ε. It is equation (4) that is taken as axiomatic in smooth infinitesimal analysis. Let us write Δ for the set of microquantities, that is, Δ = {x: x ∈ R ∧ x^2 = 0}. Then it is postulated that, for any f: Δ → R, there is a unique D ∈ R such that equation (4) holds for all ε. This says that the graph of f is a straight line passing through (0, f(0)) with slope D. Thus any function on Δ is what mathematicians term affine, and so this postulate is naturally termed the principle of microaffineness. It means that Δ cannot be bent or broken: it is subject only to translations and rotations—and yet is not (as it would have to be in ordinary analysis) identical with a point. Δ may be thought of as an entity possessing position and attitude, but lacking true extension. Now consider the space Δ^Δ of maps from Δ to itself. It follows from the microaffineness principle that the subspace (Δ^Δ)[0] of Δ^Δ consisting of maps vanishing at 0 is isomorphic to R^[51]. The space Δ^Δ is a monoid^[52] under composition which may be regarded as acting on Δ by evaluation: for f ∈ Δ^Δ, f · ε = f(ε). Its subspace (Δ^Δ)[0] is a submonoid naturally identified as the space of ratios of microquantities. The isomorphism between (Δ^Δ)[0] and R noted above is easily seen to be an isomorphism of monoids (where R is considered a monoid under its usual multiplication). It follows that R itself may be regarded as the space of ratios of microquantities. This was essentially the view of Euler, who regarded (real) numbers as representing the possible results of calculating the ratio 0/0. For this reason Lawvere has suggested that R be called the space of Euler reals.
If we think of a function y = f(x) as defining a curve, then, for any a, the image under f of the “microinterval” Δ + a obtained by translating Δ to a is straight and coincides with the tangent to the curve at x = a. In this sense each curve is “infinitesimally straight”. From the principle of microaffineness we deduce the important principle of microcancellation, viz. If εa = εb for all ε, then a = b. For the premise asserts that the graph of the function g: Δ → R defined by g(ε) = aε has both slope a and slope b: the uniqueness condition in the principle of microaffineness then gives a = b. The principle of microcancellation supplies the exact sense in which there are “enough” infinitesimals in smooth infinitesimal analysis. From the principle of microaffineness it also follows that all functions on R are continuous, that is, send neighbouring points to neighbouring points. Here two points x, y on R are said to be neighbours if x − y is in Δ, that is, if x and y differ by a microquantity. To see this, given f: R → R and neighbouring points x, y, note that y = x + ε with ε in Δ, so that f(y) − f(x) = f(x + ε) − f(x) = εf ′(x). But clearly any multiple of a microquantity is also a microquantity, so εf ′(x) is a microquantity, and the result follows. In fact, since equation (3) holds for any f, it also holds for its derivative f ′; it follows that functions in smooth infinitesimal analysis are differentiable arbitrarily many times, thereby justifying the use of the term “smooth”. Let us derive a basic law of the differential calculus, the product rule: (fg)′ = f′g + fg′. To do this we compute (fg)(x + ε) = (fg)(x) + ε(fg)′(x) = f(x)g(x) + ε(fg)′(x), (fg)(x + ε) = f(x + ε)g(x + ε) = [f(x) + εf′(x)]·[g(x) + εg′(x)] = f(x)g(x) + ε(f′g + fg′) + ε^2f′g′ = f(x)g(x) + ε(f′g + fg′), since ε^2 = 0. Therefore ε(fg)′ = ε(f′g + fg′), and the result follows by microcancellation.
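The nilsquare arithmetic used in this derivation can be mimicked on a computer with dual numbers, pairs a + bε multiplied under the rule ε² = 0; this is the mechanism behind forward-mode automatic differentiation. The sketch below is offered only as a computational analogue of SIA's microquantities, not as a model of SIA itself, and all names in it are ours:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """a + b*eps, computed under the rule eps**2 = 0."""
    a: float  # standard part
    b: float  # coefficient of the nilsquare infinitesimal

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + b1 a2) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def deriv(f, x):
    """Read f'(x) off the eps-coefficient of f(x + eps), as in equation (3)."""
    return f(Dual(x, 1.0)).b

# the product rule (fg)' = f'g + fg', checked at a point
f = lambda t: t * t          # f(x) = x^2,  f'(x) = 2x
g = lambda t: t * t * t      # g(x) = x^3,  g'(x) = 3x^2
x0 = 1.5
lhs = deriv(lambda t: f(t) * g(t), x0)
rhs = deriv(f, x0) * g(x0) + f(x0) * deriv(g, x0)
```

Just as in the derivation above, the ε²-term is discarded automatically by the multiplication rule, so the ε-coefficient of a product carries exactly f′g + fg′.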
A stationary point a in R of a function f: R → R is defined to be one in whose vicinity “infinitesimal variations” fail to change the value of f, that is, such that f(a + ε) = f(a) for all ε. This means that f(a) + εf ′(a) = f(a), so that εf′(a) = 0 for all ε, whence it follows from microcancellation that f′(a) = 0. This is Fermat's rule. An important postulate concerning stationary points that we adopt in smooth infinitesimal analysis is the Constancy Principle. If every point in an interval J is a stationary point of f: J → R (that is, if f′ is identically 0), then f is constant. Put succinctly, “universal local constancy implies global constancy”. It follows from this that two functions with identical derivatives differ by at most a constant. In ordinary analysis the continuum R is connected in the sense that it cannot be split into two nonempty subsets neither of which contains a limit point of the other. In smooth infinitesimal analysis it has the vastly stronger property of indecomposability: it cannot be split in any way whatsoever into two disjoint nonempty subsets. For suppose R = U ∪ V with U ∩ V = ∅. Define f: R → {0, 1} by f(x) = 1 if x ∈ U, f(x) = 0 if x ∈ V. We claim that f is constant. For we have (f(x) = 0 or f(x) = 1) & (f(x + ε) = 0 or f(x + ε) = 1). This gives 4 possibilities:

(i) f(x) = 0 & f(x + ε) = 0
(ii) f(x) = 0 & f(x + ε) = 1
(iii) f(x) = 1 & f(x + ε) = 0
(iv) f(x) = 1 & f(x + ε) = 1

Possibilities (ii) and (iii) may be ruled out because f is continuous. This leaves (i) and (iv), in either of which f(x) = f(x + ε). So f is locally, and hence globally, constant, that is, constantly 1 or 0. In the first case V = ∅, and in the second U = ∅. We observe that the postulates of smooth infinitesimal analysis are incompatible with the law of excluded middle of classical logic. This incompatibility can be demonstrated in two ways, one informal and the other rigorous. First the informal argument.
Consider the function f defined for real numbers x by f(x) = 1 if x = 0 and f(x) = 0 whenever x ≠ 0. If the law of excluded middle held, each real number would then be either equal or unequal to 0, so that the function f would be defined on the whole of R. But, considered as a function with domain R, f is clearly discontinuous. Since, as we know, in smooth infinitesimal analysis every function on R is continuous, f cannot have domain R there^[53]. So the law of excluded middle fails in smooth infinitesimal analysis. To put it succinctly, universal continuity implies the failure of the law of excluded middle. Here now is the rigorous argument. We show that the failure of the law of excluded middle can be derived from the principle of infinitesimal cancellation. To begin with, if x ≠ 0, then x^2 ≠ 0, so that, if x^2 = 0, then necessarily not x ≠ 0. This means that for all infinitesimal ε, not ε ≠ 0. (*) Now suppose that the law of excluded middle were to hold. Then we would have, for any ε, either ε = 0 or ε ≠ 0. But (*) allows us to eliminate the second alternative, and we infer that, for all ε, ε = 0. This may be written for all ε, ε·1 = ε·0, from which we derive by microcancellation the falsehood 1 = 0. So again the law of excluded middle must fail. The “internal” logic of smooth infinitesimal analysis is accordingly not full classical logic. It is, instead, intuitionistic logic, that is, the logic derived from the constructive interpretation of mathematical assertions. In our brief sketch we did not notice this “change of logic” because, like much of elementary mathematics, the topics we discussed are naturally treated by constructive means such as direct computation. What are the algebraic and order structures on R in SIA? As far as the former is concerned, there is little difference from the classical situation: in SIA R is equipped with the usual addition and multiplication operations under which it is a field. 
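The steps of the rigorous argument can be displayed as a single chain (a restatement in symbols of what was just said, with nothing added):

```latex
\begin{aligned}
&(*)\quad \forall\varepsilon\,\neg(\varepsilon \neq 0)\\
&\text{LEM:}\quad \forall\varepsilon\,(\varepsilon = 0 \,\vee\, \varepsilon \neq 0)
  \;\Rightarrow_{(*)}\; \forall\varepsilon\,(\varepsilon = 0)\\
&\Rightarrow\; \forall\varepsilon\,(\varepsilon\cdot 1 = \varepsilon\cdot 0)
  \;\Rightarrow_{\text{microcancellation}}\; 1 = 0.
\end{aligned}
```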
In particular, R satisfies the condition that each x ≠ 0 has a multiplicative inverse. Notice, however, that since in SIA no microquantity (apart from 0 itself) is provably ≠ 0, microquantities are not required to have multiplicative inverses (a requirement which would lead to inconsistency). From a strictly algebraic standpoint, R in SIA differs from its classical counterpart only in being required to satisfy the principle of infinitesimal cancellation. The situation is different, however, as regards the order structure of R in SIA. Because of the failure of the law of excluded middle, the order relation < on R in SIA cannot satisfy the trichotomy x < y ∨ y < x ∨ x = y, and accordingly < must be a partial, rather than a total ordering. Since microquantities do not have multiplicative inverses, and R is a field, any microquantity ε must satisfy ¬ε < 0 ∧ ¬ε > 0. Accordingly, if we define the relation ≤ by x ≤ y iff ¬(y < x), then, for any microquantity ε we have ε ≤ 0 ∧ ε ≥ 0. Using these ideas we can identify three infinitesimal neighbourhoods of 0 on R in SIA, each of which is included in its successor. First, the set Δ of microquantities itself; next, the set I = {x ∈ R : ¬x ≠ 0} of elements indistinguishable from 0; finally, the set J = {x ∈ R : x ≤ 0 ∧ x ≥ 0} of elements neither less nor greater than 0. These three may be thought of as the infinitesimal neighbourhoods of 0 defined algebraically, logically, and order-theoretically, respectively. In certain models of SIA the system of natural numbers possesses some subtle and intriguing features which make it possible to introduce another type of infinitesimal—the so-called invertible infinitesimals—resembling those of nonstandard analysis, whose presence engenders yet another infinitesimal neighbourhood of 0 properly containing all those introduced above. In SIA the set N of natural numbers can be defined to be the smallest subset of R which contains 0 and is closed under the operation of adding 1.
In some models of SIA, R satisfies the Archimedean principle that every real number is majorized by a natural number. However, models of SIA have been constructed (see Moerdijk and Reyes [1991]) in which R is not Archimedean in this sense. In these models it is more natural to consider, in place of N, the set N* of smooth natural numbers defined by N* = {x ∈ R: 0 ≤ x ∧ sin πx = 0}. N* is the set of points of intersection of the smooth curve y = sin πx with the positive x-axis. In these models R can be shown to possess the Archimedean property provided that in the definition N is replaced by N*. In these models, then, N is a proper subset of N*: the members of N* − N may be considered nonstandard integers. Multiplicative inverses of nonstandard integers are infinitesimals, but, being themselves invertible, they are of a different type from the ones we have considered so far. It is quite easy to show that they, as well as the infinitesimals in J (and so also those in Δ and I), are all contained in the set K = {x ∈ R: ∀n ∈ N(−1/(n+1) < x < 1/(n+1))}—a further infinitesimal neighbourhood of 0—of infinitely small elements of R. The members of the set In = {x ∈ K: x ≠ 0} of invertible elements of K are naturally identified as invertible infinitesimals. Being obtained as inverses of “infinitely large” reals (i.e., reals r satisfying ∀n ∈ N(n < r) ∨ ∀n ∈ N(r < −n)), the members of In are the counterparts in SIA of the infinitesimals of nonstandard analysis. Finally, a brief word on the models of SIA. These are the so-called smooth toposes, categories (see entry on category theory) of a certain kind in which all the usual mathematical operations can be performed but whose internal logic is intuitionistic and in which every map between spaces is smooth, that is, differentiable without limit. It is this “universal smoothness” that makes the presence of infinitesimal objects such as Δ possible.
The construction of smooth toposes (see Moerdijk and Reyes [1991]) guarantees the consistency of SIA with intuitionistic logic. This is so despite the evident fact that SIA is not consistent with classical logic. • Aristotle (1980). Physics, 2 volumes, trans. Cornford and Wickstead. Loeb Classical Library, Cambridge, MA: Harvard University Press and Heinemann. • ––– (1996). Metaphysics, Oeconomica, Magna Moralia, 2 volumes, trans. Cooke and Tredinnick. Loeb Classical Library, Cambridge, MA: Harvard University Press. • ––– (1996a). The Categories, On Interpretation, Prior Analytics, trans. Cooke and Tredinnick. Loeb Classical Library, Cambridge, MA: Harvard University Press. • ––– (2000). On the Heavens, trans. Guthrie. Loeb Classical Library, Cambridge, MA: Harvard University Press. • ––– (2000a). On Sophistical Refutations, On Coming-to-Be and Passing Away, On the Cosmos, trans. Forster and Furley. Loeb Classical Library, Cambridge, MA: Harvard University Press. • Barnes, J. (1986). The Presocratic Philosophers, London: Routledge. • Baron, M. E. (1987). The Origins of the Infinitesimal Calculus, New York: Dover. • Beeson, M.J. (1985). Foundations of Constructive Mathematics, Berlin: Springer-Verlag. • Bell, E. T. (ed.) (1945). The Development of Mathematics, 2^nd edition, New York: McGraw-Hill. • ––– (1965). Men of Mathematics, 2 volumes, London: Penguin Books. • Bell, J. L. (1998). A Primer of Infinitesimal Analysis, Cambridge: Cambridge University Press. • ––– (2000). “Hermann Weyl on intuition and the continuum,” Philosophia Mathematica, 8: 259–273. • ––– (2001). “The continuum in smooth infinitesimal analysis,” in Schuster, Berger and Osswald (2001), pp. 19–24. • ––– (2003). “Hermann Weyl’s later philosophical views: his divergence from Husserl,” in Husserl and the Sciences, R. Feist (ed.), Ottawa: University of Ottawa Press. • ––– (2005). The Continuous and the Infinitesimal in Mathematics and Philosophy, Milan: Polimetrica S.A. • Berkeley, G. (1960).
Principles of Human Knowledge, New York: Doubleday. • Bishop, E. (1967). Foundations of Constructive Analysis, New York: McGraw-Hill. • Bishop, E., and D. Bridges (1985). Constructive Analysis, Berlin: Springer. • Bolzano, B. (1950). Paradoxes of the Infinite, trans. Prihovsky, London: Routledge & Kegan Paul. • Bos, H. (1974). “Differentials, higher order differentials and the derivative in the Leibnizian Calculus,” Archive for History of Exact Sciences, 14: 1–90. • Boyer, C., (1959). The History of the Calculus and its Conceptual Development, New York: Dover. • ––– (1968). A History of Mathematics, New York: Wiley. • ––– and U. Merzbach (1989). A History of Mathematics, 2^nd edition, New York: Wiley. • Brentano, F. (1988). Philosophical Investigations on Space, Time and the Continuum trans. Smith, London: Croom Helm. • Bridges, D. (1994). “A constructive look at the real line,” in Ehrlich (ed.) (1994), pp. 29–92. • ––– (1999). “Constructive mathematics: a foundation for computable analysis,” Theoretical Computer Science, 219: 95–109. • ––– and F. Richman (1987). Varieties of Constructive Mathematics, Cambridge: Cambridge University Press. • Brouwer, L. E. J. (1975). Collected Works: 1. A. Heyting (ed.), Amsterdam: North-Holland. • Burns, C. D. (1916). “William of Ockham on continuity,” Mind, 25: 506–12. • Cajori, F. (1919). A Short History of the concepts of Limits and Fluxions, Chicago: Open Court. • Cantor, G., (1961). Contributions to the Founding of the Theory of Transfinite Numbers, New York: Dover. • Carnot, L. (1832). Reflexions sur la Métaphysique du Calcul Infinitesimal, trans. Browell, Oxford: Parker. • Chevalier, G. et al. (1929). “Continu et Discontinu,” Cahiers de la Nouvelle Journée, 15: PAGES. • Child, J. (1916) The Geometrical Lectures of Isaac Barrow, Chicago: Open Court. • Cusanus, N. (1954). Of Learned Ignorance, trans. Heron, London: Routledge and Kegan Paul. • D’ Alembert, J., and D. Diderot (1966). 
Encyclopédie, ou, Dictionnaire raisonné des sciences, des arts et des métiers / (mis en ordre et publié par Diderot, quant à la partie mathématique, par d'Alembert), Stuttgart: Cannstatt = Frommann. • Dauben, J. (1979). Georg Cantor: His Mathematics and Philosophy of the Infinite, Cambridge, MA: Harvard University Press. • Descartes, R. (1927). Discourse on Method, Meditations, and Principles of Philosophy (Everyman's Library), London: Dent. • Dedekind, R. (1963). Essays on the Theory of Numbers, New York: Dover. • Dugas, R. (1988). A History of Mechanics, New York: Dover. • Dummett, M. (1977). Elements of Intuitionism, Oxford: Clarendon Press. • Ehrlich, P., ed. (1994). Real Numbers, Generalizations of the Reals, and Theories of Continua, Dordrecht: Kluwer. • Ehrlich, P. (1994a). All numbers great and small, in Ehrlich (ed.) (1994), pp. 239–258. • Euler, L. (1843). Letters of Euler on Different Subjects in Natural Philosophy: Addressed to a German Princess; with Notes, and a Life of Euler, by David Brewster; Containing a Glossary of Scientific Term[s] with Additional Notes, by John Griscom, New York: Harper and Brothers. • Euler, L. (1990). Introduction to Analysis of the Infinite, trans. Blanton, New York: Springer. • Evans, M. (1955). “Aristotle, Newton, and the Theory of Continuous Magnitude,” Journal of the History of Ideas, 16: 548–557. • Ewald, W. (1999). From Kant to Hilbert, A Source Book in the Foundations of Mathematics, Volumes I and II, Oxford: Oxford University Press. • Fisher, G. (1978). “Cauchy and the Infinitely Small,” Historia Mathematica, 5: 313–331. • ––– (1981), “The infinite and infinitesimal quantities of du Bois-Reymond and their reception,” Archive for History of Exact Sciences, 24/2: 101–163. • ––– (1994) “Veronese's non-Archimedean linear continuum,” in Ehrlich (ed.) (1994), pp. 107–146. • Folina, J. (1992). Poincaré and the Philosophy of Mathematics, New York: St. Martin's Press. • Furley, D. (1967). Two Studies in the Greek Atomists. 
Princeton: Princeton University Press. • ––– (1982). “The Greek commentators' treatment of Aristotle's theory of the continuous,” in Kretzmann (1982), pp. 17–36. • ––– (1987). The Greek Cosmologists. Cambridge: Cambridge University Press. • ––– and R. Allen (eds.) (1970). Studies in Presocratic Philosophy, Vol. I. London: Routledge and Kegan Paul. • Galilei, G. (1954). Dialogues Concerning Two New Sciences, trans. Crew and De Salvio. New York: Dover. • Gray, J. (1973). Ideas of Space: Euclidean, Non-Euclidean, and Relativistic, Oxford: Clarendon Press. • Grant, E. (ed.) (1974). A Source Book in Medieval Science, Cambridge, MA: Harvard University Press. • Gregory, J. (1931). A Short History of Atomism, London: A. & C. Black. • Grünbaum, A. (1967). Zeno's Paradoxes and Modern Science, London: Allen and Unwin. • Hallett, M. (1984). Cantorian Set Theory and Limitation of Size, Oxford: Clarendon Press. • Heath, T. (1949). Mathematics in Aristotle, Oxford: Oxford University Press. • ––– (1981). A History of Greek Mathematics, 2 volumes, New York: Dover. • Heyting, A. (1956). Intuitionism: An Introduction, Amsterdam: North-Holland. • Hobson, E. W. (1957). The Theory of Functions of a Real Variable, New York: Dover. • Hocking, J.G. and G. S. Young (1961). Topology, Reading, MA: Addison-Wesley. • Houzel, C., et al. (1976). Philosophie et Calcul de l'Infini, Paris: Maspero. • Hyland, J. (1979). “Continuity in spatial toposes,” in Fourman et al. (1979), pp. 442–465. • Jesseph, D. (1993). Berkeley's Philosophy of Mathematics, Chicago: University of Chicago Press. • Johnstone, P. T. (1977). Topos Theory, London: Academic Press. • ––– (1982). Stone Spaces (Cambridge Studies in Advanced Mathematics, Volume 3), Cambridge: Cambridge University Press. • ––– (1983). “The point of pointless topology,” Bull. Amer. Math. Soc. (New Series) 8 (1): 41–53. • ––– (2002). Sketches of an Elephant: A Topos Theory Compendium, Vols.
I and II (Oxford Logic Guides, Volumes 43 and 44), Oxford: Clarendon Press. • Kahn, C. (2001). Pythagoras and the Pythagoreans: A Brief History, Indianapolis: Hackett. • Kant, I. (1964). Critique of Pure Reason, London: Macmillan. • ––– (1970). Metaphysical Foundations of natural Science, trans. Ellington, Indianapolis: Bobbs-Merrill. • ––– (1977). Prolegomena to Any Future Metaphysics, trans. Carus, revised Ellington, Indianapolis: Hackett. • ––– (1992). Theoretical Philosophy 1755–1770, Watford and Meerbote (eds.), Cambridge: Cambridge University Press. • Keisler, H. (1994). “The hyperreal line,” in Ehrlich (ed.) (1994), pp. 207–238. • Kirk, G. S., J. E. Raven, and M. Schofield (1983). The Presocratic Philosophers, 2^nd edition, Cambridge: Cambridge University Press. • Kline, M. (1972). Mathematical Thought from Ancient to Modern Times, 3 volumes, Oxford: Oxford University Press. • Kock, A. (1981). Synthetic Differential Geometry, Cambridge: Cambridge University Press. • Körner, S. (1955). Kant, Harmondsworth: Penguin Books. • ––– (1960). The Philosophy of Mathematics, London: Hutchinson. • Kretzmann, N., (ed.) (1982). Infinity and Continuity in Ancient and Medieval Thought, Ithaca: Cornell University Press. • Lavendhomme, R. (1996). Basic Concepts of Synthetic Differential Geometry, Dordrecht: Kluwer. • Lawvere, F. W. (1971). “Quantifiers and sheaves,” in Actes du Congrès Intern. Des Math. Nice 1970, tome I. Paris: Gauthier-Villars, pp. 329-34. • ––– (1980). “Toward the description in a smooth topos of the dynamically possible motions and deformations of a continuous body,” Cahiers de Top. et Géom. Diff., 21: 377–92. • Leibniz, G. (1951). Selections. Wiener (ed.), New York: Scribners. • ––– (1961). Philosophical Writings, trans. Morris (Everyman Library), London: Dent. • ––– (2001). The Labyrinth of the Continuum : Writings on the Continuum Problem, 1672–1686, trans. Arthur, New Haven: Yale University Press. • Mac Lane, S. (1986). 
Mathematics: Form and Function, Berlin: Springer-Verlag. • Mancosu, P. (1996). Philosophy of Mathematics and Mathematical Practice in the Seventeenth Century, New York, Oxford: Oxford University Press. • ––– (1998). From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920s, Oxford: Clarendon Press. • McLarty, C. (1988). “Defining sets as sets of points of spaces,” J. Philosophical Logic, 17: 75–90. • ––– (1992). Elementary Categories, Elementary Toposes, Oxford: Oxford University Press. • Miller, F. (1982). “Aristotle against the atomists,” in Kretzmann (1982), pp. 87–111. • Moerdijk, I. and Reyes, G. E. (1991). Models for Smooth Infinitesimal Analysis, Berlin: Springer-Verlag. • Moore, A. W. (1990). The Infinite, London: Routledge. • Murdoch, J. (1957). Geometry and the Continuum in the 14^th Century: A Philosophical Analysis of Th. Bradwardine's Tractatus de Continuo. Ph.D. Dissertation, University of Wisconsin. • ––– (1982). “William of Ockham and the logic of infinity and continuity,” in Kretzmann (1982), pp. 165–206. • Newton, I. (1730). Opticks, New York: Dover, 1952. • Newton, I. (1962). Principia, Vols. I, II, trans. Motte, revised Cajori, Berkeley: University of California Press. • Nicholas of Autrecourt (1971). The Universal Treatise, trans. Kennedy et al., Milwaukee: Marquette University Press. • Peirce, C.S. (1976). The New Elements of Mathematics, Volume III, Carolyn Eisele (ed.), The Hague: Mouton Publishers and Humanities Press. • ––– (1992). Reasoning and the Logic of Things, Kenneth Laine Ketner (ed.), Cambridge, MA: Harvard University Press. • Peters, F. (1967). Greek Philosophical Terms, New York: New York University Press. • Poincaré, H. (1946). Foundations of Science, trans. G. Halsted, New York: Science Press. • ––– (1963). Mathematics and Science: Last Essays, New York: Dover. • Pycior, H. (1987). “Mathematics and Philosophy: Wallis, Hobbes, Barrow, and Berkeley.” Journal of the History of Ideas, 48 (2): 265–286.
• Pyle, A. (1997). Atomism and Its Critics, London: Thoemmes Press. • Rescher, N. (1967). The Philosophy of Leibniz, Upper Saddle River: Prentice-Hall. • Robinson, A. (1996). Non-Standard Analysis, Princeton: Princeton University Press. • Russell, B. (1958). A Critical Exposition of the Philosophy of Leibniz, 2^nd edition, London: Allen & Unwin. • Sambursky, S. (1963). The Physical World of the Greeks, London: Routledge and Kegan Paul. • ––– (1971). Physics of the Stoics, London: Hutchinson. • Schuster, P., U. Berger and H. Osswald (2001). Reuniting the Antipodes—Constructive and Nonstandard Views of the Continuum, Dordrecht: Kluwer. • Sorabji, R. (1982). “Atoms and time atoms,” in Kretzmann (1982), pp. 37–86. • ––– (1983). Time, Creation and the Continuum, Ithaca: Cornell University Press. • Stokes, M. (1971). One and Many in Presocratic Philosophy, Cambridge, MA: Harvard University Press. • Stones, G.B. (1928). “The atomic view of matter in the XV^th, XVI^th and XVII^th centuries,” Isis, 10: 444–65. • Struik, D. (1948). A Concise History of Mathematics, New York: Dover • Truesdell, C. (1972). Leonard Euler, Supreme Geometer. In Irrationalism in the Eighteenth Century, H. Pagliaro (ed.), Cleveland: Case Western Reserve University Press. • Van Atten, M., D. van Dalen and R. Tieszen (2002). “The phenomenology and mathematics of the intuitive continuum,” Philosophia Mathematica, 10: 203–236. • Van Cleve, J. (1981). “Reflections on Kant's 2^nd antinomy,” Synthese, 47: 1147–50. • Van Dalen, D. (1995). “Hermann Weyl's intuitionistic mathematics,” Bulletin of Symbolic Logic, 1 (2): 145–169. • ––– (1997). “How connected is the intuitionistic continuum?,” Journal of Symbolic Logic, 62: 1147–50. • ––– (1998). “From a Brouwerian point of view,” Philosophia Mathematica, 6: 209–226. • Van Melsen, A. (1952). From Atomos to Atom, trans. Koren, Pittsburgh: Duquesne University Press. • Vesley, R. (1981). 
An Intuitionistic Infinitesimal Calculus (Lecture Notes in Mathematics: 873), Berlin: Springer-Verlag, pp. 208–212. • Wagon, S. (1993). The Banach-Tarski Paradox, Cambridge: Cambridge University Press. • Weyl, H. (1929). Consistency in Mathematics, Pamphlet 16, Houston: Rice Institute Pamphlets, pp. 245–265. Reprinted in Weyl (1968) II, pp. 150–170. • ––– (1932). The Open World: Three Lectures on the Metaphysical Implications of Science, New Haven: Yale University Press. • ––– (1940). “The ghost of modality,” Philosophical Essays in Memory of Edmund Husserl, Cambridge, MA: Harvard University Press. Reprinted in Weyl (1968) III, pp. 684–709. • ––– (1949). Philosophy of Mathematics and Natural Science, Princeton: Princeton University Press. • ––– (1950). Space-Time-Matter, trans. Brose, New York: Dover. (English translation of Raum, Zeit, Materie, Berlin: Springer Verlag, 1921.) • ––– (1954). “Address on the unity of knowledge,” Columbia University Bicentennial Celebration. Reprinted in Weyl (1968) IV, pp. 623–630. • ––– (1968). Gesammelte Abhandlungen, I-IV, K. Chandrasehharan (ed.), Berlin: Springer-Verlag. • ––– (1969). “Insight and reflection,” Lecture delivered at the University of Lausanne, Switzerland, May 1954. Translated from German original in Studia Philosophica 15, 1955. In T.L. Saaty and F.J. Weyl (eds.), The Spirit and Uses of the Mathematical Sciences, pp. 281–301. New York: McGraw-Hill, 1969. • ––– (1985). “Axiomatic versus constructive procedures in mathematics,” T. Tonietti (ed.), Mathematical Intelligencer, 7 (4): 10–17, 38. • ––– (1987). The Continuum: A Critical Examination of the Foundation of Analysis, trans. S. Pollard and T. Bole, Philadelphia: Thomas Jefferson University Press. (English translation of Das Kontinuum, Leipzig: Veit, 1918.) • ––– (1998). “On the New Foundational Crisis in Mathematics,” in Mancosu (1998), pp. 86–122. (English translation of ‘Über der neue Grundlagenkrise der Mathematik,’ Mathematische Zeitschrift 10, 1921, pp. 
37–79.) • ––– (1998a). “On the Current Epistemological Situation in Mathematics,” in Mancosu (1998), pp. 123–142. (English translation of ‘Die heutige Erkenntnislage in der Mathematik, Symposion 1, 1925–27, pp. 1–32.) • White, M. J. (1992). The Continuous and the Discrete: Ancient Physical Theories from a Contemporary Perspective, Oxford: Clarendon Press, Oxford. • Whyte, L. (1961). Essay on Atomism from Peano to 1960, Middletown, CT: Wesleyan University Press. • Wike, V. (1982). Kant's Antinomies of Reason: Their Origin and Resolution, Lanham, MD: University Press of America. Aristotle | Berkeley, George | Bolzano, Bernard | Brentano, Franz | Brouwer, Luitzen Egbertus Jan | category theory | change | Cusanus, Nicolaus [Nicolas of Cusa] | Democritus | Descartes, René | Duns Scotus, John | Epicurus | Galileo Galilei | geometry: finitism in | Kant, Immanuel | Kepler, Johannes | Leibniz, Gottfried Wilhelm | Leucippus | logic: intuitionistic | mathematics: constructive | Newton, Isaac | Ockham [Occam], William | Peirce, Charles Sanders | Zeno of Elea: Zeno's paradoxes For a comprehensive account of the evolution of the concepts of continuity and the infinitesimal, see Bell (2005), on which the present article is based.
Projective integration of expensive multiscale stochastic simulation

Detailed microscale simulation is typically too computationally expensive for the long time simulations necessary to explore macroscale dynamics. Projective integration uses bursts of the microscale simulator, on microscale time steps, and then computes an approximation to the system over a macroscale time step by extrapolation. Projective integration has the potential to be an effective method to compute the long time dynamic behaviour of multiscale systems. However, many multiscale systems are significantly influenced by noise. By a maximum likelihood estimation, we fit a linear stochastic differential equation to short bursts of data. The analytic solution of the linear stochastic differential equation then estimates the solution over a macroscale, projective integration, time step. We explore how the noise affects the projective integration in two different methods. Monte Carlo simulation suggests design parameters offering stability and accuracy for the algorithms. The algorithms developed here may be applied to compute the long time dynamic behaviour of multiscale systems with noise and to exploit parallel computation.

Original language: English (US)
Pages (from-to): C661-C677
Journal: ANZIAM Journal
Volume: 52
State: Published - 2010
All Science Journal Classification (ASJC) codes: Mathematics (miscellaneous)
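The burst-and-extrapolate loop described in the abstract can be sketched in a few lines. This is a deliberately simplified, deterministic sketch: it uses plain Euler micro-steps and linear extrapolation rather than the paper's maximum-likelihood fit of a linear stochastic differential equation, and every name, step size, and burst length below is invented for illustration.

```python
def projective_step(f, y, dt_micro, k_micro, dt_macro):
    # Burst: k_micro inner Euler steps on the microscale time step.
    prev = y
    for _ in range(k_micro):
        prev, y = y, y + dt_micro * f(y)
    # Projective jump: extrapolate linearly over the remainder of the
    # macroscale step, using the slope of the last micro-step.
    slope = (y - prev) / dt_micro
    return y + (dt_macro - k_micro * dt_micro) * slope

# Toy test problem: dy/dt = -y, whose exact solution is exp(-t).
y, t, dt_macro = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    y = projective_step(lambda s: -s, y, dt_micro=0.001, k_micro=10,
                        dt_macro=dt_macro)
    t += dt_macro
# y now approximates exp(-1) ≈ 0.368 (within a few percent)
```

Each macroscale step of 0.1 would need 100 microscale steps if simulated outright, but only 10 are actually computed; the extrapolation covers the rest, which is where the computational saving comes from.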
Left Rotation | HackerRank A left rotation operation on an array of size n shifts each of the array's elements 1 unit to the left. Given an integer, d, rotate the array that many steps left and return the result. Function Description Complete the rotateLeft function in the editor below. rotateLeft has the following parameters: • int d: the amount to rotate by • int arr[n]: the array to rotate Returns • int[n]: the rotated array The first line contains two space-separated integers that denote n, the number of integers, and d, the number of left rotations to perform. The second line contains n space-separated integers that describe arr.
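One straightforward way to implement rotateLeft (a sketch; any correct approach is accepted) is with list slicing, reducing d modulo n so that rotation counts larger than the array length behave correctly:

```python
def rotateLeft(d, arr):
    """Return arr rotated d places to the left."""
    # Rotating by len(arr) is a no-op, so only d % len(arr) matters.
    d %= len(arr)
    return arr[d:] + arr[:d]
```

For example, 4 left rotations of [1, 2, 3, 4, 5] yield [5, 1, 2, 3, 4].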
Math Problem Analysis
Mathematical Concepts: Classical Mechanics, Newton's Laws of Motion
Force of gravity: F_g = m * g
Normal force: N = F_g
Frictional force: F_f = μ * N
Newton's second law: F_net = m * a
Suitable Grade Level: High School (Grades 10-12)
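Plugging numbers into the formulas above is direct. In this sketch the mass, friction coefficient, and acceleration are assumed example values, not ones given in the text:

```python
g = 9.8    # gravitational acceleration, m/s^2
m = 5.0    # mass in kg (assumed for this example)
mu = 0.3   # coefficient of friction (assumed)
a = 2.0    # acceleration, m/s^2 (assumed)

F_g = m * g    # force of gravity: F_g = m * g
N = F_g        # normal force on a horizontal surface: N = F_g
F_f = mu * N   # frictional force: F_f = mu * N
F_net = m * a  # Newton's second law: F_net = m * a
```

With these values, the force of gravity and the normal force come to 49.0 N, the frictional force to about 14.7 N, and the net force to 10.0 N.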
In a certain town, 4% of people commute to work by bicycle

MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. MUST SHOW WORK FOR EACH QUESTION, 1-12.

1. (5 points) In a certain town, 4% of people commute to work by bicycle. If a person is selected randomly from the town, what are the odds against selecting someone who commutes by bicycle? A. 1:24 B. 24:1 C. 1:25 D. 24:25

2. (5 points) Among the contestants in a competition are 43 women and 21 men. If 5 winners are randomly selected, what is the probability that they are all men? A. 0.02114 B. 0.00267 C. 0.00367 D. 0.13691

3. (5 points) A tourist in France wants to visit 6 different cities. If the route is randomly selected, what is the probability that she will visit the cities in alphabetical order? A. 1/720 B. 1/36 C. 1/6 D. 720

4. (5 points) A police department reports that the probabilities that 0, 1, 2, and 3 burglaries will be reported in a given day are 0.49, 0.42, 0.06 and 0.03, respectively. What is the mean of the given probability distribution? A. 0.63 B. 1.08 C. 0.56 D. 1.5

5. (5 points) The standard deviation for the binomial distribution with n=40 and p=0.4 is: A. 7.58 B. 3.46 C. 6.73 D. 3.10

6. (5 points) The probability that a person has immunity to a particular disease is 0.3. Find the mean number who have immunity in samples of size 18. A. 5.4 B. 9.0 C. 6.7 D. 7.2

7. (5 points) The incomes of trainees at a local mill are normally distributed with a mean of $1100 and a standard deviation of $120. What percentage of trainees earn less than $900 a month? A. 74.5% B. 9.18% C. 40.82% D. 4.75%

8. (5 points) For a standard normal distribution, find the percentage of data that are between 3 standard deviations below the mean and 2 standard deviations above the mean. A. 84.00% B. 65.33% C. 97.59% D. 15.74%

SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question. Express percents as decimals. Round dollar amounts to the nearest cent.

9. (15 points) Most of us hate buying mangos that are picked too early. Unfortunately, waiting until the mangos are almost ripe to pick carries a risk of having 7% of the picked mangos rot upon arrival at the packing facility. If the packing process is all done by machines without human inspection to pick out any rotten mangos, what would be the probability of having at most 2 rotten mangos packed in a box of 12?

10. We have 7 boys and 3 girls in our church choir. There is an upcoming concert in the local town hall. Unfortunately, we can only have 5 youths in this performance. This performance team of 5 has to be picked randomly from the crew of 7 boys and 3 girls.
a. (5 points) What is the probability that all 3 girls are picked in this team of 5?
b. (5 points) What is the probability that none of the girls are picked in this team of 5?
c. (5 points) What is the probability that 2 of the girls are picked in this team of 5?

11. (15 points) A soda company wants to stimulate sales in this economic climate by giving customers a chance to win a small prize for every bottle of soda they buy. There is a 20% chance that a customer will find a picture of a dancing banana at the bottom of the cap upon opening up a bottle of soda. The customer can then redeem that bottle cap with this picture for a small prize. Now, if I buy a 6-pack of soda, what is the probability that I will win something, i.e., at least win a single small prize?

12. (15 points) A department store manager has decided that a dress code is necessary for team coherence. Team members are required to wear either blue shirts or red shirts. There are 9 men and 7 women in the team. On a particular day, 5 men wore blue shirts and 4 others wore red shirts, whereas 4 women wore blue shirts and 3 others wore red shirts. Apply the Addition Rule to determine the probability of finding men or blue shirts in the team.
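Several of the short-answer probabilities can be checked numerically. The sketch below uses the standard binomial and hypergeometric counting arguments; the problem numbers refer to the list above.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Problem 9: at most 2 rotten mangos in a box of 12, with p = 0.07
p_mangos = sum(binom_pmf(k, 12, 0.07) for k in range(3))   # ≈ 0.953

# Problem 10a: all 3 girls among the 5 picked from 7 boys and 3 girls
# (hypergeometric: the remaining 2 team members come from the 7 boys)
p_girls = comb(3, 3) * comb(7, 2) / comb(10, 5)            # = 1/12 ≈ 0.083

# Problem 11: at least one winning cap in a 6-pack, with p = 0.2
# (complement of winning nothing on all 6 bottles)
p_prize = 1 - (1 - 0.2) ** 6                               # ≈ 0.738
```

The same `binom_pmf` helper also answers parts 10b and 10c if the hypergeometric numerators are swapped in (choosing 0 or 2 of the 3 girls respectively).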
Town Ball Weekly Dave Overlund Dave Overlund Dave Overlund Back for a sixth season at 1390 Granite City Sports with your weekly amateur baseball (Town Team) baseball reports. Including game summaries, upcoming schedules, league standings, tournament summaries and an occasional special interest feature. The plan is to continue to include: Lakewood League/Section 2B, Central Valley League, Sauk Valley League, Victory League South and Stearns County, Clearwater River Cats teams throughout their league playoffs, regional and state tourney games. Check out the Minnesota Amateur Baseball website for additional information: http:// COLD SPRING SPRINGERS 13 SAUK RAPIDS CYCLONES 3 The Springers defeated the Lakewood League and Section 2B rivals the Cyclones, backed by eight timely hits and a grand slam. Lefty Sam Hanson started on the mound for the Spingers, he threw five innings to earn the win. He gave up three hits, issued four walks, surrendered one run and he recorded nine strikeouts. Jack Arnold threw two innings in relief, he gave up three hits and he surrendered two runs. The Springers were led on offense by Brian Hansen, he went 3-for-5 with a grand slam home run and a double for seven huge RBI’s. He was hit by a pitch and he scored a pair of runs. His home run to left center field was estimated at 390’ plus. Drew Bulson went 2-for-4 for a RBI and he scored two runs, he called another great game from behind the plate. Joe Dempsey went 1-for-4 with a sacrifice bunt, he was hit by a pitch and he scored two runs. Brad Olson went 1-for-4 for a RBI, he earned a walk and he scored one run. Jeron Terres was credited with a RBI, he was hit by a pitch and he scored three runs. Alex Jungels earned a pair of walks and he scored a pair of runs, Zach Femrite earned two walks and he scored one run and Eric Loxtercamp went 1-for-4. The Cyclones starting pitcher Noah Klinefelter threw three innings, he gave up four hits, issued one walk and he recorded one strikeout. 
Alex Kreiling threw 2 2/3 innings in relief, he surrendered four runs. Mitch Loegering threw 1/3 of an inning in relief, he gave up two hits, issued one walk and surrendered six runs. Luis Massa threw the final inning in relief, he issued one walk and he recorded two strikeouts. The Cyclones were led of offense by Tyler Hemker, he went 1-for-2 for a RBI and he earned a walk and Louis Massa went 1-for-4 for a RBI. Bjorn Hanson went 2-for-4 and he scored a run and Matt Johnson went 1-for-3 and he scored a run. Mitch Loegering went 1-for-3, veteran Scott Geiger was credited with a RBI and he scored a run and Carlos Gomez earned a pair of walks and he had a stolen base NICK BELL TOURNAMENT GAMES (SECTION 2B GAMES) COLD SPRING SPRINGERS 12 SOBIESKI SKIS 4 The Springers defeated their Section 2B rivals the Skis of the Victory League, backed by sixteen hits, including nine extra base hits. This gave veteran right hander Zach Femrite a great deal of support. He threw a complete game, he did give up thirteen hits, of which elven were singles. He issued three walks, surrendered four runs and he recorded ten strikeouts. The Springers offense was led by four players with multi-hit games with Eric Loxtercamp had a big game, he went 4-for-5 with a home run and two doubles for a RBI and he scored three runs. Eric did cover acres of ground in center field, with flashes of Byron Buxton. Drew Bulson went 4-for-5 with two triples for four RBI’s and he scored a run. Joe Dempsey went 3-for-5 with two doubles for three RBI’s and he scored two runs. Zach Femrite went 3-for-5 with a double for a RBI and he scored four runs. Jeron Terres had a sacrifice fly for a RBI, he earned a walk and he scored one run. Veteran Ryan Holthaus went 1-for-3 and he earned a walk, Alex Jungels went 1-for-5 and Jack Arnold earned a walk, he had a sacrifice bunt and he scored one run. 
The Skis starting pitcher Scott Litchy threw a complete game, he gave up sixteen hits, issued three walks and he surrendered twelve runs. The Skis were led on offense by Riley Hirsch, he went 4-for-4 with two sacrifice flies for two RBI’s and he scored two runs. Collin Eckman went 1-for-2 with a sacrifice fly for a RBI and he was hit by a pitch. Scott Litchy went 2-for-4, he earned a walk and he scored one run. Tyler Jendro went 1-for-5 with a sacrifice fly for a RBI and Austin Weiss went 2-for-4 with two doubles. Thomas Miller went 1-for-5 and scored a run and Matt Baier went 1-for-5.
SAUK RAPIDS CYCLONES 5 MOORHEAD MUDCATS 3
The Cyclones defeated their Section 2B rivals the Mudcats, backed by ten hits, two of them for extra bases, including a walk off three run home run. Tyler Bjork started on the mound, he threw seven innings, he gave up seven hits, issued three walks, surrendered no runs and he recorded nine strikeouts. Brendan Ehlers threw one inning in relief, he gave up one hit and he surrendered one run. David Kroger Jr. threw two innings in relief to earn the win, he gave up three hits, surrendered two runs and he recorded five strikeouts. The Cyclones were led on offense by Mitch Loegering, he went 3-for-5 with a walk off three run home run and he scored a pair of runs. David Kroger Jr. went 1-for-3 with a double and he was hit by a pitch and Luis Massa went 1-for-5 with a double and he scored one run. Logan Siemers was credited with a RBI, he had called a great game behind the plate. Bjorn Hanson went 1-for-4, he earned a walk and he scored a run. Tyler Bjork went 1-for-4 and he scored a run and Tommy Wippler went 1-for-5. Veteran Scott Geiger went 1-for-2, Matt Johnson went 1-for-4 and Brendan Ehlers had a sacrifice bunt. The Mudcats starting pitcher Drew Olsonawski threw four innings, he gave up five hits, issued one walk, surrendered one run and he recorded three strikeouts.
Tanner McBain threw four innings in relief, he gave up two hits, surrendered one run and he recorded four strikeouts. Ty Syverson threw 2/3 of an inning, he gave up three hits, surrendered three runs and he recorded one strikeout. The Mudcats were led on offense by Ben Swanson, he went 3-for-4 with a double for a RBI and Alex Erickson went 2-for-4 with two doubles for a RBI and he scored a run. Alec Sames went 2-for-4 with a double and he earned a walk and Dylan Fox went 2-for-4 and he scored one run. Brett Erickson and Toby Sayles both earned one walk and Ty Syverson scored one run.
COLD SPRING SPRINGERS 6 MOORHEAD MUDCATS 5
The Springers defeated their Section 2B rivals the Mudcats, backed by seven hits, including four extra base hits. The Springers Nick Pennick started on the mound, he threw five innings to earn the win. He gave up nine hits, issued two walks, surrendered four runs and he recorded two strikeouts. Ben Etzell threw two innings to earn the save, he gave up two hits, issued one walk, surrendered one run and he recorded two strikeouts. The Springers were led on offense by Brad Olson, he went 1-for-3 with a double for two RBI’s and he scored a run. Joe Dempsey went 1-for-3 with a triple for a RBI and he scored one run. Drew VanLoy went 1-for-3 with a double for a RBI and he scored one run. Drew Bulson went 2-for-3 for a RBI and he scored one run. Ryan Holthaus went 1-for-2 with a double, he earned a walk and he scored one run and Nate Hinkemeyer went 1-for-2 and he scored a run. The Mudcats starting pitcher Ty Syverson threw six innings, he gave up seven hits, issued one walk, surrendered six runs and he recorded three strikeouts. They were led on offense by Ben Swanson, he went 3-for-4 for a RBI and Alex Sames went 1-for-3 with a sacrifice fly for a RBI and he scored a run. Toby Sayles went 1-for-4 for two RBI’s and Ben Wilmer went 2-for-3, he earned a walk and he scored one run.
Dylan Fox went 1-for-4 and he scored two runs and Reece Kramer went 1-for-4 and he scored a run. Mason Penske and Brett Erickson both went 1-for-4 and Alex Erickson earned a pair of walks.
MOORHEAD BREWERS 11 SOBIESKI SKIS 9
The Brewers defeated their Section 2B rivals from the Victory League the Skis, backed by thirteen hits, including four extra base hits. The big one was a grand slam in the top of the seventh inning that gave them the lead. The Brewers starting pitcher Jason Beilke threw six innings, he gave up eleven hits, issued two walks, surrendered seven runs and he recorded seven strikeouts. Parker Trewin threw the final inning in relief to earn the win, he recorded two strikeouts. The Brewers were led on offense by Spencer Flaten, he went 2-for-3 with a grand slam, a double and a sacrifice fly for five huge RBI’s. Denver Blinn went 2-for-4 with a triple for two RBI’s and he scored a run. Veteran Jeremy Peschel went 2-for-4 for a RBI and he scored two runs. David Ernst went 1-for-3 with a sacrifice fly for two RBI’s and he scored one run. Jayse McLean went 1-for-3 for a RBI, he earned a walk and he scored two runs. Joe Hallock went 2-for-4 with a double and he scored two runs. Nick Salentine went 2-for-4 and he scored a run and Mike Peschel went 1-for-4. The Skis starting pitcher Thomas Miller threw a complete game, he gave up thirteen hits, issued one walk, surrendered eleven runs and he recorded two strikeouts. The Skis were led on offense by Austin Weisz, he went 3-for-4 for two RBI’s, he had a stolen base and he scored two runs. Dusty Parker went 1-for-4 with a double for two RBI’s and Tyler Jendro went 2-for-4 with a RBI and he scored one run. Scott Litchy went 3-for-4, he earned a walk, he had a stolen base and he scored two runs. Riley Hirsch went 1-for-4 for a RBI, Collin Eckman went 1-for-3 with a stolen base and he scored one run and Thomas Miller earned a walk and he scored one run.
MOORHEAD BREWERS 2 BRAINERD BEES 1
The Brewers defeated their Section 2B rivals the Bees, backed by six hits, good defense and a good pitching performance. Brook Lyter started on the mound for the Brewers, he threw a complete game to earn the win. He gave up six hits, surrendered one run and he recorded seven strikeouts. The Brewers were led on offense by David Ernst, he went 1-for-3 for a RBI and he earned a walk and Jayse McLean went 1-for-3, he earned a walk and he had a stolen base. Denver Blinn went 2-for-4 with a double and he scored two runs and Nick Salentine went 1-for-3. Joe Hallock earned a pair of walks, Jeremy Peschel and Spencer Flaten both earned one walk. The Bees starting pitcher McCale Peterson also threw a complete game, he gave up six hits, issued six walks, surrendered two runs and he recorded three strikeouts. The Bees offense was led by Bryce Flanagan, he went 3-for-3 and McCale Peterson went 2-for-3. Joel Martin went 1-for-3 and he scored a run and Tim Martin was hit by a pitch.
BEAUDREAUS SAINTS 7 MOORHEAD MUDCATS 2
The Saints defeated their Section 2B rivals the Mudcats, backed by eleven hits, including six extra base hits. The Saints starting pitcher Chris Koenig threw a complete game to earn the win. He gave up six hits, issued three walks, surrendered two runs and he recorded four strikeouts. The Saints were led on offense by Tommy Auger, he went 2-for-3 with a home run, he earned a walk and he scored two runs. Nick Maiers went 2-for-3 with two doubles for a RBI, he earned a walk and he scored a pair of runs. Brindley Theisen went 2-for-3 with a double for a RBI and Brian Minks went 1-for-4 with a triple. Steven Neutzling went 2-for-3, he earned a walk and he scored a pair of runs. Reese Gregory went 1-for-4 with a double and he scored a run. Nick Hengel was credited with a RBI and he earned a walk and Tom Imholte was credited with a RBI.
The Mudcats Beau Wilmer started on the mound, he threw three innings, he gave up five hits, issued two walks, surrendered three runs and he recorded three strikeouts. Josh Schmidt threw one inning in relief, he gave up four hits, issued one walk and he surrendered four runs. Mason Penske threw two innings in relief, he gave up two hits, issued one walk and he recorded one strikeout. The Mudcats were led on offense by Mason Penske, he went 2-for-3 with a double for a RBI and he earned a walk. Ben Swanson went 2-for-3 and he scored a run and Beau Wilmer went 1-for-2 with a double and he scored a run. Toby Sayles went 1-for-3, Reece Kramer was credited with a RBI and he was hit by a pitch and Alec Sames earned a walk.
BEAUDREAUS SAINTS 6 BRAINERD BEES 0
The Saints defeated their Lakewood League and Section 2B rivals the Bees, backed by seven very timely hits and a good pitching performance. Brindley Theisen threw a complete game to earn the win, he gave up four hits, issued one walk and he recorded six strikeouts. The Saints were led on offense by Reese Gregory, he went 2-for-4 for three RBI’s and Tom Imholte went 1-for-4 for a RBI. Steven Neutzling went 2-for-3, he earned a walk and he scored a run and Andy Auger was credited with a RBI, he earned a walk and he scored one run. Nick Maiers went 1-for-3, he earned a walk and he scored one run and Brindley Theisen went 1-for-3 with a stolen base. Nick Hengel was credited with a RBI, he earned a walk, he was hit by a pitch and he scored one run. Brian Minks earned a pair of walks, he was hit by a pitch, he had a stolen base and he scored one run and Tommy Auger earned a walk and he was hit by a pitch. The Bees starting pitcher Casey Welsh threw three innings, he gave up six hits, issued six walks, surrendered six runs and he recorded one strikeout. Max Boran threw three innings in relief, he gave up one hit, issued one walk and he recorded four strikeouts.
The Bees were led on offense by Joel Martin, Alex Haapajok and Grant Toivonen, who all went 1-for-3. Veteran Tim Martin went 1-for-2 and Bryce Flanagan earned a walk.
COLD SPRING SPRINGERS 13 BRAINERD BEES 7
The Springers defeated their Lakewood League and Section 2B rivals the Bees, backed by ten hits, including four doubles and a home run. Sam Hanson started on the mound, he threw three innings, he gave up three hits, issued four walks, surrendered three runs and he recorded three strikeouts. Jack Arnold threw five innings in relief to earn the win. He gave up four hits, issued one walk, surrendered two runs and he recorded four strikeouts. Justin Thompson threw the final inning in relief, he gave up four hits, surrendered two runs and he recorded one strikeout. The Springers offense was led by Jeron Terres, he went 2-for-4 with a double for two RBI’s, he was hit by a pitch and he scored two runs. Drew Bulson went 2-for-3 with a double and a sacrifice fly for a RBI, he had a stolen base and he scored one run. Brad Olson went 2-for-5 for a RBI, he earned a walk and he scored one run. Joe Dempsey went 1-for-3 with a home run, he earned a walk and he scored two runs. Drew VanLoy went 2-for-3 for a RBI, he earned two walks and he scored one run. Ryan Holthaus went 1-for-2 with a double for a RBI and Nick Pennick was credited with two RBI’s, he earned a walk and he scored one run. Alex Jungels earned three walks and he scored two runs and Justin Thompson earned a walk, he was hit twice by a pitch, he had a stolen base and he scored three runs. The Bees starting pitcher was Bryce Flanagan, he threw five innings, he gave up six hits, issued six walks and he surrendered four runs. Max Boran threw 1 1/3 innings in relief, he issued three walks and he surrendered three runs. Casey Welsh threw 1 2/3 innings in relief, he gave up four hits, issued four walks and he surrendered six runs. The Bees were led on offense by Phil Zinda, he went 2-for-5 for a RBI and he scored a run.
Tim “Never Aging” Martin went 2-for-3 with a sacrifice fly for two RBI’s and he scored a run. Joel Martin went 2-for-5 with a triple for three RBI’s and he scored a run. Casey Welsh went 2-for-5 with a double for a RBI and Grant Toivonen went 2-for-4, he was hit by a pitch and he scored one run. Colin Kleffman went 1-for-4, he earned a walk, he had a stolen base and he scored one run. Alex Haapajaki earned three walks and he scored a run and Max Boran earned a walk and he scored one run.
MOORHEAD BREWERS 7 BEAUDREAUS SAINTS 0
The Brewers defeated their Section 2B rivals the Saints, backed by thirteen hits, including three doubles and a good pitching performance. David Ernst started on the mound, he threw eight innings to earn the win. He gave up four hits, issued one walk and he recorded four strikeouts. Tanner Adam threw the final inning in relief, he gave up one hit and he recorded two strikeouts. The Brewers were led on offense by veteran Mike Peschel, he went 3-for-5 for a RBI and he scored one run and Denver Blinn went 1-for-5 with a double for two RBI’s and he scored a run. Jayse McLean went 1-for-4 with a double for two RBI’s and he earned a walk. Spencer Haley went 2-for-3, he earned two walks, he had two stolen bases and he scored one run. Joe Hallock went 3-for-5 with a double and he scored a run and Nick Salentine went 1-for-5 for a RBI and he scored one run. Jeremy Peschel went 1-for-4 and he scored a run, Matt Oye went 1-for-4 and Chris Clemenson earned a walk and he scored one run. The Saints starting pitcher Tommy Auger threw 1 2/3 innings, he gave up seven hits and he surrendered five runs. Nick Maiers threw 5 1/3 innings, he gave up five hits, issued four walks, surrendered two runs and he recorded seven strikeouts. Reese Gregory threw two innings in relief, he gave up one hit and he recorded two strikeouts. The Saints were led on offense by Reese Gregory, he went 2-for-4 with a double and Steve Neutzling went 1-for-4 with a double.
Nick Maiers went 1-for-4 with a double, Brian Minks went 1-for-4 and Jack Schramel earned a walk.
SOBIESKI SKIS 15 MOORHEAD MUDCATS 10
The Skis from the Victory League came from behind to defeat their Section 2B foes the Mudcats. They were backed by eleven hits, including six doubles and a triple. Dusty Parker started on the mound, he threw 4 2/3 innings, he gave up nine hits, issued two walks, surrendered seven runs and he recorded seven strikeouts. Chris Reller threw 1 2/3 innings, he gave up seven hits, issued one walk and he surrendered three runs. Collin Eckman threw 2 2/3 innings in relief to close it out. The Skis were led on offense by Scott Litchy, he had a great game, he went 4-for-5 with a triple and two doubles for seven RBI’s, he earned a walk, he had a stolen base and he scored one run. Riley Hirsch went 1-for-4 with a double for two RBI’s and he earned two walks. Matt Baier went 2-for-5 with a double for two RBI’s and he scored a pair of runs. Tyler Jendro went 1-for-2 with a double for a RBI, he earned a pair of walks, he was hit by a pitch and he scored three runs. Collin Eckman went 1-for-1 with a double for a RBI, he earned a walk and he scored two runs. Dusty Parker was credited with a RBI, he earned a walk, he was hit twice by a pitch and he scored four runs. Joey Hanowski went 2-for-5 and he scored a run and Chris Reller had a pair of sacrifice bunts. Austin Weisz earned a pair of walks and he scored one run and Thomas Miller earned a walk and he scored one run. The Mudcats starting pitcher Alex Erickson threw four innings, he gave up four hits, issued five walks, surrendered six runs and he recorded two strikeouts. Toby Sayles threw 2 1/3 innings in relief, he gave up two hits, issued two walks, surrendered four runs and he recorded one strikeout. Josh Schmidt threw 2/3 of an inning, he gave up two hits, issued one walk and he surrendered two runs.
Mason Penske threw one inning in relief, he gave up three hits, issued two walks, surrendered three runs and he recorded one strikeout. The Mudcats were led on offense by Beau Wilmer, he went 2-for-4 with a home run for two RBI’s, he earned one walk, he had three stolen bases and he scored three runs. Mason Penske went 3-for-5 with two doubles for two RBI’s and he scored a pair of runs. Alex Sames went 3-for-5 for two RBI’s and he scored one run and Toby Sayles went 2-for-5 with a triple for a RBI and he scored one run. Dylan Fox went 2-for-5 with a double for a RBI and Reece Kramer went 1-for-5 with a double for a RBI. Eric Watt went 1-for-2, he earned a walk and he scored two runs, Brett Erickson went 1-for-4 and he scored a run and Ben Swanson went 1-for-5.
Cold Spring Springers at Brainerd Bees (1:00)
Beaudreaus Saints at Sobieski Skis (12:00)
Brainerd Bees at Sobieski Skis (3:00)
WATKINS CLIPPERS 21 EDEN VALLEY HAWKS 0 (7 Innings)
The Clippers defeated their Central Valley League rivals the Hawks, backed by nineteen hits, including six players with multi-hit games and a very good pitching performance by a pair of Clippers arms. Veteran lefty Danny Berg started on the mound, he threw six innings to earn the win. He gave up two hits, issued one walk and he recorded seven strikeouts. Dustin Kramer threw the final inning in relief, he gave up one hit and he recorded two strikeouts. The Clippers were led by Danny Berg, he went 4-for-5 with two doubles for two RBI’s, he earned a walk and he scored five runs. Carter Block went 2-for-5 for four RBI’s, he was hit by a pitch and he scored two runs. Reese Jansen went 3-for-4 for four RBI’s, he was hit by a pitch and he scored one run. Lincoln Haugen went 3-for-5 for three RBI’s, he was hit by a pitch and he scored four runs. Brendan Ashton went 1-for-4 with a double for two RBI’s, he earned two walks and he scored three runs.
Player/manager Matt Geislinger went 2-for-4 for three RBI’s, he earned two walks and he scored a pair of runs. Carson Geislinger went 3-for-3 for a RBI and he scored one run. Kevin Kramer went 1-for-4, he was hit by a pitch twice and he scored three runs and Dustin Kramer was credited with a RBI and he earned three walks. The Hawks starting pitcher Stephen Pennertz threw 2 1/3 innings, he gave up nine hits, issued four walks, surrendered ten runs and he recorded one strikeout. Nick Pauly threw 2 1/3 innings in relief, he gave up four hits, issued two walks and he surrendered five runs. Matthew Pennertz threw one inning in relief, he gave up three hits, issued one walk, surrendered two runs and he recorded one strikeout. The Hawks were led on offense by Alex Geislinger, he went 2-for-3, David Pennertz went 1-for-3 and Matt Lies earned a walk.
COLD SPRING ROCKIES 5 PEARL LAKE LAKERS 1
The Rockies defeated their Central Valley League rivals the Lakers in a very good game. They collected eleven hits, played good defense and they had a good pitching performance by a pair of Rockies arms. This was a 2-1 game in favor of the Rockies until the seventh inning, when they put up three runs. Lefty Jake Brinker started on the mound, he threw eight innings to earn the win. He gave up six hits, issued one walk and he recorded four strikeouts. Brandon Gill threw the final inning in relief, he issued one walk and he recorded one strikeout. The Rockies were led on offense by Calvin Kalthoff, he went 2-for-4 for two RBI’s and Austin Dufner went 1-for-4 for a RBI and he scored a run, he showed flashes of the Twins center fielder, he made several nice plays. Collin Eskew went 1-for-3 for a RBI and he scored a run and Brock Humbert went 1-for-4 for a RBI and he had a stolen base. Jordan Neu went 2-for-2, he earned a pair of walks, he had a stolen base and he scored a run.
Brandon Gill went 2-for-4 with a stolen base and he scored one run, Nick Sklucazek went 1-for-4 and he scored one run and David Jonas had a sacrifice fly. The Lakers starting pitcher Justin Kunkel threw 6 2/3 innings, he gave up ten hits, issued two walks, surrendered five runs and he recorded four strikeouts. Tommy Linn threw 1 2/3 innings, he gave up one hit and he recorded a strikeout. The Lakers were led on offense by Tommy Linn, he went 1-for-4 for a RBI and Max Fuchs went 1-for-5 with a stolen base and he scored one run. Colton Fruth went 1-for-3 and he earned a walk and Chadd Kunkel earned a walk. Mitch Wieneke, Justin Kunkel and Andy Linn all went 1-for-4.
LUXEMBURG BREWERS 10 ST. NICHOLAS NICKS 2
The Brewers defeated their Central Valley League rivals the Nicks, backed by eleven hits, including three home runs and a pair of doubles. Starting pitcher JT Harren threw seven innings to earn the win. Sam Iten threw two innings in relief to close it out. The Brewers were led on offense by Sam Iten, he went 1-for-4 with a home run for two RBI’s, he earned a walk and he scored two runs. Luke Harren went 1-for-2 with a home run for two RBI’s, he earned two walks and he scored a pair of runs. Chase Aleshire went 1-for-3 with a home run and he earned a pair of walks, Derrick Orth went 2-for-5 for two RBI’s and Isaac Matchinsky was credited with two RBI’s. Austin Klaverkamp went 2-for-5 with a double and he scored a run and Reed Pfannenstein went 2-for-5 with a double and he scored one run. Logan Aleshire went 2-for-4, he earned a walk and he scored one run and Ethyn Fruth earned a walk, had a stolen base and he scored one run. The starting pitcher for the Nicks, Travis Hanson, threw five innings, he was the pitcher of record. Grant Mrozak threw three innings in relief to close it out. No information on the Nicks offense was available.
ST. AUGUSTA GUSSIES 8 KIMBALL EXPRESS 6
The Gussies defeated their Central Valley League rivals the Express, backed by eight hits, including four doubles. Tyler Bautch started on the mound, he threw three innings, he gave up six hits, issued three walks, surrendered four runs and he recorded two strikeouts. Veteran lefty Zach Laudenbach threw six innings in relief to earn the win. He gave up five hits, issued two walks and he recorded six strikeouts. The Gussies were led on offense by Marcus Lommel, he went 1-for-3 with a double for two RBI’s and he earned a pair of walks. Tommy Friesen went 1-for-4 with a double for a RBI and he earned a walk. Nate Laudenbach went 1-for-4 with a double for a RBI, he earned a walk and he scored a pair of runs. Dustin Schultzetenberg went 1-for-3 for a RBI, he earned a walk, he was hit by a pitch and he scored a pair of runs. Aaron Fruth went 1-for-5 for a RBI and Adam Gwost was credited with a RBI and he earned a walk. Nate Gwost went 2-for-4 with a double and he scored two runs, Mitch Gwost went 1-for-2, he earned a pair of walks, he was hit by a pitch and he scored one run and Brady Grafft earned two walks and he scored one run. The Express’s starting pitcher Zach Wallner threw 4 2/3 innings, he gave up six hits, issued five walks, surrendered four runs and he recorded seven strikeouts. Max Koprek threw 3 1/3 innings in relief, he gave up two hits, issued five walks, surrendered four runs and he recorded seven strikeouts. They were led on offense by Ben Johnson, he went 2-for-4 with two home runs for two RBI’s and he earned a walk. Zach Wallner went 2-for-3 for a RBI and he scored one run and Zach Dingmann went 1-for-4 for a RBI, he earned a walk and he scored one run. Michael Hoffman went 2-for-4 and he scored a run, Matt Dingmann earned two walks and Scott and Brooks Marquardt both went 1-for-5.
WATKINS CLIPPERS 4 COLD SPRING ROCKIES 3
The Clippers defeated their Central Valley League rivals the Rockies in a pitching duel.
The Rockies held an early lead, but the Clippers came back to earn the win. They had some very timely hitting and good defense to support their pitcher/manager Matt Geislinger. The lefty started on the mound, he threw a complete game to earn the win. He gave up just four hits, issued two walks, surrendered three runs and he recorded thirteen strikeouts. The Clippers were led on offense by Brendan Ashton, he went 1-for-3 with a double for two RBI’s, he was hit by a pitch and he scored a run. Lincoln Haugen went 2-for-4 for a RBI and he scored a run. Kevin Kramer had a big game, he went 3-for-4 with three doubles and he scored one run. Matt Geislinger went 1-for-3 for a RBI and he earned a walk and veteran Dan Berg went 1-for-3, he earned a walk and he scored one run. The Rockies starting pitcher Eli Backes threw six innings, he gave up seven hits, issued one walk, he surrendered four runs and he recorded two strikeouts. Rick Burtzel threw two innings in relief, he gave up one hit, issued one walk and he recorded three strikeouts. The Rockies were led on offense by Nick Skluzacek, he went 2-for-4 with a home run, he had a stolen base and he scored two runs. Veteran David Jonas went 1-for-3 with a home run and he earned a walk. Jordan Neu went 1-for-4 and Brandon Gill was credited with a RBI, he earned a walk and he had a stolen base.
ST. AUGUSTA GUSSIES 7 LUXEMBURG BREWERS 3
The Gussies defeated their Central Valley League rivals the Brewers, backed by eight hits, including a pair of doubles. Their starting pitcher Travis Laudenbach threw a complete game to earn the win. He gave up five hits, issued five walks, surrendered three runs and he recorded six strikeouts. The Gussies were led on offense by Nate Laudenbach, he went 1-for-4 with a double for two big RBI’s and he scored one run. Brady Grafft went 1-for-3 for two RBI’s and he earned a walk and Aaron Fruth went 2-for-4 for a RBI and he scored a run.
Nate Gwost was credited with two RBI’s and he earned a walk and Tommy Friesen went 2-for-5 with a double and he scored one run. Marcus Lommel went 1-for-3, he earned a walk and he scored two runs and Adam Gwost went 1-for-3. Mitch Gwost earned a walk and he scored one run and Eric Primus scored one run. The Brewers starting pitcher Reed Pfannenstein threw six innings, he gave up six hits, issued four walks, surrendered six runs and he recorded four strikeouts. Austin Klaverkamp threw three innings in relief, he gave up two hits, issued one walk, surrendered one run and he recorded three strikeouts. The Brewers offense was led by Ethyn Fruth, he went 2-for-3 with a double for a RBI and he earned a walk. Luke Harren went 2-for-4, he had a stolen base and he scored a run. Reed Pfannenstein went 1-for-2 with a double, Chase Aleshire earned a pair of walks, Derrick Orth earned a walk and he scored one run and Cory Wenz scored one run.
KIMBALL EXPRESS 6 ST. NICHOLAS NICKS 5 (10 Innings)
The Express defeated their Central Valley League rivals the Nicks, backed by seven very timely hits, including a big home run. Zach Dingmann started on the mound, he threw seven innings, he gave up eight hits, issued three walks, surrendered four runs and he recorded seven strikeouts. The Express’s offense was led by veteran Adam Beyer, he went 2-for-5 with a home run for five big RBI’s. Brooks Marquardt went 2-for-3, he earned a pair of walks, had a stolen base and he scored one run. Matt Dingmann went 2-for-4, he earned a walk, had a stolen base and he scored two runs. Scott Marquardt went 1-for-2 with a stolen base and Brian Marquardt earned a pair of walks and he scored one run. Zach Dingmann had two sacrifice bunts, Zach Wallner earned a walk and he had a sacrifice bunt and Ben Johnson had a stolen base and he scored one run. The Nicks starting pitcher Derek Kuechle threw 8 1/3 innings, he gave up seven hits, issued five walks, surrendered five runs and he recorded six strikeouts.
Andrew Bautch threw one inning in relief, he issued one walk, surrendered one run and he recorded two strikeouts. The Nicks were led on offense by Jeff Lutgen, he went 1-for-3 for a RBI, he earned a walk, he was hit by a pitch and he scored one run. Mike Bautch went 1-for-4 for a RBI, he earned a walk and he scored one run. Andrew Bautch went 1-for-4 for a RBI and he earned a walk and Al Foehrenbacher was credited with a RBI and he earned a walk. Tanner Anderson went 3-for-5 and Matt Schindler went 2-for-5 and he scored a run. Damian Lincoln went 1-for-4, he earned a walk, he had two stolen bases and he scored two runs and Dylan Rausch earned a walk.
PEARL LAKE LAKERS 7 EDEN VALLEY HAWKS 6
The Lakers defeated their Central Valley League rivals the Hawks, backed by eight hits, including a home run and a double. Mitchell Wieneke started on the mound, he threw five innings, he gave up eight hits, issued six walks, surrendered six runs and he recorded six strikeouts. Mitch Ergen threw four innings, he gave up one hit, surrendered one run and he recorded three strikeouts. The Lakers were led on offense by Ryan Wieneke, he went 3-for-4 with a home run for three RBI’s, he was hit by a pitch, he had a stolen base and he scored two runs. Tommy Linn went 3-for-5 with a double for a RBI, he had a stolen base and he scored two runs. Chadd Kunkel went 1-for-2 for a RBI, he earned three walks and he had two stolen bases. Justin Kunkel was credited with a RBI, he earned two walks and he scored one run and Max Fuchs was credited with a RBI and he scored one run. Andy Linn went 1-for-3 and he earned a walk and Ryan Heslop scored a run. The Hawks starting pitcher Ben Arends threw 6 2/3 innings, he gave up seven hits, issued six walks, surrendered six runs and he recorded six strikeouts. Tanner Olean threw 1 1/3 innings in relief, he gave up one hit, surrendered one run and he recorded three strikeouts.
The Hawks were led on offense by Tanner Olean, he went 2-for-5 for a RBI, he had a stolen base and he scored a run. Alex Geislinger went 2-for-4 with a double, he earned a walk and he had a stolen base. David Pennertz went 1-for-5 for a RBI and he scored a run and Ben Arends went 1-for-4 for a RBI and he scored a run. Matt Unterberger went 2-for-3 for a RBI, he earned a walk and he scored two runs. Austin Berg went 1-for-4, Matthew Pennertz earned three walks, Steve Pennertz earned a pair of walks and Jackson Geislinger scored a run.
St. Augusta Gussies at Watkins Clippers (2:00)
Kimball Express at Luxemburg Brewers (2:00)
Pearl Lake Lakers at Cold Spring Rockies (7:30)
SAUK VALLEY LEAGUE (PLAYOFFS)
ROGERS RED DEVILS 7 CLEAR LAKE LAKERS 0
The Red Devils defeated their Sauk Valley League rivals the Lakers 7-0, backed by ten hits and a very good pitching performance. The Red Devils starting pitcher Luke Welle threw a complete game to earn the win. He gave up just three hits, issued one walk and he recorded five strikeouts. The Red Devils were led on offense by Eric Simon, he went 3-for-4 for two RBI’s and he scored two runs. Adam Kruger went 1-for-3 with a double for two RBI’s and he earned a walk. Dustin Carlson went 2-for-4 for a RBI and he scored a run and Luke Selken went 2-for-3, he earned a walk and he scored one run. Player/manager Bryan McCallum went 1-for-4 and he scored a run and Mitch Annis went 1-for-3 and he earned a walk. Luke Welle and Calen Kirkland each earned a walk and each scored one run. The Lakers starting pitcher, player/manager Mike Smith, threw 6 1/3 innings, he gave up ten hits, issued five walks, surrendered seven runs and he recorded eight strikeouts. Ran Skymanski threw 1 2/3 innings in relief, he recorded two strikeouts. The Lakers were led on offense by Ben Anderson, he went 1-for-2 with a sacrifice fly and Tyler Maurer went 1-for-4 with a double. Justin Hagstrom went 1-for-2 and he earned a walk and Mike Smith was hit by a pitch.
STONE PONEYS 8 ALBERTVILLE VILLAINS 7
The Stone Poneys took an early lead, but they gave up four runs in the seventh inning to fall behind by one run. They came back in the bottom of the ninth inning on a walk off double by player/manager Jeff Amann, to drive in the tying and winning runs. The Stone Poneys starting pitcher, lefty Sean Minder, threw seven innings, he gave up nine hits, issued two walks, surrendered five runs and he recorded four strikeouts. Brandon Hartung threw 1 1/3 innings in relief, he gave up two hits, surrendered two runs and he recorded one strikeout. Right hander Cameron Knudsen threw one inning in relief to earn the win. He gave up one hit and he recorded one strikeout. The Villains starting pitcher Mike Wallace threw eight innings, he gave up six hits, issued two walks, surrendered six runs and he recorded eight strikeouts. Jack Bloomstrand threw 1/3 of an inning in relief, he gave up two hits, issued one walk and he surrendered two runs. The Villains were led on offense by Jace Pribyl, he went 4-for-4 with a double for four RBI’s, he earned a walk and he scored a pair of runs. Kyle Hayden went 2-for-5 for a RBI and Luke Schumacher went 2-for-5, he was hit by a pitch and he scored two runs. Ian Jungles went 1-for-5 for a RBI, he had a stolen base and he scored one run. Justin Cornell went 1-for-4 for a RBI, he earned a walk and he scored one run. Jim Althoff went 1-for-4 with a pair of stolen bases, he was hit by a pitch and he scored one run. Ryan Hagerty went 1-for-5.
SARTELL MUSKIES 9 ROGERS RED DEVILS 0 (FORFEIT)
BIG LAKE YELLOWJACKETS 1 ST. JOSEPH JOES 0 (10 Innings)
The Yellowjackets defeated their Sauk Valley League rivals the Joes, backed by ten hits and a very good pitching performance. Mason Miller started on the mound, he threw a complete game to earn the win. He gave up just three hits and he recorded nine strikeouts.
The Yellow Jackets were led on offense by Tanner Teige, he went 3-for-5, Brandon Holthaus went 2-for-3 and Sam Dokkebakken went 2-for-4. Dustin Wilcox was credited with the game’s only RBI, Joe Rathmanner and Tony Rathmanner both went 1-for-3, Joe scored a run and Tony earned a walk. The Joes starting pitcher Joey Atkinson threw eight innings, he gave up six hits, issued one walk and he recorded eight strikeouts. Greg Anderson threw one inning in relief, he gave up four hits, issued one walk, surrendered one run and he recorded two strikeouts. Tanner Blommer, Hunter Blommer and Brandon Bissett all went 1-for-4.

MONTICELLO POLECATS 16 STONE PONEYS 6 (8 Innings)

The Polecats defeated their Sauk Valley League rivals the Stone Poneys, backed by nineteen hits, including six extra base hits. The Polecats starting pitcher Hunter Kisner threw four innings to earn the win. He gave up five hits, issued three walks, surrendered six runs and he recorded three strikeouts. Joe Tupy threw four innings in relief, he gave up four hits, issued one walk and he recorded three strikeouts. The Polecats were led on offense by Greg Holker, he went 3-for-4 with a home run, a double and a sacrifice fly for four RBI’s, he earned a walk and he scored three runs. Joe Tupy went 3-for-5 with a double for three RBI’s and he scored two runs. Michael Revenig went 3-for-3 with a double for two RBI’s and he scored two runs. Brayden Hanson went 2-for-4 for two RBI’s, he had a stolen base and he scored two runs. Wyatt Morrell went 1-for-2 with a triple for a RBI and he scored one run. Jon Affeldt went 1-for-2 for two RBI’s and he earned a walk and Michael Olson went 2-for-5 with a double, he earned a walk and he scored two runs. Tommy Blackstone went 2-for-4, he earned two walks, had a stolen base and he scored one run.
Cole Bovee went 1-for-3, he was hit by a pitch and he scored two runs, Isaac Frandsen went 1-for-3 and he scored a run, Keenan Macek had a sacrifice fly and he was hit by a pitch and Jake Rasmusen earned a walk. The Stone Poneys starting pitcher Jeff Amann threw 2 1/3 innings, he gave up ten hits and he surrendered eight runs. Brandon Hartung threw 1 2/3 innings in relief, he gave up four hits, surrendered three runs and he recorded one strikeout. Cam Knudsen threw two innings in relief, he gave up two hits, issued four walks, surrendered two runs and he recorded two strikeouts. Sean Minder threw 1 2/3 innings in relief, he gave up three hits, issued two walks, surrendered three runs and he recorded one strikeout. The Stone Poneys were led on offense by Will Kranz, he went 2-for-4 for two RBI’s, he was hit by a pitch and he scored two runs. Zack Overboe went 2-for-4 with a triple and a double and he scored one run. Jeff Amann went 1-for-4 with a RBI and he earned a walk and Cam Knudsen went 2-for-4 and he scored a run. Rudy Sauerer went 1-for-3, he earned a walk and he scored one run and Cooper Lynch went 1-for-4. Dylan Dezurik earned a pair of walks and he scored a run and Josh Schaefer had a stolen base.

FOLEY LUMBERJACKS 13 BECKER BANDITS 1 (8 Innings)

The Lumberjacks defeated their Sauk Valley League rivals the Bandits, backed by sixteen hits, including three doubles, one triple and a home run. Veteran right hander Mike Beier started on the mound, he threw eight innings to earn the win. He gave up eight hits, issued two walks and he recorded four strikeouts. Brandon Buesgens threw one inning in relief, he gave up a pair of hits. The Lumberjacks were led on offense by Noah Winkelman, he went 2-for-4 with a double and a sacrifice fly for two RBI’s. Tony Stay went 3-for-5 with a triple and a double for one RBI and he scored three runs. Mitch Keeler went 3-for-3 with a home run for two RBI’s and Brandon Buesgens went 1-for-3 for a RBI and he earned a walk.
Kyle Kipka went 1-for-4 for a RBI, he earned a walk and he scored one run. Drew Beier went 2-for-4, he earned a walk and he scored three runs and Mark Dierkes went 2-for-5. Alec Dietl went 1-for-1 for two RBI’s, he earned a walk and he scored one run, Joe Ziwicki went 1-for-2 with a double, he earned a walk and he scored one run and Drew Murphy was hit by a pitch. The Bandits starting pitcher wasn’t reported. They were led on offense by Weston Schug, he went 1-for-4 with a sacrifice fly for a RBI and Dalton Fouquette went 1-for-4 with a double. Connor Rolf went 1-for-3, he earned two walks and he scored a run and Ryan Sommerdorf went 1-for-3 and he was hit by a pitch. Ryan Hess and Conrad Goldade both went 2-for-4. Matthew Moe went 1-for-4, Matt Krenz went 1-for-3 and Zach Wenner was hit by a pitch.

ST. JOSEPH JOES 9 ROGERS RED DEVILS 0 (FORFEIT)

SARTELL STONE PONEYS 16 BECKER BANDITS 5

The Stone Poneys defeated their Sauk Valley League rivals the Bandits, backed by eighteen hits, including six doubles. Lefty Sean Minder started on the mound, he threw a complete game to earn the win. He gave up twelve hits, issued one walk, surrendered five runs and he recorded two strikeouts. The Stone Poneys were led on offense by Cooper Lynch, he went 4-for-6 for five RBI’s and he scored two runs. William Kranz went 2-for-5 with two doubles for a RBI, he earned a walk, he had three stolen bases and he scored two runs. Josh Schaefer went 2-for-4 with a double for two RBI’s, he had two stolen bases and he scored one run. Zach Overboe went 3-for-5 for two RBI’s, he was hit by a pitch and he scored two runs. Jeff Amann went 1-for-3 with a double for two RBI’s and he earned a walk. Cameron Knudsen went 1-for-4 for two RBI’s, he earned a walk and he was hit by a pitch. Patrick Dolan went 1-for-2 with a double, he earned two walks, he was hit by a pitch and he scored three runs.
Brandon Hartung went 2-for-5 for a RBI and he scored two runs, Rudy Sauerer went 1-for-6 with a double for a RBI and he scored a run, Brandon Reinking went 1-for-1 with a double for a RBI and Michael Ashwill was hit by a pitch and he scored a run.

SARTELL MUSKIES 4 BIG LAKE YELLOW JACKETS 1

The Muskies defeated their Sauk Valley League rivals the Yellow Jackets in a very well played game. The Muskies were backed by nine hits and one big inning. Veteran lefty David Deminsky threw a complete game to earn the win, he gave up four hits, issued one walk, surrendered one run and he recorded thirteen strikeouts. The Muskies were led on offense by veteran Tim Burns, he went 1-for-4 with a double for a RBI, he had a stolen base and he scored a run. Cody Partch went 2-for-4 with a double for a RBI and Andrew Deters went 2-for-4 with a double and he scored a run. Ethan Carlson went 1-for-4 for a RBI, he had a stolen base and he scored one run. Dylan Notsch went 2-for-4 and Adam Schellinger went 1-for-3 and he scored a run. The Yellow Jackets starting pitcher Taylor Giving threw two innings, he gave up six hits, surrendered four runs and he recorded two strikeouts. Dallas Miller threw six innings in relief, he gave up just three hits and he recorded eight strikeouts. They were led on offense by Trey Teige, he went 1-for-3 for a RBI and Dustin Wilcox went 1-for-4 with a double and he scored one run. Brandon Holthaus went 1-for-4, Luke Atwood went 1-for-3 and Tanner Teige earned a walk.

MONTICELLO POLECATS 10 FOLEY LUMBERJACKS 5

The Polecats defeated their Sauk Valley League rivals the Lumberjacks, backed by fourteen hits, including four extra base hits. Their starting pitcher Wyatt Morrell threw 1 1/3 innings, he gave up six hits and he surrendered five runs. Michael Revenig threw 7 2/3 innings in relief, he gave up two hits, issued five walks and he recorded six strikeouts to earn the win.
The Polecats were led by Keenan Macek on offense, he went 3-for-5 with two doubles for two RBI’s and he scored a pair of runs. Tommy Blackstone went 1-for-5 with a double for two RBI’s and Greg Holker went 2-for-3 for a RBI, he earned a pair of walks, he had two stolen bases and he scored one run. Michael Olson went 1-for-5 with a home run and a sacrifice fly for two RBI’s and Joe Tupy went 1-for-5 for two RBI’s. Isaac Frandsen went 2-for-3, he earned two walks, he had a stolen base and he scored one run. Brayden Hanson went 2-for-3, he earned two walks and he scored two runs, Wyatt Morrell went 2-for-5 and he scored a run, Ty Kline and Jake Rasmusen both scored a run. The Lumberjacks starting pitcher Kyle Kipka threw seven innings, he gave up twelve hits, issued six walks, surrendered nine runs and he recorded five strikeouts. Alex Foss threw one inning in relief, he gave up two hits and he surrendered one run. The Lumberjacks were led on offense by Noah Winkelman, he went 2-for-5 with a triple for two RBI’s and he scored one run. Tony Stay went 1-for-5 with a double for a RBI and he scored one run. Mitch Keeler went 2-for-3 for a RBI and he earned a pair of walks and Drew Beier went 1-for-2, he earned a pair of walks, he was hit by a pitch and he scored one run. Brandon Buesgens went 1-for-5 with a double, Kyle Kipka went 1-for-4 with a double and he scored a run, Mark Dierkes scored a run and Tyler Midas earned a walk.

Monticello Polecats vs. Sartell Muskies (1:30) at Sartell
St. Joseph Joes vs. Foley Lumberjacks (1:30) at Foley
Sartell Stone Poneys vs. Big Lake Yellowjackets (1:30) at Big Lake
Winners play on Sunday at the highest seed (1:30)

ELROSA SAINTS 3 SPRING HILL CHARGERS 1

The Saints defeated their Stearns County League rivals the Chargers, backed by eight hits and a very good pitching performance. Veteran right hander Ethan Vogt threw a complete game to earn the win. He scattered seven hits, issued one walk and he recorded eleven strikeouts.
The Saints were led on offense by Jackson Peter, he went 2-for-3 for a RBI, he earned a walk, he had a stolen base and he scored two runs. Brady Weller went 1-for-4 with a double and Kevin Kuefler went 1-for-4 and he scored a run. Cody Eichers went 1-for-4 and he scored a pair of runs, Ethan Vogt went 1-for-3 and he earned a walk. Brandon Roelike went 1-for-3. The Chargers starting pitcher Anthony Revermann threw six innings, he gave up seven hits, issued a pair of walks, surrendered three runs and he recorded five strikeouts. Reagan Nelson threw two innings in relief, he gave up one hit and he recorded two strikeouts. The Chargers were led on offense by Eric Schoenberg, he went 2-for-4 for a RBI and Jamie Terres went 2-for-4 with two doubles and he scored one run. Eric Terres went 2-for-4 with a stolen base, Nathan Terres went 1-for-4 with a double and Austin Schoenberg earned a walk.

ST. MARTIN MARTINS 10 ROSCOE RANGERS 0 (7 Innings)

The Martins defeated their Stearns County League rivals the Rangers, backed by eight hits, including three home runs and three doubles, and good pitching performances. Ryan Nett started on the mound, he threw one inning, he gave up two hits. Kyle Lieser threw three innings, he gave up three hits, issued one walk and he recorded six strikeouts. Ryan Schlangen threw two innings in relief to earn the win, he gave up one hit, issued one walk and he recorded one strikeout. Scott Schlangen threw one inning in relief, he gave up one hit. The Martins were led on offense by Kyle Lieser, he went 3-for-3 with two home runs for six RBI’s and he scored three runs. Nathan Schlangen went 1-for-3 with a home run for two RBI’s and he earned a walk. Avery Schmitz went 2-for-3 with two doubles for a RBI and Ryan Messer went 2-for-3 with a double and he scored two runs. Chas Hennen went 1-for-1 for a RBI and he scored one run, Ryan Nett went 1-for-2, he earned a walk and he scored a run and Bryan Schlangen went 1-for-4 and he scored a run.
The Rangers starting pitcher Brent Heinen threw one inning, he gave up two hits, surrendered five runs and he recorded three strikeouts. Josh Mackedanz threw three innings in relief, he gave up three hits, issued one walk, surrendered one run and he recorded one strikeout. Brandon Schleper threw two innings in relief, he gave up one hit, issued one walk and he surrendered four runs. Devon Savage threw one inning in relief, he gave up one hit. The Rangers were led on offense by Brandon Schleper, he went 2-for-3 with two doubles and he had a stolen base. Russell Leyendecker went 2-for-4 and Cody Mackedanz went 1-for-2. Brent Heinen and Chris Vanderbeek both went 1-for-4, Zach Mackedanz and Jordan Schleper were both hit by a pitch once and Devon Savage and RJ Leyendecker both earned a walk.

ST. MARTIN MARTINS 5 FARMING FLAMES 0

The Martins defeated their Stearns County League rivals the Flames, backed by some very timely hitting, good defense and a pair of very good pitching performances. Scott Lieser threw eight innings, he gave up two hits and he recorded eleven strikeouts. Ryan Nett threw one inning in relief, he gave up one hit, issued one walk and he recorded two strikeouts. The Martins were led on offense by Bryan Schlangen, he went 1-for-4 for two RBI’s and he scored a run and Nathan Schlangen went 1-for-3 for two RBI’s and he earned a walk. Michael Schlangen went 2-for-3 for a RBI, he earned a pair of walks and he scored one run. Scott Schlangen went 1-for-3, he was hit by a pitch, he had a sacrifice bunt and he scored one run. Kyle Lieser and Chas Hennen both earned a walk and both scored one run and Tanner Arceneau went 1-for-1. The Flames starting pitcher Brad Mergen threw two innings, he gave up three hits, issued four walks, surrendered five runs and he recorded one strikeout. Dylan Panek threw four innings in relief, he gave up two hits, issued one walk and he recorded one strikeout.
Tylor Schroeder threw two innings in relief, he gave up one hit and he recorded two strikeouts. The Flames were led on offense by Cody Fourre, he went 1-for-3 and he was hit by a pitch and Tylor Schroeder earned a walk. Taylor Fourre went 1-for-4 and Will Mergen went 1-for-3.

MEIRE GROVE GROVERS 7 NEW MUNICH SILVERSTREAKS 5

The Grovers defeated their Stearns County rivals the Silverstreaks, backed by nine hits, including a home run. Matt Imdieke started on the mound for the Grovers, he threw six innings to earn the win. He gave up seven hits, issued one walk, surrendered two runs and he recorded four strikeouts. Ben Klaphake threw two innings in relief, he gave up three hits, issued four walks and he surrendered three runs. Jaron Klaphake threw the final inning in relief to close it out, he gave up one hit and he recorded two strikeouts. The Grovers were led on offense by Colton Meyer, he went 2-for-3 with a home run for two RBI’s, he earned two walks and he scored two runs. Anthony Welle went 2-for-4 for two RBI’s, he was hit by a pitch and he had a stolen base. Jaron Klaphake went 2-for-3 and he scored two runs and Ryan Olmscheid went 2-for-4 and he scored a run. Tyler Moscho went 1-for-4 for two RBI’s and he scored a run. Jordan Klaphake went 1-for-4 and he earned a walk and Andrew Welle scored a run. The Silverstreaks starting pitcher Jim Thull threw seven innings, he gave up seven hits, issued four walks, surrendered six runs and he recorded two strikeouts. Ty Reller threw one inning in relief, he gave up two hits, issued one walk and he gave up one run. The Silverstreaks were led on offense by Jacob Hinnenkamp, he went 2-for-4 for two RBI’s, he earned a walk, had a stolen base and he scored one run. Ty Reller went 1-for-5 for a RBI and Joe Stangler went 2-for-3, he earned a walk, he was hit by a pitch and he scored one run. Logan Funk went 2-for-3, he earned a pair of walks and he scored a run and Hunter Rademacher went 3-for-5 and he scored a run.
Tanner Rieland went 1-for-5 and he scored a run and Nick Stangler earned a walk.

ELROSA SAINTS 7 RICHMOND ROYALS 3

The Saints defeated their Stearns County rivals the Royals, backed by ten hits, including a pair of home runs and a double. Aaron Vogt started on the mound, he gave up nine hits, issued one walk, surrendered three runs and he recorded three strikeouts. Payton VanBeck threw four innings in relief, he gave up two hits, issued five walks and he recorded one strikeout. The Saints were led on offense by Ethan Vogt, he had a pair of home runs for three RBI’s and he scored a pair of runs. Brady Weller went 2-for-4 for a RBI and Cody Eichers went 1-for-3 with a sacrifice fly for two RBI’s, he earned a walk and he scored one run. Jackson Peter went 2-for-5 and he scored two runs and Kevin Kuefler went 1-for-4, he earned a walk and he scored two runs. Matt Schmitz went 1-for-5 with a double and Austin Imdieke went 1-for-3. James Kuefler was credited with a RBI and he earned a walk and Ryan Olmscheid earned a walk and he had a stolen base. The Royals starting pitcher Eli Emerson threw four innings, he gave up four hits, surrendered one run and he recorded six strikeouts. DJ Schleicher threw three innings in relief, he gave up six hits, issued two walks, gave up six runs and he recorded two strikeouts. Dalton Thelen threw two innings in relief, he issued two walks and he recorded three strikeouts. They were led on offense by Andy Hadley, he went 3-for-5 for a RBI and Adam Backes went 3-for-5 for a RBI and he scored a run. Alex Budde went 2-for-4 with a double, he earned a walk, he had a stolen base and he scored one run. Dusty Adams went 1-for-4 with a sacrifice bunt, Kyle Budde went 1-for-3 and he earned two walks and Cole Schmitz was credited with a RBI and he earned a walk. Trent Gertken went 1-for-5 and he scored a run and Justin Schroeder earned a pair of walks.
LAKE HENRY LAKERS 14 NEW MUNICH SILVERSTREAKS 5

The Lakers defeated their Stearns County rivals the Silverstreaks, backed by sixteen hits. Their starting pitcher Grant Ludwig threw four innings, he gave up five hits, issued four walks, surrendered five runs and he recorded two strikeouts. Sam Hopfer threw two innings in relief, he gave up three hits and he issued a pair of walks. Jason Kampsen threw three innings in relief, he gave up three hits, issued one walk and he recorded two strikeouts. The Lakers were led on offense by Josh Kampsen, he went 3-for-5 with a home run for five RBI’s and he scored three runs. Jason Kampsen went 2-for-4 for three RBI’s, he earned a walk and he scored one run. Matt Quade went 3-for-5 with a home run for four RBI’s and he scored five runs. Shane Kampsen went 3-for-5 with a home run and he scored two runs. Sam Hopfer went 1-for-3 for a RBI, he earned a walk and he scored two runs. Adam Jaeger went 2-for-2 with a double and he scored a run and Aaron Savelkoul went 1-for-3, he earned a walk and he scored two runs and Jordan Lieser went 1-for-1 and he scored a run. The Silverstreaks starting pitcher Ty Reller threw six innings, he gave up nine hits, issued one walk, surrendered five runs and he recorded three strikeouts. Nolan Sand threw two innings, he gave up seven hits, issued two walks and he surrendered nine runs. They were led on offense by Chad Funk, he went 2-for-4 with a double for two RBI’s and Logan Funk went 2-for-3 for a RBI and he scored a run. Nick Stangler went 1-for-5 for a RBI and Joe Stangler went 1-for-4 for a RBI and he earned a walk. Peyton Rademacher went 1-for-5 and Tanner Rieland went 1-for-3, he earned a walk and he scored one run. Hunter Rademacher went 1-for-2, he earned three walks and he scored two runs and Ty Reller earned a walk and he scored one run.

STEARNS COUNTY PLAY-IN GAMES (For Region 15C)

Greenwald Cubs vs. Roscoe Rangers (12:00) at Farming
Winner vs.
Farming Flames (2:30) at Farming

AVON LAKERS 6 FREEPORT BLACK SOX 1

The Lakers defeated their Victory League South rivals, backed by ten hits, including a pair of extra base hits. The starting pitcher Putter Harlander threw seven innings, he gave up three hits, issued one walk, he gave up one run and he recorded five strikeouts. Jon Bauer threw 1 2/3 innings in relief, he issued one walk. The Lakers were led on offense by Cody Stich, he went 2-for-4 with a home run and a double for a RBI and he had a stolen base. Caleb Curry went 2-for-4 for three RBI’s and he scored one run. Taylor Holthaus went 1-for-4 and he had a stolen base and Josh Becker went 1-for-4. Tony Harlander went 1-for-2 and he scored a run and Carter Philippi went 1-for-3, he earned a walk, he had two stolen bases and he scored a run. Carter Holthaus had a stolen base and he scored a run, Carter Huberty earned a walk and he scored a run and Adam Harlander scored a run. Tyler Ritter and Matt Pichelmann each went 1-for-1 and Tony Schoenberg earned a walk. The Black Sox starting pitcher Mitch Reller threw six innings, he gave up seven hits, issued six walks, surrendered six runs and he recorded three strikeouts. Edwin Zambrona threw two innings in relief, he gave up three hits and he issued a walk. They were led on offense by Jake Braegelmann, he was credited with a RBI and he earned a pair of walks. Bryan Benson, Alex Martinez and Nate Winter all went 1-for-4.

ST. STEPHEN STEVES 3 AVON LAKERS 2

The Steves defeated their Victory League South rivals the Lakers to win the South Division. They were backed by seven timely hits, including a home run and a double. Riley Hartwig started on the mound, he threw six innings to earn the win. He gave up seven hits, issued two walks, gave up one run and he recorded three strikeouts. Blake Guggenberger threw three innings in relief to earn the save, he gave up four hits, he gave up one run and he recorded four strikeouts.
The Steves were led on offense by Bo Schmitz, he went 2-for-3 for a RBI and he earned a walk. Riley Hartwig went 1-for-4 with a home run and Blake Guggenberger went 2-for-4 with a double. Mathew Meyer went 1-for-3 and he earned a walk and Austin Guggenberger went 1-for-4. Tony Schmitz was credited with a RBI and he earned a walk and Alex Wolhart scored a run. The Lakers starting pitcher Matt Pichelmann threw nine innings, he gave up seven hits, issued three walks, surrendered three runs and he recorded three strikeouts. The Lakers offense was led by Taylor Holthaus, he went 3-for-5 with two doubles and Caleb Curry went 2-for-4 with a double and he earned a walk. Josh Becker went 1-for-4 with a double and he scored a run and Riley Voit went 1-for-5 with a double. Carter Holthaus went 3-for-4 with a stolen base and he scored one run and Carter Philippi was credited with a RBI. Matt Meyer went 1-for-5, Zac Tomsche earned a walk, he had three stolen bases and he was hit by a pitch and Cody Stich had a sacrifice bunt.

Pierz Bulldogs vs. St. Stephen Steves (4:30)
Avon Lakers vs. Buckman Billgoats (7:00)

NEW LONDON-SPICER TWINS 8 BIRD ISLAND BULLFROGS 6

The Twins of the County Line League defeated the Bullfrogs of the Cornbelt League, a region rival, backed by ten hits, including four extra base hits. The starting pitcher for the Twins was Patrick Courtney, he threw eight innings to earn the win. He gave up seven hits, issued one walk, surrendered six runs and he recorded five strikeouts. Grant Bangen threw one inning in relief to earn the save, he gave up one hit and he issued one walk. The Twins were led on offense by John Perkins, he went 2-for-4 with a double for three RBI’s, he was hit by a pitch and he had a stolen base. Josh Soine went 3-for-4 with a double for a RBI, he had two stolen bases and he scored two runs. Austin Rambow went 2-for-4 with two doubles, a stolen base and he scored two runs.
Dalton Rambow went 1-for-4 for a RBI and he scored a run and Trent Pientka went 1-for-2 for a RBI, he had a stolen base and he scored one run. Jake Rambow went 2-for-5. Scott Rambow earned a walk and he scored one run and Derek Dolezal earned a pair of walks and he scored one run. The Bullfrogs starting pitcher Brad Gass threw three innings, he gave up three hits, issued two walks and he surrendered one run. James Woelfel threw two innings in relief, he gave up one hit and he recorded one strikeout. Eric Gass threw one inning in relief, he gave up three hits, issued one walk, surrendered five runs and he recorded one strikeout. Casey Lewandowski threw two innings in relief, he gave up three hits and surrendered two runs. They were led on offense by Luke Ryan, he went 1-for-4 with a double for a RBI, he earned a walk, he had a stolen base and he scored two runs. Jack Peppel went 2-for-4 with a home run and Reed Studther went 2-for-4 and he scored a run. Shawn Dollarschell went 1-for-3, he earned a walk and he scored one run and Jordan Sagedahl was credited with a RBI, he had a stolen base and he was hit by a pitch. Logan Swann went 1-for-3, he had a stolen base and he scored one run, Trevor Nissan went 1-for-4 and Trent Athmann was hit by a pitch.

Guillotine Writer (26 years)
Class A State Ratings Editor (20 Years)
USA Wrestling Magazine State Co-State Editor (12 Years)
1390 Granite City Sports College/Amateur/Legion Baseball/Wrestling Beat Writer (6 Years)
Bug squash

A simple immutable list like the fundamental list type in OCaml, F# or Haskell can be expressed as:

type 'a Alist =
    | Nil
    | Cons of 'a * 'a Alist

That is, it's either empty, or it's non-empty. We could refactor the non-empty part to a record type:

type 'a Alist =
    | Nil
    | Cons of 'a NonEmptyList

and 'a NonEmptyList = { Head: 'a; Tail: 'a Alist }

This NonEmptyList type is clearly not a new data structure (it's still an immutable list). It may seem silly at first to use this as a separate list type, but it actually goes a long way towards making illegal states unrepresentable. Because it is guaranteed not to be empty by the type system, it has certain interesting properties. For one, obviously getting the head of a non-empty list will always work, for any instance of the type, while List.head throws an exception for an empty list (here's the proof of why it can't have any other behavior). The F# List module has many such partial functions that are undefined for empty lists: head, tail, reduce, average, min, max... and of course the respective functions in the Seq module and System.Linq.Enumerable. If you want to use one of these functions on a regular list/IEnumerable, you either have to immediately check if the input is empty first, and return a 'default' value (many people wrap this in a function e.g. MaxOrDefault(defaultValue), effectively making it a total function); or catch the possible exception every time. Otherwise you're risking a possible failure. Another way to make these functions total is by simply removing the empty list from the domain, that is, operating on non-empty lists. The Haskell community generally recommends avoiding such trivially avoidable partial functions [1] [2] [3], and this advice applies equally to most (all?) languages, especially typed languages. At the very least, it's useful to be aware of where and why you're using a partial function. Applicative validation makes for a good example of NonEmptyLists.
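To make the totality argument concrete, here is a minimal sketch of a few total operations on such a type. The function names below are illustrative, not necessarily the ones in FSharpx:

```fsharp
// Same shape as the refactored list above.
type 'a Alist =
    | Nil
    | Cons of 'a NonEmptyList

and 'a NonEmptyList = { Head: 'a; Tail: 'a Alist }

// Total by construction: every NonEmptyList has a Head,
// so unlike List.head this can never throw.
let head (ne: 'a NonEmptyList) = ne.Head

// Converting back to a regular list is always possible
// (the inclusion only goes one way).
let rec toList (ne: 'a NonEmptyList) =
    ne.Head :: (match ne.Tail with
                | Nil -> []
                | Cons t -> toList t)

// A total max: the empty case is simply not in the domain.
let max (ne: int NonEmptyList) = toList ne |> List.max
```

Note how max needs neither a default value nor exception handling; the type system has already ruled out the only failing input.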
I've written about applicative functor validation before, in F# and in C#, with examples. The type I used to represent a validation was Choice<'a, string list>. This means: either the value (when it's correct), or a list of errors. But strictly speaking, this type is too "loose", since it allows the value Choice2Of2 [], which intuitively means "The input was invalid, but there is no error". This simply doesn't make any sense. If the input is invalid, there must be at least one error. Thus, the correct type to use here is Choice<'a, string NonEmptyList>. Another occurrence of NonEmptyList recently popped up while I was writing a library to bind the Urchin Data API. In this API there's a parameter that is mandatory, but admits more than one value: a perfect match for a NonEmptyList. In general, when you find yourself not knowing what to do with the empty list case, or when you think "this list can't possibly be empty here", it may be an indication that you need a NonEmptyList. The code is currently in my fork of FSharpx, it includes the usual functions: cons, map, append, toList, rev, collect, etc. It's also usable from C#, here are some tests showing this. I briefly touched on the subject of totality here, which has deep connections to Turing completeness. Here's some recommended further reading about it:

Both C# and F# support optional parameters. But since they're implemented differently, how well do they play together? How well do they interop? Here I'll analyze both scenarios: consuming F# optional parameters from C#, and consuming C# optional parameters from F#.
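As a sketch of how the tighter type pays off in validation code (the helper names singleton and append below are assumed for illustration, not necessarily FSharpx's spelling), notice that the failure case can only be built together with at least one error message:

```fsharp
// Minimal non-empty list for this sketch.
type 'a NonEmptyList = { Head: 'a; Tail: 'a list }
let singleton x = { Head = x; Tail = [] }
let append a b = { Head = a.Head; Tail = a.Tail @ (b.Head :: b.Tail) }

// Either a valid value, or at least one error.
type Validation<'a> = Choice<'a, string NonEmptyList>

// The only way to construct a failure requires a message,
// so the nonsensical Choice2Of2 [] is unrepresentable.
let fail (msg: string) : Validation<'a> = Choice2Of2 (singleton msg)

// Applicative combination: accumulate every error from both sides.
let ap (v: Validation<'a>) (f: Validation<'a -> 'b>) : Validation<'b> =
    match f, v with
    | Choice1Of2 g, Choice1Of2 x -> Choice1Of2 (g x)
    | Choice2Of2 e, Choice1Of2 _ -> Choice2Of2 e
    | Choice1Of2 _, Choice2Of2 e -> Choice2Of2 e
    | Choice2Of2 e1, Choice2Of2 e2 -> Choice2Of2 (append e1 e2)
```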
For reference, I'm using VS2012 RC (F# 3.0, C# 5.0).

Calling C# optional parameters in F#

Let's start with some C# code that has optional parameters and see how it behaves in F#:

public class CSharp {
    public static string Something(string a, int b = 1, double c = 2.0) {
        return string.Format("{0}: {1} {2}", a, b, c);
    }
}

Here are some example uses of this function:

var a = CSharp.Something("hello");
var b = CSharp.Something("hello", 2);
var c = CSharp.Something("hello", 2, 3.4);

Now we try to call this method from F# and we see: Uh-oh, those parameters sure don't look very optional. However, it all works fine and we can write:

let a = CSharp.Something("hello")
let b = CSharp.Something("hello", 2)
let c = CSharp.Something("hello", 2, 3.4)

which compiles and works as expected.

Calling F# optional parameters in C#

Now the other way around, a method defined in F#, using the F# flavor of optional parameters:

type FSharp =
    static member Something(a, ?b, ?c) =
        let b = defaultArg b 0
        let c = defaultArg c 0.0
        sprintf "%s: %d %f" a b c

We can happily use it like this in F#:

let a = FSharp.Something("hello")
let b = FSharp.Something("hello", 2)
let c = FSharp.Something("hello", 2, 3.4)

But here's how this method looks like in C#: Yeah, there's nothing optional about those parameters. What we need to do is to implement the C# flavor of optional parameters "manually". Fortunately that's pretty easy, just mark those parameters with the Optional and DefaultParameterValue attributes:

open System.Runtime.InteropServices

type FSharp =
    static member Something(a, [<Optional;DefaultParameterValue(null)>] ?b, [<Optional;DefaultParameterValue(null)>] ?c) =
        let b = defaultArg b 0
        let c = defaultArg c 0.0
        sprintf "%s: %d %f" a b c

Why "null" you ask? The default value should have been None, but that's not a compile-time constant so it can't be used as an attribute argument. Null is interpreted as None.
These attributes don't affect F# callers, but now in C# we can write:

Console.WriteLine(FSharp.Something("hello", FSharpOption<int>.Some(5)));

So we have optional parameters but we still have to deal with option types when we want to use them. If you find that annoying or ugly, you could use FSharpx, in which case FSharpOption<int>.Some(5) turns into 5.Some(). The astute reader will suggest an overload just to handle the C# compatibility case. Alas, that doesn't work in the general case. Let's try and see what happens:

type FSharp =
    static member private theActualFunction (a, b, c) =
        sprintf "%s: %d %f" a b c
    static member Something(a, ?b, ?c) =
        let b = defaultArg b 0
        let c = defaultArg c 0.0
        FSharp.theActualFunction (a, b, c)
    static member Something(a, [<Optional;DefaultParameterValue(0)>] b, [<Optional;DefaultParameterValue(0.0)>] c) =
        FSharp.theActualFunction (a, b, c)

Note that I moved the "actual working function" to a separate method, otherwise the second overload would just recurse. But we have a duplication in the definition of the default values. Still, the real problem shows when we try to use this in F#:

let d = FSharp.Something("hello", 2, 3.4)

This doesn't compile as F# can't figure out which one of the overloads to use.

To summarize: F# has no issues consuming optional parameters defined in C#. When writing methods with optional parameters in F# to be called from C#, either add the corresponding attributes and deal with the option types, or add a separate non-overloaded method. Or forget the optional parameters altogether and add overloads, just as we all did in C# before it supported optional parameters.

My implementation of formlets, based on the original paper, composes quite a few applicative functors. They're all standard applicatives. For example, the one that looks up values from the submitted form is just a specialized Reader (i.e. a Reader with one of the type parameters fixed to the form type). The applicative responsible for generating form element names is a State.
Another two of the applicatives are actually the same applicative, only specialized with different type parameters. Since many of these are already implemented in FSharpx, I decided to use those implementations instead... After making the necessary changes and getting it to compile, I ran the tests and got a lot of failures. Many of the outputs involving lists were exactly in inverse order! I traced it down to the composition of applicatives, but I couldn't figure out what was wrong.

I'll illustrate with a simple but concrete example. We'll use the Writer applicative. Essentially, the effect of this applicative is appending values with a monoid. Here we'll just accumulate on a list. This is much simpler to see in code:

    let puree x = [],x

    let ap (x1,x2) (f1,f2) = f1 @ x1, f2 x2

An example using it:

    puree (-)
    |> ap (["a";"b"],3)
    |> ap (["c";"d"],2)

This evaluates to (["a"; "b"; "c"; "d"], 1), i.e. it concatenates the lists (the effect) and applies the function to the second value in the tuple. So far so good.

Now let's try to compose this applicative with itself. Composing applicatives is easy: as I explained in a previous article, just lift ap and apply pure to pure:

    let lift2 f a b = puree f |> ap a |> ap b
    let composedPure x = x |> puree |> puree
    let composedAp x f = lift2 ap x f

Let's see how this works:

    composedPure (-)
    |> composedAp (["a"; "b"], ([1; 2], 2))
    |> composedAp (["c"; "d"], ([3; 4], 3))

which gives us (["c"; "d"; "a"; "b"], ([1; 2; 3; 4], -1))

Uh-oh, the outer applicative has its effect flipped! The result should have been (["a"; "b"; "c"; "d"], ([1; 2; 3; 4], -1)). What went wrong here?

One difference between this code and the Haskell definition of applicative functors is that I flipped the parameters of ap. This allowed us to apply ap with a pipe as is usual in F#. You could also use a forward and a backward pipe to "infixify" a function, but it just doesn't look right to me.
Even though it compiles and apparently looks correct, this difference broke our applicative composition. In order to fix the applicative composition and still keep the convenient flipped parameters, we have to change composedAp to:

    let flip f a b = f b a
    let composedAp x f = flip (lift2 (flip ap)) x f

The question now is: do you really understand why this composedAp is correct, just by looking at its definition, while the previous one would flip one of the applicatives and not the other? To be honest, I don't. But simple equational reasoning can tell us what went wrong. Let's start with the original (incorrect) definition of composedAp:

    lift2 ap x f
    = puree ap |> ap x |> ap f                  // lift2 definition
    = ap f (ap x (puree ap))                    // |> definition
    = ap (f1,f2) (ap (x1,x2) (pap1, pap2))      // expand tuples, apply puree
    = ap (f1,f2) (pap1 @ x1, pap2 x2)           // apply inner ap
    = pap1 @ x1 @ f1, pap2 x2 f2                // apply outer ap
    = [] @ x1 @ f1, ap x2 f2                    // apply puree
    = x1 @ f1, ap (x21, x22) (f21, f22)         // simplify empty list, expand tuples
    = x1 @ f1, (f21 @ x21, f22 x22)             // apply ap

Now the correct composedAp, for comparison:

    flip (lift2 (flip ap)) x f
    = flip (fun a b -> ap b (ap a (puree (flip ap)))) x f   // lift2 definition
    = (fun a b -> ap b (ap a (puree (flip ap)))) f x        // apply flip
    = ap x (ap f (puree (flip ap)))                         // apply lambda
    = ap (x1,x2) (ap (f1,f2) (pfap1, pfap2))                // expand tuples, apply puree
    = ap (x1,x2) (pfap1 @ f1, pfap2 f2)                     // apply ap
    = pfap1 @ f1 @ x1, pfap2 f2 x2                          // apply ap
    = [] @ f1 @ x1, (flip ap) f2 x2                         // apply puree
    = f1 @ x1, ap x2 f2                                     // simplify empty list, apply flip
    = f1 @ x1, ap (x21, x22) (f21, f22)                     // expand tuples
    = f1 @ x1, (f21 @ x21, f22 x22)                         // apply ap

By comparing both you can get a better understanding of why two flips are necessary. Reasoning like this is a simple but powerful tool. We kinda do it continuously, informally, while writing code, which is usually called "running the program in your head". The absence of side-effects (i.e.
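To check the reasoning mechanically, here's a self-contained script restating the definitions from this example and evaluating both versions of composedAp side by side:

```fsharp
let puree x = [], x
let ap (x1, x2) (f1, f2) = f1 @ x1, f2 x2
let lift2 f a b = puree f |> ap a |> ap b
let flip f a b = f b a

let composedPure x = x |> puree |> puree
// incorrect: composes, but flips the outer applicative's effect
let composedApWrong x f = lift2 ap x f
// correct: the two flips restore the order of the outer effect
let composedApRight x f = flip (lift2 (flip ap)) x f

let wrong =
    composedPure (-)
    |> composedApWrong (["a"; "b"], ([1; 2], 2))
    |> composedApWrong (["c"; "d"], ([3; 4], 3))
// wrong = (["c"; "d"; "a"; "b"], ([1; 2; 3; 4], -1)), outer list flipped

let right =
    composedPure (-)
    |> composedApRight (["a"; "b"], ([1; 2], 2))
    |> composedApRight (["c"; "d"], ([3; 4], 3))
// right = (["a"; "b"; "c"; "d"], ([1; 2; 3; 4], -1)), both effects in order
```

Note that the inner applicative produces [1; 2; 3; 4] in both versions; only the outer list order differs, which is exactly what the derivation above predicts.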
pure functional programming) makes it easier to do it formally as well as informally, since you typically need to juggle less stuff in your head.

In my last two posts I showed how MbUnit supports first-class tests, and how you could use that to build a DSL in F# around it. I explained how many concepts in typical xUnit frameworks can be more simply expressed when tests are first-class values, which is not the case for most .NET and Java test frameworks. More concretely, test setup/teardown is a function over a test, and parameterized tests are... just data manipulation.

Since first-class tests greatly simplify things, why not dispense with the typical class-based, attribute-driven approach and build a test library around first-class tests? Well, Haskellers have been doing this for at least 10 years now, with HUnit. HUnit organizes tests using this tree:

    -- | The basic structure used to create an annotated tree of test cases.
    data Test
        -- | A single, independent test case composed.
        = TestCase Assertion
        -- | A set of @Test@s sharing the same level in the hierarchy.
        | TestList [Test]
        -- | A name or description for a subtree of the @Test@s.
        | TestLabel String Test

Where Assertion is simply an alias for IO (). This is all you need to organize tests in suites and give them names.
We can trivially translate this to F#:

    type TestCode = unit -> unit

    type Test =
        | TestCase of TestCode
        | TestList of Test seq
        | TestLabel of string * Test

Let's see an example:

    let testA =
        TestLabel ("testsuite A", TestList [
            TestLabel ("test A", TestCase(fun _ -> Assert.AreEqual(4, 2+2)))
            TestLabel ("test B", TestCase(fun _ -> Assert.AreEqual(8, 4+4)))
        ])

It's quite verbose, but we can define the same DSL as I defined earlier for MbUnit tests, so this becomes:

    let testA =
        "testsuite A" =>> [
            "test A" => fun _ -> Assert.AreEqual(4, 2+2)
            "test B" => fun _ -> Assert.AreEqual(8, 4+4)
        ]

Actually, I first ported HUnit (including this DSL), then discovered that MbUnit has first-class tests and later wrote the DSL around MbUnit. Everything I described in those posts (setup/teardown as higher-order functions, parameterized tests as simple data manipulation, arbitrary nesting of test suites) applies here in the exact same way. In fact, MbUnit's class hierarchy of Test/TestSuite/TestCase can be read as the following algebraic data type:

    type Test =
        | TestSuite of string * Test list
        | TestCase of string * Action

which turns out to be very similar to the tree we translated from HUnit, only the names are embedded instead of being a separate case.

I called this HUnit port Fuchu (it doesn't mean anything); it's on github. Fuchu doesn't include any assertion functions, or at least not yet. (EDIT: assertions were added in 0.2.0) It only gives you tools to organize and run tests, but you're free to use NUnit, MbUnit, xUnit, NHamcrest, etc., or more F#-ish solutions like Unquote or FsUnit or NaturalSpec for assertions. Tighter integration with FsCheck is planned. (EDIT: it was added in the first release of Fuchu)

As with HUnit, the test assembly is the runner itself. That is, as opposed to having an external test runner as with most test frameworks, your test assembly is an executable (a console application). This is because it's more of a library than a framework.
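The `=>` and `=>>` operators used here can be defined directly over this tree. A sketch (Fuchu's actual definitions may differ in details such as the exact coercions):

```fsharp
type TestCode = unit -> unit

type Test =
    | TestCase of TestCode
    | TestList of Test seq
    | TestLabel of string * Test

// label a single test case
let inline (=>) name (test: TestCode) = TestLabel(name, TestCase test)

// label a list of tests, i.e. build a named suite
let inline (=>>) name (tests: Test list) =
    TestLabel(name, TestList(Seq.ofList tests))

let testA =
    "testsuite A" =>> [
        "test A" => fun _ -> if 2 + 2 <> 4 then failwith "2+2 should be 4"
        "test B" => fun _ -> if 4 + 4 <> 8 then failwith "4+4 should be 8"
    ]
```

The operators add no new capability; they only hide the TestLabel/TestCase/TestList constructors behind more readable names.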
As a consequence, there is no need to install any external tool to run tests (just hit CTRL-F5 in Visual Studio) or debug tests (just set your breakpoints and hit F5 in Visual Studio). Here's a clear signal of why this matters:

How does one run a #fsharp unit test project in #vs11? — Community for F# (@c4fsharp) March 23, 2012

So how do you run tests with Fuchu? Given a test suite testA like the one defined above, you can run it like this:

    [<EntryPoint>]
    let main _ = run testA // or runParallel

But this is quite inconvenient, as it's common to split tests among different modules/files, and this would mean having to list all tests somewhere to feed them to the run function. HUnit works around this using Template Haskell, and OUnit (OCaml's port of HUnit) users generate the boilerplate code by parsing the tests' source code. In .NET we can just decorate the tests with an attribute and then use reflection to fetch them:

    [<Tests>]
    let testA = "2+2=4" => fun _ -> Assert.AreEqual(4, 2+2)

    [<Tests>]
    let testB = "2*3=6" => fun _ -> Assert.AreEqual(7, 2*3)

    [<EntryPoint>]
    let main args = defaultMainThisAssembly args

This function defaultMainThisAssembly does exactly what it says on the tin. Notice that it also takes the command-line args, so if you call it with "/m" it will run the tests in parallel. (Curiously, you can't say let main = defaultMainThisAssembly, it won't be recognized as the entry point.) By the way, this is just an example; you wouldn't normally annotate every single test with the Tests attribute, only the top-level test group per module.

Run this code and you get an output like this:

    2*3=6: Failed: Expected: 7
    But was: 6
    G:\prg\Test.fs(15,1): SomeModule.testB@15.Invoke(Unit _arg1)

    2 tests run: 1 passed, 0 ignored, 1 failed, 0 errored (00:00:00.0058780)

If you run this within Visual Studio with F5 you can navigate to the failing assertion by clicking on the line that looks like a mini stack trace.

REPL it!
Since running tests is as easy as saying "run test", it's also convenient sometimes to do so from the F# REPL. On the plus side:

• You can directly load the source code under test in the REPL, which cuts down compilation times.
• It's easy to cherry-pick one or a few tests to run instead of running all tests (with the provided Test.filter function).

On the minus side:

• You have to manually load all dependencies of the tests. It may be possible to work around this using a variant of this script by Gustavo Guerra.
• If you reference the assembly under test in the REPL, fsi.exe blocks the DLLs, so you have to reset the REPL session to recompile. But if you're testing F# code, you can work around this by loading source code instead of referencing the assembly.

You can see an example of running tests from the REPL here.

Other tools

Integrating with other tools is not simple. Most tools out there seem to assume that tests are organized in classes, and that each test corresponds to a method or function. This also happens with MbUnit's StaticTestFactory: for example, in ReSharper or TestDriven.Net you can't single out tests. Still, they can be made to run let-bound tests (which may be a test suite), so it should be possible to have some support within this limitation.

Also, there's no immediate support for any continuous test runner. I checked with Greg Young; he tells me that MightyMoose/AutoTest.NET can be configured to use an arbitrary executable (with limitations). Remco Mulder, of NCrunch, suggested wrapping the test runner in a test from a known test framework as a workaround. Maybe executing the tests after compilation (with a simple AfterBuild msbuild target) is enough. I haven't looked into this yet.

Coverage tools should have no problem: it makes no difference where the executable comes from. Build tools should have no issues either; obviously FAKE is the more direct option, but I see no problems integrating this with other build tools.
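Conceptually, the Test.filter mentioned above is just a recursion over the test tree, pruning subtrees where nothing matched. A sketch (Fuchu's real Test.filter signature and matching rules may differ):

```fsharp
type TestCode = unit -> unit

type Test =
    | TestCase of TestCode
    | TestList of Test seq
    | TestLabel of string * Test

// Keep subtrees whose label satisfies pred; prune everything else.
// A bare TestCase is dropped unless some ancestor label already matched,
// in which case the whole labeled subtree was kept above.
let rec filter (pred: string -> bool) (test: Test) : Test option =
    match test with
    | TestCase _ -> None
    | TestLabel (name, sub) ->
        if pred name then Some test
        else filter pred sub |> Option.map (fun s -> TestLabel(name, s))
    | TestList tests ->
        match tests |> Seq.choose (filter pred) |> Seq.toList with
        | [] -> None
        | kept -> Some (TestList (Seq.ofList kept))
```

From the REPL this makes cherry-picking a matter of filtering by name and running only the subtree that survives.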
C# support

I threw in a few wrapper functions to make this library usable in C# / VB.NET. Of course, it will never be as concise as F#, but it's still usable. I'm not going to fully explain this (it's just boring sugar) but you can see an example here.

NUnit/MbUnit test loading

Even though it may seem very different, Fuchu is still built on xUnit concepts. And since tests are first-class values, it's very easy to map tests written with other xUnit frameworks to Fuchu tests. For example, building Fuchu tests from NUnit test classes takes less than 100 LoC (it's already built into Fuchu). This lets you use Fuchu as a runner for existing tests, and to write new tests. I'm planning to use this soon in SolrNet to replace Gallio (for example, Gallio doesn't work on Mono).

There is a limitation here: Fuchu can't express TestFixtureTearDowns. It can do TestFixtureSetups (and obviously SetUp/TearDown, as explained in previous posts), but not TestFixtureTearDowns (or at least not unless you treat that test suite separately). Give it a try and see for yourself :). Is it a real downside? I don't think so (for example, TestFixtureTearDowns make parallelization harder), but it's something to be aware of. Also, I haven't looked into test inheritance yet, but it should be pretty easy to support.

Does .NET really need yet another test framework? Absolutely not. The current test frameworks are "good enough" and hugely popular. But since they don't treat tests as first-class values, extending them results in more and more complexity. Consider the lifecycle of a test in a typical unit testing framework. Inheritance and multiple marker attributes make it so complex that it reminds me of the ASP.NET page lifecycle. What I propose with Fuchu is a hopefully simpler, no-magic model. Remember KISS?

In F# we typically organize tests much like in C# or VB.NET: writing functions marked with a [<Test>] attribute or similar.
Actually there's a slight advantage in F#: you don't need to write a class marked as a test fixture, you can directly write the tests as let-bound functions. Still, it's fundamentally the same model. (If you're into BDD, there's also TickSpec as an alternative model.) Since it's the same model, you get the same issues I described in my last post, and then some: for example, as Kurt explains, attributes in F# sometimes aren't treated exactly as in C#.

Also in my last post, I wrote about how MbUnit supports first-class tests as an alternative to attribute-defined tests. In F# we can take advantage of this and custom operators to build a very concise DSL to define tests.

First let's see a small test suite with setup/teardown, written with the classic attributes:

    [<TestFixture>]
    type ``MemoryStream tests``() =
        let mutable ms : MemoryStream = null

        [<SetUp>]
        member x.Setup() =
            ms <- new MemoryStream()

        [<TearDown>]
        member x.Teardown() =
            ms.Dispose()

        [<Test>]
        member x.``Can read``() = Assert.IsTrue ms.CanRead

        [<Test>]
        member x.``Can write``() = Assert.IsTrue ms.CanWrite

Looks simple enough, right? And yet, the mutable field is a smell, or at least an indicator that this isn't functional. Let's try to get rid of that mutable. As a first step we'll rewrite this as first-class tests, that is, using [<StaticTestFactory>] as shown in my last post:

    [<StaticTestFactory>]
    let testFactory() =
        let suite = TestSuite("MemoryStream tests")
        let ms : MemoryStream ref = ref null
        suite.SetUp <- fun () -> ms := new MemoryStream()
        suite.TearDown <- fun () -> (!ms).Dispose()
        let tests = [
            TestCase("Can read", fun () -> Assert.IsTrue (!ms).CanRead)
            TestCase("Can write", fun () -> Assert.IsTrue (!ms).CanWrite)
        ]
        Seq.iter suite.Children.Add tests
        [suite]

Oh great, that's even uglier than what we started with! And we have replaced the mutable field with a ref cell, not much of an improvement. But bear with me: we have first-class tests now, so there's a lot of room for improvement.
In order to keep refactoring this, we need to realize that the problem is that our test cases should be functions MemoryStream -> unit instead of unit -> unit. That way, they wouldn't have to depend on an external MemoryStream instance; instead the instance would be pushed somehow to the test. Let's write that:

    let tests = [
        "Can read", (fun (ms: MemoryStream) -> Assert.IsTrue ms.CanRead)
        "Can write", (fun ms -> Assert.IsTrue ms.CanWrite)
    ]

Now we have this list of strings and MemoryStream -> unit functions. What we need now is to turn these functions into unit -> unit so we can ultimately build TestCases. In other words, we need a function (MemoryStream -> unit) -> (unit -> unit). This function should create the MemoryStream, pass it to our test function, then dispose the MemoryStream. Hey, what do you know, turns out that's just what SetUp and TearDown do! Still with me? It's much easier to see this in code:

    let withMemoryStream f () =
        use ms = new MemoryStream()
        f ms

Now we apply this to our list, building the TestCases and then the TestSuite:

    [<StaticTestFactory>]
    let testFactory() =
        let suite = TestSuite("MemoryStream tests")
        tests
        |> Seq.map (fun (n,t) -> TestCase(n, Gallio.Common.Action(withMemoryStream t)))
        |> Seq.iter suite.Children.Add
        [suite]

We've eliminated all mutable references, and also replaced SetUp/TearDown with a simple higher-order function. But we can still do better in terms of readability.
We can define a few custom operators to hide the TestSuite and TestCase constructors:

    let inline (=>>) name tests =
        let suite = TestSuite(name)
        Seq.iter suite.Children.Add tests
        suite :> Test

    let inline (=>) name (test: unit -> unit) =
        TestCase(name, Gallio.Common.Action test) :> Test

    [<StaticTestFactory>]
    let testFactory() =
        [
            "MemoryStream tests" =>> [
                "Can read" => withMemoryStream (fun ms -> Assert.IsTrue ms.CanRead)
                "Can write" => withMemoryStream (fun ms -> Assert.IsTrue ms.CanWrite)
            ]
        ]

And with a couple more operators we get rid of the duplicate call to withMemoryStream:

    let inline (+>) f =
        Seq.map (fun (name, partialTest) -> name => f partialTest)

    let inline (==>) (name: string) test = name, test

    [<StaticTestFactory>]
    let testFactory() =
        [
            "MemoryStream tests" =>> withMemoryStream +> [
                "Can read" ==> fun ms -> Assert.IsTrue ms.CanRead
                "Can write" ==> fun ms -> Assert.IsTrue ms.CanWrite
            ]
        ]

Confused about all those kinds of arrows? The good thing about first-class tests is that you can build them any way you want; there's no need to use these operators if you don't like them. That's also precisely one of the downsides: as there is no fixed idiom, it can get harder to read compared to attribute-based test definitions, where there is a single, well-defined way to do things.

In my last post I showed how first-class tests practically eliminate the concept of parameterized tests. In this post I showed how they eliminate the concept of setup/teardown, replacing them with a higher-order function, a more generic concept. More generally, I'd say that whatever domain you're modeling (in this case, tests), there is much to gain if the core concepts are representable as first-class values.

It should also be noted that different languages have very different notions of what language objects are first-class values. Some are more flexible than others, but that doesn't imply any superiority by itself.
However it does mean that if you're not aware of this you'll probably misuse your language and end up with ever more complex workarounds to manipulate your domain objects as values. Nice APIs, conventions, configuration, etc, are all secondary and can be built much more easily on top of composable, first-class building blocks. But I digress. In the next post I'll show a simple testing library built around tests as first-class values and more pros/cons about this approach. Originally, xUnit style testing frameworks used inheritance to define tests. SUnit, the original xUnit framework, builds test cases by inheriting the TestCase class. NUnit 1.0 and JUnit derived from SUnit and also used inheritance. Fast-forward to today, unit testing frameworks in .NET and Java typically organize tests using attributes/annotations instead. For a few years now, MbUnit has been able to define tests programmatically as an alternative, though it seems this feature isn't used much. Let's compare attributes vs programmatic tests with a simple example in C#: public class TestFixture { public void Test() { Assert.AreEqual(4, 2 + 2); public void AnotherTest() { Assert.AreEqual(8, 4 + 4); public class TestFixture { public static IEnumerable<Test> Tests() { yield return new TestCase("Test", () => { Assert.AreEqual(4, 2 + 2); yield return new TestCase("Another test", () => { Assert.AreEqual(8, 4 + 4); At first blush, declaring tests programmatically is more verbose and complex. However, the real difference is that these tests are first-class values. 
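The point that tests-as-values admit ordinary data manipulation can be shown in a tiny F# sketch (a hypothetical minimal Test type, not any particular framework's):

```fsharp
// a hypothetical, minimal first-class test type
type Test =
    | TestCase of string * (unit -> unit)
    | TestList of Test list

// tests built by mapping plain data into test values
let arithmeticTests =
    [ "2+2=4", fun () -> if 2 + 2 <> 4 then failwith "expected 4"
      "4+4=8", fun () -> if 4 + 4 <> 8 then failwith "expected 8" ]
    |> List.map TestCase
    |> TestList

// ordinary functions transform tests like any other value,
// e.g. repeating a test to smoke out order dependencies
let repeat n test = TestList (List.replicate n test)
```

No reflection, no attributes: building, grouping, and transforming tests is plain list manipulation.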
It becomes more clear why this matters with an example of parameterized tests: public class TestFixture { public void Parse(string input, DateTime expectedOutput) { var r = DateTime.ParseExact(input, "yyyy-MM-dd'T'HH:mm:ss.FFF'Z'", CultureInfo.InvariantCulture); Assert.AreEqual(expectedOutput, r); IEnumerable<object[]> Parameters() { yield return new object[] { "1-01-01T00:00:00Z", new DateTime(1, 1, 1) }; yield return new object[] { "2004-11-02T04:05:20Z", new DateTime(2004, 11, 2, 4, 5, 20) }; public class TestFixture { public static IEnumerable<Test> Tests() { var parameters = new[] { new { input = "1-01-01T00:00:00Z", expectedOutput = new DateTime(1, 1, 1) }, new { input = "2004-11-02T04:05:20Z", expectedOutput = new DateTime(2004, 11, 2, 4, 5, 20) }, return parameters.Select(p => new TestCase("Parse " + p.input, () => { var r = DateTime.ParseExact(p.input, "yyyy-MM-dd'T'HH:mm:ss.FFF'Z'", CultureInfo.InvariantCulture); Assert.AreEqual(p.expectedOutput, r); Programmatically, we just wrote the parameters and tests in a direct style. With attributes, not only we lost the types but also it's more complicated: you have to know (or look up in the documentation) that you need a [Factory] attribute, that its string parameter indicates the method name that contains the test parameters, and the format for the parameters (e.g. can they be represented as a property? As a field? Can it be private? Static? Can it be a non-generic IEnumerable? An ArrayList[]?). Fortunately, MbUnit is quite flexible about it. Yet it doesn't handle an Something similar happens with JUnit and TestNG. Actually JUnit did have something close to first-class tests with its inheritance API. With programmatic tests, you simply return a list of tests, there's no magic about it. It doesn't matter how they're built, they can be parameterized or not, all you have to know is [ StaticTestFactory] public static IEnumerable<Test> Tests() . 
If they're parameterized, it doesn't matter what kind of parameters they are. Actually, the very concept of "parameterized tests" simply disappears. With attributes, you may have tried to use [Row] first, only to have the compiler remind you that attribute parameter types are very limited and you can't have a DateTime. Or a function. Or even a decimal. The testing framework gets in the way. Attributes are just not the right tool to model this.

With programmatic tests, you are in control, not the testing framework. It becomes more of a library than a framework. Things are conceptually simpler. What about SetUp and TearDown? Don't worry, MbUnit supports them directly as properties of TestSuite. However, as we'll see in the next post, they're not really necessary. We'll also see a few other pros/cons first-class tests have. I'll leave you with this quote from the twitter-fake Alain de Botton:

I understand that annotations beat boilerplate XML config. But what's wrong with, you know, writing real code instead? — PLT Alain de Botton (@PLTAlaindeB) March 13, 2012

The more I learn about functional programming, the more I come to question many widely used and accepted practices in mainstream programming. This time it's the turn of mocks and mocking libraries. First, since there are so many different definitions for stubs, mocks, fakes, etc., here's my own definition of a mock: an entity (in an object-oriented language, usually an object) used to test an interaction between the entity under test and an external entity (again, in OO languages, these entities are objects). So mocks are used for interaction-based testing, which means testing for side-effects. The original paper on mock objects says this explicitly: "Test code should communicate its intent as simply and clearly as possible. This can be difficult if a test has to set up domain state or the domain code causes side effects." Side effects are code smells, or more precisely, they should be few and isolated.
Even the creators of mock object say that side effects make testing difficult! We should minimize the need for mocking. Code without side-effect doesn't need mocks. You still may need stubs or fakes, but those should be trivial to build. Quoting Daniel Cazzulino (author of Moq): "The sole presence of a 'Verify' method on the mock is a smell to me, one that will slowly get you into testing the interactions as opposed to testing the observable state changes caused by a particular behavior.". Make those states immutable (i.e. a state change create a new state) and you're half-way to side-effect-free code. Remember that discussion a few years ago where people said that Typemock was too powerful? The argument was that Typemock "doesn't force you to write testable code". What this really means is "it doesn't make you isolate side-effects". I'm thinking that all current mocking libraries are actually too powerful: instead of making you think how to write pure (side-effect free) code, it encourages you to just use a mock to replace your impure code with pure code in tests. There's also the matter of library complexity. .NET mocking libraries are big and complex: Moq: 17000 LoC, NSubstitute: 12000 LoC, Rhino Mocks: 87000 LoC, FakeItEasy: 17000 LoC. Not counting the embedded runtime proxy library (usually Castle DynamicProxy). Many have issues running mocks in parallel (1, 2, 3, 4). It's 2012 and most developers have at least a 4-core workstation, using a mock library that limits my ability to run tests in parallel is getting ridiculous. In summary, I think mock libraries support an undesirable practice, are not worth their code and have to go, except for very specific scenarios. But I still have a lot of existing side-effecting code I have to test, I can't just wish it away. Refactoring to pure code is not trivial. So I decided that I'm switching to manual mocking and using the mock count as a measure of code smell. 
— Mauricio Scheffer (@mausch) December 19, 2011 To which I got an encouraging reply: @mausch Lemme know if you need help achieving 0. — Tony Morris (@dibblego) December 20, 2011 By the way I'm not the first or the only one that thinks that mocking libraries aren't worth it. Uncle Bob also prefers manual mocking (even in Java, which is much more verbose than any .NET language), though he mostly stresses the argument of simplicity. In F# it's easy to do manual mocking thanks to object expressions (1, 2), so you don't have to actually create a new class for each mock. In C#/VB.NET we're not so lucky but we can get 80% there with a little boilerplate, making a "semi-manual", reusable mock class with settable Funcs to define behavior. Example: interface ISomething { void DoSomething(int a); class MockSomething: ISomething { public Action<int> doSomething; public void DoSomething(int a) { class Test { void test() { var r = new List<int>(); var s = new MockSomething { doSomething = r.Add // etc This isn't anything new, people have been doing this for years. Downsides: doesn't play well with overloaded and generic methods, but still works. Also, as Uncle Bob explains, manual mocks are prone to break whenever you change the mocked interface, but if you hit this often it could be revealing you that perhaps you should have used an abstract base class instead. I added some code to track the call count of a Func and named the resulting library Moroco. So I wanted to get away from mocking libraries and ended up writing one, talk about hypocrisy! The difference between Moroco and other mocking libraries is that it's really minimal: less than 400 lines of pretty trivial code, fitting in a single file, with no dependencies. And I'm still against mocks: I get to count mocks to measure mock smell. And run tests in parallel. I'm already using it in SolrNet, where I simply dropped Moroco's source code in the test project and replaced Rhino.Mocks in all tests. 
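For comparison, here's a sketch of the object-expression style of manual mocking mentioned above; the ISomething interface mirrors the C# one and is hypothetical:

```fsharp
// hypothetical interface, mirroring the C# ISomething above
type ISomething =
    abstract DoSomething: int -> unit

let test () =
    let r = System.Collections.Generic.List<int>()
    // object expression: an inline, throwaway implementation,
    // no mock library and no dedicated mock class needed
    let s =
        { new ISomething with
            member x.DoSomething a = r.Add a }
    s.DoSomething 42
    s.DoSomething 7
    // state-based verification: inspect what was recorded
    if List.ofSeq r <> [42; 7] then failwith "unexpected recorded calls"
```

The recorded list plays the role of the Verify call: the test asserts on observable state rather than on a mock framework's interaction log.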
Here's the 'before vs after' of one of the tests: public void Extract() { var mocks = new MockRepository(); var connection = mocks.StrictMock<ISolrConnection>(); var extractResponseParser = mocks.StrictMock<ISolrExtractResponseParser>(); var docSerializer = new SolrDocumentSerializer<TestDocumentWithoutUniqueKey>(new AttributesMappingManager(), new DefaultFieldSerializer()); var parameters = new ExtractParameters(null, "1", "test.doc"); .Expecting(() => { .Call(connection.PostStream("/update/extract", null, parameters.Content, new List<KeyValuePair<string, string>> { new KeyValuePair<string, string>("literal.id", parameters.Id), new KeyValuePair<string, string>("resource.name", parameters.ResourceName), .Return(EmbeddedResource.GetEmbeddedString(GetType(), "Resources.responseWithExtractContent.xml")); .Return(new ExtractResponse(null)); .Verify(() => { var ops = new SolrBasicServer<TestDocumentWithoutUniqueKey>(connection, null, docSerializer, null, null, null, null, extractResponseParser); public void Extract() { var parameters = new ExtractParameters(null, "1", "test.doc"); var connection = new MSolrConnection(); connection.postStream += (url, contentType, content, param) => { Assert.AreEqual("/update/extract", url); Assert.AreEqual(parameters.Content, content); var expectedParams = new[] { KV.Create("literal.id", parameters.Id), KV.Create("resource.name", parameters.ResourceName), Assert.AreElementsEqualIgnoringOrder(expectedParams, param); return EmbeddedResource.GetEmbeddedString(GetType(), "Resources.responseWithExtractContent.xml"); var docSerializer = new SolrDocumentSerializer<TestDocumentWithoutUniqueKey>(new AttributesMappingManager(), new DefaultFieldSerializer()); var extractResponseParser = new MSolrExtractResponseParser { parse = _ => new ExtractResponse(null) var ops = new SolrBasicServer<TestDocumentWithoutUniqueKey>(connection, null, docSerializer, null, null, null, null, extractResponseParser); Assert.AreEqual(1, connection.postStream.Calls); 
Yes, I do realize this uses an old Rhino.Mocks API, but it's necessary to get thread-safety. No, I don't expect anyone to use Moroco instead of Moq, Rhino.Mocks, etc. Yes, I know this means more code (though it's not as much as you might think), and I agree that .NET needs another mock library like I need a hole in my head. But I think we should think twice before using a mock and see if we can find a side-effect-free alternative for some piece of code. Even when you do use mocks, don't just blindly reach for a mocking library. Consider the trade-offs. I recently found a nice example of applicative functor validation in Scala (using Scalaz) by Chris Marshall, and decided to port it to F# and C# using FSharpx. I blogged about applicative functor validation before, in F# and in C#. When trying to port the Scala code to F# I found there were a few missing general functions in FSharpx, notably sequence and mapM. These are one- or two-liners, I ported them from Haskell, as it's syntactically closer to F# than Scala. Hoogle is always a big help for this. Here is the original code in Scala; here's the F# port and here's the C# port. I'm not going to copy it here: it's 160 lines of F# and 250 lines of C#. This example also makes for a nice comparison of these three languages (or four, if you count the implicit presence of Haskell). There are a few little differences in the ports, it's not a literal translation, but you can still see how Scala, being semantically closer to Haskell than either F# or C#, achieves more generality. As for type inference, the F# version requires almost no type annotations, while C# needs the most type annotations, and Scala is somewhere in the middle. This actually depends on what you consider a type annotation. I chose to make Person immutable in C# to reflect more accurately the equivalent F# and Scala code, but it's not really instrumental to this example. Still, it shows how verbose it is to create a truly immutable class in C#. 
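For a flavor of what those one- or two-liners look like, here's how sequence and mapM can be written for the option monad in F# (a sketch; FSharpx's actual names and module placement may differ):

```fsharp
// bind for the option monad, for reference
let bind f = function
    | Some x -> f x
    | None -> None

// sequence: turn a list of options into an option of a list,
// None as soon as any element is None
let sequence xs =
    List.foldBack
        (fun x acc -> bind (fun v -> Option.map (fun vs -> v :: vs) acc) x)
        xs
        (Some [])

// mapM: map a monadic function over a list, then sequence the results
let mapM f xs = sequence (List.map f xs)

// sequence [Some 1; Some 2; Some 3] = Some [1; 2; 3]
// sequence [Some 1; None; Some 3]   = None
```

The same shape works for any monad; without typeclasses, each .NET monad just gets its own copy of these definitions, which is the duplication mentioned at the end of this post.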
The C# dev team at Microsoft seems to highly value immutability, so I still have hopes that a future version of C# will improve this situation. The ability to define custom operators in Scala and F#, like <!> or *> (an ability that C# lacks) also makes it easier to work with different ways of composing functions. FSharpx also offers 'named' versions for many of these operators, for example <!> is simply 'map' and <*> is 'ap'. Despite what some people say, I think custom operators enable better readability once you know the concepts behind them. Remember that at some point you also learned what '=', '%' and '+' mean. In particular, the F# port shows the Kleisli composition operator >=> which I haven't seen mentioned in F# before. This operator is like the regular function composition operator >> except it works for monadic functions a -> m b. Compare the signatures for >> and >=> for Option: (>>) : ('a -> 'b) -> ('b -> 'c) -> 'a -> 'c (>=>) : ('a -> 'b option) -> ('b -> 'c option) -> 'a -> 'c option I'm quite pleased with the results of this port, even if I do say so myself. This example shows again that many higher concepts in functional programming commonly applied in Haskell are applicable, useful and usable in F# and even in C#. The lack of typeclasses and type constructor abstraction in .NET means some code duplication (mapM for example has to be defined for each monad), but this duplication is on the side of library code in many cases, and so client code isn't that badly affected. Homework: port this example to Gustavo's fork of FSharpx. In the last post I showed how to interop F# and C# algebraic data types (ADT). In C# you'll typically have ADTs or ADT-like structures expressed as a hierarchy of classes and the Visitor pattern to traverse/deconstruct them. So the question is: what's the best way to use C#/VB.NET visitors in F#? 
As an example, let's borrow a C# - F# comparison from Stackoverflow by Juliet Rosenthal, which models boolean operations:

public interface IExprVisitor<out T> {
    T Visit(TrueExpr expr);
    T Visit(And expr);
    T Visit(Nand expr);
    T Visit(Or expr);
    T Visit(Xor expr);
    T Visit(Not expr);
}

public abstract class Expr {
    public abstract t Accept<t>(IExprVisitor<t> visitor);
}

public abstract class UnaryOp : Expr {
    public Expr First { get; private set; }

    public UnaryOp(Expr first) {
        First = first;
    }
}

public abstract class BinExpr : Expr {
    public Expr First { get; private set; }
    public Expr Second { get; private set; }

    public BinExpr(Expr first, Expr second) {
        First = first;
        Second = second;
    }
}

public class TrueExpr : Expr {
    public override t Accept<t>(IExprVisitor<t> visitor) { return visitor.Visit(this); }
}

public class And : BinExpr {
    public And(Expr first, Expr second) : base(first, second) {}
    public override t Accept<t>(IExprVisitor<t> visitor) { return visitor.Visit(this); }
}

public class Nand : BinExpr {
    public Nand(Expr first, Expr second) : base(first, second) {}
    public override t Accept<t>(IExprVisitor<t> visitor) { return visitor.Visit(this); }
}

public class Or : BinExpr {
    public Or(Expr first, Expr second) : base(first, second) {}
    public override t Accept<t>(IExprVisitor<t> visitor) { return visitor.Visit(this); }
}

public class Xor : BinExpr {
    public Xor(Expr first, Expr second) : base(first, second) {}
    public override t Accept<t>(IExprVisitor<t> visitor) { return visitor.Visit(this); }
}

public class Not : UnaryOp {
    public Not(Expr first) : base(first) {}
    public override t Accept<t>(IExprVisitor<t> visitor) { return visitor.Visit(this); }
}

Let's say we want to write a function in F# to evaluate a boolean expression using these classes.
We could use an object expression to create an inline visitor:

let rec eval (e: Expr) =
    e.Accept { new IExprVisitor<bool> with
                 member x.Visit(e: TrueExpr) = true
                 member x.Visit(e: And) = eval(e.First) && eval(e.Second)
                 member x.Visit(e: Nand) = not(eval(e.First) && eval(e.Second))
                 member x.Visit(e: Or) = eval(e.First) || eval(e.Second)
                 member x.Visit(e: Xor) = eval(e.First) <> eval(e.Second)
                 member x.Visit(e: Not) = not(eval(e.First)) }

This is already more concise than the equivalent C# code, but we can do better. Once again, active patterns are the key. We only need one visitor to do the required plumbing:

module Expr =
    open DiscUnionInteropCS

    type ExprChoice = Choice<unit, Expr * Expr, Expr * Expr, Expr * Expr, Expr * Expr, Expr>

    let private visitor =
        { new IExprVisitor<ExprChoice> with
            member x.Visit(e: TrueExpr): ExprChoice = Choice1Of6 ()
            member x.Visit(e: And): ExprChoice = Choice2Of6 (e.First, e.Second)
            member x.Visit(e: Nand): ExprChoice = Choice3Of6 (e.First, e.Second)
            member x.Visit(e: Or): ExprChoice = Choice4Of6 (e.First, e.Second)
            member x.Visit(e: Xor): ExprChoice = Choice5Of6 (e.First, e.Second)
            member x.Visit(e: Not): ExprChoice = Choice6Of6 e.First }

    let (|True|And|Nand|Or|Xor|Not|) (e: Expr) = e.Accept visitor

And now we can write eval more idiomatically and more concisely:

let rec eval = function
    | True -> true
    | And(e1, e2) -> eval(e1) && eval(e2)
    | Nand(e1, e2) -> not(eval(e1) && eval(e2))
    | Or(e1, e2) -> eval(e1) || eval(e2)
    | Xor(e1, e2) -> eval(e1) <> eval(e2)
    | Not(e1) -> not(eval(e1))

This also opens the doors for more complex pattern matching. See Juliet's post for an example.

In a previous post I wrote about encoding algebraic data types in C#. Now let's explore the interoperability issues that arise when defining and consuming algebraic data types (ADTs) cross-language in C# and F#. More concretely, let's analyze construction and deconstruction of an ADT and how to keep operations as idiomatic as possible while also retaining type safety.
Defining an ADT in F# and consuming it in C#

In F#, ADTs are called discriminated unions. The first thing I should mention is that the F# component design guidelines recommend hiding discriminated unions as part of a general .NET API. I prefer to interpret it like this: if you can hide it with minor consequences, or you have stringent binary backwards compatibility requirements, or you foresee it changing a lot, hide it. Otherwise I wouldn't worry much. Let's use this simple discriminated union as example:

type Shape =
    | Circle of float
    | Rectangle of float * float

Construction in C# is pretty straightforward: F# exposes static methods NewCircle and NewRectangle:

var circle = Shape.NewCircle(23.77);
var rectangle = Shape.NewRectangle(1.5, 2.2);

No, you can't use constructors directly to instantiate Circle or Rectangle; F# compiles these constructors as internal. No big deal really. Deconstruction, however, is a problem here. C# doesn't have pattern matching, but as I showed in the previous article you can simulate this with a Match() method like this:

static class ShapeExtensions {
    public static T Match<T>(this Shape shape, Func<double, T> circle, Func<double, double, T> rectangle) {
        if (shape is Shape.Circle) {
            var x = (Shape.Circle)shape;
            return circle(x.Item);
        }
        var y = (Shape.Rectangle)shape;
        return rectangle(y.Item1, y.Item2);
    }
}

Here we did it as an extension method on the consumer side of things (C#). The problem with this is, if we add another case to Shape (say, Triangle), this will still compile successfully without even a warning, but fail at runtime, instead of failing at compile-time as it should!
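This runtime-failure weakness isn't specific to C#; the same Match-as-callbacks encoding has the same hole in any language without exhaustiveness checking. A hypothetical Python sketch (class and function names are mine, not from the post):

```python
class Circle:
    def __init__(self, radius):
        self.radius = radius

class Rect:
    def __init__(self, h, w):
        self.h, self.w = h, w

def match_shape(shape, circle, rectangle):
    """One callback per case, like the C# Match<T>() method. If a Triangle
    case is added later, every call site stays quiet until the raise below
    fires at runtime -- exactly the failure mode described above."""
    if isinstance(shape, Circle):
        return circle(shape.radius)
    if isinstance(shape, Rect):
        return rectangle(shape.h, shape.w)
    raise TypeError(f"unhandled case: {type(shape).__name__}")

area = match_shape(Rect(3.0, 4.0),
                   circle=lambda r: 3.14159 * r * r,
                   rectangle=lambda h, w: h * w)
print(area)  # 12.0
```

Exhaustively-checked pattern matching moves that final raise from runtime to compile time, which is why the post recommends keeping the Match definition on the F# side.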
It's best to define this in F#, where we can take advantage of exhaustively-checked pattern matching, either as a regular instance member of Shape or as an extension member:

type Shape with
    static member Match(shape, circle: Func<_,_>, rectangle: Func<_,_,_>) =
        match shape with
        | Circle x -> circle.Invoke x
        | Rectangle (x,y) -> rectangle.Invoke(x,y)

This is how we do it in FSharpx to work with Option and Choice in C#.

Defining an ADT in C# and consuming it in F#

Defining an ADT in C# is already explained in my previous post. But how does this encoding behave when used in F#? To recap, the C# code we used is:

namespace DiscUnionInteropCS {
    public abstract class Shape {
        private Shape() {}

        public sealed class Circle : Shape {
            public readonly double Radius;

            public Circle(double radius) {
                Radius = radius;
            }
        }

        public sealed class Rectangle : Shape {
            public readonly double Height;
            public readonly double Width;

            public Rectangle(double height, double width) {
                Height = height;
                Width = width;
            }
        }

        public T Match<T>(Func<double, T> circle, Func<double, double, T> rectangle) {
            if (this is Circle) {
                var x = (Circle) this;
                return circle(x.Radius);
            }
            var y = (Rectangle) this;
            return rectangle(y.Height, y.Width);
        }
    }
}

Just as before, let's analyze construction first. We could use constructors:

let shape = Shape.Circle 2.0

which looks like a regular F# discriminated union construction with required qualified access. There are however two problems with this:

1. Object constructors in F# are not first-class functions. Try to use function composition (>>) or piping (|>) with an object constructor. It doesn't compile. On the other hand, discriminated union constructors in F# are first-class functions.

2. Concrete case types lead to unnecessary upcasts. shape here is of type Circle, not Shape. This isn't much of a problem in C# because it upcasts automatically, but F# doesn't, and so a function that returns Shape would require an upcast.
Because of this, it's best to wrap constructors:

let inline Circle x = Shape.Circle x :> Shape
let inline Rectangle (a,b) = Shape.Rectangle(a,b) :> Shape

Let's see deconstruction now. In F# this obviously means pattern matching. We want to be able to write this:

let area =
    match shape with
    | Circle radius -> System.Math.PI * radius * radius
    | Rectangle (h, w) -> h * w

We can achieve this with a simple active pattern that wraps the Match method:

let inline (|Circle|Rectangle|) (s: Shape) =
    s.Match(circle = (fun x -> Choice1Of2 x),
            rectangle = (fun x y -> Choice2Of2 (x,y)))

For convenience, put this all in a module:

module Shape =
    open DiscUnionInteropCS

    let inline Circle x = Shape.Circle x :> Shape
    let inline Rectangle (a,b) = Shape.Rectangle(a,b) :> Shape

    let inline (|Circle|Rectangle|) (s: Shape) =
        s.Match(circle = (fun x -> Choice1Of2 x),
                rectangle = (fun x y -> Choice2Of2 (x,y)))

So with a little boilerplate you can have ADTs defined in C# behaving just like in F# (modulo pretty-printing, comparison, etc., but that's up to the C# implementation if needed). No need to define a separate, isomorphic ADT. Note that pattern matching on the concrete type of a Shape would easily break, just like when we defined the ADT in F# with Match in C#. By using the original Match, if the original definition is modified, Match() will change and so the active pattern will break accordingly at compile-time. If you need binary backwards compatibility, however, it's going to be more complex than this. In the next post I'll show an example of a common variant of this. By the way, it would be interesting to see how ADTs in Boo and Nemerle interop with F# and C#.

I was rather surprised to realize only recently, after using C# for so many years, that it doesn't have a proper static upcast operator. By "static upcast operator" I mean a built-in language operator or a function that upcasts with a static (i.e. compile-time) check.
C# actually does implicit upcasting and most people probably don't even realize it. Consider this simple example:

Stream Fun() {
    return new MemoryStream();
}

Whereas in F# we have to do this upcast explicitly, or we get a compile-time error:

let Fun () : Stream = upcast new MemoryStream()

The reason being that type inference is problematic in the face of subtyping [1]. Now how does this interact with parametric polymorphism (generics)? C# 4.0 introduced variant interfaces, so we can write:

IEnumerable<IEnumerable<Stream>> Fun() {
    return new List<List<MemoryStream>>();
}

Note that covariance is not implicit upcasting: List<List<MemoryStream>> is not a subtype of IEnumerable<IEnumerable<Stream>>. But this doesn't compile in C# 3.0, requiring conversions instead. When the supertypes are invariant we have to start converting. Even in C# 4.0, if you target .NET 3.5 the above snippet does not compile because System.Collections.Generic.IEnumerable<T> isn't covariant in T. And even in C# 4.0 targeting .NET 4.0 this doesn't compile:

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>>();
}

because ICollection<T> isn't covariant in T. It's not covariant for good reason: it contains mutators (i.e. methods that mutate the object implementing the interface), so making it covariant would make the type system unsound (actually, this already happens in C# and Java) [2][3]. A programmer new to C# might try the following to appease the compiler (ReSharper suggests this so it must be ok? UPDATE: I submitted this bug and ReSharper fixed it.):

ICollection<ICollection<Stream>> Fun() {
    return (ICollection<ICollection<Stream>>)new List<List<MemoryStream>>();
}

(attempt #1) It compiles! But upon running the program, our C# learner is greeted with an InvalidCastException.
The second suggestion on ReSharper says "safely cast as...":

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>>() as ICollection<ICollection<Stream>>;
}

(attempt #2) And sure enough, it's safe since it doesn't throw, but all he gets is a null. So our hypothetical developer googles a bit and learns about Enumerable.Cast<T>(), so he tries:

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>>()
        .Cast<ICollection<Stream>>().ToList();
}

(attempt #3) Yay, no errors! Ok, let's add elements to this list:

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>> {
        new List<MemoryStream> {
            new MemoryStream(),
        }
    }.Cast<ICollection<Stream>>().ToList();
}

(attempt #4) Oh my, InvalidCastException is back... Determined to make this work, he learns a bit more about LINQ and gets this to compile:

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>> {
        new List<MemoryStream> {
            new MemoryStream(),
        }
    }.Select(x => (ICollection<Stream>)x).ToList();
}

(attempt #5) But gets another InvalidCastException. He forgot to convert the inner list! He tries again:

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>> {
        new List<MemoryStream> {
            new MemoryStream(),
        }
    }.Select(x => (ICollection<Stream>)x.Select(y => (Stream)y).ToList()).ToList();
}

(attempt #6) This (finally!) works as expected. Experienced C# programmers are probably laughing now at these obvious mistakes, but there are two non-trivial lessons to learn here:

1. Avoid applying Enumerable.Cast<T>() to IEnumerable<U> (for T,U != object). Indeed, Enumerable.Cast<T>() is the source of many confusions, even unrelated to subtyping [4] [5] [6] [7] [8], and yet often poorly advised [9] [10] [11] [12] [13] [14] since it's essentially not type-safe. Cast<T>() will happily try to cast any type into any other type without any compiler check. Other than bringing a non-generic IEnumerable into an IEnumerable<T>, I don't think there's any reason to use Cast<T>() on an IEnumerable<U>.
The same argument can be applied to OfType<T>().

2. It's easy to get casting wrong (not as easy as in C, but still), particularly when working with complex types (where the definition of 'complex' depends on each programmer), when the compiler checks aren't strict enough (here's a scenario that justifies why C# allows seemingly 'wrong' casts as in attempt #5). Note how in attempt #6 the conversion involves three upcasts:

• MemoryStream -> Stream (explicit through casting)
• List<Stream> -> ICollection<Stream> (explicit through casting)
• List<ICollection<Stream>> -> ICollection<ICollection<Stream>> (implicit)

What we could use here is a static upcast operator, a function that only does upcasts and no other kind of potentially unsafe casts, that doesn't let us screw things up no matter what types we feed it. It should catch any invalid upcast at compile-time. But as I said at the beginning of the post, this doesn't exist in C#. It's easily doable though:

static U Upcast<T, U>(this T o) where T : U {
    return o;
}

With this we can write:

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>> {
        new List<MemoryStream> {
            new MemoryStream(),
        }
    }.Select(x => x.Select(y => y.Upcast<MemoryStream, Stream>()).ToList().Upcast<List<Stream>, ICollection<Stream>>()).ToList();
}

You may object that this is awfully verbose. Maybe so, but you can't screw this up no matter what types you change. The verbosity stems from the lack of type inference in C#.
You may also want to lift this to operate on IEnumerables to make it a bit shorter, e.g.:

static IEnumerable<U> SelectUpcast<T, U>(this IEnumerable<T> o) where T : U {
    return o.Select(x => x.Upcast<T, U>());
}

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>> {
        new List<MemoryStream> {
            new MemoryStream(),
        }
    }.Select(x => x.SelectUpcast<Stream, Stream>().ToList().Upcast<List<Stream>, ICollection<Stream>>()).ToList();
}

Alternatively, we could have used explicitly typed variables to avoid casts:

ICollection<ICollection<Stream>> Fun() {
    return new List<List<MemoryStream>> {
        new List<MemoryStream> {
            new MemoryStream(),
        }
    }.Select(x => {
        ICollection<Stream> l = x.Select((Stream s) => s).ToList();
        return l;
    }).ToList();
}

I mentioned before that F# has a static upcast operator (actually two, one explicit/coercing and one inferencing operator). Here's what the same Fun() looks like in F#:

let Fun(): ICollection<ICollection<Stream>> =
    List [ List [ new MemoryStream() ]]
    |> Seq.map (fun x -> List (Seq.map (fun s -> s :> Stream) x) :> ICollection<_>)
    |> Enumerable.ToList
    |> fun x -> upcast x

Now if you excuse me, I have to go replace a bunch of casts... ;-)
2007 AMC 8 Problems/Problem 21

Two cards are dealt from a deck of four red cards labeled $A$, $B$, $C$, $D$ and four green cards labeled $A$, $B$, $C$, $D$. A winning pair is two of the same color or two of the same letter. What is the probability of drawing a winning pair?

$\textbf{(A)}\ \frac{2}{7}\qquad\textbf{(B)}\ \frac{3}{8}\qquad\textbf{(C)}\ \frac{1}{2}\qquad\textbf{(D)}\ \frac{4}{7}\qquad\textbf{(E)}\ \frac{5}{8}$

Solution 1

There are 4 ways of choosing a winning pair of the same letter, and $2 \left( \dbinom{4}{2} \right) = 12$ ways to choose a pair of the same color. There's a total of $\dbinom{8}{2} = 28$ ways to choose a pair, so the probability is $\dfrac{4+12}{28} = \boxed{\textbf{(D)}\ \frac{4}{7}}$.

Solution 2

Notice that, no matter which card you choose, there are exactly 4 cards among the rest that have either the same color or the same letter as it. Since there are 7 cards left to choose from, the probability is $\frac{4}{7}$.

Solution 3

We can use casework to solve this.

Case $1$: Same letter. After choosing any card, there are seven cards left, and only one of them will produce a winning pair by letter. Therefore, the probability is $\frac17$.

Case $2$: Same color. After choosing any card, there are seven cards left. Three of them will make a winning pair by color, so the probability is $\frac37$.

Now that we have the probability for both cases, we can add them: $\frac17+\frac37=\boxed{\textbf{(D)} \frac47}$.

See Also

The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
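The counts used in Solution 1 can be checked by brute-force enumeration; a short Python sketch:

```python
from fractions import Fraction
from itertools import combinations

# Deck: (color, letter) for red/green cards labeled A-D
deck = [(color, letter) for color in "RG" for letter in "ABCD"]

# Count unordered pairs that share a color or a letter
wins = sum(1 for a, b in combinations(deck, 2)
           if a[0] == b[0] or a[1] == b[1])
total = len(list(combinations(deck, 2)))  # C(8, 2) = 28

print(Fraction(wins, total))  # 4/7
```

The enumeration confirms 16 winning pairs out of 28, matching $\frac{12 + 4}{28} = \frac47$.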
Understanding Polynomial Functions: A Comprehensive Overview

Polynomial functions are an essential concept in the field of mathematics, particularly in the context of International Baccalaureate Maths. These functions play a crucial role in understanding and solving various real-life problems, making them a vital topic for students to master. In this comprehensive overview, we will delve into the world of polynomial functions, exploring their definition, properties, and applications. Whether you are new to this topic or looking to refresh your knowledge, this article will provide you with a solid understanding of polynomial functions and their significance in the study of Functions and Equations. So let's dive in and discover the power of polynomial functions together!

In this article, we will explore the basics of polynomial functions, including their definition, properties, and common types. Understanding polynomial functions is crucial for success at various levels of study, making it an essential topic in mathematics. Whether you are a student preparing for exams or an educator looking for resources to assist your students, this comprehensive overview will cover all the necessary information about polynomial functions to help you achieve your goals.

What are Polynomial Functions?

A polynomial function is a mathematical expression that consists of terms involving variables raised to non-negative integer powers. These functions can have multiple terms and can include constants, coefficients, and exponents. The degree of a polynomial function is determined by the highest power of the variable in the expression.

Properties of Polynomial Functions

Polynomial functions have several important properties that are essential to understand. These include:
• They are continuous and smooth functions, meaning they have no breaks or sharp turns.
• They have a finite number of real roots or solutions.
• The degree of the polynomial determines the maximum number of real roots.
Common Types of Polynomial Functions There are several types of polynomial functions that are commonly studied, including: • Linear functions: These are first-degree polynomial functions with a single variable. • Quadratic functions: These are second-degree polynomial functions with a single variable. • Cubic functions: These are third-degree polynomial functions with a single variable. Tips and Techniques for Studying and Test-Taking Studying and taking tests on polynomial functions can be challenging, but with the right techniques, you can improve your understanding and performance. Some tips and techniques to consider include: • Practice solving problems regularly to become familiar with the concepts. • Understand the properties and rules of polynomial functions. • Break down complex problems into smaller, more manageable steps. • Use online resources, such as practice tests and interactive tutorials, to supplement your learning. Resources for Advanced Math Studies If you are interested in pursuing advanced math studies, there are several resources available to help you continue your learning about polynomial functions. Some options include: • Online courses or tutorials that cover advanced topics in polynomial functions. • Textbooks or study guides that provide in-depth explanations and practice problems. • Tutoring services or study groups to receive personalized support. Navigating Different Levels of Study The level of study for polynomial functions can vary depending on the educational program or curriculum. Some common levels of study include: • Middle school: Students may first encounter polynomial functions in middle school, where they learn the basic concepts and properties. • High school: In high school, students typically delve deeper into polynomial functions and their applications, such as graphing and solving equations. • College: College-level courses may explore advanced topics in polynomial functions, such as calculus and complex numbers. 
Regardless of the level of study, it is important to have a solid understanding of polynomial functions to succeed in math education. Understanding Polynomial Functions In this section, we will define polynomial functions and explain their properties and common types. This will give readers a solid foundation for understanding more complex concepts later on. Navigating Different Levels of Study This section will guide readers through the different levels of study in relation to polynomial functions, from basic concepts in high school to more advanced topics in higher education. Tips and Techniques for Studying and Test-Taking Studying polynomial functions can be a challenging task, but with the right tools and techniques, it can become a manageable and rewarding experience. Here are some tips to help you in your studying: • Review the basics: Before diving into more complex concepts, make sure you have a solid understanding of the basic principles of polynomial functions. This will provide a strong foundation for further learning. • Practice, practice, practice: The best way to truly understand polynomial functions is to practice solving problems. Make use of practice questions and past exams to test your knowledge and identify areas that need improvement. • Use visual aids: Many students find it helpful to use visual aids such as graphs and diagrams to better comprehend polynomial functions. These can also be useful for memorizing key formulas. When it comes to test-taking, here are some strategies that can help you ace your exams: • Prioritize: Make sure to allocate your time wisely during the exam. Start with questions you are confident about and leave more difficult ones for later. • Show all your work: Even if you know the answer, it's important to show your work when solving polynomial function problems. This not only helps you get partial credit, but also allows you to catch any mistakes you may have made along the way. 
• Read the instructions carefully: Before starting the exam, take a moment to read through the instructions and make sure you understand what is expected of you. This will prevent any unnecessary mistakes. Resources for Advanced Math Studies For those interested in delving deeper into polynomial functions, we will provide a list of resources that offer advanced materials and courses. These resources include online courses, textbooks, and study guides that cover topics such as advanced polynomial equations, graphing techniques, and real-world applications. Some recommended resources include Khan Academy's Advanced Polynomial Functions course, MIT OpenCourseWare's Mathematics for Computer Science textbook, and Barron's IB Math SL study guide. These resources not only provide in-depth explanations and practice problems, but also offer interactive tools and quizzes to help you master the concepts. With these resources at your disposal, you can take your understanding of polynomial functions to the next level. Polynomial functions are a fundamental part of mathematics and play a crucial role in various levels of study. By understanding their properties, utilizing effective study techniques, and utilizing resources for advanced studies, students and educators can excel in this topic. With this comprehensive overview, readers now have a solid foundation for mastering polynomial functions.
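As a concrete complement to the overview above, evaluating a polynomial from its coefficients is one of the first skills these courses build. A small Python sketch using Horner's method (a standard technique, not specific to any curriculum mentioned here):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x, given coefficients from highest to
    lowest degree. E.g. coeffs [2, -3, 1] means 2x^2 - 3x + 1."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([2, -3, 1], 1))  # 0  (x = 1 is a root of 2x^2 - 3x + 1)
print(horner([2, -3, 1], 2))  # 3  (2*4 - 6 + 1)
```

The degree is one less than the number of coefficients, which ties back to the property above: a degree-n polynomial has at most n real roots.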
SMT 2012: Author Index

Abdul Aziz, Mohammad: A Machine Learning Technique for Hardness Estimation of QFBV SMT Problems
Alberti, Francesco: Reachability Modulo Theory Library
Biere, Armin: Practical Aspects of SAT Solving; On the Complexity of Fixed-Size Bit-Vector Logics with Binary Encoded Bit-Width; Program Verification as Satisfiability Modulo Theories
Bjorner, Nikolaj: SMT-LIB Sequences and Regular Expressions; Anatomy of Alternating Quantifier Satisfiability (Work in progress)
Bruttomesso, Roberto: Reachability Modulo Theory Library
Bruttomesso, Roberto: The 2012 SMT Competition
Codish, Michael: Exotic Semi-Ring Constraints
Cok, David: The 2012 SMT Competition
Conchon, Sylvain: Built-in Treatment of an Axiomatic Floating-Point Theory for SMT Solvers; Reasoning with Triggers
Darwish, Nevin: A Machine Learning Technique for Hardness Estimation of QFBV SMT Problems
Deters, Morgan: The 2012 SMT Competition
Dross, Claire: Reasoning with Triggers
Falke, Stephan: A Theory of Arrays with set and copy Operations
Fekete, Yoav: Exotic Semi-Ring Constraints
Fröhlich, Andreas: On the Complexity of Fixed-Size Bit-Vector Logics with Binary Encoded Bit-Width
Fuhs, Carsten: Exotic Semi-Ring Constraints
Ganesh, Vijay: SMT-LIB Sequences and Regular Expressions; An SMT-based approach to automated configuration
Ghilardi, Silvio: Reachability Modulo Theory Library
Giesl, Jürgen: Exotic Semi-Ring Constraints
Goel, Amit: SMT-Based System Verification with DVF
Griggio, Alberto: The 2012 SMT Competition
Heymans, Patrick: An SMT-based approach to automated configuration
Hubaux, Arnaud: An SMT-based approach to automated configuration
Iguernlala, Mohamed: Built-in Treatment of an Axiomatic Floating-Point Theory for SMT Solvers
Kanig, Johannes: Reasoning with Triggers
Kovásznai, Gergely: On the Complexity of Fixed-Size Bit-Vector Logics with Binary Encoded Bit-Width
Krstic, Sava: SMT-Based System Verification with DVF
Leslie, Rebekah: SMT-Based System Verification with DVF
McMillan, Kenneth L.: Program Verification as Satisfiability Modulo Theories
Melquiond, Guillaume: Built-in Treatment of an Axiomatic Floating-Point Theory for SMT Solvers
Merz, Florian: A Theory of Arrays with set and copy Operations
Michel, Raphaël: SMT-LIB Sequences and Regular Expressions; An SMT-based approach to automated configuration
Monniaux, David: Anatomy of Alternating Quantifier Satisfiability (Work in progress)
Paskevich, Andrei: Reasoning with Triggers
Phan, Anh-Dung: Anatomy of Alternating Quantifier Satisfiability (Work in progress)
Ranise, Silvio: Reachability Modulo Theory Library
Roux, Cody: Built-in Treatment of an Axiomatic Floating-Point Theory for SMT Solvers
Rybalchenko, Andrey: Program Verification as Satisfiability Modulo Theories
Shankar, Natarajan: The Architecture of Inference from SMT to ETB
Sharygina, Natasha: Reachability Modulo Theory Library
Sinz, Carsten: A Theory of Arrays with set and copy Operations
Tuttle, Mark: SMT-Based System Verification with DVF
Veanes, Margus: SMT-LIB Sequences and Regular Expressions
Waldmann, Johannes: Exotic Semi-Ring Constraints
Wassal, Amr: A Machine Learning Technique for Hardness Estimation of QFBV SMT Problems
How to find the square root of a number using Newton Raphson method?

What is the Newton Raphson Method (NR)? This method falls in the category of open bracketing methods. It is also called the method of tangents, as it determines the root of an equation by drawing the tangent to the function at the initial guess. This method converges very fast, and it fails only when the derivative of the function at the initial guess becomes equal to zero.

Derivation of the NR method

There are two approaches to derive the formula for this method:
1. Using Taylor's series
2. Using graphical interpretation

Using Taylor's series: let x0 be the initial guess, and let the value of the function at this point be f(x0). Assume that x0 + h is the next, better approximation to the root of f(x) = 0, where h is very small. Expanding f(x0 + h) by Taylor's series and keeping only the linear term gives f(x0) + h f'(x0) = 0, so h = -f(x0)/f'(x0), and the next approximation is x1 = x0 - f(x0)/f'(x0).

[Figure: the geometrical interpretation of the Newton Raphson method and the full derivations were given as images in the original page.]

Developing an iterative formula for finding the square root of any positive integer using the NR method: to find the square root of N, apply NR to f(x) = x^2 - N, with f'(x) = 2x. The iteration x_{n+1} = x_n - (x_n^2 - N)/(2 x_n) = (x_n + N/x_n)/2 converges to the square root of N.
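The iterative formula above is straightforward to implement; a minimal Python sketch (the tolerance and iteration cap are arbitrary choices):

```python
def nr_sqrt(s, x0=1.0, tol=1e-12, max_iter=100):
    """Newton-Raphson square root: apply x_{n+1} = (x_n + s/x_n) / 2,
    which is NR for f(x) = x^2 - s. Stops when successive iterates
    agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + s / x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(nr_sqrt(2.0))  # ≈ 1.4142135623730951
```

Notice the fast convergence the text mentions: starting from x0 = 1, the iterates for s = 2 are 1.5, 1.4167, 1.41422, ... — the number of correct digits roughly doubles each step.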
An AP Statistics class surveys 24 randomly selected female students from their high school, and calculates a 95% confidence interval for the mean height of female students to be 63.4 ± 1.6 inches. Which of the following is a correct interpretation of this interval?

(A) There is a 95% probability that the true mean height of female students at the school falls between 61.8 and 65.0 inches.
(B) We can be 95% confident that the true mean height of female students at the school is 63.4 inches.
(C) 95% of the time, we can be confident that the mean sample height of female students will fall between 61.8 and 65.0 inches.
(D) We can be 95% confident that the true mean height of female students at the school is between 61.8 and 65.0 inches.
(E) There is a 95% probability that the true mean height of female students at the school is 63.4 inches.
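The "95% confident" wording refers to the interval-building procedure, not to any single interval: if the sampling were repeated many times, about 95% of the resulting intervals would capture the true mean. A Python simulation illustrates this (the population parameters are made up, and a z-interval with known sigma is used instead of the t-interval for simplicity):

```python
import random

random.seed(0)
true_mu, sigma, n = 63.4, 2.5, 24   # hypothetical population, sample size 24
z = 1.96                            # 95% critical value for the normal
trials = 2000

covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    mean = sum(sample) / n
    half_width = z * sigma / n ** 0.5       # margin of error
    if mean - half_width <= true_mu <= mean + half_width:
        covered += 1

print(covered / trials)  # ≈ 0.95: about 95% of intervals capture the true mean
```

Each individual interval either contains the true mean or it doesn't; the 95% describes how often the method succeeds over repeated sampling.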
Glossary of Terms

American Option
Option that can be exercised any time before the final exercise date.

Amortization
The gradual reduction of a loan or other obligation by making periodic payments of principal and interest.

Amortizing Swap
An interest rate swap or currency swap where the principal or notional amount decreases in steps over the life of the swap.

Annual return
The increase in value of an investment, expressed as a percentage per year. If the annual return is expressed as annual percentage yield, the number takes into account the effects of compounding interest. If it is expressed as annual percentage rate, it will usually not take the effect of compounding interest into account.

Asian Option
Option based on the average price of the asset during the life of the option.

Asset
Anything owned that has commercial or exchange value.

Balance Sheet
A detailed listing of assets, liabilities and capital accounts (net worth), showing the financial condition of a bank or company as of a given date. A balance sheet illustrates the basic accounting equation: Assets = Liabilities + Net Worth.

Basis Point
One hundredth of a percentage point. Spreads in interest rate markets are commonly quoted in basis points. 1 bp = 1/10,000.

Binder
A Binder is a set of users who receive a number of specified reports every time they are produced. The Binder stores the e-mail addresses of the specified users and the report(s) that these users must receive. A user performing the Reporter's tasks will e-mail selected reports to these defined binders.

Binomial Option Tree
Option pricing method which assumes that the price of the underlying can go up or down by fixed multiples. Each price jump is assigned a probability and a tree of possible underlying prices is built.
Working from the tree points or nodes at the option maturity date, the worth of the option can be back-calculated until the option can be valued at the desired date.

Black-Scholes Formula
An analytical option pricing formula which is used to price European options on non-dividend-paying equity. The Black-Scholes (BS) method can be extended to price American options.

Bond
A debt instrument which pays back cash to the holder at regular frequencies. The payment is normally a fixed percentage, known as a coupon. At maturity, the face value of the bond is paid.

Call Option
A contract between a buyer and seller whereby the buyer acquires the right, but not the obligation, to buy a specified stock, commodity or index at a predetermined price on or before a predetermined date. The seller of the option assumes the obligation of delivering the underlying, should the buyer exercise the option.

Collar
An upper and lower limit on the interest rate on a floating-rate note.

Compounding frequency
The number of compounding periods in a year. For example, quarterly compounding has a compounding frequency of 4.

Convexity
Property that a curve is above a straight line connecting two end points. If the curve falls below the straight line, it is called concave.

Cost of capital
The required return for a capital budgeting project.

Cost of funds
Interest rate associated with borrowing money.

Coupon
One of a series of promissory notes of consecutive maturities, attached to a bond or other debt certificate and intended to be detached and presented on the due dates for payment of interest.

Coupon rate
In bonds, notes, or other fixed income securities, the stated percentage rate of interest, usually paid twice a year.

Credit risk
The risk that an issuer of debt securities or a borrower may default on its obligations, or that the payment may not be made on a negotiable instrument.

Credit spread
Applies to derivative products. Difference in the value of two options, when the value of the one sold exceeds the value of the one bought.
One sells a "credit spread."

Current rate method
The translation of all foreign currency balance sheet and income statement items at the current exchange rate.

Date of issue
Used in the context of bonds to refer to the date on which a bond is issued and when interest accrues to the bondholder. Used in the context of stocks to refer to the date trading begins on a new stock issued to the public.

Day Count
A convention for quoting interest rates.

Day Count Conventions
This determines the convention to be used in pricing of Fixed Income Bonds.
Actual/Actual – The number of accrued days is equal to the actual number of days between the start and the end date of the period, while the number of days in the year is taken to be the actual number of days in the year concerned.
Actual/365 – The number of accrued days is equal to the actual number of days between the start and the end date of the period, while the number of days in a year is taken to be 365.
Actual/360 – The number of accrued days is equal to the actual number of days between the start and the end date of the period, while the number of days in a year is taken to be 360.
European 30/360 – The number of accrued days is calculated on the basis of a year of 360 days and a month of 30 days. If the first date falls on the 31st, it is changed to the 30th. If the second date falls on the 31st, it is changed to the 30th.
US (NASD) 30/360 – The number of accrued days is calculated on the basis of a year of 360 days and a month of 30 days. If the first date falls on the 31st, it is changed to the 30th. If the second date falls on the 31st, it is changed to the 30th, but only if the first date falls on the 30th or the 31st.

Dealer
An entity that stands ready and willing to buy a security for its own account (at its bid price) or sell from its own account (at its ask price). Individual or firm acting as a principal in a securities transaction.
Principals are market makers in securities, and thus trade for their own account and risk.

Delta
The rate of change of fair value of an option with respect to the change in price of the underlying.

Derivative
Asset whose value derives from that of some other asset (e.g. a future or an option).

Dividend
A portion of a company's profit paid to common and preferred shareholders.

Duration
A common gauge of the price sensitivity of a fixed income asset or portfolio to a change in interest rates.

Equity
The stockholder's investment interest in a corporation, equalling the excess of assets over liabilities and including common and preferred stock, retained earnings, and surplus reserves.

European Option
Option that can be exercised only on the final exercise date.

Exotic Option
A non-standard option.

Face value
Also called the maturity value; the amount that an issuer agrees to pay at the maturity date.

Fixed leg
Term used to denote one side of an interest rate swap – the payments made on this side will remain a constant percentage of the principal amount.

Fixed rate
A traditional approach to determining the finance charge payable on an extension of credit. A predetermined and certain rate of interest is applied to the principal.

Fixed Income Bond
A bond which provides income over its life; at maturity the original investment is returned.

FX
Foreign Exchange.

Floating-rate Note
Note whose interest payment varies with the short-term interest rate.

Floating-rate preferred
Preferred stock paying dividends that vary with the short-term interest rate.

Floating leg
Term used to denote one side of an interest rate swap – the payments made on this side will vary over the life of the swap depending on some pre-defined market index such as Libor.

Forward price
The price specified in a forward contract for a specific commodity. The forward price makes the forward contract have no value when the contract is written.
However, if the value of the underlying commodity changes, the value of the forward contract becomes positive or negative, depending on the position held. Forwards are priced in a manner similar to futures. As with a futures contract, the first step in pricing a forward is to add the spot price to the cost of carry (interest forgone, convenience yield, storage costs and interest/dividend received on the underlying). However, unlike a futures contract, the price may also include a premium for counterparty credit risk, and there is no daily marking-to-market to minimize default risk. If there is no allowance for these credit risks, then the forward price will equal the futures price.

Fixed Income Bonds
A loan an investor makes to the bond's issuer. The investor generally receives regular interest payments on the loan until the bond matures, at which point the issuer repays the principal.

Fixed Income Issues
Each Fixed Income Bond in the market has an Issue. An Issue of a bond is characterised by the following information:
Issue Date – date on which the Fixed Income Bond is issued.
Maturity Date – date on which the issue of a Fixed Income Bond matures.
Coupon Rate – annual rate of interest payable on the bond.
Yield to Maturity – rate of return measuring the total performance of a bond (coupon payments as well as capital gain or loss) from the time of purchase until maturity.

Gamma
The rate of change of an option's delta with respect to underlying price. The second derivative of option value with respect to underlying price. Also referred to as an option's curvature. Commonly used to indicate an option's value and how this value will change as market conditions change.

Hedging
Buying one security and selling another in order to reduce risk.

Interest Rate Swap (IRS)
An exchange of a fixed rate of interest on a certain notional principal for a floating rate of interest on the same notional principal.

Issue date
The date on which a bond, insurance policy or stock offering is issued.
Also called date of issue.

KIBOR
Karachi Interbank Offer Rate. The rate offered by banks to banks.

LIBOR
London Interbank Offer Rate. The rate offered by banks on Euro-currency deposits.

Maturity
The date on which a note, draft, bond or acceptance becomes due and payable.

Maturity date
The date on which a debt becomes due for payment. Also called maturity.

Marking to Market
Recording the price or value of a security, portfolio, or account on a daily basis, to calculate profits and losses or to confirm that margin requirements are being met.

Market Capitalisation
The total market value of a company's outstanding shares.

Market return
Expected return on a security.

Market Risk
Risks that result from the overall movements of the market.

Mean
The expected value of a random variable.

Money Market
Market for short-term safe investments.

Monte Carlo Simulation
Method for calculating the probability distribution of possible outcomes.

NASD
National Association of Securities Dealers.

Net position
The value of the position subtracting the initial cost of setting up the position.

Notional principal amount
In an interest rate swap, the predetermined dollar principal on which the exchanged interest payments are based.

Option
The right, but not the obligation, to buy (for a call option) or sell (for a put option) a specific amount of a given stock, commodity, currency, index, or debt, at a specified price (the strike price) during a specified period of time.

Owner's equity
Total assets minus total liabilities of an individual or company. For a company, also called net worth or shareholders' equity or net assets.

Par Value
Value of a security shown on a certificate.

Portfolio
Term for describing all the investments that an entity owns. A diversified portfolio contains a variety of investments.

Position limit
The maximum number of listed option contracts on a single security which can be held by an investor or group of investors acting jointly.

Premium
The amount by which a bond or stock sells above its par value.

Rate
A value describing one quantity in terms of another quantity.
A common type of rate is a quantity expressed in terms of time, such as percent change per year.

Risk
The degree of possibility that a loss will be sustained in a loan, investment, or other transaction.

Risk Free Rate
The rate of interest that can be earned without assuming any risks.

Spot Price
Price of asset for immediate delivery (in contrast to forward or futures price). The current market price of the actual physical commodity. Also called cash price.

Swap
An arrangement whereby two companies lend to each other on different terms, e.g. in different currencies, or one at a fixed rate and the other at a floating rate.

Spread
(1) The gap between bid and ask prices of a stock or other security. (2) The simultaneous purchase and sale of separate futures or options contracts for the same commodity for delivery in different months. Also known as a straddle. (3) Difference between the price at which an underwriter buys an issue from a firm and the price at which the underwriter sells it to the public. (4) The price an issuer pays above a benchmark fixed-income yield to borrow money.

Strike Price
The stated price per share for which underlying stock may be purchased (in the case of a call) or sold (in the case of a put) by the option holder upon exercise of the option contract.

Stock Exchange
An exchange on which shares of stock and common stock equivalents are bought and sold. Examples include the NYSE and the AMEX.

Stop loss
A stop order for which the specified price is below the current market price and the order is to sell.

Settlement date
The date by which an executed securities transaction must be settled, by paying for a purchase or by delivering a sold asset; usually three business days after the trade was executed (T+3), or one day for listed options and government securities.
Standard deviation
A statistical measure of the historical volatility of a mutual fund or portfolio, usually computed using 36 monthly returns. More generally, a measure of the extent to which numbers are spread around their average.

Term Structure
Relationship between interest rates on loans of different maturities.

Theta
The rate at which an option loses value as time to maturity decreases. Also referred to as the time decay of an option.

Trading
Buying and selling securities or commodities on a short-term basis, hoping to make quick profits.

Treasury
The name for the centre of financial operations within a company. The Treasury is responsible for such things as issuing new securities.

Underlying
The "something" that the parties agree to exchange in a derivative contract.

Value at Risk
Procedure for estimating the probability of a portfolio's losses exceeding some specified proportion.

Vanilla Option
Option without unusual features.

VAR Model
Value at Risk model.

Volatility
The relative rate at which the price of a security moves up and down. Volatility is found by calculating the annualized standard deviation of daily change in price. If the price of a stock moves up and down rapidly over short time periods, it has high volatility. If the price almost never changes, it has low volatility.

XML Id
The external id used by an institution to integrate their front office data with the Alchemy Risk Manager.

Yield
The rate of return on any financial instrument, normally expressed as a percentage.

Yield curve
A curve which plots current yields of fixed interest securities against their times to redemption (maturity). This enables investors to compare the yields of short, medium and long term securities at a given time.

Yield to Maturity
Internal rate of return on a bond. The percentage rate of return paid on a bond, note or other fixed income security if you buy and hold it to its maturity date. The calculation for YTM is based on the coupon rate, length of time to maturity and market price.
It assumes that coupon interest paid over the life of the bond will be reinvested at the same rate. Zero Coupon Bond A bond which pays no interest through its life and which pays a capital gain by being issued at a substantial discount to the maturity value. Zero Coupon Rate The interest rate that would be earned on a bond that provides no coupons.
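The Binomial Option Tree entry describes back-calculating an option's worth from the maturity nodes of the tree. A minimal sketch of that procedure for a European call, with illustrative parameters of my own choosing (the up/down multiples u and d, and a one-period rate r):

```python
def binomial_call(S, K, u, d, r, steps):
    """European call priced on a recombining binomial tree."""
    p = ((1 + r) - d) / (u - d)  # risk-neutral probability of an up move
    # payoffs at maturity, indexed by the number of up moves j
    values = [max(S * u ** j * d ** (steps - j) - K, 0.0) for j in range(steps + 1)]
    # back-calculate one period at a time, discounting expected values
    for _ in range(steps):
        values = [(p * values[j + 1] + (1 - p) * values[j]) / (1 + r)
                  for j in range(len(values) - 1)]
    return values[0]

print(round(binomial_call(100, 100, 1.1, 0.9, 0.05, steps=2), 4))  # 10.7143
```

With one step, the at-the-money call above is worth (0.75 × 10)/1.05 ≈ 7.14, exactly the discounted risk-neutral expectation described in the glossary.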
Cost Accounting by Horngren, Datar, Rajan book solutions manual free PDF download (all chapters)
Staff member
Nov 23, 2017
University Course
Hello Students, Cost Accounting – A Managerial Emphasis by Charles T. Horngren, Datar and Rajan is one of the most popular course textbooks for Cost Accounting students in American universities. Here I am sharing the PDF book solutions for all chapters of Cost Accounting by Horngren, Datar & Rajan. These solution manuals for each chapter of the textbook contain detailed answers to the questions given in the book and will give you a good reference while preparing for your exams. Here are the direct download links to get free solution manuals for Horngren's Cost Accounting by Datar & Rajan. You can download each of these chapter-wise book solutions by clicking any link above.
5.5 Use of Edge Weights

In a weighted graph, each edge is associated with a semantically meaningful scalar weight. For example, the edge weights can be connectivity strengths or confidence scores. Naturally, one may want to utilize edge weights in model development.

Message Passing with Edge Weights

Most graph neural networks (GNNs) integrate the graph topology information in forward computation by and only by the message passing mechanism. A message passing operation can be viewed as a function that takes an adjacency matrix and additional input features as input arguments. For an unweighted graph, the entries in the adjacency matrix can be zero or one, where a one-valued entry indicates an edge. If this graph is weighted, the non-zero entries can take arbitrary scalar values. This is equivalent to multiplying each message by its corresponding edge weight, as in GAT. With DGL, one can achieve this by:
• Saving the edge weights as an edge feature
• Multiplying the original message by the edge feature in the message function

Consider the message passing example with DGL below.

import dgl.function as fn
# Suppose graph.ndata['ft'] stores the input node features
graph.update_all(fn.copy_u('ft', 'm'), fn.sum('m', 'ft'))

One can modify it for edge weight support as follows.

import dgl.function as fn
# Save edge weights as an edge feature, which is a tensor of shape (E, *)
# E is the number of edges
graph.edata['w'] = eweight
# Suppose graph.ndata['ft'] stores the input node features
graph.update_all(fn.u_mul_e('ft', 'w', 'm'), fn.sum('m', 'ft'))

Using NN Modules with Edge Weights

One can modify an NN module for edge weight support by modifying all message passing operations in it. DGL's built-in NN modules support edge weights if they take an optional edge_weight argument in the forward function. One may need to normalize raw edge weights.
In this regard, DGL provides EdgeWeightNorm().
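To see concretely what fn.u_mul_e followed by fn.sum computes, here is a dependency-free sketch in plain Python with scalar node features (the function name and toy graph below are my own, not part of DGL):

```python
def weighted_message_passing(num_nodes, edges, feat):
    """For each edge (u, v, w), send feat[u] * w to v and sum at the destination."""
    out = [0.0] * num_nodes
    for u, v, w in edges:
        out[v] += feat[u] * w  # message from u, scaled by its edge weight
    return out

edges = [(0, 2, 0.5), (1, 2, 2.0), (2, 0, 1.0)]  # (src, dst, weight)
feat = [1.0, 3.0, 4.0]
print(weighted_message_passing(3, edges, feat))  # [4.0, 0.0, 6.5]
```

Node 2 receives 1.0 × 0.5 + 3.0 × 2.0 = 6.5, i.e. each incoming message multiplied by its edge weight before the sum reduction, which is exactly the modification the guide describes.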
How To Perform A Pearson Correlation In SPSS

What is a Pearson correlation?
A Pearson correlation, also known as a Pearson Product-Moment Correlation, is a measure of the strength of an association between two linear quantitative measures. For example, you can use a Pearson correlation to determine if there is a significant association between age and total cholesterol levels within a population. This is the example I will use for this guide.

Assumptions of a Pearson correlation test
There are just a few assumptions that data has to meet before a Pearson correlation test can be performed. These are:
1. The two variables of interest are continuous data (interval or ratio).
2. The two variables should be approximately normally distributed. Refer to our guide on normality testing in SPSS if you need help with this.
3. There should be a linear relationship between the two variables. Plot them on a scatterplot to see their association.
4. There should be no outliers present.

How to perform a Pearson correlation in SPSS
I have created a simple dataset containing 10 rows of data; each row signifies one person. I have two variables, the first being Age (in years) and the other being blood total cholesterol levels (in
For this example, the null hypothesis is: There is no correlation between participant ages and blood total cholesterol levels. On the other hand, the alternative hypothesis would read: There is a correlation between participant ages and blood total cholesterol levels.

Performing the test
1. Within SPSS, go to Analyze > Correlate > Bivariate. A new window will open called Bivariate Correlations. Here, you need to specify which variables you want to include in the analysis. Drag both variables from the left window to the right window called Variables. In this case, both Age and Cholesterol will be moved across. Note that you can drag more than two variables into the test, with every possible combination being tested at the same time.
2.
Ensure that Pearson is ticked under the title Correlation Coefficients. Since we have not made any prior assumptions, we will also leave the Test of Significance as Two-tailed.
3. Click the OK button to run the test.
By going to the SPSS Output window, there will be a new heading of Correlations with a correlation matrix displayed. Within the grid, there are three pieces of information, which are listed below.
• Pearson Correlation – This is the Pearson Correlation Coefficient (r) value. These values range from 0 to 1 (for positive correlations) and -1 to 0 (for negative correlations). The larger the absolute value, the stronger the linear association between the two variables, i.e. a value of 1 indicates a strong positive association and a value of -1 indicates a strong negative association. A value of 0 indicates no such association.
• Sig. (2-tailed) – The P value for a two-tailed analysis.
• N – The number of pairs of data in the analysis.
By looking at the results in the above table, it can be seen that the correlation between age and blood cholesterol levels gave a Pearson Correlation Coefficient (r) value of 0.882, which indicates a strong positive association between the two variables. Also, the P value of the association was 0.001, indicating a highly significant result. Therefore, I will reject the null hypothesis.
When reporting the results of a Pearson correlation, it is useful to quote two pieces of data: the r value (the correlation coefficient) and the P value of the test. For the example above, this could be: r = 0.882, P = 0.001.
IBM SPSS version used: 23
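SPSS computes r for you, but the coefficient itself is simple to reproduce by hand. A pure-Python sketch (the data below are made up for illustration, not the article's 10-row dataset):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A perfectly linear relationship gives r = 1 (up to float rounding).
print(round(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]), 6))  # 1.0
```

A perfectly decreasing relationship would give r = −1, matching the interpretation of the coefficient ranges described above.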
Queue Up Your Regrets: Achieving the Dynamic Capacity Region of Multiplayer Bandits Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track Ilai Bistritz, Nicholas Bambos Abstract Consider $N$ cooperative agents such that for $T$ turns, each agent $n$ takes an action $a_{n}$ and receives a stochastic reward $r_{n}\left(a_{1},\ldots,a_{N}\right)$. Agents cannot observe the actions of other agents and do not know even their own reward function. The agents can communicate with their neighbors on a connected graph $G$ with diameter $d\left(G\right)$. We want each agent $n$ to achieve an expected average reward of at least $\lambda_{n}$ over time, for a given quality of service (QoS) vector $\boldsymbol{\lambda}$. A QoS vector $\boldsymbol{\lambda}$ is not necessarily achievable. By giving up on immediate reward, knowing that the other agents will compensate later, agents can improve their achievable capacity region. Our main observation is that the gap between $\lambda_{n}t$ and the accumulated reward of agent $n$, which we call the QoS regret, behaves like a queue. Inspired by this observation, we propose a distributed algorithm that aims to learn a max-weight matching of agents to actions. In each epoch, the algorithm employs a consensus phase where the agents agree on a certain weighted sum of rewards by communicating only $O\left(d\left(G\right)\right)$ numbers every turn. Then, the algorithm uses distributed successive elimination on a random subset of action profiles to approximately maximize this weighted sum of rewards. We prove a bound on the accumulated sum of expected QoS regrets of all agents, that holds if $\boldsymbol{\lambda}$ is a safety margin $\varepsilon_{T}$ away from the boundary of the capacity region, where $\varepsilon_{T}\rightarrow0$ as $T\rightarrow\infty$.
This bound implies that, for large $T$, our algorithm can achieve any $\boldsymbol{\lambda}$ in the interior of the dynamic capacity region, while all agents are guaranteed an empirical average expected QoS regret of $\tilde{O}\left(1\right)$ over $t=1,\ldots,T$ which never exceeds $\tilde{O}\left(\sqrt{t}\right)$ for any $t$. We then extend our result to time-varying i.i.d. communication graphs.
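The paper's central observation, that the gap between the promised rate λ·t and the accumulated reward behaves like a queue, can be illustrated with a one-agent toy simulation. This is just the standard Lindley-style queue recursion, not the paper's algorithm, and all numbers below are illustrative:

```python
def qos_regret_queue(rewards, lam):
    """Track Q_{t+1} = max(Q_t + lam - r_t, 0): the queued 'owed' reward."""
    q, history = 0.0, []
    for r in rewards:
        q = max(q + lam - r, 0.0)
        history.append(q)
    return history

# Constant per-turn reward of 0.7: lam = 0.5 is sustainable (queue stays empty),
# while lam = 0.9 exceeds what is achievable (queue grows by ~0.2 per turn).
inside = qos_regret_queue([0.7] * 100, lam=0.5)
outside = qos_regret_queue([0.7] * 100, lam=0.9)
print(inside[-1], round(outside[-1], 1))  # 0.0 20.0
```

A bounded queue corresponds to a QoS vector inside the capacity region; an unbounded, linearly growing queue corresponds to one outside it.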
Need help to get the formula for my alphabet code to number

Need help to get the formula for my alphabet code to number, as per the example below. The letters A to J represent digits; if I type the code JFGAA, I need 15400 as output.

Letter  Digit
A       0
B       9
C       8
D       7
E       6
F       5
G       4
H       3
I       2
J       1

Code    Output
JFGAA   15400
JIAA    1200
BAA     900
JA      10

Something like this can work if you have Microsoft 365. View attachment 86607

Thanks dear, working fine
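The spreadsheet formula itself is only in the attachment, but the letter-to-digit mapping can be cross-checked outside Excel. A sketch of the same logic in Python (my own code, not the poster's formula):

```python
# Letter-to-digit mapping taken from the table in the post
DIGITS = {'A': '0', 'B': '9', 'C': '8', 'D': '7', 'E': '6',
          'F': '5', 'G': '4', 'H': '3', 'I': '2', 'J': '1'}

def decode(code):
    """Translate each letter to its digit and read the result as a number."""
    return int(''.join(DIGITS[ch] for ch in code))

print(decode('JFGAA'), decode('JIAA'), decode('BAA'), decode('JA'))
# 15400 1200 900 10
```

All four expected outputs from the post are reproduced, confirming the mapping is consistent.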
Show Only Certain Items In Legend Python Matplotlib - The Citrus Report

How to Make Your Data Visualization More Effective Using Matplotlib Legends

Looking for a way to make your data visualization more effective? You might be interested in using Matplotlib Legends. Legends are a way to label your data and explain the meaning of different plots in your visualization. By doing so, they not only make your visualization more comprehensible, but also more aesthetically pleasing. Matplotlib Legends are incredibly versatile too, from customizing labels to changing the location of your legend in your visualization. In this article, we will cover how to efficiently use Matplotlib Legends to improve your data visualization.

Getting Started with Matplotlib Legends

Before we dive into the details of how to use Matplotlib Legends, let's make sure we have a basic understanding of Matplotlib. Matplotlib is a Python library used for creating high quality visualizations. It is widely used in data science to present data in an easy-to-understand format. You can create a range of visualizations with Matplotlib, such as line plots, scatter plots, pie charts, and histograms. Matplotlib is built on top of another Python library called NumPy. NumPy is used for numerical data manipulation. So, before you get started with Matplotlib Legends, make sure you have both libraries installed.

Here's how to install Matplotlib and NumPy via pip:

pip install matplotlib
pip install numpy

Now that you have Matplotlib and NumPy installed, you can start using Matplotlib Legends.

How to Add Legends to Your Visualization

Let's start with the most basic aspect of using Matplotlib Legends: adding them to your visualization. Here is some sample code that creates data and plots it using Matplotlib.
import matplotlib.pyplot as plt
import numpy as np

# sample data (the original data lines were lost; the text below says they
# were built with np.linspace, np.sin and np.cos)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
z = np.cos(x)

# plotting the data using matplotlib
fig = plt.figure()
plt.plot(x, y, label='Sine')
plt.plot(x, z, label='Cosine')

Let's break this code down. First, we imported Matplotlib and NumPy. Then, we created sample data using the NumPy linspace function. We then plotted the data using Matplotlib and created a figure object. Inside the plot function, we added two lines, a sine wave (`y=np.sin(x)`) and a cosine wave (`z=np.cos(x)`).

Now, let's add the legends to our plot. We use the `plt.legend()` function to add the legends to our plot:

plt.legend()

That's all there is to it. When you run the code, you will see the legends of sine and cosine in your visualization.

Customizing Your Legends

We can customize our legends in many ways. You can adjust the font size, location, background color, border, and more. Let's discuss some of these options.

Changing Font Size

You can adjust the font size of the text in your legend using the `fontsize` parameter. Here's an example:

import matplotlib.pyplot as plt
import numpy as np

# sample data (reconstructed as above)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
z = np.cos(x)

# plotting the data using matplotlib
fig = plt.figure()
plt.plot(x, y, label='Sine')
plt.plot(x, z, label='Cosine')
plt.legend(fontsize=20)

In this example, we set the font size to 20 using the `fontsize` parameter.

Changing the Location

By default, Matplotlib places the legend at the location it judges best (`loc='best'`). This might not be the right location for your visualization. Fortunately, Matplotlib lets you change the location of your legend. There are several predefined locations such as `upper left`, `upper right`, `lower left`, `lower right`, and more. Here's an example.

import matplotlib.pyplot as plt
import numpy as np

# sample data (reconstructed as above)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
z = np.cos(x)

# plotting the data using matplotlib
fig = plt.figure()
plt.plot(x, y, label='Sine')
plt.plot(x, z, label='Cosine')
plt.legend(loc='lower left')

In this example, we used the `loc` parameter and set it to `lower left`. You can choose any location that best suits your visualization.
Changing the Background Color

You can change the background color of your legends using the `facecolor` parameter. Here's an example.

import matplotlib.pyplot as plt
import numpy as np

# sample data (reconstructed as above)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
z = np.cos(x)

# plotting the data using matplotlib
fig = plt.figure()
plt.plot(x, y, label='Sine')
plt.plot(x, z, label='Cosine')
plt.legend(loc='lower left', facecolor='yellow')

In this example, we set the facecolor to `yellow` using the `facecolor` parameter.

Changing the Border Color

You can change the border color of your legend using the `edgecolor` parameter. Here's an example.

import matplotlib.pyplot as plt
import numpy as np

# sample data (reconstructed as above)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
z = np.cos(x)

# plotting the data using matplotlib
fig = plt.figure()
plt.plot(x, y, label='Sine')
plt.plot(x, z, label='Cosine')
plt.legend(loc='lower left', facecolor='yellow', edgecolor='black')

In this example, we set the edge color to `black` using the `edgecolor` parameter.

Matplotlib Legends are an essential tool in creating effective data visualization. They allow you to label your data and make your visualization more aesthetically pleasing. With Matplotlib, legend customization is nearly limitless, from adjusting font size to changing the background color. Keep in mind that the example code used in this article is just a starting point. You can customize your plots even further. So go ahead and practice creating custom legends to enhance your data visualization.
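The page title asks about showing only certain items in the legend, which the article does not directly cover. One hedged way to do it in Matplotlib is to pass an explicit list of handles to `plt.legend()`; lines whose labels start with an underscore are also excluded automatically. The data and labels below are my own illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display required
import matplotlib.pyplot as plt

x = [0.0, 1.0, 2.0, 3.0]
line1, = plt.plot(x, [v ** 2 for v in x], label='Squared')
line2, = plt.plot(x, [v ** 3 for v in x], label='Cubed')

# Only the handles passed in appear in the legend.
legend = plt.legend(handles=[line1])
print([t.get_text() for t in legend.get_texts()])  # ['Squared']
```

Both lines are still drawn; only the legend is filtered, which is usually the desired behavior.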
Gaussian Mixture Models in PyTorch

Update: Revised for PyTorch 0.4 on Oct 28, 2018

Mixture models allow rich probability distributions to be represented as a combination of simpler "component" distributions. For example, consider the mixture of 1-dimensional gaussians in the image below. While the representational capacity of a single gaussian is limited, a mixture is capable of approximating any distribution with an accuracy proportional to the number of components [2]. In practice mixture models are used for a variety of statistical learning problems such as classification, image segmentation and clustering. My own interest stems from their role as an important precursor to more advanced generative models. For example, variational autoencoders provide a framework for learning mixture distributions with an infinite number of components and can model complex high dimensional data such as images. In this blog I will offer a brief introduction to the gaussian mixture model and implement it in PyTorch. The full code will be available on my github.

The Gaussian Mixture Model

A gaussian mixture model with \(K\) components takes the form [1]:

\[p(x) = \sum_{k=1}^{K}p(x|z=k)p(z=k)\]

where \(z\) is a categorical latent variable indicating the component identity. For brevity we will denote the prior \(\pi_k := p(z=k)\). The likelihood term for the kth component is the parameterised gaussian:

\[p(x|z=k)\sim\mathcal{N}(\mu_k, \Sigma_k)\]

Our goal is to learn the means \(\mu_k\), covariances \(\Sigma_k\) and priors \(\pi_k\) using an iterative procedure called expectation maximisation (EM). The basic EM algorithm has three steps:

1. Randomly initialise the parameters of the component distributions.
2. Estimate the probability of each data point under the component parameters.
3. Recalculate the parameters based on the estimated probabilities. Repeat Step 2.

Convergence is reached when the total likelihood of the data under the model stops increasing.
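The defining sum above can be evaluated directly. Here is a short NumPy sketch of a two-component 1-D mixture density; the priors, means and variances are arbitrary illustrative values, not taken from the post:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # 1-D gaussian density N(x | mu, var)
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_pdf(x, pis, mus, variances):
    # p(x) = sum_k pi_k * p(x | z=k), the defining equation of the mixture
    return sum(p * gaussian_pdf(x, m, v)
               for p, m, v in zip(pis, mus, variances))

# two components with priors 0.3 and 0.7 (illustrative)
x = np.linspace(-6, 6, 1001)
density = mixture_pdf(x, [0.3, 0.7], [-2.0, 2.0], [1.0, 1.0])
```

Because the priors sum to one and each component is a normalized density, the mixture itself integrates to one.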
Synthetic Data

In order to quickly test my implementation, I created a synthetic dataset of points sampled from three 2-dimensional gaussians, as follows:

```python
def sample(mu, var, nb_samples=500):
    """
    :param mu: torch.Tensor (features)
    :param var: torch.Tensor (features) (note: zero covariance)
    :return: torch.Tensor (nb_samples, features)
    """
    out = []
    for i in range(nb_samples):
        out += [torch.normal(mu, var.sqrt())]
    return torch.stack(out, dim=0)
```

Initialising the Parameters

For the sake of simplicity, I just randomly select K points from my dataset to act as initial means. I use a fixed initial variance and a uniform prior.

```python
def initialize(data, K, var=1):
    """
    :param data: design matrix (examples, features)
    :param K: number of gaussians
    :param var: initial variance
    """
    # choose K points from the data to initialize the means
    m, d = data.size()
    idxs = torch.from_numpy(np.random.choice(m, K, replace=False))
    mu = data[idxs]
    # fixed initial variances
    var = torch.Tensor(K, d).fill_(var)
    # uniform prior
    pi = torch.empty(K).fill_(1. / K)
    return mu, var, pi
```

The Multivariate Gaussian

Step 2. of the EM algorithm requires us to compute the relative likelihood of each data point under each component. The p.d.f of the multivariate gaussian is

\[p(x;\mu, \Sigma)=\frac{1}{\sqrt{(2\pi)^N|\Sigma|}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)\]

By only considering diagonal covariance matrices \(\Sigma = I\sigma^2\), we can greatly simplify the computation (at the loss of some flexibility):

\[|\Sigma| = \prod_{j=1}^{N}\sigma_{j}^{2}\]

Instead of computing the matrix inverse we can simply invert the variances:

\[\operatorname{diag}(\Sigma^{-1}) = \left[\sigma_1^{-2}, \ldots, \sigma_N^{-2}\right]\]

And lastly, the exponent simplifies to

\[-\frac{1}{2}\left[(x-\mu)\odot(x-\mu)\right]^{T}\sigma^{-2}\]

where \(\odot\) represents element-wise multiplication and \(\sigma^{-2}\) is our vector of inverse variances. It is worth taking a minute to reflect on the form of the exponent in the last equation.
Because there is no linear dependence between the dimensions, the computation reduces to calculating a gaussian p.d.f for each dimension independently and then taking their product (or sum in the log domain).

Calculating Likelihoods

In high dimensions the likelihood calculation can suffer from numerical underflow. It is therefore typical to work with the log p.d.f instead (i.e. the exponent we derived above, plus the constant normalisation term). Note that we could use the in-built PyTorch distributions package for this, however for transparency here is my own functional implementation:

```python
log_norm_constant = -0.5 * np.log(2 * np.pi)

def log_gaussian(x, mean=0, logvar=0.):
    """
    Returns the density of x under the supplied gaussian.
    Defaults to standard gaussian N(0, I).
    :param x: (*) torch.Tensor
    :param mean: float or torch.FloatTensor with dimensions (*)
    :param logvar: float or torch.FloatTensor with dimensions (*)
    :return: (*) elementwise log density
    """
    if isinstance(logvar, float):
        logvar = x.new(1).fill_(logvar)
    a = (x - mean) ** 2
    log_p = -0.5 * (logvar + a / logvar.exp())
    log_p = log_p + log_norm_constant
    return log_p
```

To compute the likelihood of every point under every gaussian in parallel, we can exploit tensor broadcasting as follows:

```python
def get_likelihoods(X, mu, logvar, log=True):
    """
    :param X: design matrix (examples, features)
    :param mu: the component means (K, features)
    :param logvar: the component log-variances (K, features)
    :param log: return value in log domain?
        Note: exponentiating can be unstable in high dimensions.
    :return likelihoods: (K, examples)
    """
    # get feature-wise log-likelihoods (K, examples, features)
    log_likelihoods = log_gaussian(
        X[None, :, :],      # (1, examples, features)
        mu[:, None, :],     # (K, 1, features)
        logvar[:, None, :]  # (K, 1, features)
    )
    # sum over the feature dimension
    log_likelihoods = log_likelihoods.sum(-1)
    if not log:
        return log_likelihoods.exp()
    return log_likelihoods
```

Computing Posteriors

In order to recompute the parameters we apply Bayes' rule to the likelihoods as follows:

\[p(z=k|x) = \frac{p(x|z=k)\,\pi_k}{\sum_{j=1}^{K} p(x|z=j)\,\pi_j}\]

The resulting values are sometimes referred to as the "membership weights", as they quantify how well component \(z\) can explain the observation \(x\). Since our likelihoods are in the log-domain, we exploit the logsumexp trick for stability.

```python
def get_posteriors(log_likelihoods):
    """
    Calculate the posterior probabilities log p(z|x), assuming a uniform prior over z.
    :param log_likelihoods: the relative likelihood p(x|z), of each data point
        under each mode (K, examples)
    :return: the log posterior p(z|x) (K, examples)
    """
    posteriors = log_likelihoods - torch.logsumexp(log_likelihoods, dim=0, keepdim=True)
    return posteriors
```

Parameter Update

Using the membership weights, the parameter update proceeds in three steps:

1. Set new mean for each component to a weighted average of the data points.
2. Set new covariance matrix as weighted combination of covariances for each data point.
3. Set new prior, as the normalised sum of the membership weights.

```python
def get_parameters(X, log_posteriors, eps=1e-6, min_var=1e-6):
    """
    :param X: design matrix (examples, features)
    :param log_posteriors: the log posterior probabilities p(z|x) (K, examples)
    :returns mu, logvar, pi: (K, features), (K, features), (K)
    """
    posteriors = log_posteriors.exp()

    # compute `N_k`, the proxy "number of points" assigned to each distribution.
    K = posteriors.size(0)
    N_k = torch.sum(posteriors, dim=1)  # (K)
    N_k = N_k.view(K, 1, 1)

    # get the means by taking the weighted combination of points
    # (K, 1, examples) @ (1, examples, features) -> (K, 1, features)
    mu = posteriors[:, None] @ X[None, :, :]
    mu = mu / (N_k + eps)

    # compute the diagonal covar. matrix, by taking a weighted combination of
    # each point's square distance from the mean
    A = X - mu  # (K, examples, features) via broadcasting
    var = posteriors[:, None] @ (A ** 2)  # (K, 1, features)
    var = var / (N_k + eps)
    logvar = torch.clamp(var, min=min_var).log()

    # recompute the mixing probabilities
    pi = N_k / N_k.sum()

    return mu.squeeze(1), logvar.squeeze(1), pi.squeeze()
```

Apart from some simple training logic, that is the bulk of the algorithm! Here is a visualisation of EM fitting three components to the synthetic data I generated earlier:

Thanks for Reading! If you found this post interesting or informative, have questions or would like to offer feedback or corrections feel free to get in touch at my email or on twitter. Also stay tuned for my upcoming post on Variational Autoencoders! For a more rigorous treatment of the EM algorithm see [1].

1. Bishop, C. (2006). Pattern Recognition and Machine Learning. Ch9.
2. Bengio, Y., Goodfellow, I. (2016). Deep Learning.
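For completeness, the "simple training logic" can be sketched end to end. Below is a compact, self-contained NumPy version of the same diagonal-covariance EM loop; the function and variable names are mine, not the post's, and `mu0` is an optional initial-means argument added so the example is deterministic:

```python
import numpy as np

def fit_gmm(X, K, n_iters=50, mu0=None, seed=0, eps=1e-6):
    """EM for a diagonal-covariance gaussian mixture. X: (examples, features)."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    # Step 1: means from random data points, fixed variance, uniform prior
    mu = X[rng.choice(m, K, replace=False)] if mu0 is None else np.array(mu0, float)
    var = np.ones((K, d))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iters):
        # Step 2 (E): log-likelihood of every point under every component, (K, examples)
        log_lik = -0.5 * (np.log(2 * np.pi * var)[:, None, :]
                          + (X[None] - mu[:, None]) ** 2 / var[:, None, :]).sum(-1)
        log_lik = log_lik + np.log(pi)[:, None]
        # membership weights via the log-sum-exp trick
        post = np.exp(log_lik - log_lik.max(axis=0, keepdims=True))
        post /= post.sum(axis=0, keepdims=True)
        # Step 3 (M): weighted means, variances and priors
        N_k = post.sum(axis=1)[:, None]                   # (K, 1)
        mu = post @ X / (N_k + eps)
        sq = (X[None] - mu[:, None]) ** 2                 # (K, examples, features)
        var = np.maximum(np.einsum('km,kmd->kd', post, sq) / (N_k + eps), 1e-6)
        pi = (N_k / N_k.sum()).ravel()
    return mu, var, pi
```

On well-separated data this recovers the component means; the PyTorch functions above perform exactly the same updates, batched on the GPU.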
Re: [AMBER] Autocorrelation time
From: Charo del Genio via AMBER <amber.ambermd.org>
Date: Mon, 12 Sep 2022 06:48:00 +0100

On 11/09/2022 22:19, He, Amy via AMBER wrote:
> Dear Amber community,
> I have a question about time series analysis. I'm doing an autocorrelation analysis of a time series I obtained from MD trajectories. Following the definition of the autocorrelation function (ACF), I was able to calculate the ACF with respect to different lags. However, I was a bit confused about how to calculate the characteristic time of the ACF (also referred to as the "autocorrelation time").
> My question is: What is the most widely accepted definition of autocorrelation time, for discrete time series data such as MD trajectories?
> What I have tried:
> (1) I found the definition of the autocorrelation time in a paper: John D. Chodera, William C. Swope, Jed W. Pitera, Chaok Seok, and Ken A. Dill, Journal of Chemical Theory and Computation 2007, 3 (1), 26-41. DOI: 10.1021/ct0502864. Per equation 19, the autocorrelation time is calculated as the sum of (1-t/N)*Ct, where Ct is the ACF with respect to the lag t. I tried that on my data and some mock data, and I found the result always converged to 0.5, regardless of the input data … I think others found the same issue with possibly the same equation, as discussed in this post <https://mattermodeling.stackexchange.com/questions/7061/what-is-autocorrelation-time> on StackExchange.
> (2) I also viewed the answer to that StackExchange post, which suggested the autocorrelation time as 1+2*sum(Ct). That did not work for me because my Ct at longer lags (larger t) is very noisy... Summing up all Ct can either return a large positive or negative value, and I don't think the result is a good estimator for autocorrelation time.
> (3) If we assume the decay of the ACF is exponential, we can approximate the decay rate and the characteristic time by curve fitting.
> The fitting can be very poor in some cases, and again it's due to the noise at longer lags as well as the rapid decay at shorter lags, which suggests my data appear to be uncorrelated already at shorter lags.
> My thought: I think it makes more sense to estimate the autocorrelation time with only the first few Ct, assuming that Ct at longer lags swings above and below 0 but eventually cancels. Although that sounds like a very sleazy way to estimate the autocorrelation time :')
> Has anyone calculated the autocorrelation time for any kind of MD data, and how did you do that? Any comments & suggestions would be greatly appreciated.
> Many Thanks,
> Amy

Hi there, first of all, your final thought is quite correct. Typically, the ACF is fairly noisy at large times. Thus, what one normally does is disregard the points after the first time when it becomes smaller than a certain arbitrarily chosen threshold. If you computed the ACF on enough data, this leaves you with a sufficiently large interval to estimate the autocorrelation time.

As to how to do this, it is pretty much as you wrote. The ACF decays exponentially, like e^{-t/\tau}. However, rather than fitting it, which can yield poor results, you can estimate \tau by integrating it. In fact, the integral from 0 to infinity of e^{-t/\tau} is indeed \tau. This is why this quantity is often called the integrated autocorrelation time. The only thing you must remember is to operate on the normalized ACF, which simply means dividing it by its value at 0. So, recapitulating:

1) Normalize the ACF (divide all its points by its value at 0);
2) Throw away everything after the first time when the normalized ACF is smaller than some threshold;
3) Integrate (sum) what is left.

The result is the characteristic time.

Dr. Charo I. del Genio
Senior Lecturer in Statistical Physics
Applied Mathematics Research Centre (AMRC)
Design Hub
Coventry University Technology Park
Coventry CV1 5FB

AMBER mailing list
Received on Sun Sep 11 2022 - 23:00:03 PDT
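The three-step recipe in the reply above translates directly into code. A minimal NumPy sketch follows; the 0.05 threshold is an arbitrary choice, as the reply notes, and the function names are mine:

```python
import numpy as np

def integrated_autocorr_time(x, threshold=0.05, max_lag=None):
    """Integrated autocorrelation time of a 1-D series (steps 1-3 above)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    max_lag = n // 2 if max_lag is None else max_lag
    # 1) normalized ACF, C_t / C_0
    acf = np.array([np.dot(x[:n - t], x[t:]) / (n - t) for t in range(max_lag)])
    acf /= acf[0]
    # 2) throw away everything after the first dip below the threshold
    below = np.flatnonzero(acf < threshold)
    cut = below[0] if below.size else max_lag
    # 3) integrate (sum) what is left
    return acf[:cut].sum()
```

For a series whose true ACF is e^{-t/τ}, the full sum approaches τ (up to the truncation at the threshold), which is why this estimator is called the integrated autocorrelation time.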
C.-M. Liegener's research works | Friedrich-Alexander-University of Erlangen-Nürnberg and other places

Quasiparticle band structure and exciton spectrum of hexagonal boron nitride using second-order Møller-Plesset many-body perturbation theory

International Journal of Quantum Chemistry, October 2004

Many-body perturbation theory up to second-order in the Møller-Plesset partitioning has been used to calculate the quasiparticle band structure of hexagonal boron nitride, treated as a periodic two-dimensional system. It was found that correlation leads to an essential narrowing of the fundamental gap. The exciton spectrum has been calculated using the first order irreducible vertex part with Hartree-Fock and quasiparticle band structure data. It was found that the exciton binding energy amounts to about 10% of the fundamental gap (for the first singlet excitation).
Numbers in Veps

Learn numbers in Veps

Knowing numbers in Veps is probably one of the most useful things you can learn to say, write and understand in Veps. Learning to count in Veps may appeal to you just as a simple curiosity or be something you really need. Perhaps you have planned a trip to a country where Veps is the most widely spoken language, and you want to be able to shop and even bargain with a good knowledge of numbers in Veps. It's also useful for guiding you through street numbers. You'll be able to better understand directions to places and everything expressed in numbers, such as the times when public transportation leaves. Can you think of more reasons to learn numbers in Veps?

The Veps language (vepsän), also known as Vepsian, belongs to the Uralic family, in the Finnic group. Mainly spoken by the Vepsians in the Russian Republic of Karelia (where it has the official status of minority language), but also in Vologda Oblast and in Ingria, it counts about 5,700 speakers. Due to lack of data, we can only count accurately up to 100 in Veps. Please contact me if you can help me counting up from that limit.

List of numbers in Veps

Here is a list of numbers in Veps. We have made for you a list with all the numbers in Veps from 1 to 20. We have also included the tens up to the number 100, so that you know how to count up to 100 in Veps. We also close the list by showing you what the number 1000 looks like in Veps.
• 1) üks’
• 2) kaks’
• 3) koume
• 4) nelli
• 5) viž
• 6) kuz’
• 7) seiččeme
• 8) kahesa
• 9) ühesa
• 10) kümne
• 11) üks’toštkümne
• 12) kaks’toštküme
• 13) koumetoštküme
• 14) nellitoštküme
• 15) vižtoštküme
• 16) kuz’toštküme
• 17) seiččemetoštküme
• 18) kahesatoštküme
• 19) ühesatoštküme
• 20) kaks’küme
• 30) kuumeküme
• 40) nellküme
• 50) vižküme
• 60) kuzküme
• 70) seiččemeküme
• 80) kahesaküme
• 90) ühesaküme
• 100) sada
• 1,000) tuha

Numbers in Veps: Veps numbering rules

Each culture has specific peculiarities that are expressed in its language and its way of counting. The Veps is no exception. If you want to learn numbers in Veps you will have to learn a series of rules that we will explain below. If you apply these rules you will soon find that you will be able to count in Veps with ease. The way numbers are formed in Veps is easy to understand if you follow the rules explained here. Surprise everyone by counting in Veps. Also, learning how to number in Veps yourself from these simple rules is very beneficial for your brain, as it forces it to work and stay in shape. Working with numbers and a foreign language like Veps at the same time is one of the best ways to train our little gray cells, so let's see what rules you need to apply to number in Veps.

Numbers from one to ten are specific words: üks’ [1], kaks’ [2], koume [3], nelli [4], viž [5], kuz’ [6], seiččeme [7], kahesa [8], ühesa [9], and kümne [10]. From eleven to nineteen, the numbers are formed from the matching digits, adding the -toštküme suffix at the end, which means from the second (ten): üks’toštkümne [11], kaks’toštküme [12], koumetoštküme [13], nellitoštküme [14], vižtoštküme [15], kuz’toštküme [16], seiččemetoštküme [17], kahesatoštküme [18], and ühesatoštküme [19].
The tens are formed by adding the -küme suffix (partitive case of kümne, ten) at the end of the matching multiplier digit, with the obvious exception of ten: kümne [10], kaks’küme [20], kuumeküme [30], nellküme [40], vižküme [50], kuzküme [60], seiččemeküme [70], kahesaküme [80], and ühesaküme [90]. Compound numbers from twenty-one to ninety-nine are formed by saying the ten, then the digit separated with a space (e.g.: kaks’küme üks’ [21], vižküme nelli [54]). One hundred is sada, and one thousand, tuha.
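The formation rules above translate directly into code. Here is a sketch covering 1 to 99; the number words are taken verbatim from the list above, and the function name is mine:

```python
def veps_number(n):
    """Spell out 1-99 in Veps, following the formation rules above."""
    digits = {1: "üks’", 2: "kaks’", 3: "koume", 4: "nelli", 5: "viž",
              6: "kuz’", 7: "seiččeme", 8: "kahesa", 9: "ühesa"}
    tens = {1: "kümne", 2: "kaks’küme", 3: "kuumeküme", 4: "nellküme",
            5: "vižküme", 6: "kuzküme", 7: "seiččemeküme",
            8: "kahesaküme", 9: "ühesaküme"}
    if not 1 <= n <= 99:
        raise ValueError("only 1-99 are covered by the rules above")
    if n <= 9:
        return digits[n]
    if n == 10:
        return tens[1]
    if n == 11:
        return "üks’toštkümne"  # 11 keeps the full 'kümne', per the list above
    if n <= 19:
        return digits[n - 10] + "toštküme"  # digit + -toštküme suffix
    t, d = divmod(n, 10)
    # compound numbers: the ten, then the digit, separated with a space
    return tens[t] if d == 0 else tens[t] + " " + digits[d]
```

For example, 54 comes out as "vižküme nelli", matching the example given in the rules.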
Long division is an algorithm for finding the quotient of two numbers expressed in decimal form. It works by building up the quotient one digit at a time, from left to right. Each time you get a new digit, you multiply the divisor by the corresponding base ten value and subtract that from the dividend. Using long division we see that \(513 \div 4 = 128 \frac14\). We can also write this as \(513 = 128 \times 4 + 1\).
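The digit-at-a-time procedure described above can be sketched as follows; this is a minimal version for non-negative integer dividends and positive divisors, and the names are illustrative:

```python
def long_division(dividend, divisor):
    """Return (quotient, remainder), building the quotient one digit at a time."""
    quotient, remainder = 0, 0
    for ch in str(dividend):                 # left to right through the digits
        remainder = remainder * 10 + int(ch)  # bring down the next digit
        digit = remainder // divisor          # next quotient digit
        remainder -= digit * divisor          # subtract divisor times that digit
        quotient = quotient * 10 + digit
    return quotient, remainder
```

For the worked example above, `long_division(513, 4)` returns `(128, 1)`, i.e. \(513 = 128 \times 4 + 1\).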
How Much Omega-3 Is Enough? That Depends on Omega-6. - Chris Kresser In the first article of this series, we discussed the problems humans have converting omega-3 (n-3) fats from plant sources, such as flax seeds and walnuts, to the longer chain derivatives EPA and DHA. In the second article, we discussed how excess omega-6 (n-6) in the diet can block absorption of omega-3, and showed that the modern, Western diet contains between 10 and 25 times the optimal level of n-6. In this article we’ll discuss strategies for bringing the n-6 to n-3 ratio back into balance. There are two obvious ways to to do this: increase intake of n-3, and decrease intake of n-6. Many recommendations have been made for increasing n-3 intake. The important thing to remember is that any recommendation for n-3 intake that does not take the background n-6 intake into account is completely inadequate. It’s likely that the success and failure of different clinical trials using similar doses of EPA and DHA were influenced by differing background intakes of the n-6 fatty acids. In the case of the Lyon Diet Heart Study, for example, positive outcomes attributed to ALA may be related in part to a lower n-6 intake (which would enhance conversion of ALA to EPA and DHA). This explains why simply increasing intake of n-3 without simultaneously decreasing intake of n-6 is not enough. Bringing n-3 and n-6 back into balance: easier said than done! Let’s examine what would happen if we followed the proposed recommendation of increasing EPA & DHA intake from 0.1 to 0.65g/d. This represents going from eating virtually no fish to eating a 4-oz. serving of oily fish like salmon or mackerel three times a week. The average intake of fatty acids (not including EPA & DHA) in the U.S. 
has been estimated as follows:

• N-6 linoleic acid (LA): 8.91%
• N-6 arachidonic acid (AA): 0.08%
• N-3 alpha-linolenic acid (ALA): 1.06%

Keep in mind from the last article that the optimal ratio of omega-6 to omega-3 is estimated to be between 1:1 and 2.3:1. Assuming a median intake of n-6 (LA + AA) at 8.99% of total calories in a 2,000 calorie diet, that would mean a daily intake of 19.9g of n-6. If we also assume the recommended intake of 0.65g/d of EPA and DHA, plus an average of 2.35g/d of ALA (1.06% of calories), that's a total of 3g/d of n-3 fatty acid intake. This yields an n-6:n-3 ratio of 6.6:1, which although improved, is still more than six times higher than the historical ratio (i.e. 1:1), and three times higher than the ratio recently recommended as optimal (i.e. 2.3:1). On the other hand, if we increased our intake of EPA and DHA to the recommended 0.65g/d (0.3% of total calories) and maintained ALA intake at 2.35g/d, but reduced our intake of LA to roughly 7g/d (3.2% of total calories), the ratio would be 2.3:1 – identical to the optimal ratio. Further reducing intake of n-6 to less than 2% of calories would in turn further reduce the requirement for n-3. But limiting n-6 to less than 2% of calories is difficult to do even when vegetable oils are eliminated entirely. Poultry, pork, nuts, avocados and eggs are all significant sources of n-6. I've listed the n-6 content per 100g of these foods below:

• Walnuts: 38.1g
• Chicken, with skin: 2.9g
• Avocado: 1.7g
• Pork, with fat: 1.3g
• Eggs: 1.3g

It's not too hard to imagine a day where you eat 200g of chicken (5.8g n-6), half an avocado (1.1g n-6) and a handful of walnuts (10g of n-6). Without a drop of industrial seed oils (like safflower, sunflower, cottonseed, soybean, corn, etc.) you've consumed 16.9g of n-6, which is 7.6% of calories and far above the limit needed to maintain an optimal n-6 to n-3 ratio. Check the chart below for a listing of the n-6 and n-3 content of several common foods.
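The ratio arithmetic above is easy to verify. A quick sketch, using 9 kcal per gram of fat, a 2,000 kcal/day diet, and the figures quoted in the text:

```python
CALORIES_PER_DAY = 2000
KCAL_PER_G_FAT = 9

# n-6 at 8.99% of total calories, converted to grams per day
n6_g = 0.0899 * CALORIES_PER_DAY / KCAL_PER_G_FAT  # ~19.98 g/d

# n-3: 2.35 g/d ALA plus the recommended 0.65 g/d EPA & DHA
n3_g = 2.35 + 0.65

ratio = n6_g / n3_g          # ~6.6:1, far above the ~2.3:1 target
reduced_ratio = 7.0 / n3_g   # with LA cut to ~7 g/d: ~2.3:1, the optimal ratio
```

The same three lines reproduce both scenarios from the text: the status-quo 6.6:1 ratio and the 2.3:1 ratio reached by cutting LA to roughly 7g/d.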
Ditch the processed foods and cut back on eating out

Of course, if you're eating any industrial seed oils you'll be way, way over the optimal ratio in no time at all. Check out these n-6 numbers (again, per 100g):

• Sunflower oil: 65.7g
• Cottonseed oil: 51.5g
• Soybean oil: 51g
• Sesame oil: 41.3g
• Canola oil: 20.3g

Holy moly! The good news is that few people these days still cook with corn, cottonseed or soybean oil at home. The bad news is that nearly all processed and packaged foods contain these oils. And you can bet that most restaurant foods are cooked in them as well, because they're so cheap. So chances are, if you're eating foods that come out of a package or box on a regular basis, and you eat out at restaurants a few times a week, you are most likely significantly exceeding the recommended intake of n-6.

Two other methods of determining healthy n-3 intakes

Tissue concentration of EPA & DHA

Hibbeln et al have proposed another method of determining healthy intakes of n-6 and n-3. Studies show that the risk of coronary heart disease (CHD) is 87% lower in Japan than it is in the U.S., despite much higher rates of smoking and high blood pressure. When researchers examined the concentration of n-3 fatty acids in the tissues of Japanese subjects, they found n-3 tissue compositions of approximately 60%. Further modeling of available data suggests that a 60% tissue concentration of n-3 fatty acid would protect against 98.6% of the worldwide risk of cardiovascular mortality potentially attributable to n-3 deficiency. Of course, as I've described above, the amount of n-3 needed to attain 60% tissue concentration is dependent upon the amount of n-6 in the diet. In the Philippines, where n-6 intake is less than 1% of total calories, only 278mg/d of EPA & DHA (0.125% of calories) is needed to achieve 60% tissue concentration. In the U.S., where n-6 intake is 9% of calories, a whopping 3.67g/d of EPA & DHA would be needed to achieve 60% tissue concentration.
To put that in perspective, you'd have to eat 11 ounces of salmon or take 1 tablespoon (yuk!) of a high-potency fish oil every day to get that much EPA & DHA. This amount could be reduced 10 times if intake of n-6 were limited to 2% of calories. At n-6 intake of 4% of calories, roughly 2g/d of EPA and DHA would be needed to achieve 60% tissue concentration.

The Omega-3 Index

Finally, Harris and von Schacky have proposed a method of determining healthy intakes called the omega-3 index. The omega-3 index measures red blood cell EPA and DHA as a percentage of total red blood cell fatty acids. Values of >8% are associated with greater decreases in cardiovascular disease risk. (Note that n-6 intake was not considered in Harris and von Schacky's analysis.) However, 60% tissue concentration of EPA & DHA is associated with an omega-3 index of between 12-15% in Japan, so that is the number we should likely be shooting for to achieve the greatest reduction in CVD mortality. The omega-3 index is a relatively new test and is not commonly ordered by doctors. But if you want to get this test, you can order a finger stick testing kit from Dr. William Davis' Track Your Plaque website here. It'll cost you $150 bucks, though.

What does it all mean to you?

These targets for reducing n-6 and increasing n-3 may seem excessive to you, given current dietary intakes in the U.S. Consider, however, that these targets may not be high enough. Morbidity and mortality rates for nearly all diseases are even lower for Iceland and Greenland, populations with greater intakes of EPA & DHA than in Japan. All three methods of calculating healthy n-3 and n-6 intakes (targeting an n-6:n-3 ratio of 2.3:1, 60% EPA & DHA tissue concentration, or a 12-15% omega-3 index) lead to the same conclusion: for most people, reducing n-6 intake and increasing EPA & DHA intake is necessary to achieve the desired result.
To summarize, for someone who eats approximately 2,000 calories a day, the proper n-6 to n-3 ratio could be achieved by:

1. Making no changes to n-6 intake and increasing intake of EPA & DHA to 3.67g/d (11-oz. of oily fish every day!)
2. Reducing n-6 intake to approximately 3% of calories, and following the current recommendation of consuming 0.65g/d (three 4-oz. portions of oily fish per week) of EPA & DHA.
3. Limiting n-6 intake to less than 2% of calories, and consuming approximately 0.35g/d of EPA & DHA (two 4-oz. portions of oily fish per week).

Although option #1 yields 60% tissue concentration of EPA & DHA, I don't recommend it as a strategy. All polyunsaturated fat, whether n-6 or n-3, is susceptible to oxidative damage. Oxidative damage is a risk factor for several modern diseases, including heart disease. Increasing n-3 intake while making no reduction in n-6 intake raises the total amount of polyunsaturated fat in the diet, thus increasing the risk of oxidative damage. This is why the best approach is to limit n-6 intake as much as possible, ideally to less than 2% of calories, and moderately increase n-3 intake. 0.35g/d of DHA and EPA can easily be obtained by eating a 4 oz. portion of salmon twice a week.

Check out my Update on Omega-6 PUFAs here.

1. Hi Chris, Interesting stuff. That said, what I never see addressed in such articles is the fact that, say, 1g of LA from veg sources isn't comparable to 1g of EPA/DHA from oily fish – they're at different stages in PUFA metabolism. It's my understanding that the body's ability to convert LA and ALA into AA and EPA/DHA respectively is very poor. So, in an omnivorous diet, doesn't this muddy the waters when it comes to calculating one's ratios?
Theoretically, if LA to AA conversion is poor, I could envision someone taking EPA/DHA in order to 'balance' things gram for gram actually pushing AA levels into deficit.

2. Chris, I have had a "battle" with fats for a few years now. I get dizzy when I consume them. I eat a mostly vegan diet. It would be great to talk to you "one on one" and get more in-depth. Basically, I lost significant body weight due to stress and have cold extremities. It seems I may not get enough. No sex drive at all and I'm not even 40. Lost the majority of the hair on my legs. Acidic foods make me irritable, with trembling in my legs. And I do have some parasites according to Genova tests. Doctors here are useless. To add more fish oil to my diet means unbearable dizziness. Any general advice? And any possibility of consults with you?
We perform first principles calculations on CaFe2As2 under hydrostatic pressure. Our total energy calculations show that though the striped antiferromagnetic (AFM) orthorhombic (OR) phase is favored at P=0, a non-magnetic collapsed tetragonal (cT) phase with diminished c-parameter is favored for P > 0.36 GPa, in agreement with experiments. Rather than a mechanical instability, this is an enthalpically driven transition from the higher volume OR phase to the lower volume cT phase. Calculations of electronic density of states reveal pseudogaps in both OR and cT phases, though As(p) hybridization with Fe(d) is more pronounced in the OR phase. We provide an estimate for the inter-planar magnetic coupling. Phonon entropy considerations provide an interpretation of the finite temperature phase boundaries of the cT phase. Comment: 4 pages, 4 figures, 1 Table

We consider an array of coupled cavities with staggered inter-cavity couplings, where each cavity mode interacts with an atom. In contrast to large-size arrays with uniform-hopping rates where the atomic dynamics is known to be frozen in the strong-hopping regime, we show that resonant atom-field dynamics with significant energy exchange can occur in the case of staggered hopping rates even in the thermodynamic limit. This effect arises from the joint emergence of an energy gap in the free photonic dispersion relation and a discrete frequency at the gap's center. The latter corresponds to a bound normal mode stemming solely from the finiteness of the array length. Depending on which cavity is excited, either the atomic dynamics is frozen or a Jaynes-Cummings-like energy exchange is triggered between the bound photonic mode and its atomic analogue. As these phenomena are effective with any number of cavities, they are prone to be experimentally observed even in small-size arrays. Comment: 12 pages, 4 figures.
Added 5 mathematical appendices.

We develop the most probable wave functions for a single free quantum particle given its momentum and energy by imposing its quantum probability density to maximize Shannon information entropy. We show that there is a class of solutions in which the quantum probability density is self-trapped with finite-size spatial support, uniformly moving, hence keeping its form unchanged. Comment: revtex, 4 pages

We calculate that the electron states of strained self-assembled Ge/Si quantum dots provide a convenient two-state system for electrical control. An electronic state localized at the apex of the quantum dot is nearly degenerate with a state localized at the base of the quantum dot. Small electric fields shift the electronic ground state from apex-localized to base-localized, which permits sensitive tuning of the electronic, optical and magnetic properties of the dot. As one example, we describe how spin-spin coupling between two Ge/Si dots can be controlled very sensitively by shifting the individual dot's electronic ground state between apex and base.

We have developed a Green's function formalism based on the use of an overcomplete semicoherent basis of vortex states, specially devoted to the study of the Hamiltonian quantum dynamics of electrons at high magnetic fields and in an arbitrary potential landscape smooth on the scale of the magnetic length. This formalism is used here to derive the exact Green's function for an arbitrary quadratic potential in the special limit where Landau level mixing becomes negligible. This solution remarkably embraces under a unified form the cases of confining and unconfining quadratic potentials. This property results from the fact that the overcomplete vortex representation provides a more general type of spectral decomposition of the Hamiltonian operator than usually considered.
Whereas confining potentials are naturally characterized by quantization effects, lifetime effects emerge instead in the case of saddle-point potentials. Our derivation proves that the appearance of lifetimes has for origin the instability of the dynamics due to quantum tunneling at saddle points of the potential landscape. In fact, the overcompleteness of the vortex representation reveals an intrinsic microscopic irreversibility of the states, synonymous with a spontaneous breaking of the time symmetry exhibited by the Hamiltonian dynamics. Comment: 19 pages, 4 figures; a few typos corrected + some passages in Sec. V rewritten.

We study the spin dependent tunneling of electrons through a zinc-blende semiconductor with the indirect X (or D) minimum serving as the tunneling barrier. The basic difference between tunneling through the G vs. the X barrier is the linear-k spin-orbit splitting of the two spin bands at the X point, as opposed to the k3 Dresselhaus splitting at the G point. The linear coefficient of the spin splitting b at the X point is computed for several semiconductors using density-functional theory, and the transport characteristics are calculated using the barrier tunneling model. We show that both the transmission coefficient and the spin polarization can be large, suggesting the potential application of these materials as spin filters. Comment: 9 pages

The electronic structure of crystalline CdTe, CdO, $\alpha$-TeO$_2$, CdTeO$_3$ and Cd$_3$TeO$_6$ is studied by means of first principles calculations. The band structure, total and partial density of states, and charge densities are presented. For $\alpha$-TeO$_2$ and CdTeO$_3$, Density Functional Theory within the Local Density Approximation (LDA) correctly describes the insulating character of these compounds. In the first four compounds, LDA underestimates the optical bandgap by roughly 1 eV. Based on this trend, we predict an optical bandgap of 1.7 eV for Cd$_3$TeO$_6$.
This material shows an isolated conduction band with a low effective mass, thus explaining its semiconducting character observed recently. In all these oxides, the top valence bands are formed mainly from the O 2p electrons. On the other hand, the binding energy of the Cd 4d band, relative to the valence band maximum, in the ternary compounds is smaller than in CdTe and CdO. Comment: 13 pages, 15 figures, 2 tables. Accepted in Phys Rev

We investigate theoretically donor-based charge qubit operation driven by external electric fields. The basic physics of the problem is presented by considering a single electron bound to a shallow-donor pair in GaAs: this system is closely related to the homopolar molecular ion H_2^+. In the case of Si, heteropolar configurations such as PSb^+ pairs are also considered. For both homopolar and heteropolar pairs, the multivalley conduction band structure of Si leads to short-period oscillations of the tunnel-coupling strength as a function of the inter-donor relative position. However, for any fixed donor configuration, the response of the bound electron to a uniform electric field in Si is qualitatively very similar to the GaAs case, with no valley quantum interference-related effects, leading to the conclusion that electric field driven coherent manipulation of donor-based charge qubits is feasible in semiconductors.

Wave-packet interference is investigated within the complex quantum Hamilton-Jacobi formalism using a hydrodynamic description. Quantum interference leads to the formation of the topological structure of quantum caves in space-time Argand plots. These caves consist of the vortical and stagnation tubes originating from the isosurfaces of the amplitude of the wave function and its first derivative. Complex quantum trajectories display counterclockwise helical wrapping around the stagnation tubes and hyperbolic deflection near the vortical tubes.
The string of alternating stagnation and vortical tubes is sufficient to generate divergent trajectories. Moreover, the average wrapping time for trajectories and the rotational rate of the nodal line in the complex plane can be used to define the lifetime for interference features. Comment: 4 pages, 3 figures (major revisions with respect to the previous version have been carried out).

During the last stage of collapse of a compact object into the horizon of events, the potential energy of its surface layer decreases to a negative value below all limits. The energy-conservation law requires the appearance of a positive-valued energy to balance the decrease. We derive the internal-state properties of the ideal gas situated in an extremely strong, ultra-relativistic gravitational field and suggest applying our result to a compact object with a radius slightly larger than or equal to the Schwarzschild gravitational radius. On the surface of the object, we find that the extreme attractivity of the gravity is accompanied by an extremely high internal heat energy. This internal energy implies a correspondingly high pressure, the gradient of which behaves in such a way that it can compete with the gravity. In more detail, we find the equation of state in the case when the magnitude of the potential-type energy of the constituent gas particles is much larger than their rest energy. This equation appears to be identical with the general-relativity condition of equilibrium between the gravity and the pressure gradient. The consequences of the identity are discussed. Comment: 12 pages (no figure, no table). Changes in 3rd version: added an estimate of neutrino cooling and the relative time-scale of the final stage of the URMS collapse.
High Performance Novel Square Root Architecture Using Ancient Indian Mathematics for High Speed Signal Processing

1. Introduction

Arithmetic circuits are indispensable in DSP and image processing applications. Many researchers have already designed different arithmetic circuits such as adders, subtractors, multipliers, dividers, squarers and square root architectures [1]-[9], but those techniques exhibit considerable propagation delay and power consumption. Square root design remains a challenging problem in modern research. The Newton-Raphson (N-R) method of square root computation has been the most efficient approach so far, and no remarkable alternative has appeared since. We present an efficient technique using ancient Indian mathematics. In ancient times, mathematicians made calculations based on 16 Sutras (formulae). The idea for designing the square root processor has been adopted from the ancient Indian mathematical texts, the "Vedas" [10] [11]. The well known duplex method is an ancient Indian method of extracting the square root. The algorithm incorporates a digit-by-digit method for calculating the square root of a whole or decimal number one digit at a time. As per the proposed algorithm, the square root of any number is obtained in one step, which reduces the number of iterations. The Bakhshali method, described in the ancient Indian mathematical manuscript called the Bakhshali manuscript, is used for finding an approximate square root. A prototype of a 16-bit square root processor has been implemented and its functionality verified on a Virtex-5 FPGA board. For calculating the integer part of a number we have used the Vedic method, and for calculating the floating point part we have adopted the Bakhshali method. Many researchers have used the popular Newton-Raphson method for computing the square root of a number.
In our methodology, we combine the Vedic duplex method and the Bakhshali approximation to generate a more accurate result with comparatively less time and power consumption. The paper is organized as follows. Section 2 describes the background mathematics for calculating the square root. Section 3 gives the architectural description of the proposed technique. Section 4 describes the error calculation and accuracy analysis of the proposed technique. Result analysis is described in Section 5, and Section 6 is the conclusion.

2. Square Root Algorithm

The square root technique using the division method was introduced by ancient Indian mathematicians, and the procedure has been lucidly discussed in the "Vedas". The technique relies on computation by mere observation. The square root of a perfect square can easily be determined by the division method. But for numbers which are not perfect squares, the division method is not enough to compute the root with high precision. In this paper an iterative method described in the Bakhshali manuscript has been proposed to achieve high speed and high precision square root calculation. The mathematical formulation of the iterative approach is shown below. Consider a number X whose square root is to be calculated. The Bakhshali manuscript approach computes, by the method of observational inspection, the square root of the perfect square which is nearest to but less than the number X. Let A be this perfect square and R its square root, so that X = A + Y = R^2 + Y, where Y is the residue (Equation (1)). The square root of X can then be written as sqrt(X) = sqrt(R^2 + Y) (Equation (3)). Using the binomial expansion and the mathematical formulae of the Bakhshali manuscript, Equation (3) can be expanded into Equation (10), the modified Bakhshali expression for computing the square root of non-square numbers. The modified Bakhshali approach is more accurate than the original Bakhshali approach, as shown in the table.
So in respect of accuracy the modified Bakhshali approach is beneficial. The Bakhshali methodology also has some drawbacks, mentioned below:

1) The Bakhshali approach does not give a method for computing the nearest squared number and its square root. To obtain the nearest square number and its square root we have to use the Vedic approach.

2) The division method for calculating the square root using Vedic mathematics is based on the "Vilokanam" (inspection) method, which incorporates an extra squaring and an extra subtraction operation. The modified Bakhshali approach eliminates the extra squaring and subtraction operations, and thus reduces the stage delay.

The square root determination technique combining the Vedic duplex and Bakhshali approaches is shown in the flow chart of Figure 1.

Figure 1. Architecture for square root determination technique.

2.1. Integral Square Root Determinant Using Vedic Methodology

Mathematical Modeling of the Algorithm to Calculate the Perfect Square Root

Consider X, an N-bit number whose square root is to be determined, expressed as in Equation (13). Now consider Equation (14), where Q is the quotient and R is the remainder. Inserting Equation (14), Equation (13) can be reformulated as Equation (15), which can be rewritten as Equation (16). The procedure to calculate the square root by the division method can be described in the following steps:

Step 1: Obtain the nearest square root of the N/2 Most Significant Bits (MSBs). Assume that the output is Z.
Step 2: Determine the square of Z by combining the Yavadunam and duplex methodologies.
Step 3: Subtract the squared output from the N/2 MSBs.
Step 4: Obtain the double of Z.
Step 5: Combine the output of the subtractor and the next N/4 bits. Divide the combination by 2Z. Assume the quotient is Q and the remainder is R.
Step 6: Determine the square of Q and subtract Q^2 from the perfect square number whose square root is (Z + Q); otherwise (Z + Q) is the square root of the perfect square nearest to X.
Again, if "p" is positive then the nearest perfect square number is less than the given input; if "p" is negative then the nearest perfect square number is greater than the input.

Step 7: Compute
Step 8: Determine the square of
Step 9: Compute
Step 10: Finally subtract
Step 11: Compute the maximum power of the first left-most "1" of

3. Proposed Architecture

The square root architecture, following the methodology given in Steps 1 to 6, consists of four subsections: 1) Nearest Square Root Generation Unit (NSRGU) for the first N/2 bits; 2) Squaring Unit; 3) Subtractor; 4) Divider. Figure 2 exhibits a fully optimized system-level architecture for square root generation using the duplex method adopted by ancient Indian mathematicians. In this architecture all the left shifters are N/4-bit left shifters. The quotient register is used to store the fixed point, or integral, result of the square root. The most significant two bits are fed to the doubler, whose output is fed to the subtraction unit.

3.1. Nearest Square Root Generation Unit

In ancient Indian mathematics, the nearest square number for a given number, and its square root, were calculated by the method of inspection. In this paper an easy and efficient architecture has been provided to compute the nearest square root.

Figure 2. System level architecture for the integral part of the square root generator, N indicating the number of input bits.

It is obvious that the nearest square root of an N/2-bit number must be of N/4 bits. The architecture of the nearest square root generation unit is the same as that depicted in Figure 2. The above mentioned Steps 1 to 6 are needed to obtain the nearest square root for an N/2-bit number.

3.2. Squaring Unit

The squaring algorithm and the corresponding architecture were implemented with the aid of the "Yavadunam Sutra".
Mathematical Formulation of the Yavadunam Sutra

If a number X lies between 2^(n-1) and 2^n, the average of 2^(n-1) and 2^n is computed; if X is greater than this average then 2^n is chosen as the radix, otherwise 2^(n-1) is chosen. In binary mathematics, the number X is expressed in terms of the chosen radix. From Equation (21), the square of the number X can be expressed as Equation (24); similarly, the expression for X^2 obtained from Equation (22) is given as Equation (25). Equations (24) and (25) are the mathematical formulations of the Yavadunam Sutra in binary mathematics.

The architecture of the squaring algorithm using the "Yavadunam Sutra" is shown in Figure 3. The basic building blocks of the architecture are 1) RSU, 2) Subtractor, 3) Add-Sub unit and 4) Duplex squaring architecture. The test bench waveform of the RSU is shown in Figure 4. The subtractor architecture using the "Nikhilam" sutra has been elucidated in sub-section (C). The input bits X (of length n) are fed to the RSU. The RSU produces the proper radix and exponent for the input. The radix is subtracted from the input, and the result is added to or subtracted from the input again depending upon the control signal generated from the borrow output of the subtractor. The output of the add/sub unit is the residue, which is squared by the duplex operation discussed in [1]; it is also fed to the left shifter, which shifts the data depending upon the exponent value. Finally the outputs of the duplex squarer and the left shifter are added to obtain the squared result.

3.3. Subtractor

Mathematical Modeling of the "Nikhilam" Sutra for Binary Subtraction

The subtraction method has been implemented using normal binary arithmetic. It is a bit-wise subtraction method. The rule is mathematically expressed in Equation (26). Consider the number,

Figure 3. Architecture of the squaring algorithm using the "Yavadunam" sutra.
Figure 4. Test bench window for the VHDL implementation of the RSU.

A borrow bit is generated at the end of the operation which signifies whether the result is positive or negative.

3.4. Divider Using the "Dhvajanka" Sutra ("On Top of the Flag")

3.4.1.
Mathematical Modeling of the Division Operation

Consider the number. The term is shifted right by n/2 places, as shown in Figure 5, to get the actual quotient. The mathematical modeling of the division operation by the "Vertically and Crosswise" methodology is described in the following subsection. The architecture shown in Figure 6 consists of elementary building blocks such as a left shifter, a right shifter, an incrementer and a demultiplexer. The incrementer calculates the exponent of the number, controlled by the shifted bit. The clock applied to the shifter is also controlled by the shifted bit: if the shifted bit is "1" then the shifters stop further shifting.

3.4.2. Mathematical Modeling of the "On Top of the Flag" Sutra

Consider the number.

Figure 5. RTL representation of the exponent extraction unit.
Figure 6. Divider architecture using the "Dhvajanka" sutra.

Here "x" signifies the radix. Figure 6 shows the architectural description of the division operation using the "On Top of the Flag" sutra. The dividend of N bits is divided into two equal N/2-bit parts. The most significant part is fed to the subtractor, which is of N/2 + 1 bits. Single-bit zero padding is done to the left in the subtractor module. The division procedure incorporated here is of the non-restoring type. The carry output after the subtraction is fed to the quotient array register as the quotient. The quotient array register stores the quotients based on the iterated value, which is used as the position signal. The value N/2 + 1 is initially stored in the decrementer to count and check the specified iteration. After each iteration, both left shifters are updated by shifting until the decremented value reaches zero. The divider architecture combining the exponent extraction and the "Dhvajanka" sutra is shown in Figure 7. The basic building blocks of this architecture are 1) Exponent Extraction Unit, 2) Divider using the "Dhvajanka" sutra, 3) Subtractor, 4) Quotient Array Register and 5) Bidirectional Shift Register.

Figure 7. Divider architecture combining exponent extraction and the "Dhvajanka" sutra.

Zero padding is needed to represent a number with a higher number of bits. For example, if the four-bit number "0111" is represented in eight-bit format, the representation becomes "00000111". Here, zero padding has been executed based on the requirement. The bidirectional shift register shifts either to the left or to the right depending upon the control signal "carry" from the second subtractor: if carry = 0 the input bits are shifted to the left, and if carry = 1 they are shifted to the right. The result of the second subtractor is fed to the shifter to control the shifting. The contents of the Quotient Array Register are the input to the Bidirectional Shift Register.

Illustration: Consider a fourth order function.

3.4.3. Adder-Subtractor Unit Using the "Nikhilam" Sutra for Subtraction

From Equation (5), it is obvious that an Adder-Subtractor unit is required to achieve the final result; it has been designed with the help of the well-known binary addition/subtraction methodology. The architecture of the Adder-Subtractor unit is shown in Figure 8. The carry signal from the square root determinant shown in Figure 2 is fed to the control pin of the Adder-Subtractor unit to assign the type of operation (addition/subtraction). If the carry signal is low then the addition operation is executed, else the subtraction operation is executed.

4. Error Calculation and Accuracy Analysis

The exact expression for the square root can be recapitulated from Equation (3), and the approximated expression for the square root follows from the proposed method; from these, the computational error (ec) in determining the square root, and the corresponding percentage of error, can be expressed.

Figure 8. 8-bit binary adder-subtractor architecture.

Table 1 exhibits a comparative study of the computational error of the proposed algorithm with respect to the "Bakhshali" algorithm.

5.
Result Analysis

The square root architecture has been implemented in VHDL and verified using the ModelSim simulator, as shown in Figure 9, and tested in the Xilinx simulator on a Xilinx Virtex-5 FPGA, using the XC5VLX30 device with package FF676 at speed grade (-3). Table 2 shows a comparative study of the N-R method and the proposed method with respect to LUT count, delay and power consumption. The power has been measured using the Xilinx XPower Analyzer tool.

Table 1. Coefficients of the quotient expression.
Table 2. Comparison of LUT count, delay and power of the two methods.
Figure 9. Test bench window of the VHDL implementation of the square root architecture.

6. Conclusion

From the result analysis, it is obvious that the architecture provides comparatively more accuracy than the well-known N-R approximation. Moreover, the proposed architecture produces less propagation delay than the N-R method, and the power consumption is also much lower, as shown in Table 2. The area with respect to LUT count is also smaller than for the N-R method, and the circuit complexity is lower than that of the N-R technique, as elucidated in the result analysis. We would like to thank our dear colleagues for their precious support in executing this research in the department.
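As a software sanity check on the arithmetic described above, here is a short Python sketch (not the VHDL implementation) of the two-stage flow: the integer part comes from the nearest lower perfect square, with `math.isqrt` standing in for the Vedic duplex hardware, and the fractional part comes from the classical Bakhshali approximation. The paper's modified expression (Equation (10)) is not reproduced here; the correction formula below is the standard Bakhshali one.

```python
import math

def bakhshali_sqrt(x):
    """Two-stage square root sketch: integer part from the nearest
    lower perfect square (math.isqrt stands in for the Vedic duplex
    hardware), fractional part from the Bakhshali approximation."""
    r = math.isqrt(x)            # nearest square root R, with R*R <= x
    y = x - r * r                # residue Y = X - R^2
    if y == 0:
        return float(r)          # x is a perfect square
    d = y / (2 * r)              # first-order correction Y / (2R)
    # Bakhshali refinement: R + d - d^2 / (2 (R + d))
    return r + d - d * d / (2 * (r + d))

for x in (2, 250, 1024):
    print(x, bakhshali_sqrt(x), math.sqrt(x))
```

For 250, for instance, the nearest lower perfect square is 225 with root 15, and a single Bakhshali correction already agrees with `math.sqrt` to about five decimal places, which illustrates why the hardware needs so few iterations.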
Burning Calories

How many calories do I burn per mile cycling? When I was a runner, the rule of thumb was 100 calories per mile, and when I started cycling, it was 40 calories per mile. But that number does not make sense on its own, given that I could be riding downhill, climbing Alpe d'Huez, riding flats or sprinting. Using the cycling models developed later, we can estimate the calories burned under these various conditions.

Do I use Joules or Watts?

Am I computing joules or watts? A calorie is equivalent to about 4.19 joules, and we are interested in the total effort to ride a mile, not how many calories we burn per minute. The faster you ride a mile, the quicker you burn the calories: a person riding a mile in five minutes burns the same number of joules as a person riding it in three minutes.

Riding Flats

Riding flats means working against rolling resistance and aerodynamic drag. Let's first compute the number of joules when riding on an asphalt surface at a range of speeds up to 50 mph, which corresponds to the maximum of elite cyclists.

Calories Expended by an Elite Cyclist Riding Flats. Reagan Zogby

Note that for most realistic riding scenarios for recreational riders, the calorie burn is less than 10 per mile. In fact, 40 calories per mile is only achieved at the outer limit of an elite cyclist's abilities.

Making Ascents

What are the comparable burn rates for an ascent? The total work is a combination of resistance forces and work against gravity. Work against gravity is computed as the combined cyclist-plus-cycle weight in newtons multiplied by the elevation gain in meters. The chart provides the calories per mile to make a climb at a given slope measured in degrees. Note that the one mile corresponds to the distance ridden, so it is the hypotenuse of the slope triangle. In general, aerodynamic drag is not included because ascent speeds are much lower, and with most climbs snaking back and forth, the aerodynamic force will alternate between drag and push.
Calories Burned by an Elite Cyclist Making an Ascent by Slope Angle. Reagan Zogby

Whereas riding flats has a physical limit of 40 calories per mile, achieved only at 50 mph at the limit of elite sprinting, we see here that on an Alpe d'Huez climb of 8 degree slope we would be burning 50 calories per mile.

Comparing Flats to Ascents for Calorie Burning

Not surprisingly, cycling ascents burn more calories than flats. Riding at 20 mph on the flats burns approximately 7.5 calories per mile, while an 8° slope ascent burns approximately 50 calories per mile, or 6.7 times as much. Clearly, 40 calories per cycling mile is only achieved in challenging riding scenarios. On the flats, it would require an exceptional cyclist to be in that range. On ascents, however, moderate climbs are in that range, and while it may take a recreational cyclist longer to make an ascent, they will be burning in the 40-calorie-and-above range once the mile is complete.

How Effective is Cycling for Weight Loss

When I worked these numbers out, I was surprised to see how few calories were being burned. I had expected moderate climbs to burn in the range of 100 to 125 calories rather than 40, and I did not expect that riding flats would be less than 10 rather than 40 calories. By itself, cycling is not a big calorie-burning engine. What I am cautioning against is misreading why cycling is good for your fitness. It is low impact, it gets you out and away from the refrigerator, and it can help keep your blood sugar levels in balance. The last is important because it can lower your appetite in general. It exacts a smaller toll on your joints, which means it is something you can easily do into your seventies, unlike running. Simply put, burning calories is hard work, and in comparison to other sports, cycling is one of the lower calorie-burning ones. On the other hand, as part of a more comprehensive program, it can be an important component of a healthy lifestyle.
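The flat-road and climbing figures quoted above can be reproduced with a short model. The rider mass, rolling-resistance coefficient and drag area below are my own illustrative assumptions (roughly an 85 kg rider plus bike, Crr around 0.005 for asphalt, CdA around 0.30 m²), not numbers taken from the charts; "calories" here means dietary Calories (kcal), and only mechanical work is counted.

```python
import math

KCAL_PER_JOULE = 1 / 4186.8    # 1 dietary Calorie (kcal) = 4186.8 J
MILE_M = 1609.34               # meters per mile

def flat_kcal_per_mile(speed_mph, mass_kg=85.0, crr=0.005,
                       cda=0.30, rho=1.225):
    """Mechanical work per mile on flat ground:
    rolling resistance plus aerodynamic drag."""
    v = speed_mph * 0.44704                  # mph -> m/s
    f_roll = crr * mass_kg * 9.81            # rolling resistance, N
    f_drag = 0.5 * rho * cda * v ** 2        # aerodynamic drag, N
    return (f_roll + f_drag) * MILE_M * KCAL_PER_JOULE

def climb_kcal_per_mile(slope_deg, mass_kg=85.0, crr=0.005):
    """Work per mile of road (the hypotenuse) on a climb; drag is
    neglected at climbing speeds, as in the text."""
    theta = math.radians(slope_deg)
    f_gravity = mass_kg * 9.81 * math.sin(theta)
    f_roll = crr * mass_kg * 9.81 * math.cos(theta)
    return (f_gravity + f_roll) * MILE_M * KCAL_PER_JOULE

print(f"flats @ 20 mph: {flat_kcal_per_mile(20):.1f} kcal/mile")
print(f"8-degree climb: {climb_kcal_per_mile(8):.1f} kcal/mile")
```

With these assumptions the model lands near the article's figures: roughly 7 kcal per mile at 20 mph on the flats, and in the mid-40s per mile on an 8-degree climb.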
How to take a number out from under the root

Use the definition of a root as a mathematical operation: extracting a root is the inverse of raising a number to a power. This means that a factor can be taken out from under the root sign if the radicand is divided by that factor raised to the power equal to the root's index. For example, to take the number 10 out from under a square root, divide the radicand by ten squared.

Pick the factor to take out from under the root so that the operation actually simplifies the expression; otherwise it loses its point. For example, if the number 128 is under a root sign with index three (a cube root), then the number 5, say, can be taken out of the sign. The radicand 128 then has to be divided by 5 cubed: 3√128 = 5∗3√(128/5³) = 5∗3√(128/125) = 5∗3√1.024. If the presence of fractional numbers under the root sign does not contradict the conditions of the problem, the solution can be left in this form. If you need a simpler variant, first split the radicand into integer factors such that the cube root of one of them is a whole number. For example: 3√128 = 3√(64∗2) = 3√(4³∗2) = 4∗3√2.

Use a calculator to pick out the factors of the radicand if you cannot compute powers of a number in your head. This is especially important for roots with an index greater than two. If you have Internet access, you can calculate with the solvers built into the Google and Nigma search engines. For example, if you need to find the largest integer factor that can be taken out from under the cube root sign for the number 250, go to Google and enter "6^3" to check whether 6 can simply be taken out of the root sign. The search engine displays a result equal to 216.
Alas, 250 cannot be divided by that number without a remainder. Then enter the query 5^3. The result will be 125, which lets you split 250 into the factors 125 and 2, and therefore take the number 5 out from under the root sign, leaving the number 2 under it.
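The trial-and-error search with 6^3 and 5^3 can be automated. The following sketch (the function name is my own) finds the largest integer factor that can be taken out from under a root of a given index, together with what remains under the sign:

```python
def extract_from_root(n, index):
    """Return (k, remainder) where k is the largest integer such
    that k**index divides n, so n == k**index * remainder and
    the index-th root of n equals k times the root of remainder."""
    k = 1
    factor = 2
    while factor ** index <= n:
        # divide out this factor's index-th power as often as possible
        while n % factor ** index == 0:
            n //= factor ** index
            k *= factor
        factor += 1
    return k, n

print(extract_from_root(128, 3))   # 128 = 4^3 * 2, so 3√128 = 4·3√2
print(extract_from_root(250, 3))   # 250 = 5^3 * 2, so 3√250 = 5·3√2
```

This reproduces both worked examples from the article: 4 comes out of the cube root of 128 leaving 2, and 5 comes out of the cube root of 250 leaving 2.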
Glencoe Pre Calc Textbook Pdf

Glencoe Pre Calc Textbook Pdf. See all formats and editions. An Investigation of Functions (2nd ed), David Lippman and Melonie Rasmussen. Click an item at the left to access links, activities, and more. The complete classroom set, print & digital.

Functions From A Calculus Perspective. Precalculus, student edition (Advanced Math Concepts), 1st edition. Glencoe advanced mathematical concepts.

Open The Glencoe Precalculus Pdf And Follow The Instructions. Glencoe Precalculus ©2011, 2nd edition, is a comprehensive program that provides more depth, more applications, and more opportunities for students to be successful in college. Common Core teacher edition by John A. Cuevas, Roger Day, Carol Mallory, Luajean Bryan, Berchie Holliday, Viken.

This Textbook Is Available In English And Spanish. For example, in the lesson on variables. Precalculus with Applications (pdf), 1996, 1,130 pages, 86.07 MB, English. By McGraw Hill (author), 4.7, 135 ratings. Easily Sign The Glencoe McGraw Hill Precalculus Teacher Edition Pdf With Your Finger. Once your teacher has registered for the online student edition, he or she will give you the user name and password.

An Investigation Of Functions Is A Free, Open Textbook. Always keep your workbook handy. Glencoe Precalculus student edition (2nd edition), edit edition.
Pandas rank() Function

Pandas is an open-source Python library that provides data manipulation and analysis tools. It includes data structures such as DataFrames and Series for handling structured data. The library also offers various functions, such as rank(), groupby(), count(), etc., for reshaping or manipulating your dataset. The rank() function is used for calculating the ranking of data elements based on their values. In this article, you will learn about the rank() function in Pandas with the help of some examples. Let's get started.

What is the rank() function in Pandas?

The rank() function assigns a rank to the elements in a Series or DataFrame. It computes the numerical rank of each element, usually based on the position of the element within the sorted dataset. It handles ties by assigning them the average (by default) of the ranks they would have received. The rank() function accepts the following parameters:

1. method: This parameter controls how ties are handled. It can take the following values: average, min, max, first (rank based on the order of appearance), and dense (like min, but ranks increase by exactly 1 between groups, leaving no gaps). We'll take a look at each of them closely in the examples.

2. axis: This parameter specifies whether to rank elements along rows (0) or columns (1). By default, the value of this parameter is 0.

3. na_option: This parameter controls how missing values are treated. It can take the following values: keep (NaN values receive NaN ranks), top (NaN values are ranked lower than non-NaN values), and bottom (NaN values are ranked higher than non-NaN values).

4. ascending: This is a boolean parameter specifying whether the ranks should be assigned in ascending (True) or descending (False) order. The default value is True.

5. pct: If this parameter is True, the rankings are displayed in percentile form. By default, its value is False.

6. numeric_only: If this parameter is True, the function only ranks columns with numeric values.
By default, this parameter is set to false. Return Types The return type is the same as the caller object. For example, if the caller object is a dataframe, the function returns a dataframe containing the data ranks as values. In the next section, we will go through some examples of using the rank() function in Pandas.
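The tie-handling methods can be compared directly on a small Series (a minimal sketch; the values are made up for illustration):

```python
import pandas as pd

s = pd.Series([7, 2, 7, 7, 1])

# 'average' (default): the three 7s would occupy ranks 3, 4, 5, so each gets 4.0
print(list(s.rank()))                  # [4.0, 2.0, 4.0, 4.0, 1.0]

# 'min': every tied element gets the smallest rank of the group
print(list(s.rank(method="min")))      # [3.0, 2.0, 3.0, 3.0, 1.0]

# 'first': ties are broken by order of appearance
print(list(s.rank(method="first")))    # [3.0, 2.0, 4.0, 5.0, 1.0]

# pct=True divides each rank by the number of elements
print(list(s.rank(pct=True)))          # [0.8, 0.4, 0.8, 0.8, 0.2]
```

Note how `dense` and `min` give the same result on this Series; they differ once a later group follows a tie, since `dense` leaves no gap in the rank numbers.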
Excel TEXTJOIN Function - Free Excel Tutorial

This post will guide you through using the Excel TEXTJOIN function, with syntax and examples, and show how to concatenate strings with TEXTJOIN in Excel 2016. In Excel 2013 or earlier versions, you can use the CONCATENATE function to join text from different cells. The Excel TEXTJOIN function joins two or more text strings together, separated by a delimiter. You can select an entire range of cell references to be combined, and you can also specify an empty string as the delimiter between the texts. The TEXTJOIN function is a built-in function in Microsoft Excel, categorized as a Text function, and is available only in Excel 2016 and later versions. The syntax of the TEXTJOIN function is as below:

=TEXTJOIN(delimiter, ignore_empty, text1, [text2])

Where the TEXTJOIN function arguments are:
● delimiter - This is a required argument. It is the text string (possibly empty) placed between the joined values; it can be a space, comma, hash, or any other text string.
● ignore_empty - This is a required argument. If TRUE, empty cells and empty string values are ignored.
● text1/text2 - text1 is required; further text arguments are optional. One or more strings that you want to join together.

Note: the number of text arguments should not exceed 252, and the length of the resulting string should not exceed the cell limit of 32,767 characters.

Excel TEXTJOIN Function Examples

The examples below show how to use the Excel TEXTJOIN function to concatenate one or more text strings using a specified delimiter.

#1 To join the strings in cells B1, C1, and D1, use the formula: =TEXTJOIN(",",TRUE,B1,C1,D1).

#2 When you want to concatenate the strings with multiple delimiters in a specific order, you can use the following formula:

#3 To join values from multiple cell ranges with a double dash character as the delimiter, use the following formula:

#4 To join the strings in the range A1:C1 with a comma character as the delimiter, without ignoring empty cells, use the following formula:

Video: How to Use Excel TEXTJOIN Function

This Excel video tutorial covers combining text in Excel using the TEXTJOIN function. This feature, available in Excel 2016 and later versions, allows you to effortlessly merge text strings from multiple cells with a delimiter of your choice.

More TEXTJOIN Formula Examples
• Remove Numeric Characters from a Cell: If you want to remove numeric characters from an alphanumeric string, you can use a complex array formula combining the TEXTJOIN function, the MID function, the ROW function, and the INDIRECT function.
• Remove Non-Numeric Characters from a Cell: If you want to remove non-numeric characters from a text cell in Excel, you can use the array formula: {=TEXTJOIN("",TRUE,IFERROR(MID(B1,ROW(INDIRECT("1:"&LEN(B1))),1)+0,""))}
• How to Combine Text from Two or More Cells into One Cell: You can use the ampersand (&) symbol, or, in Excel 2016 and later, the TEXTJOIN function to combine text from multiple cells.
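TEXTJOIN's delimiter and ignore_empty behavior can also be sketched outside Excel; the following Python function is an illustrative analogue (my own code, not part of Excel):

```python
def textjoin(delimiter, ignore_empty, *texts):
    """Mimic Excel's TEXTJOIN: join values with a delimiter,
    optionally skipping empty values."""
    values = ["" if t is None else str(t) for t in texts]
    if ignore_empty:
        values = [v for v in values if v != ""]
    return delimiter.join(values)

print(textjoin(",", True, "a", "", "b", "c"))   # a,b,c
print(textjoin("-", False, "a", "", "b"))       # a--b
```

With ignore_empty set to FALSE, the empty value still contributes a delimiter, which is why the second result contains a doubled dash.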
Phase diagram of 3D SU(3) gauge-adjoint Higgs system and C-violation in hot QCD

Thermally reduced QCD leads to three-dimensional SU(3) gauge fields coupled to an adjoint scalar field $A_0$. We compute the effective potential in the one-loop approximation and evaluate the VEVs of $\mathrm{Tr}\,A_0^2$ and $\mathrm{Tr}\,A_0^3$. In the Higgs phase not only the former, but also the latter has a VEV. This happens where the SU(3) gauge symmetry is broken minimally, with U(2) still unbroken. The VEV of the cubic invariant breaks charge conjugation and CP. It is plausible that in the Higgs phase one has a transition, for large enough Higgs self-coupling, to a region where $\mathrm{Tr}\,A_0^3$ has no VEV and where the gauge symmetry is broken maximally to $U(1)\times U(1)$. For a number of colours larger than 3, an even richer phase structure is possible.

Physics Letters B
Pub Date: February 1999
High Energy Physics - Phenomenology
6 pages, 3 figures, LaTeX and revtex; additional references, corrected typos
Matrix element of (tensor products of) Pauli operators in Julia

Hi Arnab, yes this is a task that MPS (and ITensor) is perfectly suited for. The simplest way to do this is to:
1. choose a target string i1, i2, i3, ... that you want to compute (i.e. fix the i's)
2. apply the X^i_n operators to the MPS psi. Since these are single-site operators they can be applied very quickly and efficiently (even in parallel if you want to go there) by just multiplying each MPS tensor by X on its physical index. In ITensor, you obtain the X operator for that site, use the * operator to contract it with the MPS tensor, then call noprime to set the site's prime level back to 0. Of course, for sites where you act with the identity, you can just skip over such sites.
3. finally take the resulting MPS, calling it Xpsi, say, then call inner(phi,Xpsi) to obtain the overlap of the modified psi with phi.
Then you can repeat this for other i-strings. Finally, if you are planning to obtain a large number of amplitudes, or all of them, this way, there should be a way to organize the computation so that you reuse substrings, i.e. overlap by hand the modified psi tensors with phi tensors and then attach the next ones in both combinations and so on. It would take a bit of thought to see how much advantage there would be in doing this (because ultimately you'd still have to make exponentially many such "substring partial overlap" tensors anyway). Let me know if any of those steps aren't clear. Oh also we have a new framework precisely for making it easier to apply various operators to MPS, but it's not totally well documented yet. Here is a sample code file showing it off:
But it's overkill just for your case of single-site operators, which you can definitely do with the usual ITensor interface.

Hi Miles, Thank you for your answer. Actually my initial code does what you describe above.
But the issue is I am indeed extracting all the amplitudes (all 2^N of them) this way, and thus it would be nice to not write a line of code for each fixed value of i.

Hi Arnab, Yes let's discuss this some more. So in the version I am thinking of, you would write one line of code (or short block of code) not for *every* i, but which would work generically for any fixed, given i. Then you would put a "for" loop around this which loops over all possible values of i, and store the results in an array. For example, inside this block of code (within the for loop), as you reach each site, the code would ask "if i_n == 1", and if that is true, it would apply an X on site n. Otherwise it would go on to the next site. So there would be an inner loop from n=1:N with such an if statement inside. Is that what you already had in mind or was I misunderstanding the question? In terms of how to loop over all bit strings in Julia, there are a few different ways if you don't already know them (just thought I would share what I know). You'll have to double check that these work properly:

(1) This bit of code:

bits = [1:2 for n in 1:N]
its = NTuple{N,Int}[]
for it in Iterators.product(bits...)
    # ... code goes here ...
end

(2) or the solutions in this forum post:

Hi Miles, Thank you for the additional comments. I am indeed not very familiar with Julia as a language. I will try the looping over bit strings idea to see if that resolves the problem.

Hi Miles, A quick follow-up to this: from what you outlined in your answer, what role exactly does the "noprime" function play here, and how do you apply it? Is it possible to refer me to some documentation regarding this, or just a sample line of code that applies X⊗I⊗X⊗X⊗I⊗I to a 6-qubit MPS state |psi> in Julia?

Hi Arnab, so ITensor indices (Index objects) have a "prime level" which can be used to distinguish them from other indices that would otherwise compare equal to them.
The noprime function, when called on an ITensor, returns a copy of that ITensor with all of the prime levels of the indices set to zero. There is more information about all this in the following page of the ITensor book:

Also here is the documentation for the `noprime` function (at least the version taking an Index, and there are similar versions taking an ITensor, an MPS, etc.):

To access this kind of documentation, you can also do: ?noprime inside of the interactive mode of Julia. Here is what doing ?noprime inside of Julia gives:

help?> noprime
search: noprime noprime!

Return a copy of Index i with its prime level set to zero.

noprime[!](A::ITensor; <keyword arguments>) -> ITensor
noprime(is::IndexSet; <keyword arguments>) -> IndexSet

Set the prime level of the indices of an ITensor or IndexSet to zero. Optionally, only modify the indices with the specified keyword arguments.

Hi Miles, Sorry for these lingering questions. I tried another method for terms involving Pauli operators that have support on more than one site:

1. First define an MPO using AutoMPO:

sites = siteinds("S=1/2",N)
sample = AutoMPO()
sample += 4,"Sx",4,"Sx",5;

2. Use inner(phi,Operator,psi) to get the matrix element for X_4⊗X_5.

However this outputs nonsensical results like -7.968 for normalized states psi/phi. Can you point me to why this method doesn't work?

Hi Arnab, I'm not sure why you are getting that result. The only thing that looks 'off' about your code to me is that the line:

Operator = MPO(ampo,sites);

seems to be taking "ampo" as input rather than "sample". So did you define another AutoMPO object earlier in the code called "ampo"? When I run this code in Julia I get the following result:

julia> let
    sites = siteinds("S=1/2",N)
    sample = AutoMPO()
    sample += 4,"Sx",4,"Sx",5;
    A = randomMPS(sites,4);
    B = randomMPS(sites,4)
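Putting Miles's steps together, a minimal Julia (ITensors.jl) sketch of the single-site approach might look like the following; the function name and the use of the "X" operator name for "S=1/2" sites are my own choices, so treat it as an illustration rather than tested code:

```julia
using ITensors

# Sketch: compute <phi| X^{i_1} ⊗ ... ⊗ X^{i_N} |psi> for one bit string `bits`,
# where bits[n] == 1 means "apply X on site n" and 0 means "apply the identity".
function pauli_x_matrix_element(phi::MPS, psi::MPS, bits::Vector{Int}, sites)
    Xpsi = copy(psi)
    for n in 1:length(bits)
        if bits[n] == 1
            # contract X into the site tensor, then reset the prime level
            Xpsi[n] = noprime(op("X", sites[n]) * Xpsi[n])
        end
    end
    return inner(phi, Xpsi)
end

# usage sketch:
# N = 6
# sites = siteinds("S=1/2", N)
# psi = randomMPS(sites, 4); phi = randomMPS(sites, 4)
# amp = pauli_x_matrix_element(phi, psi, [1, 0, 1, 1, 0, 0], sites)
```

An outer loop over all 2^N bit strings (e.g. via Iterators.product, as in Miles's snippet) would then collect every amplitude.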
How To Multiply Fractions 2024

How to Multiply Fractions: A Complete Guide

Multiplying fractions seems complicated, but it is actually quite simple once you know how to do it. Whether you are helping with homework or trying to beef up your math skills, this guide will take you through the steps of multiplying fractions, from very basic cases through more complex ones, touching on helpful tips and real-life applications. Let's get started.

How to Multiply Fractions: Understanding Fractions

What Are Fractions? A fraction represents a part of a whole. It consists of two parts: the numerator and the denominator.
• Numerator: The top part of the fraction. It shows how many parts you have.
• Denominator: The bottom part of the fraction. It indicates how many equal parts the whole is divided into.
For example, in the fraction 3/4:
• The numerator is 3, meaning you have three parts.
• The denominator is 4, indicating the whole is divided into four equal parts.

How to Multiply Fractions: Types of Fractions

Before learning how to multiply fractions, it's important to understand the different types of fractions:
• Proper Fractions: These have numerators smaller than the denominators (e.g., 3/5).
• Improper Fractions: These have numerators that are greater than or equal to the denominators (e.g., 7/4).
• Mixed Numbers: These combine a whole number with a fraction (e.g., 1 1/2).
Understanding these types helps when multiplying fractions, particularly when dealing with mixed numbers.

How to Multiply Fractions: Step-by-Step: How to Multiply Fractions

Now that we have the basics down, let's go through the steps on how to multiply fractions.

How to Multiply Fractions Step 1: Multiply the Numerators

The first step in multiplying fractions is to multiply the numerators together. Example: Multiply 2/3 by 4/5. 2 × 4 = 8, so the new numerator is 8.

How to Multiply Fractions Step 2: Multiply the Denominators

Next, multiply the denominators together.
Now, 3 × 5 = 15, so the new denominator is 15.

How to Multiply Fractions Step 3: Write the New Fraction

After you have multiplied the numerators and the denominators, write them as a new fraction.
• The result of multiplying 2/3 and 4/5 is 8/15.

How to Multiply Fractions Step 4: Simplify the Fraction (If Necessary)

Finally, check if the fraction can be simplified. A fraction is simplified when there are no common factors between the numerator and denominator other than 1. In our example, 8/15 cannot be simplified because 8 and 15 do not share any common factors. Thus, the answer remains 8/15.

How to Multiply Fractions: Quick Summary of Steps
1. Multiply the numerators.
2. Multiply the denominators.
3. Write the new fraction.
4. Simplify if needed.

How to Multiply Fractions: Examples of Multiplying Fractions

Let's look at a few more examples to solidify this knowledge.

Example 1: Proper Fractions. Multiply 1/2 by 3/4.
1. Numerators: 1 × 3 = 3
2. Denominators: 2 × 4 = 8
3. New Fraction: 3/8
Result: 1/2 × 3/4 = 3/8

Example 2: Improper Fractions. Multiply 5/3 by 2/7.
1. Numerators: 5 × 2 = 10
2. Denominators: 3 × 7 = 21
3. New Fraction: 10/21
Result: 5/3 × 2/7 = 10/21 (It cannot be simplified.)

Example 3: Multiply a Whole Number by a Fraction. To multiply a whole number by a fraction, first convert the whole number to a fraction by putting it over 1. Multiply 3 by 1/4:
1. Convert 3 to 3/1.
2. Numerators: 3 × 1 = 3
3. Denominators: 1 × 4 = 4
4. New Fraction: 3/4
Result: 3 × 1/4 = 3/4

Example 4: Mixed Numbers. When multiplying mixed numbers, first convert them to improper fractions. Multiply 2 1/2 by 2/3.
1. Convert 2 1/2 to an improper fraction: 2 × 2 + 1 = 5, so 2 1/2 = 5/2.
2. Now multiply:
• Numerators: 5 × 2 = 10
• Denominators: 2 × 3 = 6
• New Fraction: 10/6
3. Simplify the fraction: 10/6 = 5/3 (after dividing the numerator and denominator by 2).
Result: 2 1/2 × 2/3 = 5/3

How to Multiply Fractions: Common Errors in Multiplying Fractions

While multiplying fractions is simple, a few common mistakes can be made: 1.
Not simplifying the fraction: Always check if you can reduce your answer to its simplest form.
2. Confusing addition with multiplication: Remember, you multiply both numerators and denominators, not add them.
3. Keeping the denominator the same when multiplying: This misconception may arise from adding or subtracting fractions, where the denominator must be common.
4. Failing to convert mixed numbers: Short-cutting the conversion process can lead to errors in multiplication.

How to Multiply Fractions: Tips to Avoid Mistakes
• Practice regularly: The more you practice, the more familiar you will become with the steps.
• Check your work: After solving a problem, quickly go through the steps to ensure accuracy.
• Use visual aids: Diagrams or physical manipulatives can help solidify understanding.

How to Multiply Fractions: Real-Life Applications of Multiplying Fractions

Understanding how to multiply fractions can be incredibly useful in everyday life. Here are some scenarios where you might need this skill:

Cooking and Baking: When a recipe calls for a certain fraction of an ingredient, knowing how to multiply fractions can help you adjust it. For instance, if a recipe calls for 3/4 cup of sugar and you want to make half the recipe, you would calculate: 3/4 × 1/2 = 3/8 cup.

When you find an item on sale for a fraction of its original price, multiplying can help you find out how much you will save. If a shirt costs $40 and is 25% off, you can calculate the discount:
• Convert 25% to a fraction: 25/100 = 1/4
• Multiply: 40 × 1/4 = 10
This means you save $10!

Crafts and Construction: When working on craft projects or making furniture, you often need to measure materials accurately, which can involve fractions. For example, if you are using 2/3 of a board and need 1/2 of that piece for a project, you can determine how much board you need: 2/3 × 1/2 = 2/6 = 1/3. This tells you that a third of the board is needed for the project.
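The steps and examples above can be checked with Python's fractions module, which reduces results to lowest terms automatically:

```python
from fractions import Fraction

# Steps 1-4: multiply numerators, multiply denominators, simplify.
print(Fraction(2, 3) * Fraction(4, 5))   # 8/15

# Mixed number: 2 1/2 -> improper fraction 5/2, then multiply and reduce.
two_and_a_half = Fraction(2 * 2 + 1, 2)  # 5/2
print(two_and_a_half * Fraction(2, 3))   # 5/3

# Real-life example: half of 3/4 cup of sugar.
print(Fraction(3, 4) * Fraction(1, 2))   # 3/8
```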
How to Multiply Fractions: Current News

Recent developments in educational technology are making learning how to multiply fractions more interactive and engaging. Many teachers now use digital platforms whose lessons let students see the process visually through animation and simulation. Moreover, research shows that understanding fractions is a foundation of advanced math, so teachers aim to ensure that students not only acquire the skill of multiplying fractions but also understand the concept behind fractions well enough to apply it in various situations. Math practice mobile apps have also become popular; they provide practice on a wide range of problems, from simple to complex fractions, and often include games that can really boost learning.

How to Multiply Fractions: Conclusion

Multiplying fractions is a crucial skill for everyday events such as cooking or budgeting. By following the steps highlighted in this guide, working through the different types of problems, and avoiding the common mistakes, you can become confident in multiplying fractions. Whether you are helping a child with homework or refreshing your own skills, knowing the steps of multiplication gives you the confidence to handle any fraction-based hurdle that may come your way. Just remember, practice makes perfect; keep working on these skills, and before you know it, multiplying fractions will come naturally. Now, get calculating!
A091362 - OEIS

Apparently if the squares of the digits of a prime sum to a prime, it is more likely that the digits themselves also sum to a prime. In the first 10,000 primes there are 1558 primes p such that the squares of the digits of p sum to a prime. Of these, only 360 are such that the sums of the digits are not prime. Interestingly, all of these primes have a digit sum of 25 or 35. Essentially this sequence is the terms of (primes whose digits squared sum to a prime) that do not also appear in (primes whose digits sum to a prime).

a(1) = 997 because 9+9+7 = 25, which is not prime, but 9^2+9^2+7^2 = 211, which is prime.

ssdQ[n_] := Module[{idn = IntegerDigits[n]}, !PrimeQ[Total[idn]] && PrimeQ[Total[idn^2]]];
Select[Prime[Range[2100]], ssdQ] (* Harvey P. Dale, Jun 28 2011 *)

(primes whose digits sum to a prime), (primes whose digits squared sum to a prime).

Chuck Seggelin (barkeep(AT)plastereddragon.com), Jan 03 2004
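The definition can be verified with a short brute-force search (plain Python, no libraries; the helper names are my own):

```python
def is_prime(n):
    # trial division, sufficient for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def digits(n):
    return [int(c) for c in str(n)]

# Primes whose squared digits sum to a prime but whose digits do not.
terms = [p for p in range(2, 2000) if is_prime(p)
         and not is_prime(sum(digits(p)))
         and is_prime(sum(d * d for d in digits(p)))]
print(terms[0])  # 997
```

As the entry notes, 997 has digit sum 25 (composite) while 81 + 81 + 49 = 211 is prime.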
How many amps is 50 kVA?

kVA to amperes, conversion table (three-phase, 220 V):
2 kVA: 5.25 A
3 kVA: 7.87 A
4 kVA: 10.50 A
5 kVA: 13.12 A

How many amps can a 50 kVA transformer handle?
50 kVA is 50,000 volt-amperes, which at 240 V single-phase is just over 208 A, but such a transformer would typically feed more than one 200 A service.

How many amps is a 10 kVA transformer good for?
About 45.45 A at 220 V single-phase. kVA combines electrical potential (volts) and electrical current (amps); 1 kVA is a frequently used unit representing 1,000 volt-amperes.

kVA to Amps (single-phase, 220 V):
5 kVA: 22.73 A
10 kVA: 45.45 A

How many amps is a 3-phase supply?
If a three-phase supply is available, then the 24,000 watts are divided by 3, meaning 8,000 watts is used per phase. The current per phase is then also down to a third of what it would be with a single-phase supply (about 30 A per phase rather than 100 A).

How do you calculate amps in a 3-phase circuit?
Divide the power consumption in watts by the line voltage multiplied by the power factor; for three-phase circuits, also divide by the square root of 3. If your calculator doesn't have a square root function, use 1.73 as an approximation of the square root of 3.

How many amps is a 25 kVA 3-phase transformer?
Three-phase transformer full-load current:
20 kVA: 56.6 A at 208 V, 24.1 A at 480 V
25 kVA: 69.5 A at 208 V, 30.1 A at 480 V
30 kVA: 83.4 A at 208 V, 36.1 A at 480 V
37.5 kVA: 104 A at 208 V, 45.2 A at 480 V

How many amps is a 3 kVA transformer good for?
Three-phase transformer full-load current:
3 kVA: 8.3 A at 208 V, 7.2 A at 240 V
6 kVA: 16.6 A at 208 V, 14.4 A at 240 V
9 kVA: 25.0 A at 208 V, 21.7 A at 240 V

How many kVA is 100 amps 3-phase?
Note: the conversions in the following table assume a voltage of 220 V, three-phase AC.

Amps to kVA, conversion table (220 V, AC, three-phase):
70 A: 26.67 kVA
80 A: 30.48 kVA
90 A: 34.29 kVA
100 A: 38.11 kVA

What is the 1.73 in 3-phase?
In a 3-phase system, the voltage between any two phases is higher than the voltage of an individual phase by a factor of 1.73 (the square root of 3, to be exact). A 220 V system with three 220 V phases has a 220 × 1.73 ≈ 380 V phase-to-phase voltage.

How many amps per leg is a 200 amp service?
In the US, a 200 A service will provide 200 A per leg. You get a two-pole 200 A circuit breaker with such a service, with 120 V between each leg and neutral and 240 V between legs.

What is the ratio of kVA to amps in three phase?
The three-phase current is 1,000 times the kVA divided by 1.73 times the voltage: I = (kVA × 1000) / (1.73 × E).

How many amps does a 50 kVA transformer feed?
50 kVA is 50,000 volt-amperes, which at 240 V single-phase is just over 208 A, but a 50 kVA transformer would typically feed more than one 200 A service. Let's keep this single-phase primary.

How many amps is 3000 kVA?
At 220 V three-phase:
2000 kVA: 5248.64 A
3000 kVA: 7872.96 A
4000 kVA: 10497.28 A
5000 kVA: 13121.60 A
6000 kVA: 15745.92 A
7000 kVA: 18370.24 A

How do you find the amperage of a 3-phase circuit?
Example: find the current through a 10 kVA, three-phase circuit operating at 250 V. Divide 1,000 times the kVA by 1.73 times the voltage: I = (kVA × 1000) / (E × 1.73) = 10,000 / (250 × 1.73) ≈ 23.1 A.
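The single- and three-phase conversions above can be written as two small helper functions (a sketch; the function names are my own):

```python
import math

def kva_to_amps_single_phase(kva, volts):
    # I = (kVA * 1000) / V
    return kva * 1000 / volts

def kva_to_amps_three_phase(kva, volts):
    # I = (kVA * 1000) / (sqrt(3) * V)
    return kva * 1000 / (math.sqrt(3) * volts)

print(round(kva_to_amps_single_phase(10, 220), 2))  # 45.45
print(round(kva_to_amps_three_phase(2, 220), 2))    # 5.25
print(round(kva_to_amps_three_phase(50, 220), 1))   # 131.2
```

The last line answers the title question for a three-phase 220 V supply; at 240 V single-phase the same 50 kVA works out to about 208 A, as noted above.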
Linear Classifiers

Explore linear classifiers, their principles, and their training process.

What is a linear classifier?

Suppose we want to build a machine learning model to classify the following points into two categories based on their color. It is very easy to see that we can find a single point that separates them perfectly. The goal of our model is to find this point.
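For one-dimensional points that are perfectly separable, such a separating point can be found directly; here is a minimal sketch (the coordinates are made up, since the original figure is not shown):

```python
# Two perfectly separable 1-D classes: any threshold between
# max(class_a) and min(class_b) classifies every point correctly.
class_a = [0.5, 1.1, 1.8]   # e.g. the "red" points
class_b = [3.2, 4.0, 5.5]   # e.g. the "blue" points

threshold = (max(class_a) + min(class_b)) / 2.0  # midpoint of the gap

def predict(x):
    return "a" if x < threshold else "b"

print(threshold)      # 2.5
print(predict(2.0))   # a
print(predict(4.2))   # b
```

In higher dimensions the separating point generalizes to a line, plane, or hyperplane, which is what a linear classifier learns.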
Value of the test statistic. What is your decision regarding the null hypothesis? Determine, in a single sentence, the results of the statistical test. - London Term Papers

Heinz, a manufacturer of ketchup, uses a particular machine to dispense 16 ounces of its ketchup into containers. From many years of experience with the particular dispensing machine, Heinz knows that the amount of product in each container follows a normal distribution with a mean of 16 ounces and a standard deviation of 0.15 ounce. A sample of 50 containers filled last hour revealed that the mean amount per container was 16.017 ounces. Does this evidence suggest that the mean amount dispensed is different from 16 ounces? Use the .05 significance level.
State the null and alternate hypothesis.
What is the probability of a Type I error?
Give the formula for the test statistic.
State the decision rule.
Determine the value of the test statistic.
What is your decision regarding the null hypothesis?
Determine, in a single sentence, the results of the statistical test.
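One way to carry out the numerical part of this problem is a two-tailed one-sample z-test (the standard formula for a known population standard deviation; the computation below is mine, not part of the original prompt):

```python
import math

mu0   = 16.0     # hypothesized mean (H0: mu = 16)
xbar  = 16.017   # sample mean
sigma = 0.15     # known population standard deviation
n     = 50       # sample size

# Test statistic: z = (xbar - mu0) / (sigma / sqrt(n))
z = (xbar - mu0) / (sigma / math.sqrt(n))
print(round(z, 3))  # 0.801

# Decision rule at the .05 level (two-tailed): reject H0 if |z| > 1.96
print(abs(z) > 1.96)  # False -> fail to reject H0
```

Since the computed z falls inside the acceptance region, the sample does not provide evidence that the mean amount dispensed differs from 16 ounces.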
Held-To-Maturity Securities - principlesofaccounting.com

Held-to-maturity securities are normally accounted for by the amortized cost method. To elaborate, if an individual wishes to borrow money he or she would typically approach a bank or other lender. But a corporate giant's borrowing needs may exceed the lending capacity of any single bank or lender. Therefore, the large corporate borrower may instead issue "bonds," thereby splitting a large loan into many small units. For example, a bond issuer may borrow $500,000,000 by issuing 500,000 individual bonds with a face amount of $1,000 each (500,000 X $1,000 = $500,000,000). If an individual wished to loan some money to that corporate giant, he or she could do so by simply buying ("investing in") one or more of the bonds. The specifics of bonds will be covered in greater detail in a subsequent chapter, where bonds are examined from the issuer's perspective (i.e., borrower). For now, bonds will be considered from the investor perspective. Each bond has a "face value" (e.g., $1,000) that corresponds to the amount of principal to be paid at maturity, a contract or stated interest rate (e.g., 5%, meaning that the bond pays interest each year equal to 5% of the face amount), and a term (e.g., 10 years, meaning the bond matures 10 years from the designated issue date). In other words, a $1,000, 5%, 10-year bond would pay $50 per year for 10 years (as interest), and then pay $1,000 at the stated maturity date.

The Issue Price

How much would one pay for a 5%, 10-year bond: exactly $1,000, more than $1,000, or less than $1,000? The answer to this question depends on many factors, including the credit-worthiness of the issuer, the remaining time to maturity, and the overall market conditions. If the "going rate" of interest for other bonds was 8%, one would likely avoid this 5% bond (or only buy it if it were issued at a deep discount).
On the other hand, the 5% rate might look pretty good if the "going rate" was 3% for other similar bonds (in which case one might actually pay a premium to get the bond). So, bonds might have an issue price that is at face value (also known as par), or above (at a premium) or below (at a discount) face. The price of a bond is typically stated as a percentage of face; for example, 103 would mean 103% of face, or $1,030. The specific calculations that are used to determine the price one would pay for a particular bond are revealed in a subsequent chapter.

Bonds Purchased at Par

An Investment in Bonds account (at the purchase price plus brokerage fees and other incidental acquisition costs) is established at the time of purchase. Premiums and discounts on bond investments are not recorded in separate accounts: The above entry reflects a bond purchase as described, while the following entry reflects the correct accounting for the receipt of the first interest payment after 6 months. The entry that is recorded on June 30 would be repeated with each subsequent interest payment, continuing through the final interest payment on December 31, 20X5. In addition, at maturity, when the bond principal is repaid, the investor would also make this final accounting entry:

Bonds Purchased at a Premium

When bonds are purchased at a premium, the investor pays more than the face value up front. However, the bond's maturity value is unchanged; thus, the amount due at maturity is less than the initial issue price! This may seem unfair, but consider that the investor is likely generating higher annual interest receipts than on other available bonds. Assume the same facts as for the preceding bond illustration, but this time imagine that the market rate of interest was something less than 5%. Now, the 5% bonds would be very attractive, and entice investors to pay a premium:
However, remember that only $5,000 will be repaid at maturity. Thus, the investor will be “out” $300 over the life of the bond. Thus, accrual accounting dictates that this $300 “cost” be amortized (“recognized over the life of the bond”) as a reduction of the interest income: The preceding entry can be confusing and bears additional explanation. Even though $125 was received, only $75 is being recorded as interest income. The other $50 is treated as a return of the initial investment; it corresponds to the premium amortization ($300 premium allocated evenly over the life of the bond: $300 X (6 months/36 months)). The premium amortization is credited against the Investment in Bonds account. This process of premium amortization would be repeated with each interest payment. Therefore, after three years, the Investment in Bonds account would be reduced to $5,000 ($5,300 – ($50 amortization X 6 semiannual interest recordings)). This method of tracking amortized cost is called the straight-line method. There is another conceptually superior approach to amortization, called the effective-interest method, which will be revealed in later chapters. However, it is a bit more complex and the straight-line method presented here is acceptable so long as its results are not materially different than would result under the effective-interest method. In addition, at maturity, when the bond principal is repaid, the investor would make this final accounting entry: In an attempt to make sense of the preceding, perhaps it is helpful to reflect on just the “cash out” and the “cash in.” How much cash did the investor pay out? It was $5,300; the amount of the initial investment. How much cash did the investor get back? It was $5,750; $125 every 6 months for 3 years and $5,000 at maturity. What is the difference? It is $450 ($5,750 – $5,300). This is equal to the income recognized via the journal entries ($75 every 6 months, for 3 years). 
At its very essence, accounting measures the change in money as income. Bond accounting is no exception, although the income is sometimes elusive to see. The following amortization table reveals certain facts about the bond investment accounting and is worth studying closely. Be sure to “tie” the amounts in the table to the illustrated journal entries.

Sometimes, complex transactions are easier to understand when one simply thinks about the balance sheet impact. For example, on December 31, 20X4, Cash is increased $125, but the Investment in Bonds account is decreased by $50 (dropping from $5,150 to $5,100). Thus, total assets increased by a net of $75. The balance sheet remains in balance because the $75 of interest income causes a corresponding increase in retained earnings.

Bonds Purchased at a Discount

The discount scenario is very similar to the premium, but “in reverse.” When bonds are purchased at a discount, the investor pays less than the face value up front. However, the bond’s maturity value is unchanged; thus, the amount due at maturity is more than the initial issue price! This may seem like a bargain, but consider that the investor is likely getting lower annual interest receipts than is available on other bonds. Assume the same facts as for the previous bond illustration, except imagine that the market rate of interest was something more than 5%. Now, the 5% bonds would not be very attractive, and investors would only be willing to buy them at a discount:

The above entry assumes the investor paid 97% of par ($5,000 X 97% = $4,850). However, remember that a full $5,000 will be repaid at maturity. The investor will get an additional $150 over the life of the bond. Accrual accounting dictates that this $150 “benefit” be recognized over the life of the bond as an increase in interest income:

The preceding entry would be repeated at each interest payment date. Again, further explanation may prove helpful.
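The discount case mirrors the premium case with the sign flipped: the $150 discount is added to income, $25 per period, and the Investment in Bonds account grows toward face. A minimal sketch using the $4,850 purchase figures (again with illustrative, not authoritative, function names):

```python
def discount_schedule(cost=4850, face=5000, periods=6, cash_interest=125):
    """Straight-line amortization of a bond discount.

    Returns one row per semiannual period:
    (period, cash received, interest income, amortization, carrying value).
    """
    amortization = (face - cost) / periods          # $150 / 6 = $25 per period
    carrying = cost
    rows = []
    for period in range(1, periods + 1):
        income = cash_interest + amortization        # $125 + $25 = $150
        carrying += amortization                     # debit to Investment in Bonds
        rows.append((period, cash_interest, income, amortization, carrying))
    return rows

rows = discount_schedule()
print(rows[-1][4])                  # carrying value grows to 5000.0 at maturity
print(sum(r[2] for r in rows))      # total income over 3 years: 900.0
```

The $900 total income again equals the cash difference: $5,750 received ($125 x 6 plus $5,000 at maturity) less the $4,850 paid.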
Even though only $125 of cash is received, $150 of interest income is recorded. The other $25 is added to the Investment in Bonds account, as it corresponds to the discount amortization ($150 discount allocated evenly over the life of the bond: $150 X (6 months/36 months) = $25). This process of discount amortization would be repeated with each interest payment. Therefore, after three years, the Investment in Bonds account would be increased to $5,000 ($4,850 + ($25 amortization X 6 semiannual interest recordings)). This example again uses the straight-line method of amortization, since the amount of interest is the same each period. The alternative effective-interest method demonstrated later in the book would be required if the results would be materially different.

When the bond principal is repaid at maturity, the investor would also make this final entry:

Consider the “cash out” and the “cash in.” How much cash did the investor pay out? It was $4,850, the amount of the initial investment. How much cash did the investor get back? It is the same as in the premium illustration: $5,750 ($125 every 6 months for 3 years, plus $5,000 at maturity). What is the difference? It is $900 ($5,750 – $4,850). This is equal to the income recognized ($150 every 6 months, for 3 years). Be sure to “tie” the amounts in the following amortization table to the related entries.

What is the balance sheet impact on June 30, 20X5? Cash increased by $125, and the Investment in Bonds account increased $25. Thus, total assets increased by $150. The balance sheet remains in balance because the $150 of interest income causes a corresponding increase in retained earnings.

Did you learn?

How should an initial investment in a bond be recorded? What is meant by the amortized cost method?
Know the meaning of bond terminology, including “issue price,” “face (or par),” “premium,” and “discount.”
Note that the recorded bond investment account includes the amount of premium or discount (a separate account is not used).
Be able to record bond interest income.
Understand why it is necessary to amortize a premium or discount on a bond investment.
Be able to apply the straight-line method of amortization.
Be able to account for the full life cycle of a bond investment, including situations involving premiums and discounts.