08 Relations, Compositions and Properties

Lecture from: 14.10.2024 | Video: Videos ETHZ

In essence, a relation is simply a connection or association between elements from two sets. Imagine you have a set of students (A) and a set of courses they enroll in (B). A relation could describe which students take which courses. This connection can be represented as pairs: each pair links a student to the course(s) they are enrolled in.

Formally, we define a relation as R ⊆ A × B. This means that R is a subset of the Cartesian product of sets A and B. The Cartesian product A × B is the set of all possible ordered pairs (a, b) where the first element comes from A and the second element comes from B.

Example: ≤ is a relation on the natural numbers: it consists of all pairs (a, b) with a ≤ b.

Example: School

Let's say:
• A = {Alice, Bob, Carol} (set of students)
• B = {Math, Physics, History} (set of courses)

A relation R ⊆ A × B could represent which students take which courses. We now introduce a new set T = {David, Emily, Frank} (set of teachers) and a relation S ⊆ B × T that connects subjects with their respective teachers.

Visualizing the Connections:
• Students (A): Alice, Bob, Carol
• Courses (B): Math, Physics, History
• Teachers (T): David, Emily, Frank

And the relations:
• R connects students to courses they take (e.g., Alice takes Math).
• S connects subjects with teachers who teach them (e.g., David teaches Math).

Given a relation R ⊆ A × B, its inverse, denoted as R⁻¹ ⊆ B × A, is defined as:

R⁻¹ = {(b, a) | (a, b) ∈ R}

Example: Inverse of ≤
The inverse of ≤ is ≥: (b, a) ∈ ≤⁻¹ exactly when a ≤ b.

Example: Inverse of School Example
• R⁻¹ — "Subject is taken by Student"
• S⁻¹ — "Teacher teaches Subject"

Example: Parent Relationship
Let us define R as {(a, b) | a is a parent of b}. Then the inverse is R⁻¹ = {(b, a) | a is a parent of b}, i.e., "is a child of".

Composition of Relations

The composition of two relations, R and S, is denoted as R ∘ S. It represents a new relation formed by applying R and then S.

Formal Definition: If R ⊆ A × B and S ⊆ B × C, then their composition, R ∘ S ⊆ A × C, is defined as:

R ∘ S = {(a, c) | there exists b ∈ B with (a, b) ∈ R and (b, c) ∈ S}

Example: Student-Subject-Teacher
• R ⊆ A × B (Student takes Subject)
• S ⊆ B × T (Subject is taught by Teacher)
Then R ∘ S ⊆ A × T (Student is taught by Teacher).

Example: Child-Parent
Let P be the relation "is a parent of". Then:
• P ∘ P (is grandparent of)
• (P⁻¹ ∘ P) \ id (is half-sibling of) — The identity relation (id) is removed to exclude a person being their own half-sibling.
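The school example above can be sketched directly with relations as Python sets of pairs. The specific pairings below (who takes or teaches what) are illustrative assumptions; only the names come from the notes.

```python
# A sketch of the school example with relations as Python sets of pairs.
# The specific pairings (who takes or teaches what) are illustrative
# assumptions; only the names come from the notes.

R = {("Alice", "Math"), ("Bob", "Physics"), ("Carol", "History")}    # student takes course
S = {("Math", "David"), ("Physics", "Emily"), ("History", "Frank")}  # subject taught by teacher

def inverse(rel):
    # R^-1 = {(b, a) | (a, b) in R}
    return {(b, a) for (a, b) in rel}

def compose(r, s):
    # R o S = {(a, c) | there is a b with (a, b) in R and (b, c) in S}
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

print(inverse(S))     # "teacher teaches subject"
print(compose(R, S))  # "student is taught by teacher"
```

Composing R with S chains students through subjects to teachers, exactly as in the Student-Subject-Teacher example.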
Proof of Lemma 3.8

Visualization using Matrices

Relations can be represented visually using matrices, offering a structured way to understand their connections.

Matrix Representation: Consider a relation R on a set A. We can represent this relation as a matrix M where:
• The rows and columns of the matrix correspond to the elements in set A.
• An entry M(a, b) is 1 if (a, b) ∈ R (meaning there's a connection from a to b) and 0 otherwise.

Inverse Relation and Matrix Transposition: The inverse relation R⁻¹ of a relation R is defined as: (b, a) ∈ R⁻¹ if and only if (a, b) ∈ R. Importantly, the matrix representation of the inverse relation is simply the transpose of the original matrix.

Properties and Rules

1. Associativity: Composition of relations is associative. If we have three relations R, S, and T (where R ⊆ A × B, S ⊆ B × C, and T ⊆ C × D), then:

(R ∘ S) ∘ T = R ∘ (S ∘ T)

2. Non-Commutativity: Composition of relations is not commutative. This means that generally, R ∘ S ≠ S ∘ R.

3. Identity Relation: There exists an identity relation for each set, denoted as id_A if A is our set.
• id_A = {(a, a) | a ∈ A} — it connects an element only to itself. Applying the identity relation doesn't change anything.

Example with Composition: Let R be the relation "student takes a subject" and id_B be the identity relation on the set of subjects. Then:

R ∘ id_B = R

This means taking a subject followed by "being that same subject" doesn't change anything!

4. Reflexivity, Symmetry, Antisymmetry, Transitivity, Asymmetry: These are properties that relations can possess. A relation R on a set A is:

• Reflexive: if for every element a in A, (a, a) ∈ R. Alternate: id ⊆ R. Similarly, irreflexive is the opposite: (a, a) ∉ R for every a in A.
• Symmetric: if for every pair (a, b) ∈ R, then (b, a) ∈ R. Alternate: R = R⁻¹.
• Antisymmetric: if for every pair with (a, b) ∈ R and (b, a) ∈ R, then a = b. (If (a, b) and (b, a) are both in the relation, then 'a' and 'b' must be the same element.)
Alternative: A relation R is antisymmetric if and only if R ∩ R⁻¹ ⊆ id.

• Transitive: if for every triple with (a, b) ∈ R and (b, c) ∈ R, then (a, c) ∈ R. Alternative: A relation R is transitive if and only if R ∘ R ⊆ R.
• Asymmetric: if for every pair (a, b) ∈ R, (b, a) ∉ R. (In other words, if (a, b) is in the relation, then (b, a) cannot be.)

Transitive Closure

The transitive closure of a relation R on a set A is the smallest transitive relation containing R. In simpler terms, it includes all pairs (a, b) where there exists a path from a to b through R, even if that path involves multiple steps.

Let R be a relation on set A. The transitive closure of R, denoted as R⁺, can be defined recursively as follows:
• R ⊆ R⁺ (the original relation is included)
• If (a, b) ∈ R⁺ and (b, c) ∈ R⁺, then (a, c) ∈ R⁺

Example: "Parent" and "Ancestor" Relationships

Let's say our set A represents people, and the relation R is defined as "is a parent of." For instance: (John, Mary) ∈ R (John is Mary's parent).

Now, we want to find the transitive closure R⁺, which would represent the "ancestor" relationship.
• Direct Parents: The initial relation R includes direct parent-child pairs like (John, Mary).
• Transitive Steps: Since John is Mary's parent, and Mary could potentially have children who are also descendants of John, we include those relationships in R⁺ too. For example: if Mary has a child named David, then (John, David) would be in R⁺, because John is an ancestor of David through his direct relationship with Mary.

The transitive closure can grow quite large for complex relations with many elements. There are algorithms to efficiently compute the transitive closure, but it's essential to understand the concept and how it extends a relation beyond direct connections.

Continue here: 09 Equivalency Relation and Classes, Partitions, Partially Ordered Sets
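The recursive definition of the transitive closure translates directly into a small fixed-point computation; a sketch, with a hypothetical family for the parent/ancestor example:

```python
def transitive_closure(rel):
    """Smallest transitive relation containing rel: keep adding the pairs
    required by the rule "(a, b) and (b, c) in R+ implies (a, c) in R+"
    until nothing new appears."""
    closure = set(rel)
    while True:
        new_pairs = {(a, c)
                     for (a, b) in closure
                     for (b2, c) in closure
                     if b == b2} - closure
        if not new_pairs:
            return closure
        closure |= new_pairs

# Hypothetical family: John is Mary's parent, Mary is David's parent.
parent = {("John", "Mary"), ("Mary", "David")}
ancestor = transitive_closure(parent)
print(sorted(ancestor))  # [('John', 'David'), ('John', 'Mary'), ('Mary', 'David')]
```

The pair (John, David) appears only in the closure, reflecting the "ancestor through an intermediate step" idea described above.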
Count Data Models and Consumer Purchasing Behavior Analytics

From an econometrics standpoint, count data is data whose observations are non-negative integer values {0, 1, 2, 3, …}; these integers come from counting rather than ranking. In marketing analytics, the focus is commonly on explaining a limited dependent variable of this kind (count data). Typical examples of customer behavior are the number of units of a given product purchased, or the number of customers who visited a website within a one-week period.

As with many other customer-analytics probability models, we can use a particular distribution to model individual-level behavior. A basic building block for count data is the Poisson distribution. A Poisson random variable X counts the number of occurrences of a certain event in a unit of space or time. Its probability mass function is

P(X = x) = e^(−λ) λ^x / x!

where x = 0, 1, 2, … are non-negative integers with no upper limit.

Now assume a website has 10,000 registered users whose buying activities are logged. To simplify the modeling further, we consider one user's buying behavior in one particular week.

Figure 1

Thus, the Poisson model uses the following setup:
• We let the random variable Y[i] denote the number of times that an individual i made a purchase on the website in a certain time period.
• On an individual level, Y[i] is assumed to be Poisson distributed with mean λ[i] (the average number of purchases in one time period).

In the practice of marketing analytics, heterogeneity among observations (buyers or consumers) is one of the most important issues the model needs to deal with. Marketers often speak of the "80/20 rule": 80% of sales revenue comes from just 20% of customers. The same pattern is often observed here: most consumers have just 1 or 2 purchases, while a tiny percentage have very many. There are two types of heterogeneity, observed and unobserved. The Poisson regression model handles observed heterogeneity, while the negative binomial regression model deals with unobserved heterogeneity.

Figure 2: Density of Customers' Purchases

• We assume that the exposure rates λ follow a gamma distribution, so we can model the heterogeneity across individuals.

Table 1 records the number of times each customer purchased in one unit period, along with demographic features such as level of education, number of children, etc.

| userid  | edu | region | householdsize | age | income | child | race | Purchasing Times |
|---------|-----|--------|---------------|-----|--------|-------|------|------------------|
| 9573834 | .   | 4      | 2             | 10  | 5      | 1     | 1    | 2                |
| 9576277 | .   | 1      | 3             | 8   | 7      | 1     | 1    | 5                |
| 9581009 | .   | 2      | 2             | 7   | 5      | 1     | 1    | 1                |

Table 1: Sample Data of Customer Purchasing Activities

The Poisson regression model is developed by revising the ordinary linear regression model:
• An individual's mean is related to her observable attributes through a link function that guarantees λ > 0;
• after the log transformation, the link function becomes

ln λ = ln λ₀ + b₁x₁ + b₂x₂ + … (Eq. 4)

So a Poisson regression model is almost the same as an ordinary linear regression, with two main differences. Firstly, the regression errors follow a Poisson distribution, not a normal distribution.
Secondly, instead of modeling the dependent variable Y as a linear function of the independent variables, it models the natural log of the response variable, ln(Y), as a linear function of the predictors.

The Poisson model assumes that the mean and variance of the errors are equal. In practice, however, the variance of the errors is usually larger than the mean (although it can also be somewhat smaller). Because many customer characteristics are unobserved in practice, the Poisson regression model can miss important information, producing omitted-variable errors.

To capture the unobserved heterogeneity among individuals, let λ₀ (in Eq. 3) vary across individuals according to a gamma distribution with parameters r and α. The negative binomial distribution is a generalization of the Poisson distribution in which the distribution's parameter (λ₀) is itself treated as a random variable. The variation of this parameter accounts for a variance of the data that is greater than the mean, so including unobserved heterogeneity can greatly improve the model's fit.
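The two distributions discussed here can be sketched in a few lines: the Poisson pmf, and the gamma-mixed Poisson that yields the negative binomial, whose variance exceeds its mean. The parameter values r = 2 and α = 0.5 are made up for illustration, not taken from the article.

```python
import math

def poisson_pmf(x, lam):
    # P(X = x) = exp(-lam) * lam**x / x!
    return math.exp(-lam) * lam ** x / math.factorial(x)

def neg_binomial_pmf(x, r, alpha):
    """Negative binomial as a Gamma(r, alpha)-mixed Poisson:
    P(X = x) = Gamma(r+x) / (Gamma(r) x!) * (alpha/(alpha+1))**r * (1/(alpha+1))**x,
    computed on the log scale to avoid overflow for large x."""
    log_p = (math.lgamma(r + x) - math.lgamma(r) - math.lgamma(x + 1)
             + r * math.log(alpha / (alpha + 1)) - x * math.log(alpha + 1))
    return math.exp(log_p)

# Overdispersion: mixing Poisson rates over a gamma distribution gives
# mean r/alpha but variance (r/alpha) * (1 + 1/alpha), which exceeds the mean.
r, alpha = 2.0, 0.5   # hypothetical gamma parameters for illustration
mean = r / alpha
variance = mean * (1 + 1 / alpha)
print(mean, variance)  # 4.0 12.0
```

The printed variance (12) exceeds the mean (4), illustrating exactly the overdispersion that motivates replacing the plain Poisson model with the negative binomial.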
On the dearth of coproducts in the category of locally compact groups Alexandru Chirvasitu We prove that a family of at least two non-trivial, almost-connected locally compact groups cannot have a coproduct in the category of locally compact groups if at least one of the groups is connected; this confirms the intuition that coproducts in said category are rather hard to come by, save for the usual ones in the category of discrete groups. Along the way we also prove a number of auxiliary results on characteristic indices of locally compact or Lie groups as defined by Iwasawa: that characteristic indices can only decrease when passing to semisimple closed Lie subgroups, and also along dense-image morphisms. Keywords: locally compact group; Lie group; almost-connected; pro-Lie; coproduct; representation 2020 MSC: 22D05; 22E46; 22D12; 18A30 Theory and Applications of Categories, Vol. 38, 2022, No. 20, pp 791-810. Published 2022-06-01. TAC Home
Diffusion Coefficient - (Mathematical Biology) - Vocab, Definition, Explanations | Fiveable

Diffusion Coefficient, from class: Mathematical Biology

The diffusion coefficient is a parameter that quantifies the rate at which particles spread out or diffuse through a medium. It plays a crucial role in reaction-diffusion equations, as it determines how quickly substances like chemicals or biological entities can move and interact in space, influencing patterns of growth and distribution.

5 Must Know Facts For Your Next Test

1. The diffusion coefficient is influenced by factors such as temperature, the size of the diffusing particles, and the properties of the medium they are diffusing through.
2. In reaction-diffusion models, different diffusion coefficients can lead to distinct patterns, such as stripes or spots, depending on how substances interact with each other.
3. The units of the diffusion coefficient are typically expressed in square centimeters per second (cm²/s), reflecting how far particles move over time.
4. A higher diffusion coefficient indicates that particles can spread out more quickly, which can be critical in processes like chemical reactions or biological signaling.
5. In many mathematical models, the diffusion coefficient is assumed to be constant; however, it can vary spatially and temporally in real biological systems.

Review Questions

• How does the diffusion coefficient influence pattern formation in reaction-diffusion equations?

The diffusion coefficient is central to determining how quickly substances spread within a medium. In reaction-diffusion equations, varying the diffusion coefficients of different species can lead to diverse patterns like spots or stripes.
For instance, if one chemical diffuses faster than another, it can create zones where concentrations vary dramatically, resulting in spatial structures that are characteristic of many biological phenomena.

• Compare and contrast Fick's laws of diffusion and their relevance to understanding the diffusion coefficient.

Fick's first law states that the flux of particles is proportional to the concentration gradient, while Fick's second law accounts for how this flux changes over time. Both laws highlight the importance of the diffusion coefficient, as they establish a quantitative relationship between particle movement and concentration differences. Understanding these laws helps in analyzing how quickly substances can diffuse in various environments, influencing processes like nutrient uptake in cells or pollutant dispersion in ecosystems.

• Evaluate how variations in the diffusion coefficient can impact biological systems and their modeling.

Variations in the diffusion coefficient can significantly affect biological systems by altering how substances interact and spread. For example, if certain signaling molecules diffuse more rapidly than others due to changes in environmental conditions or cellular structure, this can lead to miscommunication between cells or uneven nutrient distribution. In modeling these systems, accurately capturing these variations is essential for predicting outcomes like tissue development or reaction rates, ultimately leading to a better understanding of complex biological behaviors.
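The role of the diffusion coefficient can be made concrete with a tiny numerical sketch. This is a hedged illustration: the one-dimensional periodic grid, the step sizes, and D = 1.0 are arbitrary assumptions, not values from this glossary; the scheme is a standard forward-Euler, central-difference discretization of du/dt = D d²u/dx².

```python
# Minimal sketch of 1-D diffusion du/dt = D * d2u/dx2 on a periodic grid.
# Grid size, time step, and D are arbitrary, chosen so D*dt/dx**2 <= 0.5
# (the explicit scheme's stability condition).

def diffuse(u, D, dx, dt, steps):
    u = list(u)
    c = D * dt / dx ** 2
    for _ in range(steps):
        u = [u[i] + c * (u[(i + 1) % len(u)] - 2 * u[i] + u[i - 1])
             for i in range(len(u))]
    return u

u0 = [0.0] * 50
u0[25] = 1.0                # a point release in the middle
u = diffuse(u0, D=1.0, dx=1.0, dt=0.2, steps=100)
print(round(sum(u), 6))     # total mass is conserved: 1.0
print(max(u) < u0[25])      # the peak has spread out: True
```

Doubling D (while keeping the stability condition satisfied) flattens the peak faster, which is the "higher diffusion coefficient means quicker spreading" fact from the list above.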
Re: st: simple program

From: Phil Schumm <[email protected]>
To: [email protected]
Subject: Re: st: simple program
Date: Tue, 21 Jun 2005 10:00:56 -0500

At 9:43 AM -0400 6/21/05, Seyda G Wentworth wrote:

I have a long list of exported goods and their annual values (y_1999; y_2000; y_2001...). I want to compute the growth rate for each of these products: between 1999-2000, called gro2000; between 2000-2001, called gro2001; and so on till 2003-2004. So I need to create 5 variables. I'm trying to write a simple program of the following sort:

program define growth
local i=1999
while (`i'<=2004) {
gen gro`i'+1=(y_`i'+1-y_`i')/y_`i'
local i=`i'+1
}

But it doesn't work, I suspect because of a syntax error. Could you correct it?

Your immediate problem is with the line:

gen gro`i'+1=(y_`i'+1-y_`i')/y_`i'

in two places. For example, consider the portion on the left; after macro expansion (during the first iteration), this becomes:

gen gro1999+1=

which, as I'm sure you will recognize, is not valid Stata syntax. One way to get around this is with the macro expansion operator `=exp', which provides inline access to Stata's expression evaluator (see [P] macro for details). Thus, you could replace your original line with the following:

gen gro`=`i'+1'=(y_`=`i'+1'-y_`i')/y_`i'

which, after macro expansion, becomes:

gen gro2000=(y_2000-y_1999)/y_1999

As you can see, each instance of `=`i'+1' is replaced, first by `=1999+1', and then by 2000. Finally, note that you can simplify things a bit here by using the -forvalues- command:

forv i = 2000/2005 {
    gen gro`i' = (y_`i' - y_`=`i'-1') / y_`=`i'-1'
}

-- Phil

P.S. You haven't indicated why you need an actual program here (as opposed to using the loop directly within the context where you need to generate the variables).
If you do, it is most likely because you want to perform this calculation repeatedly and/or in other contexts, and if so, you probably want to code it a bit more generally (i.e., so that it doesn't rely on specific variable names).

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Holt CA Course: Volume of Cylinders. Warm Up, Lesson Presentation, California Standards Preview - ppt download
On the Schur Expansion of Hall-Littlewood and Related Polynomials via Yamanouchi Words

Keywords: Hall-Littlewood polynomials, Dual equivalence, Schur functions, Symmetric functions, Macdonald polynomials

This paper uses the theory of dual equivalence graphs to give explicit Schur expansions for several families of symmetric functions. We begin by giving a combinatorial definition of the modified Macdonald polynomials and modified Hall-Littlewood polynomials indexed by any diagram $\delta \subset {\mathbb Z} \times {\mathbb Z}$, written as $\widetilde H_{\delta}(X;q,t)$ and $\widetilde H_{\delta}(X;0,t)$, respectively. We then give an explicit Schur expansion of $\widetilde H_{\delta}(X;0,t)$ as a sum over a subset of the Yamanouchi words, as opposed to the expansion using the charge statistic given in 1978 by Lascoux and Schützenberger. We further define the symmetric function $R_{\gamma,\delta}(X)$ as a refinement of $\widetilde H_{\delta}(X;0,t)$ and similarly describe its Schur expansion. We then analyze $R_{\gamma,\delta}(X)$ to determine the leading term of its Schur expansion. We also provide a conjecture towards the Schur expansion of $\widetilde H_{\delta}(X;q,t)$. To gain these results, we use a construction from the 2007 work of Sami Assaf to associate each Macdonald polynomial with a signed colored graph $\mathcal{H}_\delta$. In the case where a subgraph of $\mathcal{H}_\delta$ is a dual equivalence graph, we provide the Schur expansion of its associated symmetric function, yielding several corollaries.
How Do You Find the Different Types of Mathematical Symmetry

Mathematical symmetry is a captivating and fundamental concept that plays a crucial role in various branches of mathematics and beyond. From the beauty of geometric shapes to the intricacies of advanced algebra, symmetry reveals itself in numerous forms. In this article, we will delve into the different types of mathematical symmetry and explore how to identify and appreciate them.

Exploring the World of Mathematical Symmetry: A Comprehensive Guide

Reflection Symmetry

One of the most recognizable forms of symmetry is reflection symmetry, also known as line symmetry. An object or shape exhibits reflection symmetry if there is a line, called the axis of symmetry, such that when the object is folded along that line, the two resulting halves match perfectly. Common examples include a butterfly's wings or the uppercase letter "A."

Rotational Symmetry

Rotational symmetry involves the ability of a figure to rotate around a central point and still maintain its original appearance. The order of rotational symmetry is the number of times the shape coincides with its initial position during a full turn. For instance, a square possesses rotational symmetry of order four, as each 90-degree rotation maps it onto itself.

Translational Symmetry

Translational symmetry, also known as slide symmetry, occurs when an object can be moved along a certain distance and still align with its original position. This type of symmetry is prevalent in patterns and tessellations, where identical shapes repeat in a regular manner. A classic example is a checkerboard, where each square is the same size and shape, creating a regular grid.

Point Symmetry

Point symmetry, also referred to as central symmetry, involves an object appearing unchanged after a 180-degree rotation around a central point.
In other words, the shape looks the same after a half-turn about its center. The capital letters "X" and "S" are examples of figures with point symmetry; a five-pointed star, by contrast, does not have it, since a 180-degree rotation does not map it onto itself.

Bilateral Symmetry

Bilateral symmetry is similar to reflection symmetry but extends to three dimensions. An object or shape is bilaterally symmetric if there is a plane through it such that, when folded along that plane, the two halves match. Many animals, such as butterflies and humans, exhibit bilateral symmetry.

Fractal Symmetry

Fractals are complex structures that display self-similarity at different scales. Fractal symmetry involves repeating patterns within the structure: as you zoom in, you find similar shapes or patterns. The Mandelbrot set is a famous example of a fractal with intricate and fascinating symmetrical properties.

What is reflection symmetry, and how do you identify it?

Reflection symmetry, also known as line symmetry, is found in shapes that can be folded along a specific line, known as the axis of symmetry, resulting in identical halves. To identify reflection symmetry, look for shapes where one side mirrors the other. Common examples include letters like "H" or everyday objects like a heart.

Can you explain rotational symmetry and how to determine its order?

Rotational symmetry occurs when a figure can be rotated around a central point and still maintain its original form. The order of rotational symmetry is the number of times the shape aligns with itself during a complete rotation. For example, a regular hexagon has rotational symmetry of order six, as each 60-degree rotation maps it onto itself.

What is translational symmetry, and where is it commonly observed?

Translational symmetry, also known as slide symmetry, involves shapes that can be shifted along a certain distance and still overlap with their original positions. This type of symmetry is often observed in patterns, mosaics, and tessellations.
A straightforward example is a tiled floor where identical tiles create a repeated, regular arrangement through translation.

Final Thought

Mathematical symmetry is a captivating aspect of the mathematical world, enriching our understanding of shapes, patterns, and structures. Whether exploring reflection symmetry in everyday objects or deciphering the intricate patterns of fractals, recognizing the different types of mathematical symmetry enhances our appreciation for the underlying order and beauty in the world of mathematics. As you embark on your mathematical journey, keep an eye out for symmetry in its various forms, and marvel at the inherent elegance it brings to the world around us.
Degrees of freedom: t test or confidence interval

Find the degrees of freedom for a particular $t$ test or confidence interval ($CI$) below:

One sample $t$ test/$CI$: $N - 1$, where $N$ is the sample size.

Paired sample $t$ test/$CI$: $N - 1$, where $N$ is the number of difference scores.

Two sample $t$ test/$CI$, equal variances not assumed: for hand calculations, it is common to use the smaller of $n_1 - 1$ and $n_2 - 1$ as an approximation for the degrees of freedom, where $n_1$ is the sample size of group 1 and $n_2$ is the sample size of group 2. Computer programs use the following formula:
$$df = \dfrac{\Bigg(\dfrac{s^2_1}{n_1} + \dfrac{s^2_2}{n_2}\Bigg)^2}{\dfrac{1}{n_1 - 1} \Bigg(\dfrac{s^2_1}{n_1}\Bigg)^2 + \dfrac{1}{n_2 - 1} \Bigg(\dfrac{s^2_2}{n_2}\Bigg)^2}$$
Here $s^2_1$ is the sample variance in group 1, and $s^2_2$ is the sample variance in group 2.

Two sample $t$ test/$CI$, equal variances assumed: $n_1 + n_2 - 2$, where $n_1$ and $n_2$ are the sample sizes of groups 1 and 2.

$t$ test for the Pearson correlation coefficient: $N - 2$, where $N$ is the sample size (number of pairs).

$t$ test for the Spearman correlation coefficient (Spearman's rho): $N - 2$, where $N$ is the sample size (number of pairs).

$t$ test/$CI$ within a one way ANOVA setting (multiple comparisons): $N - I$, where $N$ is the total sample size and $I$ is the number of groups.

$t$ test/$CI$ for a single regression coefficient (in OLS regression): $N - K - 1$, where $N$ is the total sample size and $K$ is the number of independent variables.
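The Welch-Satterthwaite formula for the "equal variances not assumed" case translates directly into code; a sketch, with made-up sample variances and group sizes:

```python
# Sketch of the Welch-Satterthwaite degrees of freedom (two-sample t,
# equal variances not assumed). The variances and sizes are illustration values.

def welch_df(s1_sq, n1, s2_sq, n2):
    a, b = s1_sq / n1, s2_sq / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

df = welch_df(s1_sq=4.0, n1=10, s2_sq=9.0, n2=15)
print(round(df, 2))  # 22.99
```

The result always lies between the hand-calculation approximation, the smaller of $n_1 - 1$ and $n_2 - 1$, and the pooled $n_1 + n_2 - 2$.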
wu :: forums - Print Page
wu :: forums (http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi)
riddles >> medium >> How Many Squares
(Message started by: Virgilijus on Jan 2nd, 2015, 10:54am)

Title: How Many Squares
Post by Virgilijus on Jan 2nd, 2015, 10:54am

Willy Wutang is trying to scrounge up some money for his alimony payments and has decided to sell the rights to some of his land. The land he has is boring (assuming the buyers don't dig too deep) and square, n x n meters. It is also already sectioned off into 1 x 1 meter plots. He knows he can sell n^2 parcels of these smallest, indivisible sections of land, but he's feeling devious and wants to push his luck with Johnny Law; he also wants to sell rights to all 2 x 2 plots, all 3 x 3 plots, and all possible unique square plots. If Willy does this, how many unique plots of land can he sell for his n x n meters of land? What if he has m x n meters?

Title: Re: How Many Squares
Post by rmsgrey on Jan 4th, 2015, 4:25pm

There are uncountably many 1x1 plots (orientations range over a quarter circle before symmetry catches up, and, for any given orientation, the northernmost corner of the plot can range over most of the land) unless the land itself is just 1x1. In other words, the problem might want to be rephrased...

Title: Re: How Many Squares
Post by Virgilijus on Jan 4th, 2015, 4:46pm

Yes, you're right. Changed the wording to hopefully make it more clear.

Title: Re: How Many Squares
Post by rloginunix on Jan 11th, 2015, 10:19pm

Eliminating the rotations mentioned by rmsgrey and counting only the axis-aligned KxK squares with integral side lengths, where K runs from 1 to n, we get:

Say the land is 8x8. In the top row (enumerating left to right) there will fit 8 + 1 - 2 = 7 2x2 squares - their top left vertices located at r1c1, r1c2, r1c3, etc.
Since the land is square, the same number applies to the number of 2x2 squares that will fit into the first column (enumerating top to bottom), the total number of 2x2 squares being 7x7 = 49. Though not shown above, in the top row there will fit 8 + 1 - 3 = 6 3x3 squares, and the same number of 3x3 squares will fit into the first column, for a total of 36 3x3 squares. Generalizing for K we get (8 + 1 - K)^2 KxK squares, summed over K:

64 1x1 sq. + 49 2x2 sq. + 36 3x3 sq. + 25 4x4 sq. + ... + 1 8x8 sq. = 204 squares.

Generalizing for an arbitrarily sized chunk of land we get:

N = sum[K=1..n] K^2

You can look up that sum or you can deduce it by observing that:

(n + 1)^3 = n^3 + 3n^2 + 3n + 1

Now in the expression above (n - 1) times replace n with (n - 1), (n - 2), (n - 3), etc. and write the results down (for a total of n equations):

(n + 1)^3 = n^3 + 3n^2 + 3n + 1
n^3 = (n - 1)^3 + 3(n - 1)^2 + 3(n - 1) + 1
(n - 1)^3 = (n - 2)^3 + 3(n - 2)^2 + 3(n - 2) + 1
(n - 2)^3 = (n - 3)^3 + 3(n - 3)^2 + 3(n - 3) + 1
...
2^3 = 1^3 + 3*1^2 + 3*1 + 1

Observe that when you sum these equations, n^3 in the first equation on the right side of the equal sign will cancel out with n^3 in the second equation on the left side of the equal sign. And so will (n - 1)^3 and (n - 2)^3 and so on. In other words, diagonally nearest "upper right" and "lower left" terms cancel out. What remains is:

(n + 1)^3 = 1^3 + 3S[2] + 3S[1] + n

where S[1] - the sum of the first powers of the first n natural numbers - is known (Carl Gauss; it too can be deduced via the method just described). Solving the above linear equation for S[2] (the sum of second powers sought after) we get:

N = S[2] = n(n + 1)(2n + 1)/6

Title: Re: How Many Squares
Post by Virgilijus on Jan 12th, 2015, 3:17am

And you are correct!
I visualized the squares being filled in a little differently (you can only fit one nxn square in the very center, four (n-1)x(n-1) squares around it, etc.) but, of course, you still get to the same answer.

Powered by YaBB 1 Gold - SP 1.4!
Forum software copyright © 2000-2004 Yet another Bulletin Board
(49g 50g) Theoretical Earth gravity g = g(latitude, height), WGS84, GRS80/67
09-21-2021, 01:31 PM, Post: #3
Gil, Senior Member (Posts: 656, Joined: Oct 2019)

RE: HP49-50G : —>g gravity calculation = g(latitude, height) with WGS84

You are perfectly correct — and in a way I am wrong. The truth is the following: the formula in Wikipedia showed some wrong digits at the end, as did the Chinese page. To check, I used the exact a and f values of WGS84, dropped the decimal points and added zeros accordingly so as to operate with integers (see my program:

\<< "Version 5.2 1 single Arg like \[] or {'7/3*' 300} for 300 digits " DROP DUP TYPE 5 == THEN OBJ\-> DROP ELSE 100 "Put above 200 if you want by default 200 digits & not 100" DROP END SWAP DUP UNROT 0 0 0 RCLF \-> digit x1 x2 x21 num f \<< RAD STD -3 CF -105 CF digit \->STR "." "" SREPL DROP OBJ\-> 'digit' STO x1 \->STR "." "" SREPL 0 == ELSE OBJ\-> 'x2' STO x2 x1 / \->NUM LOG DUP FP 0 \=/ THEN DROP "Instead of decimals (ab.c), Try fractions ('abc/10') !" DOERR END \->STR "." "" SREPL DROP OBJ\-> ALOG 'x21' STO IF x21 1 > THEN x2 x21 / IF x21 1 < THEN x2 x21 INV * ELSE x2 END \->STR "." "" SREPL DROP OBJ\-> END DUP EXPAND DUP2 SAME DROPN DUP \->NUM DUP 'num' STO num ABS 100000000000 > num FP 0 \=/ OR THEN OVER 10 digit ^ * PROPFRAC PROPFRAC -105 SF DUP TYPE 9 == THEN OBJ\-> 3 DROPN END f STOF

) and then get the most accurate value for b. I did the same when calculating the constants k and e². So the digits shown on the English and Chinese pages for the WGS84 g-formulae are now all "correct", though meaningless, as you noticed.

The problem was: suppose I have an "effective" result. Should I cut it to 123456789012 (the first 12 digits are correct), or prefer to have something printed incorrectly but nearer to the true value (and "better" for further calculations on my calculator)?
I chose the second solution and decided to give the most complete values of k and e², leaving the choice to the reader to cut where it is most "convenient" for him.

As I am limited in the number of digits of the values entered on my HP (for non-integers), you will see that the constants k and e² in my programs do correctly approximate the theoretical values (with many digits) of k and e². But I could not decently write the initial values, for checking purposes, to be 123456789013.

In fact, I cut the final digits of the k and e² values in Wikipedia, making sure that the first cut digit from the left was less than 5. They could be cut to 1234567890123 × 10³ (because after the last digit 3 there is a digit < 5, here 4). But not to 12345678901234 × 10², as 12345678901235 × 10² would be better (but not "nice looking", as the digit after ...123 is a 4 and not a 5 as shown).
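Gil's rule — cut a constant only where the first dropped digit is below 5, so that truncation coincides with correct rounding — can be sketched in a few lines of Python. The digit string below is the post's illustrative 1234... placeholder, not an actual WGS84 constant:

```python
def safe_truncation_points(digits):
    """Positions after which truncating the digit string equals
    rounding-to-nearest: the first dropped digit is < 5.

    `digits` is a string of decimal digits (no decimal point).
    Returns the list of keep-lengths i such that digits[:i] is a
    correctly rounded prefix.
    """
    return [i for i in range(1, len(digits)) if digits[i] < '5']

digits = "12345678901234"          # illustrative placeholder from the post
points = safe_truncation_points(digits)

assert 13 in points   # keep '1234567890123': the next digit is '4' (< 5)
assert 4 not in points  # keep '1234' would drop a '5', so plain truncation misrounds
```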
Monday, 13 April 2020 19:01

On April 16, 2020 the Faculty of Fundamental Sciences of Bauman Moscow State Technical University will hold the fourth meeting of the Scientific Workshop "BMSTU Mathematical Colloquium". It addresses a broad mathematical audience, including keen students. The workshop aims to give listeners a general view of various areas of modern mathematics.

The workshop will be held on Thursday at 5:30 pm, ZOOM, meeting id: 934 8805 5576, password: 004462.

The workshop topic: Introduction to Arakelov Geometry. The report will be given by Doctor of Physico-Mathematical Sciences, Professor of the Russian Academy of Sciences, Leading Scientific Researcher of the Steklov Mathematical Institute Osipov Denis Vasilievich.

Friday, 03 April 2020 12:52

On April 9, 2020 the Faculty of Fundamental Sciences of Bauman Moscow State Technical University will hold the fourth meeting of the Scientific Workshop "BMSTU Mathematical Colloquium". It addresses a broad mathematical audience, including keen students. The workshop aims to give listeners a general view of various areas of modern mathematics.

The workshop will be held on Thursday at 4:00 pm, ZOOM, meeting id: 975 725 677, password: 031691.

The workshop topic: An Upper Bound for Weak B_k – Sets. The report will be given by PhD, Corresponding Member of the Russian Academy of Sciences, Main Researcher of the Steklov Mathematical Institute, Professor Shkredov Ilya Dmitrievich.

Wednesday, 18 December 2019 16:47

On October 3, 2019 the Faculty of Fundamental Sciences of Bauman Moscow State Technical University will hold the fourth meeting of the Scientific Workshop "BMSTU Mathematical Colloquium". It addresses a broad mathematical audience, including keen students. The workshop aims to give listeners a general view of various areas of modern mathematics.

The workshop will be held on Thursday at 5:30 pm, room 216l, BMSTU Laboratory Building, 2/18, Rubtsovskaya embankment.
The workshop topic: Nevanlinna domains and related topics. The report will be given by Doctor of Physical and Mathematical Sciences, BMSTU Professor Fedorovskiy Konstantin Yurievich.

Saturday, 23 November 2019 15:43

Sunday, 29 September 2019 18:37

Tuesday, 17 September 2019 12:39

A regular meeting under the leadership of Academician V. I. Pustovoit is scheduled for 15:00 on 19 September 2019.

Seminar agenda:
1. A.N. Morozov (MSTU). Generation and registration of coupled gravitational waves excited by a standing gravitational wave.
2. A.E. Sharandin (MSTU). Prospects for using high-power lasers to generate gravitational waves.

Wednesday, 17 July 2019 14:25

Tuesday, 14 May 2019 17:59

On April 18, 2019 the Faculty of Fundamental Sciences of Bauman Moscow State Technical University will hold the fourth meeting of the Scientific Workshop "BMSTU Mathematical Colloquium". It addresses a broad mathematical audience, including keen students. The workshop aims to give listeners a general view of various areas of modern mathematics.

The workshop will be held on Thursday at 5:30 pm, room 222l, BMSTU Laboratory Building, 2/18, Rubtsovskaya embankment.
The workshop topic: Geometry of hinged constructions. The report will be given by Doctor of Physical and Mathematical Sciences, Professor Kovalev Mikhail Dmitrievich.

Tuesday, 16 April 2019 18:06

On April 18, 2019 the Faculty of Fundamental Sciences of Bauman Moscow State Technical University will hold the fourth meeting of the Scientific Workshop "BMSTU Mathematical Colloquium". It addresses a broad mathematical audience, including keen students. The workshop aims to give listeners a general view of various areas of modern mathematics.

The workshop will be held on Thursday at 5:30 pm, room 222l, BMSTU Laboratory Building, 2/18, Rubtsovskaya embankment.

The workshop topic: Manifolds of triangulations and higher-dimensional braids (a joint work with I.M. Nikonov). The report will be given by Professor of the Russian Academy of Sciences, Doctor of Physical and Mathematical Sciences, BMSTU Professor Manturov Vassily Olegovich.

Monday, 25 March 2019 17:15

Chairman: Academician of the Russian Academy of Sciences Vladislav I. Pustovoit

The workshop will be held on Thursday at 4:00 pm, room 827, BMSTU Laboratory Building, 2/18, Rubtsovskaya embankment.

1. I.V. Fomin (BMSTU). On the possible cosmic and terrestrial sources of gravitational waves.
2. A.V. Kayutenko (BMSTU). Fundamental physical ideas that influenced the solution of the problem of ground-based registration of cosmic gravitational radiation.
Math Department Offerings

Undergraduate credential pathways:

1) Joint Math Education Program (JMEP) - The Joint Math/Ed Program (JMEP) is a collaborative effort of the UCLA Mathematics Department and the Graduate School of Education's Teacher Education Program. In this program, students begin work toward a California Preliminary Single Subject Teaching Credential in Mathematics during their senior year, and complete this coursework by the end of the academic year immediately following completion of their bachelor's degree. In the academic year immediately following their bachelor's degree, students complete a Master's in Education while teaching full time (earning a full-time salary) in Los Angeles urban schools.

2) Integrated Pathway - This program allows students to complete most or all of the required credential coursework as an undergraduate.

For more information and to apply for either JMEP or the Integrated Pathway, please go to the UCLA Curtis Center for Mathematics and Teaching. Also see UCLA Graduate School of Education.

Undergraduate Math Subject Matter Preparation

Applicants for a California Preliminary Single Subject Teaching Credential in Mathematics must verify their "subject matter competence" to teach mathematics in one of two ways: 1) complete a CA-approved "subject matter program" and obtain verification of completion from the university with the approved program, or 2) achieve a passing score on the three-part California Subject Matter Examination for Teachers (CSET). The UCLA Mathematics Department is one of three UC campuses with a CA-approved subject matter program in mathematics. The courses comprising the program are listed on the Math Department website (follow the link below for additional information). Students who complete the Mathematics for Teaching major will automatically complete the department's CA-approved subject matter program.
At the end of their senior year, students may request a letter from the Mathematics Department's Undergraduate Office verifying their completion of these courses and thus their subject matter competence for the CA Single Subject Teaching Credential in Mathematics. For more information, see UCLA Curtis Center for Mathematics and Teaching. Math for Teaching Major The Math for Teaching Major is designed for students who have a substantial interest in teaching mathematics at the secondary level. Contact Connie Jung at connie@math.ucla.edu for more details. Mathematics for Teaching Minor The Mathematics for Teaching Minor is designed for students majoring in fields other than mathematics who plan to teach secondary mathematics after graduation. For non-majors joining the Mathematics Department and School of Education's Joint Mathematics Education Program (JMEP), the minor provides recognition for completion of prerequisite coursework for the program. This coursework also prepares students for content on the Math CSET exam. Contact Connie Jung at connie@math.ucla.edu for more details.
What if we multiply 0 with infinity?

The Concept of 0 and Infinity

Zero and infinity are two very interesting mathematical concepts that have fascinated mathematicians and philosophers for centuries. At first glance, they may seem straightforward – zero represents nothing, while infinity represents endlessness. However, when explored more deeply, both concepts lead to mind-bending paradoxes and mysteries.

Multiplication is one of the most basic arithmetic operations. We learn to multiply numbers starting in elementary school. Numbers like 0 and infinity are usually excluded from the multiplication tables we learn. So what happens if we try to multiply these two special numbers? Can we even multiply infinity with anything? Is it defined? Let's try to find answers to these intriguing questions in this article.

What is Zero?

Zero is an important concept in mathematics that denotes the absence of any quantity or magnitude. It is the starting point of the number system, coming before the natural numbers 1, 2, 3 and so on.

Some key facts about zero:

• Zero represents a null value, an empty set or the additive identity in arithmetic.
• It is neither positive nor negative.
• Any number multiplied by zero is zero.
• Zero has unique properties in arithmetic – any other number divided by zero is undefined.

So zero stands for nothingness, devoid of any value. Let's now try to understand infinity, which is at the other end of the mathematical spectrum.

What is Infinity?

Infinity represents limitlessness and endlessness in mathematics. It denotes something that has no boundaries and no end.

Some key facts about infinity:

• Infinity is an abstract concept used to represent something boundless.
• It is not an actual real number, but more of a conceptual idea.
• There are different levels of infinity in mathematics – countable and uncountable.
• The infinity symbol is used to denote something never-ending.
Infinity can seem paradoxical – how can something be endlessly big? Mathematicians have pondered the true meaning of infinity for ages. Now that we have looked at both concepts individually, let's try to multiply them!

Multiplying Zero and Infinity

Zero and infinity are very different mathematical entities. Zero signifies nothing while infinity represents endless limitlessness. Multiplying any finite number by zero gives zero. So one may expect that multiplying infinity by zero also gives zero. But is this always the case? Can we definitively multiply infinity by zero? Let's analyze further.

Does Infinity Exist?

Before trying to multiply infinity with anything, we first need to clarify what we mean by infinity. Infinity is not like the finite numbers 1, 2, 3 that can be quantified and manipulated mathematically. It is more of an abstract, philosophical concept. Mathematicians deal with two notions of infinity:

• Potential infinity – refers to a quantity that has no boundaries and can increase indefinitely. For example, the sequence of natural numbers 1, 2, 3, ... has no end.
• Actual infinity – refers to completed infinity as a definite entity, not just potential. For example, the set of all natural numbers {1, 2, 3, ...}.

Potential infinity is generally accepted in mathematics. But actual infinity is more controversial – does it really exist? Or is it just a concept used to understand boundlessness?

So strictly speaking, infinity is not a number that can be used in arithmetic operations like multiplication. It is more of a philosophical idea.

What Happens When We Try to Multiply Infinity by Zero?

Now let's see what happens when we try to multiply infinity, however we may interpret it, by zero:

• If we take infinity just as a concept or idea, we cannot meaningfully multiply it with anything, let alone zero.
• If we imagine some endless sequence like the natural numbers as infinity, multiplying each term by zero will still give zero.
• If we somehow consider actual infinity as a number, multiplying it by zero is ambiguous – it could be zero or it could even be undefined.

So in summary, there is no clear mathematical result when we try to multiply zero and infinity. The very concept of infinity makes it challenging to use in arithmetic operations. At best, we can say multiplying the terms of an endless sequence by zero gives zero. But infinity as a whole has no definite value that can be multiplied by zero.

What are Some Interesting Perspectives on Multiplying Zero and Infinity?

While there is no set answer for multiplying zero and infinity, mathematicians have provided some fascinating perspectives on this ambiguous product:

• Indeterminate: Renowned mathematician Leopold Kronecker considered the product of infinity and zero to be indeterminate, similar to the concept of "undefined" in limits.
• Zero: Mathematician Ernst Zermelo proposed that infinity multiplied by zero should equal zero, based on Abraham Robinson's nonstandard analysis theory.
• Undefined: Several modern theories state that since infinity is not a true number, multiplying it by zero is simply meaningless and undefined.
• Infinity: As infinity represents boundlessness, philosopher Immanuel Kant theorized that infinity times zero results in infinity.

So we have a range of fascinating perspectives, but still no consensus on the answer! This highlights the enigmatic nature of infinity in mathematics.

Why is Infinity Challenging to Work With?

The difficulties in pinning down the product of zero and infinity arise from fundamental issues with the concept of infinity itself in mathematics:

Infinity is Paradoxical

Infinity leads to logical paradoxes – if the universe is infinite, anything that can exist should exist. But clearly that is not the case. This makes mathematical reasoning about infinity inconsistent.

Different Types of Infinity

There are different levels of infinity – countable and uncountable.
Comparing and working with them gets problematic. Which one do we take as the "true" infinity?

Not a Number

Infinity is an abstract concept of endlessness. But mathematics requires working with quantities and numbers. Infinity lacks numerical specificity.

No Fixed Value

Numbers have fixed values that can be manipulated precisely. Infinity represents boundless increase or decrease. Pinning down its value for calculations is inherently paradoxical.

These aspects make infinity a slippery concept mathematically. Unless it can be rigorously defined, working with it leads to contradictions and ambiguities, as seen in multiplying it with zero.

Interesting Examples Related to Multiplying Infinity

While zero-times-infinity multiplication remains ambiguous, mathematicians have analyzed related aspects that provide more insight:

Limits Involving Infinity

Limits are a fundamental concept in calculus that deal with the behavior of functions approaching infinity:

• f(x) = 5x: the limit as x approaches infinity is infinity
• f(x) = x^2: the limit as x approaches infinity is infinity
• f(x) = 1/x: the limit as x approaches infinity is zero

Limits codify the idea of potential infinity in a rigorous way. The function values increase without bound, or settle toward a fixed value.

Infinite Series and Convergence

Infinite series add up an infinite sequence of numbers:

• 1 + 2 + 3 + 4 + ... diverges
• 1 + 1/2 + 1/3 + 1/4 + ... (the harmonic series) diverges, though very slowly
• 1 + 1/2 + 1/4 + 1/8 + ... converges (to 2)

These provide interesting examples of taming infinity through convergence and defining summation.

Asymptotes in Graphs

Asymptotes are lines that functions can get closer and closer to, but never touch. They represent infinity visually:

• y = 1/x has a vertical asymptote at x = 0
• y = tan(x) has vertical asymptotes at x = 90°, 270°, 450°, etc.

These graphs depict how functions behave as they approach infinity or negative infinity through asymptotes.

So while infinity cannot be pinned down into arithmetic operations, mathematicians have found innovative ways to incorporate its essence into rigorous theories like calculus.
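The "indeterminate" verdict can be made concrete with limits: pair one factor that grows without bound with one that shrinks to zero, and the product can tend to anything, depending on how fast each factor moves. A small numeric sketch in Python (the particular value of x is arbitrary; a power of two is used so the arithmetic is exact in floating point):

```python
# Three products, each of the shape (factor -> infinity) * (factor -> 0),
# evaluated at one large sample point x:
x = 2.0 ** 40   # stand-in for "x very large"

one      = x * (1 / x)       # always exactly 1, for every x
grows    = x**2 * (1 / x)    # equals x, so it grows without bound as x does
vanishes = x * (1 / x**2)    # equals 1/x, so it shrinks toward 0

# Same "infinity times zero" shape, three different limiting behaviors:
assert one == 1.0
assert grows == x
assert vanishes == 1 / x
```

This is the calculus view of why 0 · ∞ is called an indeterminate form: the answer depends entirely on which functions produced the 0 and the ∞.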
The intriguing question of whether infinity multiplied by zero gives zero or some other value touches upon deep fundamentals of mathematics. While there is no definitive answer, analyzing this problem gives us insights into the paradoxical nature of infinity and the difficulties in applying arithmetic to it. The abstract philosophical concept of infinity does not easily lend itself to numerical manipulation. Mathematicians have developed sophisticated ways to incorporate it through limits, series, asymptotes, etc. But direct multiplication with zero runs into inconsistencies. The essence of infinity will continue to fascinate mathematicians and philosophers looking to explore the boundaries of mathematics and our ability to comprehend the infinite unknown.
Comments on Confessions of a College Professor:

Doom (2014-07-17): Linear Algebra is an isolated course. You don't need calculus for it, and you certainly don't need Diff Eq for it either. If you're really good in algebra, you can take it after algebra. You really need some skills and careful algebra to do well with it, which is why it's usually taken after Calc 2 or so (at which point you've got plenty of practice with algebra).

Jess (2014-07-17): Yea interestingly CCP doesn't have that 'business calc' that some places have in addition to I guess regular calc 1.
I always wondered what was covered in them courses but I bet biz calc in them other places is likely stuff for just what economics/finance/accounting majors face (that being said those unis allow such majors to take the regular calc 1 or higher if they want).

Prof Delaware has a free series on YouTube on College Algebra & Calc 1. The CA material seems to have more than some classes in real life.

Also, is Linear Algebra taken after the entire calc 1-3 sequence but before Diff Eq?

Doom (2014-07-17): Adjuncts are a tough call; some of them have bogus degrees (and thus can't get full time positions), some of them are jerks (and can't keep full time positions) and many of them are just being screwed by admin. Usually, you're not going to get a good course from an adjunct, at least in math, because there are so many bogus "math education" degrees out there.

MATH 150 is probably too basic for your needs; there might not be any probability at all in there, I'd have to see the book/syllabus. You're better off with MATH 251.

For computer science, yes, you want linear mathematics, and probability. Both of those topics are absolutely essential if you write any sort of decision-making program (i.e., a.i., at the risk of overdoing abbreviations).

I totally, totally, recommend Khan Academy (it's free, it's good), and know nothing of Professor Delaware...if he's free, I don't see the harm in at least seeing if he helps you.

That's one thing about math: there are many approaches to teaching mathematics, and what works for one individual can be disastrous for another.
You're risking very little with "free."

If you're thinking about going engineering, then take the most advanced calculus you can; every 4 year program requires hard core calculus (not "Business Calculus", or, basically, any calculus that doesn't use trigonometry). You may as well start now.

I took a student from remedial math to differential equations (which you take after calculus III), so it totally can be done.

Jess (2014-07-17): Thanks for all that. It is in Philly, & despite there being offerings as high as Discrete Mathematics 1 & 2, Linear Algebra, Calc 3, & Differential Equations, CCP is a 2-year college & students are expected to transfer to a 4-year university when they finish whatever courses they're advised to by their major selection. The higher classes like Linear Algebra & DiffEq have just one section per semester, whereas the lowest classes like 016, 017, 118, etc have thousands of seats in dozens of sections across all semesters including summer. I believe that some classes, like 151 & 152, have few majors that really require them and mainly serve students who need a maths elective and took Intermediate Algebra (118) and don't want/need precalc 1/2 (161/162). I believe here at CCP, what others call College Algebra is Precalc 1, as it's all about functions & their graphs. 162, Precalc 2, seems to be some more functions & trig.

The stat classes we do have are MATH 150 Introductory Data Analysis & MATH 251 Statistics for Science. Oops above. We have 2 if you don't count ECON 112/114.
254 says it is algebra-based & requires passing MATH 118 Intermediate Algebra or placement into 161 (precalc 1).

I am in an interesting predicament of wanting to do comp sci & engineering down the road, but am lacking in foundational knowledge because my high school education was inadequate. Also, I don't know which teachers are full-time and which are adjuncts. I had unpleasant experiences at a 4-year uni before with a couple adjuncts. One time I had an adjunct who didn't know how to do something he was set to teach! Regarding College Algebra, would you recommend Khan Academy & this UMKC big YouTube playlist ( http://www.youtube.com/playlist?list=PLDE28CF08BD313B2A ) by Prof Richard Delaware? I'm looking to take Linear Mathematics, Computer Mathematics, & Probability out of personal interest even if my major doesn't require it because I enjoy maths & puzzles, & am a tech guy & could benefit from the knowledge.

Thanks for such an awesome blog.

Doom: Wow, that requires a long answer.

Statistics *can* be high school or college, it really depends on the presentation. The 1000 level intro statistics I taught at Tulane was well beyond a 5000 statistics course at a nearby state university (which was about the same as the statistics I learned in high school, and similar to a 2000 level statistics course at another state U).

A 152 Probability is almost certainly a high-school level course, but for "college credit". You'll probably have trouble transferring it to a university. That linear programming course sounds pretty fun, but be careful, that material only applies to a select few majors.

Probability and Combinatorics is typically taught only in a very limited way in high school algebra, or in a less limited way in statistics.
College courses that discuss it specifically are usually pretty involved (3000 level, I'll be teaching such a course in the Fall, coincidentally).

3 statistics courses at a CC? That's impressive. I tried to convince a local CC that they should offer at least 1, but admin didn't really understand it. Good lord, they had no idea what the "central limit theorem" was, so when an educationist was using it for his grading scale (stupid idea) they needed my help...and when they were getting statistics for accreditation, they again needed serious help on basic ideas. Oh lordy, the cluelessness of Ph.D.s in Admin; I don't know how you can get a research degree in the social sciences without at least a crude understanding of statistics, but anyway.

Back to the point, if your CC has that kind of array of math courses, then it's a "2 track" college. One track is bogus, one is legit. It's your responsibility to figure out which track you want. Many CC's are just 1 track (all bogus), so you're at a well above average CC.

I'd need more information about your major and goals before I could point you in the right direction. Your best bet? Find out who's teaching the highest level math courses there, and start with the lowest level math course that guy teaches.

Usually, CC's use Educationists to teach the bogus courses, but have people with real degrees to teach the real college courses. The guy(s) teaching the real college courses are the ones you want, and sometimes they teach the intro courses, too.

Jess (2014-07-16): Thanks for replying! Also, is probability & combinatorics traditionally algebra or statistics? Is stats high school or university material?
I've always been under the impression that in the past, high schoolers took stats.

My local community college has separate courses for lower algebra levels, 2 precalc classes, the usual calc sequence, 3 stats classes, linear algebra, & differential equations, + these two classes: Linear Mathematics (151) & Probability (152). 151 covers basic algebra review, graphing linear equations, solving linear systems via matrices, linear programming via graphing & the simplex method.

Doom (2014-05-31): I should mention, the course syllabus and catalog still says the College Algebra has matrices even though it's no longer in the course. Accreditation, of course, does not care and has no way of figuring it out.

Doom: Since accreditation never checks to see if courses are legitimate, it's quite possible to have a course called "college algebra" on one campus that is equivalent to "pre-remedial algebra" on another. Retention, not learning, is the goal on college campuses.

My high school algebra covered matrices in a little detail (we did determinants of 3x3, but that's as far as it went). As luck would have it, I used linear programming for my honors thesis to optimize a mathematical game...I had to look it up in a book and program it into a computer. Not even the graduate courses at my campus covered it (but that's more of a fluke of the faculty there than any slight against the institution).

Anyway, I know of no college campus that includes matrices in "College Algebra" in much detail.
There was a guy that at least had his students learn determinants of 2x2 matrices...but admin had him take that material out.Doomhttps:// courses called College Algebra (and Intermediate Algebra, etc), it can be confusing knowing exactly what one&#39;s even getting unless they can see a full syllabus with mention of text book, chapters, and exercise sets recommended that students do.<br /><br />In my own experience, I have been to 3 different colleges (in the US) and one had a course called College Algebra that basically was most of a grade 7/8 course but with 2 chapters removed as it was a quarter-style class taught in 7 weeks. When I transferred to a university in the same city, they assumed I knew all of the material covered in /their/ course entitled College Algebra, which goes &amp; I don&#39;t know if even high schoollers get some of that material in their algebra 2 class (stuff like quadratic inequalities involving rational expressions - many things to pay careful attention to detail with those if one&#39;s not extra careful). Naturally I had to switch to a lower course once I was placed into Precalculus due to that transfer credit.<br /><br />By the way, do or have any any College Algebra or Precalculus class in the US go into matrices and linear programming in much of any detail? I recall seeing some of it in some algebra 2 texts but mainly only a few basic examples with 2 expressions in smaller matrices &amp; LP problems. I had to get a cheap text book called Finite Mathematics to get much of any decent matrix &amp; LP details as well as exercises including word problems that didn&#39;t require skipping over a lot of stuff &amp; looking into Linear Algebra. None of the regular Precalc or CA books had that stuff and Linear Algebra texts seem to be for more advanced students with knowledge of Calculus. There is another subject that deals with Linear Programming that may have more of what I&#39;m looking for: management science. Familiar with that? 
As a tech guy, this stuff as well as logic &amp; some discrete topics interest me.<br /><br /> Thanks for such an enjoyable set of blog entries. I love reading this stuff. Good insight.Jesshttps://www.blogger.com/profile/ 17151299421522255994noreply@blogger.comtag:blogger.com,1999:blog-491174673971804494.post-2227933562248517202014-02-28T15:24:11.657-08:002014-02-28T15:24:11.657-08:00Wow, thank you for the very kind words. For those that don&#39;t know Gatto, I encourage to seek his site and read his works. He has much relevant to say about public (government) school.Doomhttps://www.blogger.com/profile/ 04528555392898760692noreply@blogger.comtag:blogger.com,1999:blog-491174673971804494.post-53997017261773934902014-02-28T12:18:14.468-08:002014-02-28T12:18:14.468-08:00God bless your soul Prof. Doom, you are the first person to blow the lid off of the ugly realities of higher education. I&#39;ve worked in it for over a decade now and what you say is true. You&#39;re like the higher education version of John Taylor Gatto! You are the only person I&#39;ve seen reveal these truths and try to explain the illogic that pervades higher (liar) education. Very few of those outside this fantasy land can begin to comprehend the real nature of the beast! Bravo!!!!!Johnhttps://www.blogger.com/profile/ 08355168465733989761noreply@blogger.comtag:blogger.com,1999:blog-491174673971804494.post-22747127083368414072013-12-07T19:45:39.900-08:002013-12-07T19:45:39.900-08:00We can&#39;t do &quot;intensive& quot; remedial schooling...if we did that, most students would fail, and admin won&#39;t tolerate that.<br /><br />Doomhttps://www.blogger.com/profile/ 04528555392898760692noreply@blogger.comtag:blogger.com,1999:blog-491174673971804494.post-89320281623842115392013-11-10T17:24:33.417-08:002013-11-10T17:24:33.417-08:00The math teaching in school is atrocious. 
By the time the products of abysmally poor math teaching and abysmally designed curricula reach college, further math &quot;schooling&quot; is a lost cause: the only way to make up lost ground is several years of intensive remedial schooling. Which they don&#39;t get as college teaching tends to be more of what they got in school.AAhttps://www.blogger.com/profile/ 13242448989166177843noreply@blogger.comtag:blogger.com,1999:blog-491174673971804494.post-73303880197780222012013-08-20T22:21:53.949-07:002013-08-20T22:21:53.949-07:00I&#39;m sorry if you got that message, but that&#39;s not what I&#39;m saying at all. <br /><br />As I discuss in one of my earlier posts...academia isn&#39;t everything, you&#39;ve been misled to think you *need* a degree. You don&#39;t. A degree is not an approval of anything; the vast majority of degrees are useless.<br /><br />My mother had no degree; she was a successful real estate agent, and ran an antique mall that grossed over a million a year...and no, my parents didn&#39;t hire an accountant to do the taxes (nor was my father an accountant).<br /><br />The founder of Wendy&#39;s didn&#39;t even graduate high school. Bill Gates doesn&#39;t have a degree. Karl Rove never went to college. My plumber makes more money than I do, and he has no degree. Being bad in academics means almost nothing--you write better than most of my students for what it&#39;s worth.<br /><br />Half of college graduates are in jobs where their degree is worthless. 
There are many folks with graduate degrees living lives in near poverty.<br /><br />Having a college degree is like having a black belt in karate: sure, it has its uses, but the bulk of humanity has done just fine without it, and it really doesn&#39;t help you on a day to day basis.<br /><br />Your life isn&#39;t meaningless, but don&#39;t let the excessive meaning you put into a degree drag you down.Doomhttps://www.blogger.com/profile/ 04528555392898760692noreply@blogger.comtag:blogger.com,1999:blog-491174673971804494.post-87494397256603400172013-08-20T22:09:20.197-07:002013-08-20T22:09:20.197-07:00 This blog makes me feel like one of the most worthless and moronic human alive. While I try to think positively of myself due to experience in some areas, I am pathetic when it comes to academia. The way this is worded reinforces a painful realization of what is realistically a determined future of poverty and struggle without a degree. From what I see, a degree is basically the approval of the government and the wealthy to allow you to &quot;succeed&quot; in life. Such as, have a job that pays you a wage that allows you to live comfortably, being socially held at a higher esteem, ect...<br /><br />I suppose I just have a hard time accepting that my life is meaningless. I am just another mindless monkey that will live a life of mediocrity, struggle, and pain; then die. The world being none the wiser or better off for my ever being there.silver_wasphttps://www.blogger.com/profile/14066742757343315632noreply@blogger.com
Refer Friends | Parklane Guitars

Get a 15% discount on your order: apply the reward when placing your first order.

Get a 15% discount for each friend you refer, and get special perks for you and your friends:
1. Give your friends a 15% discount.
2. Get a 15% discount for each friend who places an order.
The Null Space ~ VARS Multivariate Product Forecasting

There are times when a lot of things are happening in a retail environment, which can make forecasting on all of them a real chore. If we're fortunate, there will be a nice smooth trend we can pick out with a low-maintenance model like exponential smoothing (ETS). However, if we need a low-maintenance forecast and the interactions between different things happening, like price and promotion, are making it difficult, then Vector Auto Regressive models (VARs) can help make sense of the situation. VAR models tie different variables together into a single unified model, and we may not even need any additional external inputs. This is important because providing future values takes time, especially if assumptions change often. By including causal variables directly in the model we minimize that wasted time: there will be fewer values we need to anticipate. As an added bonus, where we have two-way interactions, say between price and traffic counts, the model will naturally capture that inter-relationship.

Let's explore using a VAR model to forecast product sales. We will use the same data that we used in this article where we discussed ARIMA models. It is a small time series dataset available on Kaggle at https://www.kaggle.com/datasets/soumyadiptadas/ These data describe a number of products that have inter-relationships, plus price and temperature information. The data are very ill-behaved, which makes them a great real-world example. Creating a forecast allows us to make a short-term prediction that we will sell 989 pairs of mittens three days from today, with a possible range between 400 and 1,500. Follow along below in R Studio to see how we arrived at this result.

Exploratory Analysis

Our first step will be to load the 100 series observations and run summary statistics to identify any missing values. There are no missing values, so imputation is not required.
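Before exploring the data, it may help to see the mechanics a VAR builds on. An autoregressive model predicts today's value from the p previous values (the lags); a VAR does the same for several series at once, letting each series depend on the lags of all the others. Here is a minimal single-series sketch in Python, with made-up coefficients purely for illustration (this is not part of the R workflow below):

```python
def ar_forecast(history, coeffs, intercept=0.0):
    """One-step AR(p) forecast: intercept + c1*y[t-1] + c2*y[t-2] + ...
    coeffs[0] multiplies the most recent observation."""
    p = len(coeffs)
    recent = history[-p:][::-1]  # most recent observation first
    return intercept + sum(c * y for c, y in zip(coeffs, recent))

# Hypothetical daily sales and hypothetical AR(2) coefficients:
sales = [100, 104, 110]
forecast = ar_forecast(sales, coeffs=[0.6, 0.3])  # 0.6*110 + 0.3*104 = 97.2
print(forecast)
```

A VAR replaces the scalar coefficients with matrices, so that lagged price can enter the sales equation and vice versa; that is the inter-relationship the vars package estimates for us.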
Next we will run pairs.panels from the psych package, because it provides a great overview of any correlations within our category of products, plus the price and temperature variables.

TimeData <- read.csv("TimeData.csv")
library(psych)
pairs.panels(TimeData[,2:8])  # product, price and temperature columns

The overview shows us that all but one product have a small negative correlation with Price, with P4 being the most responsive. P2 and P3 have massive correlations with temperature, but the others do not; and what is most interesting is that certain products have correlations with each other. P4 and P5 have a modest 0.16 positive correlation, suggesting they are bought together. P2 and P3 have a massive positive correlation. P3 and P4 have a modest negative correlation, suggesting they are partial substitutes. Given that high temperature will negatively impact the sale of P3, which will then impact the sale of P2, we can see already that this would be fairly irritating to forecast, especially if we had to provide future values for weather and expected sales of related items. These products could be jackets and mittens compared to running shoes in a clothing store; this type of inter-relationship is very common in retail.

Time Series for P3

We are interested in making an order of P3; let's call these the mittens and plot the prior sales information.

plot(TimeData$ProductP3, type = "l", main = "Product3 Plot")

What is significant about P3 is that the trend is increasing; possibly temperatures were cooling. There is also a seasonal cycle that occurs about every 7 days, which is not uncommon in retail. Otherwise, the variations seem irregular. Potentially, this is due to promotional weeks or unusual amounts of snow. It could also be due to promotions on complementary or substitute items. This information may be contained in our other variables.

Creating our Time Series

Feeding several variables into the ts() function in R will return an mts object, for multivariate time series analysis.
However, first we will clean up the excessively long variable names. These are not Null Space Approved. Then we will check all of our variables for stationarity by running an Augmented Dickey-Fuller test and a KPSS test. Stationarity is a requirement for VAR models to produce good results.

## For those with a distaste for coding long variable names
colnames(TimeData)[2:8] <- c("P1", "P2", "P3", "P4", "P5", "Price", "Temp")

### Check for stationarity (adf.test and kpss.test come from the tseries package)
library(tseries)
apply(TimeData[,3:8], MARGIN = 2, FUN = adf.test)
apply(TimeData[,3:8], MARGIN = 2, FUN = kpss.test, null = "Level")

The ADF test suggests our data are stationary, but this test is known for false positives, so we confirm with a KPSS test. As it turns out, the two have differing opinions. In this particular case that means we should difference our data where they disagree; where they agree we will avoid differencing. For a brief explanation of differencing please see this article on ARIMA forecasting.

KPSS suggests that we should leave Price undifferenced, along with P4 and P5. Following this we will use dplyr to mutate a difference across the other numeric variables. Finally, we will load our transformed data into an mts object for analysis.

### Set up the mts object; difference() here is the NA-padded version from the tsibble package
library(dplyr)
Time.mts <- TimeData %>%
  mutate(across(c(2:4, 8), difference)) %>%
  dplyr::select(t, P1, P2, P3, P4, P5, Price, Temp) %>%
  filter(!is.na(P3))
Time.mts <- ts(Time.mts)

Endogenous Variables

VAR models assume variables are endogenous, that is to say they have a bi-directional effect on each other; for example, high demand causing price to increase, and mittens selling with jackets. The side inputs are not endogenous: they affect the model but are not affected by it.
In this case we separate out temperature, because it is unlikely that sales of mittens affect the temperature.

ExoTemp <- as.matrix(Time.mts[,8])
colnames(ExoTemp) <- "Temp"

Above we put our variable into a matrix with the column name "Temp", because VAR will complain and rename our variable "exo1" if we simply extract the column from the time series.

Determining Lag Value

Lagged values are just earlier observations of the series, so today's value is tomorrow's lag. With VAR models we need to specify the number of lags, essentially telling the model how many past periods feed into each prediction (equivalently, how many future periods what happens today will affect). Fortunately, choosing lags is made much simpler with the VARselect function. We let VARselect know which variables to use as part of the multivariate structure; those variables will be inter-dependent with each other. We specify the seasonal period as 7, because there is a weekly cycle, and an exogenous variable for temperature. That is a side input to the model, more frequently called an external regressor.

VARselect(Time.mts[,2:7], exogen = ExoTemp, lag.max = 10, season = 7)

AIC(n) HQ(n) SC(n) FPE(n)

Although several options exist for selecting the lag, we will generally decide on AIC or AICc. In this case we have no option for AICc, so we will use a value of p = 10 for our lags. AIC is widely considered the best selector. Before moving on, it is worth mentioning that for this series VARselect would choose 13 lags, but this seems like an excessive number; therefore we cap the number of lags at 10. If we are unhappy with our results, we can always come back and see how things go with 1, 4, or 13 lags.

Running the Model

Having set everything up to determine our lags, we simply have to use the same parameters and store the model in a variable. The only difference is that we will select p = 10 lags.

Mod.var <- vars::VAR(Time.mts[,2:7], exogen = ExoTemp, p = 10, lag.max = 10, season = 7)

Forecast Data

Next we create some data to test our forecast on.
We take a recent slice at the end of the time series and then reverse it, to simulate a trend reversal in the weather. This is converted to a matrix and given the column name "Temp", matching our exogen.

NewData <- rev(Time.mts[80:99, 8])
NewData <- as.matrix(NewData)
colnames(NewData) <- "Temp"

Next we make predictions with the generic predict() function. We must specify the number of future periods with n.ahead; we will go 20 periods out. We would now be finished if we had not used temperature as an exogen. Given that we have, we need to ensure that dumvar receives a matrix that is exactly as long as n.ahead. This is the matrix we created earlier; it contains only values for temperature, along with a column name matching our exogen ("Temp").

pred <- predict(Mod.var, n.ahead = 20, dumvar = NewData)

Our plots look pretty good, and as a bonus we now have forecasts for P3 plus all of the products in our category without having to do any extra work. This is a great feature of VAR models.

Inverting the Difference

Recall that we differenced a number of variables, including P3, which is our forecast goal. In order to get useful forecast numbers we will need to undo this differencing operation. To invert a time series with a single difference, we take the original first observation as a starting point and add it to the cumulative sum of the differenced series, i.e. Cumulative Sum(series) + Starting Value.

### Invert Differences [P3]
P3 <- pred$fcst$P3
BeginP3 <- head(TimeData, 1)$P3  # Begin of Time Series
par(mfrow = c(1, 2))
plot(pred$model$datamat$P3, type = "l", main = "Differenced Data", ylab = "Volume P3")
plot(cumsum(pred$model$datamat$P3) + BeginP3, type = "l", main = "Inverted Difference", ylab = "Volume P3")

P3 puffs back up into its original glory after this operation. Now we need to apply this to both our pre-forecast and forecast values. To accomplish this, we will first invert the pre-forecast period with the start value BeginP3 above, from the head of our series.
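The cumsum-plus-starting-value rule is easy to sanity-check outside of R. Here is a pure-Python sketch of the round trip, differencing a toy series and then inverting it (the numbers are made up):

```python
from itertools import accumulate

def invert_difference(diffs, start):
    """Undo a single difference: cumulative-sum the differenced values
    and shift every partial sum by the original starting value."""
    return [start + c for c in accumulate(diffs)]

original = [120, 118, 125, 131, 128]
diffs = [b - a for a, b in zip(original, original[1:])]  # [-2, 7, 6, -3]

# cumsum(diffs) + first observation recovers the rest of the series.
recovered = invert_difference(diffs, start=original[0])
print(recovered)  # [118, 125, 131, 128]
```

This is exactly what cumsum(pred$model$datamat$P3) + BeginP3 does for the pre-period, and what the apply() call does column-by-column for the forecast point and its bounds.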
Next we will identify StartP3 from the tail of the original series. Finally, we use the apply function with column margin (2) to apply this process to all of the forecast output. Then we put all of this information into a data frame so it is easier to work with. That will require using NA values to represent points in the pre-period where we have no confidence bounds.

## Pre-Period Cumsum
P3Pre <- cumsum(pred$model$datamat$P3) + BeginP3
## Forecast Start
StartP3 <- tail(TimeData, 1)$P3
## Put everything into a data frame
P3Fcst <- as.data.frame(apply(P3, MARGIN = 2, FUN = function(x) {cumsum(x) + StartP3}))
PlotForecast <- data.frame(
  index = seq(1, length(P3Pre) + length(P3Fcst[,1]), by = 1),
  Point = c(P3Pre, P3Fcst[,1]),
  Upper = c(rep(NA, length(P3Pre)), P3Fcst[,3]),
  Lower = c(rep(NA, length(P3Pre)), P3Fcst[,2]),
  Group = factor(c(rep("Pre", length(P3Pre)), rep("Fcst", length(P3Fcst[,1]))))
)

Plotting the Forecast

We can now use ggplot2 to view our forecast for P3. Note that the inverted confidence bounds are omitted from the pre-period to avoid ruining the plot scale.

library(ggplot2)
ggplot(PlotForecast, aes(index, Point, color = Group, group = 1)) +
  geom_line() +
  labs(title = "VARS Forecast P3 (Inverted Difference)",
       color = "Forecast", y = "Product P3 Volume", x = "Time Index") +
  theme_minimal()

The forecast looks plausible, and from this point we could work on testing and, if necessary, tightening up some of the parameters. As it stands, our forecast for three days from now would be 989 pairs of mittens, with a likely range between 400 and 1,500.

Prediction Confidence

The confidence bounds for this model are very large after a few periods, but we could easily run this forecast every day with a new seven-day weather outlook. For an autoregressive model it makes sense that uncertainty grows as we get farther away from the known data, because each future observation depends on what comes before it, and eventually on the estimate of an estimate.
That doesn't mean that our mitten sales will suddenly hit zero after 100 periods of growth, although that could happen with an early spring. Rather, it reflects the outer bounds of model uncertainty due to declining information. However, provided past patterns hold steady, we would expect to see actuals that are reasonably close to the forecasted values, and this type of model lends itself well to repeated, low-maintenance, rolling forecasts, which can be a very useful thing.

VAR models are popular in econometrics because they naturally handle a number of correlated variables that interact with each other. For the same reason they can be very useful in a retail context, particularly for co-dependent products. They are often not a great choice for longer-term forecasts, but for short-term forecasts they can be very convenient options because they avoid the problem of having to provide future values. They can be auto-tuned easily, and they simultaneously forecast all of the co-dependent variables. Happy forecasting!

Chatfield, Chris; Xing, Haipeng. The Analysis of Time Series: An Introduction with R, 7th Ed. Chapman & Hall/CRC, 2019.
These days, cosmologists, astrophysicists and all that lot fill every nook and cranny of CERN TH. They also fill the seminar schedule with their deep dark matter talks. I have no choice but to make another dark entry in this blog. Out of the 10^6 seminars I've heard this week I pick the one by Marco Cirelli.

Minimal Dark Matter

The common approach to dark matter is to obtain a candidate particle in a framework designed to solve some other problem of the standard model. The most studied example is the lightest neutralino in the MSSM. In this case, the dark matter particle is a by-product of a theory whose main motivation is to solve the hierarchy problem. This kind of attitude is perfectly understandable from the psychological point of view. By the same mechanism, a mobile phone sells better if it also plays mp3s, takes photographs and sings lullabies. But after all, the only solid evidence for the existence of physics beyond the standard model is the observation of dark matter itself. Therefore it seems perfectly justified to construct extensions of the standard model with the sole objective of accommodating dark matter. Such an extension explains current observations while avoiding the excess baggage of full-fledged theoretical frameworks like supersymmetry. This is the logic behind the model presented by Marco. The model is not really minimal (adding just a scalar singlet would be more minimal), but it is simple enough and cute. Marco adds one scalar or one Dirac fermion to the standard model, and assigns it a charge under SU(2)_L x U(1)_Y. The only new continuous parameter is the mass M of the new particle. In addition, there is a discrete set of choices of the representation. The obvious requirement is that the representation should contain an electrically neutral particle, which could play the role of the dark matter particle.
According to the formula Q = T3 + Y, we can have an SU(2) doublet with the hypercharge Y = 1/2, or a triplet with Y = 0 or Y = 1, or larger multiplets. Having chosen the representation, one can proceed to calculate the dark matter abundance. In the early universe, the dark matter particles thermalize due to their gauge interactions with the W and Z gauge bosons. The final abundance depends on the annihilation cross section, which in turn depends on the unknown mass M and the well-known standard model gauge couplings. Thus, by comparing the calculated abundance with the observed one, we can fix the mass of the dark matter particle. Each representation requires a different mass to match the observations. For example, a fermion doublet requires M = 1 TeV, while a fermion quintuplet with Y = 0 needs M = 10 TeV. After matching to observations, the model has no free parameters and yields quite definite predictions, for example the prediction for the direct detection cross section. We can see that the cross sections are within reach of the future experiments. The dark matter particle, together with its charged partners in the SU(2) multiplet, could also be discovered at colliders (if M is not heavier than a few TeV) or in the cosmic rays. There are the usual indirect detection signals as well. The model was originally introduced in a 2005 paper. The recent update corrects the previous computation of the dark matter abundance by including the Sommerfeld corrections.

These days the CERN Theory Institute program is focused on the interplay between cosmology and LHC phenomenology. Throughout July you should expect an overrepresentation of cosmology in this blog. Last Wednesday, Julien Lesgourgues talked about the Planck satellite. Julien is worth listening to. First of all, because of his cute French accent. Also, because his talks are always clear and often damn interesting. Here is what he said this time.
It is the next step after the successful COBE and WMAP missions. Although it looks like any modern vacuum cleaner, the instruments offer 2*10^(-6) resolution of temperature fluctuations (a factor 10 better than WMAP) and 5' angular resolution (a factor 3 better than WMAP). Thanks to that, Planck will be able to measure more precisely the angular correlations of the CMB temperature fluctuations, especially at higher multipoles (smaller angular scales), as illustrated in the propaganda picture. Even more dramatic is the improvement in measuring the CMB polarization. In this context, one splits the polarization into the E-mode and the B-mode (the divergence and the curl). The E-mode can be seeded by scalar gravitational density perturbations, which are responsible for at least half of the already observed amplitude of temperature fluctuations. For large angular scales, the E-mode has already been observed by WMAP. The B-mode, on the other hand, must originate from tensor perturbations, that is from gravity waves in the early universe. These gravity waves can be produced by inflation. Planck will measure the E-mode very precisely, while the B-mode is a challenge. Observing the latter requires quite some luck, since many models of inflation predict the B-mode well below the Planck sensitivity. Planck is often described as the ultimate CMB temperature measurement. That is because its angular resolution corresponds to the minimal one at which temperature fluctuations of cosmological origin may exist at all. At scales smaller than 5' the cosmological imprint in the CMB is suppressed by the so-called Silk damping: 5' corresponds roughly to the photon mean free path in the early universe, so that fluctuations at smaller scales get washed out. However, there is still room for future missions to improve the polarization measurements. All these precision measurements will serve the noble cause of precision cosmology, that is, a precise determination of the cosmological parameters.
Currently, the CMB and other data are well described by the Lambda-CDM model, which has become the standard model of cosmology. Lambda-CDM has 6 adjustable parameters. One is the Hubble constant. Another two are the cold (non-relativistic) dark matter and the baryonic matter densities. In this model matter is supplemented by the cosmological constant, so as to end up with a spatially flat universe. Another two parameters describe the spectrum of gravitational perturbations (the scalar amplitude and the spectral index). The last one is the optical depth to reionization. Currently, we know these parameters with a remarkable 10% accuracy. Planck will further improve the accuracy by a factor of 2-3 in most cases. Of course, Planck may find some deviations from the Lambda-CDM model. There exist, in fact, many reasonable extensions that do not require any exotic physics. For example, there may be the already mentioned tensor perturbations, non-gaussianities or a running of the spectral index, which are predictions of certain models of inflation. Planck could find the trace of a hot (relativistic) component of the dark matter. Such a contribution might come from the neutrinos, if the sum of their masses is at least 0.2 eV. Furthermore, Planck will accurately test the spatial flatness assumption. The most exciting discovery would be to see that the equation of state of dark energy differs from w = -1 (the cosmological constant). This would point to some dynamical field as the agent responsible for the vacuum energy. Finally, Planck will test models of inflation. Although it is unlikely that the measurement will favour one particular model, it may exclude large classes of models. There are two parameters that appear most interesting in this context. One is the spectral index nS. Inflation predicts small departures from the scale-invariant Harrison-Zeldovich spectrum corresponding to nS = 1.
It would be nice to see this departure beyond all doubt, as it would further strengthen the inflation paradigm. The currently favoured value is nS = 0.95, three sigma away from 1. The other interesting parameter is the ratio r of the tensor to scalar perturbations. The current limit is r < 0.5, while Planck is sensitive down to r = 0.1. If inflation takes place at energies close to the GUT scale, tensor perturbations might be produced at an observable rate. If nothing is observed, large-field inflation models will be disfavoured. Planck is going to launch in July 2008. This coincides with the first scheduled collisions at the LHC. Let's hope at least one of us will see something beyond the standard model. No slides, as usual.

Here is one more splinter of Nima Arkani-Hamed's CERN visit. Apart from a disappointing seminar for theorists, Nima gave another talk advertising his MARMOSET to a mostly experimental audience. OK, I know it was more than two weeks ago, but firstly it's summertime, and secondly, I'm still doing better with the schedule than the LHC. MARMOSET is a new tool for reconstructing the fundamental theory from the LHC data. When you ask phenomenologists their opinion about MARMOSET, officially they just burst out laughing. Off the record, you could hear something like "...little smartass trying to teach us how to analyze data..." often followed by *!%&?#/ ^@+`@¦$. I cannot judge to what extent this kind of attitude is justified. I guess it is partly a reaction to overselling the product. To my hopelessly theoretical mind, the talk and the whole idea appeared quite interesting. In the standard approach, the starting point for interpreting the data is a lagrangian describing the particles and interactions. From the lagrangian, all the necessary parton-level amplitudes can be calculated. The result is fed to Monte Carlo simulations that convolute the amplitudes with the parton distribution functions, calculate the phase space distributions and so on.
At the end of this chain you get the signal + the SM background that you can compare with the observations. Nima pointed out several drawbacks of such an approach. The connection between the lagrangian and the predicted signal is very obscure. The lagrangians typically have a large number of free parameters, of which only a few combinations affect the physical observables. Typically, the signal, e.g. a pT distribution, has a small dependence on the precise form of the amplitude. Moreover, at the dawn of the LHC era we have little idea which underlying theory and which lagrangian will turn out to be relevant. This is in strong contrast with the situation that has reigned for the last 30 years, when the discovered particles (the W and Z bosons, the top quark) were expected and the underlying lagrangian was known. Nima says that this new situation requires new strategies. Motivated by that, Nima & co came up earlier this year with a paper proposing an intermediate step between the lagrangian and the data. The new framework is called an On-Shell Effective Theory (OSET). The idea is to study physical processes using only kinematic properties of the particles involved. Instead of the lagrangian, one specifies the masses, production cross sections and decay modes of the new particles. The amplitudes are parameterized by one or two shape variables. This simple parameterization is claimed to reproduce the essential phenomenology that could equally well be obtained from more complicated and more time-consuming simulations in the standard approach. MARMOSET is a package allowing OSET-based Monte Carlo simulations of physical processes. As input it requires just the new particles plus their production and decay modes. Based on this, it generates all possible event topologies and scans the OSET parameters, like production and decay rates, in order to fit the data. Failure implies the necessity to add new particles or new decay channels.
In this recursive fashion one can extract the essential features of the underlying fundamental theory. This sounds very simple. So far, the method has been applied under greenhouse conditions to analyze the "black boxes" prepared for the LHC olympics. Can it be useful when it comes to real data? Professionals say that MARMOSET does not offer anything they could not, if necessary, implement within half an hour. On the other hand, it looks like a useful tool for laymen. If a clear signal is discovered at the LHC, the package can provide a quick check if your favourite theory is able to reproduce the broad features of the signal. Convince me if I'm wrong... Anyway, we'll see in two… The video recording is available here.
{"url":"https://resonaances.blogspot.com/2007/07/?m=0","timestamp":"2024-11-08T07:54:33Z","content_type":"application/xhtml+xml","content_length":"94373","record_id":"<urn:uuid:dde16860-f14f-4b0c-b516-8a4af5e63202>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00111.warc.gz"}
The Updated Scholar | Tag: sequent calculus
About a year and a half ago I wrote about hypersequents, a modification of the tried and trusted sequent calculus approach to structural proof theory. In that setting, instead of working with a single sequent (a set of premises alongside a set of possible conclusions) we work with a list of sequents. In this paper,…
{"url":"https://blogs.fediscience.org/the-updated-scholar/tag/sequent-calculus/","timestamp":"2024-11-04T05:53:54Z","content_type":"text/html","content_length":"76424","record_id":"<urn:uuid:df183118-b95e-4487-a9c9-0cd11e1b991d>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00327.warc.gz"}
Volume Formulas (pi ≈ 3.14159)
Note: "ab" means "a" multiplied by "b". "a^2" means "a squared", which is the same as "a" times "a". "b^3" means "b cubed", which is the same as "b" times "b" times "b". Be careful!! Units count. Use the same units for all measurements.
Examples:
• cube = a^3
• rectangular prism = a b c
• irregular prism = b h
• cylinder = b h = pi r^2 h
• pyramid = (1/3) b h
• cone = (1/3) b h = (1/3) pi r^2 h
• sphere = (4/3) pi r^3
• ellipsoid = (4/3) pi r[1] r[2] r[3]
Volume is measured in "cubic" units. The volume of a figure is the number of cubes required to fill it completely, like blocks in a box. Volume of a cube = side times side times side. Since each side of a cube is the same, it can simply be the length of one side cubed. If a cube has one side of 4 inches, the volume would be 4 inches times 4 inches times 4 inches, or 64 cubic inches. (Cubic inches can also be written in^3.) Be sure to use the same units for all measurements. You cannot multiply feet times inches times yards; that doesn't make a perfectly cubed measurement. The volume of a rectangular prism is the length times the width times the height. If the width is 4 inches, the length is 1 foot and the height is 3 feet, what is the volume?
NOT CORRECT .... 4 times 1 times 3 = 12
CORRECT .... 4 inches is the same as 1/3 foot. Volume is 1/3 foot times 1 foot times 3 feet = 1 cubic foot (or 1 cu. ft., or 1 ft^3).
{"url":"http://www.math.com/tables/geometry/volumes.htm","timestamp":"2024-11-14T18:09:13Z","content_type":"text/html","content_length":"18336","record_id":"<urn:uuid:1a12a420-ba03-4e7e-afb2-a9579c857ba6>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00509.warc.gz"}
The value of λ for which the system of equations 2x − y − z = 12, … has no solution is:
A. 3
B. -3
C. 2
D. -2
The correct answer is: -2
Let D be the determinant of the coefficient matrix. D = 0 is a necessary condition for the system to have no solution (together with inconsistent right-hand sides, which rules out the infinitely-many-solutions case):
D = 2(−2λ − 1) + 1(λ − 1) − 1(1 + 2) = 0
=> −3λ − 6 = 0
=> λ = −2
Hence λ = −2 for no solution.
{"url":"https://www.turito.com/ask-a-doubt/maths-the-value-of-lambda-for-which-the-system-ofequations-2x-y-z-12-x-2y-z-4-x-y-lz-4-has-no-solution-is-2-2-3-3-q2f4cd3","timestamp":"2024-11-14T15:09:31Z","content_type":"application/xhtml+xml","content_length":"459955","record_id":"<urn:uuid:45434c88-faf6-479d-b713-83a8ca46ca86>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00241.warc.gz"}
Education and career Born on 4 January 1935 in London, Brown attended Oxford University, obtaining a B.A. in 1956 and a D.Phil. in 1962.^[2] Brown began his teaching career during his doctorate work, serving as an assistant lecturer at the University of Liverpool before assuming the position of Lecturer. In 1964, he took a position at the University of Hull, serving first as a Senior Lecturer and then as a Reader before becoming a Professor of pure mathematics at Bangor University, then a part of the University of Wales, in 1970. Brown served as Professor of Pure Mathematics for 30 years; he also served during the 1983–84 term as a Professor for one month at Louis Pasteur University in Strasbourg.^[2] In 1999, Brown took a half-time research professorship until he became Professor Emeritus in 2001. He was elected as a Fellow of the Learned Society of Wales in 2016. Editing and writing Brown has served as an editor or on the editorial board for a number of print and electronic journals. He began in 1968 with the Chapman & Hall Mathematics Series, contributing through 1986.^[2] In 1975, he joined the editorial advisory board of the London Mathematical Society, remaining through 1994. Two years later, he joined the editorial board of Applied Categorical Structures,^[3] continuing through 2007. From 1995 and 1999, respectively, he has been active with the electronic journals Theory and Applications of Categories^[4] and Homology, Homotopy and Applications,^[5] which he helped found. Since 2006, he has been involved with Journal of Homotopy and Related Structures.^[6] His mathematical research interests range from algebraic topology and groupoids, to homology theory, category theory, mathematical biology, mathematical physics and higher-dimensional algebra.^[7]^[8]^[9]^[10]^[11] Brown has authored or edited a number of books and over 160 academic papers published in academic journals or collections. 
His first published paper was "Ten topologies for X × Y", which was published in the Quarterly Journal of Mathematics in 1963.^[12] Since then, his publications have appeared in many journals, including but not limited to the Journal of Algebra, Proceedings of the American Mathematical Society, Mathematische Zeitschrift, College Mathematics Journal, and American Mathematical Monthly. He is also known for several recent co-authored papers on categorical … His books include several standard topology and algebraic topology textbooks: Elements of Modern Topology (1968), Low-Dimensional Topology (1979, co-edited with T.L. Thickstun), Topology: a geometric account of general topology, homotopy types, and the fundamental groupoid (1998),^[14]^[15] Topology and Groupoids (2006)^[16] and Nonabelian Algebraic Topology: Filtered Spaces, Crossed Complexes, Cubical Homotopy Groupoids (EMS, 2010).^[16]^[17]^[18]^[19]^[20]^[21]^[22]^[23]^[24]^[25] His recent fundamental results that extend the classical Van Kampen theorem to higher homotopy in higher dimensions (HHSvKT) are of substantial interest for solving several problems in algebraic topology, both old and new.^[26] Moreover, developments in algebraic topology have often had wider implications, as for example in algebraic geometry and also in algebraic number theory. Such higher-dimensional (HHSvKT) theorems are about homotopy invariants of structured spaces, especially those for filtered spaces or n-cubes of spaces. An example is the fact that the relative Hurewicz theorem is a consequence of HHSvKT, and this then suggested a triadic Hurewicz theorem.
See also
External links
• Ronald Brown at the Mathematics Genealogy Project
• "Ronald Brown's Biography and publications".
• "Ronald Brown's Home Page".
• "MathOverflow user page".
• Higher-Dimensional Algebra citations list • Editorial Board of Journal of Homotopy and Related Structures (JHRS) • nLab Abstract Mathematics Website • Editorial Board of Homology, Homotopy and Applications (HHA) • The Origins of `Pursuing Stacks' by Alexander Grothendieck • Homology, Homotopy and Applications • Theory and Applications of Categories
{"url":"https://www.knowpia.com/knowpedia/Ronald_Brown_(mathematician)","timestamp":"2024-11-05T13:43:08Z","content_type":"text/html","content_length":"98155","record_id":"<urn:uuid:7875ebd9-34e6-4ff3-84b9-b6ace3f41259>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00409.warc.gz"}
What Careers Use Linear Equations?
A surprising number of occupations use linear equations. In math, linear equations use two or more variables that produce a graph that proceeds in a straight line, such as y = x + 2. Learning how to use and solve linear equations can be vital to entering some popular careers. Careers using linear equations range from health care workers to store clerks and everything in between.
Business Manager
Managers in a variety of fields are required to use linear equations to calculate measurements, make purchases, evaluate raises and determine how many employees are required to complete specific jobs. Some of the more common managerial positions using linear equations include advertising, real estate, funeral director, purchasing and agriculture. For example, an advertising manager might plan an online ad campaign budget using linear equations based on the cost per click.
Financial Analyst
Financial occupations often require the use of linear equations. Accountants, auditors, budget analysts, insurance underwriters and loan officers frequently use linear equations to balance accounts, determine pricing and set budgets. Linear equations used in financial occupations may also be used in creating family budgets. A financial planner, for example, uses linear equations to determine the total worth of a client's stocks.
Computer Programmer
Computer programmers and support specialists must be able to solve linear equations. Linear equations are used within software applications, on websites and in security settings, which must be programmed by a computer programmer. Support specialists must be able to understand linear equations to troubleshoot many software and networking issues. A programmer, for example, might use linear equations to calculate the time needed to update a large database of information.
Research Scientist
Scientists of all types use linear equations on a regular basis.
Life, physical and social scientists all have situations where linear equations make their jobs easier. Biologists and chemists alike use the same linear equation format to solve problems such as determining ingredient portions, sizes of forests and atmospheric conditions. A chemist might, for example, set up several linear equations to find the right combination of chemicals needed for an experiment.
Professional Engineer
Engineering is one of the most well-known fields for using linear equations. Engineers include architects, surveyors and a variety of engineers in fields such as:
• biomedical
• chemical
• electrical
• mechanical
• nuclear
Linear equations are used to calculate measurements for both solids and liquids. An electrical engineer, for example, uses linear equations to solve problems involving voltage, current and resistance.
Resource Manager
Human resources positions and even store clerks may find the need for linear equations. This is most common when calculating payroll and purchases without calculators. Linear equations are also used when placing orders for supplies and products, and can help find the lowest costs for an order, taking into account prices and volume discounts.
Architect and Builder
The construction field frequently uses linear equations when measuring and cutting all types of materials for job sites. Both carpenters and electricians are included in the construction field and use linear equations on many of the jobs they do. A carpenter might, for example, use a linear equation to estimate the cost of wood and nails for a remodeling project.
Health Care Professional
The health care field, including doctors and nurses, often uses linear equations to calculate medical doses. Linear equations are also used to determine how different medications may interact with each other and how to determine correct dosage amounts to prevent overdose with patients using multiple medications.
Doctors also use linear equations to calculate doses based on a patient's weight.
{"url":"https://www.sciencing.com:443/careers-use-linear-equations-6060294/","timestamp":"2024-11-11T01:08:51Z","content_type":"application/xhtml+xml","content_length":"76701","record_id":"<urn:uuid:cffb1b21-54df-4086-be4e-9369dda02576>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00173.warc.gz"}
Office of Energy Efficiency and Renewable Energy
Alternative Fuels and Advanced Vehicles Data Center
The Office of Energy Efficiency and Renewable Energy (EERE), headed by the Assistant Secretary for Energy Efficiency and Renewable Energy, is an office within the United States Department of Energy.
{"url":"https://www.babelnet.org/synset?id=bn%3A00120964n&lang=EN","timestamp":"2024-11-02T09:06:44Z","content_type":"text/html","content_length":"302730","record_id":"<urn:uuid:7fd802a4-172d-4dfc-8dcb-6672ca347fb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00573.warc.gz"}
Kiloparsecs to Cubit (Greek) Converter
How to use this Kiloparsecs to Cubit (Greek) Converter
Follow these steps to convert a given length from the units of Kiloparsecs to the units of Cubit (Greek).
1. Enter the input Kiloparsecs value in the text field.
2. The calculator converts the given Kiloparsecs into Cubit (Greek) in real time using the conversion formula, and displays the result under the Cubit (Greek) label. You do not need to click any button. If the input changes, the Cubit (Greek) value is recalculated, just like that.
3. You may copy the resulting Cubit (Greek) value using the Copy button.
4. To view a detailed step-by-step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the reset button present below the input field.
What is the Formula to convert Kiloparsecs to Cubit (Greek)?
The formula to convert a given length from Kiloparsecs to Cubit (Greek) is:
Length[Cubit (Greek)] = Length[Kiloparsecs] × 66675833900023420000
Substitute the given value of length in kiloparsecs, i.e., Length[Kiloparsecs], in the above formula and simplify the right-hand side. The resulting value is the length in cubit (Greek), i.e., Length[Cubit (Greek)].
Example: Consider that the diameter of the Milky Way galaxy is about 30 kiloparsecs. Convert this diameter from kiloparsecs to Cubit (Greek).
The length in kiloparsecs is: Length[Kiloparsecs] = 30
Substitute the given length Length[Kiloparsecs] = 30 in the above formula.
Length[Cubit (Greek)] = 30 × 66675833900023420000 = 2.0002750170007027e+21
Final Answer: Therefore, 30 kpc is equal to 2.0002750170007027e+21 cubit (Greek).
Consider that the Sun is located approximately 8 kiloparsecs from the center of the Milky Way. Convert this distance from kiloparsecs to Cubit (Greek).
The length in kiloparsecs is: Length[Kiloparsecs] = 8
The formula to convert length from kiloparsecs to cubit (Greek) is:
Length[Cubit (Greek)] = Length[Kiloparsecs] × 66675833900023420000
Substitute the given length Length[Kiloparsecs] = 8 in the above formula.
Length[Cubit (Greek)] = 8 × 66675833900023420000 = 533406671200187400000
Final Answer: Therefore, 8 kpc is equal to 533406671200187400000 cubit (Greek).
Kiloparsecs to Cubit (Greek) Conversion Table
The following table gives some of the most used conversions from Kiloparsecs to Cubit (Greek).
Kiloparsecs (kpc) | Cubit (Greek)
0 kpc | 0 cubit (Greek)
1 kpc | 66675833900023420000 cubit (Greek)
2 kpc | 133351667800046850000 cubit (Greek)
3 kpc | 200027501700070280000 cubit (Greek)
4 kpc | 266703335600093700000 cubit (Greek)
5 kpc | 333379169500117140000 cubit (Greek)
6 kpc | 400055003400140550000 cubit (Greek)
7 kpc | 466730837300164000000 cubit (Greek)
8 kpc | 533406671200187400000 cubit (Greek)
9 kpc | 600082505100210900000 cubit (Greek)
10 kpc | 666758339000234300000 cubit (Greek)
20 kpc | 1.3335166780004686e+21 cubit (Greek)
50 kpc | 3.3337916950011713e+21 cubit (Greek)
100 kpc | 6.667583390002343e+21 cubit (Greek)
1000 kpc | 6.667583390002342e+22 cubit (Greek)
10000 kpc | 6.667583390002342e+23 cubit (Greek)
100000 kpc | 6.667583390002343e+24 cubit (Greek)
A kiloparsec (kpc) is a unit of length used in astronomy to measure astronomical distances. One kiloparsec is equivalent to about 3,262 light-years or approximately 3.086 × 10^19 meters. The kiloparsec is defined as one thousand parsecs, where one parsec is the distance at which one astronomical unit subtends an angle of one arcsecond.
Kiloparsecs are used to measure large distances between celestial objects, such as the size of galaxies or the distance between galactic structures. They provide a convenient scale for expressing vast distances in the universe.
Cubit (Greek)
A Greek cubit is an ancient unit of length used in Greece and its surrounding regions. One Greek cubit is approximately equivalent to 18.2 inches or about 0.462 meters. The Greek cubit was used in classical Greece for various purposes, including architectural design, land measurement, and textiles. Its length was based on the distance from the elbow to the tip of the middle finger and could vary slightly depending on the historical period and specific region. Greek cubits are of historical interest for understanding ancient Greek construction and measurement practices. Although not in common use today, the unit provides valuable insight into the standards and techniques of ancient Greek architecture and trade.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Kiloparsecs to Cubit (Greek) in Length?
The formula to convert Kiloparsecs to Cubit (Greek) in Length is: Kiloparsecs * 66675833900023420000
2. Is this tool free or paid?
This Length conversion tool, which converts Kiloparsecs to Cubit (Greek), is completely free to use.
3. How do I convert Length from Kiloparsecs to Cubit (Greek)?
To convert Length from Kiloparsecs to Cubit (Greek), you can use the following formula: Kiloparsecs * 66675833900023420000
For example, if you have a value in Kiloparsecs, you substitute that value in place of Kiloparsecs in the above formula, and solve the mathematical expression to get the equivalent value in Cubit (Greek).
{"url":"https://convertonline.org/unit/?convert=kiloparsecs-cubits_greek","timestamp":"2024-11-10T13:01:43Z","content_type":"text/html","content_length":"92348","record_id":"<urn:uuid:eb79cfb9-580a-4c51-86c0-a6df951c0574>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00477.warc.gz"}
NYU Bridge to Tandon - Week 2
July 11, 2020
Module 3: Hello World
Process of executing a program - we'll focus on the CPU and main memory (RAM), alongside secondary memory. Both memory devices contain a lot of 0s and 1s, or bits. These bits encode information and are collected into groups of 8 bits, called bytes. The first byte is located at a physical address of 0, then 1, 2, etc. A sequence of bytes will capture a program.
What happens when we execute a program? The program is initially stored in secondary memory. First it gets copied into main memory for faster access by the CPU. Then instructions get fetched by the CPU one at a time. The CPU has a program counter (PC) register, initialized to the location in memory where the program begins. The CPU fetches, decodes and executes each instruction, then increments the PC.
We don't program with 0s and 1s, or machine language. We use a higher level language like C++, Java, Python, etc. We need a way to express programs closer to human thought and get them translated to machine code. This process of translation is called compilation or a build process. This is automated by tools like clang, gcc, javac.
A sample program that reads two numbers from the user and prints the sum (intro comments should express the program task, e.g. "this program takes two numbers from stdin, finds the sum and returns it to stdout"):

    // This program takes two numbers from stdin, finds the sum
    // and writes it to stdout.
    #include <iostream>
    using namespace std;

    int main() {
        int num1; // holds first input
        int num2; // holds second input
        int sum;  // holds sum
        cout << "Please enter two numbers separated by a space:" << endl;
        cin >> num1 >> num2;
        sum = num1 + num2;
        cout << num1 << " + " << num2 << " = " << sum << endl;
        return 0;
    }

Problem Solving with C++ sections 1.1-1.4
A set of instructions is called a program. A collection of programs for a computer is what we call the software for that computer (OS, Microsoft Office, VS Code). The physical parts are what we call hardware.
Hardware is "conceptually very simple" but the breadth and complexity of software in a working system is what makes computers complicated and powerful. Three classes of computers - PCs, workstations and mainframes. A PC (personal computer) is what it sounds like - relatively small, designed for one person at home. A workstation is an industrial-strength PC (super powerful Dells). A mainframe is even larger, is shared between users and typically requires a support staff. A network is a connection of computers that share resources (printers, for example, and even information). A work network could have multiple PCs, a few mainframes for shared compute, access to shared printers, etc. The book classifies hardware into input devices, output devices, processors, main memory and secondary memory. CPU + main memory = integrated compute unit. Everything else connects to those two and operates under their direction. Input devices - keyboards, mice, maybe voice-operated equipment? Output devices - monitors, printers, something that the CPU can write to externally. Keyboard + monitor = terminal (kinda). Memory - two forms. Main memory is a long list of numbered locations called memory locations (the number or index is the address), each holding a string of bits. 8 bits = 1 byte. You can store data in a location, then find it by its address. A consecutive chain of bytes can form data types like numbers and letters. One concern - 01000001 is both the letter A and the number 65. How does the computer know which type it is? The book skips over this. We call main memory Random Access Memory since randomly accessing a byte location takes constant time. Secondary storage is for permanent (non-volatile) storage. Writing to disk (HDD, SSD, etc) is for file storage. Disk access is usually sequential (that is, is this location X? No, go to next place. Is this location A? No, go to next place). The processor (our central processing unit) is the brain of the computer.
The chip on the actual hardware item is the processor (buying a Ryzen 3600 means the chip is that, the rest is supportive hardware). The CPU can interpret instructions but those instructions are typically very simple, eg ADD two numbers and store the result somewhere, MUL two nums, MOV an item, JMP to another location. We communicate with hardware via an operating system interface (go see my notes from the CS class from Wisconsin). If you tell the computer "run Steam" what really happens is you tell the operating system, which is in charge of coordinating with the hardware to find the file, load it into memory, etc. Program = set of instructions. Data = conceptualized input to a program. Niklaus Wirth said "Algorithms + Data Structures = Programs". High level languages vs machine language = covered above. We write closer to human-speak, but computers speak computer-speak. What do we do? We compile programs. A compiler takes source code and returns object code (a.out, main.o). A linker takes your program's object code, combines it with other code needed for some routines (like input/output) and returns a bundled version. The workflow is:

    C++ program --> Compiler --> Object code --> Linker --> Machine code
              Object code for other routines ------^

Algorithm: a series of precise instructions. A program lays out an algorithm, that is "get a list of names from the user, validate that there is at least 1, initialize a counter, for each name in the list increment the counter if the name starts with X, then return the counter." Program design can be broken down into two phases. There is the problem solving phase (what data structures, what steps, what algorithms, what optimizations) and the implementation phase (the writing of code). Quote: "Experience has shown that the two-phase process will produce a correctly working program faster." Object oriented programming (or OOP) is a method to model your problem domain as a set of interacting objects.
Each object can have its own internal algorithm for behavior. We care about OOP because it affords encapsulation, inheritance and polymorphism via classes (a combination of data and behavior / algorithm). The software life cycle (SLC, or SDLC if we insert development) is a set of 6 phases:
• Problem definition - analyzing the task
• Object and algorithm design
• Implementation
• Testing
• Maintenance and evolution
• Obsolescence
In C++ a return statement is a way of identifying that a program ends, and returning 0 is the usual way of returning successfully. We have variable declaration with prefix type annotations in C++, e.g. int bearCount;. C++ uses cin and cout for interacting with stdin and stdout. We often start a C++ program with #include <iostream> - this is called an include directive. iostream is joined by the linker at compile time. Directives always begin with the hash #. C++ allows us to open a namespace with, for example, using namespace std, which brings all of that namespace's names into file scope. We can compile with g++ or clang++ and specify a language standard with the --std flag, eg --std=c++17.
Module 4 : Data Types and Expressions - Part 1
We care about data, expressions and control flow. Data = types, classes, etc. Expressions = arithmetic, IO, etc. Control flow = if/else, while, functions, etc. In the above C++ code from module 3, int num1 is an example of data, cin and cout are expressions, and we had no control flow. Lines are by default executed sequentially. Everything has a type, because C++ is strongly typed. The int type holds integers. We fix the size to 4 bytes (32-bit ints), and in memory the address of the integer points to the first of the bytes. 32-bit ints means we can store 2^32 distinct values, so signed ints top out a little north of 2 billion. We can't represent all integers. Numbers are stored in bytes represented in two's complement. Two forms of data - variables and constants. int x declares a variable. 6 is a constant, since it cannot be redefined.
We call built-in constants literals (6 is a literal, we don't need to define it). User-defined or programmer-defined constants look more like const int MAX = 5;. Operators are type-constrained. Arithmetic operators like + require ints or floats. Note that division between ints is integer division, eg 5 / 2 = 2. If we want the remainder, we need the modulo (or mod) operator. In C++ that would be 5 % 2. We can do multiple assignment with y = x = 7 since the return value of an assignment is the right-hand side value. Conventions - camelCase or snake_case? camelCase is more common for C++ so I'll go with that.
Module 4 : Data Types and Expressions - Part 2
Floats and doubles are for real numbers (potential fractionals). Also fixed size: floats are 4 bytes and doubles are 8 bytes. Doubles have 64 bits, so 2^64 distinct bit patterns (the representable range is far larger, since some bits encode an exponent). How do we represent the decimal place? Just gotta learn from Fabien Sanglard - the IEEE-754 spec. We call it floating point because the decimal point can "float around". We write float literals as 6.0f, 0.85f, 3.1415f etc. To mark a literal as a float we add an f, otherwise it'll be a double.
Write a program that reads from the user the radius of a circle, then calculates and prints the area (radius * radius * PI).

    #include <iostream>
    #include <cmath>
    using namespace std;

    int main() {
        cout << "Enter the radius of your circle: " << endl;
        double radius;
        cin >> radius;
        double area = radius * radius * M_PI;
        cout << "the area of your circle is " << area << endl;
        return 0;
    }

Sometimes we need type casting - when we mix types we need to settle on one. We need to convert the data's representation from one type to another. We type cast with the syntax VARIABLE = (newType) VALUE, eg double y = (double) 6; or int x = (int) 3.14f;. If we mix types in an expression, like 5 / 3.0, then the compiler will try to cast both operands to an appropriate type. This is an implicit cast and maintains accuracy; that is, int -> double is safe, but double -> int loses information (the fraction is truncated).
Module 4 : Data Types and Expressions - Part 3
The char type is for representing characters. It's stored as 1 byte, since 2^8 (256) values is enough to represent lower case, upper case, digits and symbols. We call these representations the ASCII values. ASCII numbers are base-ten values that get converted to binary for storage.
Write a program that takes a char and returns the ASCII value.

    #include <iostream>
    using namespace std;

    int main() {
        char letter;
        cout << "Please input one character." << endl;
        cin >> letter;
        cout << "The ASCII value is " << (int) letter << endl;
        return 0;
    }

Char literals for C++ are single-quote letters, like char x = 'a';. Double quotes are reserved for strings (std::string). We can use a backslash before chars for special characters, like \n for newline. The backslash is called the escape character. We can use arithmetic to get the next char, like (char) ('a' + 1). We can convert to uppercase as well.
Write a program that takes a letter (assume lower case) and returns its upper case letter.

    #include <iostream>
    using namespace std;

    int main() {
        char lowerCase;
        cout << "Please enter a letter. " << endl;
        cin >> lowerCase;
        int asciiCode = lowerCase - 32;
        cout << asciiCode << endl;
        char upperCase = (char) asciiCode;
        cout << "The upper case of your letter is " << upperCase << "." << endl;
        return 0;
    }

The video uses an offset, closer to:

    #include <iostream>
    using namespace std;

    int main() {
        char lowerCase;
        cout << "Please enter a letter. " << endl;
        cin >> lowerCase;
        int offset = lowerCase - 'a';
        char upperCase = (char) ('A' + offset);
        cout << "The upper case of your letter is " << upperCase << "." << endl;
        return 0;
    }

The string class is not built into C++, but needs the #include <string> directive. The representation is a sequence of characters. Literals are defined with double quotes. We can concatenate with +, as long as at least one operand is a std::string (two raw literals cannot be added), e.g. std::string x = std::string("Hello ") + "world!"; The bool data type represents true or false, or boolean logic.
Takes 1 byte (not 1 bit), so any non-zero value is true. That is, 10000000 and 00001000 and 01100000 are all true; only all-zeroes is false. Its operators are our usual logic operators. First is not, or !. Next is conjunction or and, with the binary operator &&. Last is disjunction or or, with the binary operator ||. Atomic boolean expressions are true or false. We can use operators to make compound boolean expressions. We have arithmetic expressions compared with relational operators: >, >=, <, <=, ==.
Problem Solving with C++ sections 2.1 - 2.3
Variables and assignment
We use variables to name and store data. A C++ variable can hold all types of data, from ints to bools to custom classes. The data itself is called the variable's value. cin >> variable_name takes input from stdin and assigns it to the variable on the right side of >>. In practice, variables are implemented as memory locations by the compiler. The name of the variable is called the identifier (we learned this from Bob Nystrom). Identifiers MUST start with a letter or underscore, and the rest of the characters can be letters, digits or underscores.

    int _ignored;          // valid
    std::string firstName; // valid
    bool 2_fast_2_furious; // invalid: starts with a digit

Keywords or reserved words are words disallowed for variable use, eg class. Every variable must be declared, and you can declare multiple variables with the same type with commas, like std::string firstName, lastName;. The first part of a declaration is a type name. Variable declarations are used to let the compiler know how to encode the data and how much memory to allocate. To give a variable a value, we use assignment statements. The parts are the type name, variable identifier, assignment operator, value, and then a semicolon. For example, int age = 30;. The value can also be an expression which will get evaluated prior to assignment. For example, int moonWeight = earthWeight / earthGravity * moonGravity;. Some values cannot change (like the literal 6).
We call these constants. A variable without an assigned value is called an uninitialized variable. When variables are uninitialized, their values will be whatever was in that memory location prior to the program running. That is, if Microsoft Word stuffed some file into that memory location and my uninitialized variable is assigned that location, its value will be a set of bits that correspond to the file.

An input stream is… the stream of input flowing into the program. Okay, not a good definition. But specifically, we use the word stream because we don't worry about the source and only act on the incoming data itself. We also have output streams; cout uses the insertion operator <<.

An include directive looks like #include <LIBRARY> and tells the system to use code from that file — akin to copying the code over into the current file. C++ also has namespaces, and we can open a namespace for local resolution with a using directive like using namespace std;. Namespaces exist to let methods with the same name not clash.

The backslash character \ lets users enter special characters in strings, such as \n for a newline or \r\n for "carriage return line feed".

The double and float types allow decimal points, but the point must be neither the first nor last character (0.6, 1.0, 3.14f). There is a way to format cout to print, for example, 2 decimals: cout.setf(ios::fixed); cout.precision(2);

When using cin to take in multiple inputs, like cin >> var1 >> var2, the input from stdin must be separated by at least one whitespace. It's considered good form to echo the input — write the input back to stdout at some point before the program terminates. This allows the user to validate their input in case of weird or unexpected values.

Since doubles and floats have finite space, their values are approximations. C++11 brought in the auto keyword for type inference, used as auto whatIsThis = 5;

Boolean expressions are expressions that return true or false. Boolean operations are relational operations like ==, >, <=.
Boolean operators in C++ are the unary ! (not, or negation) and the binary comparison and logic operators:

!  — Negation, like !ateCheese
== — Equality check
&& — boolean AND, conjunction
|| — boolean OR, disjunction
>  — Greater than
>= — Greater than or equals
<  — Less than
<= — Less than or equals

Boolean logic in C++ follows the laws of propositions; e.g. De Morgan's law says that !(x && y) is equivalent to (!x || !y). Precedence rules apply: unary operators have the highest precedence, boolean operators the lowest, and everything else falls in the middle.

C++ boolean operations can "short circuit evaluate" — that is, for x || y, if x is true then y doesn't evaluate and the whole expression returns true. For x && y, if x is false, the expression returns false before y evaluates. C++ will coerce ints to work as bools, where any non-zero value is true.

Enum types are enumerations over constants. If values are not specified, they are monotonically increasing ints starting from zero. For example, enum Direction { NORTH, SOUTH, WEST, EAST = 500 };. There is a stronger version called a strong enum or enum class, defined as enum class Days { Mon, Tues, Wed };, that does not coerce to ints.

Discrete Math Section 1.11

We create a set of hypotheses we assume are true. An argument is a series of propositions — the hypotheses — followed by a final proposition called the conclusion. An argument is valid if the conclusion is true whenever all hypotheses are true; otherwise it is invalid. The notation is: p1…pn are the hypotheses, and c is the conclusion. The symbol ∴ is read as "therefore". If (p1 ∧ p2 ∧ … ∧ pn) → c is a tautology, then the argument is valid. Order doesn't matter for hypotheses, due to the commutative law of conjunction.

The way we prove validity is with truth tables. We look at each row in which ALL hypotheses are true. If the conclusion is true in each of those rows, the argument is valid.
If there is a row in which all hypotheses are true but the conclusion is false, then the argument is invalid.

Consider (p → q) ∧ (p ∨ q) ∴ q

p q | p → q | p ∨ q
T T |   T   |   T
T F |   F   |   T
F T |   T   |   T
F F |   T   |   F

The only rows in which both hypotheses are true are rows 1 and 3. In both of those rows q is true, so whenever all hypotheses are true the conclusion is true, and the argument is valid.

What if we went with the below?

¬p ∧ (p → q) ∴ ¬q

p q | ¬p | p → q
T T | F  |   T
T F | F  |   F
F T | T  |   T
F F | T  |   T

Both hypotheses are true in rows 3 and 4, but in row 3 the conclusion ¬q is false, so the argument is invalid.

The hypotheses and conclusion can be expressed in English as well, for something like "It is raining today AND if it is not raining then I will not ride my bike, therefore I will ride my bike". Note that, in English, we might use propositions with known truth values, e.g. "7 is an odd number", but each hypothesis must be treated as a proposition, and the argument must be valid for all combinations of truth values of the hypotheses.

Discrete Math Section 1.12

There are rules of inference that we can use for hypotheses that we know to be true. Treat the slash (/) as a newline in a normally formatted argument (newline + conjunction):

Rule of Inference — Name
p / (p → q) ∴ q — Modus ponens
¬q / (p → q) ∴ ¬p — Modus tollens
p ∴ p ∨ q — Addition
p ∧ q ∴ p — Simplification
p / q ∴ p ∧ q — Conjunction
p → q / q → r ∴ p → r — Hypothetical Syllogism
p ∨ q / ¬p ∴ q — Disjunctive Syllogism
p ∨ q / ¬p ∨ r ∴ q ∨ r — Resolution

The process of applying the rules of inference and laws of propositional logic is called a logical proof. A logical proof consists of steps pairing propositions with justifications. If the proposition in a step is a hypothesis, the justification is "Hypothesis"; otherwise it must follow from a previous step by applying one law of logic or rule of inference.
The structure of a proof is a list of numbered steps: the left column is a proposition, and the right column is its justification — either "Hypothesis", or a rule of inference together with the line numbers it takes as input.

From the Rosen example:

w: It is windy
r: It is raining
c: The game will be canceled

Then prove: "If it is raining or windy or both, the game will be canceled. The game is not canceled. Therefore, it is not windy." The hypotheses are (r ∨ w) → c and ¬c; the conclusion is ¬w.

# | Proposition | Rule
1 | (r ∨ w) → c | Hypothesis
2 | ¬c | Hypothesis
3 | ¬(r ∨ w) | Modus Tollens, 1, 2
4 | ¬r ∧ ¬w | De Morgan's Law, 3
5 | ¬w ∧ ¬r | Commutative Law, 4
6 | ¬w | Simplification, 5

Discrete Math Section 1.13

We can apply the rules of inference to quantified statements, but we need to do so by substituting in one element from the domain — e.g. "every employee who works hard got a bonus, Linda got a bonus, therefore some employee works hard".

When an element has no special distinguishing characteristics from other elements in the domain, we call it arbitrary. If it can be distinguished in some way, we call it particular (e.g. 3 is odd, so 3 is a particular element). If the element is defined in a hypothesis, it is always a particular element, and the definition of that element in the proof is labeled "Hypothesis". If an element is introduced for the first time in the proof, the definition is labeled "Element definition" and must specify whether the element is arbitrary or particular.

There are rules called existential instantiation and universal instantiation to replace a quantified variable with an element of the domain. To replace an element of the domain with a quantified variable, we use existential generalization and universal generalization. This only works for non-nested quantifiers.

Universal instantiation
c is an element (arbitrary or particular)
∀x P(x)
∴ P(c)

"Every student in the class passed. Sam is a student in the class. Therefore Sam passed the class."
Universal Generalization
c is an arbitrary element
P(c)
∴ ∀x P(x)

"Let c be an arbitrary integer. c ≤ c^2. Therefore every integer is less than or equal to its square."

Existential Instantiation
∃x P(x)
∴ (c is a particular element) ∧ P(c)

"There is an integer that is equal to its square. Call that integer c; then c = c^2."

Note: each use of existential instantiation must define a new element with its own name (e.g. "c" or "d").

Existential Generalization
c is an element (arbitrary or particular)
P(c)
∴ ∃x P(x)

"Sam is a particular student in the class. Sam completed the assignment. Therefore there exists a student in the class who completed the assignment."

IMPORTANT — for every use of existential instantiation, we MUST use a different variable letter in order to avoid invalid proofs. E.g. if we say "c is a particular element", then later, if we need another element, we must not use c again.

We can show an argument with quantified statements to be invalid by defining a domain and predicates which make all hypotheses true but the conclusion false.

Discrete Math Section 2.1

We want to prove things in mathematics. A theorem is a statement that can be proven to be true. A proof consists of a series of steps, each of which follows logically from assumptions or previously proven statements, and the final step is the statement of the theorem. We might make use of axioms — statements we take to be true. A theorem might be something like "Every positive integer is less than or equal to its square."

How do we know where to start when writing proofs? We can apply known patterns to help break the problem down. Often we start by playing with different elements in the domain; rewriting the statement into precise mathematical language can also help. Most theorems make assertions about all elements in a domain and are therefore universal statements, although the theorem may not explicitly state it as such.
The first step is to name a generic object and prove the statement for that object. For a universal statement, checking every element is known as a proof by exhaustion. This is doable for small domains, e.g. { -1, 0, 1 }. For larger domains, it is easier to invalidate a statement by finding a counterexample.

Discrete Math Section 2.2

Many theorems take the form of a conditional where the conclusion follows from a set of hypotheses. These can be expressed as p → c, where p is the conjunction of all hypotheses and c is the conclusion. In a direct proof, we assume p to be true, and the conclusion c is proven as a direct result of the hypotheses. These theorems can also be universally quantified, like "for every integer x, if x is odd then x^2 is odd".

Discrete Math Section 2.3

A proof by contrapositive proves a conditional theorem p → c by showing that ¬c → ¬p: that is, ¬c is assumed to be true, and ¬p is proven as a result — assume the negated conclusion and show the negated hypothesis.

An example. Take the theorem "For every integer n, if n^2 is odd then n is odd." With D(n) meaning "n is odd", this is ∀n (D(n^2) → D(n)) over the domain of all integers. The contrapositive proof starts with an arbitrary n, assumes D(n) is false, and proves that D(n^2) is false — that is, it proves ∀n (¬D(n) → ¬D(n^2)).

Another example: "If 3n + 7 is odd, then n is even." To prove by contrapositive, we assume n is odd and show 3n + 7 is even. Any even number can be written 2k and any odd number 2k + 1 for some integer k. Substituting n = 2k + 1 gives 3(2k + 1) + 7 = 6k + 3 + 7 = 6k + 10 = 2(3k + 5). Since 3k + 5 is an integer, 2(3k + 5) has the form 2m and is therefore even. So if n is odd then 3n + 7 is even, which proves the contrapositive.

Another example. Theorem: For every real number x, if x^3 + 2x + 1 ≤ 0, then x ≤ 0.

1. Negate the conclusion and assume x > 0.
2. Since x > 0, 2x > 0 and x^3 > 0.
3. Since x^3 > 0 and 2x > 0 and 1 > 0, the sum is > 0.
4.
x^3 + 2x + 1 > 0. This negates the hypothesis, so the contrapositive holds and the theorem is true.

Discrete Math Section 2.4

A proof by contradiction assumes the theorem is false and then derives an inconsistency, showing the assumption must be incorrect. If t is the statement of the theorem, the proof begins with the assumption ¬t and leads to a conclusion r ∧ ¬r for some proposition r. If the theorem being proven has the form p → q, then the beginning assumption is p ∧ ¬q, which is logically equivalent to ¬(p → q). A proof by contradiction is sometimes called an indirect proof. The proof by contrapositive method is a special case of proof by contradiction.

Example. Proof by contradiction that sqrt(2) is irrational:

1. Assume the negation: sqrt(2) is rational.
2. Express sqrt(2) as n / d for integers n and d, where d ≠ 0 and no integer > 1 divides both n and d (the fraction is in lowest terms).
3. Since sqrt(2) = n / d, squaring both sides gives 2 = n^2 / d^2, and multiplying both sides by d^2 gives 2d^2 = n^2.
4. Since n^2 equals 2 times an integer, n^2 is even, and thus n is even.
5. If n is even, then n = 2k for some integer k, so n^2 = 4k^2. Substituting into 2d^2 = n^2 gives 2d^2 = 4k^2, so d^2 = 2k^2 and d^2 is even.
6. If d^2 is even, then d is even.
7. But if n and d are both even, then 2 divides both n and d, contradicting the assumption that the fraction was in lowest terms. Therefore sqrt(2) cannot be written as a ratio of integers, so it is irrational.

Discrete Math Section 2.5

A proof by cases takes a universally quantified statement, breaks the domain into classes, and proves the statement for each class. Every value in the domain must belong to at least one of the classes. For example, the theorem "for every integer x, x^2 - x is even" can be proven in two cases: x is odd or x is even.
16-Bit Complex Vector Prepare Functions

group vect_complex_s16_prepare_api

void vect_complex_s16_macc_prepare(exponent_t *new_acc_exp, right_shift_t *acc_shr, right_shift_t *bc_sat, const exponent_t acc_exp, const exponent_t b_exp, const exponent_t c_exp, const headroom_t acc_hr, const headroom_t b_hr, const headroom_t c_hr)

Obtain the output exponent and shifts needed by vect_complex_s16_macc().

This function is used in conjunction with vect_complex_s16_macc() to perform an element-wise multiply-accumulate of complex 16-bit BFP vectors.

This function computes new_acc_exp, acc_shr and bc_sat, which are selected to maximize precision in the resulting accumulator vector without causing saturation of final or intermediate values. Normally the caller will pass these outputs to their corresponding inputs of vect_complex_s16_macc().

acc_exp is the exponent associated with the accumulator mantissa vector \(\bar a\) prior to the operation, whereas new_acc_exp is the exponent corresponding to the updated accumulator vector. b_exp and c_exp are the exponents associated with the complex input mantissa vectors \(\bar b\) and \(\bar c\) respectively. acc_hr, b_hr and c_hr are the headrooms of \(\bar a\), \(\bar b\) and \(\bar c\) respectively. If the headroom of any of these vectors is unknown, it can be obtained by calling vect_complex_s16_headroom(). Alternatively, the value 0 can always be safely used (but may result in reduced precision).

Adjusting Output Exponents

If a specific output exponent desired_exp is needed for the result (e.g.
for emulating fixed-point arithmetic), the acc_shr and bc_sat produced by this function can be adjusted according to the following:

// Presumed to be set somewhere
exponent_t acc_exp, b_exp, c_exp;
headroom_t acc_hr, b_hr, c_hr;
exponent_t desired_exp;
...

// Call prepare
right_shift_t acc_shr, bc_sat;
vect_complex_s16_macc_prepare(&acc_exp, &acc_shr, &bc_sat, acc_exp, b_exp, c_exp, acc_hr, b_hr, c_hr);

// Modify results
right_shift_t mant_shr = desired_exp - acc_exp;
acc_exp += mant_shr;
acc_shr += mant_shr;
bc_sat += mant_shr;

// acc_shr and bc_sat may now be used in a call to vect_complex_s16_macc()

When applying the above adjustment, the following conditions should be maintained:

○ bc_sat >= 0 (bc_sat is an unsigned right-shift)
○ acc_shr > -acc_hr (Shifting any further left may cause saturation)

It is up to the user to ensure any such modification does not result in saturation or unacceptable loss of precision.

Parameters:

○ new_acc_exp – [out] Exponent associated with output mantissa vector \(\bar a\) (after macc)
○ acc_shr – [out] Signed arithmetic right-shift used for \(\bar a\) in vect_complex_s16_macc()
○ bc_sat – [out] Unsigned arithmetic right-shift applied to the product of elements \(b_k\) and \(c_k\) in vect_complex_s16_macc()
○ acc_exp – [in] Exponent associated with input mantissa vector \(\bar a\) (before macc)
○ b_exp – [in] Exponent associated with input mantissa vector \(\bar b\)
○ c_exp – [in] Exponent associated with input mantissa vector \(\bar c\)
○ acc_hr – [in] Headroom of input mantissa vector \(\bar a\) (before macc)
○ b_hr – [in] Headroom of input mantissa vector \(\bar b\)
○ c_hr – [in] Headroom of input mantissa vector \(\bar c\)

void vect_complex_s16_mul_prepare(exponent_t *a_exp, right_shift_t *a_shr, const exponent_t b_exp, const exponent_t c_exp, const headroom_t b_hr, const headroom_t c_hr)

Obtain the output exponent and output shift used by vect_complex_s16_mul() and vect_complex_s16_conj_mul().
This function is used in conjunction with vect_complex_s16_mul() to perform a complex element-wise multiplication of two complex 16-bit BFP vectors.

This function computes a_exp and a_shr. a_exp is the exponent associated with mantissa vector \(\bar a\), and must be chosen to be large enough to avoid overflow when elements of \(\bar a\) are computed. To maximize precision, this function chooses a_exp to be the smallest exponent known to avoid saturation (see the exception below). The a_exp chosen by this function is derived from the exponents and headrooms associated with the input vectors.

a_shr is the shift parameter required by vect_complex_s16_mul() to achieve the chosen output exponent a_exp.

b_exp and c_exp are the exponents associated with the input mantissa vectors \(\bar b\) and \(\bar c\) respectively. b_hr and c_hr are the headrooms of \(\bar b\) and \(\bar c\) respectively. If the headroom of \(\bar b\) or \(\bar c\) is unknown, they can be obtained by calling vect_complex_s16_headroom(). Alternatively, the value 0 can always be safely used (but may result in reduced precision).

Adjusting Output Exponents

If a specific output exponent desired_exp is needed for the result (e.g. for emulating fixed-point arithmetic), the a_shr produced by this function can be adjusted according to the following:

exponent_t desired_exp = ...; // Value known a priori
right_shift_t new_a_shr = a_shr + (desired_exp - a_exp);

When applying the above adjustment, be aware that using smaller values than strictly necessary for a_shr can result in saturation, and using larger values may result in unnecessary underflows or loss of precision.

○ Using the outputs of this function, an output mantissa which would otherwise be INT16_MIN will instead saturate to -INT16_MAX. This is due to the symmetric saturation logic employed by the VPU and is a hardware feature.
This is a corner case which is usually unlikely and results in 1 LSb of error when it occurs.

Parameters:

○ a_exp – [out] Exponent associated with output mantissa vector \(\bar a\)
○ a_shr – [out] Unsigned arithmetic right-shift for \(\bar b\) used by vect_complex_s16_mul()
○ b_exp – [in] Exponent associated with input mantissa vector \(\bar b\)
○ c_exp – [in] Exponent associated with input mantissa vector \(\bar c\)
○ b_hr – [in] Headroom of input mantissa vector \(\bar b\)
○ c_hr – [in] Headroom of input mantissa vector \(\bar c\)

void vect_complex_s16_real_mul_prepare(exponent_t *a_exp, right_shift_t *a_shr, const exponent_t b_exp, const exponent_t c_exp, const headroom_t b_hr, const headroom_t c_hr)

Obtain the output exponent and output shift used by vect_complex_s16_real_mul().

This function is used in conjunction with vect_complex_s16_real_mul() to perform an element-wise multiplication of a complex 16-bit BFP vector by a real 16-bit vector.

This function computes a_exp and a_shr. a_exp is the exponent associated with mantissa vector \(\bar a\), and must be chosen to be large enough to avoid overflow when elements of \(\bar a\) are computed. To maximize precision, this function chooses a_exp to be the smallest exponent known to avoid saturation (see the exception below). The a_exp chosen by this function is derived from the exponents and headrooms associated with the input vectors.

a_shr is the shift parameter required by vect_complex_s16_real_mul() to achieve the chosen output exponent a_exp.

b_exp and c_exp are the exponents associated with the input mantissa vectors \(\bar b\) and \(\bar c\) respectively. b_hr and c_hr are the headrooms of \(\bar b\) and \(\bar c\) respectively. If the headroom of \(\bar b\) or \(\bar c\) is unknown, they can be obtained by calling vect_complex_s16_headroom(). Alternatively, the value 0 can always be safely used (but may result in reduced precision).
Adjusting Output Exponents

If a specific output exponent desired_exp is needed for the result (e.g. for emulating fixed-point arithmetic), the a_shr produced by this function can be adjusted according to the following:

exponent_t desired_exp = ...; // Value known a priori
right_shift_t new_a_shr = a_shr + (desired_exp - a_exp);

When applying the above adjustment, be aware that using smaller values than strictly necessary for a_shr can result in saturation, and using larger values may result in unnecessary underflows or loss of precision.

○ Using the outputs of this function, an output mantissa which would otherwise be INT16_MIN will instead saturate to -INT16_MAX. This is due to the symmetric saturation logic employed by the VPU and is a hardware feature. This is a corner case which is usually unlikely and results in 1 LSb of error when it occurs.

Parameters:

○ a_exp – [out] Exponent associated with output mantissa vector \(\bar a\)
○ a_shr – [out] Unsigned arithmetic right-shift for \(\bar a\) used by vect_complex_s16_real_mul()
○ b_exp – [in] Exponent associated with input mantissa vector \(\bar b\)
○ c_exp – [in] Exponent associated with input mantissa vector \(\bar c\)
○ b_hr – [in] Headroom of input mantissa vector \(\bar b\)
○ c_hr – [in] Headroom of input mantissa vector \(\bar c\)

void vect_complex_s16_squared_mag_prepare(exponent_t *a_exp, right_shift_t *a_shr, const exponent_t b_exp, const headroom_t b_hr)

Obtain the output exponent and input shift used by vect_complex_s16_squared_mag().

This function is used in conjunction with vect_complex_s16_squared_mag() to compute the squared magnitude of each element of a complex 16-bit BFP vector.

This function computes a_exp and a_shr. a_exp is the exponent associated with mantissa vector \(\bar a\), and is chosen to maximize precision when elements of \(\bar a\) are computed.
The a_exp chosen by this function is derived from the exponent and headroom associated with the input vector.

a_shr is the shift parameter required by vect_complex_s16_squared_mag() to achieve the chosen output exponent a_exp.

b_exp is the exponent associated with the input mantissa vector \(\bar b\). b_hr is the headroom of \(\bar b\). If the headroom of \(\bar b\) is unknown, it can be calculated using vect_complex_s16_headroom(). Alternatively, the value 0 can always be safely used (but may result in reduced precision).

Adjusting Output Exponents

If a specific output exponent desired_exp is needed for the result (e.g. for emulating fixed-point arithmetic), the a_shr produced by this function can be adjusted according to the following:

exponent_t a_exp;
right_shift_t a_shr;
vect_complex_s16_squared_mag_prepare(&a_exp, &a_shr, b_exp, b_hr);

exponent_t desired_exp = ...; // Value known a priori
a_shr = a_shr + (desired_exp - a_exp);
a_exp = desired_exp;

When applying the above adjustment, the following condition should be maintained: using larger values than strictly necessary for a_shr may result in unnecessary underflows or loss of precision.

Parameters:

○ a_exp – [out] Output exponent associated with output mantissa vector \(\bar a\)
○ a_shr – [out] Unsigned arithmetic right-shift for \(\bar a\) used by vect_complex_s16_squared_mag()
○ b_exp – [in] Exponent associated with input mantissa vector \(\bar b\)
○ b_hr – [in] Headroom of input mantissa vector \(\bar b\)
How to make a Pythagoras theorem working model TLM – DIY

Pythagoras theorem working model TLM – DIY – maths project – simple and easy | craftpiller

#pythagorastheorem #workingmodel #workingproject #tlm #diy #mathstlm #craftpiller

A working model of the Pythagorean theorem can be created using materials such as cardboard, paper, or wood. The model demonstrates the theorem as follows:

1. Create a right triangle: using cardboard or paper, make a triangular shape with one 90-degree angle.
2. Measure the sides: using a ruler, measure the lengths of the two shorter sides and write them down as a and b.
3. Calculate the hypotenuse: using the formula c^2 = a^2 + b^2, calculate the length of the longest side (the hypotenuse), c = sqrt(a^2 + b^2).

This working model demonstrates how the Pythagorean theorem can be used to calculate the length of one side of a right triangle, given the lengths of the other two sides.

Step by Step Video on Pythagoras theorem working model tlm
Independence of Events in context of part base rate

21 Sep 2024

Title: The Independence of Events: A Conceptual Framework for Understanding Part-Based Rates

In probability theory, the independence of events is a fundamental concept with far-reaching implications for understanding various statistical phenomena. This article examines the independence of events in the context of part-based rates, highlighting its significance and mathematical formulation.

The independence of events is a crucial concept in probability theory: if two or more events are independent, then the occurrence or non-occurrence of one event does not affect the probability of the others. In this article, we focus on the part-based rate, a statistical measure used to quantify the relationship between two variables.

Definition and Mathematical Formulation:

Let A and B be two events with probabilities P(A) and P(B), respectively. The independence of events A and B can be mathematically formulated as:

P(A ∩ B) = P(A) × P(B)

where P(A ∩ B) represents the probability of both events A and B occurring.

Part-Based Rate:

The part-based rate, denoted by p, is a statistical measure that quantifies the relationship between two variables. It can be mathematically formulated as:

p = P(A | B) = P(A ∩ B) / P(B)

where P(A | B) represents the conditional probability of event A given event B.

Independence of Events in Context of Part-Based Rate:

In the context of part-based rates, if A and B are independent, then:

P(A | B) = P(A) × P(B) / P(B)

Simplifying the above equation, we get:

P(A | B) = P(A)

This result indicates that if events A and B are independent, then the conditional probability of event A given event B is equal to the unconditional probability of event A. In conclusion, this article has provided an examination of the independence of events in the context of part-based rates.
The mathematical formulation of the independence of events has been presented, highlighting its significance and implications for understanding various statistical phenomena.
source/fuzz/shrinker.h - SwiftShader

// Copyright (c) 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#ifndef SOURCE_FUZZ_SHRINKER_H_
#define SOURCE_FUZZ_SHRINKER_H_

#include <functional>
#include <memory>
#include <vector>

#include "source/fuzz/protobufs/spirvfuzz_protobufs.h"
#include "spirv-tools/libspirv.hpp"

namespace spvtools {
namespace fuzz {

// Shrinks a sequence of transformations that lead to an interesting SPIR-V
// binary to yield a smaller sequence of transformations that still produce an
// interesting binary.
class Shrinker {
 public:
  // Possible statuses that can result from running the shrinker.
  enum class ShrinkerResultStatus {
    kComplete,
    kFailedToCreateSpirvToolsInterface,
    kInitialBinaryInvalid,
    kInitialBinaryNotInteresting,
    kReplayFailed,
  };

  struct ShrinkerResult {
    ShrinkerResultStatus status;
    std::vector<uint32_t> transformed_binary;
    protobufs::TransformationSequence applied_transformations;
  };

  // The type for a function that will take a binary, |binary|, and return true
  // if and only if the binary is deemed interesting. (The function also takes
  // an integer argument, |counter|, that will be incremented each time the
  // function is called; this is for debugging purposes).
  //
  // The notion of "interesting" depends on what properties of the binary or
  // tools that process the binary we are trying to maintain during shrinking.
  using InterestingnessFunction = std::function<bool(
      const std::vector<uint32_t>& binary, uint32_t counter)>;

  Shrinker(spv_target_env target_env, MessageConsumer consumer,
           const std::vector<uint32_t>& binary_in,
           const protobufs::FactSequence& initial_facts,
           const protobufs::TransformationSequence& transformation_sequence_in,
           const InterestingnessFunction& interestingness_function,
           uint32_t step_limit, bool validate_during_replay,
           spv_validator_options validator_options);

  // Disables copy/move constructor/assignment operations.
  Shrinker(const Shrinker&) = delete;
  Shrinker(Shrinker&&) = delete;
  Shrinker& operator=(const Shrinker&) = delete;
  Shrinker& operator=(Shrinker&&) = delete;

  // Requires that when |transformation_sequence_in_| is applied to |binary_in_|
  // with initial facts |initial_facts_|, the resulting binary is interesting
  // according to |interestingness_function_|.
  //
  // If shrinking succeeded -- possibly terminating early due to reaching the
  // shrinker's step limit -- an associated result status is returned together
  // with a subsequence of |transformation_sequence_in_| that, when applied
  // to |binary_in_| with initial facts |initial_facts_|, produces a binary
  // that is also interesting according to |interestingness_function_|; this
  // binary is also returned.
  //
  // If shrinking failed for some reason, an appropriate result status is
  // returned together with an empty binary and empty transformation sequence.
  ShrinkerResult Run();

 private:
  // Returns the id bound for the given SPIR-V binary, which is assumed to be
  // valid.
  uint32_t GetIdBound(const std::vector<uint32_t>& binary) const;

  // Target environment.
  const spv_target_env target_env_;

  // Message consumer that will be invoked once for each message communicated
  // from the library.
  MessageConsumer consumer_;

  // The binary to which transformations are to be applied.
  const std::vector<uint32_t>& binary_in_;

  // Initial facts known to hold in advance of applying any transformations.
  const protobufs::FactSequence& initial_facts_;

  // The series of transformations to be shrunk.
  const protobufs::TransformationSequence& transformation_sequence_in_;

  // Function that decides whether a given module is interesting.
  const InterestingnessFunction& interestingness_function_;

  // Step limit to decide when to terminate shrinking early.
  const uint32_t step_limit_;

  // Determines whether to check for validity during the replaying of
  // transformations.
  const bool validate_during_replay_;

  // Options to control validation.
  spv_validator_options validator_options_;
};

}  // namespace fuzz
}  // namespace spvtools

#endif  // SOURCE_FUZZ_SHRINKER_H_
10: The Chemical Bond
The Hydrogen Molecule

This four-particle system, two nuclei plus two electrons, is described by the Hamiltonian

\[ \hat{H} = -\frac{1}{2} \nabla^2_1 -\frac{1}{2} \nabla^2_2 -\frac{1}{2M_A} \nabla^2_A -\frac{1}{2M_B} \nabla^2_B -\frac{1}{r_{1A}} -\frac{1}{r_{2B}} -\frac{1}{r_{2A}} -\frac{1}{r_{1B}} +\frac{1}{r_{12}} +\frac{1}{R} \label{1}\]

in terms of the coordinates shown in Figure \(\PageIndex{1}\).

Figure \(\PageIndex{1}\): Coordinates used for the hydrogen molecule.

We note first that the masses of the nuclei are much greater than those of the electrons: M[proton] = 1836 atomic units, compared to m[electron] = 1 atomic unit. Therefore nuclear kinetic energies will be negligibly small compared to those of the electrons. In accordance with the Born-Oppenheimer approximation, we can first consider the electronic Schrödinger equation

\[\hat{H}_{elec} \psi(r_1,r_2,R) = E_{elec}(R) \psi(r_1,r_2,R) \label{2}\]

where

\[ \hat{H}_{elec} = -\frac{1}{2} \nabla^2_1 -\frac{1}{2} \nabla^2_2 -\frac{1}{r_{1A}} -\frac{1}{r_{2B}} -\frac{1}{r_{2A}} -\frac{1}{r_{1B}} +\frac{1}{r_{12}} +\frac{1}{R} \label{3}\]

The internuclear separation R occurs as a parameter in this equation, so that the Schrödinger equation must, in concept, be solved for each value of the internuclear distance R. A typical result for the energy of a diatomic molecule as a function of R is shown in Figure \(\PageIndex{2}\). For a bound state, the energy minimum occurs at R = R[e], known as the equilibrium internuclear distance.
The depth of the potential well at R[e] is called the binding energy or dissociation energy D[e]. For the H[2] molecule, D[e] = 4.746 eV and R[e] = 1.400 bohr = 0.7406 Å. Note that as R → 0, E(R) → \(\infty\), since the 1/R nuclear repulsion becomes dominant.

Figure \(\PageIndex{2}\): Energy curves for a diatomic molecule.

The more massive nuclei move much more slowly than the electrons. From the viewpoint of the nuclei, the electrons adjust almost instantaneously to any changes in the internuclear distance. The electronic energy E[elec](R) therefore plays the role of a potential energy in the Schrödinger equation for nuclear motion

\[ \left\{ -\frac{1}{2M_A} \nabla^2_A -\frac{1}{2M_B} \nabla^2_B + V(R)\right\} \chi (r_A,r_B) = E \chi (r_A,r_B) \label{4}\]

where

\[ V(R) = E_{elec}(R) \label{5}\]

from solution of Equation \(\ref{2}\). Solutions of Equation \(\ref{4}\) determine the vibrational and rotational energies of the molecule. These will be considered elsewhere. For the present, we are interested in obtaining the electronic energy from Equations \(\ref{2}\) and \(\ref{3}\). We will thus drop the subscript "elec" on \(\hat{H}\) and E(R) for the remainder of this chapter.

The first quantum-mechanical account of chemical bonding is due to Heitler and London in 1927, only one year after the Schrödinger equation was proposed. They reasoned that, since the hydrogen molecule H[2] is formed from a combination of hydrogen atoms A and B, a first approximation to its electronic wavefunction might be

\[ \psi(r_1,r_2) = \psi_{1s} (r_{1A})\psi_{1s} (r_{2B}) \label{6}\]

Substituting this function into the variational integral

\[ \tilde{E}(R) = \frac{\int{ \psi \hat{H} \psi d\tau}}{\int{\psi^2 d\tau}} \label{7}\]

the value R[e] \(\approx\) 1.7 bohr was obtained, indicating that the hydrogen atoms can indeed form a molecule. However, the calculated binding energy, D[e] \(\approx\) 0.25 eV, is much too small to account for the strongly-bound H[2] molecule.
Heitler and London proposed that it was necessary to take into account the exchange of electrons, in which the electron labels in Equation \(\ref{6}\) are reversed. The properly symmetrized function

\[ \psi(r_1, r_2) = \psi_{1s} (r_{1A})\psi_{1s} (r_{2B}) +\psi_{1s} (r_{1B})\psi_{1s} (r_{2A}) \label{8}\]

gave a much more realistic binding energy of 3.20 eV, with R[e] = 1.51 bohr. We have already used exchange symmetry (and antisymmetry) in our treatment of the excited states of helium. The variational function (Equation \(\ref{8}\)) was improved (Wang, 1928) by replacing the hydrogen 1s functions \(e^{-r}\) by \(e^{-\zeta r}\). The optimized value \(\zeta\) = 1.166 gave a binding energy of 3.782 eV. The quantitative breakthrough was the computation of James and Coolidge (1933). Using a 13-parameter function of the form

\[ \psi(r_1, r_2) = e^{- \alpha ( \xi_{1}+\xi_{2})} \times \mbox{polynomial in } \{ \xi_{1}, \xi_{2}, \eta_{1}, \eta_{2}, \rho \}, \qquad \xi_{i} \equiv \frac{r_{iA} + r_{iB}}{R}, \quad \eta_{i} \equiv \frac{r_{iA} - r_{iB}}{R}, \quad \rho \equiv \frac{r_{12}}{R} \label{9}\]

they obtained R[e] = 1.40 bohr, D[e] = 4.720 eV. In a sense, this result provided a proof of the validity of quantum mechanics for molecules, in the same sense that Hylleraas' computation on helium was a proof for many-electron atoms.

The Valence Bond Theory

The basic idea of the Heitler-London model for the hydrogen molecule can be extended to chemical bonds between any two atoms. The orbital function (Equation \(\ref{8}\)) must be associated with the singlet spin function \(\sigma_{0,0}(1,2)\) in order that the overall wavefunction be antisymmetric. This is a quantum-mechanical realization of the concept of an electron-pair bond, first proposed by G. N. Lewis in 1916. It also explains why the electron spins must be paired, i.e., antiparallel.
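For reference, the variational energies of the symmetric and antisymmetric combinations are conventionally expressed in terms of Coulomb, exchange, and overlap integrals. In the standard textbook notation (the symbols Q, A, and S below follow the usual convention and are not defined elsewhere in this chapter):

\[ E_{\pm}(R) \approx 2E_{1s} + \frac{Q(R) \pm A(R)}{1 \pm S(R)^{2}} \]

where Q is the Coulomb integral, A the exchange integral, and S the overlap integral between the 1s orbitals centered on the two nuclei. The minus combination corresponds to a repulsive state.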
It is also permissible to combine an antisymmetric orbital function with a triplet spin function, but this will, in most cases, give a repulsive state, as shown by the red curve in Figure \(\PageIndex{2}\).

According to valence-bond theory, unpaired orbitals in the valence shells of two adjoining atoms can combine to form a chemical bond if they overlap significantly and are symmetry compatible. A \(\sigma\)-bond is cylindrically symmetrical about the axis joining the atoms. Two s AO's, two p[z] AO's, or an s and a p[z] can contribute to a \(\sigma\)-bond, as shown in Figure \(\PageIndex{3}\). The z-axis is chosen along the internuclear axis. Two p[x] or two p[y] AO's can form a \(\pi\)-bond, which has a nodal plane containing the internuclear axis. Examples of symmetry-incompatible AO's would be an s with a p[x], or a p[x] with a p[y]. In such cases the overlap integral would vanish because of cancellation of positive and negative contributions. Some possible combinations of AO's forming \(\sigma\) and \(\pi\) bonds are shown in Figure \(\PageIndex{3}\).

Bonding in the HCl molecule can be attributed to a combination of a hydrogen 1s with an unpaired 3p[z] on chlorine. In Cl[2], a sigma bond is formed between the 3p[z] AO's on each chlorine. As a first approximation, the other doubly-occupied AO's on chlorine (the inner shells and the valence-shell lone pairs) are left undisturbed.

Figure \(\PageIndex{3}\): Formation of \(\sigma\) and \(\pi\) bonds.

The oxygen atom has two unpaired 2p-electrons, say 2p[x] and 2p[y]. Each of these can form a \(\sigma\)-bond with a hydrogen 1s to make a water molecule. It would appear from the geometry of the p-orbitals that the HOH bond angle would be 90°. It is actually around 104.5°. We will resolve this discrepancy shortly. The nitrogen atom, with three unpaired 2p electrons, can form three bonds. In NH[3], each 2p-orbital forms a \(\sigma\)-bond with a hydrogen 1s.
Again 90° HNH bond angles are predicted, compared with the experimental 107°. The diatomic nitrogen molecule has a triple bond between the two atoms: one \(\sigma\) bond from combining 2p[z] AO's and two \(\pi\) bonds from the combinations of 2p[x]'s and 2p[y]'s, respectively.

Hybrid Orbitals and Molecular Geometry

To understand the bonding of carbon atoms, we must introduce additional elaborations of valence-bond theory. We can write the valence shell configuration of the carbon atom as 2s^2 2p[x]2p[y], signifying that two of the 2p orbitals are unpaired. It might appear that carbon would be divalent, and indeed the species CH[2] (carbene or methylene radical) does have a transient existence. But the chemistry of carbon is dominated by tetravalence. Evidently it is a good investment for the atom to promote one of the 2s electrons to the unoccupied 2p[z] orbital. The gain in stability attained by formation of four bonds more than compensates for the small excitation energy. It can thus be understood why the methane molecule CH[4] exists. The molecule has the shape of a regular tetrahedron, which is the result of hybridization: mixing of the s and three p orbitals to form four sp^3 hybrid atomic orbitals. Hybrid orbitals can overlap more strongly with neighboring atoms, thus producing stronger bonds. The result is four C-H \(\sigma\)-bonds, identical except for orientation in space, with 109.5° H-C-H bond angles.

Figure \(\PageIndex{4}\): Promotion and hybridization of atomic orbitals in carbon atom.

Other carbon compounds make use of two alternative hybridization schemes. The s AO can form hybrids with two of the p AO's to give three sp^2 hybrid orbitals, with one p-orbital remaining unhybridized. This accounts for the structure of ethylene (ethene): the C-H and C-C \(\sigma\)-bonds are all trigonal sp^2 hybrids, with 120° bond angles. The two unhybridized p-orbitals form a \(\pi\)-bond, which gives the molecule its rigid planar structure.
The two carbon atoms are connected by a double bond, consisting of one \(\sigma\) and one \(\pi\). The third canonical form, sp-hybridization, occurs in C-C triple bonds, for example, acetylene (ethyne). Here, two of the p AO's in carbon remain unhybridized and can form two \(\pi\)-bonds, in addition to a \(\sigma\)-bond, with a neighboring carbon. Acetylene H-C\(\equiv\)C-H is a linear molecule, since sp-hybrids are oriented 180° apart.

The deviations of the bond angles in H[2]O and NH[3] from 90° can be attributed to fractional hybridization. The angle H-O-H in water is 104.5°, while H-N-H in ammonia is 107°. It is rationalized that the p-orbitals of the central atom acquire some s-character and increase their angles towards the tetrahedral value of 109.5°. Correspondingly, the lone pair orbitals must also become hybrids. Apparently, for both water and ammonia, a model based on tetrahedral orbitals on the central atoms would be closer to the actual behavior than the original selection of s- and p-orbitals. The hybridization is driven by repulsions between the electron densities of neighboring bonds.

Valence Shell Model

An elementary, but quite successful, model for determining the shapes of molecules is the valence shell electron pair repulsion (VSEPR) theory, first proposed by Sidgwick and Powell and popularized by Gillespie. The local arrangement of atoms around each multivalent center in the molecule can be represented by AX[n-k]E[k], where X is another atom and E is a lone pair of electrons. The geometry around the central atom is then determined by the arrangement of the n electron pairs (bonding plus nonbonding), which minimizes their mutual repulsion. The following geometric configurations satisfy this condition:

n: shape
2: linear
3: trigonal planar
4: tetrahedral
5: trigonal bipyramid
6: octahedral
7: pentagonal bipyramid

The basic geometry will be distorted if the n surrounding pairs are not identical.
The relative strength of repulsion between pairs follows the order E-E > E-X > X-X. In ammonia, for example, which is NH[3]E, the shape will be tetrahedral to a first approximation. But the lone pair E will repel the N-H bonds more than they repel one another. Thus the E-N-H angles will increase from the tetrahedral value of 109.5°, causing the H-N-H angles to decrease slightly. The observed value of 107° is quite well accounted for. In water, OH[2]E[2], the opening of the E-O-E angle will likewise cause a closing of H-O-H, and again, 104.5° seems like a reasonable value.

Valence-bond theory is about 90% successful in explaining much of the descriptive chemistry of ground states. VB theory fails to account for the triplet ground state of O[2] or for the bonding in electron-deficient molecules such as diborane, B[2]H[6]. It is not very useful for excited states, and hence for spectroscopy. Many of these deficiencies are remedied by molecular orbital theory, which we take up in the next chapter.
Explaining the components of a Neural Network [AI]

Machine Learning

Artificial neural networks are part of the field of machine learning.

Connection to Biology

Neural networks in machine learning were inspired by and are based on biological neural networks. That's why you will find some shared vocabulary and biological terms that you otherwise might not expect in computer science. The idea was that human brains were much more adept at solving certain problems than conventional algorithms, and researchers thought they could use the architecture of our brains to make machines more efficient at solving these problems.

Basic Structure

Example of Neural Network Structure

Graphical representation of an artificial neuron

The basic building block of a neural network is the neuron, also called the node. The essentials:

• neurons most often have multiple inputs $x_i$
• they only have one output $y$
• the output is computed as $\sum_i w_i * x_i = y$

If you compare this with the above basic structure of a network, you might notice that in that image the nodes have multiple outgoing edges. That just means that the single output is sent to multiple nodes in the next layer as input. The node computes just one value, but that value can be used multiple times.

The neurons are connected with edges, and each edge represents a weight $w$. So for each node every input gets a weight, which is the number by which the input is multiplied before all inputs are added up: $\sum_i w_i * x_i = y$.

At the start of training, the weights are unknown and will be adjusted by trial and error (or with more useful methods like gradient descent). A known data point will be inserted into the network, the result will be compared to the true result $y$, and if it is wrong, the weights need to be changed.

A layer consists of some neurons all at the same depth in the network.
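The weighted-sum computation described above can be sketched in a few lines of plain Python (no ML library; the function name is illustrative, not from any framework):

```python
def neuron_output(inputs, weights):
    """Output of a single neuron: the weighted sum y = sum_i(w_i * x_i)."""
    assert len(inputs) == len(weights)
    return sum(w * x for w, x in zip(weights, inputs))

# A neuron with three inputs and weights 0.5, -1.0, 2.0:
y = neuron_output([1.0, 2.0, 3.0], [0.5, -1.0, 2.0])
print(y)  # 0.5*1.0 + (-1.0)*2.0 + 2.0*3.0 = 4.5
```

In a full network this single value `y` would then be sent along every outgoing edge as an input to the connected nodes in the next layer.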
A network can have arbitrarily many layers, and each layer can have a different number of neurons. There are also different types of layers. The first layer is always called the input layer and the last layer the output layer. Everything in between is a hidden layer. Layers can differ from each other through different activation functions (see below), pooling, etc. Typically, each neuron within one layer has the same activation function, though, meaning the layers themselves are uniform.

Activation functions

An activation function is a function that is applied to the output of a single neuron. One example is the rectified linear activation function, sometimes called a rectified linear unit (ReLU). It calculates $max(0, y)$, so it sets all negative values to 0. That is useful in cases where a negative value would make no sense in the context.
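The rectified linear activation just described is one line of Python when applied to a neuron's output:

```python
def relu(y):
    """Rectified linear unit: max(0, y) clamps negative values to zero."""
    return max(0.0, y)

print(relu(-3.2))  # 0.0, the negative output is clamped
print(relu(1.7))   # 1.7, a positive output passes through unchanged
```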
Free Printable Exponent Rules Worksheets [PDF] Answers Grade 1-12

Exponents are an important algebra topic that students will encounter repeatedly. However, remembering the various exponent rules and how to apply them can prove challenging. Reinforcing these properties takes practice and experience manipulating exponential expressions. To assist with building competency, we offer free printable exponent rules worksheets for extra drill. These useful worksheets provide step-by-step examples and practice problems applying key exponent rules like the power of a power, product, quotient, zero, and more. Educators can use the downloadable PDF and Word worksheets to supplement lessons and activities on exponents. The repetitive practice strengthens students' comprehension and helps cement their knowledge of exponent properties. With customizable worksheets, teachers can easily assign engaging practice to instill mastery of this foundational algebraic skill. Targeted repetition is key for fluency with exponent rules needed to simplify and solve exponential equations.

Importance of Mastering Exponent Rules

Exponent Rules Worksheets

Exponents, or powers, are fundamental components of mathematical language and expression, and understanding their rules is essential for a variety of reasons.

1. Foundational Knowledge in Mathematics: Exponents are a core part of basic arithmetic and algebra. They represent repeated multiplication and play a pivotal role in numerous mathematical operations. Mastery of exponent rules isn't just about knowing how to work with powers; it's about building a solid mathematical foundation. Just as we need to understand basic operations like addition, subtraction, multiplication, and division to progress in math, exponent rules act as building blocks for more advanced topics.

2. Simplifying Complex Calculations: One of the primary reasons exponentiation was "invented" was to simplify complex calculations.
Imagine multiplying a number by itself many times without using exponents: it would be an arduous process. By using exponent rules, we can simplify and condense these calculations, making them more manageable and less prone to errors. Mastering these rules allows students and professionals alike to solve problems more efficiently, especially when dealing with large numbers or repeated operations.

3. Facilitating Advanced Mathematical Studies: As students advance in their mathematical journey, they will encounter topics that inherently involve exponents, such as polynomial equations, logarithms, and advanced calculus concepts. Understanding exponent rules is crucial when dealing with these advanced subjects. For instance, in calculus, the rules of exponents play a vital role when differentiating or integrating certain functions. A firm grasp of these rules ensures a smoother transition to higher-level math topics and reduces the learning curve associated with them.

4. Real-World Applications: Beyond the classroom, exponents have practical applications in various fields. For instance, in finance, compound interest is calculated using exponentiation. In physics, many formulas, especially in topics like quantum mechanics and electromagnetism, involve exponents. In computer science, algorithms often have time complexities expressed in terms of exponents (e.g., O(n^2)). By mastering exponent rules, individuals are better equipped to engage with, understand, and solve real-world problems in these domains.

Printable Exponent Rules Worksheets

Exponent rules worksheets pdf are a great resource for learning and practicing exponent rules. These printable worksheets cover the basic exponent rules such as the product rule, quotient rule, power rule, zero exponent rule, negative exponent rule, and the power of a power rule. The worksheets feature clear explanations and examples of each rule followed by practice problems.
The exponent rules worksheets pdf start with simpler problems and progress to more complex expressions and equations. Students must apply the proper exponent rules to simplify exponential expressions and solve equations. The pdf worksheets provide ample practice so students can master the exponent rules through repetition. With the step-by-step breakdowns, students are able to check their work and comprehend where they made mistakes. Overall, exponent rules worksheets pdf are an excellent math tool for reinforcing exponent concepts. The pdf format allows students to print out the worksheets and work on them at their own pace. Whether in the classroom or homeschool setting, these worksheets can provide the foundation needed for success with exponents. The mixture of clear instructions, examples, and practice prepares students to confidently simplify and solve problems using exponent rules.

Exploration of Exponent Rules

Product of Powers

When we talk about the 'product of powers', we are referring to multiplying two exponential expressions with the same base. The rule states: when you multiply two exponents with the same base, you add their powers. For example, if you have a^m multiplied by a^n, the result will be a^(m+n). This rule is intuitive when you consider the definition of exponents. If you were to expand a^m and a^n, you would be multiplying 'a' by itself m times and then n times, so in total, 'a' is multiplied by itself m+n times.

Quotient of Powers

The 'quotient of powers' rule deals with dividing two exponential terms with the same base. The rule is: when you divide two exponents with the same base, you subtract their powers. For instance, if you have a^m divided by a^n, it will be expressed as a^(m-n). Think of this as the reverse of the product rule. By understanding that division is the inverse operation of multiplication, it makes sense that in division, you would subtract the exponent of the divisor from the exponent of the dividend.
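Both rules are easy to check numerically; here is a quick Python sketch (the base and exponents are arbitrary example values):

```python
a, m, n = 3, 5, 2

# Product of powers: multiplying powers of the same base adds the exponents.
assert a**m * a**n == a**(m + n)   # 3^5 * 3^2 == 3^7

# Quotient of powers: dividing powers of the same base subtracts the exponents.
assert a**m / a**n == a**(m - n)   # 3^5 / 3^2 == 3^3

print("product and quotient rules hold for a=3, m=5, n=2")
```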
Power of a Power

This rule involves raising an exponential term to another power. In other words, you have an exponent taken to another exponent. The rule states: (a^m)^n equals a^(m*n). For instance, (a^2)^3 would be a^6. This is because you're multiplying 'a' by itself twice, and then doing that whole process three times, resulting in 'a' being multiplied by itself a total of six times.

Power of a Product

When raising a product to an exponent, each factor of the product gets raised to that power. The rule is: (ab)^n equals a^n * b^n. So, if you have something like (2x)^3, it would be equivalent to 2^3 * x^3. This rule showcases the distributive nature of exponents over multiplication, emphasizing that each component of the product is affected by the exponent.

Power of a Quotient

This rule is about raising a quotient (or a fraction) to an exponent. The rule dictates that when you have a quotient raised to a power, both the numerator and the denominator are raised to that power. For example, (a/b)^n is equal to a^n/b^n. So, (y/2)^4 would be represented as y^4/2^4. Just as with the power of a product, this rule demonstrates the distributive property of exponents, but in the context of division.

Zero and Negative Exponents

Two very important rules in exponentiation involve zero and negative exponents. Anything raised to the power of zero is 1, provided the base is not zero. So, a^0 equals 1. This rule emerges from the quotient rule. When the exponent is the same in both the numerator and denominator (like a^m/a^m), the result is always 1. Thus, subtracting the same number from itself (like m-m) gives zero, leading to a^0 = 1. On the other hand, a negative exponent represents the inverse. So, a^-n equals 1/a^n. Essentially, the negative exponent indicates a reciprocal. This is useful in simplifying expressions and ensuring positive exponents.

Types of Exponent Rules Worksheets

Understanding exponent rules is key for diving deeper into advanced math topics.
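Before moving on, the power, zero, and negative exponent rules above can be verified in Python the same way (again with arbitrary example values; `math.isclose` guards against floating-point rounding in the quotient case):

```python
import math

a, b, m, n = 2, 5, 3, 4

assert (a**m)**n == a**(m * n)               # power of a power: (2^3)^4 == 2^12
assert (a * b)**n == a**n * b**n             # power of a product
assert math.isclose((a / b)**n, a**n / b**n) # power of a quotient
assert a**0 == 1                             # zero exponent
assert a**-n == 1 / a**n                     # negative exponent is a reciprocal

print("power, zero, and negative exponent rules verified")
```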
To help students get a firm grip on these rules, teachers often use various worksheets. Each type is tailored to fit different learning styles and challenges. Let's look at the most common ones:

Multiple Choice Worksheets

Multiple choice worksheets about exponent rules provide students with questions where they must pick the right answer from a list. For instance, a question might ask: "What is the result of multiplying two powers of 'a' together?" and then provide several choices. This format helps see if students can recall and recognize the correct application of the rules. It's easy to grade and gives a snapshot of the student's understanding.

Fill-in-the-Blank Worksheets

These worksheets have questions where parts of the math expression are left out, and students have to fill them in. For example, the worksheet might show "a raised to the power of 3 times a raised to the power of what equals a raised to the power of 7?" Here, students have to figure out the missing number. This type tests if students really understand and can use the rules on their own.

Simplification and Evaluation Worksheets

Simplification worksheets ask students to take a math expression with exponents and rewrite it in the simplest way. So, if students see something like "a raised to the power of 4 times a raised to the power of 2", they would need to simplify it. Evaluation worksheets, on the other hand, give students specific numbers and ask them to find the answer. If they know that 'a' is 2 and they're asked about "a raised to the power of 3", they'd find out it's 8.

Mixed Review Worksheets

These worksheets are a blend of all the types mentioned before. They might have multiple choice questions, gaps to fill, and problems to simplify, all in one sheet. They are great for seeing if students can handle different kinds of challenges and offer a complete check on their understanding. They're also wonderful for revision since they touch on every aspect of the topic.
Utilizing Exponent Rules Worksheets Exponent rules worksheets are invaluable tools that educators can leverage in various settings to reinforce students’ understanding and application of these mathematical principles. Here’s how these worksheets can be optimally utilized: In the Classroom Incorporating exponent rules worksheets within classroom settings serves multiple purposes. Firstly, they can be utilized as an immediate follow-up after introducing a new exponent rule. This allows students to apply the newly learned concept while it’s still fresh, providing instant feedback to the teacher about their initial comprehension. Secondly, these worksheets can be beneficial for differentiated instruction. Since students have varying levels of understanding and pace, worksheets can be tailored to cater to different proficiency levels. For instance, a teacher could have basic, intermediate, and advanced worksheets and distribute them based on the student’s grasp of the topic. This ensures that all students are challenged appropriately and can progress at their own pace. Homework Assignments Exponent rules worksheets are also highly effective as homework assignments. Given post-lesson, they serve as an extended practice tool, allowing students to reinforce the day’s learnings. Homework assignments have the added advantage of letting students work in a self-paced environment, giving them the freedom to revisit their notes, textbooks, or online resources. Additionally, when students attempt these worksheets at home, they can identify their areas of struggle, leading to targeted questions and clarifications in the subsequent class. This iterative process of learning in class and practicing at home through worksheets can solidify their understanding of exponent rules. Group Activities and Peer Learning Worksheets don’t always have to be a solitary endeavor. 
For exponent rules, which can sometimes be tricky for students, group activities utilizing these worksheets can be particularly effective. Breaking students into small groups and providing them with a mixed set of problems can promote collaborative problem-solving. This peer-learning environment can be especially beneficial as students often have different strengths, and one student’s understanding of a concept can help clarify doubts for another. Moreover, group discussions around challenging problems can lead to deeper insights and a more robust conceptual understanding. After group work, a debriefing session where each group explains their methods and answers can further reinforce learning and allow for correction of misconceptions. Engage in Effective Learning with TypeCalendar’s Exponent Rules Worksheets Learning exponent rules effectively is a blend of practice and conceptual understanding, and TypeCalendar’s Exponent Rules Worksheets are designed to provide just that. These worksheets cover a range of exercises that dissect the rules of exponents into understandable chunks. Whether it’s multiplication, division, or power of a power, students get to explore and practice these rules through well-organized problems. Moreover, the availability of answer keys provides immediate feedback, making the learning process both effective and rewarding. Our primary goal is to create a self-paced learning environment where learners can thrive and excel in mastering exponent rules. Elevate Your Math Practice with Printable Exponent Rules Worksheets from TypeCalendar TypeCalendar’s commitment to fostering a love for learning and enhancing educational experiences shines through with the introduction of these Exponent Rules Worksheets. The ease of downloading and printing these resources means that quality math practice is just a click away. 
The variety of problems presented in these worksheets ensures a comprehensive understanding and application of exponent rules, setting a solid base for tackling more advanced mathematical topics. By offering these free and high-quality resources, TypeCalendar takes pride in supporting the academic community in their journey towards achieving mathematical excellence. Download Exponent Rules Worksheets: A Stepping Stone to Advanced Mathematics Exponent rules are not just mere mathematical operations; they are the stepping stones to more advanced topics in algebra and beyond. With our downloadable Exponent Rules Worksheets, learning and teaching these rules become less daunting and more engaging. The Word format allows for a level of customization, enabling educators to tailor the worksheets to match their curriculum or the learning pace of their students. The PDF format, on the other hand, offers a ready-to-print resource that’s perfect for quick revisions or assessments. By offering these downloadable worksheets, TypeCalendar aims to support a smooth transition into more complex mathematical concepts. Betina Jessen
{"url":"https://www.typecalendar.com/exponent-rules-worksheets.html","timestamp":"2024-11-10T05:32:12Z","content_type":"text/html","content_length":"608953","record_id":"<urn:uuid:25995fcf-c984-4095-a7a6-252ac569b02a>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00357.warc.gz"}
Central Connecticut State University | CS 253 : test2 1. Describe the Binary Tree ADT (give a definition, set of operations,example), and its implementations. Explain and compare linear and linked implementations of binary trees. 2. Describe binary tree traversals (inorder, postorder, preorder and level-order). Give examples of applications of these traversals– describe the application, not just say it. 3. What kind of a binary tree is the heap? Explain different operations of heaps. Compare heaps to binary search trees in term of efficiencies of main operations 4. Explain heap sort (use an example). Compare the efficiency of heap sort to the efficiencies of elementary sorting methods and radix sort. 5. Describe the General Tree ADT (give a definition, set of operations, example). Discuss different ways for implementing general trees, and compare them in terms of the efficiency of search 6. Discuss general tree traversals for both binary tree implementation and ternary tree implementation of a general tree. Give an example to illustrate your answer. 7. Describe the Priority Queue ADT (give a definition, set of operations, example). Discuss different implementations of a priority queue. Show how sorting can be done with a priority queue. 8. Describe the Dictionary ADT (give a definition, set of operations, example). Compare ordered vs unordered dictionaries in terms of the efficiency of main operations, and discuss different implementations of unordered dictionaries (hash tables being one of them). 9. What type of a tree is an AVL tree? Compare it to the binary search tree. Explain how search, insertion and deletion are performed on AVL trees. Discuss possible applications of AVL trees (ordered dictionaries being one of them). 10. Describe 2-3 trees (give a definition and an example). Explain how search, insertion and deletion are performed on 2-3 trees (use an example). How an efficiency of 2-3 trees compares to that of AVL trees? 
(again, an example will be useful to illustrate your answer). 1) trace a data structure 2)define and/or compare efficiencies of program segments.
{"url":"https://www.acemygrades.com/course/central-connecticut-state-university-cs-253-test2-complete-solution-rated-a/","timestamp":"2024-11-12T23:21:48Z","content_type":"text/html","content_length":"46801","record_id":"<urn:uuid:5831981a-c9d8-43e0-a27e-1250836fbe22>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00341.warc.gz"}
A Contribution to the Study of Relative Growth of Parts in Inachus Dorsettensis 1. The relative growth-rate of the various parts of the body was investigated by analysis of linear measurements carried out on the carapace, appendages, and abdomen. 2. The results were as follows: • (a) The chelar propus shows strong positive heterogony in the ♂ and very slight positive heterogony in the ♀. The ♂ chela shows dimorphism, the ♂♂ being differentiated into “high” and “low” forms and the dimorphism in all probability being due to the fact that the ♂ chela assumes the ♀ type of growth to a greater or less extent in the non-breeding season. • (b) The pereiopods are more positively heterogonic in the ♂ than the ♀ and in both ♂ and ♀ there is a graded k series, but whereas in the ♂ k increases from P[1]–P[4], in the ♀ the series is reversed, k being greatest for P[1]. In both sexes the heterogony in the pereiopods is not so marked as that of the chelar propus. In the ♂ the pereiopods suffer an actual decrease in absolute size at the time when the relative growth-rate is least for the chelar propus and after this period never again attain to their original relative size. • (c) The third maxilliped is negatively heterogonic in both sexes and slightly more so in the ♂ than the ♀, although this difference may not be significant. • (d) The abdomen in the ♀ shows marked positive heterogony in young crabs but is isogonic after the attainment of sexual maturity; in the ♂ the abdomen is isogonic in young crabs but becomes slightly negatively heterogonic in old animals. The abdomen in the ♀ is dimorphic, the “low” type (characteristic of the adolescent crab) being separated from the “high” type (characteristic of the adult crab) by a single moult. There is considerable variation in the relative abdominal growth-rate in adolescent crabs and consequently in the time of attainment of sexual maturity and the full relative abdominal width.
During the period of adolescence the increase in relative length and breadth is the same for all segments except the 5th, which forms a growth-centre for length. At the period of rapid conversion of the “low” type of abdomen into the “high” the 5th segment remains the growth-centre for length but a growth-centre for breadth is established in the 6th segment, and increase in relative breadth decreases from the distal to the more proximal segments. 3. A comparison of the ♂ and ♀ as regards relative growth of parts was made and brought to light certain facts. All the pereiopods are relatively longer in the ♂ than the ♀. This is a graded effect: the actual difference in relative length (in ♂ and ♀) decreasing from P[1]–P[4], but the increase in relative length relative to the actual size of the pereiopods increasing from P[1]–P[4]. These facts are interpreted as meaning that there is a common stimulating effect in the ♂, and also a retarding effect of the ♂ chela on the appendages posterior to it. It is tentatively suggested that the facts relating to the 3rd maxilliped are explained if we assume that the ♂ chela has the same retarding effect on the anterior appendages as on those posterior to it. 4. Certain general conclusions were drawn: • (a) There is a different distribution of growth-potential in the two sexes, and strong positive heterogony is found associated with the development of the secondary sexual characters. • (b) The results on the relative growth-rates of the appendages in ♂ and ♀ and on the growth-rates of the individual segments of the ♀ abdomen indicate the presence of definite gradients in the body, in relation to which growth takes place. • (c) The growth-centre for the heterogonic organs is situated towards the distal end of the organ; this is in agreement with results obtained for other Crustacea. The work which forms the subject of the present paper was carried out under the direction of Professor J.
Huxley to supplement work of a similar nature which he himself carried out on Maia squinado (see Huxley 1927) where, inter alia, he found that certain facts could be accounted for by postulating a growth-gradient or graded distribution of what may perhaps be called growth-potential, from a centre in the chelar propus downwards along the chela and then backwards along the body. 2. MATERIAL AND METHODS The crabs were obtained from Plymouth, crabs of as wide a range of sizes as possible being selected, with approximately equal numbers of ♂♂ and ♀♀, and the investigation was carried out by taking linear measurements of the different appendages and parts of the body. The method of measuring as large a number of crabs as possible, taken at random from a population but including a large range in size, was adopted because of the difficulty of keeping crabs in captivity and collecting the successive moults. 162 crabs (75 ♂ and 87 ♀) were measured, ranging in size from 6 to 34 mm. in carapace length. The measurements carried out were as follows (see Text-fig. 1): 1. Carapace length (this was the standard measurement with which all others were compared). From a point between the two anterior spines to the median point of the posterior border of the carapace. 2. Carapace breadth. Greatest breadth of the carapace, between the bases of the first two pereiopods. 3. Cheliped. Only the propodite was measured since the folding back of the cheliped in the ♂ makes a total length measurement unreliable. • Propus length (see diagram). • Propus breadth (maximum). 4. Pereiopods. These were removed from the body at the breaking joint, but the junction of the merus with the ischium was found to be a more convenient point from which to measure, so that the measurement taken was the length of the posterior border of the pereiopod from the proximal end of the merus to the tip of the dactylus.
The pereiopod is, of course, a flexible structure but it is a simple matter to straighten the limb and take the length measurement accurately to the nearest ½ mm. 5. Third maxilliped. A length measurement was taken from the point of junction of the ischium and basis on the inner side to the most distal point of the merus (see diagram). 6. Abdomen. • A. ♀. (1) Length. Greatest length of the abdomen in the extended position, from the median point of the posterior border of the carapace to the tip of the sixth abdominal segment. (2) Length of 3rd, 4th, 5th and 6th abdominal segments. (3) Breadth of 3rd, 4th, 5th and 6th abdominal segments. • B. ♂. (1) Length. Greatest length in extended position. (2) Breadth of 6th abdominal segment. Measurements on the pereiopods were carried out by placing the leg in the fully extended condition on a metal mm. rule; on the carapace (length and breadth) and abdomen (length, ♂ and ♀) with fine accurately adjustable dividers. The third maxilliped, chelar propus, and breadth of the abdominal segments were measured by making use of the travelling stage of a microscope with a cross-wire in the eye-piece. The measurements on the pereiopods were probably accurate to the nearest ½ mm., the abdomen and the carapace to about ¼ mm., and all others to about 1/10 mm. The biometric constants for the measurements have deliberately not been calculated since this piece of work is intended only as a general mapping of the ground which may serve as a basis for more detailed work on certain parts in the future. Some of the smaller differences obtained are doubtless not statistically significant. The original data have been deposited at the British Museum (Natural History). The results were analysed as follows: The ♂♂ were divided into six classes and the ♀♀ into eight classes according to carapace length, and the mean absolute sizes and relative sizes of the different appendages and parts of the body calculated for these classes.
Graphs were then constructed showing: • A. A comparison of the mean relative lengths of the appendages in the ♂ and the ♀ (constructed on data from all classes of crabs together). Graph VII. • B. A comparison of the percentage increase in size of the appendages and abdomen in the ♂ and ♀ for a given percentage increase in carapace length. Graph VIII. • C. A comparison of the ratio ♂/♀ for the absolute size of all the appendages and for the abdomen in large and small crabs, i.e. change in this ratio with change in size. Graph IX. • D. Graphs where the relative length of the part in question was plotted against the carapace length. These showed growth changes in the proportions of the various parts. Graphs III, IV, V. • E. Graphs where the logarithms of the mean sizes for the different classes were plotted against the logarithms of the mean carapace length for the classes. These were of value in determining k in the heterogony formula y = bx^k (y = measurement of part in question, x = standard measurement), and so in comparing the relative growth-rates of the different parts of the body. Graph I. 3. RESULTS A. Growth of the propodite of the cheliped (1) Male The points on the log. log. graph showing the relative growth in breadth (Graph I B) form a rough approximation to a straight line whose inclination gives k = 1·7. However, when we look closer, we find that the relative growth is at first less than this (from 8–15 mm. carapace length k = approximately 1·4), followed by an acceleration of relative growth (k = approximately 2·5), with finally a marked falling off for the last size class. The log. log. graph showing the changes in relative growth-rate for the length of the chelar propus is exactly similar to that for the breadth, but k calculated for the general slope of the line is only 1·3, and the various changes in relative growth-rate are not so marked.
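The use of the log. log. graphs of E to estimate k can be made explicit. Taking logarithms of the heterogony formula linearizes it, so that k is the slope (inclination) of the plotted class means; the following merely restates the relation already used in the text and introduces no new data:

```latex
% Heterogony formula:  y = b x^k,
%   y = measurement of the part in question,
%   x = carapace length (standard measurement).
% Taking logarithms linearizes the relation, so on a log-log plot the
% class means fall on a straight line of slope k; between any two
% class means (x_1, y_1) and (x_2, y_2) the slope may be computed directly.
\[
  y = b\,x^{k}
  \quad\Longrightarrow\quad
  \log y = \log b + k\log x,
  \qquad
  k = \frac{\log y_{2} - \log y_{1}}{\log x_{2} - \log x_{1}} .
\]
% k > 1 : positive heterogony;  k = 1 : isogony;  k < 1 : negative heterogony.
```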
The deviation from a simple straight line series of points is in all probability due to the phenomenon of “facultative” high and low dimorphism, first described by G. Smith (1906) for I. scorpio. In this species the normal strong heterogony of the chela is replaced by the ♀ isogonic type of growth in the non-breeding season. This period is usually passed through in the winter months, but not necessarily, so that during the breeding season there may be three classes of ♂♂, “high,” “low,” and “middle” or “female” type. The “high” and “low” ♂♂ are large bodied and small bodied respectively and most of the “middle” type are of intermediate size. The “middle” ♂♂ have the ♀ type of chela, the others the ♂ type but of different relative size (see Huxley 1927 for analysis). In the summer (breeding season) the number of middle ♂♂ is small and depends on when the last moult took place. In the winter months there are a few “high” ♂♂ but no “low” ♂♂, all the small crabs having the flat ♀ type of chela. The measurements which form the subject of the present paper were carried out on specimens of I. dorsettensis collected at three different times, namely April 1926, October 1926, October 1927. Thus the first batch was collected at the very beginning of the breeding season and the other two batches in the non-breeding season, so that one would expect only a small number of “low” ♂♂ to be represented. In this species there is evidence that the ♂ chela does not go over to the ♀ type of growth, in the non-breeding season, nearly so completely as in I. scorpio. An inspection of the chelae of all the small and medium sized ♂ crabs (under 20 mm. carapace length) showed that a small number of crabs had the definite ♀ type of chela, a small number the full ♂ type (see Text-fig. 2), but that in the majority the chela was intermediate in shape between the ♂ and ♀ types, and that all gradations between the two extreme conditions existed.
A plot of mean chelar breadth against mean carapace length on the absolute scale showed no interruption of the curve at intermediate body sizes, such as was found by Huxley (1927) on analysis of G. Smith’s results on I. scorpio. In view of these facts one would expect Graph I B to show only a slight trace of the phenomenon of “facultative” high and low dimorphism, and examination of the graph proves that this is the case. This graph may be interpreted as follows: from 8–15 mm. carapace length the ♂ chela is of the ♀ type but relatively larger than in ♀♀ of the same size; from 15–17 mm. carapace length the positive heterogony is slightly more marked and this part of the graph probably represents the “low” ♂♂ taking part in the first breeding season; from 17–21 mm. carapace length there is an almost imperceptible decrease in relative growth-rate which may be explained as being due to reversion to the ♀ type of growth, in a certain number of cases, during the non-breeding season; from 21–25 mm. carapace length there is strong positive heterogony, this part of the graph probably representing the “high” ♂♂ taking part in the second breeding season; the final falling off may be due to a second non-breeding season in old animals but as it is based on one class only this cannot be pressed in any way. It is more probable that it has no real significance as a similar large decrease in slope is commonly found towards the end of graphs showing the relative growth-rate of heterogonic organs, and has a purely mathematical explanation. There is, however, dimorphism of a sort in the ♂ chela as it shows a bimodal frequency curve (Graph II), the two modes occurring at 4 mm. and 11 mm. chelar breadth, and representing true “high” and “low” forms. Thus there are two phases of ♂ chelar growth, and the conversion from the “low” type to the “high” type takes place suddenly and presumably at a single moult, which occurs most frequently at about 20 mm. carapace length.
A similar bimodality for chelar propus breadth is shown in the correlation table given by G. Smith (loc. cit. page 97) for I. scorpio, the modes occurring at 3 mm. and 10 mm. chelar breadth. It is, however, not referred to by him in his text. A plot of the modes for the chelar breadth at different carapace lengths, on the log. log. scale, for I. scorpio, showed that there is the same enormous variation in relative growth-rate as in I. dorsettensis. Comparison of G. Smith’s normal and infected ♂♂ (no data for the ♀ available) showed that in I. scorpio, when there is reversion to the ♀ type of growth, the ♂ chela in a certain number of specimens actually goes over to values equal to those for the infected ♂♂. (2) Female The growth of the chelar propus in the ♀ shows slight positive heterogony; the points on the log. log. graph (Graph I C and D) conform to approximately straight lines, for which k = 1·16 for length and 1·15 for breadth, so that, in contrast to the ♂, the relative increase in length is greater than in breadth. B. Growth of the pereiopods (1) Male The relative growth of the pereiopods is shown in Graph I E and Graph III. Graph III shows that the changes in relative growth-rate are in a general way similar to those in the chelar propus but, owing to the much greater length of the pereiopods, all changes are much more marked. The peculiar “back kink” of the curve for all the pereiopods is hard to interpret. It may well be that the relative growth-rate of the pereiopods decreases at the same time as that of the chelar propus decreases (i.e. there is a similar slowing down of growth in the non-breeding season) and that after this period when the relative growth-rate of the chela rapidly increases, this acts as a drain on the pereiopods, so that these never again attain to their former relative size.
On the other hand it may be entirely due to a “draining” or retarding effect of the chela, the effect of which is most obvious just before the period of strong positive heterogony in “high” ♂♂ rather than while this is actually taking place. As the log. log. graph (Graph I E) shows, there is an actual decrease in absolute size (between 19–20 mm. carapace length) for all the pereiopods except the 3rd which shows a trivial increase. k in the heterogony formula calculated for classes 1–5 gives the following series: P[1], 1·22; P[2], 1·22; P[3], 1·26; P[4], 1·27; where P = pereiopod, and after the period of size decrease k does not again reach its previous values for the remaining classes (6–8). (2) Female The growth of the pereiopods is shown in Graph I F and Graph IV. The graphs are rather irregular in spite of the fact that there are larger numbers of individuals in each class than in the case of the ♂, and the meaning of this is not at all clear. Only to an extremely limited extent can the irregularities be said to follow those of the chelar propus. On the other hand the general form of the curve is similar for all the pereiopods. An interesting point brought to light by the log. log. graph (Graph I F) is that, over the whole size range covered, the pereiopods show positive heterogony although this is not so marked as in the case of the ♂, the k series being P[1], 1·11; P[2], 1·10; P[3], 1·06; P[4], 1·08 (the last class being neglected for all the pereiopods in the calculation of k). The significance of the rise in k for P[4] will be referred to again later. C. 3rd Maxilliped The maxilliped is very slightly negatively heterogonic in both sexes (see Graphs III and IV), k for the ♀ being ·98 and for the ♂ ·95. This difference between the ♀ and the ♂ may not be significant, but it is clear that there is no positive heterogony. D.
Abdomen Graph V is constructed from measurements of the 6th abdominal segment in both ♂ and ♀, and shows that in the ♀ there is a period of slight positive heterogony, then one of very strong positive heterogony followed by a period of isogony. The first period represents the narrow flat abdomen of the adolescent crab, the last the broad abdomen with convex ventral surface of the adult crab, and there is evidence that these two types are separated by only a single ecdysis as in the case of I. scorpio as mentioned by G. Smith (loc. cit. p. 68). This has not been actually observed in the case of I. dorsettensis but is proved by the existence of a discontinuously bimodal frequency curve for the abdomen breadth (see Graph VI). The case is somewhat similar to that of the ♂ forceps in Forficula (see Huxley 1927), but the “low” type (adolescent abdomen) and the “high” type (adult abdomen) are more distinct (see Graph VI). All the small ♀ crabs are of the “low” type and all the large ♀ crabs of the “high” type, but crabs of medium size (12–17 mm. carapace length) fall into one or other of the two equilibrium positions. Within the “low” group there is slight positive heterogony (k = 1·4) and the “high” group is isogonic or even slightly negatively heterogonic (k = approximately ·97). The case is different from that of Forficula in that the “high” type follows the “low” type in time, and during the persistence of each type a number of moults may take place. Further, the relative abdomen width in the “low” type increases slightly with increasing body size and in the “high” type remains constant, whereas in Forficula the relative forceps length taken separately for each type slightly decreases. In Inachus it seems clear that the abdomen growth consists of two long periods, one of slight positive heterogony, the other of isogony, separated by a short period of violent heterogony, which presumably begins directly after a moult, since its effects are shown completely by the next moult.
The range of size over which this may occur is 13–17 mm. carapace length. The linear sizes are as 1 : 1·3, which would correspond closely to a doubling in volume and according to Brooks’ and Przibram’s law and experimental data on Carcinus would imply that there is a range of one whole instar for the particular moult at which the adult abdomen is assumed. Similar considerable variation in the size at which the full relative width or adult abdomen is attained is found in the Fiddler crab Uca (see Huxley 1927). Whereas the ♀♀ of both Uca and Inachus acquire a definitive relative abdomen breadth, unpublished data by Huxley and Richards on Carcinus moenas show that this does not occur in Carcinus, the abdomen (like the ♂ chela of Uca etc.) becoming relatively larger with increased absolute size throughout the whole of life. As in the case of I. scorpio, the adolescent abdomen appears to be retained until the first brood of eggs is produced, as (with the exception of four specimens) all the crabs with “adult” abdomen are in berry. The four specimens mentioned above are of carapace lengths 15·2 mm., 15·7 mm., 16·5 mm., 20·2 mm., and have the full relative width abdomen. There is of course the possibility that they were about to pair or that their previous brood of eggs had hatched. As regards the individual segments of the abdomen in the ♀, it was found that in the adolescent crabs showing the “low” type of abdomen, k was the same for all the segments, both as regards length and breadth (k = 1·3), with the exception of the 5th segment where k for length was 2·2. (Only the 3rd, 4th, 5th and 6th segments were measured as the 1st and 2nd segments are too small to be measured with sufficient accuracy.) k for this period was calculated from classes U and V, Table III.
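The correspondence noted above between linear sizes as 1 : 1·3 and a doubling in volume follows from cubing the linear ratio, since volume scales as the cube of linear size:

```latex
% Linear ratio over the transition range 13--17 mm.:
%   17 / 13 \approx 1.3.
% Volume scales as the cube of linear dimension, so
\[
  1.3^{3} \approx 2.2 \approx 2,
\]
% i.e. the range spans roughly one volume-doubling, and hence (on the
% rule of an approximate doubling in volume per moult) about one instar.
```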
For the period when the rapid transition from the “low” type of abdomen to the “high” type is taking place k is greatest for the 6th segment breadth and is approximately 3·15; for the breadth of the remaining segments it gradually decreases (seg. 5, k = 2·8; seg. 4, k = 2·6; seg. 3, k = 2·5). The greatest increase in relative length again takes place in the 5th segment (k = 3·1). The values of k for the lengths of the other abdominal segments are as follows: seg. 6, k = 2·9; seg. 4, k = 2·9; seg. 3, k = 2·1. From this it will be seen that apart from the high value of k for the 5th segment, there is a gradual decrease in k from the distal to the more proximal segments. On the whole the k values for the breadth are higher than those for the length, and this is what one would expect as in the transition from the “low” to the “high” type the main alteration is an increase in relative breadth. Graph V shows that the abdomen in the ♂ is isogonic in young crabs, but becomes slightly negatively heterogonic in older crabs. E. Graphs of Relative Growth of Parts A series of graphs were constructed to compare the ♂ and ♀ as regards relative growth of parts (list already given, p. 145 et seq). (1) Graph VII The mean relative lengths of the third maxilliped, chelar propus, and pereiopods 1–4 were plotted for the ♂ and ♀, and it was found, as was to be expected, that the chela in the ♂ attains to a much greater relative size than in the ♀. The result for the pereiopods was rather unexpected and the exact opposite of the result obtained by Huxley (loc. cit.) for Maia squinado.
In Maia the differences in relative length of the pereiopods in the ♂ and the ♀ fall regularly from P[1] to P[4] and, what is more important, the difference expressed as a percentage of the ♀ value decreased from P[1] to P[4] in the same way, indicating that the strongly heterogonic growth of the chela in the ♂ was correlated with the growth of the pereiopods, P[1] being affected the most and P[4] the least. The difference in the relative size of the 3rd maxilliped in the ♂ and ♀, expressed as a percentage of the ♀ value, was −4·33, indicating a retarding effect of the active growth of the ♂ chela on the appendage anterior to it. In Inachus, on the other hand, the differences between the mean relative lengths of the pereiopods of the ♂ and ♀ do not decrease regularly in this way (P[1], 15·2; P[2], 16·2; P[3], 14·7; P[4], 13·3) and the percentage differences show exactly the opposite result (P[1], 5·0; P[2], 7·1; P[3], 7·8; P[4], 8·2) so that relative to its length the 4th pereiopod is affected most. The 3rd maxilliped, however, showed the same result as in Maia, namely a negative percentage (see above) of 3·48. (2) Graph VIII This graph was constructed by finding the percentage increase in abdomen and appendage size for ♂ and ♀ crabs taken over the same range of increase in carapace length. It shows that in none of the ♀ appendages is the percentage increase in size so great as in the ♂, but for the abdomen it is very much greater in the ♀. In the ♂ pereiopods the percentage increase is greatest for P[3] and P[4] as was to be expected from the values for k. In the ♀ pereiopods the percentage increase is practically the same for P[1], P[2] and P[4], but considerably less for P[3]. This is accounted for by the low value of k for the 3rd pereiopod, but it is difficult to say with certainty what is the meaning of this.
It may perhaps be explained by assuming that as the 4th pereiopod lies opposite the 1st abdominal segment it is included within the region of active growth which is responsible for the great increase in relative size of the abdomen (see Text-fig. 2 A). If this is the correct explanation then k for the 4th pereiopod is abnormally high owing to its proximity to the abdomen, rather than k for the 3rd pereiopod abnormally low. This would agree with the k series for the ♀ pereiopods, as k falls regularly from the 1st to the 3rd pereiopod. For the 3rd maxilliped the percentage increase is greater in the ♂ than the ♀. It is worthy of note that in the posterior part of the body the graph is almost the exact reverse in the ♂ and the ♀. Graph VIII may be said to give part of the “growth profile” of the two sexes. (3) Graph IX was constructed by plotting the ratio ♂/♀ (as a percentage) for the 3rd maxilliped, chelar propus, and pereiopods 1–4 in (a) small ♂♂ (classes G and H, Table I) and small ♀♀ (classes J and K, Table II) and (b) large ♂♂ (classes A–F, Table I) and large ♀♀ (classes A–G, Table II). In the small crabs the chelar propus and 1st and 2nd pereiopods are larger in the ♂ than the ♀ but the reverse holds for the 3rd maxilliped and the 3rd and 4th pereiopods, and of course for the abdomen. The value for the 4th pereiopod is high because of the high value of k in the ♀. In the large crabs the ratio for the chelar propus has increased considerably and the ratios for all the pereiopods have increased to approximately the same value so that pereiopods 3 and 4 have ceased to be longer in the ♀ than the ♂. The ratio for the 3rd maxilliped is slightly nearer, too, showing possibly that the 3rd maxilliped in the ♂ starts by being smaller than that of the ♀ and, although always remaining slightly relatively smaller, more nearly approaches it in size in large crabs. The difference, however, may not be significant. 1.
1. Axial Gradients and the Relative Growth of Parts

Analysis of the growth-rates of the different appendages seems to show that it is not possible to interpret the greater relative length of the pereiopods and smaller relative length of the 3rd maxilliped in the ♂ as simple stimulating and retarding effects of the enlarged chela of the ♂ acting on a pre-existing growth gradient. That there is a graded effect on the pereiopods is obvious, but that it is the very reverse of a stimulating effect is equally obvious, as the percentage increase in length is considerably greater for the 3rd and 4th pereiopods than for the 1st and 2nd pereiopods, and the differences in mean relative length between the pereiopods in the ♂ and the ♀, expressed as a percentage of the ♀ value, show a steady increase from pereiopods 1–4. The k values for the pereiopods corroborate these results, k in the ♂ being approximately the same (1·22) for the 1st and 2nd pereiopods but greater for the 3rd and 4th pereiopods (1·26 and 1·27). It is possible that both the ♂ and the ♀ may be supposed to have the same growth mechanism (a gradient in slight positive heterogony with k greatest for the cheliped and least for the 4th pereiopod), but that the draining influence of the very active growth of the cheliped in the ♂ is such that the gradient is reversed in the ♂, k being least for the pereiopods immediately posterior to the cheliped. The original gradient is not affected in the ♀, where the chelar propus is only very slightly positively heterogonic. An explanation of the facts would be provided by the assumption that in the ♂ there is a general growth-promoting effect and at the same time a draining effect of the large chela, in antagonism. The 3rd maxilliped in both sexes is slightly negatively heterogonic, and more so in the ♂ than the ♀.
This indicates that there is a different growth mechanism in the appendages anterior to the chela, and that possibly the negative heterogony is more pronounced in the ♂ because the chela has the same retarding effect on the growth as in the case of the pereiopods. This retarding effect must not be stressed in any way, as the difference between the ♂ and the ♀ may well not be significant.

2. Distribution of Growth-Potential

One definite result of this work on Inachus dorsettensis has been to show that there are marked regional differences in the relative growth-rates of the different parts of the body, and that the two sexes differ in this respect, the distribution of what we may call "growth-potential" being in favour of the hinder thoracic appendages, especially the cheliped, in the ♂, and in favour of the abdomen in the ♀. Thus the greatest heterogonic growth is found associated with the development of the secondary sexual characters, but neighbouring parts, not usually thought of as secondarily sexual, may be involved in this. Somewhat similar results were obtained by Kunkel and Robertson (1928).

3. Growth-Centres

Recent work on heterogonic growth in Crustacea has brought to light the fact that in parts of the body showing positive heterogony there is a growth-centre, which in appendages is usually towards the distal end. In the ♂ Inachus dorsettensis it is obvious that the propus is the growth-centre for the cheliped; in the ♀ the growth-centre for the abdomen is also near the distal end, the greatest increase in breadth occurring in the 6th (last, most distal) segment, and the greatest increase in length in the 5th segment. Huxley (1927) showed that in Uca and in Maia the growth-centre for the chela (based on weight measurements) was in the propus.

All crabs measured were free from Sacculina externa and any showing obvious regeneration of an appendage were rejected.
Copyright © 1928 The Company of Biologists Ltd.
Matlab's usage of matrices

Trigonometric functions: sin() takes its parameter in radians (such as pi/2); sind() takes it in degrees.

Remainder: rem(a, b) takes the sign of the dividend a, so when a and b have different signs the result can be negative (for example rem(-7, 3) is -1); mod(a, b) takes the sign of the divisor b, and is the one to use when the signs of a and b differ.

Rounding: floor() gives the largest integer not greater than the argument; ceil() the smallest integer not less than the argument; round() rounds to the nearest whole number; fix() truncates toward zero.

isprime() returns 1 if its argument is a prime number, 0 otherwise.

Logarithms and powers: log() uses base e by default; log2() uses base 2. logm() takes a matrix parameter and computes the natural logarithm of the matrix according to the rules of matrix arithmetic. exp(n) is the nth power of e, and expm() is the corresponding matrix operation. sqrt() takes the square root (applied to each element if the parameter is a matrix), while sqrtm() computes the square root of the matrix as a whole.

Function find: find locates the positions of elements satisfying a condition; its parameter is a logical expression. k = find(A > 4) outputs the positions of all elements of A greater than 4 as linear indices; [m, n] = find(A > 4) gives the result as row/column subscripts; [m, n] = ind2sub(size(A), k) converts the linear index positions k into full subscript positions in the A matrix.

Addition and subtraction between matrices operate on corresponding elements (the rows and columns of the two matrices must be the same). Matrix + constant adds the constant to each element of the matrix. The "*" operation is true matrix multiplication, while ".*" operates element by element (the two matrices must be the same size).

Note: in Matlab there are both left division \ and right division /. For ordinary scalars there is no essential difference between the two, only the direction of the operation: dividend/divisor versus divisor\dividend. For matrices, left division A\B is the inverse of A multiplied by B, i.e. inv(A)*B; right division A/B is A multiplied by the inverse of B, i.e. A*inv(B).

1. The element type in an ordinary matrix is unique.
2. Cell arrays (cell): the elements of a cell array may have different types, and the elements are enclosed in curly braces. For example: b = {10, 'aedsa', [11,2,34,12]; 'wasd', 23, 32; 'cai', 3, [1,2,3,4; 23,1,2,3]}. Indexing works much like an ordinary matrix, but the positional information is enclosed in {} curly braces.

Comparison between matrices, such as A > B (A and B must have the same dimensions), compares corresponding elements: where a > b the result is 1, otherwise 0.

Logical operations: AND (&), OR (|), XOR (xor). For example xor(A > 10, B < 10): if at a given position both matrices satisfy, or both fail to satisfy, their respective conditions, the value there is 0; otherwise it is 1.

Function rand: generates random numbers. B = rand(3, 5) makes B a random-number matrix with 3 rows and 5 columns.

A(:, [4, 5]) = ... reassigns part of a matrix: all rows of columns 4 and 5 of A. The brackets cannot be dropped: the index expression takes only one comma, and since several columns are selected they must be grouped in brackets.
Common special matrices: zeros generates an all-0 matrix; ones an all-1 matrix; eye an identity matrix (not necessarily square); rand a matrix of random numbers in 0~1; randn a matrix of standard normally distributed random numbers with mean 0 and variance 1; magic a magic square, i.e. a square matrix in which the elements of every row and column sum to the same value; vander a Vandermonde matrix, in which the last column is all 1s, the second-to-last column holds the given values, the third-to-last their squares, the fourth-to-last their cubes, and so on. For example: A = vander([1; 2; 3; 5]).

Extracting the diagonal elements of a matrix: if the parameter of the diag() function is a matrix, it outputs the diagonal elements; if the parameter is a vector, it generates a matrix with that vector on the diagonal and 0 elsewhere. Extracting the upper triangular elements: the triu() function outputs the elements on and above the main diagonal and sets the rest to 0; the tril() function is its counterpart for the lower triangle.

Functions for flipping matrices: fliplr(x) flips left-right; flipud(x) flips up-down.

Playing audio: sound(a, fs) plays the matrix a at sample rate fs; processing the matrix of an audio file affects the audio itself.

Using matrices to process image files: 1. Read in the image information with a = imread('image name') (the result may be a three-dimensional matrix). 2. Display the image with image(a). 3. Colours can then be changed by manipulating the matrix; consult the colour-map documentation for details.
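One point in the notes above worth pinning down is the rem/mod distinction. Python happens to expose the same two remainder conventions — truncated (sign of the dividend, like MATLAB's rem) and floored (sign of the divisor, like MATLAB's mod) — which makes a compact self-check. This is an illustrative parallel in Python, not MATLAB code:

```python
import math

# Truncated remainder (takes the sign of the dividend), like MATLAB's rem:
assert math.fmod(-7, 3) == -1.0
assert math.fmod(7, -3) == 1.0

# Floored remainder (takes the sign of the divisor), like MATLAB's mod:
assert -7 % 3 == 2
assert 7 % -3 == -2

# The two conventions agree whenever the operands share a sign:
assert math.fmod(7, 3) == 7 % 3 == 1
```

So the "result is -1" in the notes refers to cases like rem(-7, 3): the truncated remainder inherits the dividend's negative sign.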
Baillehache Pascal's personal website

MiniFrame, a framework for solving combinatorial optimization problems

Around 10 years ago I used to participate in programming contests on the CodinGame platform. These were contests where the goal was to code a bot playing some kind of game against other participants, in other words solving problems related to combinatorial optimization in real time. I had developed a framework based on the Minimax algorithm which I reused in each contest and which allowed me to reach a high ranking. Revisiting this framework and making it publicly available had been on my todo list since then. I finally took the time to do it and will introduce it in this article.

Combinatorial optimization

In combinatorial optimization problems we are looking for a subset of elements, inside a discrete and finite set, which maximises an objective function. This can be interpreted as choosing one element among all possible ones, then another one, and so on according to some constraints until an end condition, and finally evaluating the resulting subset according to the objective function. The problem is then how to choose an element at each step to obtain the subset with maximum value. If the problem is simple, the solution can be obtained by exploring all possible combinations and choosing the best one (brute force method). In real life problems it is however impractical most of the time, as the number of combinations is too large and exceeds the available computation time/memory. For example, in the context of a two-player game, the subset is made of the moves constituting one game instance, chosen among all the possible sequences of moves allowed by the game rules. The objective function is the win of the game for a given player. The rules are generally well defined, but the opponent's moves are not known in advance, and even when the winning condition is well defined, the evaluation of a position during the game can be difficult.
As another example, in the context of resource attribution, the problem becomes the selection of resource/user pairs, chosen among all possible pairs. The objective function could then be the maximisation of user satisfaction. Here, the set of possible pairs is known with certainty from the beginning, but the definition of the objective function (what satisfies the user) can be difficult.

Greedy algorithm

The simplest algorithm for this kind of problem is the greedy algorithm. At each step, the best choice is determined with a heuristic based only on local information available at that step. This method is extremely easy to implement and fast to execute, but rarely produces a good solution when applied to complex real life problems.

MiniMax

The MiniMax algorithm (MM) was designed to solve two-player sequential games. The tree of all possible sequences of moves is progressively constructed in a breadth-first order, and each time a node is computed, it is evaluated with the objective function and its score is backpropagated toward the root of the tree. The backpropagation stops when, for a given node, there exists a sibling having a more advantageous value from the point of view of the active player at that node's parent node (see the Wikipedia page for more details). Solving the game using MM then consists of choosing at each node the next move with the best value according to the current player. The name MiniMax comes from the fact that, if the value of a node represents the advantage of a position for player one, then the optimal path in the tree is found by choosing the next node maximising the value when it's player one's turn, and the next node minimising the value when it's player two's turn. I always find that way of thinking confusing, and rather than maximising/minimising relative to one player I personally prefer to see it as maximising from the point of view of the current player.
Both ways of thinking are strictly equivalent, but when adapting Minimax to games with more than two players, or even one player, I found the "always maximising" view more natural.

Monte Carlo Tree Search

While Minimax explores the tree in a breadth-first manner, another approach is to explore it in a depth-first one. This is known as the Monte Carlo Tree Search (MCTS) algorithm. In this algorithm, instead of exploring all subnodes of a node, we choose only one at random. The exploration then goes from the root of the tree down to an end node along one single random path, and starts again from the root along a different random path. The evaluation of a node is then equal to the average of the evaluations of its currently explored subtree's end nodes. Compared to MM, it is advantageous when there is no well defined way to evaluate incomplete subsets (mid-game positions in the context of a game) and only complete subsets (end-game positions) have a well defined evaluation (win/loss for a game). In the latter case, MM can't evaluate a node's value until it has reached end nodes, which generally doesn't happen until nearing the end of the game, due to the breadth-first exploration and limited time/memory. On the other hand, as MCTS explores only a random subset of all possible paths, its evaluation of a node is only an approximation of its real value, and by choosing randomly MCTS takes the risk of missing an optimal path. Completely randomly choosing the path to explore thus appears suboptimal, and the UCB1 formula (\(\max\left(\frac{w_i}{n_i}+c\sqrt{\frac{\ln(N_i)}{n_i}}\right)\)) was developed to improve on that point by recommending which node to explore next (MCTS+UCB1 is referred to as UCT). Unfortunately, it's only a recommendation and still doesn't guarantee it won't miss the optimal path. Also, UCB1 can only be used for evaluation functions with a binary output (win/loss), making UCT less general than MCTS or MM.
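The UCB1 selection rule is short enough to sketch directly (a generic illustration of the formula, not MiniFrame's code): \(w_i/n_i\) is the empirical win rate of child \(i\), \(n_i\) its visit count, \(N_i\) the parent's visit count, and \(c\) the exploration constant, classically \(\sqrt{2}\).

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    # Exploitation term (win rate) plus exploration bonus that grows for
    # under-visited children and shrinks as they accumulate visits.
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    # children: list of (wins, visits) pairs; return index of best UCB1 score
    return max(range(len(children)),
               key=lambda i: ucb1(*children[i], parent_visits))
```

Note how a barely-visited child can outrank a better-scoring but well-explored sibling: that is the "recommendation" behaviour described above, balancing exploration against exploitation.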
MCTS doesn't suffer from that limitation, but may not lead to the optimal answer. Even if it has explored the whole tree, due to the averaging of the leaves' values the optimal solution may be hidden within a subtree full of low value leaves. In that case MCTS would prefer, in the early nodes, a branch leading to a subtree with lower maximal value but higher average one. The two problems above make me think that, except when there is no evaluation function on intermediate nodes and so no choice but to avoid MM, MCTS and UCT are much less appealing than MM.

Alpha-Beta Pruning

The above methods all give instructions on how to explore the tree and how to choose a branch from a node to maximise the outcome. However they are all eventually equivalent to a brute force method: given enough resources they will explore the whole tree. UCT tries to improve on that point by spending more time on the subtrees that look the most promising, but it would be better if it was possible to completely discard subtrees which are certain to contain only suboptimal solutions. This is the idea behind the Alpha-Beta pruning algorithm (ABP). In the MiniMax algorithm, one can observe that, in the context of a multi-player sequential game, as each player chooses the branch leading to the best node from its point of view, exploration of branches to other nodes can be discarded as soon as we know they won't lead to a better solution. To ensure the value of a node, MM is modified from breadth-first to depth-first, and the pruning of not-yet computed branches from \(N_i\) occurs as soon as \(V^*_{P_{i-1}}(N_{i-1})>V^*_{P_{i-1}}(N_i)\), where \(N_{i-1}\) is the parent node of \(N_i\), \(P_i\) is the active player at node \(N_i\), and \(V^*_P(N)\) is the best branch value at node \(N\) from the point of view of player \(P\). In theory, ABP indeed prunes a large part of the tree, and is ensured to return the same optimal path as MM in a much faster time.
However, the depth-first exploration is generally incompatible with resource constraints: only a part of the tree will ever be explored. In the worst case, only one branch of the nodes near the root of the tree is explored, as the exploration ends deep in their subtree before coming back to them. In the context of a game that would be the equivalent of always considering only one single next move. Surely not the right strategy to win. For that reason it is suggested to shuffle the branches during exploration. So in practice we must limit the exploration to a carefully chosen depth, ensuring it completes within the resource limit. This requires being able to evaluate all nodes (not only end ones), and then comes the risk of accidentally pruning optimal branches due to imperfect mid-game evaluation. Choosing a maximum depth also has a big disadvantage: some parts of the tree may have a higher branching factor than others. Without a maximum depth, MM always goes as deep as it can within the resource limit. Setting a static maximum depth would cause early stops of the exploration even where it could have explored further. Here again, I personally find it difficult to believe that ABP could outperform MM, except in situations where resource constraints are not a problem.

MiniMax siblings pruning

Of course, pruning to improve MM is extremely appealing. Unsatisfied by ABP, I've decided to make my own, which I'll call 'siblings' pruning. When a node is visited during exploration, its children are added to the list of nodes to explore only under the condition that their value is within a threshold of the value of the best one among them. The threshold is controlled by the user. When a node is first visited, the pruning occurs based on local information. As the exploration goes deeper into the tree and the values backpropagate, when a node is visited again the pruning occurs based more and more on deep, reliable values. For a very high threshold value, it behaves as standard MM.
For a very low threshold value, it explores only the paths which are the best according to the information available so far. In between, the lower the threshold value the more pruning, hence the deeper the exploration can go in the same amount of time, but the higher the chance of ignoring an optimal path hidden behind a seemingly "bad" position in middle game. The higher the threshold value the less pruning, hence less deep exploration in the same amount of time, but the lower the chance of missing an optimal path. An automatic search for the optimal threshold value can easily be implemented. The results below will show it is indeed an efficient way of pruning.

All the methods introduced above can be used to solve combinatorial optimisation problems as long as they can be represented as trees. More precisely, in terms of implementation, they can be used if the user can provide:
1. a definition of the data representing one node in the tree
2. a definition of the data representing one transition from one node to one of its children
3. a function creating the root node
4. a function computing the list of possible transitions from a node
5. a function computing the child node resulting from a transition from its parent node
6. a function checking if a node is a leaf
7. a function evaluating a node

It is then possible to create a reusable framework implementing the algorithms introduced above, ready to use by a user who only needs to implement the two data structures and five functions above. MiniFrame is a framework written in C doing just that. In addition, it also provides tools useful to implement the user functions, debug them and search for the best parameters. You can download it here (see at the end of the article for old versions). I describe below how to use it and how it performs on concrete examples.

Tic-tac-toe is the simplest example possible, so let's start with that.
I guess everyone knows this game so I won't explain the rules here (if you need a refresher the Wikipedia article is here). Some vocabulary: in MiniFrame a node is called a "world", a transition between two nodes is called an "action", and the entities at the origin of the actions (in a game, the players) are called the "actors". We first need to tell MiniFrame how many actors there are, and how many actions there can be from any given world. For performance optimisation, MiniFrame uses its own memory management functions and a memory pool to avoid dynamic allocation during execution. Hence the necessity to indicate beforehand the maximum number of actions per world. The data structures are super simple: a grid of 3 by 3 cells. The cell value encodes an empty cell (-1), a cross (0) or a circle (1). An action is represented with the row and column, and a value (same encoding as a cell). MiniFrameActionFields and MiniFrameWorldFields are macros to automatically add the internal fields used by MiniFrame. The five functions (plus one to share common code) implementing the problem to solve (here, the game) are as follows. In MiniFrameWorldSetToInitialWorld, the second argument userData is used to pass optional data for initialisation (cf MiniFrameWorldCreateInitData below). If null, default values are used. The aliveActors array indicates which actors are 'alive'. Here both players play until the end of the game, but in games with 3+ players one player may be eliminated in mid-game while the remaining players keep playing. That array allows handling of such cases. Also, expert readers will surely see many things that could be optimised. The point here is just to give a simple example. In addition to these structures and functions, MiniFrame actually needs five more functions:
1. a function comparing whether two worlds are to be considered the same (used when looking up an instance of a world in the tree)
2. a function copying the user defined properties of a world into another
3. a function printing a world on a stream in a human readable format
4. a function printing an action on a stream in a human readable format
5. a function creating the initialisation data used by MiniFrameWorldSetToInitialWorld; in this simple example it's always an empty grid, but this function becomes useful in more complex games/problems to automatically create the random initialisation states used by the other tools (cf below)

MiniFrame usage

One can then use MiniFrame in several different ways. They almost all have in common the creation of a MiniFrame instance, setting its parameters, using it, and finally freeing it. As explained above, MiniFrame uses a memory pool for better performance, so one must provide the number of world instances MiniFrame will have to work with. Of course the more the better; it just depends on the resource constraints of the environment MiniFrame is running in. The helper function MiniFrameGetNbWorldForMemSize calculates the number of world instances fitting in a given amount of memory, such that the user can specify either the available number of worlds or the available amount of memory in bytes. Next, the initial world and the algorithm used (MM, MCTS, ABP) are set. Here, as we used MM, the depth of exploration is also set. For this simple example, we know there can't be more than 9 moves, so setting the max depth to 10 ensures MiniFrame will explore down to the end (within the limits of available time and memory). Note that the maximum exploration depth is relative to the current world, of course (at current step 0 it would explore up to step 9, at current step 1 it would explore up to step 10, and so on...). An executable binary can be compiled as follows. The source file of MiniFrame uses a macro (MINIFRAME_WORLD_IMPL) defined in the compilation options, which points to the source file implementing the game/problem. That source file is included in MiniFrame and the whole code is compiled as one unit. A Makefile rule would look as follows.
It's also possible to have several problems solved using MiniFrame at once in the same project. Instead of compiling an executable, compile an object file (-c tictactoe.o instead of -o tictactoe) and define in tictactoe.c some exposed functions instead of the main() for the project to interact with the tic-tac-toe solver. Then use the exposed functions and link the object file into the project.

One game simulation

Coming back to the part left undefined in the main(), let's see how to run one game simulation using MiniFrame. The framework has two modes of exploration: in the main process, or in the background. MiniFrame is indeed multithreaded. The main thread interacts with the user, another thread performs the exploration, and another thread manages the memory. The goal is of course to improve performance in terms of execution speed. If you're wondering why not more threads, well, I'd like to, but if you're used to working with multithreading you surely know how much of a headache it quickly becomes. So, moving slowly but surely, I improved from my old CodinGame framework's single thread to just 3 for now. Another motivation (or de-motivation?) is that using more threads doesn't help on the CodinGame platform. Back to the code, one can run one game simulation as follows (in the background): One local copy of the current world is kept in memory. In a more complex example it would be updated by the main process with external input (like the opponent's actual moves). MiniFrameRunBackground(mf) starts the exploration thread in the background. No need to worry about the memory management thread, MiniFrame does it for you. Then we loop until the end of the simulated game: wait some time for the exploration thread to do its job, get the computed best action, apply it to the local copy of the world and display it, step the local world and inform MiniFrame of the new current actual world. An example output looks like this: As one would expect, playing perfectly against itself leads to a tie!
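That perfect-play tie is easy to double-check with a tiny standalone solver, independent of MiniFrame (a Python illustration in the same "always maximising" style; MiniFrame itself is C):

```python
from functools import lru_cache

def winner(board):
    # board: tuple of 9 cells, '.' empty, 'X' or 'O'
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    # Value of the position for `player` (to move): +1 win, 0 tie, -1 loss.
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0
    other = 'O' if player == 'X' else 'X'
    # Maximise for the player to move, negating the opponent's value.
    return max(-solve(board[:i] + (player,) + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == '.')
```

`solve(('.',) * 9, 'X')` evaluates the whole game and confirms the value of the empty board is 0: a tie under perfect play from both sides.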
Running the same simulation in the main thread would look like this: MiniFrameRunOneStep(mf) is blocking and computes as many worlds as it can within the limit of the available memory. A time limit can also be set with MiniFrameSetMaxExploreTimeMs(mf, timeMs). Note that if no time limit is set MiniFrameRunOneStep(mf) performs only one pass on the tree. That's probably not what you want, in particular when using MCTS.

Debug mode

Even on an example as simple as tic-tac-toe I've been able to mess up the world implementation and not get it right from the first compilation, genius! Once the world implementation is buried inside the MiniFrame framework, debugging what's wrong is really challenging. A debugging tool is a must, and of course I provide one. It allows the user to manually run the exploration process and check the exploration tree. The undefined portion of the main file becomes in that case just one line: This starts an interactive console which can be used as follows. Available commands are: An example of use on tic-tac-toe would look like this. The user first checks the memory information, then the initial world. He/she runs one step of exploration and checks the updated memory information. Next he/she checks that actions have been created on the current initial world, and moves to the next world according to the current best action. Next he/she comes back to the previous world, moves to the next world through another action, and sets that new world as the actual current world before running one more step of exploration. Finally he/she displays again the information about memory. Thanks to this debugging tool, one can quickly check that the initial world, calculated actions, results of actions, etc. are as expected. It is also helpful when designing the evaluation function: one can see how the exploration tree varies according to it, and explore the tree to better understand why an action ends up being the best.
Qualification and server mode

From my experience, implementing the world is the easy part of the process. Even when it isn't straightforward or well defined, tree exploration is robust enough to provide an 'ok' solution even with a clumsy implementation. For example, in a CodinGame tournament, the referee will send you the current actual game state at each turn, hence eventual mistakes get corrected. The exploration will be incorrect, but if the implementation is not completely meaningless, it will in general lead to an acceptable next action. Of course, the more exact your implementation is, the more accurate the exploration will be, and the better the next action you can expect. But anyway, you'll never have enough time and memory to calculate 'the' best action. It's a good example of "don't care if it's the best, care if it's good enough", which is a general truth on a real world battleground. Rather, the design of a good evaluation function is where the headaches and sleepless nights await you. Unclear and contradicting goals are common; in a game, for example, you often need to balance aggressive and defensive moves. And in practice, tweaking your evaluation function 'just a little' generally leads to completely different behaviour for your AI, which is very confusing and needs careful examination with the debugging tool. At the same time you generally want the evaluation function to be as fast as possible, while using as little memory as possible. It's really the place where one can shine. How to evaluate whether one evaluation function leads to better results than another is also a big matter of concern. The only reasonable way is to run your different implementations against each other many times, and have a way to evaluate the results. The first problem here is time. At the beginning, there will be clear improvements on the evaluation function and a few runs should give clear results.
But as you approach a good solution, improvements will be more and more subtle, and you'll need more and more runs to get an idea of which version is the best. More runs equals more time, which you generally do not have. The second problem is how to evaluate the results. Here, my go-to has always been the ELO ranking system. It's easy to implement and use, supports any number of players (including one: run two single player games and choose the winner as the one with the higher score) and gives a clear and reliable ranking. To make the evaluation function design task easier, MiniFrame provides a qualification mode which automatically evaluates several implementations using ELO ranking. The qualification mode uses another mode of execution: the server mode. In that mode, MiniFrame runs as a TCP/IP server, with the exploration process in the background, and communicates with an external client to update the actual current world and get the best action. In the context of a game with two players using that server mode, there would be two servers (one for each player) and one client (acting as the referee between the players). One can simply start MiniFrame in server mode with the following code (still as the undefined portion of the main() introduced earlier): The server automatically chooses an ip and port, which are written into the file located at filenameServerAddr for a client application to retrieve. In the context of MiniFrame it's fine to use the same file for all servers; there is no risk of conflicts by overwriting. MiniFrameSetServerVerbose(mf, false) allows turning on/off the verbose mode of the server. A client interacts with the server as follows: The ip and port are retrieved from the file the server has updated, and used to start a connection between the client and server(s) (only one server in that example, but there would be one per player in a multiplayer game). Initialisation data are created using MiniFrameWorldCreateInitData(...)
and sent to the server(s) with MiniFrameServerInit(...). Then the simulation runs in a loop until the current world becomes an end world. At each step (game turn) the current world is actualised on the server (the one of the current turn's player if there are multiple players) using MiniFrameServerActualiseWorld(conn, &world). The next action is requested with MiniFrameServerGetAction(conn, &action) and the current world is updated accordingly. Here the referee may actually update the current world in a different manner than what the server expected. It would be the case, for example, for incomplete information games. Running MiniFrame in server mode also allows interfacing it with any code able to communicate through TCP/IP. It opens the door to many different uses of MiniFrame by other software. Back to the qualification mode. First we need several instances to be qualified. I like to do it using compilation arguments and macros, keeping everything in one single source file for the world implementation. For example, in tic-tac-toe two different evaluation functions would be defined like this (the second one being wrong on purpose): Obviously, the user is not limited to the evaluation function. The type of search, or MiniFrame's parameters, or even the world implementation could also vary according to VER. Only the two data structures defining the world and action must be identical between all versions under qualification (for the referee to have a common communication pattern with all servers). Then the two binaries, one per version, could be compiled like this: To be used with the qualification mode, each version's binary should be running in server mode (cf. details above). The binary for the qualification itself looks like this: A MiniFrameQualifier is created to set the qualification parameters, and the qualification itself is done with MiniFrameQualification().
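For reference, the ELO update driving such a ranking can be sketched as follows. This is the textbook formula, shown for illustration only; MiniFrame's actual implementation is not shown here and may differ:

```cpp
#include <cmath>

// Textbook ELO rating update. 'score' is 1.0 for a win, 0.5 for a draw,
// 0.0 for a loss; 'k' controls how fast ratings move.
double eloUpdate(double ratingA, double ratingB, double score, double k = 32.0) {
    // Expected score of A against B.
    double expected = 1.0 / (1.0 + std::pow(10.0, (ratingB - ratingA) / 400.0));
    return ratingA + k * (score - expected);
}
```

After each game of a round, both instances' ratings are updated symmetrically; over many rounds the ranking stabilises, which is what makes it suitable for comparing evaluation functions.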
The qualification outputs its result on stdout, but also returns it as a MiniFrameQualificationResult structure for the user to use in any way they like. The parameters of the qualification are: the number of rounds (one round consists of one game for each combination of actors), the number of qualified instances and the path to the binary of each instance, the path to the file used to exchange the server IP and port, the 'thinking' time between each step, and optional user data (argument of MiniFrameWorldCreateInitData()). The thresholdScore parameter is used for early termination of the qualification. If negative, the qualification runs for the given number of rounds. If positive, the qualification quits early if the best instance's score is larger than the second best's score plus the threshold. This allows saving time when one version clearly outperforms the others and no more rounds are necessary to know that it is the best one. Running the qualification gives the following output, where we clearly see the version with the correct evaluation function winning over the one with the wrong evaluation function.

Parameters search mode

The qualification mode introduced above is already a precious tool to design a good evaluation function, but MiniFrame offers even more: an automatic parameters search mode. The evaluation formula is almost always subject to some parameters (weights, heat map, threshold, ...). Looking for their optimal values is a tedious task, which can be automated. In MiniFrame I reuse the qualification mode and the differential evolution algorithm to provide a parameters search mode. A binary running in server mode and accepting the parameter values as arguments is prepared as follows: Then the source for the search binary is as follows: nbEpoch is the number of epochs in the differential evolution algorithm. nbHyperParam is the number of searched parameters.
hyperParamMins and hyperParamMaxs define the range of possible values for each searched parameter. seedHyperParams lets the user provide a value for each parameter for the first instance of the first epoch (useful to force a starting point in the differential evolution algorithm). pathInstance is the path to the binary accepting the parameters as arguments (cf. above). nbInstance is the number of instances in the differential evolution algorithm. The other parameters are the same as for the qualification mode. In this example the settings are pretty low; in reality you would use much larger numbers of instances, rounds and epochs. But be aware that it very quickly explodes in terms of execution time. Running the automatic search on the simple tic-tac-toe example gives the expected result: any positive value is the best value:

Mancala and Reversi

Tic-tac-toe is a good test case thanks to its simplicity, but is not really useful or representative of the performance of MiniFrame. Let's see two more interesting examples: Mancala and Reversi. If you don't know these games, I refer you to, respectively, this article and this article. Mancala has a very clear evaluation function: simply the current number of captured stones. It's a very good candidate for a concrete use of MiniMax. Indeed I've already implemented a web app a long time ago allowing a human to play against a bot using MM. It predates MiniFrame, but is still a good representative of what can be done. It's available here. A minimalist version using MiniFrame is also available here. Implementing Mancala in MiniFrame can be done as follows: Reversi is an even more interesting candidate because it's still easy to implement, but a good evaluation function is extremely difficult to find. As the discs can flip until the last move, there is no obvious way to know which player is leading during the game.
A version developed with MiniFrame is available to play online here (excuse the extreme lack of effort put into the interface; the point here is just to prove it's working). As a tournament of Reversi is available on CodinGame (here), it also allows me to evaluate the performance of a solution based on MiniFrame. At the time of writing my solution ranks 51st among 474 entries. Not at the top of the world, but good enough for me to believe MiniFrame runs as expected and is performant. The difference with stronger solutions lies probably in the evaluation function and optimisation of the world implementation. To avoid spoiling the tournament I don't share the code of my implementation here.

As I've written at the beginning of this article, this framework can be applied to solve any combinatorial optimisation problem that can be represented as a tree. To illustrate this, I've used MiniFrame to create a Sudoku solver. At the same time it also shows that it can be used when there is only one 'player'. In that case, one node (or 'world') represents a grid, totally or partially filled, with any value in [1,9] in the non-empty cells. One transition (or 'action') consists of filling one empty cell with a value. The initial world is the initial sudoku grid. A grid with no empty cells is an end world. The evaluation function is simply the number of non-empty cells. The creation of possible actions from a given world is where the magic happens. A dummy solution would be to consider all combinations (empty cell, value in [1,9]). This would lead to an enormous number of actions for starting grids. The trick here is to do what I call 'pruning by design'. Instead of all combinations, search for one cell where there is only one possible value. If such a (cell, value) pair exists, set it as the only possible action from that world. If no such (cell, value) exists, consider the pairs (cell, value) which are valid.
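The 'pruning by design' step can be sketched as follows. This is a hypothetical standalone helper of my own, not MiniFrame's actual code; empty cells are represented by 0:

```cpp
#include <array>
#include <optional>

using Grid = std::array<std::array<int, 9>, 9>;  // 0 = empty cell

// Is value v legal at (r, c) given the row, column and 3x3 box constraints?
bool allowed(const Grid& g, int r, int c, int v) {
    for (int k = 0; k < 9; ++k)
        if (g[r][k] == v || g[k][c] == v) return false;
    int br = 3 * (r / 3), bc = 3 * (c / 3);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (g[br + i][bc + j] == v) return false;
    return true;
}

// Returns {row, col, value} for the first empty cell that admits exactly one
// value, if any. When such a cell exists, it becomes the single possible
// action from this world, collapsing the branching factor to one.
std::optional<std::array<int, 3>> singleCandidate(const Grid& g) {
    for (int r = 0; r < 9; ++r)
        for (int c = 0; c < 9; ++c) {
            if (g[r][c] != 0) continue;
            int count = 0, last = 0;
            for (int v = 1; v <= 9; ++v)
                if (allowed(g, r, c, v)) { ++count; last = v; }
            if (count == 1) return std::array<int, 3>{r, c, last};
        }
    return std::nullopt;
}
```

Only when no such cell exists does the action generator fall back to enumerating all valid (cell, value) pairs.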
This dramatically reduces the branching factor, with a single action per world most of the time. MiniFrame in MiniMax mode (without pruning) and 10MB of memory were enough to solve all the grids I have tried (well under one second). This solver is also available as a web app here, and the code looks like this: Note that I'm using here a helper function, MiniFrameStepBestAction(), which is equivalent to getting the best action and actualising the current world at once, convenient when using MiniFrame to solve that kind of problem.

Mars lander

Another claim I've made is that MiniFrame can be used under real time constraints. The CodinGame solution for Reversi already proves it: it must output its action within 150ms, which it does successfully. As one more example, I've chosen another problem available on CodinGame: the Mars lander. Here the goal is a simulated landing on the surface of the planet Mars. The problem is limited to 2D, but follows the physical reality of such an operation. The lander is subject to gravity, controls its attitude through changes in orientation and thrust, and must satisfy constraints of position, speed and orientation for a successful landing, while using a limited amount of fuel. Here, the 'world' is the status of the lander (position, speed, attitude, landscape), and an 'action' is a command to activate the thruster with a given power, or to change orientation. To limit the number of possible actions, I limit an action to be either thrust or orientation (on CodinGame both are allowed simultaneously), the change of orientation to be \(\pm1\) degree, and the thrust to be one of {0,4} when the lander is not above the landing area, and one of {0,2,4} when it is above it. The evaluation function is divided into three parts, depending on where the lander is located relative to the landing area. It is crafted in a way to control the behaviour of the lander.
Away from the landing area, it gets near to it as fast as possible while avoiding the landscape. High above the landing area it stabilises its horizontal speed and starts the descent. Low above the landing area, it focuses on a smooth landing within the allowed constraints. Within the 100ms time constraint between each action and 10MB of available memory, using MiniMax with siblings pruning allows the lander to plan its control several seconds ahead. It achieves 100% successful landings on 100 tests with random initial conditions (attitude and landscape). The animation below gives an illustration of the landing maneuvers. The red dot is the lander, the grey area is the landscape profile (the highest mountain is 484m, for scale), and the landing area is the flat area at the center of the screen.

Comparison MM, MCTS, ABP

Finally, here is a comparison of the performances of MM (with/without siblings pruning), MCTS, and ABP on the Mancala and Reversi games. Using MiniFrame's qualification tool of course. • Comparison of MM without siblings pruning, and MM with siblings pruning for several threshold values on Reversi (150ms per step, 100MB of memory, 10 rounds): Results show that 1) there is an optimal value for the pruning threshold, 2) siblings pruning with an appropriate threshold value outperforms MM without pruning. I think the A shape of the results can be explained as follows: a too small threshold discards moves that are bad in the short term but prove to be the optimal choice in the long term; a too large threshold is equivalent to no pruning at all, hence shallower exploration; intermediate thresholds are a good balance between keeping actions which are not optimal locally but may prove optimal later, and pruning actions which look locally really too bad, saving exploration resources to go deeper in the tree. As the evaluation function becomes more accurate and the exploration resources increase, I expect the optimal threshold to move to the left (stronger pruning).
• Comparison of ABP with various maximum depths of exploration, and MCTS, on Reversi (150ms per step, 100MB of memory, 10 rounds): Results show that 1) the maximum depth has an influence on the performance, 2) better performances are obtained for a small maximum depth, 3) MCTS performs similarly to ABP with a small maximum depth. I think the results can be explained as follows: ABP depth 64 should be equivalent to MCTS, however my implementation of MCTS explores around 15% more worlds than ABP in the same time, hence the better rank of MCTS; as the maximum depth of ABP increases the rank decreases due to the increasing number of missed branches; the evaluation function is probably a good one, because evaluating everything at depth 4 is clearly better than evaluating deeper and missing branches; MCTS being as good as ABP depth 4 probably means that there is not a big difference overall between all these versions. • Comparison of MM without siblings pruning, MM with siblings pruning (threshold 5), MCTS, and ABP (depth 4) on Reversi (150ms per step, 100MB of memory): Results show that 1) the type of algorithm significantly impacts the performance, 2) MM with pruning > MM without pruning > ABP > MCTS. I think the results can be explained as follows. In the game of Othello the situation changes hugely until the last move of the game, which makes it very difficult to evaluate moves in the long term, and optimal moves are hidden in subtrees of bad moves. This makes a partial exploration of the tree particularly inefficient and explains MM > (MCTS and ABP). At the same time, we have to deal with the resources constraint. Pruning is necessary, and siblings pruning proves to be a very good way to do it.
• Comparison of MM without siblings pruning, and MM with siblings pruning for several threshold values on Mancala (150ms per step, 100MB of memory, 20 rounds): Results show that 1) a strong pruning has a very negative impact on performance, 2) the other pruning thresholds have no significant impact relative to each other and relative to no pruning. In my implementation of Mancala, the evaluation function is simply the current score. It may not vary for several moves, and when it does, it's in a much quieter way than the one for Reversi. The exploration should go much deeper to notice that a locally bad move ends up being a good one. This explains why pruning here doesn't make a big difference, except when pruning so strongly that it shies away from risk, always waiting for killer moves that never come. • Comparison of ABP with various maximum depths of exploration, and MCTS, on Mancala (150ms per step, 100MB of memory, 20 rounds): Results show that 1) there is an optimal maximum depth, 2) ABP at the appropriate depth outperforms MCTS. Again, MCTS being faster than ABP explains why it performs better than ABP depth 64. But contrary to Reversi, the evaluation function is worse at predicting the outcome of a game, hence increasing the maximum depth a bit performs better than a minimal one, up to a point where the quantity of missed branches overcomes the advantage of a deeper exploration. • Comparison of MM without siblings pruning, MM with siblings pruning (threshold 20), MCTS, and ABP (depth 12) on Mancala (150ms per step, 100MB of memory): Results show that 1) the type of algorithm significantly impacts the performance, 2) MM with pruning > MM without pruning > ABP > MCTS. Overall, results are similar to those obtained on Reversi.

In this article, I've introduced the MiniFrame framework implementing MiniMax, Monte Carlo Tree Search, and Alpha-Beta Pruning. This framework allows implementing solvers in the C programming language for combinatorial optimisation problems.
The framework provides tools (debugger, evaluation function qualification, hyperparameter search, TCP/IP server) useful for the implementation of the problem, the design of the evaluation function, and the usage of the solver in various ways and contexts. It also provides an original pruning method for the MiniMax algorithm, which I've shown to perform better than other methods on two examples. The framework has been validated and qualified on several examples introduced here, and on the CodinGame platform with the following bot programming contests (ranking as of 2024-09-20):
• Othello (57th/525)
• Ultimate Tic-Tac-Toe (567th/9026, gold league)
• Wondev Woman (98th/1918, gold league)
• Spring Challenge 2021 (818th/8288, gold league)
• Olymbits (287th/5497, gold league)
• Smash The Code (214th/2848, gold league)
MiniFrame has also been used to solve a combinatorial optimisation problem for the company Sakanotochu. The implemented solution is currently used to help define optimal vegetable baskets in a subscription system, saving the users several hours of manual work every week. The source code of the current version of MiniFrame is available under GPL here. Any questions or comments are welcome by email. List of versions:
• v1.3.0: miniframe.1.3.0.tar.gz. Add print of invalid commands in server mode. Add MF_MINIMAX_RELATIVE_PRUNING option. Add MF_FREEING_MEM_MULTITHREAD option. Modification to get the best action from sente's point of view only in case of tie. Add display of local score in debug console. Add flagDisplayWorld in MiniFrameQualifier. Add log of server id in verbose mode. Released on 2024/09/20.
• v1.2.1: miniframe.1.2.1.tar.gz. Add montecarlo_memoryless type of exploration. Add automatic contracting/expanding strategy for pruning threshold. Add return value to MiniFrameSetActualWorld() to indicate if the world in argument was found. Add MiniFrameWorldGetSrcAction() to get the world at the origin of a given action.
Refactor the communication between the server and client to allow for server-based data formatting (enabling the implementation of fog-of-war in games for example). Refactor MFUpdateScoreMiniMax(). Add 'threshold' and 'prune' commands to the debug console. Released on 2024/05/21.
• v1.1.0: miniframe.1.1.0.tar.gz. Refactor private MFGetBestAction() as public MiniFrameWorldGetBestAction(). Modify MiniFrameGetCurrentWorld() to include the non-user-defined properties in the returned copy of the current world. Released on 2023/11/30.
• v1.0.0: miniframe.1.0.0.tar.gz. Initial version available publicly. Released on 2023/11/12.
Developmental Math Emporium Learning Outcomes • Use the definition of proportion In the section on Ratios and Rates we saw some ways they are used in our daily lives. When two ratios or rates are equal, the equation relating them is called a proportion. A proportion is an equation of the form [latex]{\Large\frac{a}{b}}={\Large\frac{c}{d}}[/latex], where [latex]b\ne 0,d\ne 0[/latex]. The proportion states two ratios or rates are equal. The proportion is read [latex]\text{``}a[/latex] is to [latex]b[/latex], as [latex]c[/latex] is to [latex]d\text{''.}[/latex] The equation [latex]{\Large\frac{1}{2}}={\Large\frac{4}{8}}[/latex] is a proportion because the two fractions are equal. The proportion [latex]{\Large\frac{1}{2}}={\Large\frac{4}{8}}[/latex] is read “[latex]1[/latex] is to [latex]2[/latex] as [latex]4[/latex] is to [latex]8[/latex]“. If we compare quantities with units, we have to be sure we are comparing them in the right order. For example, in the proportion [latex]{\Large\frac{\text{20 students}}{\text{1 teacher}}}={\Large\frac{\text{60 students}}{\text{3 teachers}}}[/latex] we compare the number of students to the number of teachers. We put students in the numerators and teachers in the denominators. Write each sentence as a proportion: 1. [latex]3[/latex] is to [latex]7[/latex] as [latex]15[/latex] is to [latex]35[/latex]. 2. [latex]5[/latex] hits in [latex]8[/latex] at-bats is the same as [latex]30[/latex] hits in [latex]48[/latex] at-bats. 3. [latex]\text{\$1.50}[/latex] for [latex]6[/latex] ounces is equivalent to [latex]\text{\$2.25}[/latex] for [latex]9[/latex] ounces. [latex]3[/latex] is to [latex]7[/latex] as [latex]15[/latex] is to [latex]35[/latex]. Write as a proportion. [latex]{\Large\frac{3}{7}}={\Large\frac{15}{35}}[/latex] [latex]5[/latex] hits in [latex]8[/latex] at-bats is the same as [latex]30[/latex] hits in [latex]48[/latex] at-bats. Write each fraction to compare hits to at-bats.
[latex]{\Large\frac{\text{hits}}{\text{at-bats}}}={\Large\frac{\text{hits}}{\text{at-bats}}}[/latex] Write as a proportion. [latex]{\Large\frac{5}{8}}={\Large\frac{30}{48}}[/latex] [latex]\text{\$1.50}[/latex] for [latex]6[/latex] ounces is equivalent to [latex]\text{\$2.25}[/latex] for [latex]9[/latex] ounces. Write each fraction to compare dollars to ounces. [latex]{\Large\frac{$}{\text{ounces}}}={\Large\frac{$}{\text{ounces}}}[/latex] Write as a proportion. [latex]{\Large\frac{1.50}{6}}={\Large\frac{2.25}{9}}[/latex] Look at the proportions [latex]{\Large\frac{1}{2}}={\Large\frac{4}{8}}[/latex] and [latex]{\Large\frac{2}{3}}={\Large\frac{6}{9}}[/latex]. From our work with equivalent fractions we know these equations are true. But how do we know if an equation is a proportion with equivalent fractions if it contains fractions with larger numbers? To determine if a proportion is true, we find the cross products of each proportion. To find the cross products, we multiply each denominator with the opposite numerator (diagonally across the equal sign). The results are called cross products because of the cross formed. The cross products of a proportion are equal. Cross Products of a Proportion For any proportion of the form [latex]{\Large\frac{a}{b}}={\Large\frac{c}{d}}[/latex], where [latex]b\ne 0,d\ne 0[/latex], its cross products are equal. Cross products can be used to test whether a proportion is true. To test whether an equation makes a proportion, we find the cross products. If they are equal, we have a proportion. Determine whether each equation is a proportion: 1. [latex]{\Large\frac{4}{9}}={\Large\frac{12}{28}}[/latex] 2. [latex]{\Large\frac{17.5}{37.5}}={\Large\frac{7}{15}}[/latex]
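As a worked check of the two equations above using cross products (the arithmetic here is mine, added for illustration):

```latex
% Equation 1: is \frac{4}{9} = \frac{12}{28} a proportion?
4 \cdot 28 = 112 \qquad 9 \cdot 12 = 108 \qquad 112 \ne 108
\quad\Rightarrow\quad \text{not a proportion}

% Equation 2: is \frac{17.5}{37.5} = \frac{7}{15} a proportion?
17.5 \cdot 15 = 262.5 \qquad 37.5 \cdot 7 = 262.5 \qquad 262.5 = 262.5
\quad\Rightarrow\quad \text{a proportion}
```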
Introduction to the Physics of the Cosmos
Code: 44078
ECTS Credits: 6
Degree: 4313861 High Energy Physics, Astrophysics and Cosmology (Type OB, Year 0, Semester 1)
Use of Languages
Principal working language: English (eng)
Teachers: Jordi Isern Vilaboy, Enrique Gaztañaga Balbas, Francisco Javier Castander Serentill, Josep Maria Trigo Rodríguez, Oriol Pujolas Boix
Objectives and Contextualisation
The course is intended to provide students with a complete and thorough introductory course to Particle Physics, Astrophysics and Cosmology; they should be able to use such knowledge as a solid basis for the following more specialized courses. Since it is a transversal course for all students who choose the specific programs on High Energy Physics, Astrophysics and Cosmology, it provides basic knowledge on the alternative itinerary the student has not chosen. Finally, since students come from different academic backgrounds, this course tends to unify and balance out the students' academic skills and abilities.
• Continue the learning process, to a large extent autonomously
• Understand the basics in the main areas of high energy physics, astrophysics and cosmology
• Use acquired knowledge as a basis for originality in the application of ideas, often in a research context.
• Use mathematics to describe the physical world, select the appropriate equations, construct adequate models, interpret mathematical results and make critical comparisons with experimentation and observation.
Learning Outcomes
1. Understand the basics of astrophysics: coordinates, distances, magnitudes.
2. Understand the basics of astrophysics: structure and evolution of stars and galaxies.
3. Understand the basics of cosmology: distance ladder, expansion of the universe.
4. Understand the basics of cosmology: large scale structure.
5. Understand the basics of particle physics: cross sections, relativistic kinematics.
6. Understand the basics of particle physics: symmetries and interactions.
7.
Use group theory to understand the SU(2) and SU(3) symmetries in hadrons.
8. Use online, English bibliographic tools to get more detailed information about the content of the course.
Outline of the Course
General Introduction to Particle Physics
Mass, spin and Poincaré group
Relativistic kinematics
Interaction amplitudes and cross section
Discrete symmetries
Continuous symmetries
Hadrons and the Quark Model
General concepts of Astronomy
Structure and evolution of stars and planets
Structure and evolution of galaxies
Introduction to General Relativity
Introduction to Cosmology
Methodology: Theory lectures and exercises. Class-work and Homework.
Annotation: Within the schedule set by the centre or degree programme, 15 minutes of one class will be reserved for students to evaluate their lecturers and their courses or modules through surveys.
Activities (Title | Hours | ECTS | Learning Outcomes):
Type: Directed - Theory Lectures | 45 | 1.8 | 2, 1, 3, 5, 8
Type: Supervised - Study of theoretical foundations | 45 | 1.8 | 2, 1, 3, 5, 8
Type: Autonomous - Discussion, work groups, group exercises | 45 | 1.8 | 2, 1, 3, 5, 8
Assessment: One exam on High Energy Physics and on Astrophysics/Cosmology (fifty-fifty weighted), one homework on High Energy Physics, one homework on Astrophysics/Cosmology.
Assessment Activities (Title | Weighting | Hours | ECTS | Learning Outcomes):
Homework Astrophysics and Cosmology | 25% | 6 | 0.24 | 4, 2, 1, 3, 8
Homework on High Energy Physics | 25% | 6 | 0.24 | 6, 5, 8
Written exam (multiquestion test) | 50% | 3 | 0.12 | 4, 6, 2, 1, 3, 5, 7
Bibliography:
"Particle Physics" - Third Edition, B. R. Martin and G. Shaw, Wiley and Sons 2008
"Quantum Field Theory in a Nutshell", A. Zee, Princeton University Press 2003
"The Standard Model: A Primer", C. P. Burgess and G. D. Moore, CUP 2007
"An Introduction to Quantum Field Theory", M. E. Peskin and D. V. Schroeder, Addison-Wesley 1995
"An Introduction to Modern Astrophysics", D. A. Ostlie, B. W. Carroll, CUP 2017
"Introduction to Particle and Astroparticle Physics", A. de Angelis, M.
Pimenta, Springer 2018
"Physical Foundations of Cosmology", V. Mukhanov, CUP 2005
Outline of the Course
Part I
General concepts of Astronomy
Structure and evolution of stars and planets
Structure and evolution of galaxies
Introduction to General Relativity
Introduction to Cosmology
Part II
General Introduction to Particle Physics
Mass, spin and Poincaré group
Relativistic kinematics
Interaction amplitudes and cross section
Discrete symmetries
Continuous symmetries
Hadrons and the Quark Model
Project not showing/crashing when adding a simple line of code
24th July 2021, 16:37 #1
Join Date: Jul 2021

Hello all, I'm trying to make a project which includes a square matrix of objects, and for each entry I want to add a list of neighbours which are the 8 surrounding cells (cells on the edge of the matrix will connect to the other side, so each cell should have 8 neighbours). I have this piece of code:

Qt Code:
int i;
int j;

for (i = 0; i<NCOL; i++) {
    for (j = 0; j<NCOL; j++) {
        mat[i][j].neigh.push_back(mat[(i-1+NCOL)%NCOL][(j-1+NCOL)%NCOL]);
        //mat[i][j].neigh.push_back(mat[i][(j-1+NCOL)%NCOL]);
        mat[i][j].neigh.push_back(mat[(i+1)%NCOL][(j-1+NCOL)%NCOL]);
        mat[i][j].neigh.push_back(mat[(i-1+NCOL)%NCOL][j]);
        mat[i][j].neigh.push_back(mat[(i+1)%NCOL][j]);
        mat[i][j].neigh.push_back(mat[(i-1+NCOL)%NCOL][(j+1)%NCOL]);
        mat[i][j].neigh.push_back(mat[i][(j+1)%NCOL]);
        mat[i][j].neigh.push_back(mat[(i+1)%NCOL][(j+1)%NCOL]);
    }
}

Now, with this piece of code the project runs fine. However, when I uncomment the second line of the loop, the project won't run... Sometimes it crashes immediately, sometimes it says in the application output that the app is starting and it just stays that way until I force quit it. I have had both error 1 and error 3, and twice now my PC restarted while waiting for the main window to show up. I even had errors claiming I don't have permission. I really do not understand what is going on here. I realized that when trying to run the app with the uncommented line, Qt takes about 90% of my computer's memory, but I don't know if this is part of the problem, since a little line of code should not be such a problem. I would really appreciate it if anyone knows what is happening, as I cannot advance in my project until I solve this problem.

Last edited by d_stranz; 24th July 2021 at 17:14.
Reason: missing [code] tags

It is difficult to understand what you intend with this code. Apparently you have a two-dimensional array data structure (mat), and each of the elements of mat contains a QList / std list of mat. So think about what is happening each time through the loop. The first time, mat is empty. So for i = 0, j = 0, you are pushing onto mat[0][0]'s list eight empty copies of mat. The next time through, i = 0, j = 1, and you push eight more copies of mat onto mat[0][1], one of which (mat[0][0]) already contains eight copies of mat. I don't know how big NCOL is and it is too early in the morning for me to do the math, but by the end of the double loop, you have a gazillion copies of mat sitting around. None of them represent the actual state of mat, since each of the copies was made at a different time in the building of mat. The solution is to not make copies of mat. You don't need them. All you need to do to keep track of the neighbors of mat[i][j] is a list of pairs of [i][j] indexes of those neighbors:

Qt Code:
mat[i][j].neigh.push_back( std::make_pair( index_i, index_j ) );

where index_i and index_j are the indexes of whichever neighbor you are pushing.

<=== The Great Pumpkin says ===> Please use CODE tags when posting source code so it is more readable. Click "Go Advanced" and then the "#" icon to insert the tags. Paste your code between them.

I ran my code in VSC and after some seconds it returned the bad_alloc error, which means it was indeed an absurd number of objects. Each entry of my matrix is an object I called "Cell", and my goal was for each cell to have 8 other cells as neighbours. Since NCOL is 40, I thought I would have 40*8 objects in the neighbours lists. Seeing your explanation, I realize I had the math all wrong, and although I don't think it is as bad as you mentioned, it is still an exponential number of objects, which is not good.
Anyways, I followed your suggestion of making a list of pairs and the app is now running fine, so thank you.

Since NCOL is 40, I thought I would have 40*8 objects

Actually 40 * 40 * 8 if your math had been correct. But I am not sure if I can calculate the true number. So let's see:

For the first row:
mat[0][0] contains 8 copies of mat, none of which have copies of mat.
mat[0][1] contains 8 more copies, one of which (mat[0][0]) also has 8 copies.
mat[0][2] - mat[0][38] also each contain 8 copies, one of which (mat[0][n-1]) has 8 more.
mat[0][39] has 8 copies plus 16 more (mat[0][0] and mat[0][38]) (I think your grid wraps).

For the second row:
mat[1][0] has 8 copies, plus mat[0][0] and mat[0][1] for 16 more. But mat[0][1] also has mat[0][0] for 8 more.
mat[1][2] has 8, plus mat[0][0], mat[0][1] and mat[0][2] for 24 more. The last two have [0][n-1] for another 16.
Same for 3 - 38.
mat[1][39] has 8 plus 24 plus 16 plus the 32 from mat[1][0] because of wrapping.

And it just gets worse. By the time you get to [39][39] the total is somewhere near a gazillion. :-)

Glad the simpler solution worked for you.

24th July 2021, 17:33 #2
25th July 2021, 00:05 #3
Join Date: Jul 2021
25th July 2021, 16:49 #4
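For completeness, a self-contained sketch of the index-pair approach suggested above (NCOL and the wrap-around rule are taken from the thread; the helper name is mine):

```cpp
#include <utility>
#include <vector>

const int NCOL = 40;

// Build, for a given cell (i, j), the list of (row, col) indexes of its
// 8 wrap-around neighbours -- storing indexes instead of copies of the
// matrix itself, which is what caused the memory blow-up in the thread.
std::vector<std::pair<int, int>> neighbours(int i, int j) {
    std::vector<std::pair<int, int>> n;
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;       // skip the cell itself
            n.push_back({ (i + di + NCOL) % NCOL,   // wrap to the other side
                          (j + dj + NCOL) % NCOL });
        }
    return n;
}
```

Each cell's neighbour list then holds exactly 8 small index pairs, so the total storage is NCOL * NCOL * 8 pairs instead of recursively nested copies of the whole matrix.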
On the power of threshold circuits with small weights
ISIT 1991 conference paper

Linear threshold elements (LTEs) are the basic processing elements in artificial neural networks. An LTE computes a function that is the sign of a weighted sum of the input variables. The weights are arbitrary integers; in fact they can be very big integers, exponential in the number of input variables. However, in practice, it is very difficult to implement big weights. So the natural question one can ask is whether there is an efficient way to simulate a network of LTEs with big weights by a network of LTEs with small weights. We prove the following results: 1) every LTE with big weights can be simulated by a depth-3, polynomial-size network of LTEs with small weights; 2) every depth-d, polynomial-size network of LTEs with big weights can be simulated by a depth-(2d+1), polynomial-size network of LTEs with small weights. To prove these results, we use tools from harmonic analysis of Boolean functions. Our technique is quite general and provides insights into some other problems. For example, we were able to improve the best known results on the depth of a network of threshold elements that computes the COMPARISON, ADDITION and PRODUCT of two n-bit numbers, and the MAXIMUM and the SORTING of n n-bit numbers.
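As a rough illustration (not from the paper), a minimal LTE and the COMPARISON function it motivates can be sketched as follows; the weights and inputs are invented for the example:

```python
# A linear threshold element (LTE): output is determined by the sign of a
# weighted sum of the 0/1 input bits. Weights here are illustrative.

def lte(weights, threshold, x):
    """Return 1 if the weighted sum of inputs meets the threshold, else 0."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= threshold else 0

# COMPARISON of two n-bit numbers is a classic single LTE that needs
# exponentially large weights: sign(sum 2^i * (a_i - b_i)) tells if a >= b.
def compare(a_bits, b_bits):
    n = len(a_bits)
    weights = [2**i for i in range(n)] + [-(2**i) for i in range(n)]
    return lte(weights, 0, a_bits + b_bits)

# 5 (bits 1,0,1, least-significant first) vs 3 (bits 1,1,0)
print(compare([1, 0, 1], [1, 1, 0]))  # 1, since 5 >= 3
```

The weights 2^i in `compare` are exactly the kind of "big" (exponential) weights that the paper shows can be traded for small weights at the cost of a slightly deeper network.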
Bernoulli’s principle and gas flow in context of gas velocity
27 Aug 2024
Journal of Fluid Dynamics, Volume 12, Issue 3, 2022

This article delves into the fundamental principles governing gas flow, with a particular emphasis on Bernoulli’s principle. We examine the relationships between pressure, velocity, and energy in the context of compressible fluids, providing a comprehensive overview of the underlying physics.

Bernoulli’s principle, first proposed by Daniel Bernoulli in 1738, states that an increase in the velocity of a fluid (liquid or gas) results in a corresponding decrease in pressure. This fundamental concept has far-reaching implications for our understanding of fluid dynamics and is essential for the design and analysis of various engineering systems.

Mathematical Formulation

The relationship between pressure (P), density (ρ), and velocity (v) along a streamline can be expressed using Bernoulli’s equation:

P + 1/2 * ρ * v^2 = constant

where * denotes multiplication. This equation demonstrates that an increase in velocity is accompanied by a decrease in pressure, assuming the density of the fluid remains constant. Strictly speaking, this form of the equation applies to incompressible (low-Mach-number) flow with negligible elevation change.

Gas Flow and Velocity

In the context of gas flow, Bernoulli’s principle can be applied to understand the relationships between pressure, velocity, and energy. The kinetic energy per unit volume (KE) of a gas is:

KE = 1/2 * ρ * v^2

This equation highlights the direct relationship between velocity and kinetic energy.

Energy Conservation

The total energy (E) of a fluid is conserved, meaning that it remains constant throughout the flow process. This can be expressed using the following equation:

E = KE + PE

where PE represents potential energy. The conservation of energy principle ensures that any increase in velocity is accompanied by a corresponding decrease in pressure.
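As a worked illustration of the equation above (the numbers are illustrative, using a rough sea-level air density):

```python
# Bernoulli's equation along a streamline (incompressible, no elevation change):
#   P1 + 0.5*rho*v1**2 = P2 + 0.5*rho*v2**2
# Solving for the downstream pressure P2.

def pressure_downstream(p1, v1, v2, rho=1.225):
    """Pressure after the flow accelerates from v1 to v2 (SI units)."""
    return p1 + 0.5 * rho * (v1**2 - v2**2)

# Air accelerating from 10 m/s to 50 m/s at atmospheric pressure:
p2 = pressure_downstream(p1=101_325.0, v1=10.0, v2=50.0)
print(p2)  # 99855.0 -- pressure drops as velocity rises
```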
In conclusion, Bernoulli’s principle provides a fundamental understanding of the relationships between pressure, velocity, and energy in the context of gas flow. The mathematical formulations presented in this article demonstrate the direct relationships between these variables, highlighting the importance of considering velocity when analyzing fluid dynamics problems.
NAWRU II. There is no such thing as a NAWRU

This is the first of five posts I promised to write in this post. NAWRU stands for the Non Accelerating Wage inflation Rate of unemployment. It is a concept used by the European Commission when deciding how much to allow Treasuries adhering to the Stability and Growth Pact to spend. The Commission makes cyclical adjustments based on, among other things, unemployment minus the NAWRU. The Commission gives mere elected officials some slack if unemployment is above the NAWRU as estimated by Commission staff. Unemployment is never far above the estimated NAWRU. The estimated NAWRU has dramatically increased in countries whose unemployment rates have increased. Estimates for Spain vary from 21% to 25%. I think it is clear that there is something very wrong with the estimates. I think the approach has five fatal defects and should not be accepted as an area for further exploration, let alone a basis for dictates to member countries.

The first fatal defect of estimates of the NAWRU is that the hypothesis that there is such a thing has been rejected by the data. The concept survives only by changing the 1968 natural rate hypothesis into a natural rate model which is used without any assertion that it has testable implications which have not been rejected by the data. The NAWRU is a meaningful concept only if the acceleration of wage inflation is a function of the unemployment rate. This might or might not be true. The logic was that wage settlements are made aiming for a real wage, so expected price inflation is incorporated one for one into wage inflation. It is assumed that all recognize that the nominal wage doesn't matter, so there is no particular problem with cutting nominal wages when expected price inflation is negative. The opinions on this question of everyone who has ever had any role in negotiating wages were considered irrelevant.
The argument went on that people won't make forecasting mistakes with the same sign forever, so the coefficients of expected inflation on lagged inflation must add to one. Oh yes, it was assumed that expected inflation was a linear function of lagged inflation, because, uh, that makes the math easier. It was decided to cut out the middle periods and make the coefficient on once-lagged inflation one. Finally, somehow, lagged wage inflation took the place of lagged price inflation (I can't even imagine a bad argument for this step, but the Commission took it).

The concept requires that only the difference between nominal wage growth and expected inflation matters. This means that there is no downward nominal rigidity, that is, that there is nothing special about nominal wage increases of zero nor any difference between wage inflation near zero and far from zero. It also requires that expectations can not be anchored. Expectations which are sometimes anchored and sometimes not anchored are not a linear function of past outcomes. They are absolutely a feature of expectations elicited in experiments. When presented with random walks, people usually forecast mean reversion. However, a series of increases in a row causes them to forecast further increases (Barberis, Nicholas, Andrei Shleifer, and Robert Vishny, 1998, A model of investor sentiment, Journal of Financial Economics 49, 307–343). This is a robust result.

If there is downward nominal rigidity or expectations can be anchored, then there may be no well-defined NAWRU. This doesn't mean that it is impossible to calculate a number and call it the NAWRU. Rather it implies that there is a range of unemployment rates such that wage inflation does not accelerate. If that is the case, cyclical fluctuations of unemployment within that range will be incorrectly identified as fluctuations in the NAWRU.
I think it is clear that, for Italy, this range stretches at least from 8% to 13%, since wage inflation has remained roughly constant as unemployment rose from 8% to over 13%. Wage inflation didn't increase back when Italian unemployment was 8%, nor did it decrease after unemployment rose to over 13%. The simple fact is that, in the 21st century, there is almost exactly precisely zero correlation between the Italian unemployment rate and the change in Italian wage inflation. To calculate a NAWRU year after year with such data requires heroic data processing.

Here is a Phillips scatter of unemployment and wages for Italy. Data from . There is one observation per month from February 1980 through February 2015. Winf is the percent increase in LCWRIN01ITM661S, the "Hourly Wage Rate: Industry for Italy©: Seasonally adjusted", over the preceding year (so the series for Winf consists of overlapping 12-month intervals). Unem is LRHUTTTTITM156S, "Harmonized Unemployment: Total: All Persons for Italy©: Seasonally Adjusted", from January 1983 on, but is ITAURHARMMDSMEI, "Harmonized Unemployment Rate: All Persons for Italy©: Seasonally Adjusted", for 1980-1982. I have no idea why one series is available only after January 1983 or why the other is available only before August 2012 or how they differ (in the period when both are available, they are very similar but not identical).

I think it is obvious that the graph doesn't look as a Phillips curve should. Since January 2000, the unemployment rate has varied from 5.8% to 13.2%, yet wage inflation has varied only from 1.1% to 4.8%. 21st century changes in Italian wage inflation are dwarfed by the huge declines in the 1980s. According to the accelerationist Phillips curve, Italian wage inflation should have remained in double digits in the 80s and 90s or declined to well below zero by now.
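The construction of Winf and its acceleration can be sketched as follows; the series below are synthetic stand-ins for the FRED data named above, so the numbers are not the post's:

```python
import numpy as np

# Synthetic monthly wage index and unemployment rate standing in for the
# FRED series named in the text (LCWRIN01ITM661S and LRHUTTTTITM156S).
rng = np.random.default_rng(0)
months = 24 * 12
wage_index = 100 * np.cumprod(1 + rng.normal(0.002, 0.001, months))
unem = 8 + np.cumsum(rng.normal(0, 0.05, months))

# winf: percent increase over the preceding 12 months (overlapping windows)
winf = 100 * (wage_index[12:] / wage_index[:-12] - 1)
# awinf: acceleration of wage inflation, winf minus winf 12 months earlier
awinf = winf[12:] - winf[:-12]

# Correlation of unemployment with the acceleration of wage inflation;
# for the actual Italian data the post finds this is almost exactly zero.
corr = np.corrcoef(unem[24:], awinf)[0, 1]
print(round(corr, 3))
```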
It is possible to pick an arbitrary series of numbers and call them the highly variable NAWRU; that is, it is impossible to prove that no NAWRU exists (as it is impossible to prove a negative), but there is clearly no more evidence that Italy has a NAWRU than that it is haunted by ghosts.

Here is the graph for the 21st century. There is, perhaps, some hint of higher wage inflation at lower unemployment rates, but no clear acceleration. Nothing much seems to have happened to wage inflation as unemployment rose from around 8% to 13.2%.

The NAWRU refers to the acceleration of wage inflation. I consider the difference between wage inflation in one month and 12 months earlier (so it is an annual difference of an annual difference, and the data refer to overlapping 24-month intervals). This is an extremely impressively horizontal scatter. There is no sign of any association of unemployment and accelerating wage inflation at all.

Here is a regression (with uncorrected standard errors):

    . gen awinf = winf - winf[_n-12]
    . reg awinf unem if month>2000

      Number of obs = 181
      F(1, 179)     = 0.00
      Prob > F      = 1.0000
      R-squared     = 0.0000
      Adj R-squared = -0.0056

      awinf |     Coef.   Std. Err.      t
       unem |  3.30e-07    .0396278   0.00
      _cons |  .0270689    .3569347   0.08

So "almost exactly precisely zero correlation" means a correlation coefficient of 0.00 something and a regression coefficient of 0.00000033. The T-statistic is not correct; the standard errors should be corrected for the 24 periods of overlap. But I don't think that a T-statistic which is biased away from zero and equal to 0.00 really needs to be corrected. The almost exactly complete absence of any evidence of any effect of unemployment on the acceleration of wage inflation is extraordinary. It is extremely unlikely that two independent series would happen to have such low correlation.

This is the first regression I estimated with Italian data. Using the full sample I get a (statistically insignificantly) upward-sloping accelerationist Phillips curve.
    . reg awinf unem

      awinf |     Coef.   Std. Err.      t
       unem |  .0373827    .076396    0.49
      _cons | -.8178216    .7054678  -1.16

I just now think that maybe I should lead wage inflation acceleration (or lag unemployment):

    awinf2 = winf[_n+12] - winf
    . reg awinf2 unem if month>2000

     awinf2 |     Coef.   Std. Err.      t
       unem |  .0211333    .0482586   0.44
      _cons | -.1483054    .4191373  -0.35

That doesn't make much difference, does it? There is no hint in the Italian data that there is such a thing as a NAWRU. Those with firm faith can still believe in the NAWRU, but their faith receives no assistance at all from the data.

update: typos corrected thanks to Reason and Marco Fioramanti. An explanation was revised aiming for comprehensibility following advice from Marco Fioramanti.

2 comments:

reason said...
Robert, a couple of typos:
"The Commission gives mere elected officials some slack IF unemployment is above the NAWRU as estimated by Commission staff."
"This means that there is NO downward nominal rigidity"
Normally, I might ignore them, but these ones change the meaning.

Ken Houghton said...
"The opinions on this question of everyone who has ever had any role in negotiating wages were considered irrelevant."
A short, bittersweet summary of everything that has happened in economics since we were undergraduates. Since he was a year behind us, I blame Barack Obama.
volume: 46, issue: 1

Quadratic mean diameter is a widely used stand parameter present in the stand inventory summaries, while the top stand diameter is rarely reported in the literature, mainly in relation to dominant stand height. Since the dominant stand height is usually determined from the tree height-diameter curve of the stand, it is important how the top tree assemblage, used to estimate dominant diameter, is defined. The main objective of our study was to assess the bias between differently defined dominant diameter estimates for monospecific plantations of various species, to model the dominant diameter as a function of quadratic mean diameter and other relevant stand variables, and to estimate its goodness-of-fit in predicting dominant diameter and dominant height. We used data records gathered in sample plots in monospecific plantations of four tree species: Scots pine, Black pine, black locust and hybrid black poplar. We calculated the quadratic and arithmetic mean diameters of the 20% thickest trees in the plots, and the quadratic and arithmetic mean diameters of the trees, whose number corresponded to the 100 thickest trees per hectare. For each dataset, we analyzed the range and the distribution of the relative deviations calculated for each pair of dominant diameter estimates. For the Black pine plantations, regression models were developed for the two dominant diameter definitions, whose values differed most. Their goodness-of-fit was assessed from model efficiency and error statistics. The same model derivation procedure, applied to the Scots pine data, was followed by substitution of the predicted dominant diameter into a height-diameter model to assess the goodness-of-fit of the dominant height predictions. The differences between the arithmetic and quadratic means, estimated from the same subsample of trees, did not exceed 2% in all cases.
However, dominant stand diameters calculated as averages of differently defined largest tree collectives differed by as much as 35%. Regardless of its definition, the dominant stand diameter was adequately predicted by a function of the quadratic mean diameter alone or considering stand basal area as a second predictor. The models showed very good accuracy of model efficiency above 0.92, average absolute error below 8%, with 90% of the relative errors less than 15%. The predicted dominant diameter value can be used in a height-diameter model to estimate with confidence the dominant stand height of a monospecific forest plantation, allowing the forecast of the stand attributes based on dominant trees when only average stand variables are known.
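The mean diameters compared in the study can be computed directly from a tree list; a minimal sketch, with invented DBH values and an assumed plot area:

```python
import math

# Quadratic mean diameter (Dq) and two dominant-diameter definitions used in
# the study: mean of the 20% thickest trees, and mean of the trees whose count
# corresponds to the 100 thickest per hectare. All DBH values are invented.
dbh_cm = sorted([12.1, 14.3, 15.0, 16.8, 18.2, 19.5, 21.0, 23.4, 25.1, 27.9],
                reverse=True)
plot_area_ha = 0.05  # assumed plot size

def quad_mean(diams):
    """Quadratic mean: the diameter of the tree of mean basal area."""
    return math.sqrt(sum(d * d for d in diams) / len(diams))

dq = quad_mean(dbh_cm)

# Definition 1: the 20% thickest trees on the plot
n20 = max(1, round(0.2 * len(dbh_cm)))
ddom_20 = quad_mean(dbh_cm[:n20])

# Definition 2: trees corresponding to the 100 thickest per hectare
n100 = max(1, round(100 * plot_area_ha))
ddom_100 = quad_mean(dbh_cm[:n100])

print(round(dq, 1), round(ddom_20, 1), round(ddom_100, 1))  # 19.9 26.5 23.6
```

Note that the two dominant-diameter definitions give different values from the same tree list, which is the bias the study quantifies.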
Measurements of damage and repair of binary health attributes in aging mice and humans reveal that robustness and resilience decrease with age, operate over broad timescales, and are affected differently by interventions

As an organism ages, its health-state is determined by a balance between the processes of damage and repair. Measuring these processes requires longitudinal data. We extract damage and repair transition rates from repeated observations of binary health attributes in mice and humans to explore robustness and resilience, which respectively represent resisting or recovering from damage. We assess differences in robustness and resilience using changes in damage rates and repair rates of binary health attributes. We find a conserved decline with age in robustness and resilience in mice and humans, implying that both contribute to worsening aging health – as assessed by the frailty index (FI). A decline in robustness, however, has a greater effect than a decline in resilience on the accelerated increase of the FI with age, and a greater association with reduced survival. We also find that deficits are damaged and repaired over a wide range of timescales, ranging from the shortest measurement scales toward organismal lifetime timescales. We explore the effect of systemic interventions that have been shown to improve health, including the angiotensin-converting enzyme inhibitor enalapril and voluntary exercise for mice. We have also explored the correlations with household wealth for humans. We find that these interventions and factors affect both damage and repair rates, and hence robustness and resilience, in age- and sex-dependent manners.
The key contribution of this study is to evaluate the longitudinal change in frailty indices by tracking both the accumulation of damage and the repair of deficits (damage and repair transition rates), using sophisticated mathematical modeling and a translational approach that spans mice and humans. A second key achievement of this study is to evaluate the change in frailty indices and in damage and repair transitions under interventions that improve health in mice. Collectively this advances progress in translational geroscience by providing new insight regarding how we measure biological age, which can aid the assessment of aging-relevant interventions. The authors have provided extensive details that support the research frameworks presented in this report.

As organisms age, they can be described by health states that evolve according to dynamical processes of damage and repair. A health state is the net result of accumulated damage and subsequent repair (Howlett and Rockwood, 2013). Studies of aging have mostly focused on discrete health-states rather than the underlying continuous dynamic processes, due to the difficulty of their measurement. Two common approaches to measuring individual health-states, the Frailty Index (FI) (Mitnitski et al., 2001) and the Frailty Phenotype (Fried et al., 2001), are assembled from health state data at a specific age and do not separate dynamic damage and repair processes. Nevertheless, strong associations between frailty measures and adverse health outcomes (Hoogendijk et al., 2019; Howlett et al., 2021) indicate that frailty affects the underlying dynamical processes. This is supported by the increasing rate of net accumulation of health deficits with worsening health (Mitnitski et al., 2007; Kojima et al., 2019). Reduced resilience, or the decreasing ability to repair damage (or recover from stressors), is increasingly seen as a key manifestation of organismal aging (Ukraintseva et al., 2021; Kirkland et al., 2016; Hadley et al., 2017).
Resilience is often assessed by the ability to repair following an acute stressor, such as a heat/cold shock, viral infection, or anesthesia; or a non-specific stressor such as a change of the health state, typically within a short timeframe (Scheffer et al., 2018; Gijzel et al., 2019; Rector et al., 2021; Colón-Emeric et al., 2020; Pyrkov et al., 2021). Robustness, or an organism’s resistance to damage, has not been as well studied – but there is also evidence for its average decline with age (Arbeev et al., 2019; Kriete, 2013). Both resilience and robustness sustain organismal health during aging, but their relative importance and their timescales of action remain largely unexplored. While cellular and molecular damage and dysfunction are classic ‘hallmarks’ of aging (López-Otín et al., 2013), damage and dysfunction at organismal scales may exhibit distinct behavior (Gems and de Magalhães, 2021; Howlett and Rockwood, 2013). Indeed, from a complex systems perspective we may expect qualitatively distinct emergent phenomena at tissue or organismal scales (Cohen et al., 2022). However, a significant amount of organismal health data is discrete and cannot be approached with existing techniques used to study resilience or robustness. It is important both to study resilience and robustness at organismal scales and to be able to use discrete data while doing so. To simultaneously study both resilience and robustness during aging with binarized health-deficits, we have here developed a novel method of analysis that uses longitudinal data from mice and humans to obtain summary measures of organismal damage and repair processes over time. This approach can be adapted to use any discrete biomarker. We apply our method to study how resilience and robustness evolve with age and how they differ between species, between sexes, and under different health interventions. 
Our approach and results are limited to binarized health attributes; for our purposes damage and repair correspond to discrete transitions of these binarized attributes. Since our attributes are at the clinical or organismal scale of health, we do not consider cellular or molecular damage directly. Although our approach could be applied to binarized attributes at any organismal scale, we do not investigate whether our conclusions generalize to different sets of binarized attributes, nor do we consider continuous attributes.

Developing interventions to extend lifespan and healthspan is the goal of geroscience (Kennedy et al., 2014; Sierra, 2016; Sierra et al., 2021). While some interventions that affect aging health have been identified, how they differentially affect damage and repair, and their timescales of action, is less understood. We consider interventions in mice that have previously been shown to have a positive impact on frailty: the angiotensin converting enzyme (ACE) inhibitor enalapril (Keller et al., 2019) and voluntary exercise (Bisset et al., 2022). In humans, we stratify individuals within the English Longitudinal Study of Aging by net household wealth (Phelps et al., 2020; Steptoe et al., 2014). Wealth is a socioeconomic factor associated with aging health (Zimmer et al., 2021; Niederstrasser et al., 2019). Understanding how various interventions affect aging health by affecting resilience and robustness will better enable us to fulfill the geroscience agenda.

Measuring resilience and robustness with binarized data

A well-established approach to quantify health in both humans and animal models is to count binarized health deficits in an FI (Mitnitski et al., 2001; Whitehead et al., 2014). In longitudinal studies, the FI can be assessed at each follow-up. Here, we use longitudinal binarized health attribute data from mice and humans, the same data used to evaluate the FI, to also quantify organismal damage and repair processes over time.
As illustrated in the schematic in Figure 1, the change in the number of deficits from one follow-up to the next is determined by the number of new deficits (indicating damage, with deficit values transitioning from 0 to 1, red arrow) minus the number of repaired deficits that were previously in a damaged state (with transitions of deficit values from 1 to 0, green arrow). These counts of damaged and repaired deficits between follow-ups represent summary measures of the underlying damage and repair processes. We model this process with a Bayesian Poisson model for counts of damaged and repaired deficits, using age-dependent damage and repair rates. For mice we use a joint longitudinal-survival model, which couples the damage and repair rates together with mortality. For humans, we use a similar model but without the survival component, due to having no mortality data.

Figure 1: Extracting damage and repair from the longitudinal observation of binary health deficits.

In our approach, damage rates are the probability of acquiring a new deficit per unit of time, and repair rates are the probability of repairing a deficit per unit time. These are aggregate measures of susceptibility to damage (lack of robustness) and ability to repair (resilience). The FI is a whole organism-level summary measure of health; accordingly, these aggregate damage and repair rates are also whole organism-level measures of robustness and resilience. Note that since these rates are per available deficit, repair rates may exceed damage rates while the FI is still increasing. This can occur due to relatively rapid repair per deficit of a small number of deficits, with a slower damage rate per deficit of a much larger number of undamaged attributes. While damage of binarized health attributes with age necessarily follows from declining health, repair does not.
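The bookkeeping just described, counting 0 to 1 and 1 to 0 transitions between follow-ups, can be sketched on invented data; the deficit matrices and attribute count below are illustrative, not from the study:

```python
import numpy as np

# Two follow-ups of binarized health deficits (rows: individuals,
# columns: attributes; values invented for illustration).
t0 = np.array([[0, 1, 0, 0],
               [1, 0, 0, 1]])
t1 = np.array([[1, 0, 0, 0],
               [1, 1, 0, 1]])

# Damage: 0 -> 1 transitions; repair: 1 -> 0 transitions, per individual.
damage = ((t0 == 0) & (t1 == 1)).sum(axis=1)
repair = ((t0 == 1) & (t1 == 0)).sum(axis=1)

# The FI at each follow-up is the fraction of deficits present; its change
# between follow-ups equals (damage - repair) / number of attributes.
fi0, fi1 = t0.mean(axis=1), t1.mean(axis=1)
assert np.allclose(fi1 - fi0, (damage - repair) / t0.shape[1])
print(damage, repair)  # damage: [1 1], repair: [1 0]
```

These per-interval counts are what the Bayesian Poisson model described above treats as observations, with age-dependent damage and repair rates.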
However, almost all health attributes used in our mouse data have been previously shown to reverse either spontaneously or through extrinsic interventions such as drug treatments or lifestyle changes – see Supplementary file 1, with references. In Figure 5—figure supplement 4 we also show repair counts per deficit type. Nevertheless, for our data, not all deficits repair equally, and some rarely or never repair (‘Cataracts’, ‘Tumours’, ‘diarrhea’, and ‘vaginal/uterine/penile …’).

Both resilience and robustness decline in aging populations

We first establish the trends of repair and damage rates in aging. In Figure 2, we plot the age-dependence of the repair and damage processes in mice and humans, for three mouse datasets: (a) dataset 1 (Keller et al., 2019), (b) dataset 2 (Bisset et al., 2022), and (c) dataset 3 (Schultz et al., 2020), and for (d) humans from the ELSA dataset (Phelps et al., 2020; Steptoe et al., 2014). Humans are plotted by decade of baseline age at entry to the study to separate out recruitment effects. Points are binned averages from the data, and lines are posterior samples from the model of the rates. Posterior predictive checks show good model quality, seen in Figure 2—figure supplement 1 for mice (a-c) and humans (d).

Figure 2 (with 2 supplements): Repair rates decrease and damage rates increase with age.

In each of these datasets, there is a strong decrease in repair rates and increase in damage rates with age (except for damage rates in mouse dataset 2). Spearman rank correlations ρ for each plot are also shown in Figure 2, highlighting the increase or decrease in rates with age, and 95% posterior credible intervals of these correlations are shown in brackets. Overall, we observe decreasing repair rates and increasing damage rates with age, which signify decreasing resilience and robustness with age in both mice and humans. Decreasing repair and increasing damage both contribute to an increasing FI with age in mice and humans (shown in Figure 2—figure supplement 2a–d).
We also observe higher FI scores in females versus males in both mice and humans, as reported previously (Kane et al., 2019; Gordon and Hubbard, 2020; Kane and Howlett, 2021). We evaluate the contributions of damage and repair rates to survival using a joint longitudinal-survival model in mice. In Figure 2e–g, we show that damage rates have much larger hazard ratios for death than repair rates. These hazard ratios are for a fixed FI, itself a strong predictor of mortality in mice and in people (Rockwood et al., 2017), which shows that an increasing susceptibility to damage leads to larger decreases in survival than a comparable decline in resilience. Individuals survive longer when damage is avoided altogether, as compared to damage that is subsequently repaired. This intuitive result indicates that there may be lingering effects of the original damage and suggests that interventions that focus on robustness may be more effective than those that focus on resilience.

The acceleration of damage accumulation is determined by a decline in robustness

The plots of FI vs. age shown in Figure 2—figure supplement 2 (see also Mitnitski et al., 2001; Mitnitski et al., 2005; Mitnitski et al., 2012; Mitnitski et al., 2013) have a positive curvature, accelerating upwards near death (Stolz et al., 2021). This positive curvature is also seen in other summary measures such as Physiological Dysregulation (Arbeev et al., 2019). However, the origin of this curvature is unknown – whether it is due to a late-life decrease in resilience or a decline in robustness. We measure the curvature of the FI with the second time-derivative, which can be computed with the age-slopes of the damage and repair rates (see Materials and methods). In Figure 3, we show the separate contributions to this curvature, separated into terms involving damage (pink) and terms involving repair (green).
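As a sketch of how such a decomposition can arise (a simplifying assumption of ours, not necessarily the paper's exact model): if the mean FI, F(t), evolves as a balance of a per-attribute damage rate and a per-deficit repair rate, differentiating twice splits the curvature into damage and repair terms, each involving a rate and its age-slope:

```latex
% Assumed dynamics: damage rate \lambda_d per undamaged attribute,
% repair rate \lambda_r per deficit.
\frac{dF}{dt} = \lambda_d (1 - F) - \lambda_r F
% Differentiating once more:
\frac{d^2F}{dt^2} =
  \underbrace{\dot{\lambda}_d (1 - F) - \lambda_d \frac{dF}{dt}}_{\text{damage terms}}
  \; - \;
  \underbrace{\dot{\lambda}_r F + \lambda_r \frac{dF}{dt}}_{\text{repair terms}}
```

Under this sketch, a positive curvature can be driven either by damage rates that rise with age (the damage terms) or by repair rates that fall with age (the repair terms), which is exactly the comparison made in Figure 3.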
Summing these terms, we observe the typical positive curvature that indicates an acceleration of damage accumulation.

Figure 3: Frailty Index curvature is dominated by declining robustness.

We find that the decline in robustness (indicated by the damage rate terms) has the strongest effect on the curvature of the FI. In mice, this is seen in every dataset and is significant at the indicated ages (Figure 3b–d), and for humans at older ages (Figure 3e). This observed effect indicates that it is the increase in damage with age, rather than the decline of repair, that causes this acceleration of net damage accumulation. The significance of this effect is evaluated by computing the posterior distribution of the difference between damage and repair terms; when at least 95% of this distribution is above/below zero, we take the effect to be significant. Credible intervals visualizing the difference are shown in Figure 3—figure supplement 1. In Figure 2e–g we showed that the decline in robustness has the strongest effect on survival in mice. Together with the results shown in Figure 3, these findings highlight the important role of declining robustness in aging.

Interventions modify damage and repair rates in mice, and wealth correlates with rates in humans

Mouse datasets 1 (Keller et al., 2019) and 2 (Bisset et al., 2022) have additional intervention groups treated with either the ACE inhibitor enalapril or voluntary aerobic exercise, respectively. In Figure 4—figure supplement 1a and b, we show that these interventions target both repair and damage processes, resulting in lower FI damage accumulation over time for the treated groups. In Figure 4a and b, we investigate the effects of these interventions on the curvature of the FI. This curvature is strongly reduced by exercise in mouse dataset 2, with a weaker effect for enalapril; the credible intervals of the intervention effects are shown in Figure 3—figure supplement 1d and e.
Notably, exercise stops the acceleration in damage accumulation in both male and female mice by reducing the curvature to zero.

Figure 4: Interventions both increase resilience and decrease damage.

The effect of these interventions on the repair and damage rates is seen in Figure 4c and d, where 95% credible intervals for the age-slopes show the rate of increase or decrease of the repair and damage rates as age increases. These slopes include both the change in the rate with age and the effect due to increasing FI with age. Interventions affect the rate of change of both repair and damage rates with time, resulting in less cumulative damage. As shown in Figure 4c, enalapril attenuates the rate of decrease of repair rates in both male and female mice, resulting in age-slopes closer to zero than for controls. Significance is evaluated by computing the posterior distribution of the difference between control and intervention, and is indicated with asterisks (*) when at least 95% of the distribution is above/below zero. In Figure 4—figure supplement 1 we show a significant reduction in damage rate (but not slope) for male and female mice with enalapril. A sex-specific effect is seen for voluntary exercise. For female mice, voluntary exercise stops the decline in repair rates (yielding an approximately zero slope), whereas for male mice it only attenuates the decline (Figure 4d). For damage rates, female mice exhibit an attenuation of the rise with age, whereas in male mice exercise stops the age-dependent rise exhibited by control mice. For humans, we use net household wealth as a socioeconomic environmental factor that serves as a proxy for medical and behavioural interventions that are not individually tracked. This factor is not an intervention as in mice, and is simply correlational.
As such, we report correlations of wealth with repair and damage rates with age, rather than age-slopes after a specific intervention is initiated. In Figure 4—figure supplement 2, we show rates stratified by terciles of net household wealth, where the lowest tercile exhibits lower repair rates and higher damage rates at younger ages. Correspondingly, the FI is lower for individuals with a higher net household wealth. Treating the wealth variable as continuous, Figure 4e shows that repair rates are positively correlated with net household wealth, while damage rates are negatively correlated, with significant and stronger effects at younger ages. These results reinforce the findings in mice, where interventions impact both damage and repair rates. In humans, we also see some evidence of decreasing effects of wealth with age, although these may be confounded by recruitment effects depending on baseline age.

Damage and repair have broad timescales

In the results above, we considered the average damage and repair transition rates vs. age. Since individual deficits undergo stochastic transitions between damaged and repaired states, we can also measure the lifetime of these individual deficit states (see Figure 5a). These lifetimes are interval-censored (transitions typically occur between observation times) and can be right-censored (death or drop-out before a transition occurs). We use an interval-censored analogue of the standard Kaplan-Meier estimator for right-censored data (see Materials and methods) to estimate state-survival curves of individual damaged or repaired states. These state-survival curves in Figure 5, considering all possible deficits, represent the probability of a deficit remaining undamaged vs. time since a repair transition, or remaining damaged vs. time since a damage transition.

Figure 5: Resilience and robustness occur over both short and long time-scales in both mice and humans.
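For intuition about state-survival curves, here is a minimal Kaplan-Meier sketch for right-censored deficit-state lifetimes; the analysis itself uses an interval-censored analogue (see Materials and methods), which this simplified version does not implement:

```python
import numpy as np

def kaplan_meier(times, observed):
    """Right-censored Kaplan-Meier sketch for deficit-state lifetimes.

    times: lifetime (or censoring time) of each damaged/repaired state;
    observed: True if the transition out of the state was observed,
    False if the state was right-censored (death or drop-out).
    Returns (event_times, survival_probabilities)."""
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    event_times = np.unique(times[observed])
    S, surv = 1.0, []
    for t in event_times:
        at_risk = np.sum(times >= t)             # states still in play at t
        events = np.sum((times == t) & observed) # observed transitions at t
        S *= 1.0 - events / at_risk
        surv.append(S)
    return event_times, np.array(surv)
```

With interval censoring, the event times themselves are only known up to an interval, which is why the paper uses a generalized (Turnbull-type) estimator instead.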
We generally observe a significant drop of state-survival probability at early times, indicating some rapid state transitions at or below the interval between measurements. However, all the curves also extend to very long times, towards the scale of the organismal lifetime, indicating that both robustness and resilience operate over a broad range of timescales. These results highlight that repair can occur a long time after damage originally occurred. Note that the timescale of robustness as measured here is not robustness after a specific extrinsic stressor, but robustness to the implicit stressors of aging. A similar form of non-specific robustness has been measured in a previous study using the onset age of disease (Arbeev et al., 2019). As shown by the exponential time-scales of resilience and robustness for individual deficits in Figure 5—figure supplement 1 and Figure 5—figure supplement 2, mouse and human deficits exhibit a variety of time-scales of resilience and robustness. Some deficits repair soon after damage (or damage soon after repair), and some repair (or damage) over a broad range of time-scales. The combination of all of these deficits results in the shape of the state-survival curves in Figure 5. We evaluate the significance of the difference between state-survival curves with a generalized log-rank test (Zhao et al., 2008; Zhao, 2012). For the interventions studied, there are no dramatic changes of resilience or robustness timescales in mice. Exercise slightly shifted the timescale of resilience in male mice, such that deficits were repaired faster in exercised mice than in controls. We expect that we would have observed stronger effects of the interventions on these time-scales if we had sufficient data to resolve the impact of the time at which the initial damage or repair event occurred; here, we have grouped all times together.
For humans (see Figure 5e), we see strong and significant effects of household wealth on resilience and robustness timescales in females, but not males. These effects are particularly strong for damage timescales, which characterize robustness: states remain healthy longer at higher wealth terciles.

Discussion

We have presented a new approach for the assessment of damage (robustness) and repair (resilience) rates in longitudinal aging studies with binarized health attributes. With this approach, we have shown that both humans and mice exhibit increasing damage and decreasing repair rates with age, corresponding to decreasing robustness and resilience, respectively. We also demonstrate that decreasing robustness and resilience with age contribute to the acceleration of deficit accumulation. Decreasing robustness has approximately twice as large an effect as declining resilience; decreasing robustness also has a stronger and significant effect on survival. While much of the focus in previous work has been on the decline of resilience in aging (Ukraintseva et al., 2021), our results indicate that decreasing robustness and decreasing resilience are both important processes underlying the increasing accumulation of health-related deficits with age, and the increasing rate of accumulation at older ages. In the current study, the observed damage is assumed to occur due to natural processes, rather than a specific applied stressor (Kirkland et al., 2016; Colón-Emeric et al., 2020). Resilience measured by the observed repair also occurs without targeted interventions (certainly in mice, given their lack of health care), and so is likely to represent intrinsic resilience with respect to spontaneous damage from the natural stressors of aging.
Our approach has some similarities with recent approaches to measuring resilience via the autocorrelation timescale of intrinsic variations of continuous physiological state variables (Pyrkov et al., 2021; Gijzel et al., 2017; Rector et al., 2021). An advantage of our approach, which uses binarized variables, is that we can estimate both resilience and robustness using similar methods on the same data, so we can compare their relative effects. Previous work has modeled the change in the total count of discrete deficits with age (Mitnitski et al., 2006; Mitnitski et al., 2007; Mitnitski et al., 2010; Mitnitski et al., 2012; Mitnitski et al., 2014), but did not separately measure damage and repair. With our approach, we observe decreasing resilience and robustness with age in both mice and humans. There are caveats to our approach. We may miss fast damage and repair dynamics that occur on time-scales shorter than the separation between observed time-points; for example, we cannot observe daily or weekly changes in deficit states in mice, or monthly changes in humans. Therefore, our measurements of damage and repair can only be interpreted as the net damage and net repair between observed time-points. Furthermore, since we have defined damage and repair (and robustness and resilience) as average rates with respect to binarized attributes, it remains an open question how they relate to damage and repair rates assessed from continuous health attributes. Our approach results in summary measures of damage and repair rates. We are not aware of any selection bias in our mouse studies, and we applied joint modelling to mitigate survivor-bias effects. To mitigate selection bias in the human data, we treated onset ages distinctly. There are nevertheless well-known 'healthy volunteer' effects that would bias the original population, which we did not consider. Furthermore, we did not have human mortality data, so we could not treat human survivor-bias effects.
Measurement errors could also contribute to both damage and repair rates, although presumably not in an age-dependent fashion; in contrast, we observe repair rates that decrease and damage rates that increase with age. Errors in deficit assessment are known to be small for mice (Feridooni et al., 2017; Kane et al., 2017). Supporting this, a sensitivity analysis pruning putative mouse measurement errors, shown in Figure 5—figure supplement 3, finds no qualitative changes. We find that both damage and repair processes are targeted by interventions in mice. As a result, developing interventions to target either damage or repair separately is conceivable. While targeting either would affect net deficit accumulation, we found that the damage rate has a stronger effect on both mortality and the acceleration of damage accumulation than the repair rate. Consistent with this, recent work has shown that FI damage is also more associated with mortality than FI repair in humans (Shi et al., 2021). We predict that interventions that facilitate robustness (resistance to damage) may be more important at older ages, where damage accumulation normally accelerates. More broadly, rather than just targeting deficit accumulation or the FI (Howlett et al., 2021), our results indicate that interventions could be improved by targeting an appropriate balance of damage and repair processes, in an age- and sex-dependent manner. Since both damage and repair occur on long timescales, these rates could conceivably be manipulated by interventions over a similarly broad range of timescales, from the shortest times to organismal lifetimes. How to optimally deploy available interventions is not yet clear. The effects of age on both damage and repair, in mice and humans, are qualitatively similar in male and female populations. Nevertheless, we have found that systemic interventions can have qualitatively distinct sex effects in mice.
The ACE inhibitor enalapril has stronger effects in female mice. Voluntary exercise stopped the decline in repair rate with age for female mice, but not male mice, and stopped the increase in damage rate with age for male mice, but not female mice. These differences suggest that assessing both damage and repair rates, together with accumulated damage as an FI, in interventional aging studies can provide a clearer assessment of sex differences. Further studies are needed to tease out the sex-dependent effects of other aging interventions, and to provide quantitative insight into the mortality-morbidity paradox, where females live longer but have higher FI scores than males (Kane and Howlett, 2021; Oksuzyan et al., 2008). Summary measures of health such as the FI exhibit an accelerating accumulation of health deficits with age (Mitnitski et al., 2001; Mitnitski et al., 2005; Mitnitski et al., 2012; Mitnitski et al., 2013). This universally observed behavior must be reflected in either increasing damage rates with age, decreasing repair rates, or, as we find, both. However, the question of whether, and by what mechanisms, damage and repair processes are coupled during aging remains unanswered. Both damage and repair rates have typically been modelled as functions of the health state in descriptive models of aging (Taneja et al., 2016; Farrell et al., 2016; Farrell et al., 2018), but without a mechanistic relationship between them apart from that imposed statistically by the observed accumulated damage. The precise relationships, and whether they are a universal feature of all aging organisms, remain to be determined. Studies of interventions should prove useful in this regard, because they can separately target damage and repair. Our observations that repair timescales are broadly distributed, up to lifespan-scales, raise three fundamental questions for resilience studies.
First, are interventions that facilitate recovery similarly effective across a broad range of timescales? If so, we may be able to target resilience with interventions over a longer timeframe than just acutely when damage occurs. Damage propagation may nevertheless limit the benefits of such late repair. Second, what determines the recovery timescales? As we have shown (Figure 5—figure supplement 1), different health attributes can have quite different recovery times. Third, would a similar broad range of resilience timescales be observed in challenge experiments with an induced stressor, and how might that depend on the magnitude and scale of the damage? We have defined damage in terms of discrete transitions of dichotomized variables. It is possible that dichotomized deficits probe qualitatively different timescales than the continuous measures that are often considered in resilience studies. Future experimental resilience studies across a range of health attributes should explore longer timescales. It will also be important to assess how the broad range of recovery timescales we have uncovered compares to the timescales extracted from auto-correlations of physiological state variables, which have also been limited to shorter times (Pyrkov et al., 2021). We have also limited our study to 'clinical' variables at organismal scales. Further studies of resilience and robustness at different biological scales, from cellular to organismal, with both continuous and discrete variables, and over organismal timescales, will help us to better understand how damage and repair at cellular scales influence and are influenced by similar processes at organismal scales. It is easier to conceive of how damage can propagate from cellular to organismal scales (Howlett and Rockwood, 2013), but harder to conceptualize how cellular repair processes such as DNA repair pathways and autophagy (Kirkwood, 2011) might similarly propagate.
Most of the mouse health deficits analyzed here have previously been shown to reverse either spontaneously or in response to interventions including drug treatment or exercise (Supplementary file 1). There are only a few deficits that rarely or never repaired in the current study: cataracts, tumours, diarrhea, and prolapses. Both this study and the literature suggest that most deficits that make up the mouse frailty index can be reversed or repaired. Investigation of the specific mechanisms responsible for the spontaneous repair of deficits, and how they scale from the cell to the organism, should be the focus of future work. We speculate that spontaneous repair occurs by the same pathways that are targeted by health interventions. It is likely that deficits reverse if interventions target one or more of the molecular/cellular pillars of aging, including macromolecular damage, dysregulated stress response, disruption in proteostasis, metabolic dysregulation, epigenetic drift, inflammaging, and stem cell exhaustion (Goh et al., 2022). In terms of the specific interventions investigated here, previous studies have shown that beneficial effects of enalapril treatment and exercise on frailty are attributable, at least in part, to effects on chronic inflammation (Bisset et al., 2022; Keller et al., 2019). The increasing availability of longitudinal health data over the lifespan of model aging organisms facilitates the analysis of damage and repair rates, and how they extend and change over the organismal lifespan. These damage and repair rates underlie the accumulation of damage that describes aging. Here we have shown the value of considering both resilience and robustness over the lifespan. Further studies will be able to determine how widespread organismal and sex differences in these effects are, and how universal they may prove to be. Studies of the effects on damage and repair rates of both targeted and systemic interventions will also be crucial. 
We have studied only three interventions or conditions so far (enalapril and exercise in mice, and wealth in humans). There are many other possibilities, including treatment with geroprotectors (Gonzalez-Freire et al., 2020) and lifestyle interventions, that could be deployed both in humans and in aging animal models.

Materials and methods

For the mouse portion of this manuscript, published data on longitudinal health-related deficits in C57BL/6 mice from three papers were used (Keller et al., 2019; Bisset et al., 2022; Schultz et al., 2020). A brief summary of the methods of each paper is below.

Mouse dataset 1 (Keller et al., 2019): Male and female C57BL/6 mice were assessed for deficits approximately every 4 weeks from 16 months of age to either 21 months of age (females) or 25 months of age (males). Mice were fed either a diet containing enalapril (280 mg/kg) or a control diet for the duration of the experiment. After pre-processing (below), this data contains 21 female control mice, 25 female enalapril mice, 13 male control mice, and 25 male enalapril mice.

Mouse dataset 2 (Bisset et al., 2022): Male and female C57BL/6 mice were assessed for deficits approximately every 2 weeks from 21 to 25 months of age. Mice were all singly housed, and half were provided a running wheel for voluntary exercise. After pre-processing (below), this data contains 11 female control mice, 11 female exercise mice, 6 male control mice, and 6 male exercise mice.

Mouse dataset 3 (Schultz et al., 2020): Male C57BL/6Nia mice were assessed for deficits approximately every 6 weeks from 21 months of age until their natural deaths. After pre-processing (below), this data contains 44 male control mice.

Mouse clinical frailty index assessment

Each of the papers above assessed health deficits using the mouse clinical frailty index as described previously (Whitehead et al., 2014). Briefly, this assessment involves scoring 31 non-invasive health-related measures in mice. Most measures are scored as 1 for a severe deficit, 0.5 for a moderate deficit, and 0 for no deficit.
Deficits in body weight and temperature were scored based on deviation from reference values in young adult animals, such that a difference of less than 1 SD was scored 0, a difference of ±1 SD was scored 0.25, a difference of ±2 SD was scored 0.5, a difference of ±3 SD was scored 0.75, and a difference of more than 3 SD received the maximal deficit value of 1 (Whitehead et al., 2014). The deficits of malocclusions and body temperature were not assessed in mouse group 3 (Schultz et al., 2020), leaving only 29 deficits for this dataset. The variables in the Frailty Index are: 'Alopecia', 'Fur color loss', 'Dermatitis', 'Coat condition', 'Loss of whiskers', 'Kyphosis', 'Distended abdomen', 'Vestibular disturbance', 'Cataracts/corneal opacity', 'Eye discharge/swelling', 'Microphthalmia', 'Malocclusions', 'Rectal prolapse', 'Penile prolapse', 'Mouse grimace scale', 'Piloerection', 'Tail stiffening', 'Gait', 'Grip strength', 'Body condition', 'Hearing loss', 'Vision loss', 'Menace reflex', 'Tremor', 'Tumors/growths', 'Nasal discharge', 'Diarrhoea', 'Breathing rate/depth', 'Body temperature', and 'Body weight'. The FI reference sheet at https://github.com/SinclairLab/frailty shows examples of mice corresponding to the different levels of the deficits.

Mouse data pre-processing

For mouse dataset 1, we impute missing deficit values by propagating the last observed value forward. If the first observed deficit is missing, it is imputed by propagating the first observed value backward. Less than 1% of all deficit values are missing in this dataset. No values in the other datasets are missing. Deficits are scored on a fractional scale, with deficit $i$ having values $d_i \in \{0, 0.25, 0.5, 0.75, 1\}$. To treat these as binary, we represent each fractional deficit $d_i$ by 4 ordered binary deficits, $[d_i^{(1)}, d_i^{(2)}, d_i^{(3)}, d_i^{(4)}]$. Fractional deficits are then represented by setting $4 \times d_i$ of these ordered binary deficits to 1. For example, if $d_i = 0.75$, this is represented as $[1, 1, 1, 0]$.
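The fractional-to-binary encoding just described can be sketched as follows (the helper names are hypothetical, not from the original code):

```python
def encode_fractional_deficit(d):
    """Represent a fractional deficit d in {0, 0.25, 0.5, 0.75, 1}
    as 4 ordered binary deficits, with 4*d of them set to 1."""
    k = round(4 * d)
    return [1] * k + [0] * (4 - k)

def decode_binary_deficits(bits):
    """Recover the fractional score; the encoding preserves the FI
    exactly, since sum(bits)/4 equals the original d."""
    return sum(bits) / 4
```

For example, `encode_fractional_deficit(0.75)` gives `[1, 1, 1, 0]`, and a single 0-to-1 or 1-to-0 transition on the binary scale corresponds to a step of 0.25 on the fractional scale.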
A new Frailty Index is then created by taking all of these new binary deficits, representing a $4 \times 31 = 124$ item Frailty Index ($4 \times 29 = 116$ for mouse dataset 3). This process preserves the FI scores, and a single repair or damage transition on this scale can be interpreted as a step of size 0.25 on the fractional deficit scale. Each step of a deficit originally on the $\{0, 0.5, 1\}$ scale corresponds to 2 steps of size 0.25 on this new scale. Measurement times with abnormally short or long intervals are removed: in mouse dataset 2, measurement times less than 0.1 months from the previous time are removed; in mouse dataset 3, measurement times more than 2 months from the previous time are removed. In each dataset, mice with fewer than 2 observed time-points are removed.

Human data and pre-processing

We use human data from the English Longitudinal Study of Aging (Phelps et al., 2020; Steptoe et al., 2014). We select individuals that have full data for net household wealth, activities of daily living (ADLs), and instrumental activities of daily living (IADLs). A Frailty Index is created from the count of 10 possible ADLs and 13 possible IADLs, giving a fraction out of 23. Each of these variables is binary, with values $\{0, 1\}$. The ADLs are 'Have difficulty': 'Walking 100 yards', 'Sitting for about two hours', 'Getting up from a chair after sitting for long periods', 'Climbing several flights of stairs without resting', 'Climbing one flight of stairs without resting', 'Stooping, kneeling, or crouching', 'Reaching or extending arms above shoulder level', 'Pulling/pushing large objects like a living room chair', 'Lifting/carrying over 10 lbs, like a heavy bag of groceries', and 'Picking up a 5 p coin from a table'.
The IADLs are 'Have difficulty': 'Dressing, including putting on shoes and socks', 'Walking across a room', 'Bathing or showering', 'Eating, such as cutting up your food', 'Getting in or out of bed', 'Using the toilet, including getting up or down', 'Using a map to get around a strange place', 'Preparing a hot meal', 'Shopping for groceries', 'Making telephone calls', 'Taking medications', 'Doing work around the house or garden', and 'Managing money, e.g. paying bills and keeping track of expenses'. We restrict individuals to those that were recruited to the study between the ages of 50 and 89 years. We drop individuals with follow-up time intervals above 4 years and individuals with fewer than 6 follow-ups. This removes 15399 individuals from the dataset; of these, 4326 had only a single time-point, 2291 had 2 time-points, and 2095 had 3 time-points. The final selected individuals are followed for between 13 and 18 years, with 60% of the individuals being followed for 16 years. We use net household wealth, as determined in the financial assessment in wave 5 of the ELSA data. We drop individuals that have parts of this assessment imputed. The raw value of net household wealth spans several orders of magnitude (and includes negative values for individuals in debt), and so is transformed by $w = \log(w_{\mathrm{raw}} + \mathrm{mean}(w_{\mathrm{raw}}))$. After pre-processing, this data contains 1049 males and 1300 females, with time-intervals of approximately 2 years between observations. There are 1222 individuals with baseline ages in $[50, 60]$, 827 with baseline ages in $[60, 70]$, 281 with baseline ages in $[70, 80]$, and 19 with baseline ages in $[80, 90]$.

Extracting damage and repair counts

In each dataset, we observe the state of $N$ binary health deficits $\{d_{jk}\}_{k=1}^{N}$ for each subject at a set of observation times $\{t_j\}_{j=1}^{J}$. Summing the number of deficits at each time, we get deficit counts for each observation time, $\{n_j\}_{j=1}^{J}$, which are used to compute the Frailty Index $f_j = n_j / N$.
We compute the number of deficits damaged ($0 \to 1$ transitions) and repaired ($1 \to 0$ transitions) between two time points $t_j$ and $t_{j+1}$, denoted as $n_d(t_j)$ and $n_r(t_j)$. These counts satisfy $n(t_{j+1}) = n(t_j) + n_d(t_j) - n_r(t_j)$, linking these damage and repair processes with the Frailty Index. We model deficit repair and damage as Poisson point processes with time-dependent rates, $\lambda_r(t)$ and $\lambda_d(t)$. The count of deficits repaired or damaged in an interval $[t_1, t_2]$ is assumed to follow a Poisson distribution, with mean equal to the instantaneous rate $\lambda_r(t)$ or $\lambda_d(t)$ integrated over this interval, weighted by the number of deficits available to be repaired, $\Lambda_r(t) = \int \lambda_r(t) \, n_t \, dt$, or damaged, $\Lambda_d(t) = \int \lambda_d(t) \, (N - n_t) \, dt$. For computational convenience, we use constant rates within each time-interval to approximate these integrals, $\Lambda_r(t) \approx \lambda_r(t) \, n_t \, \Delta t$ and $\Lambda_d(t) \approx \lambda_d(t) \, (N - n_t) \, \Delta t$. We perform Bayesian inference on our models by inferring the posterior distribution of the parameters given the observed data.

Joint longitudinal-survival model for mice data

We use a joint modelling framework to model repair and damage rates, while assessing their effect on survival.
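Extracting these transition counts from a subject's binary deficit matrix can be sketched as follows (an illustrative helper, not the study code):

```python
import numpy as np

def transition_counts(states):
    """Count damage (0 -> 1) and repair (1 -> 0) transitions between
    consecutive observation times.

    states: (J, N) array of binary deficit states for one subject,
            J observation times by N deficits.
    Returns (n_damage, n_repair), each of length J - 1."""
    states = np.asarray(states)
    diff = np.diff(states, axis=0)        # +1 marks damage, -1 marks repair
    n_damage = (diff == 1).sum(axis=1)
    n_repair = (diff == -1).sum(axis=1)
    return n_damage, n_repair
```

By construction, the counts satisfy the bookkeeping identity $n(t_{j+1}) = n(t_j) + n_d(t_j) - n_r(t_j)$ for each interval.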
We decompose the multivariate joint distribution of the observed longitudinal damage and repair counts and the survival outcome by coupling survival with the repair and damage rates $\lambda_i^r(t)$ and $\lambda_i^d(t)$ (Hickey et al., 2016; Brilleman et al., 2020),

$p(T_i, c_i, \{n_i^r(t)\}, \{n_i^d(t)\} \mid \lambda_i^r(t), \lambda_i^d(t)) = p(T_i, c_i \mid \lambda_i^r(t), \lambda_i^d(t)) \, p(\{n_i^r(t)\}, \{n_i^d(t)\} \mid \lambda_i^r(t), \lambda_i^d(t)).$

We indicate the final follow-up time for individual $i$ as $T_i$, with a censoring indicator $c_i$, where 1 is censored and 0 is an observed death. We use a linear Poisson model for the longitudinal damage and repair rates. A Softplus function, $\log(1 + e^x)$, is used to enforce positive rates. This function is chosen because it is approximately linear for larger values of $x$, in contrast to $e^x$, which is often used for Poisson models (and which resulted in poor behaviour for our models).
The form of this model is:

(1) $\lambda_i^r(t) = \mathrm{Softplus}(\boldsymbol{\beta}^r \cdot \mathbf{x}_i(t) + b_{i,0}^r + b_{i,1}^r t),$

(2) $\lambda_i^d(t) = \mathrm{Softplus}(\boldsymbol{\beta}^d \cdot \mathbf{x}_i(t) + b_{i,0}^d + b_{i,1}^d t),$

(3) $n_i^r(t_j) \mid \lambda_i^r(t_j), n_i(t_j) \sim \mathrm{Poisson}(n_i(t_j) \, \lambda_i^r(t_j) \, \Delta t_j),$

(4) $n_i^d(t_j) \mid \lambda_i^d(t_j), n_i(t_j) \sim \mathrm{Poisson}((N - n_i(t_j)) \, \lambda_i^d(t_j) \, \Delta t_j),$

(5) $n_i(t_{j+1}) = n_i(t_j) + n_i^d(t_j) - n_i^r(t_j).$

The first two equations describe the time-dependent repair and damage rates, $\lambda_i^r(t)$ and $\lambda_i^d(t)$. These rates represent the probability of repair or damage, per deficit per unit time. These rates are multiplied by the number of deficits that can repair, $n_i(t_j)$, or the number that can damage, $N - n_i(t_j)$, and by the time-interval $\Delta t_j = t_{j+1} - t_j$, to compute the mean count of repaired or damaged deficits for the Poisson distributions. The last equation, Equation 5, shows how the total count of deficits is computed from this model, allowing the model to describe the Frailty Index as well. The full-cohort parameters are denoted $\boldsymbol{\beta}$, and the subject-specific intercepts and time-slopes are $b_{i,0}, b_{i,1}$. The variables $\mathbf{x}_i(t)$ include the covariates and their interactions with sex and intervention group. The 'treatment' variable is a 0/1 indicator for enalapril in mouse group 1 or exercise in mouse group 2. The other variables are the time from baseline $t$, the Frailty Index $f$, the baseline age $a_0$, and sex (M/F). These interactions allow sex- and intervention-group-specific time-slopes.
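A minimal simulation of this Softplus-Poisson longitudinal model, with illustrative coefficients rather than fitted values, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    """Softplus link, log(1 + e^x), enforcing positive rates."""
    return np.log1p(np.exp(x))

def simulate_counts(n0, N, t, beta_r, beta_d):
    """Simulate repair/damage counts for one subject under a simplified
    version of the model (linear predictor with intercept and time only;
    the full model also includes FI, baseline age, sex, and treatment).

    n0: initial deficit count; N: total deficits; t: observation times;
    beta_r, beta_d: (intercept, time-slope) for each rate (illustrative)."""
    n = [n0]
    for j in range(len(t) - 1):
        dt = t[j + 1] - t[j]
        lam_r = softplus(beta_r[0] + beta_r[1] * t[j])  # per-deficit repair rate
        lam_d = softplus(beta_d[0] + beta_d[1] * t[j])  # per-deficit damage rate
        n_r = rng.poisson(n[-1] * lam_r * dt)           # Equation 3
        n_d = rng.poisson((N - n[-1]) * lam_d * dt)     # Equation 4
        # Equation 5: update the deficit count, clipped to [0, N]
        n.append(int(np.clip(n[-1] + n_d - n_r, 0, N)))
    return np.array(n)
```

The clipping to $[0, N]$ is a simulation convenience; in inference the Poisson means are conditioned on the observed counts, so no clipping is needed.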
The repair and damage processes are linked by including correlations between the subject-specific parameters, $[\mathbf{b}_i^r, \mathbf{b}_i^d] \sim \mathcal{N}(0, \boldsymbol{\Sigma})$. We jointly model these repair and damage processes with survival, using proportional hazards and a baseline hazard parameterized with M-splines (Ramsay, 1988), which are always non-negative. The damage and repair processes are linked with survival by including the damage and repair rates in the hazard rate:

(6) $h_i(t) = h_0(t, \mathrm{sex}) \exp\!\left(\boldsymbol{\gamma} \cdot \mathbf{u}_i(t) + \gamma^r \, \mathrm{Softplus}^{-1}(\lambda_i^r(t)) + \gamma^d \, \mathrm{Softplus}^{-1}(\lambda_i^d(t))\right),$

$h_0(t) = (\mathrm{male}) \sum_{l=1}^{L} a_{l,\mathrm{male}} M_{l,3}(t \mid \mathbf{k}) + (\mathrm{female}) \sum_{l=1}^{L} a_{l,\mathrm{female}} M_{l,3}(t \mid \mathbf{k}), \quad \sum_{l=1}^{L} a_l = 1, \quad a_l \ge 0,$

(7) $S_i(t) = \exp\!\left(-\int_{t_0}^{t} h_i(s) \, ds\right).$

The first equation describes the hazard rate $h_i(t)$ in terms of the covariates $\mathbf{u}_i$ and the repair and damage rates. The baseline hazard $h_0(t, \mathrm{sex})$ is modeled with sex-specific splines, due to the large disparity in survival by sex. The covariates are $\mathbf{u}_i = (1, \mathrm{sex}, \mathrm{treatment}, \mathrm{sex} \times \mathrm{treatment}, f, a_0)$. The $\{M_{l,3}(t \mid \mathbf{k})\}_{l=1}^{L}$ functions are M-spline basis functions of order 3 with an $L$-component knot vector $\mathbf{k}$.
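The inverse Softplus used in the hazard, together with a proportional-hazards sketch where a constant baseline stands in for the M-spline baseline (an assumption for illustration only), can be written as:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def softplus_inv(y):
    """Inverse of softplus: recovers the linear predictor from a rate,
    log(e^y - 1), valid for y > 0."""
    return np.log(np.expm1(y))

def hazard(t, h0, lin_pred, gamma_r, lam_r, gamma_d, lam_d):
    """Proportional-hazards sketch of Equation 6.

    h0: constant baseline hazard (stand-in for the sex-specific
        M-spline baseline); lin_pred: gamma . u_i(t), precomputed;
    lam_r, lam_d: callables giving the repair/damage rates at time t."""
    return h0 * np.exp(lin_pred
                       + gamma_r * softplus_inv(lam_r(t))
                       + gamma_d * softplus_inv(lam_d(t)))
```

Feeding the rates through `softplus_inv` means the hazard depends on the underlying linear predictors of the repair and damage models, keeping the log-hazard linear in those predictors.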
Priors and hyperparameters

We use weakly informative priors to regularize parameters,

(8) $\beta_0^r,\ \beta_0^d,\ \gamma_0 \sim \mathcal{N}(0, 3), \qquad \boldsymbol{\beta}^r,\ \boldsymbol{\beta}^d,\ \boldsymbol{\gamma},\ \gamma^r,\ \gamma^d \sim \mathcal{N}(0, 1),$

(9) $[\mathbf{b}_i^r,\ \mathbf{b}_i^d] \sim \mathcal{N}(0, \mathbf{\Sigma}), \qquad \mathbf{\Sigma} = \boldsymbol{\sigma} \mathbf{\Omega} \boldsymbol{\sigma}, \qquad \boldsymbol{\sigma} \sim \mathrm{HalfCauchy}(0, 1), \qquad \mathbf{\Omega} \sim \mathrm{LKJ}(2),$

(10) $\mathbf{a} \sim \mathrm{Dirichlet}(1.0, L{=}17), \qquad \mathbf{k} = \big\{\min(\{T_i\}_i),\ Q_{0.05}(\{T_i\}_i),\ \ldots,\ Q_{0.95}(\{T_i\}_i),\ \max(\{T_i\}_i)\big\}.$

Broad $\mathcal{N}(0,3)$ priors are used on intercept parameters and narrow $\mathcal{N}(0,1)$ priors on covariate coefficients (Equation 8).
The covariance matrix $\mathbf{\Sigma}$ for the coupling of the subject-specific parameters $\mathbf{b}_i$ is decomposed in terms of a correlation matrix $\mathbf{\Omega}$ with an LKJ prior and standard deviations $\boldsymbol{\sigma}$ with half-Cauchy distributions (Equation 9). The LKJ distribution is a standard prior for correlation matrices, where $\mathrm{LKJ}(\eta = 1)$ is a uniform distribution over correlation matrices (Lewandowski et al., 2009). Increasing $\eta$ results in a sharper peak at the identity matrix. These weak priors make large parameter values unlikely (all parameters are put on the same scale by first standardizing all covariates to mean 0 and variance 1), which improves the computational speed of the MCMC sampler. The choice of such weak priors affects quantitative results to a small extent, but does not affect our qualitative results. Spline coefficients $\mathbf{a}$ use a Dirichlet distribution with concentration 1, representing a uniform prior on the simplex $\sum_{l=1}^{L} a_l = 1,\ a_l \ge 0$. We use $L = 17$ spline knots, placed at the minimum last follow-up age, the maximum, and 15 uniformly spaced quantiles from 0.05 to 0.95 of the last follow-up age (Equation 10). Integrals of the hazard rate are computed with 5-point Gaussian quadrature over each observed time interval.

Non-linear modeling for human data

There is much more human data than mouse data, and the human data are more complex: linear effects are not sufficient to capture the combined influence of wealth, baseline age, and time. We use a non-linear Poisson model with non-constant coefficients to include additional degrees of freedom, and parameterize these non-constant coefficients with B-splines. The individuals selected from ELSA with wealth data do not have mortality data available, simplifying the model relative to the joint model used above for mice.
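The decomposition $\mathbf{\Sigma} = \boldsymbol{\sigma}\mathbf{\Omega}\boldsymbol{\sigma}$ separates scales from correlations, which is why separate priors can be placed on each. A small sketch of assembling the covariance and drawing correlated subject-specific parameters; the numeric values of $\boldsymbol{\sigma}$ and $\mathbf{\Omega}$ here are fixed by hand for illustration (in the model they have half-Cauchy and LKJ priors):

```python
import numpy as np

sigma = np.array([0.5, 0.3, 0.8, 0.2])      # per-parameter scales (illustrative)
Omega = np.array([[1.0, 0.2, 0.1, 0.0],
                  [0.2, 1.0, 0.3, 0.1],
                  [0.1, 0.3, 1.0, 0.2],
                  [0.0, 0.1, 0.2, 1.0]])    # a valid correlation matrix

# Sigma = sigma * Omega * sigma, i.e. diag(sigma) @ Omega @ diag(sigma)
Sigma = np.diag(sigma) @ Omega @ np.diag(sigma)

# Correlated subject-specific draws [b_i^r, b_i^d] ~ N(0, Sigma), Eq. (9)
rng = np.random.default_rng(1)
b = rng.multivariate_normal(np.zeros(4), Sigma, size=1000)
```

The diagonal of the assembled matrix recovers the variances $\sigma^2$, and positive-definiteness of $\mathbf{\Omega}$ carries over to $\mathbf{\Sigma}$, so the multivariate normal draw is always well defined.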
Our model has the form,

(11) $\lambda_i^r(t) = \mathrm{Softplus}\big(\boldsymbol{\beta}_0^r \cdot \mathbf{x}_i(t) + \beta_1^r(w, a_0) + \beta_2^r(w, a_0) \times \mathrm{sex} + \beta_3^r(w, a_0) \times t + \beta_4^r(w, a_0) \times \mathrm{sex} \times t + b_{i,0}^r\big),$

(12) $\lambda_i^d(t) = \mathrm{Softplus}\big(\boldsymbol{\beta}_0^d \cdot \mathbf{x}_i(t) + \beta_1^d(w, a_0) + \beta_2^d(w, a_0) \times \mathrm{sex} + \beta_3^d(w, a_0) \times t + \beta_4^d(w, a_0) \times \mathrm{sex} \times t + b_{i,0}^d\big),$

(13) $n_i^r(t_j) \mid \lambda_i^r(t_j), n_i(t_j) \sim \mathrm{Poisson}\big(n_i(t_j)\,\lambda_i^r(t_j)\,(t_{j+1}-t_j)\big),$

(14) $n_i^d(t_j) \mid \lambda_i^d(t_j), n_i(t_j) \sim \mathrm{Poisson}\big((N - n_i(t_j))\,\lambda_i^d(t_j)\,(t_{j+1}-t_j)\big),$

where $w$ denotes wealth, $a_0$ denotes baseline age, and,

(15) $\mathbf{x}_i(t) = (1, t, \mathrm{sex}, w, f, a_0, \mathrm{sex} \times t, w \times t, a_0 \times t, \mathrm{sex} \times f, w \times f, a_0 \times f).$

The non-constant coefficients $\{\beta_k(w, a_0)\}_k$ are implemented as 2D B-splines for wealth and baseline age, with 5 wealth knots and 5 baseline-age knots at the minimum, maximum, and terciles of these variables.
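A tensor-product B-spline surface like $\beta_k(w, a_0)$ is evaluated by combining one-dimensional basis functions in each direction (Equation 18 below). A sketch using the Cox-de Boor recursion; the clamped (repeated-endpoint) knot padding is an assumption for illustration, since the text specifies only the 5 interior knot locations:

```python
import numpy as np

def bspline_basis(x, p, knots):
    """All degree-p B-spline basis values at scalar x (Cox-de Boor)."""
    # degree-0 bases: indicators of the half-open knot spans
    B = np.array([1.0 if knots[i] <= x < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    for d in range(1, p + 1):
        nxt = np.zeros(len(knots) - d - 1)
        for i in range(len(nxt)):
            left = right = 0.0
            den1 = knots[i + d] - knots[i]
            if den1 > 0:
                left = (x - knots[i]) / den1 * B[i]
            den2 = knots[i + d + 1] - knots[i + 1]
            if den2 > 0:
                right = (knots[i + d + 1] - x) / den2 * B[i + 1]
            nxt[i] = left + right
        B = nxt
    return B

def beta_surface(w, a0, s, kw, ka):
    """beta_k(w, a0) = sum_ij s_ij B_i(w) B_j(a0), as in Eq. (18)."""
    Bw = bspline_basis(w, 3, kw)
    Ba = bspline_basis(a0, 3, ka)
    return Bw @ s @ Ba
```

With a clamped cubic knot vector over 5 interior knots there are 7 basis functions per direction, and they form a partition of unity inside the knot range, so a constant coefficient array reproduces a flat surface.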
We use smoothing 2D random-walk priors on the spline coefficients,

(16) $s_{11},\ \tau_w,\ \tau_{b_0} \sim \mathcal{N}(0, 1), \qquad p_w,\ p_{b_0} \sim \mathrm{Dirichlet}(1.5),$

(17) $s_{ij} \sim p_w\, \mathcal{N}(s_{i-1,j}, \tau_w) + p_{b_0}\, \mathcal{N}(s_{i,j-1}, \tau_{b_0}),$

(18) $\beta_k(w, a_0) = \sum_{i,j=1}^{5} s_{ij}\, B_{i,3}(w; \mathbf{k}_w)\, B_{j,3}(a_0; \mathbf{k}_{a_0}).$

All other priors are the same as in the mouse modelling. Note that for the human data we do not include subject-specific time-slopes $b_{i,1}^r$ and $b_{i,1}^d$ as we did for the mouse data, since the human time-series are much shorter. When these slopes are included, we see evidence of over-fitting: the proportion of residuals whose 95% credible intervals include zero is much higher than 0.95, nearing 0.99 to 1.00.
We can compute the derivative of the Frailty Index from the modelled repair and damage rates,

(19) $\frac{d}{dt} f_i(t) = (1 - f_i)\,\lambda_i^d(t) - f_i\,\lambda_i^r(t).$

To understand the effect of interventions, we compute the derivative with respect to time for the repair and damage rates,

(20) $\frac{d}{dt}\lambda_i^r(t) = \frac{\partial \lambda_i^r(t)}{\partial t} + \frac{\partial \lambda_i^r(t)}{\partial f}\frac{d f_i(t)}{dt} = \left(\boldsymbol{\beta}^r \cdot \frac{d\mathbf{x}_i(t)}{dt} + \boldsymbol{\beta}^r \cdot \frac{d\mathbf{x}_i(t)}{df}\frac{d f_i(t)}{dt} + b_{i,1}^r\right)\frac{e^{\lambda_i^r(t)}}{e^{\lambda_i^r(t)} + 1},$

(21) $\frac{d}{dt}\lambda_i^d(t) = \frac{\partial \lambda_i^d(t)}{\partial t} + \frac{\partial \lambda_i^d(t)}{\partial f}\frac{d f_i(t)}{dt} = \left(\boldsymbol{\beta}^d \cdot \frac{d\mathbf{x}_i(t)}{dt} + \boldsymbol{\beta}^d \cdot \frac{d\mathbf{x}_i(t)}{df}\frac{d f_i(t)}{dt} + b_{i,1}^d\right)\frac{e^{\lambda_i^d(t)}}{e^{\lambda_i^d(t)} + 1}.$

This is the slope of these rates versus time, including the contribution from the increase of the Frailty Index $f(t)$. While we only include explicit linear effects of time in the model, the increase of the Frailty Index with time can cause these derivatives to change.
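Equation 19 is a simple balance: damage fills the undamaged pool $(1-f)$ while repair drains the damaged pool $f$. A sketch with constant toy rates, where the trajectory relaxes to the fixed point $f^* = \lambda^d / (\lambda^d + \lambda^r)$ (the rate values and step size are illustrative assumptions):

```python
def dfdt(f, lam_r, lam_d):
    # Eq. (19): damage acts on the (1 - f) pool, repair on the f pool
    return (1.0 - f) * lam_d - f * lam_r

# Forward-Euler integration with constant toy rates
lam_r, lam_d = 0.4, 0.1
f, dt = 0.0, 0.01
for _ in range(5000):
    f += dt * dfdt(f, lam_r, lam_d)

f_star = lam_d / (lam_d + lam_r)   # fixed point where df/dt = 0
```

The fixed point shows why both rates matter for late-life plateaus: the equilibrium Frailty Index depends on the ratio of damage to total turnover, not on either rate alone.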
We can compute the curvature as the second derivative of the Frailty Index with age, written in terms of first derivatives of the rates,

(22) $\frac{d^2}{dt^2} f_i(t) = \left[(1 - f_i(t))\frac{d\lambda_i^d(t)}{dt} - \frac{d f_i(t)}{dt}\lambda_i^d(t)\right] - \left[f_i(t)\frac{d\lambda_i^r(t)}{dt} + \frac{d f_i(t)}{dt}\lambda_i^r(t)\right].$

The first group of terms involves the damage rate (robustness) and the second group involves the repair rate (resilience). These terms are plotted in Figure 3.

Repair and damage timescales in mouse and human data

We observe the amount of time that passes between damage and repair events, and vice versa. This can be used to determine the timescales of these damage and repair processes. However, since a deficit might be damaged and the individual die before the deficit is ever repaired, there is right censoring. Additionally, since observations are only made at specific time-points, we cannot determine the exact time at which a deficit was damaged or repaired, so there is also interval censoring. To estimate the distribution of repair and damage times, we treat repair and damage events for each deficit as a mixture of interval- and right-censored events (Zhao et al., 2008; Zhao, 2012). Accordingly, we model state-survival curves for the damaged state (timescale of resilience) and the undamaged state (timescale of robustness).
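Equation 22 is just the product rule applied to Equation 19, which can be checked numerically: integrate the first-order equation with smooth time-varying rates, then compare the formula against a finite-difference second derivative of the trajectory. The rate functions below are arbitrary illustrative choices.

```python
import numpy as np

lamd = lambda t: 0.10 + 0.02 * t    # toy damage rate
lamr = lambda t: 0.50 - 0.01 * t    # toy repair rate (positive on [0, 3])
dlamd, dlamr = 0.02, -0.01          # their (constant) time derivatives

def rhs(t, f):
    return (1.0 - f) * lamd(t) - f * lamr(t)    # Eq. (19)

# Integrate f(t) on a fine grid with classical RK4
dt = 1e-3
ts = np.arange(0.0, 3.0 + dt, dt)
fs = np.empty_like(ts)
fs[0] = 0.1
for k in range(len(ts) - 1):
    t, f = ts[k], fs[k]
    k1 = rhs(t, f)
    k2 = rhs(t + dt / 2, f + dt / 2 * k1)
    k3 = rhs(t + dt / 2, f + dt / 2 * k2)
    k4 = rhs(t + dt, f + dt * k3)
    fs[k + 1] = f + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Eq. (22) at an interior point, split into its two bracketed groups
i = len(ts) // 2
t, f, fp = ts[i], fs[i], rhs(ts[i], fs[i])
robustness = (1 - f) * dlamd - fp * lamd(t)     # damage-rate group
resilience = f * dlamr + fp * lamr(t)           # repair-rate group
curv_model = robustness - resilience

# Central finite difference of the integrated trajectory
curv_fd = (fs[i + 1] - 2 * fs[i] + fs[i - 1]) / dt**2
```

Splitting the curvature into the two bracketed groups is what allows the robustness and resilience contributions to be plotted separately.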
We use a Bayesian survival model with M-splines for the baseline hazard,

(23) $h(t) = e^{\gamma_0} \sum_{l=1}^{L} a_l M_{l,3}(t|\mathbf{k}), \qquad \sum_{l=1}^{L} a_l = 1, \quad a_l \ge 0, \qquad S(t) = \exp\left(-\int_{t_0}^{t} h(s)\, ds\right).$

This is fit separately for sex and control/intervention groups. We include both interval censoring and right censoring in the likelihood for individual $i$,

(24) $p\big(T_i^{\mathrm{Lower}}, T_i^{\mathrm{Upper}}, T_i, c_i \mid \{a_l\}_l, \gamma_0\big) = \big[S(T_i^{\mathrm{Lower}}) - S(T_i^{\mathrm{Upper}})\big]^{1-c_i}\, S(T_i)^{c_i},$

where $T^{\mathrm{Lower}}$ is the lower interval bound, $T^{\mathrm{Upper}}$ is the upper interval bound, $T$ is a time of right censoring, and $c$ is the right-censoring indicator ($c = 1$ event censored, $c = 0$ event occurs). We use 32 knots, set at 30 evenly spaced quantiles of the event times from 0.1 to 0.9 together with the minimum and maximum event times. A uniform $\mathrm{Dirichlet}(1.0, 32)$ prior is used for the spline coefficients and a broad $\mathcal{N}(0, 10)$ prior for $\gamma_0$. We use the Stan no-U-turn sampler (NUTS) (Stan Development Team, 2020), with 4000 warm-up iterations and 6000 sampling iterations on four separate chains for the mouse joint models. For mouse dataset 2, we use the sampler settings adapt_delta = 0.95 and max_treedepth = 20 to avoid divergent sampler transitions. For the human model, we use two separate chains with 1000 warm-up iterations and 3000 sampling iterations. For the interval-censored Bayesian survival models, we use 2000 warm-up iterations and 3000 sampling iterations on four separate chains.
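The censoring mixture is easiest to see numerically: an interval-censored episode contributes the probability mass between the two visit times, while a right-censored episode contributes the survival past the last observation. A minimal sketch assuming the standard interval-censoring likelihood, with a toy exponential survival curve standing in for the fitted M-spline model:

```python
import numpy as np

def S(t, rate=0.3):
    # Toy exponential survival curve (stand-in for the M-spline model)
    return np.exp(-rate * t)

def log_lik(T_lower=0.0, T_upper=0.0, T=0.0, c=0):
    """Log-likelihood contribution of one deficit-state episode:
    right-censored at T when c = 1, otherwise the event fell in
    the visit interval (T_lower, T_upper]."""
    if c == 1:
        return float(np.log(S(T)))                 # survived past T
    return float(np.log(S(T_lower) - S(T_upper)))  # mass inside the interval
```

One consequence worth noting: widening the visit interval always increases an interval-censored episode's likelihood, which is why coarse observation grids carry less information about the event-time distribution.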
The number of sampling iterations is chosen to achieve adequate effective sample sizes while remaining computationally feasible. In Figure 2—figure supplement 1 we perform posterior predictive checks (Gabry et al., 2019; Gelman et al., 2020) for the mouse and human models by plotting observed and simulated distributions of counts. We also compute $R^2$ statistics (Vehtari et al., 2021) and the coverage of credible intervals for residuals. Some of the observed damage and repair transitions may be due to measurement error or data-entry errors. In particular, we believe this may be the case for some isolated damage and repair transitions. For example, if we consider 5 time-points where a variable has values $\{0,0,1,0,0\}$, this may be an erroneous transition. Under the assumption that such erroneous transitions most likely occur isolated from true damage/repair, we prune the data by removing these transitions, e.g. $\{0,0,1,0,0\}$ becomes $\{0,0,0,0,0\}$ (erroneous damage) and $\{1,1,0,1,1\}$ becomes $\{1,1,1,1,1\}$ (erroneous repair). We only prune damage/repair events isolated by 2 of the opposite state on either side, as shown here. In Figure 5—figure supplement 3, we show that pruning these values has only a limited effect.

Code and data availability

Source data files for all figures and summary statistics for all fitting parameters and diagnostics of the models are provided. Only pre-existing datasets were used in this study. Information about the datasets and data cleaning is in the methods section. Raw data for mouse dataset 3 are freely available from https://github.com/SinclairLab/frailty. Raw human data are available from https://www.elsa-project.ac.uk/accessing-elsa-data after registering. All code is available at https://github.com/Spencerfar/aging-damagerepair (copy archived at swh:1:rev:4fe6f883d37dda6b9059c53aa9366f4ff2665a43). Our code for cleaning these raw datasets is provided.
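The pruning rule above (flip a single-visit transition only when it is flanked by at least two opposite-state observations on each side) can be sketched as a small sequence filter; this is one way to implement the described rule, not the authors' exact code:

```python
def prune(seq):
    """Flip isolated single-visit transitions that have >= 2 opposite-state
    observations on each side. Reads from the original sequence while
    writing to a copy, so fixes do not cascade."""
    out = list(seq)
    for i in range(2, len(seq) - 2):
        flank = list(seq[i - 2:i]) + list(seq[i + 1:i + 3])
        if all(v == 1 - seq[i] for v in flank):
            out[i] = 1 - seq[i]
    return out
```

Runs of two or more same-state observations are left untouched, and points within two visits of either end of the series can never satisfy the flanking condition, so they are also preserved.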
Book: Interval-Censored Time-to-Event Data: Methods and Applications. Florida, United States: CRC Press.

Article and author information

Funding: Natural Sciences and Engineering Research Council of Canada (RGPIN-2019-05888); Canadian Institutes of Health Research (PJT 155961); Heart and Stroke Foundation of Canada (G-22-0031992). The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

© 2022, Farrell et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article: Spencer Farrell, Alice E Kane, Elise Bisset, Susan E Howlett, Andrew D Rutenberg. Measurements of damage and repair of binary health attributes in aging mice and humans reveal that robustness and resilience decrease with age, operate over broad timescales, and are affected differently by interventions. eLife 11:e77632.
Re: [tlaplus] Fairness conditions and <<A>>_v

Hi everyone,

Given some specification formula

I == ...
N == M1 \/ M2
F == []<>M1 /\ []<>M2
S == I /\ [][N]_v /\ F

using the actions M1 and M2 directly as fairness conditions is said to be wrong. In the TLA paper [1], Lamport writes:

> An action A can appear in a TLA formula only in the form [][A]_f. The formulas []<>A and <>[]A are not of this form (unless A is a predicate), so are not TLA formulas.

In Specifying Systems, Lamport writes:

> We can require that the clock never stops by asserting that there must be infinitely many HCnxt steps. The obvious way to write this assertion is []<>HCnxt, but that's not a legal TLA formula because HCnxt is an action, not a temporal formula. However, an HCnxt step advances the value of the clock, so it changes hr. Therefore, an HCnxt step is also a step that changes hr --- that is, it's an <<HCnxt>>_hr step. We can thus write the liveness property that the clock never stops as []<><<HCnxt>>_hr.

When explaining the need for stuttering steps, Lamport says, "an hour and minute clock is still an hour clock if we hide the minute display". I was looking for some example to justify the <<A>>_v form. Meanwhile, I had some difficulties.

Let's say we have the specs:

I == v = 0
N == v' = 0
S1a == I /\ [][N]_v /\ []<>N
S1b == I /\ [][N]_v /\ []<><<N>>_v

When the fairness condition is defined as []<>N, spec S1a only allows behaviors where v is zero and stays zero forever. When the fairness condition is defined as []<><<N>>_v, wouldn't spec S1b be false, since v never changes? By being false, I mean that there's no sequence of assignments of v that satisfies S1b. When model checking this specification, TLC accepts it, so I suppose there's some mistake in my reasoning.

Changing the example so that variable v is repeatedly selected to be either 0 or 1:

I == v \in {0, 1}
N == v' \in {0, 1}
S2a == I /\ [][N]_v /\ []<>N
S2b == I /\ [][N]_v /\ []<><<N>>_v

When the fairness condition is defined as []<>N, spec S2a allows behaviors where v is always 0 or always 1, which is possible although extremely (infinitesimally?) improbable.
When the fairness condition is defined as []<><<N>>_v, spec S2b doesn't allow behaviors where v never changes. Does this mean <<N>>_v exists to eliminate this ambiguity? Is not allowing infinite unchanging sequences a trade-off? Are there mistakes in my reasoning? I'd appreciate your comments!
BIB.PUB[BIB,CSR]25 - www.SailDart.org perm filename BIB.PUB[BIB,CSR]25 blob filedate 1980-08-23 generic text, type C, neo UTF8 COMMENT ā VALID 00011 PAGES C REC PAGE DESCRIPTION C00001 00001 C00002 00002 .require "setup.csr[bib,csr]" source file C00008 00003 .once center <<reports 1 thru 99>> C00037 00004 .once center <<reports 100 thru 199>> C00066 00005 .once center <<reports 200 thru 299>> C00095 00006 .once center <<reports 300 thru 399>> C00125 00007 .once center <<reports 400 thru 499>> C00154 00008 .once center <<reports 500 thru 599>> C00184 00009 .once center <<reports 600 thru 699>> C00213 00010 .once center <<reports 700 thru 799>> C00237 00011 .once center <<reports 800 thru 899>> C00242 ENDMK Cā ; .require "setup.csr[bib,csr]" source file; .font A "coron"; .once center @The following is a key to reading the different report numbers. %3STAN-CS-:%1 This is the number given to the report by the Department of Computer Science. It is this number that should be used when ordering a report from us. %3AIM-:%1 This is a %2group%1 number. It stands for Artificial Intelligence Memo. %3HPP-:%1 This is a %2group%1 number. It stands for Heuristic Programming %3SU326:%1 This is a DOE sponsored report, and this is their number for %3SLAC-:%1 This report may also be ordered from the Stanford Linear Accelerator Center using this number. %3LBL-:%1 This report may also be ordered from the Lawrence Berkeley Laboratory using this number. %3CSL-:%1 This report may also be ordered from the Computer Science Laboratory (Department of Electrical Engineering) using this number. %3PB:%1 This report may be ordered from the National Technical Information Service using this number. %3AD:%1 This report may be ordered from the National Technical Information Service using this number. %3AD-A:%1 This report may be ordered from the National Technical Information Service using this number. 
@For example: If a report was numbered .once center STAN-CS-77-611 (AIM-298, AD-A046 703) it means that you could order the report from the Stanford Department of Computer Science by the number STAN-CS-77-611. You could also order it from the Artificial Intelligence Group by the number AIM- 298. And if neither place had copies left available, you could write to the National Technical Information Service and ask for the same report by specifying number AD-A046 703. @For people with access to the ARPA Network, the texts of some A. I. Memos are stored online in the Stanford A. I. Laboratory disk file. These are designated below by %2Diskfile: <file name>%1 appearing in the header. To order reports from the Stanford Department of Computer Science (or the Artificial Intelligence Group), use the following address: .begin nofill Publications Coordinator Department of Computer Science Stanford University Stanford, California 943O5 USA To order reports from the National Technical Information Service, use the following address: .begin nofill U.S. Department of Commerce National Technicl Information Service 5285 Port Royal Road Springfield, Virginia 22161 USA Memos that are also Ph.D. theses are so marked below and may be ordered from: .begin nofill University Microfilm P. O. Box 1346 Ann Arbor, Michigan 48106 @For people with access to the ARPA Network, the texts of some A. I. Memos are stored online in the Stanford A. I. Laboratory disk file. These are designated below by "Diskfile: <file name>" appearing in the header. This bibliography is kept in diskfile "AIMLST [BIB,DOC] @SU-AI". .next page .once center <<reports 1 thru 99>> %3REPORTS 1 THRU 99%1 STAN-CS-63-1 (AD462108), J. B. Rosen, %2Primal Partition Programming for Block Diagonal Matrices%1, 23 pages, November 1963. STAN-CS-63-2 (AD427753), J. M. Pavkovich, %2The Soultion of Large Systems of Algebraic Equations%1, 46 pages, December 1963. STAN-CS-64-3 (AD430445), G. E. 
Forsythe (translated by), %2The Theorems in a Paper by V. K. Saulev 'On an Estimate of the Error in Obtaining Characteristic Functions by the Method of Finite Differences' %1, 6 pages, January 1964. STAN-CS-64-4 (AD434858), Stefan Bergman and J. G. Herriot, %2Numerical Solution of Boundary Value Problems by the Method of Integral Operators%1, 24 pages, February 1964. STAN-CS-64-5 (N-6519765), J. B. Rosen, %2Existence and Uniqueness of Equilibrium Points for Concave N-Person Games%1, 28 pages, March 1964. STAN-CS-64-6 (AD600164), R. W. Hockney, %2A Fast Direct Solution of Poisson's Fourier Analysis%1, 28 pages, April 1964. STAN-CS-64-7 (PB176753), J. B. Rosen, %2Sufficient Conditions for Optimal Control of Convex Processes%1, 29 pages, May 1964. STAN-CS-64-8 (AD603116), G. Golub and W. Kahan, %2Calculating the Singular Values and Pseudo-Inverse of a Matrix%1, 33 pages, May 1964. STAN-CS-64-9 (AD604012), Charles Anderson, %2The QD-Algorithm as a Method for Finding the Roots of a Polynomial Equation When All Roots are Positive%1, 74 pages, June 1964. STAN-CS-64-10 (AD603163), R. L. Causey (Thesis), %2On Closest Normal Matrices%1, 131 pages, June 1964. STAN-CS-64-11 (PB176754), T. Nakamura and J. B. Rosen, %2Elastic- Plastic Analysis of Trusses by the Gradient Projection Method%1, 32 pages, July 1964. STAN-CS-64-12 (AD608292), G. Golub and P. Businger, %2Numerical Methods for Solving Linear Least Squares Problems (and) an Algol Procedure for Finding Linear Least Squares Solutions%1, 27 pages, August 1964. STAN-CS-64-13 (N65-27058), V. Pereyra and J. B. Rosen, %2Computation of the Pseudo- inverse of a Matrix of Unknown Rank%1, 28 pages, September STAN-CS-64-14 (TT-65-61724), V. A. Efimenko (translated by G. Reiter and C. Moler), %2On Approximate Calculations of the Eigenvalues and Eigenfunctions of Boundary Value Problems in Partial Differential Equations%1, 20 pages, November 1964. STAN-CS-65-15 (AD611366), D. W. 
Grace (Thesis), %2Computer Search for Non-Isomorphic Convex Polyhedra%1, 137 pages, January 1965. STAN-CS-65-16 (AD611427), G. E. Forsythe and G. H. Golub, %2Maximizing a Second Degree Polynomial on the Unit Sphere%1, 31 pages, February 1965. STAN-CS-65-17 (AD611434), G. E. Forsythe and N. Wirth, %2Automatic Grading Programs%1, 17 pages, February 1965. STAN-CS-65-18 (AD612478), V. Pereyra, %2The Difference Correction Method for Non- Linear Two-Point Boundary Value Problems%1, February 1965. STAN-CS-65-19 (TT-65-61839), M. I. Ageev and J. Maclaren, %2English Equivalents of Metalinguistic Terms of Russian ALGOL%1, March 1965. STAN-CS-65-20 (PB176755), N. Wirth and H. Weber, %2EULER: A Generalition of ALGOL and its Formal Definition%1, 115 pages, April 1965. STAN-CS-65-21 (PB176756), D. D. Fisher, J. von der Groeben and J. G. Toole, %2Vectorgardiographic Analysis by Digital Computer, Selected Results%1, 104 pages, May 1965. STAN-CS-65-22 (AD616676), C. B. Moler (Thesis), %2Finite Difference Methods for the Eigenbales of Laplace's Operator%1, 142 pages, May 1965. STAN-CS-65-23 (AD618214), B. D. Rudin (Thesis), %2Convex Polynomial Approximation%1, 44 pages, June 1965. STAN-CS-65-24 (AD616611), V. V. Klyuyev and N. I. Kokovkin Shoherbak (translated by G. J. Tee), %2On the Minimization of the Number of Arithmetic Operations for the Solution of Linear Algebraic Systems of Equations%1, 24 pages, June 1965. STAN-CS-65-25 (AD618215), P. G. Hodge, %2Yield-Point Load Determination by Nonlinear Programming%1, 24 pages, June 1965. STAN-CS-65-26 (not at NTIS), G. E. Forsythe, %2Stanford University's Program in Computer Science%1, 15 pages, June 1965. STAN-CS-65-27 (AD618216), E. A. Volkov (translated by R. Bartels), %2An Analysis of One Algorithm of Heightened Precision of the Method of Nets for the Solution of Poisson's Equation%1, 29 pages, July 1965. STAN-CS-65-28 (AD618217), J. Miller and G. 
Strang, %2Matrix Theorems for Partial Differential an Difference Equations%1, 33 pages, July 1965. STAN-CS-65-29 (AD624837), V. Pereyra, %2On Improving an Approximate Solution of a Functional Equation by Deferred Corrections%1, 32 pages, August 1965. STAN-CS-65-30 (SS624-829), S. Marchuk (translated by G. J. Tee), %2The Automatic Construction of Computational Algorithms%1, 56 pages, September STAN-CS-65-31 (SS626-315), P. A. Raviart, %2On the Approximation of Weak Solutions of Linear Parabolic Equations by a Class of Multi-step Difference Methods%1, 55 pages, December 1965. STAN-CS-65-32 (SS633-557), R. W. Hockney, %2Minimum Multiplication Fourier Analysis%1, 53 pages, December 1965. STAN-CS-65-33 (PB176763), N. Wirth, %2A Programming Language for the 360 Computers%1, 23 pages, December 1965. STAN-CS-66-34 (SS630-998), J. M. Varah, %2Eigenvectors of a Real Matrix by Inverse Iteration%1, 24 pages, February 1966. STAN-CS-66-35 (PB176758), N. Wirth and C. Hoare, %2A Contribution to the Development of ALGOL%1, 64 pages, February 1966. STAN-CS-66-36 (PB176759), J. F. Traub, %2The Calculation of Zeros of Polyynomials and Analytic Functions%1, 26 pages, April 1966. STAN-CS-66-37 (PB176789), J. D. Reynolds, %2Cogent 1.2 Operations Manual%1, 33 pages, April 1966. STAN-CS-66-38 (AIM-40, AD662880), J. McCarthy and J. Painter, %2Correctness of a Compiler for Arithmetic Expressions%1, 13 pages, April STAN-CS-66-39 (PB176760), G. E. Forsythe, %2A University's Educational Program in Computer Science%1, 26 pages, May 1966. STAN-CS-66-40 (AD639052), G. E. Forsythe, %2How Do You Solve a Quadratic Equation?%1, 19 pages, June 1966. STAN-CS-66-41 (SS638-976), W. Kahan, %2Accurate Eigenvalues of a Symmetric Tri-Diagonal Matrix%1, 53 pages, July 1966. STAN-CS-66-42 (SS638-797), W. Kahan, %2When to Neglect Off-Diagonal Elements of Symmetric Tri-Diagonal Matrices%1, 10 pages, July 1966. STAN-CS-66-43 (SS638-798), W. Kahan and J. 
Varah, %2Two Working Algorithms for the Eigenvalues of a Symmetric Tri-Diagonal Matrix%1, 28 pages, August 1966. STAN-CS-66-44 (SS638-818), W. Kahan, %2Relaxation Methods for an Eigenvalue Problem%1, 35 pages, August 1966. STAN-CS-66-45 (SS638-799), W. Kahan, %2Relaxation Methods for Semi- Definite Systems%1, 31 pages, August 1966. STAN-CS-66-46 (SS638-809), G. E. Forsythe, %2Today's Computational Methods of Linear Algebra%1, 47 pages, August 1966. STAN-CS-66-47 (PB173335), P. Abrams, %2An Interpreter for 'Inverson Notation' %1, 61 pages, August 1966. STAN-CS-66-48 (SS639-166), W. M. McKeeman (Thesis), %2An Approach to Computer Language Design%1, 124 pages, August 1966. STAN-CS-66-49 (AIM-43, SS640-836), D. R. Reddy (Thesis), %2An Approach to Computer Speech Recognition by Direct Analysis of Speech Wave%1, 143 pages, September 1966. STAN-CS-66-50 (AIM-46, PB176761), S. Persson (Thesis), %2Some Sequence Extrapulating Programs: A Study of Representation and Modelling in Inquiring Systems%1, 176 pages, September 1966. STAN-CS-66-51 (AD648394), S. Bergman, J. G. Herriot and T. G. Kurtz, %2Numerical Calculation of Transonic Flow Patterns%1, 35 pages, October STAN-CS-66-52 (PB176762), A. C. Shaw, %2Lecture Notes on a Course in Systems Programming%1, 216 pages, December 1966. STAN-CS-66-53 (PB176757), N. Wirth, %2A Programming Language for the 360 Computers%1, 81 pages, December 1966. STAN-CS-67-54 (AD662882), G. Golub and T. N. Robertson, %2A Generalized Bairstow Algorithm%1, 10 pages, January 1967. STAN-CS-67-55 (AD647200), D. A. Adams, %2A Stopping Criterion for Polynomial Root Finding%1, 11 pages, February 1967. STAN-CS-67-56 (PB176764), F. L. Bauer, %2QD-Method with Newton Shift%1, 6 pages, March 1967. STAN-CS-67-57 (PB176765), D. Gries, %2The Use of Transition Matrices in Compiling%1, 60 pages, March 1967. STAN-CS-67-58 (PB176766), V. Tixier (Thesis), %2Recursive Functions of Regular Expressions in Language Analysis%1, 146 pages, March 1967. 
STAN-CS-67-59 (SS650-116), J. H. Wilkinson, %2Almost Diagonal Matrices with Multiple or Close Eigenvalues%1, 18 pages, April 1967.
STAN-CS-67-60 (SS650-117), J. H. Wilkinson, %2Two Algorithms Based on Successive Linear Interpolation%1, 13 pages, April 1967.
STAN-CS-67-61 (SS650-610), G. E. Forsythe, %2On the Asymptotic Directions of the S-Dimensional Optimum Gradient Method%1, 43 pages, April 1967.
STAN-CS-67-62 (SS650-620), M. Tienari, %2Varying Length Floating Point Arithmetic: A Necessary Tool for the Numerical Analyst%1, 38 pages, April 1967.
STAN-CS-67-63 (SS650-627), G. Polya, %2Graeffe's Method for Eigenvalues%1, 9 pages, April 1967.
STAN-CS-67-64 (SS651-201), P. Richman, %2Floating-Point Number Representations: Base Choice Versus Exponent Range%1, 32 pages, April 1967.
STAN-CS-67-65 (PB176767), N. Wirth, %2On Certain Basic Concepts of Programming Languages%1, 30 pages, May 1967.
STAN-CS-67-66 (AD652921), J. M. Varah (Thesis), %2The Computation of Bounds for the Invariant Subspaces of a General Matrix Operator%1, 240 pages, May 1967.
STAN-CS-67-67 (AD652992), R. H. Bartels and G. H. Golub, %2Computational Considerations Regarding the Calculation of Chebyshev Solutions for Over-Determined Linear Equation Systems by the Exchange Method%1, 63 pages, June 1967.
STAN-CS-67-68 (PB176768), N. Wirth, %2The PL 360 System%1, 63 pages, June 1967.
STAN-CS-67-69 (PB176769), J. Feldman and D. Gries, %2Translator Writing Systems%1, 127 pages, June 1967.
STAN-CS-67-70 (AD655472), S. Bergman, J. G. Herriot and P. L. Richman, %2On Computation of Flow Patterns of Compressible Fluids in the Transonic Region%1, 77 pages, July 1967.
STAN-CS-67-71 (AD655230), M. A. Jenkins and J. F. Traub, %2An Algorithm for an Automatic General Polynomial Solver%1, 38 pages, July 1967.
STAN-CS-67-72 (PB175581), G. H. Golub and L. B. Smith, %2Chebyshev Approximation of Continuous Functions by a Chebyshev System of Functions%1, 54 pages, July 1967.
STAN-CS-67-73 (AD662883), P. Businger and G. H. Golub, %2Least Squares, Singular Values and Matrix Approximations (and) an ALGOL Procedure for Computing the Singular Value Decomposition%1, 12 pages, July 1967.
STAN-CS-67-74 (AD657639), G. E. Forsythe, %2What is a Satisfactory Quadratic Equation Solver?%1, 9 pages, August 1967.
STAN-CS-67-75 (PB175793), F. L. Bauer, %2Theory of Norms%1, 136 pages, August 1967.
STAN-CS-67-76 (AD657450), P. M. Anselone, %2Collectively Compact Operator Approximations%1, 60 pages, April 1967.
STAN-CS-67-77 (PB176770), G. E. Forsythe, %2What To Do Till The Computer Scientist Comes%1, 13 pages, September 1967.
STAN-CS-67-78 (PB176771), K. M. Colby and H. Enea, %2Machine Utilization of the Natural Language Word 'Good'%1, 8 pages, September 1967.
STAN-CS-67-79 (AD662884), R. W. Doran, %2360 U.S. Fortran IV Free Field Input/Output Subroutine Package%1, 21 pages, October 1967.
STAN-CS-67-80 (AD662902), J. Friedman, %2Directed Random Generation of Sentences%1, 30 pages, October 1967.
STAN-CS-67-81 (AD661217), G. H. Golub and J. H. Welsch, %2Calculation of Gauss Quadrature Rules%1, 28 pages, November 1967.
STAN-CS-67-82 (PB176775), L. Tesler, H. Enea and K. M. Colby, %2A Directed Graph Representation for Computer Simulation of Belief Systems%1, 31 pages, December 1967.
STAN-CS-68-83 (AD664237), A. Bjorck and G. Golub, %2Iterative Refinements of Least Squares Solutions by Householder Transformations%1, 28 pages, January 1968.
STAN-CS-68-84 (AD692680), J. Friedman, %2A Computer System for Transformational Grammar%1, 31 pages, January 1968.
STAN-CS-68-85 (PB177426), K. M. Colby, %2Computer-Aided Language Development in Nonspeaking Mentally Disturbed Children%1, 35 pages, December 1968.
STAN-CS-68-86 (PB179162), H. R. Bauer, S. Becker and S. L. Graham, %2ALGOL W Programming%1, 90 pages, January 1968.
STAN-CS-68-87 (PB178176), J. Ehrman, %2CS 139 Lecture Notes Part 1, Sections 1 thru Preliminary Version%1, 188 pages, 1968.
STAN-CS-68-88 (AD665672), S. Schechter, %2Relaxation Methods for Convex Problems%1, 19 pages, February 1968.
STAN-CS-68-89 (PB180920), H. R. Bauer, S. Becker and S. L. Graham, %2ALGOL W (revised)%1, 42 pages, March 1968.
STAN-CS-68-90 (PB178177), V. R. Lesser, %2A Multi-Level Computer Organization Designed to Separate Data Accessing from the Computation%1, 20 pages, March 1968.
STAN-CS-68-91 (PB178114), N. Wirth, J. W. Wells, Jr. and E. H. Satterthwaite, Jr., %2The PL360 System%1, 89 pages, April 1968.
STAN-CS-68-92 (PB178078), H. Enea, %2MLISP%1, 18 pages, March 1968.
STAN-CS-68-93 (PB178078), G. E. Forsythe, %2Computer Science and Education%1, 50 pages, March 1968.
STAN-CS-68-94 (SLACR-84), A. C. Shaw (Thesis), %2The Formal Description and Parsing of Pictures%1, 205 pages, April 1968.
STAN-CS-68-95 (not at NTIS), J. Friedman and R. W. Doran, %2A Formal Syntax for Transformational Grammar%1, 47 pages, March 1968.
STAN-CS-68-96 (AD673673), L. B. Smith, %2Interval Arithmetic Determinant Evaluation and its Use in Testing for a Chebyshev System%1, 26 pages, April 1968.
STAN-CS-68-97 (not at NTIS), W. F. Miller, %2Research in the Computer Science Department at Stanford University%1, 49 pages, April 1968.
STAN-CS-68-98 (PB179162), H. Bauer, S. Becker and S. Graham, %2ALGOL W Implementation%1, 147 pages, May 1968.
STAN-CS-68-99 (PB179057), J. Friedman, %2Lecture Notes on Foundations for Computer Science%1, 212 pages, June 1968.
.next page
.once center
<<reports 100 thru 199>>
%3REPORTS 100 THRU 199%1
STAN-CS-68-100 (PB178877), T. H. Bredt, %2A Computer Model of Information Processing in Children%1, 60 pages, June 1968.
STAN-CS-68-101 (AIM-60, AD672923), D. M. Kaplan (Thesis), %2The Formal Theoretic Analysis of Strong Equivalence for Elemental Programs%1, 263 pages, June 1968.
STAN-CS-68-102 (AD677982), A. Pnueli, %2Integer Programming Over a Cone%1, 29 pages, July 1968.
STAN-CS-68-103 (AD692689), T. H. Bredt and J. Friedman, %2Lexical Insertion in Transformational Grammar%1, 47 pages, June 1968.
STAN-CS-68-104 (AD673010), R. Bartels, %2A Numerical Investigation of the Simplex Method%1, 122 pages, July 1968.
STAN-CS-68-105 (AD673674), P. Richman (Thesis), %2Epsilon-Calculus%1, 138 pages, August 1968.
STAN-CS-68-106 (AIM-65, AD673971), B. Huberman (Thesis), %2A Program to Play Chess End Games%1, 168 pages, August 1968.
STAN-CS-68-107 (AD668558), M. Jenkins, %2A Three-Stage Variable-Shift Iteration for Polynomial Zeros and its Relation to Generalized Rayleigh Iteration%1, 46 pages, August 1968.
STAN-CS-68-108 (AD692681), J. Friedman (editor), %2Computer Experiments in Transformational Grammar%1, 36 pages, August 1968.
STAN-CS-68-109 (AD692690), J. Friedman, %2A Computer System for Writing and Testing Transformational Grammars - Final Report%1, 14 pages, September 1968.
STAN-CS-68-110 (PB180920), H. Bauer, S. Becker, S. Graham and E. Satterthwaite, %2ALGOL W (revised)%1, 103 pages, October 1968.
STAN-CS-68-111 (AD692691), J. Friedman and T. Martner, %2Analysis in Transformational Grammar%1, 18 pages, August 1968.
STAN-CS-68-112 (AD692687), J. Friedman and B. Pollack, %2A Control Language for Transformational Grammar%1, 51 pages, August 1968.
STAN-CS-68-113 (PB188705), W. J. Hansen, %2The Impact of Storage Management on Plex Processing Language Implementation%1, 253 pages, July 1968.
STAN-CS-68-114 (PB182156), J. George, %2Calgen, An Interactive Picture Calculus Generation System%1, 75 pages, December 1968.
STAN-CS-68-115 (AD692686), J. Friedman, T. Bredt, R. Doran, T. Martner and B. Pollack, %2Programmer's Manual for a Computer System for Transformational Grammar%1, 199 pages, August 1968.
STAN-CS-68-116 (AIM-72, AD680036), D. Pieper (Thesis), %2The Kinematics of Manipulators Under Computer Control%1, 157 pages, October 1968.
STAN-CS-68-117 (PB182151), D. Adams (Thesis), %2A Computational Model with Data Flow Sequencing%1, 130 pages, December 1968.
STAN-CS-68-118 (AIM-74, AD681027), D. Waterman (Thesis), %2Machine Learning of Heuristics%1, 235 pages, December 1968.
STAN-CS-68-119 (AD692681), G. Dantzig, et al., %2Mathematical Programming Language%1, 91 pages, May 1968.
STAN-CS-68-120 (PB182166), E. Satterthwaite, %2Mutant 0.5: An Experimental Programming Language%1, 60 pages, February 1968.
STAN-CS-69-121 (AD682978), C. B. Moler, %2Accurate Bounds for the Eigenvalues of the Laplacian and Applications to Rhombical Domains%1, 17 pages, February 1969.
STAN-CS-69-122 (AD687450), W. C. Mitchell and D. L. McCraith, %2Heuristic Analysis of Numerical Variants of the Gram-Schmidt Orthonormalization Process%1, 21 pages, February 1969.
STAN-CS-69-123 (AD696982), R. P. Brent, %2Empirical Evidence for a Proposed Distribution of Small Prime Gaps%1, 18 pages, February 1969.
STAN-CS-69-124 (AD687719), G. H. Golub, %2Matrix Decompositions and Statistical Calculations%1, 52 pages, March 1969.
STAN-CS-69-125 (AIM-89, AD692390), J. Feldman, J. Horning, J. Gips and S. Reder, %2Grammatical Complexity and Inference%1, 100 pages, June 1969.
STAN-CS-69-126 (AD702898), G. Dantzig, %2Complementary Spanning Trees%1, 10 pages, March 1969.
STAN-CS-69-127 (AIM-85, AD687720), P. Vicens (Thesis), %2Aspects of Speech Recognition by Computer%1, 210 pages, April 1969.
STAN-CS-69-128 (AD687717), G. H. Golub, B. L. Buzbee and C. W. Nielson, %2The Method of Odd/Even Reduction and Factorization with Application to Poisson's Equation%1, 39 pages, April 1969.
STAN-CS-69-129 (not at NTIS), W. F. Miller, %2Research in the Computer Science Department%1, 82 pages, April 1969.
STAN-CS-69-130 (AIM-83, PB183907), R. C. Schank (Thesis), %2A Conceptual Dependency Representation for a Computer-Oriented Semantics%1, 201 pages, March 1969.
STAN-CS-69-131 (SLAC-96), L. B. Smith (Thesis), %2The Use of Man-Machine Interaction in Data-Fitting Problems%1, 287 pages, March 1969.
STAN-CS-69-132, Never Printed.
STAN-CS-69-133 (AD687718), G. H. Golub and C. Reinsch, %2Handbook Series Linear Algebra: Singular Value Decompositions and Least Squares Solutions%1, 38 pages, May 1969.
STAN-CS-69-134 (AD700923), G. H. Golub and M. A. Saunders, %2Linear Least Squares and Quadratic Programming%1, 38 pages, May 1969.
STAN-CS-69-135 (SLACR-102, not at NTIS), D. Gries, %2Compiler Implementation Language%1, 113 pages, May 1969.
STAN-CS-69-136 (SLACR-104, not at NTIS), I. Pohl (Thesis), %2Bi-Directional and Heuristic Search in Path Problems%1, 157 pages, May 1969.
STAN-CS-69-137 (AD698801), P. Henrici, %2Fixed Points of Analytic Functions%1, 7 pages, July 1969.
STAN-CS-69-138 (AIM-96, AD696394), C. C. Green (Thesis), %2The Application of Theorem Proving to Question-Answering Systems%1, 162 pages, June 1969.
STAN-CS-69-139 (AIM-98, AD695401), J. J. Horning (Thesis), %2A Study of Grammatical Inference%1, 166 pages, August 1969.
STAN-CS-69-140 (AD698799), G. E. Forsythe, %2Design - Then and Now%1, 15 pages, September 1969.
STAN-CS-69-141 (PB188542), G. Dahlquist, S. C. Eisenstat and G. H. Golub, %2Bounds for the Error of Linear Systems of Equations Using the Theory of Moments%1, 26 pages, October 1969.
STAN-CS-69-142, G. H. Golub and R. Underwood, %2Stationary Values of the Ratio of Quadratic Forms Subject to Linear Constraints%1, 22 pages, November 1969.
STAN-CS-69-143 (AD694464), M. A. Jenkins (Thesis), %2Three-Stage Variable-Shift for the Solution of Polynomial Equations with a Posteriori Error Bounds for the Zeros%1 (has also been printed incorrectly as STAN-CS-69-138), 199 pages, August 1969.
STAN-CS-69-144 (AD698800), G. E. Forsythe, %2The Maximum and Minimum of a Positive Definite Quadratic Polynomial on a Sphere are Convex Functions of the Radius%1, 9 pages, July 1969.
STAN-CS-69-145 (AD698798), P. Henrici, %2Methods of Search for Solving Polynomial Equations%1, 25 pages, December 1969.
STAN-CS-70-146 (not at NTIS), G. O. Ramos (Thesis), %2Roundoff Error Analysis of the Fast Fourier Transform%1, February 1970.
STAN-CS-70-147 (AD699897), G. E. Forsythe, %2Pitfalls in Computation, or Why a Math Book Isn't Enough%1, 43 pages, January 1970.
STAN-CS-70-148 (PB188749), D. E. Knuth and R. W. Floyd, %2Notes on Avoiding `GO TO' Statements%1, 15 pages, January 1970.
STAN-CS-70-149 (PB188748), D. E. Knuth, %2Optimum Binary Search Trees%1, 19 pages, January 1970.
STAN-CS-70-150 (AD699898), J. H. Wilkinson, %2Elementary Proof of the Wielandt-Hoffman Theorem and of its Generalization%1, 8 pages, January 1970.
STAN-CS-70-151 (not at NTIS), E. A. Volkov (translated by G. E. Forsythe), %2On the Properties of the Derivatives of the Solution of Laplace's Equation and the Errors of the Method of Finite Differences for Boundary Values in C(2) and C(1,1)%1, 26 pages, January 1970.
STAN-CS-70-152 (not at NTIS), S. Gustafson, %2Rapid Computation of Interpolation Formulae and Mechanical Quadrature Rules%1, 23 pages, February 1970.
STAN-CS-70-153 (AD701358), S. Gustafson, %2Error Propagation by Use of Interpolation Formulae and Quadrature Rules which are Computed Numerically%1, 17 pages, February 1970.
STAN-CS-70-154, H. S. Stone, %2The Spectrum of Incorrectly Decoded Bursts for Cyclic Error Codes%1, 24 pages, February 1970.
STAN-CS-70-155 (AD705508), B. L. Buzbee, G. H. Golub and C. W. Nielson, %2The Method of Odd/Even Reduction and Factorization with Application to Poisson's Equation, Part II%1, 36 pages, March 1970.
STAN-CS-70-156 (AD713972), G. B. Dantzig, %2On a Model for Computing Roundoff Error of a Sum%1, October 1970.
STAN-CS-70-157 (AD705509), R. P. Brent, %2Algorithms for Matrix Multiplication%1, 54 pages, March 1970.
STAN-CS-70-158, H. Stone, %2Parallel Processing with the Perfect Shuffle%1, 36 pages, March 1970.
STAN-CS-70-159 (AD708690), J. A. George, %2The Use of Direct Methods for the Solution of the Discrete Poisson Equation on Non-Rectangular Regions%1, 2 pages, June 1970.
STAN-CS-70-160 (CSL-TR-5, AD707762), T. H. Bredt and E. McCluskey, %2A Model for Parallel Computer Systems%1, 62 pages, April 1970.
STAN-CS-70-161 (SLACR-117, not at NTIS), L. J. Hoffman (Thesis), %2The Formulary Model for Access Control and Privacy in Computer Systems%1, 81 pages, May 1970.
STAN-CS-70-162 (SLACP-760, AD709564), R. H. Bartels, G. H. Golub and M. A. Saunders, %2Numerical Techniques in Mathematical Programming%1, 61 pages, May 1970.
STAN-CS-70-163 (AD708691), M. Malcolm, %2An Algorithm for Floating-Point Accumulation of Sums with Small Relative Error%1, 22 pages, June 1970.
STAN-CS-70-164 (AD708692), V. I. Gordonova (translated by L. Kaufman), %2Estimates of the Roundoff Error in the Solution of a System of Conditional Equations%1, 16 pages, June 1970.
STAN-CS-70-165, H. Bauer and H. Stone, %2The Scheduling of N Tasks with M Operations on Two Processors%1, 34 pages, July 1970.
STAN-CS-70-166 (AIM-128, AD713841), E. J. Sandewall, %2Representing Natural-Language Information in Predicate Calculus%1, 27 pages, July 1970.
STAN-CS-70-167 (AIM-129, AD712460), S. Igarashi, %2Semantics of ALGOL-Like Statements%1, 95 pages, June 1970.
STAN-CS-70-168 (AIM-130, AD713252), M. Kelly (Thesis), %2Visual Identification of People by Computer%1, 138 pages, July 1970.
STAN-CS-70-169 (AIM-126, AD711329), D. Knuth, %2Examples of Formal Semantics%1, 35 pages, August 1970.
STAN-CS-70-170 (CSL-TR-6, AD711334), T. Bredt, %2Analysis and Synthesis of Concurrent Sequential Programs%1, 50 pages, May 1970.
STAN-CS-70-171 (CSL-TR-8, AD714202), T. Bredt, %2A Survey of Models for Parallel Computing%1, 58 pages, August 1970.
STAN-CS-70-172 (CSL-TR-7, AD714180), T. Bredt, %2Analysis of Parallel Systems%1, 59 pages, August 1970.
STAN-CS-70-173 (CSL-TR-9, AD714181), T. Bredt, %2The Mutual Exclusion Problem%1, 68 pages, August 1970.
STAN-CS-70-174 (AIM-127, AD711395), Z. Manna and R. Waldinger, %2Towards Automatic Program Synthesis%1, 55 pages, August 1970.
STAN-CS-70-175 (AD713842), M. Malcolm, %2A Description and Subroutines for Computing Euclidean Inner Products on the IBM 360%1, 14 pages, October 1970.
STAN-CS-70-176 (AIM-131, AD715128), E. A. Feigenbaum, B. G. Buchanan and J. Lederberg, %2On Generality and Problem Solving: A Case Study Using the DENDRAL Program%1, 48 pages, September 1970.
STAN-CS-70-177 (AD715511), R. W. Floyd and D. E. Knuth, %2The Bose-Nelson Sorting Problem%1, 16 pages, October 1970.
STAN-CS-70-178 (not at NTIS), G. Forsythe and W. F. Miller, %2Research Review%1, 186 pages, October 1970.
STAN-CS-70-179 (AIM-135, AD716566), D. C. Smith, %2MLISP%1, 99 pages, October 1970.
STAN-CS-70-180 (AIM-132, AD715665), G. Falk (Thesis), %2Computer Interpretation of Imperfect Line Data as a Three-Dimensional Scene%1, 187 pages, October 1970.
STAN-CS-70-181 (AIM-133), A. C. Hearn, %2Reduce 2 - User's Manual%1, 85 pages, October 1970.
STAN-CS-70-182 (AIM-134, AD748565), J. Tenenbaum (Thesis), %2Accommodation in Computer Vision%1, 452 pages, September 1970.
STAN-CS-70-183 (AIM-136, AD717600), G. M. White, %2Machine Learning Through Signature Trees...Application to Human Speech%1, 40 pages, October 1970.
STAN-CS-70-184 (AD715512), M. Malcolm, %2A Note on a Conjecture of J. Mordell%1, 5 pages, October 1970.
STAN-CS-70-185 (TID22593), E. Nelson, %2Graph Program Simulation%1, 175 pages, October 1970.
STAN-CS-70-186 (AIM-137, AD715513), D. E. Knuth, %2An Empirical Study of Fortran Programs%1, 50 pages, November 1970.
STAN-CS-70-187 (AD197154), G. Dantzig et al., %2Mathematical Programming Language (MPL) Specification Manual for Committee Review%1, 82 pages, December 1970.
STAN-CS-70-188 (AIM-138, PB197161), E. Ashcroft and Z. Manna, %2The Translation of `Go To' Programs to `While' Programs%1, 28 pages, December 1970.
STAN-CS-70-189 (AIM-139, AD717601), Z. Manna, %2Mathematical Theory of Partial Correctness%1, 24 pages, December 1970.
STAN-CS-70-190 (AD719398), J. Hopcroft, %2An N Log N Algorithm for Minimizing States in a Finite Automaton%1, 12 pages, December 1970.
STAN-CS-70-191 (SLACP-904, PB198494), V. Lesser, %2An Introduction to the Direct Emulation of Control Structures by a Parallel Micro-Computer%1, 26 pages, December 1970.
STAN-CS-70-192 (AD719399), J. Hopcroft, %2An N Log N Algorithm for Isomorphism of Planar Triply Connected Graphs%1, 6 pages, December 1970.
STAN-CS-70-193 (AIM-140, not at NTIS), R. Schank, %2Intention, Memory and Computer Understanding%1, 59 pages, December 1970.
STAN-CS-70-194 (PB198495), D. E. Knuth, %2The Art of Computer Programming - Errata et Addenda%1, 28 pages, December 1970.
STAN-CS-70-195 (723871), B. L. Buzbee, F. W. Dorr, A. George and G. H. Golub, %2The Direct Solution of the Discrete Poisson Equation on Irregular Regions%1, 30 pages, December 1970.
STAN-CS-70-196 (AD725167), C. B. Moler, %2Matrix Computations with Fortran and Paging%1, 13 pages, December 1970.
STAN-CS-71-197 (not at NTIS), D. E. Knuth and R. L. Sites, %2Mix/360 User's Guide%1, 11 pages, January 1971.
STAN-CS-71-198 (AD726170), R. Brent (Thesis), %2Algorithms for Finding Zeros and Extrema of Functions without Calculating Derivatives%1, 250 pages, February 1971.
STAN-CS-71-199 (PB198415), Staff, %2Bibliography of Stanford Computer Science Reports 1963-1971%1, 28 pages, February 1971.
.next page
.once center
<<reports 200 thru 299>>
%3REPORTS 200 THRU 299%1
STAN-CS-71-200 (PB198416), J. G. Herriot and C. H. Reinsch, %2ALGOL 60 Procedures for the Calculation of Interpolating Natural Spline Functions%1, 30 pages, February 1971.
STAN-CS-71-201 (AD722434), J. Hopcroft and R. Tarjan, %2Planarity Testing in V Log V Steps: Extended Abstracts%1, 18 pages, February 1971.
STAN-CS-71-202 (SLAC-117, not at NTIS), H. J. Saal and W. Riddle, %2Communicating Semaphores%1, 21 pages, February 1971.
STAN-CS-71-203 (AIM-141, AD730506), B. Buchanan, E. Feigenbaum and J. Lederberg, %2The Heuristic DENDRAL Program for Explaining Empirical Data%1, 20 pages, February 1971.
STAN-CS-71-204 (PB198510), D. Ingalls, %2FETE - a Fortran Execution Time Estimator%1, 12 pages, February 1971.
STAN-CS-71-205 (AIM-142, AD731383), Robin Milner, %2An Algebraic Definition of Simulation Between Programs%1, 20 pages, March 1971.
STAN-CS-71-206 (AD726158), D. E. Knuth, %2Mathematical Analysis of Algorithms%1, 26 pages, March 1971.
STAN-CS-71-207 (AD726169), J. Hopcroft and R. Tarjan, %2Efficient Algorithms for Graph Manipulation%1, 19 pages, March 1971.
STAN-CS-71-208 (AD726171), J. A. George (Thesis), %2Computer Implementation of the Finite Element Method%1, 220 pages, March 1971.
STAN-CS-71-209 (AIM-143, AD724867), J. McCarthy and Staff, %2Project Technical Report%1, 80 pages, March 1971.
STAN-CS-71-210 (PB201917), J. Gerry Purdy, %2Access - a Program for the Catalog and Access of Information%1, 28 pages, March 1971.
STAN-CS-71-211 (AD727104), M. Malcolm, %2An Algorithm to Reveal Properties of Floating-Point Arithmetic%1, 8 pages, March 1971.
STAN-CS-71-212 (AD727107), M. A. Morgana, %2Time and Memory Requirements for Solving Linear Systems%1, 7 pages, March 1971.
STAN-CS-71-213 (PB201629), R. Tarjan, %2The Switchyard Problem: Sorting Using Networks of Queues and Stacks%1, 13 pages, April 1971.
STAN-CS-71-214 (AD727108), R. L. Graham, D. E. Knuth and T. S. Motzkin, %2Complements and Transitive Closures%1, 6 pages, April 1971.
STAN-CS-71-215 (AD727115), M. Malcolm, %2PL360 (Revised) - a Programming Language for the IBM 360%1, 91 pages, May 1971.
STAN-CS-71-216 (AIM-147, AD732457), R. E. Kling, %2Reasoning by Analogy with Applications to Heuristic Problem Solving: a Case Study%1, 180 pages, May 1971.
STAN-CS-71-217 (AIM-148, AD731730), E. A. Ashcroft, Z. Manna and A. Pnueli, %2Decidable Properties of Monadic Functional Schemas%1, 9 pages, May 1971.
STAN-CS-71-218 (AD731038), N. G. de Bruijn, D. E. Knuth and S. O. Rice, %2The Average Height of Plane Trees%1, 7 pages, May 1971.
STAN-CS-71-219 (AIM-144, not at NTIS), Lynn Quam (Thesis), %2Computer Comparison of Pictures%1, 120 pages, May 1971.
STAN-CS-71-220 (CSL-4, AD727116), Harold Stone, %2Dynamic Memories with Enhanced Data Access%1, 32 pages, February 1971.
STAN-CS-71-221 (AIM-145, AD731729), B. G. Buchanan, E. Feigenbaum and J. Lederberg, %2A Heuristic Programming Study of Theory Formation in Science%1, 41 pages, June 1971.
STAN-CS-71-222 (PB235417/AS), W. J. Meyers (Thesis), %2Linear Representation of Tree Structure (a Mathematical Theory of Parenthesis-Free Notations)%1, 245 pages, June 1971.
STAN-CS-71-223 (PB203429), Susan Graham (Thesis), %2Precedence Languages and Bounded Right Context Languages%1, 192 pages, July 1971.
STAN-CS-71-224 (AIM-146, PB212183), A. Ershov, %2Parallel Programming%1, 15 pages, July 1971.
STAN-CS-71-225 (PB203344), Ake Bjorck and Gene Golub, %2Numerical Methods for Computing Angles Between Linear Subspaces%1, 30 pages, July 1971.
STAN-CS-71-226 (SLAC-133), J. E. George, %2SIMPLE - A Simple Precedence Translator Writing System%1, 92 pages, July 1971.
STAN-CS-71-227 (SLAC-134), J. E. George (Thesis), %2GEMS - A Graphical Experimental Meta System%1, 184 pages, July 1971.
STAN-CS-71-228 (PB203343), Linda Kaufman, %2Function Minimization and Automatic Therapeutic Control%1, 30 pages, July 1971.
STAN-CS-71-229 (AD732766), E. H. Lee and G. E. Forsythe, %2Variational Study of Nonlinear Spline Curves%1, 22 pages, August 1971.
STAN-CS-71-230 (PB203601), R. L. Sites, %2ALGOL W Reference Manual%1, 141 pages, August 1971.
STAN-CS-71-231 (AIM-149, AD732644), Rod Schmidt (Thesis), %2A Study of the Real-Time Control of a Computer Driven Vehicle%1, 180 pages, August 1971.
STAN-CS-71-232 (AD733073), C. B. Moler and G. W. Stewart, %2An Algorithm for the Generalized Matrix Eigenvalue Problem%1, 50 pages, August 1971.
STAN-CS-71-233 (not at NTIS), Wayne Wilner, %2Declarative Semantic Definition%1, 211 pages, August 1971.
STAN-CS-71-234 (not at NTIS), Gene H. Golub, %2Some Modified Eigenvalue Problems%1, 38 pages, September 1971.
STAN-CS-71-235 (AIM-150, not at NTIS), R. W. Floyd, %2Toward Iterative Design of Correct Programs%1, 12 pages, September 1971.
STAN-CS-71-236 (AD737648), G. H. Golub and George Styan, %2Numerical Computation for Univariate Linear Models%1, 35 pages, September 1971.
STAN-CS-71-237 (CSL-TR-16, AD737270), D. C. Van Voorhis, %2A Generalization of the Divide-Sort-Merge Strategy for Sorting Networks%1, 67 pages, September 1971.
STAN-CS-71-238 (CSL-TR-17, AD735901), D. C. Van Voorhis, %2A Lower Bound for Sorting Networks That Use the Divide-Sort-Merge Strategy%1, 13 pages, September 1971.
STAN-CS-71-239 (CSL-TR-18, AD736610), D. C. Van Voorhis, %2Large [g.d.] Sorting Networks%1, 84 pages, September 1971.
STAN-CS-71-240 (AIM-151, AD738568), Ralph London, %2Correctness of Two Compilers for a LISP Subset%1, 42 pages, October 1971.
STAN-CS-71-241 (AIM-152, AD732642), Alan Bierman, %2On the Inference of Turing Machines from Sample Computations%1, 31 pages, October 1971.
STAN-CS-71-242 (AIM-153, AD738569), Patrick Hayes, %2The Frame Problem and Related Problems in AI%1, 24 pages, November 1971.
STAN-CS-71-243 (AIM-154, AD738570), Z. Manna, S. Ness and J. Vuillemin, %2Inductive Methods for Proving Properties of Programs%1, 24 pages, November 1971.
STAN-CS-71-244 (AD738027), R. Tarjan (Thesis), %2An Efficient Planarity Algorithm%1, 154 pages, November 1971.
STAN-CS-71-245 (AIM-155, not at NTIS), John Ryder (Thesis), %2Heuristic Analysis of Large Trees as Generated in the Game of Go%1, 350 pages, November 1971.
STAN-CS-71-246 (AIM-156, AD740141), Ken Colby, S. Weber, Frank Hilf and H. Kraemer, %2A Resemblance Test for the Validation of a Computer Simulation of Paranoid Processing%1, 30 pages, November 1971.
STAN-CS-71-247 (AIM-157, not at NTIS), Yorick Wilks, %2On Small Head -- Some Remarks on the Use of 'Model' in Linguistics%1, 16 pages, December 1971.
STAN-CS-71-248 (AD739335), Michael Fredman and Donald Knuth, %2Recurrence Relations Based on Minimization%1, 35 pages, December 1971.
STAN-CS-71-249 (not at NTIS), Bary Pollack, %2An Annotated Bibliography on the Construction of Compilers%1, 140 pages, December 1971.
STAN-CS-71-250 (AIM-158, AD740127), Ashok Chandra and Zohar Manna, %2Program Schemas with Equality%1, 13 pages, December 1971.
STAN-CS-72-251 (CSL-TR-19, AD736814), Harold Stone, %2An Efficient Parallel Algorithm for the Solution of a Tridiagonal Linear System of Equations%1, 24 pages, January 1972.
STAN-CS-72-252 (SU326 P30 14), M. A. Saunders, %2Large-Scale Linear Programming Using the Cholesky Factorization%1, 40 pages, January 1972.
STAN-CS-72-253 (AIM-159, not at NTIS), J. A. Feldman and P. C. Shields, %2Total Complexity and the Inference of Best Programs%1, January 1972.
STAN-CS-72-254 (AD740330), G. E. Forsythe, %2Von Neumann's Comparison Method for Random Sampling from the Normal and Other Distributions%1, 19 pages, January 1972.
STAN-CS-72-255 (AIM-160, AD740140), J. A. Feldman, %2Automatic Programming%1, 20 pages, January 1972.
STAN-CS-72-256 (AD740331), V. Chvatal, %2Edmonds Polyhedra and Weakly Hamiltonian Graphs%1, 22 pages, January 1972.
STAN-CS-72-257 (PB208519), N. Wirth, %2On Pascal, Code Generation, and the CDC 6000 Computer%1, 39 pages, February 1972.
STAN-CS-72-258 (AD740332), Harold Brown, %2Some Basic Machine Algorithms for Integral Order Computations%1, 15 pages, February 1972.
STAN-CS-72-259 (PB208595), Clark A. Crane (Thesis), %2Linear Lists and Priority Queues as Balanced Binary Trees%1, 131 pages, February 1972.
STAN-CS-72-260 (AD740110), Vaughan R. Pratt (Thesis), %2Shellsort and Sorting Networks%1, 59 pages, February 1972.
STAN-CS-72-261 (SU326 P30 15), Gene H. Golub and Victor Pereyra, %2The Differentiation of Pseudoinverses and Nonlinear Least Squares Problems Whose Variables Separate%1, 35 pages, February 1972.
STAN-CS-72-262 (PB209357), Staff, %2Bibliography%1, 36 pages, February 1972.
STAN-CS-72-263 (AD741189), David A. Klarner and Ronald Rivest, %2A Procedure for Improving the Upper Bound for the Number of n-Ominoes%1, 31 pages, February 1972.
STAN-CS-72-264 (AIM-161, AD741189), Yorick Wilks, %2Artificial Intelligence Approach to Machine Translation%1, 42 pages, February 1972.
STAN-CS-72-265 (AIM-162, AD744634), Neil Goldman, Roger Schank, Chuck Rieger and Chris Riesbeck, %2Primitive Concepts Underlying Verbs of Thought%1, 80 pages, February 1972.
STAN-CS-72-266 (AIM-163, not at NTIS), Jean Cadiou (Thesis), %2Recursive Definitions of Partial Functions and Their Computation%1, 160 pages, March 1972.
STAN-CS-72-267 (PB209629), Pierre E. Bonzon, %2MPL (An Appraisal Based on Practical Experiment)%1, 26 pages, March 1972.
STAN-CS-72-268 (AD742348), V. Chvatal, %2Degrees and Matchings%1, 16 pages, March 1972.
STAN-CS-72-269 (AD742747), David Klarner and R. Rado, %2Arithmetic Properties of Certain Recursively Defined Sets%1, 30 pages, March 1972.
STAN-CS-72-270 (PB209616), G. Golub, J. H. Wilkinson and R. Underwood, %2The Lanczos Algorithm for the Symmetric Ax = λBx Problem%1, 21 pages, March 1972.
STAN-CS-72-271 (not at NTIS), William E. Riddle (Thesis), %2The Modeling and Analysis of Supervisory Systems%1, 174 pages, March 1972.
STAN-CS-72-272 (AIM-164, AD742748), Zohar Manna and J. Vuillemin, %2Fixedpoint Approach to the Theory of Computation%1, 25 pages, March 1972.
STAN-CS-72-273 (PB209806), V. Chvatal and J. Sichler, %2Chromatic Automorphisms of Graphs%1, 12 pages, March 1972.
STAN-CS-72-274 (AD742749), D. Klarner and Richard Rado, %2Linear Combinations of Sets of Consecutive Integers%1, 12 pages, March 1972.
STAN-CS-72-275 (AD742750), David A. Klarner, %2Sets Generated by Iteration of a Linear Operation%1, 16 pages, March 1972.
STAN-CS-72-276 (AD745022), Linda Kaufman (Thesis), %2A Generalized LR Method to Solve Ax = Bx%1, 70 pages, April 1972. STAN-CS-72-277 (SLAC-149, not at NTIS), C. T. Zahn, %2Region Boundaries on a Triangular Grid%1, 40 pages, April 1972. STAN-CS-72-278 (SU326 P30-17), Paul Concus and Gene H. Golub, %2Use of Fast Direct Methods for the Efficient Numerical Solution of Nonseparable Elliptic Equations%1, April 1972. STAN-CS-72-279 (AD744313), Michael Osborne, %2Topics in Optimization%1, 143 pages, April 1972. STAN-CS-72-280 (AIM-165, AD742751), D. A. Bochvar, %2Two Papers on Partial Predicate Calculus%1, April 1972. STAN-CS-72-281 (AIM-166, AD743598), Lynn Quam, Sydney Liebes, Robert Tucker, Marsha Jo Hanna and Botond Eross, %2Computer Interactive Picture Processing%1, 41 pages, April 1972. STAN-CS-72-282 (AIM-167, AD747254), Ashok K. Chandra, %2Efficient Compilation of Linear Recursive Programs%1, 40 pages, April 1972. STAN-CS-72-283 (not at NTIS), David R. Stoutemyer (Thesis), %2Num*5Qe.once center <<reports 300 thru 399>> %3REPORTS 300 THRU 399%1 STAN-CS-72-300 (CSL-TN-17, AD749848), Marc T. Kaufman, %2Counterexample of a Conjecture of Fujii, Kasami and Ninomiya%1, 5 pages, July 1972. STAN-CS-72-301 (SU326 P30-21), Michael A. Saunders, %2Product Form of the Cholesky Factorization for Large-Scale Linear Programming%1, 35 pages, July 1972. STAN-CS-72-302 (SU326 P30-19), G. H. Golub, %2Some Uses of the Lanczos Algorithm in Numerical Linear Algebra%1, 23 pages, August 1972. STAN-CS-72-303 (AIM-174, PB212827), F. Lockwood Morris (Thesis), %2Correctness of Translations of Programming Languages - an Algebraic Approach%1, 125 pages, August 1972. STAN-CS-72-304 (SU326 P30-20), R. S. Anderssen and G. H. Golub, %2Richardson's Non-Stationary Matrix Iterative Procedure%1, 76 pages, August 1972. STAN-CS-72-305 (AIM-173, AD755139), Gerald Agin (Thesis), %2Representation and Description of Curved Objects%1, 125 pages, August 1972. STAN-CS-72-306 (SU326 P23-X-2), Bary W. 
Pollack, %2A Bibliography on Computer Graphics%1, 145 pages, August 1972. STAN-CS-72-307 (AIM-175, not at NTIS), Hozumi Tanaka, %2Hadamard Transform for Speech Wave Analysis%1, August 1972. STAN-CS-72-308 (AIM-176, AD754109), J. A. Feldman, J. R. Low, R. H. Taylor and D. C. Swinehart, %2Recent Development in SAIL - an ALGOL Based Language for Artificial Intelligence%1, 22 pages, August 1972. STAN-CS-72-309 (CSL-TR-157, not at NTIS), V. Lesser (Thesis), %2Dynamic Control Structures and Their Use in Emulation%1, 251 pages, August 1972. STAN-CS-72-310 (CSL-TR-34, AD750671), Marc T. Kaufman, %2Anomalies in Scheduling Unit-Time Tasks%1, 22 pages, September 1972. STAN-CS-72-311 (AIM-177, not at NTIS), Richard Paul (Thesis), %2Modelling, Trajectory Calculation and Servoing of a Computer Controlled Arm%1, September 1972. STAN-CS-72-312 (AIM-178, AD754108), Ahron Gill, %2Visual Feedback and Related Problems in Computer Controlled Hand-Eye Coordination%1, 134 pages, September 1972. STAN-CS-72-313 (PB218353/1), Staff, %2Bibliography of Computer Science Reports%1, 42 pages, September 1972. STAN-CS-72-314 (CSL-TR-43, PB212893), Peter M. Kogge (Thesis, Part I), %2Parallel Algorithms for the Efficient Solution of Recurrence Problems%1, 74 pages, September 1972. STAN-CS-72-315 (CSL-TR-44, PB212894), Peter M. Kogge (Thesis, Part II), %2The Numerical Stability of Parallel Algorithms for Solving Recurrence Problems%1, 49 pages, September 1972. STAN-CS-72-316 (CSL-TR-45, PB212828), Peter M. Kogge (Thesis, Part III), %2Minimal Paralellism in the Solution of Recurrence Problems%1, 45 pages, September 1972. STAN-CS-72-317 (CSL-TR-26, AD750672), S. H. Fuller and F. Baskett, %2An Analysis of Drum Storage Units%1, 69 pages, October 1972. STAN-CS-72-318 (AD755140), H. Brown, L. Masinter and L. Hjelmeland, %2Constructive Graph Labeling Using Double Cosets%1, 50 pages, October STAN-CS-72-319 (SU326 P30-22), Gene H. Golub and James M. 
Varah, %2On a Characterization of the Best l2 Scaling of a Matrix%1, 14 pages, October 1972. STAN-CS-72-320 (AIM-179), Bruce G. Baumgart, %2Winged Edge Polyhedra Representation%1, 46 pages, October 1972. STAN-CS-72-321 (AIM-180, AD759712), Ruzena Bajcsy (Thesis), %2Computer Identification of Textured Visual Scenes%1, 156 pages, October 1972. STAN-CS-72-322 (SU326 P30-23), P. E. Gill, G. H. Golub, W. Murray and M. A. Saunders, %2Methods for Modifying Matrix Factorizations%1, 62 pages, November 1972. STAN-CS-72-323, Michael A. Malcolm and John Palmer, %2A Fast Method for Solving a Class of Tri-Diagonal Linear Systems%1 (also listed on the abstract as %2On the LU Decomposition of Toeplitz Matrices%1), 11 pages, November 1972. STAN-CS-72-324 (CSL-TR-48, PB214612), Henry R. Bauer, III (Thesis), %2Subproblems of the m %4x%2 n Sequencing Problem%1, 115 pages, November 1972. STAN-CS-72-325 (AIM-181), Bruce G. Buchanan, %2Review of Hubert Dreyfus' What Computers Can't Do: A Critique of Artificial Reason%1, 14 pages, November 1972. STAN-CS-72-326 (AIM-182, AD754107), Kenneth Mark Colby and Franklin Dennis Hilf, %2Can Expert Judges, Using Transcripts of Teletyped Psychiatric Interviews, Distinguish Human Paranoid Patients from a Computer Simulation of Paranoid Processes?%1, 12 pages, December 1972. STAN-CS-72-327 (AD755138), David A. Klarner and Ronald L. Rivest, %2Asymptotic Bounds for the Number of Convex n-Ominoes%1, 15 pages, December 1972. STAN-CS-72-328 (CSL-TR-31, PB218929), Harold Gabow, %2An Efficient Implementation of Edmonds' Maximum Matching Algorithm%1, 68 pages, December 1972. STAN-CS-72-329 (PB218875), Isu Fang (Thesis), %2Folds, A Declarative Formal Language Definition System%1, 290 pages, December 1972. STAN-CS-73-330 (AIM-184, AD758651), Malcolm Newey, %2Axioms and Theorems for Integers, Lists and Finite Sets in LCF%1, 53 pages, January 1973. STAN-CS-73-331 (AIM-187, AD757364), George Collins, %2The Computing Time of the Euclidean Algorithm%1, 17 pages, January 1973.
STAN-CS-73-332 (AIM-186, AD758645), Robin Milner, %2Models of LCF%1, 17 pages, January 1973. STAN-CS-73-333 (AIM-185, AD757367), Zohar Manna and Ashok Chandra, %2On the Power of Programming Features%1, 29 pages, January 1973. STAN-CS-73-334 (AD757366), Michael A. Malcolm and Cleve B. Moler, %2URAND, A Universal Random Number Generator%1, 10 pages, January 1973. STAN-CS-73-335 (SU326 P30-24), G. Golub and E. Seneta, %2Computation of the Stationary Distribution of an Infinite Markov Matrix%1, 12 pages, January 1973. STAN-CS-73-336 (AIM-188, AD758646), Ashok K. Chandra (Thesis), %2On the Properties and Applications of Program Schemas%1, 225 pages, January 1973. STAN-CS-73-337 (AIM-189, PB218682), James Gips and George Stiny, %2Aesthetics Systems%1, 22 pages, January 1973. STAN-CS-73-338 (AD759713), David A. Klarner, %2A Finite Basis Theorem Revisited%1, 10 pages, February 1973. STAN-CS-73-339 (SU326 P30-25), Gene H. Golub and Warren Dent, %2Computation of the Limited Information Maximum Likelihood Estimator%1, 27 pages, February 1973. STAN-CS-73-340 (AIM-190, AD759714), Malcolm Newey, %2Notes on a Problem Involving Permutations as Subsequences%1, 20 pages, March 1973. STAN-CS-73-341 (AIM-191, AD764272), Shmuel Katz and Zohar Manna, %2A Heuristic Approach to Program Verification%1, 40 pages, March 1973. STAN-CS-73-342 (AD759715), Donald Knuth, %2Matroid Partitioning%1, 12 pages, March 1973. STAN-CS-73-343 (not at NTIS), David R. Levine (Thesis), %2Computer-Based Analytic Grading for German Grammar Instruction%1, 220 pages, March 1973. STAN-CS-73-344 (AIM-183, AD759716), Roger C. Schank, %2The Fourteen Primitive Actions and Their Inferences%1, 71 pages, March 1973. STAN-CS-73-345 (AIM-192, not at NTIS), George Collins and Ellis Horowitz, %2The Minimum Root Separation of a Polynomial%1, 25 pages, April 1973.
STAN-CS-73-346 (AIM-193, AD759717), Kenneth Mark Colby, %2The Rationale for Computer Based Treatment of Language Difficulties in Nonspeaking Autistic Children%1, 8 pages, April 1973. STAN-CS-73-347 (AIM-194, PB221170/4), Kenneth M. Colby and Franklin Dennis Hilf, %2Multi Dimensional Analysis in Evaluating a Simulation of Paranoid Thought Processes%1, 10 pages, April 1973. STAN-CS-73-348 (SU326 P30-26, PB222513), V. Pereyra, %2High Order Finite Difference Solution of Differential Equations%1, 86 pages, April 1973. STAN-CS-73-349 (PB221115), Manuel Blum, Robert Floyd, Vaughn Pratt, Ronald Rivest and Robert Tarjan, %2Time Bounds for Selection%1, and Robert Floyd and Ronald Rivest, %2Expected Time Bounds for Selection%1, 51 pages, April 1973. STAN-CS-73-350 (CSL-TR-53, AD761177), Marc T. Kaufman, %2An Almost-Optimal Algorithm for the Assembly Line Scheduling Problem%1, 21 pages, April 1973. STAN-CS-73-351 (CSL-TR-27, AD761175), Samuel H. Fuller, %2Performance of an I/O Channel with Multiple Paging Drums%1, 8 pages, April 1973. STAN-CS-73-352 (CSL-TR-28, AD761176), Samuel H. Fuller, %2The Expected Difference Between the SLTF and MTPT Drum Scheduling Disciplines%1, 6 pages, April 1973. STAN-CS-73-353 (CSL-TR-29, AD761185), Samuel H. Fuller, %2Random Arrivals and MTPT Disc Scheduling Disciplines%1, 7 pages, April 1973. STAN-CS-73-354 (PB221165/4), David A. Klarner, %2The Number of SDR's in Certain Regular Systems%1, 7 pages, April 1973. STAN-CS-73-355 (CSL-TR-57, AD764598), Thomas G. Price, %2An Analysis of Central Processor Scheduling in Multiprogrammed Computer Systems%1, 8 pages, April 1973. STAN-CS-73-356 (AIM-195, PB222164), David Canfield Smith and Horace J. Enea, %2MLISP2%1, 92 pages, May 1973. STAN-CS-73-357 (AIM-196, AD762471), Neil M. Goldman and Christopher K. Riesbeck, %2A Conceptually Based Sentence Paraphraser%1, 88 pages, May 1973. STAN-CS-73-358 (AIM-197, AD762470), Roger C. Schank and Charles J.
Rieger III, %2Inference and the Computer Understanding of Natural Language%1, 40 pages, May 1973. STAN-CS-73-359 (CSL-TN-25, PB222064), Harold Stone, %2A Note on a Combinatorial Problem of Burnett and Coffman%1, 8 pages, May 1973. STAN-CS-73-360 (CSL-TR-33, AD764014), Richard R. Muntz and Forest Baskett, %2Open, Closed and Mixed Networks of Queues with Different Classes of Customers%1, 40 pages, May 1973. STAN-CS-73-361 (Serra, AD764273), Harold Brown and Larry Masinter, %2An Algorithm for the Construction of the Graphs of Organic Molecules%1, 25 pages, May 1973. STAN-CS-73-362, appears in print as STAN-CS-73-398. STAN-CS-73-363 (Serra, PB222099), Linda C. Kaufman (Thesis), %2The LZ Algorithm to Solve the Generalized Eigenvalue Problem%1, 101 pages, May 1973. STAN-CS-73-364 (AIM-198, AD763611), R. B. Thosar, %2Estimation of Probability Density Using Signature Tables for Application to Pattern Recognition%1, 36 pages, May 1973. STAN-CS-73-365 (AIM-200, AD767331), Shigeru Igarashi, Ralph L. London and David C. Luckham, %2Automatic Program Verification I: Logical Basis and its Implementation%1, 50 pages, May 1973. STAN-CS-73-366 (AIM-201, AD763673), Gunnar Rutger Grape (Thesis), %2Model Based (Intermediate-Level) Computer Vision%1, 256 pages, May 1973. STAN-CS-73-367 (AD763601), Ole Amble and Donald E. Knuth, %2Ordered Hash Tables%1, 34 pages, May 1973. STAN-CS-73-368 (AIM-202, AD764396), Roger C. Schank and Yorick Wilks, %2The Goals of Linguistic Theory Revisited%1, 44 pages, May 1973. STAN-CS-73-369 (AIM-203, AD764274), Roger C. Schank, %2The Development of Conceptual Structures in Children%1, 26 pages, May 1973. STAN-CS-73-370 (AIM-205, AD764288), N. S. Sridharan, G. Gelernter, A. J. Hart, W. F. Fowler and H. J. Shue, %2A Heuristic Program to Discover Syntheses for Complex Organic Molecules%1, 30 pages, June 1973. STAN-CS-73-371 (AD223572/AS), Donald E. Knuth, %2A review of `Structured Programming' %1, 25 pages, June 1973. STAN-CS-73-372 (AD767970), Michael A.
Malcolm (Thesis, part II), %2Nonlinear Spline Functions%1, 60 pages, June 1973. STAN-CS-73-373 (AIM-204, AD765353/BWC), Kurt A. van Lehn (editor), %2SAIL User Manual%1, 200 pages, June 1973. STAN-CS-73-374 (AD764275), Michael A. Malcolm (Thesis excerpt), %2A Machine-Independent ALGOL Procedure for Accurate Floating-Point Summation%1, 5 pages, June 1973. STAN-CS-73-375 (SU-326 P30-27), D. Fischer, G. Golub, O. Hald, C. Levin and O. Widlund, %2On Fourier-Toeplitz Methods for Separable Elliptic Problems%1, 30 pages, June 1973. STAN-CS-73-376 (SU326 P30-28), Gunter Meinardus and G. D. Taylor, %2Lower Estimates for the Error of Best Uniform Approximation%1, 20 pages, June 1973. STAN-CS-73-377 (AIM-206, AD764652), Yorick Wilks, %2Preference Semantics%1, 20 pages, June 1973. STAN-CS-73-378 (AIM-207, AD767333), James Anderson Moorer, %2The `Optimum-Comb' Method of Pitch Period Analysis in Speech%1, 25 pages, June 1973. STAN-CS-73-379 (AIM-208, AD767334), James Anderson Moorer, %2The Heterodyne Filter as a Tool for Analysis of Transient Waveforms%1, 30 pages, June 1973. STAN-CS-73-380 (AIM-209, AD767695/O WC), Yoram Yakimovsky (Thesis), %2Scene Analysis Using a Semantic Base for Region Growing%1, 120 pages, June 1973. STAN-CS-73-381 (AD767694), N. S. Sridharan, %2Computer Generation of Vertex-Graphs%1, 18 pages, July 1973. STAN-CS-73-382 (AIM-210, AD767335), Zohar Manna and Amir Pnueli, %2Axiomatic Approach to Total Correctness of Programs%1, 26 pages, July 1973. STAN-CS-73-383 (AIM-211, AD769673), Yorick Wilks, %2Natural Language Inference%1, 47 pages, July 1973. STAN-CS-73-384 (AIM-212, AD769379), Annette Herskovits, %2The Generation of French from a Semantic Representation%1, 50 pages, August 1973. STAN-CS-73-385 (AIM-213, not at NTIS), R. B. Thosar, %2Recognition of Continuous Speech: Segmentation and Classification Using Signature Table Adaptation%1, 37 pages, August 1973. STAN-CS-73-386 (AIM-214, AD767332), W. A. Perkins and T. O.
Binford, %2A Corner Finder for Visual Feed-Back%1, 59 pages, August 1973. STAN-CS-73-387 (AIM-215, AD769380), Bruce G. Buchanan and N. S. Sridharan, %2Analysis of Behavior of Chemical Molecules: Rule Formation on Non-Homogeneous Classes of Objects%1, 15 pages, August 1973. STAN-CS-73-388 (CSL-TR-74, PB226044/AS), R. C. Swanson, %2Interconnections for Parallel Memories to Unscramble P-Ordered Vectors%1, 52 pages, August 1973. STAN-CS-73-389 (AIM-216, AD771299), L. Masinter, N. S. Sridharan, J. Lederberg and D. H. Smith, %2Applications of Artificial Intelligence for Chemical Inference XII: Exhaustive Generation of Cyclic and Acyclic Isomers%1, 60 pages, September 1973. STAN-CS-73-390 (not at NTIS), James Gips, %2A Construction for the Inverse of a Turing Machine%1, 8 pages, September 1973. STAN-CS-73-391 (AIM-217, AD770610), N. S. Sridharan, %2Search Strategies for the Task of Organic Chemical Synthesis%1, 32 pages, September 1973. STAN-CS-73-392, Donald E. Knuth, %2Sorting and Searching - Errata and Addenda%1, 31 pages, October 1973. STAN-CS-73-393 (AIM-218, AD772063/4WC), Jean Etienne Vuillemin (Thesis), %2Proof Techniques for Recursive Programs%1, 97 pages, October 1973. STAN-CS-73-394 (AIM-219, AD769674), C. A. R. Hoare, %2Parallel Programming: An Axiomatic Approach%1, 33 pages, October 1973. STAN-CS-73-395, Staff, %2Bibliography of Computer Science Reports%1, 48 pages, October 1973. STAN-CS-73-396 (AIM-220, AD772064), Robert Bolles and Richard Paul, %2The Use of Sensory Feedback in a Programmable Assembly System%1, 24 pages, October 1973. STAN-CS-73-397 (SU326 P30-28A), Peter Henrici, %2Computational Complex Analysis%1, 14 pages, October 1973. STAN-CS-73-398 (AIM-199, AD771300), Bruce G. Baumgart, %2Image Contouring and Comparing%1, 52 pages, October 1973. STAN-CS-73-399 (SU326 P30-29), C. C. Paige and M. A. Saunders, %2Solution of Sparse Indefinite Systems of Equations and Least Squares Problems%1, 47 pages, October 1973.
.next page .once center <<reports 400 thru 499>> %3REPORTS 400 THRU 499%1 STAN-CS-73-400 (AIM-223, AD772509), C. A. R. Hoare, %2Recursive Data Structures%1, 32 pages, November 1973. STAN-CS-73-401 (PB226691/AS), C. A. R. Hoare, %2Monitors: An Operating System Structuring Concept%1, 25 pages, November 1973. STAN-CS-73-402 (PB229616/AS), J. G. Herriot and C. H. Reinsch, %2ALGOL 60 Procedures for the Calculation of Interpolating Natural Quintic Spline Functions%1, 40 pages, November 1973. STAN-CS-73-403 (AIM-224, AD773391), C. A. R. Hoare, %2Hints on Programming Language Design%1, 29 pages, December 1973. STAN-CS-74-404 (AD775452), N. S. Sridharan, %2A Catalog of Quadri/Trivalent Graphs%1, 48 pages, January 1974. STAN-CS-74-405 (not at NTIS), R. Davis and M. Wright, %2Stanford Computer Science Department: Research Report%1, 38 pages, January 1974. STAN-CS-74-406 (AIM-225, AD775645), W. A. Perkins, %2Memory Model for a Robot%1, January 1974. STAN-CS-74-407 (AIM-226, AD778310), F. Wright, %2FAIL Manual%1, 50 pages, February 1974. STAN-CS-74-408 (AIM-227, AD-A003 483), Arthur Thomas and Thomas Binford, %2Information Processing Analysis of Visual Perception: a review%1, 40 pages, February 1974. STAN-CS-74-409 (AIM-228, AD776233), John McCarthy and Staff, %2Final Report: Ten Years of Research in Artificial Intelligence. An Overview%1, February 1974. STAN-CS-74-410 (CSL-TR-46, PB231926/AS), James L. Peterson (Thesis), %2Modelling of Parallel Systems%1, 241 pages, February 1974. STAN-CS-74-411 (AIM-229), D. B. Anderson, T. O. Binford, A. J. Thomas, R. W. Weyhrauch and Y. A. Wilks, %2After Leibniz...: Discussions on Philosophy and Artificial Intelligence%1, 50 pages, March 1974. STAN-CS-74-412 (AIM-230, AD786721), Daniel C. Swinehart (Thesis), %2COPILOT: A Multiple Process Approach to Interactive Programming Systems%1, March 1974. STAN-CS-74-413 (AIM-231, AD-A001 814), James Gips (Thesis), %2Shape Grammars and Their Uses%1, 243 pages, March 1974.
STAN-CS-74-414 (AIM-232, AD780452), Bruce G. Baumgart, %2GEOMED: A Geometric Editor%1, April 1974. STAN-CS-74-415 (PB233065/AS), Ronald L. Rivest (Thesis), %2Analysis of Associative Retrieval Algorithms%1, 109 pages, April 1974. STAN-CS-74-416 (PB233507/AS), Donald E. Knuth, %2Structured Programming with %1Go To%2 Statements%1, 100 pages, April 1974. STAN-CS-74-417 (PB234102/AS), Richard L. Sites, %2Some Thoughts on Proving That Programs Terminate Cleanly%1, 68 pages, May 1974. STAN-CS-74-418 (PB233045/AS), Richard L. Sites (Thesis), %2Proving That Computer Programs Terminate Cleanly%1, 143 pages, May 1974. STAN-CS-74-419 (AIM-233, AD-A000 086), Charles Rieger III (Thesis), %2Conceptual Memory: A Theory and Computer Program for Processing the Meaning Content of Natural Language Utterances%1, 393 pages, May 1974. STAN-CS-74-420 (CSL-TR-50, PB232543/AS), John Wakerly, %2Partially Self-Checking Circuits and Their Use in Performing Logical Operations%1, 46 pages, May 1974. STAN-CS-74-421 (CSL-TR-51, PB232356/AS), John Wakerly (Thesis), %2Low-Cost Error Detection Techniques for Small Computers%1, 232 pages, May 1974. STAN-CS-74-422 (CSL-TR-79, NASA-TM-62,370), Harold Stone, %2Parallel Tri-Diagonal Equation Solvers%1, 42 pages, May 1974. STAN-CS-74-423 (CSL-TN-41, PB232860/AS), Gururaj S. Rao, %2Asymptotic Representation of the Average Number of Active Modules in an N-Way Interleaved Memory%1, 16 pages, May 1974. STAN-CS-74-424 (CSL-TR-80, PB232602/AS), Maurice Schlumberger (Thesis, chapter 1), %2Logarithmic Communications Networks%1, 38 pages, May 1974. STAN-CS-74-425 (CSL-TR-81, PB232598/AS), Maurice Schlumberger (Thesis, chapter 2), %2Vulnerability of deBruijn Communications Networks%1, 68 pages, May 1974. STAN-CS-74-426 (CSL-TR-82, PB232597), Maurice Schlumberger (Thesis, chapter 3), %2Queueing Equal Length Messages in a Logarithmic Network%1, 75 pages, May 1974.
STAN-CS-74-427 (CSL-TN-36, PB232624/AS), Tomas Lang (Thesis excerpt), %2Performing the Perfect Shuffle in an Array Computer%1, 18 pages, May 1974. STAN-CS-74-428 (CSL-TR-76, PB232633/AS), Tomas Lang (Thesis excerpt), %2Interconnections Between Processors and Memory Modules Using the Shuffle-Exchange Network%1, 32 pages, May 1974. STAN-CS-74-429 (CSL-TR-70, PB232623/AS), Samuel E. Orcutt (Thesis excerpt), %2Efficient Data Routing Schemes for ILLIAC IV-Type Computers%1, 31 pages, May 1974. STAN-CS-74-430 (CSL-TR-71, PB234513/AS), Samuel E. Orcutt (Thesis excerpt), %2A Novel Parallel Computer Architecture and Some Applications%1, 44 pages, May 1974. STAN-CS-74-431 (AIM-234, not at NTIS), Kenneth Mark Colby and Roger C. Parkison, %2Pattern Matching Rules for the Recognition of Natural Language Dialogue Expressions%1, 23 pages, May 1974. STAN-CS-74-432 (AIM-235, AD-A006 898), Richard Weyhrauch and Arthur Thomas, %2FOL: A Proof Checker for First-Order Logic%1, 60 pages, May 1974. STAN-CS-74-433 (AIM-236, AD784513), Jack R. Buchanan and David C. Luckham, %2On Automating the Construction of Programs%1, 65 pages, May 1974. STAN-CS-74-434 (SU326 P30-31), Axel Ruhe and Per Ake Wedin, %2Algorithms for Separable Nonlinear Least Squares Problems%1, 50 pages, June 1974. STAN-CS-74-435 (CSL-TR-88, A001-071), Thomas G. Price, %2Balanced Computer Systems%1, 56 pages, June 1974. STAN-CS-74-436 (AIM-237, AD-A012 477), Yorick Wilks, %2Natural Language Understanding Systems Within the A.I. Paradigm -- A Survey and Some Comparisons%1, 25 pages, July 1974. STAN-CS-74-437 (AIM-238, AD-A005 040), C. K. Riesbeck (Thesis), %2Computational Understanding: Analysis of Sentences and Context%1, 245 pages, July 1974. STAN-CS-74-438 (AIM-239, AD786720), Marsha Jo Hanna (Thesis), %2Computer Matching of Areas in Stereo Images%1, 99 pages, July 1974. STAN-CS-74-439 (OR-74-7, SU326 P30-32), Richard W. Cottle, Gene H. Golub and R. S.
Sacher, %2On the Solution of Large, Structured Linear Complementarity Problems: III%1, 87 pages, July 1974. STAN-CS-74-440 (PB237360/AS), James H. Morris, Jr., Vaughn R. Pratt and Donald E. Knuth, %2Fast Pattern Matching in Strings%1, 32 pages, July 1974. STAN-CS-74-441 (AD-A000 284), Donald E. Knuth and Ronald W. Moore, %2An Analysis of Alpha-Beta Pruning%1, 64 pages, July 1974. STAN-CS-74-442 (AD-A004 208), Donald E. Knuth, %2Estimating the Efficiency of Backtrack Programs%1, 30 pages, July 1974. STAN-CS-74-443 (PB-236 471/AS), Douglas K. Brotz (Thesis), %2Embedding Heuristic Problem Solving Methods in a Mechanical Theorem Prover%1, 107 pages, July 1974. STAN-CS-74-444 (AIM-240, AD787035), C. C. Green, R. J. Waldinger, D. R. Barstow, R. Elschlager, D. B. Lenat, B. P. McCune, D. E. Shaw and L. I. Steinberg, %2Progress Report on Program-Understanding Systems%1, 50 pages, July 1974. STAN-CS-74-445 (SLACP-1448), J. H. Friedman, F. Baskett and L. J. Shustek, %2A Relatively Efficient Algorithm for Finding Nearest Neighbors%1, 21 pages, September 1974. STAN-CS-74-446 (AIM-241, AD786723), L. Aiello and R. W. Weyhrauch, %2LCFsmall: An Implementation of LCF%1, 45 pages, September 1974. STAN-CS-74-447 (AIM-221, AD787631), L. Aiello, M. Aiello and R. W. Weyhrauch, %2Semantics of Pascal in LCF%1, 78 pages, September 1974. STAN-CS-74-448 (SU326 P30-33), D. Goldfarb, %2Matrix Factorizations in Optimization of Nonlinear Functions Subject to Linear Constraints%1, 45 pages, September 1974. STAN-CS-74-449 (CSL-TR-89, AD785027), A. Smith (Thesis), %2Performance Analysis of Computer Systems Components%1, 323 pages, September 1974. STAN-CS-74-450 (CSL-TR-90, AD787008), F. Baskett and A. J. Smith (Thesis, chapter 3), %2Interference in Multiprocessor Computer Systems with Interleaved Memory%1, 45 pages, September 1974. STAN-CS-74-451 (CSL-TR-91, AD786999), A. Smith (Thesis, chapter 5), %2A Modified Working Set Paging Algorithm%1, 40 pages, October 1974.
STAN-CS-74-452 (AIM-242, AD-A000 500), J. R. Low (Thesis), %2Automatic Coding: Choice of Data Structures%1, 110 pages, September 1974. STAN-CS-74-453 (AD-A000 034), Donald E. Knuth, %2Random Matroids%1, 30 pages, September 1974. STAN-CS-74-454 (SU326 P30-35), L. S. Jennings, %2A Computational Approach to Simultaneous Estimation%1, 15 pages, September 1974. STAN-CS-74-455 (AD-A000 083), Robert E. Tarjan, %2Edge-Disjoint Spanning Trees, Dominators, and Depth-First Search%1, 40 pages, September 1974. STAN-CS-74-456 (AIM-243, AD-A003 815), R. Finkel, R. Taylor, R. Bolles, R. Paul and J. Feldman, %2AL, A Programming System for Automation: Preliminary Report%1, 117 pages, October 1974. STAN-CS-74-457 (AIM-244, not at NTIS), K. M. Colby, %2Ten Criticisms of Parry%1, 7 pages, October 1974. STAN-CS-74-458 (AIM-245, AD784816), J. Buchanan (Thesis), %2A Study in Automatic Programming%1, 146 pages, October 1974. STAN-CS-74-459 (AIM-246, AD-A000 085), Terry Winograd, %2Five Lectures on Artificial Intelligence%1, 95 pages, October 1974. STAN-CS-74-460 (PB238148/AS), T. Porter and I. Simon, %2Random Insertion into a Priority Queue Structure%1, 25 pages, October 1974. STAN-CS-74-461 (AIM-247, AD-A005 041), N. M. Goldman (Thesis), %2Computer Generation of Natural Language from a Deep Conceptual Base%1, 316 pages, October 1974. STAN-CS-74-462 (AIM-248), K. Pingle and A. J. Thomas, %2A Fast, Feature-Driven Stereo Depth Program%1, 15 pages, October 1974. STAN-CS-74-463 (AIM-249, AD-A002 261), Bruce Baumgart (Thesis), %2Geometric Modeling for Computer Vision%1, 141 pages, November 1974. STAN-CS-74-464 (AIM-250, AD-A003 488), Ramakant Nevatia (Thesis), %2Structured Descriptions of Complex Curved Objects for Recognition and Visual Memory%1, 125 pages, November 1974. STAN-CS-74-465 (AIM-251, AD-A001 373), E. H. Shortliffe (Thesis), %2MYCIN: A Rule-Based Computer Program for Advising Physicians Regarding Antimicrobial Therapy Selection%1, 409 pages, November 1974. 
STAN-CS-74-466 (AIM-252, AD-A002 246), Lester Earnest (editor), %2Recent Research in Artificial Intelligence, Heuristic Programming, and Network Protocols%1, 79 pages, November 1974. STAN-CS-74-467 (AIM-222, AD-A007 562), M. Aiello and R. Weyhrauch, %2Checking Proofs in the Meta-Mathematics of First Order Logic%1, 55 pages, November 1974. STAN-CS-74-468 (AD-A003 832), S. Krogdahl, %2A Combinatorial Base for Some Optimal Matroid Intersection Algorithms%1, 25 pages, November 1974. STAN-CS-74-469, H. Brown, %2Molecular Structure Elucidation III%1, 38 pages, December 1974. STAN-CS-74-470, L. Trabb Pardo, %2Stable Sorting and Merging with Optimal Time and Space Bounds%1, 75 pages, December 1974. STAN-CS-74-471 (AIM-253, AD-A003 487), B. Faught, K. M. Colby and R. C. Parkison, %2The Interaction of Inferences, Affects, and Intentions in a Model of Paranoia%1, 38 pages, December 1974. STAN-CS-74-472 (AIM-254, AD-A005 407), L. H. Quam and M. J. Hannah, %2Stanford Automatic Photogrammetry Research%1, 15 pages, December 1974. STAN-CS-74-473 (AIM-255, AD-A005 412), N. Suzuki, %2Automatic Program Verification II: Verifying Programs by Algebraic and Logical Reduction%1, 28 pages, December 1974. STAN-CS-74-474 (AIM-256, AD-A007 563), F. W. von Henke and D. C. Luckham, %2A Methodology for Verifying Programs%1, 45 pages, December 1974. STAN-CS-75-475 (AIM-257, AD-A005 413), M. C. Newey (Thesis), %2Formal Semantics of LISP with Applications to Program Correctness%1, 184 pages, January 1975. STAN-CS-75-476 (AIM-258, AD-A006 294), Cordell Green and David Barstow, %2A Hypothetical Dialogue Exhibiting a Knowledge Base for a Program-Understanding System%1, 45 pages, January 1975. STAN-CS-75-477 (not at NTIS), V. Chvatal and D. Sankoff, %2Longest Common Subsequences of Two Random Sequences%1, 18 pages, January 1975. STAN-CS-75-478 (SU326 P30-36), G. H. Golub and J. H. Wilkinson, %2Ill-Conditioned Eigensystems and the Computation of the Jordan Canonical Form%1, 66 pages, February 1975.
STAN-CS-75-479 (SU326 P30-38), F. Chatelin and J. Lemordant, %2Error Bounds in the Approximation of Eigenvalues of Differential and Integral Operators%1, 24 pages, February 1975. STAN-CS-75-480 (A008804), Donald E. Knuth, %2Notes on Generalized Dedekind Sums%1, 45 pages, February 1975. STAN-CS-75-481 (SU326 P30-39), J. Oliger, %2Difference Methods for the Initial-Boundary Value Problem for Hyperbolic Equations%1, 31 pages, February 1975. STAN-CS-75-482 (SLACP-1549, not at NTIS), J. H. Friedman, J. L. Bentley and R. A. Finkel, %2An Algorithm for Finding Best Matches in Logarithmic Time%1, 31 pages, March 1975. STAN-CS-75-483 (AD-A011 835), P. Erdos and R. L. Graham, %2On Packing Squares with Equal Squares%1, 8 pages, March 1975. STAN-CS-75-484 (AD-A011 832), R. L. Graham and E. Szemeredi, %2On Subgraph Number Independence in Trees%1, 18 pages, March 1975. STAN-CS-75-485 (AD-A011 834), P. Erdos and E. Szemeredi, %2On Multiplicative Representations of Integers%1, 18 pages, March 1975. STAN-CS-75-486 (SU326 P30-37), A. Bjorck and G. H. Golub, %2Eigenproblems for Matrices Associated with Periodic Boundary Conditions%1, 19 pages, March 1975. STAN-CS-75-487 (SLACP-1573), J. H. Friedman, %2A Variable Metric Decision Rule for Non-Parametric Classification%1, 34 pages, April 1975. STAN-CS-75-488 (AD-A011 445), B. Bollobas, P. Erdos and E. Szemeredi, %2On Complete Subgraphs of r-Chromatic Graphs%1, 16 pages, April 1975. STAN-CS-75-489 (AD-A011 833), E. Szemeredi, %2Regular Partitions of Graphs%1, 8 pages, April 1975. STAN-CS-75-490 (AD-A014 429), R. William Gosper, %2Numerical Experiments with the Spectral Test%1, 31 pages, May 1975. STAN-CS-75-491, G. D. Knott (Thesis), %2Deletion in Binary Storage Trees%1, 93 pages, May 1975. STAN-CS-75-492, R. Sedgewick (Thesis), %2Quicksort%1, 352 pages, May 1975. STAN-CS-75-493 (PB244421/AS), R. Kurki-Suonio, %2Describing Automata in Terms of Languages Associated with Their Peripheral Devices%1, 37 pages, May 1975. STAN-CS-75-494, E. H.
Satterthwaite, Jr. (Thesis), %2Source Language Debugging Tools%1, 345 pages, May 1975. STAN-CS-75-495 (AD-A014 424), S. Krogdahl, %2The Dependence Graph for Bases in Matroids%1, 29 pages, May 1975. STAN-CS-75-496 (SU326 P30-41), R. Underwood (Thesis), %2An Iterative Block Lanczos Method for the Solution of Large Sparse Symmetric Eigenproblems%1, 133 pages, May 1975. STAN-CS-75-497 (AD-A016 825), R. L. Graham and L. Lovasz, %2Distance Matrices of Trees%1, 48 pages, August 1975. STAN-CS-75-498 (AIM-259, AD-A017 025), H. Samet (Thesis), %2Automatically Proving the Correctness of Translations Involving Optimized Code%1, 214 pages, August 1975. STAN-CS-75-499 (AIM-260), D. C. Smith (Thesis), %2PYGMALION: A Creative Programming Environment%1, 193 pages, August 1975. .next page .once center <<reports 500 thru 599>> %3REPORTS 500 THRU 599%1 STAN-CS-75-500 (PB246708/AS), R. Kurki-Suonio, %2Towards Better Definitions of Programming Languages%1, 29 pages, August 1975. STAN-CS-75-501 (AIM-261, AD-A016 810), O. Pettersen, %2Procedural Events as Software Interrupts%1, 8 pages, August 1975. STAN-CS-75-502 (AIM-262, AD-A016 808), O. Pettersen, %2Synchronization of Concurrent Processes%1, 14 pages, August 1975. STAN-CS-75-503 (AIM-263, AD-A016 807), O. Pettersen, %2The Macro-Processing System STAGE2%1, 20 pages, August 1975. STAN-CS-75-504 (AD-A017 370), P. Erdos, R. L. Graham and E. Szemeredi, %2On Sparse Graphs with Dense Long Paths%1, 14 pages, August 1975. STAN-CS-75-505 (AD-A017 053), V. Chvatal, %2Some Linear Programming Aspects of Combinatorics%1, 30 pages, August 1975. STAN-CS-75-506 (AIM-264, AD-A017 176), M. Gordon, %2Operational Reasoning and Denotational Semantics%1, 30 pages, August 1975. STAN-CS-75-507 (AIM-265), M. Gordon, %2Towards a Semantic Theory of Dynamic Binding%1, 25 pages, August 1975. STAN-CS-75-508, James Eve, %2On Computing the Transitive Closure of a Relation%1, 14 pages, August 1975. STAN-CS-75-509 (AD-A017 331), M. Overton and A.
Proskurowski, %2Finding the Maximal Incidence Matrix of a Large Graph%1, 72 pages, August 1975. STAN-CS-75-510 (AD-A017 054), A. C. Yao and D. E. Knuth, %2Analysis of the Subtractive Algorithm for Greatest Common Divisors%1, 10 pages, August 1975. STAN-CS-75-511 (AD-A017 294), P. Dubost and J. M. Trousse, %2Software Implementation of a New Method of Combinatorial Hashing%1, 35 pages, August 1975. STAN-CS-75-512 (PB247895/AS), Robert E. Tarjan, %2Applications of Path Compression on Balanced Trees%1, 53 pages, October 1975. STAN-CS-75-513 (SLACR-186), J. L. Bentley, %2A Survey of Techniques for Fixed Radius Near Neighbor Searching%1, 30 pages, October 1975. STAN-CS-75-514 (PB247561/AS), N. Tokura, %2A Microprogram Control Unit Based on a Tree Memory%1, 39 pages, October 1975. STAN-CS-75-515, R. P. Brent, %2Fast Multiple-Precision Evaluation of Elementary Functions%1, 22 pages, October 1975. STAN-CS-75-516 (SU326 P30-42), J. Stoer, %2On the Relation Between Quadratic Termination and Convergence Properties of Minimization Algorithms%1, 103 pages, October 1975. STAN-CS-75-517, V. Chvatal and C. Thomassen, %2Distances in Orientations of Graphs%1, 24 pages, October 1975. STAN-CS-75-518 (AD-A018 461), V. Chvatal and P. L. Hammer, %2Aggregation of Inequalities in Integer Programming%1, 27 pages, October 1975. STAN-CS-75-519 (AIM-266, AD-A019 641), R. Davis, B. Buchanan and E. Shortliffe, %2Production Rules as a Representation for a Knowledge-Based Consultation Program%1, 37 pages, November 1975. STAN-CS-75-520 (AIM-267, AD-A019 664), F. W. von Henke, %2On the Representation of Data Structures in LCF with Applications to Program Generation%1, 41 pages, November 1975. STAN-CS-75-521 (AIM-268), C. Thompson, %2Depth Perception in Stereo Computer Vision%1, 16 pages, November 1975. STAN-CS-75-522 (AIM-269, AD-A019 569), D. C. Luckham and N. Suzuki, %2Automatic Program Verification IV: Proof of Termination Within a Weak Logic of Programs%1, 39 pages, November 1975.
STAN-CS-75-523 (AIM-270, AD-A019 467), J. F. Reiser, %2BAIL -- A Debugger for SAIL%1, 26 pages, November 1975. STAN-CS-75-524 (AIM-271, AD-A019 702), R. Davis and J. King, %2An Overview of Production Systems%1, 40 pages, November 1975. STAN-CS-75-525 (AIM-272), S. Ganapathy (Thesis), %2Reconstruction of Scenes Containing Polyhedra from Stereo Pair of Views%1, 204 pages, November 1975. STAN-CS-75-526 (AD-A020 848), Robert E. Tarjan, %2Graph Theory and Gaussian Elimination%1, 23 pages, November 1975. STAN-CS-75-527 (CSL-TR-100, not at NTIS), E. McCluskey, J. Wakerly and R. Ogus, %2Center for Reliable Computing%1, 100 pages, November 1975. STAN-CS-75-528 (AD-A020 597), Robert E. Tarjan, %2Solving Path Problems on Directed Graphs%1, 45 pages, November 1975. STAN-CS-75-529 (SLACP-1665), J. L. Bentley and J. H. Friedman, %2Fast Algorithms for Constructing Minimal Spanning Trees in Coordinate Spaces%1, 29 pages, November 1975. STAN-CS-75-530 (SU326 P30-40), M. Lentini and V. Pereyra, %2An Adaptive Finite Difference Solver for Nonlinear Two Point Boundary Problems with Mild Boundary Layers%1, 42 pages, November 1975. STAN-CS-75-531 (AD-A020 847), D. J. Rose and R. E. Tarjan, %2Algorithmic Aspects of Vertex Elimination on Directed Graphs%1, 45 pages, November 1975. STAN-CS-75-532, Pat E. Jacobs (staff), %2Bibliography of Computer Science Reports%1, 77 pages, November 1975. STAN-CS-76-533 (LBL-4604, SU326 P30-44), P. Concus, G. H. Golub and D. P. O'Leary, %2A Generalized Conjugate Gradient Method for the Numerical Solution of Elliptic Partial Differential Equations%1, 24 pages, January 1976. STAN-CS-76-534 (AIM-273), Linda G. Hemphill (Thesis), %2A Conceptual Approach to Automatic Language Understanding and Belief Structures: With Disambiguation of the Word `For' %1, 254 pages, January 1976. STAN-CS-76-535, P. Concus and G. H. Golub, %2A Generalized Conjugate Gradient Method for Non-Symmetric Systems of Linear Equations%1, 12 pages, January 1976.
STAN-CS-76-536 (AIM-274, AD-A020 942/9WC), David Grossman and Russell Taylor, %2Interactive Generation of Object Models with a Manipulator%1, 32 pages, January 1976. STAN-CS-76-537 (AIM-275, AD-A020 943/7WC), Robert C. Bolles, %2Verification Vision Within a Programmable Assembly System: An Introductory Discussion%1, 82 pages, January 1976. STAN-CS-76-538 (AD-A024 416), Donald E. Knuth and L. Trabb Pardo, %2Analysis of a Simple Factorization Algorithm%1, 43 pages, January 1976. STAN-CS-76-539 (AIM-276, AD-A021 055/9WC), Zohar Manna and Adi Shamir, %2A New Approach to Recursive Programs%1, 26 pages, January 1976. STAN-CS-76-540 (AD-A021 587), R. L. Graham, A. C. Yao and F. F. Yao, %2Addition Chains with Multiplicative Cost%1, 7 pages, January 1976. STAN-CS-76-541, Donald E. Knuth, %2Mathematics and Computer Science: Coping with Finiteness%1, 30 pages, March 1976. STAN-CS-76-542 (AIM-277, AD-A027 454), Zohar Manna and Adi Shamir, %2The Theoretical Aspects of the Optimal Fixedpoint%1, 24 pages, March 1976. STAN-CS-76-543, D. A. Zave, %2Optimal Polyphase Sorting%1, 75 pages, March 1976. STAN-CS-76-544, B. Mont-Reynaud, %2Removing Trivial Assignments from Programs%1, 28 pages, March 1976. STAN-CS-76-545, W. J. Paul, R. E. Tarjan and J. R. Celoni, %2Space Bounds for a Game on Graphs%1, 21 pages, March 1976. STAN-CS-76-546 (SLACP-1715), F. Baskett and L. Shustek, %2The Design of a Low Cost Video Graphics Terminal%1, 25 pages, March 1976. STAN-CS-76-547, Robert E. Tarjan, %2Iterative Algorithms for Global Flow Analysis%1, 31 pages, March 1976. STAN-CS-76-548, D. Prost O'Leary (Thesis), %2Hybrid Conjugate Gradient Algorithms%1, 120 pages, March 1976. STAN-CS-76-549 (AIM-278, AD-A027 455), David Luckham and Norihisa Suzuki, %2Automatic Program Verification V: Verification-Oriented Proof Rules for Arrays, Records and Pointers%1, 48 pages, March 1976. STAN-CS-76-550, R. E. Tarjan and A. E. Trojanowski, %2Finding a Maximum Independent Set%1, 22 pages, June 1976.
STAN-CS-76-551 (AD-A032 347), Donald E. Knuth, %2The State of the Art of Computer Programming%1, 57 pages, June 1976. STAN-CS-76-552 (AIM-279), Norihisa Suzuki (Thesis), %2Automatic Verification of Programs with Complex Data Structures%1, 194 pages, February 1976. STAN-CS-76-553 (AD-A032 772), R. E. Tarjan, %2Complexity of Monotone Networks for Computing Conjunctions%1, 21 pages, June 1976. STAN-CS-76-554, F. S. Yu (Thesis), %2Modeling the Write Behavior of Computer Programs%1, 185 pages, June 1976. STAN-CS-76-555 (AIM-280), David D. Grossman, %2Monte Carlo Simulation of Tolerancing in Discrete Parts Manufacturing and Assembly%1, 25 pages, May 1976. STAN-CS-76-556, L. J. Guibas (Thesis), %2The Analysis of Hashing Algorithms%1, 136 pages, August 1976. STAN-CS-76-557 (AD-A032 122), M. S. Paterson, %2An Introduction to Boolean Function Complexity%1, 19 pages, August 1976. STAN-CS-76-558 (AIM-281.1, AD-A042 507), Zohar Manna and Richard Waldinger, %2Is `sometime' sometimes better than `always'? Intermittent assertions in proving program correctness%1, 41 pages, June 1976, revised March 1977. STAN-CS-76-559 (AD-A032 348), Gene Golub, V. Klema and G. W. Stewart, %2Rank Degeneracy and Least Squares Problems%1, 38 pages, August 1976. STAN-CS-76-560 (AIM-282), Russell Taylor (Thesis), %2Synthesis of Manipulator Control Programs from Task-level Specifications%1, 229 pages, July 1976. STAN-CS-76-561, D. R. Woods, %2Mathematical Programming Language: User's Guide%1, 139 pages, August 1976. STAN-CS-76-562 (AD-A032 123), Donald E. Knuth and L. Trabb Pardo, %2The Early Development of Programming Languages%1, 109 pages, August 1976. STAN-CS-76-563, D. L. Russell (Thesis), %2State Restoration Among Communicating Processes%1, 173 pages, August 1976. STAN-CS-76-564 (AIM-283, HPP-76-7), Randall Davis (Thesis), %2Applications of Meta Level Knowledge to the Construction, Maintenance and Use of Large Knowledge Bases%1, 304 pages, July 1976. STAN-CS-76-565 (AD-A032 802), J. C.
Strikwerda (Thesis), %2Initial Boundary Value Problems for Incompletely Parabolic Systems%1, 107 pages, November 1976. STAN-CS-76-566, Margaret Wright (Thesis), %2Numerical Methods for Nonlinearly Constrained Optimization%1, 262 pages, November 1976. STAN-CS-76-567 (AIM-284), Rafael Finkel (Thesis), %2Constructing and Debugging Manipulator Programs%1, 171 pages, August 1976. STAN-CS-76-568 (AIM-285, PB-259 130/2WC), T. O. Binford, D. D. Grossman, C. R. Lui, R. C. Bolles, R. A. Finkel, M. S. Mujtaba, M. D. Roderick, B. E. Shimano, R. H. Taylor, R. H. Goldman, J. P. Jarvis, V. D. Scheinman, T. A. Gafford, %2Exploratory Study of Computer Integrated Assembly Systems - Progress Report 3%1, 336 pages, August 1976. STAN-CS-76-568-4 (AIM-285.4, PB-259 130/3WC), T. O. Binford, C. R. Lui, G. Gini, M. Gini, I. Glaser, T. Ishida, M. S. Mujtaba, E. Nakano, H. Nabavi, E. Panofsky, B. E. Shimano, R. Goldman, V. D. Scheinman, D. Schmelling, T. A. Gafford, %2Exploratory Study of Computer Integrated Assembly Systems - Progress Report 4%1, 255 pages, June 1977. STAN-CS-76-569 (PB-261 814/AS), John G. Herriot, %2Calculation of Interpolating Natural Spline Functions Using De Boor's Package for Calculating with B-Splines%1, 46 pages, November 1976. STAN-CS-76-570 (AIM-286), Douglas Lenat (Thesis), %2AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search%1, 350 pages, July 1976. STAN-CS-76-571 (AIM-287), Michael Roderick (Thesis), %2Discrete Control of a Robot Arm%1, 98 pages, August 1976. STAN-CS-76-572 (AIM-288), Robert Filman and Richard Weyhrauch, %2An FOL Primer%1, 36 pages, September 1976. STAN-CS-76-573 (AD-A032 945), Arne Jonassen, %2The Stationary P-Tree Forest%1, 90 pages, November 1976. STAN-CS-76-574 (AIM-289), John Reiser (editor), %2SAIL Manual%1, 178 pages, August 1976. STAN-CS-76-575 (AIM-290, AD-A042 494), Nancy W. Smith, %2SAIL Tutorial%1, 54 pages, November 1976.
STAN-CS-76-576 (AD-A035 350), Colin McDiarmid, %2Determining the Chromatic Number of a Graph%1, 61 pages, December 1976.
The most important activity of ACM is the GSM network. As the mobile phone operator, ACM must build its own transmitting stations. It is very important to compute the exact behaviour of electro-magnetic waves. Unfortunately, prediction of electro-magnetic fields is a very complex task and the formulas describing them are very long and hard to read; Maxwell's equations, which describe the basic laws of electrical engineering, are one example. ACM has designed its own computer system that can make some field computations and produce results in the form of mathematical expressions. Unfortunately, because the expressions are generated in several steps, they always contain some unneeded parentheses. Your task is to take these partial results and make them "nice" by removing all unnecessary parentheses. There is a single positive integer T on the first line of input; it stands for the number of expressions to follow. Each expression consists of a single line containing only lowercase letters, operators (+, -, *, /) and parentheses (( and )). The letters are variables that can have any value; operators and parentheses have their usual meaning. Multiplication and division have higher priority than subtraction and addition. All operations with the same priority are computed from left to right (operators are left-associative). There are no spaces inside the expressions. No input line contains more than 250 characters. Print a single line for every expression. The line must contain the same expression with unneeded parentheses removed. You must remove as many parentheses as possible without changing the semantics of the expression. The semantics of the expression is considered the same if and only if any of the following conditions hold: The ordering of operations remains the same. That means "(a+b)+c" is the same as "a+b+c", and "a+(b/c)" is the same as "a+b/c".
The order of some operations is swapped but the result remains unchanged with respect to the associativity of addition and multiplication. That means "a+(b+c)" and "(a+b)+c" are the same. We can also combine addition with subtraction and multiplication with division, if the subtraction or division is the second operation. For example, "a+(b-c)" is the same as "a+b-c". You cannot use any other laws; namely, you cannot swap left and right operands and you cannot replace "a-(b-c)" with "a-b+c". Solved by 3 users, attempted by 7. 3 of 14 submissions were accepted. Last submission: 1 year, 4 months ago.
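One way to implement the required cleanup, sketched here in Python (this is not the judge's reference solution, and all names are my own), is to parse each expression into a syntax tree honoring the stated precedence and left-associativity, then print the tree back, emitting parentheses only where the rules above force them: around a child of strictly lower precedence, or around a right child of equal precedence when the parent operator is '-' or '/'.

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def parse(s):
    """Recursive-descent parser; returns a nested tuple (op, left, right) or a variable letter."""
    pos = 0

    def expr():          # additive level, left-associative
        nonlocal pos
        node = term()
        while pos < len(s) and s[pos] in '+-':
            op = s[pos]; pos += 1
            node = (op, node, term())
        return node

    def term():          # multiplicative level, left-associative
        nonlocal pos
        node = factor()
        while pos < len(s) and s[pos] in '*/':
            op = s[pos]; pos += 1
            node = (op, node, factor())
        return node

    def factor():        # variable or parenthesized sub-expression
        nonlocal pos
        if s[pos] == '(':
            pos += 1
            node = expr()
            pos += 1     # skip ')'
            return node
        ch = s[pos]; pos += 1
        return ch

    return expr()

def render(node, parent_op=None, is_right=False):
    """Re-print the tree, parenthesizing a child only when removing the
    parentheses would change the order of operations per the problem rules."""
    if isinstance(node, str):
        return node
    op, left, right = node
    text = render(left, op, False) + op + render(right, op, True)
    if parent_op is None:
        return text
    if PREC[op] < PREC[parent_op] or (
            is_right and PREC[op] == PREC[parent_op] and parent_op in '-/'):
        return '(' + text + ')'
    return text

def simplify(expression):
    return render(parse(expression))
```

For instance, `simplify("a+(b-c)")` drops the parentheses while `simplify("a-(b-c)")` keeps them, matching the stated rule that subtraction may only be absorbed when it is the second operation.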
About - Jakob Schwichtenberg Hey, thanks for visiting my site. I’m Jakob Schwichtenberg. If you have any questions or found some error here or in one of my books, please do not hesitate to contact me. My mail address is jakobschwich (at) gmail.com or alternatively just leave a comment here. In Short: I’m a physicist and I try to write things down during my own learning process. In some sense, one of the biggest benefits I have over other people in physics is that I’m certainly not the smartest guy! I usually can’t grasp complex issues very easily, so I have to break down complex ideas into smaller chunks to understand them myself. This means that whenever I describe something to others, everyone understands, because it’s broken down into such simple terms. Moreover, I’m by no means an expert in the subjects I’m writing about, and I believe this is another huge advantage. All this is perfectly summarized in the following two quotes. The first one is from C. S. Lewis: “It often happens that two schoolboys can solve difficulties in their work for one another better than the master can. […] The fellow-pupil can help more than the master because he knows less. The difficulty we want him to explain is one he has recently met. The expert met it so long ago he has forgotten. He sees the whole subject, by now, in a different light that he cannot conceive what is really troubling the pupil; he sees a dozen other difficulties which ought to be troubling him but aren’t.” and the second one is loosely paraphrased from Donald Knuth: With my posts, I’m not trying to be on top of things. Rather I try to get to the bottom of things. I try to learn certain areas of physics exhaustively; then I try to digest that knowledge into a form that is accessible to people who don’t have time for such study. “Everyone sooner or later invents his story which later becomes his life” – Max Frisch So here’s mine. I was bad in school. Not so bad that I got into trouble, but still pretty bad.
I was bored most of the time and only invested the minimal amount of time necessary to pass the exams. To give an example, in the 10th grade I got a 4, which is the German equivalent of an American D, in mathematics. This is exactly the grade you need to pass without getting into trouble. A few years later I got mostly straight 1s in mathematics and started studying physics shortly after. To this day I spend most of my time thinking about mathematics and physics. How did this change happen? How did I transform from a bad student into a passionately interested student with universally good grades? This is what this story is about. Looking back, I think my problem was that I never understood why I should care about the things the teacher talked about. While for many students getting good grades is motivation enough, this never worked for me. In some sense, I understood too early that grades don’t matter. I remember asking my teachers several times, especially in mathematics, why we were taught certain topics. “What can we do with this? Why is this important?” I didn’t get satisfactory answers and was rewarded with bad oral grades. Then something huge happened. During this time I frequently visited flea markets with my parents. One day, when I was 16, I bought at one of these markets a book titled Surely You’re Joking, Mr. Feynman! This was the best Euro I ever invested. This book turned my world upside down. Before reading it, I was mostly interested in computer games and soccer. But with Feynman’s help, I finally understood why people care about mathematics and physics. I started to understand that physics and mathematics aren’t the boring things that the teacher presented. Instead, they are fun and important. In physics, you are trying to understand what makes nature tick at the most fundamental level. Mathematics is the language that you need to talk about physics. This was what I was craving all the time.
I suddenly got interested in all those tricks that the math teacher presented because I was curious about how they can help to understand nature. Suddenly school was fun. Needless to say, I was hooked afterward and read every book by Feynman I could get my hands on. Then every physics and mathematics book. I’m hooked to this day. However, there were few books that had such a big impact on me as this little book by Feynman. A few years later I got really interested in how this transformation happened. One day I was only interested in computer games and soccer, and only a few weeks later I was crazy about science and mathematics. In addition, all those things that didn’t make sense to me suddenly did. My quest to understand this paved the way for the journey I’m still traveling. Suddenly it was really easy to understand the things, for example, that the math teacher talked about. What previously sounded like incomprehensible gibberish was only a few weeks later completely logical and sensible stuff. Needless to say, my grades got better. A lot. What finally helped me to understand my own transformation were two books: • Psycho-Cybernetics by Maxwell Maltz (which I also bought at a flea market), • Outliers by Malcolm Gladwell (which I bought on a street market in Mumbai). The main point I took from these two books, which I believe to this day, is that there is no such thing as talent or intelligence. While you can, of course, define these two words however you like, the important thing is that they aren’t useful concepts in any way. Intelligence is what is measured by an intelligence test. You can train the riddles that you need to solve in an IQ test and this way you can increase your IQ. Is this meaningful? Of course not. This only means that you got better at solving these kinds of riddles and nothing else. There is no correlation between what is measured in an IQ test and any type of success in life. The biggest discoveries in science aren’t made by the smartest guys.
Talent is similar, but even more ominous, because there isn’t even a quantifiable notion like the IQ for intelligence. So why are these notions so popular nevertheless? Because they are convenient. On the one hand, they are convenient to excuse why you aren’t able to do certain things. “Well, I simply have no talent for mathematics.” “I’m not smart enough.” On the other hand, they are used to feel better than other people. Haha, look at these dumb people. Us vs. Them mentality is always powerful. I would like to go even further and say that they are not only useless but actually harmful. Most people never reach their potential, because they think they don’t have what it takes. This is incredibly harmful bullshit. As long as your body and brain work reasonably well, you can achieve whatever you want. It actually makes no sense to believe in intelligence and talent. Even if I’m wrong and there is such a thing, talking and thinking about it will only have the effect that you start doubting and limiting yourself. In addition, I learned from Maltz and Gladwell that the only things that matter are your self-image and motivation. By reading Feynman’s little book I started to get interested in science. I got curious and motivated. In addition, it transformed my self-image dramatically. Feynman’s IQ was only 125. Nevertheless, he was one of the most important scientists of the last century. Motivated and with a transformed self-image, I was as smart as the smartest guys and girls in my class. To this day I’m convinced that everyone can understand any topic if he is sufficiently curious about it and reads an explanation that speaks a language he understands. This is what motivates me to write. 33 Comments 1. A few words to say your book is really well written and clear, from a self-learner in physics. Good work explaining some advanced concepts without needing advanced mathematical topics at the beginning. Thanks 2.
I thought I had already hit a ‘ceiling’ in learning physics after struggling through my undergrad classes. It partly had to do with an inefficient and ineffective way of learning the subject, resulting in a general anxiety towards approaching more ‘advanced’ topics (i.e. the Intro EM and QM classes were already daunting experiences at that time). But after graduation, I decided to give it a second shot. I began by watching youtube physics tutorials and lectures, picked up the standard undergrad textbooks again, and read them carefully. Things became less vague, my knowledge gaps slowly closed, and the pieces of the puzzle started to emerge. I stumbled upon your book on Amazon, got an e-copy (legally) and wow, I am surprised I’ve been following it for more than 130 pages (of course, having checked the errata along the way). Your explanation, derivations and logic helped me fill even more knowledge gaps, link ideas that seemed irrelevant and allowed me to appreciate the beauty of nature even with such limited background knowledge. Thank you very much Jakob for the book. I whole-heartedly agree that a lot of experts forget what it’s like to be a beginner. Even for those authors (like Griffiths) who could explain things very well to a beginner, those I’ve encountered so far only cover one topic at a time (EM, QM, Classical Mechanics, etc.) and it is so hard to be ‘revealed’ the ‘higher level’ connections and intuitions from a top-down level. I really appreciate your effort to cater to ambitious beginners / amateurs like me who didn’t go through the grad school journey to discover how things ultimately work. Whether this becomes an undergrad ‘standard’ text or not, I believe you put in all these effort not for the money or fame, but out of a compassionate heart who want to help the strongly curious, but clueless, folks to actually ‘get there’. Thanks again. 3. 
I’m a process chemist, so I haven’t looked at general physics in quite a while, and I haven’t finished this book yet, but what I have read is superb. I never like giving students formulas simply to memorize, and as good as some books are, they all seem to invariably partition the subjects that they try to teach, which makes integrating the knowledge into their web of knowledge and internalizing it a much harder task for the students. Einstein may have said that if you can’t explain something simply, then you don’t know it enough, but it would help if textbooks, and not just students and teachers, would strive for that goal. This book does it far better than any book I have read (though again, I haven’t read one recently). Thanks for producing it. Have you thought about contacting a company like the Great Courses, and maybe trying to teach a general physics course with this approach? It seems like the style of learning they are interested in, and they don’t shy away from difficult concepts, as seen in the Superstring Theory lecture series. Might be worth a look. Thanks again. 4. Hi Jakob, cool that you have already written such a book while still being a student, both thumbs up ;-)! I will probably read it in German. At what university are you? Best wishes □ Hi Dilaton, I’m currently a PhD student at the KIT. Best wishes, PS: I’m a huge fan of your PhysicsOverflow project! ☆ Hi Jakob, thanks for the nice words about PhysicsOverflow ! Cool place to be where you are and you seem to be doing cool stuff 😉 BTW I will have the pleasure to write a few words (aka Rezension) in the German magazine “Physik in unserer Zeit” … 😉 Cheers ! ☆ What’s KIT? I’m not German, so I am not acquainted with the educational institutions. ○ KIT = Karlsruhe Institute of Technology 5. Hi Jacob and thanks for a great book !! Could You please explain for me the math. behind Eq. 3.94 and 3.105 Sinc. Trond Braaten 6. Hi Jakob, Thanks for your book. It was really awesome. 
This book helped me to understand Group theory and High energy physics. 7. I’m so happy to be reading this. You are amazing. Finally someone says things out loud that were always in my brain! Thank you. 🙂 8. Thank you so much for sharing and writing this and your own book! They’re both really encouraging and helpful! Thank you so much for what you’re doing! You’re a strong, wonderful person! 9. Hi Jakob, I just bought your book “from finance” – looking forward to reading it. Drop me your contact please. k □ Hi Kirill, you can reach me at jakobschwich (at) gmail.com Best wishes, 10. A beautiful note, thank you, Jacob. We are all inspired by Feynman in some way I guess. I struggled with math in high school and college. I took an easier path and majored in Economics. I “regretted” that choice over Physics. Thank you for your books. They are a good place to start for self-study amateurs. Can we become Facebook friends? I am from Hong Kong. 11. I am unclear what exactly ! over = means. Otherwise greatly enjoying Physics from Symmetry □ Hi, an equals sign ‘=’ with an exclamation point ‘!’ above it means „should be“ or „we want it to be“ . 12. A technical question about your blog: How do you write your latex formulas? The standard WordPress way using $\text{\latex ... \}$ creates pictures, but on your site the formulas consist of separate characters. (I make a try, $\frac{a}{b}$. Will it work?) How does it work? □ The LaTeX formulas on this site are rendered using Mathjax (https://www.mathjax.org/). 13. Hello Jakob. I’m a second year engineering student in Denmark and one of my courses has your book, “No-Nonsense Electrodynamics” from 2018, second on the recommended reading list. I was wondering if you could help me get a copy of the e-book, since I can’t afford getting a digital copy from Amazon or other listings. I would be very grateful if you can contact me back on this subject. □ Hi Zoltan, I’ve sent you an email! 14.
I don’t know if what you say is true or false, but I believe it enough to buy your book. 15. Hi Jakob, I’m a third year physics undergraduate interested in self-learning group / group representation theory. Your two essays “How is a Lie Algebra able to describe a Group?” and “What’s so special about the adjoint representation of a Lie group?” helped clear up some of my conceptual gaps left by other internet resources and actually allowed me to start using group theory explicitly in my linear algebra and quantum mechanics homework. What textbook would you recommend I use for more rigorous study of the topic? □ My favorites are Naive Lie Theory by Stillwell and An Introduction to Tensors and Group Theory for Physicists by Jeevanjee 16. Hi – I have a question about your Quantum Field Theory book. I think eqn 6.12 should have a phi_0 added on at the end, where phi_0 is the solution of the free Klein-Gordon equation. Doing this doesn’t affect eqn 6.11 since operating on phi_0 with the K-G operator just gives 0 anyway. In this sense, phi_0 acts like a ‘constant of integration’ in calculus. The problem is that later when you discuss perturbation expansions in interacting fields, some of the equations don’t seem to be correct. For example, in eqn 6.28, if lambda=0 (no interaction), this equation says that the field phi is just 0, rather than the free field phi_0. Again, in eqn 6.32, equating coefficients of powers of lambda would lead us to conclude that phi_0 = 0, since there is no term on the RHS with lambda^0. Similar problems crop up in section 6.3 in discussing Yukawa interactions, where equating coefficients of powers of g in eqn 6.41 leads to phi_0 = 0 and in eqn 6.54 would lead to psi_0 = 0, rather than giving the free field in each case. □ Yes, you’re right! Thanks for reporting this issue. I will fix it in the next version of the book. 17. Jacob, I’m working through “No-Nonsense Quantum Field Theory”, and I love the book and your approach.
I may have found a little typo (a dropped “-” in 2.49, p. 63). No worries, all but inevitable in a text like this. Are you collecting these? Is there a better way to forward any if I find more? Thanks again, a great book! □ This is me, replying to my own comment: So sorry! I know it is Jakob. My typing just got away from me! Thanks! □ Thanks Glenn! I update the book regularly, and the best way is to send them to me via email: [email protected]. 18. Hey Jakob, I’m reading your book Physics From Symmetry. It’s really interesting! I’m on chapter 4 and I haven’t been able to put it down! 19. Dear Dr. Schwichtenberg, I am an accomplished plasma and particle accelerator experimental physicist. I have wanted to know the basis of the Standard Model of particle physics, but was defeated in all my attempts over 3 years because of my lack of preparation in Group Theory and Abstract Algebra. I had been spending many hopeless hours in understanding this mathematics. Then I stumbled across your book on Symmetry. It is exactly the thing I needed to link my physics knowledge to the branch of Algebra. It is one applied physicists and engineers and other students would learn from. It is practically perfect and I am learning a lot. I am very grateful to you for making this available to us. I wish you all success in your pursuits and work. I hope I can write to you requesting any clarifications. □ Sure, I’d be happy to help! 20. I never did my Ph.D. dissertation in Control Systems Engineering due to personal life issues, but have pursued my passion for teaching at a community college. I left industry after working as an Electrical Engineer for 25 years, and have always been interested in physics, and your books (I purchased every one) are “filling-in” the bigger picture for me. I truly enjoy your writing style and approach. Thank you so much for providing your insight to the subject matter.
Mountain tourism meteorological and snow indicators for Europe from 1950 to 2100 derived from reanalysis and climate projections

Name (units): Description

- Annual amount of machine made snow produced (kg m^-2): The total amount of machine made snow for the period August 1st of year N-1 to July 31st of year N, where N is the selected year.
- End of the longest period with groomed snow (day): Identified longest continuous period from August 1st of year N-1 to July 31st of year N where the snow depth is continuously above 30 cm, using a groomed snow simulation. The first date within this continuous period meeting the condition "Snow depth >= 30 cm" is the beginning of the season; the last date within this continuous period meeting this condition is the end of the season. In case only one date meets the condition, beginning of season and end of season are both attributed this value. In case no date meets the condition (i.e., snow depth is lower than 30 cm for the entire year), no date is attributed (value of 0). The value assigned is interpreted as the number of days after August 1st of year N-1.
- End of the longest period with managed snow (day): Same definition, using a managed (groomed and machine made) snow simulation.
- End of the longest period with natural snow (day): Same definition, using a natural snow simulation.
- Mean winter air temperature (K): Average of 6-hourly temperature of air at 2 m above the surface of land, sea or in-land waters for all dates in November of year N-1 to April of year N.
- Monthly mean air temperature for April (K): Average of 6-hourly 2 m air temperature for all dates in April of year N.
- Monthly mean air temperature for December (K): Same definition, for all dates in December of year N-1.
- Monthly mean air temperature for February (K): Same definition, for all dates in February of year N.
- Monthly mean air temperature for January (K): Same definition, for all dates in January of year N.
- Monthly mean air temperature for March (K): Same definition, for all dates in March of year N.
- Monthly mean air temperature for November (K): Same definition, for all dates in November of year N-1.
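The season start/end definitions above amount to scanning a daily series for the longest run of days with snow depth at or above 30 cm. A minimal sketch of that logic (the function name and plain-list input are illustrative assumptions; the actual indicators are computed from gridded snow simulations):

```python
def longest_season(depths, threshold=30.0):
    """Given daily snow depths (cm) indexed from August 1st of year N-1 (day 0),
    return (start_day, end_day, length) of the longest continuous run with
    depth >= threshold. Returns (0, 0, 0) when no day qualifies, mirroring the
    'value of 0' convention in the indicator definitions."""
    best = (0, 0, 0)
    run_start = None
    for day, depth in enumerate(depths):
        if depth >= threshold:
            if run_start is None:
                run_start = day           # a new candidate season begins
            length = day - run_start + 1
            if length > best[2]:
                best = (run_start, day, length)
        else:
            run_start = None              # the run is broken
    return best
```

The first and second elements of the result correspond to the "Start of the longest period" and "End of the longest period" indicators, expressed as days after August 1st of year N-1.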
- Period with high amount of groomed snow (day): The number of days from August 1st of year N-1 to July 31st of year N fulfilling the condition "Snow water equivalent >= 120 kg m^-2", using a groomed snow simulation.
- Period with high amount of managed snow (day): Same definition, using a managed (groomed and machine made) snow simulation.
- Period with high amount of natural snow (day): Same definition, using a natural snow simulation.
- Period with high height of groomed snow (day): The number of days from August 1st of year N-1 to July 31st of year N fulfilling the condition "Snow depth >= 50 cm", using a groomed snow simulation.
- Period with high height of managed snow (day): Same definition, using a managed (groomed and machine made) snow simulation.
- Period with high height of natural snow (day): Same definition, using a natural snow simulation.
- Period with low height of groomed snow (day): The number of days from August 1st of year N-1 to July 31st of year N fulfilling the condition "Snow depth >= 5 cm", using a groomed snow simulation.
- Period with low height of managed snow (day): Same definition, using a managed (groomed and machine made) snow simulation.
- Period with low height of natural snow (day): Same definition, using a natural snow simulation.
- Period with medium amount of groomed snow (day): The number of days from August 1st of year N-1 to July 31st of year N fulfilling the condition "Snow water equivalent >= 100 kg m^-2", using a groomed snow simulation.
- Period with medium amount of managed snow (day): Same definition, using a managed (groomed and machine made) snow simulation.
- Period with medium amount of natural snow (day): Same definition, using a natural snow simulation.
- Period with medium height of groomed snow (day): The number of days from August 1st of year N-1 to July 31st of year N fulfilling the condition "Snow depth >= 30 cm", using a groomed snow simulation.
- Period with medium height of groomed snow between the fourth and tenth December (day): The number of days from December 4 of year N to December 10 of year N (included) fulfilling the condition "Snow depth >= 30 cm", using a groomed snow simulation. Maximum value is 7.
- Period with medium height of groomed snow between twenty second December and fourth January (day): The number of days from December 22 of year N-1 to January 4 of year N (included) fulfilling the condition "Snow depth >= 30 cm", using a groomed snow simulation. Maximum value is 14.
- Period with medium height of managed snow (day): The number of days from August 1st of year N-1 to July 31st of year N fulfilling the condition "Snow depth >= 30 cm", using a managed (groomed and machine made) snow simulation.
- Period with medium height of managed snow between the fourth and tenth December (day): The number of days from December 4 of year N to December 10 of year N (included) fulfilling the condition "Snow depth >= 30 cm", using a managed (groomed and machine made) snow simulation. Maximum value is 7.
- Period with medium height of managed snow between twenty second December and fourth January (day): The number of days from December 22 of year N-1 to January 4 of year N (included) fulfilling the condition "Snow depth >= 30 cm", using a managed (groomed and machine made) snow simulation. Maximum value is 14.
- Period with medium height of natural snow (day): The number of days from August 1st of year N-1 to July 31st of year N fulfilling the condition "Snow depth >= 30 cm", using a natural snow simulation.
- Period with medium height of natural snow between the fourth and tenth December (day): The number of days from December 4 of year N to December 10 of year N (included) fulfilling the condition "Snow depth >= 30 cm", using a natural snow simulation. Maximum value is 7.
- Period with medium height of natural snow between twenty second December and fourth January (day): The number of days from December 22 of year N-1 to January 4 of year N (included) fulfilling the condition "Snow depth >= 30 cm", using a natural snow simulation. Maximum value is 14.
- Snow making hours for WBT lower than -2°C (hour): Wet bulb temperature (WBT) is computed from temperature and relative humidity every 6 hours and interpolated linearly to an hourly time resolution. Expressed as the number of hours, from November 1st of year N-1 to December 31st of year N-1, for which wet bulb temperature is less than -2°C.
- Snow making hours for WBT lower than -5°C (hour): Same definition, for wet bulb temperature less than -5°C.
- Start of the longest period with groomed snow (day): Identified longest continuous period from August 1st of year N-1 to July 31st of year N where the snow depth is continuously above 30 cm, using a groomed snow simulation. The first date within this continuous period meeting the condition "Snow depth >= 30 cm" is the beginning of the season; the last date within this continuous period meeting this condition is the end of the season. In case only one date meets the condition, beginning of season and end of season are both attributed this value. In case no date meets the condition (i.e., snow depth is lower than 30 cm for the entire year), no date is attributed (value of 0). The value assigned is interpreted as the number of days after August 1st of year N-1.
- Start of the longest period with managed snow (day): Same definition, using a managed (groomed and machine made) snow simulation.
- Start of the longest period with natural snow (day): Same definition, using a natural snow simulation.
In case no date meets the condition (i.e., Snow depth is lower than 30 cm for the entire year), no date is attributed (value of 0). The value assigned is interpreted as the number of days after August 1st of year N-1. Total precipitation from kg m^ Cumulative value of snowfall and rain precipitation over the winter sports season (November year N-1 to April year N). November to April -2 Total snow precipitation from kg m^ Cumulative value of snowfall precipitation over the winter sports season (November year N-1 to April year N). November to April -2
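The "number of days" indicators above are simple threshold counts over a daily series. A minimal sketch of that arithmetic (the daily values and the helper name are invented for illustration, not part of the dataset documentation):

```python
def days_meeting_condition(daily_values, threshold):
    # Count days in an Aug 1 (year N-1) .. Jul 31 (year N) series whose
    # value meets or exceeds the threshold, as in the definitions above.
    return sum(1 for v in daily_values if v >= threshold)

# Hypothetical season of daily snow depths in cm: 120 snow-free days,
# then 90 days at 45 cm, then 155 days at 10 cm (365 days in total).
season = [0] * 120 + [45] * 90 + [10] * 155
print(days_meeting_condition(season, 30))  # 90  ("medium height", >= 30 cm)
print(days_meeting_condition(season, 5))   # 245 ("low height", >= 5 cm)
```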
Converting my time from hours:minutes into minutes when my time is in character format

Turning Hours:Minutes into Minutes: A Simple Guide for Character-Based Time

Let's say you have a time stored in a character format, like "10:30", and you want to convert it into total minutes. This is a common task in programming and data manipulation. Here's a breakdown of how to accomplish this, along with explanations and examples.

The Problem: You have a time string in the format "HH:MM" (hours:minutes) and you want to convert it into the total number of minutes.

Original Code (Python Example):

time_string = "10:30"
hours, minutes = time_string.split(":")
total_minutes = int(hours) * 60 + int(minutes)

1. Splitting the String: We use the split(":") method to separate the hours and minutes from the string. This creates a list where the first element is the hours and the second is the minutes.
2. Converting to Integers: We use int() to convert the string representations of hours and minutes into integers. This is necessary for mathematical calculations.
3. Calculating Total Minutes: We multiply the hours by 60 (minutes in an hour) and add the minutes. This gives us the total number of minutes.

Let's say our time string is "10:30". Here's how the calculation would work:
• hours = 10
• minutes = 30
• total_minutes = (10 * 60) + 30 = 630

So, 10:30 is equivalent to 630 minutes.

Additional Considerations:
• Error Handling: It's always a good idea to handle potential errors. For example, you could check if the input string is in the correct format (HH:MM) and handle invalid inputs gracefully.
• Time Libraries: For more complex time manipulations, consider using libraries like datetime in Python. These libraries offer a wealth of functions for working with dates and times.
Code Example (Python with error handling):

time_string = "10:30"
try:
    hours, minutes = time_string.split(":")
    total_minutes = int(hours) * 60 + int(minutes)
    print(f"Total minutes: {total_minutes}")
except ValueError:
    print("Invalid time format. Please use HH:MM")

Key Takeaways:
• Converting time strings to minutes is a common task in many programming scenarios.
• Splitting the string, converting to integers, and then applying the appropriate calculation is a straightforward approach.
• Consider using dedicated time libraries for more complex time operations.

By understanding this process, you can easily manipulate and convert time formats in your code!
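For the library-based route mentioned under Additional Considerations, here is a minimal sketch using Python's datetime module (the function name to_minutes is our own, not from the original post):

```python
from datetime import datetime

def to_minutes(time_string):
    # strptime validates the "HH:MM" shape for us and raises ValueError
    # on malformed input, so no manual format checking is needed.
    t = datetime.strptime(time_string, "%H:%M")
    return t.hour * 60 + t.minute

print(to_minutes("10:30"))  # 630
```

The advantage over manual splitting is that out-of-range values such as "25:99" are rejected automatically.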
354

5- We may cite an earlier result that every convergent sequence in Z+ is eventually constant, hence converges to the integer at which it is constant (Z+ contains all its limit points). Or we may show that its complement is a union of open sets, hence open: (-infty, 1) U (1, 2) U (2, 3) U (3, 4) U ...

16- A point in A is in every set which contains A, hence in the intersection of any such collection of sets. If x is not in A-, there is a neighborhood around it outside A-, since A- is closed (the complement of A- is open). Therefore x is not a boundary point of A-, hence not in A--.

If x is in (A U B)-, it is the limit of a sequence of points in A U B, hence of a subsequence in A or in B, hence x is in A- or B-. Conversely, if x is in A- (or B-), it is the limit of a sequence of points in A (or B), and that sequence lies in A U B.

If x is in (A ∩ B)-, it is the limit of a sequence which is in both A and B, hence x is in both A- and B-. For the converse, consider A = Q and B = R \ Q: the intersection (hence the closure of the intersection) is the empty set, but the closure of each is R.

17- Q

359

1- {(1 + 1/n, 2 + 1/n)}. Note that what is happening at the left endpoint is what is important; I could use any number greater than 2 for the right endpoint.

4- Take any open cover of F, and add to it the open set F' (the complement of F). The open cover of F together with F' provides an open cover of K, hence there is a finite subcover of K, and removing F' leaves a finite subcover of F.

9- Choose x_n in K_n for each n. Choose a convergent subsequence (one exists since K_1 is bounded, hence all K_i are bounded by the same bound). Let x be the limit of the convergent subsequence; it is in each K_n, since the tail of the subsequence is in each K_n (perhaps a later tail for larger n) and each K_n is closed.

Russell Campbell Sun Feb 15 1998
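The open-complement argument in problem 5 can be written out explicitly:

\[
\mathbb{R} \setminus \mathbb{Z}^{+} \;=\; (-\infty, 1) \cup \bigcup_{n=1}^{\infty} (n,\, n+1),
\]

a countable union of open intervals, hence open; therefore \(\mathbb{Z}^{+}\) is closed and contains all its limit points.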
Dominik Hose’s PhD thesis on Possibilistic Reasoning with Imprecise Probabilities

Posted on April 11, 2023 by Dominik Hose (edited by Henna Bains)

My thesis is about one of the simplest theories of imprecise probabilities, possibility theory, and the surprising powers and capabilities that come with it. According to the approach I adopt, this theory revolves around (what I call) an elementary possibility function. Its values may be understood as the upper probabilities of elementary events, and the induced possibility measure is just the supremum of this function on the sets/events in question. That simple! This definition is the entry point of my dissertation, which, in just one sentence, focuses on how such functions can be constructed from given information (or a lack thereof) and how they must be manipulated in order to account for new information in a statistical context. I explore the implications of this approach for various types of information, such as imprecise knowledge about moments or dependency/interaction, functional relationships between random variables, statistical models in combination with data, and finally dynamic filtering problems combining all of the above. Being an engineer, I am, of course, obliged to also provide the algorithms and numerical implementation strategies needed to make this theory come to life on a computer. The fundamental tool that allows me to do most of this, and the result of probably the single flash of true inspiration I had in my five years as a researcher, is the Imprecise-Probability-to-Possibility Transformation [2].

The Imprecise-Probability-to-Possibility Transformation

Scott Ferson repeatedly scolded me for choosing the lengthy name “Imprecise-Probability-to-Possibility Transformation” for something that is so fundamental to all of my work, but it was the obvious choice, stating precisely what it does: Inspired by the work of Didier Dubois et al.
on transforming single probability measures into a possibility measure [3], it tells us how to find an elementary possibility function that describes an arbitrary set of probability measures. That is, the set of probability measures dominated by the possibility measure induced by the former, aka its credal set, is a minimal outer approximation of the latter. You will probably want to read this sentence two or three times. It makes sense. I promise! Ok, I will explain it: We have an initial set of probability measures. From this set, we construct an elementary possibility function via the Imprecise-Probability-to-Possibility Transformation. This elementary possibility function induces a possibility measure, which dominates certain probability measures. The collection of all these dominated probability measures is called the credal set of the elementary possibility function. This credal set is a superset of the initial set of probability measures. Most of the readers having studied possibility theory will know that the credal sets of possibility measures always adhere to a certain geometry and, thus, we cannot generally make the credal set look exactly like the original set but we can find a ‘best’ possibilistic approximation via this transformation. The terms ‘best’ and ‘minimal’ are defined with respect to a given (plausibility) order of the elementary events, which to specify is the main difficulty when applying the Imprecise-Probability-to-Possibility Transformation. In fact, after studying the details, properties and implications of this transformation, the remainder of my dissertation often reduces to evaluating it under various combinations of sets of probabilities and plausibility orders. Possibility theory being a rather coarse theory of imprecise probabilities, this transformation is not very useful without a very specific reason to want to restrict one’s discussion to possibilities. 
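For a single discrete probability distribution (the precise special case underlying [3]), the optimal probability-to-possibility transform has a compact form: each outcome's possibility is the total probability of all outcomes that are no more probable than it. A sketch under the assumption that all probabilities are distinct (the function name is ours, and this is the precise-case precursor, not the imprecise transformation the thesis develops):

```python
def prob_to_poss(p):
    # For each probability p_i, sum all p_j with p_j <= p_i.
    # The most probable outcome always receives possibility 1.
    return [sum(q for q in p if q <= pi) for pi in p]

print(prob_to_poss([0.5, 0.3, 0.2]))  # [1.0, 0.5, 0.2]
```

The resulting possibility measure dominates the original probability measure, which is the defining property the imprecise generalization preserves.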
One (in my opinion very convincing) reason can be found in the context of statistical inference.

Possibilistic Inferential Models

The first part of my dissertation treats probabilities and possibilities as the (additive and maxitive) measures they are, which is as legitimate as it is a dry run in theory without much practical relevance. In the second part, I discuss the statistical meaning of possibility theory, providing a very straightforward access to the theory of inferential models put forth by Ryan Martin and his collaborators. To better understand it, I recommend checking out Ryan Martin’s SIPTA blog post about their book. The story of me and Ryan deserves some explanation: It was only through the Risk Institute Online series of talks organized by Scott Ferson’s Institute of Risk and Uncertainty of the University of Liverpool, at which I was invited to present my work, that I got to know about Ryan Martin, who had coincidentally presented a couple of weeks before me, and the theory of inferential models he had developed with his colleagues. I think both he and I immediately realized the close connections between his work and mine upon hearing each other’s talks, and what started off as genuine interest and appreciation in and of the other’s work had grown into a profound scientific exchange and collaboration by the end of my PhD. Without Ryan’s extensive groundwork [4], the second part of my thesis would have looked very different. By endowing me with his concept of ‘validity’ and uncovering that all you need to achieve it are, indeed, possibility measures, he had prepared the perfect setting for me to connect my measure-based view of possibilities, culminating in the Imprecise-Probability-to-Possibility Transformation, to his theory. By employing my transformation, I was able to find a more direct way of constructing inferential models, bypassing the previous and somewhat unsatisfactory (to me) approach based on the a-, p- and c-step.
A fundamental corollary of the validity criterion is the fact that the level sets of the confidence distributions (elementary possibility functions of unknown parameters) resulting from a valid inferential model are confidence sets (hence the name) in the sense of Neyman and Pearson, and the elementary values of a confidence distribution are special p-values. Even though I did not dare to state it as blatantly at that time, it is my fundamental conviction that most of frequentist inference is inherently possibilistic. Apart from the above observations, this claim is further substantiated by several properties I show, such as the fact that established rules for combining (independent and dependent) p-values can be used to construct multivariate possibility distributions under the aforementioned various types of independence/interaction, and vice versa. Regardless of whether this fundamental claim of mine is actually true or not, I definitely support Ryan’s initial argument that possibility theory is deeply connected to frequentism [5]. For instance, I show how to do arithmetic with confidence distributions (both in theory and on a computer) and am thereby reinventing Scott Ferson’s old idea of building fuzzy sets (aka possibility distributions) by stacking nested confidence sets and manipulating them according to the extension principle.

In the remainder of my dissertation, I derive possibilistic inferential models for filtering problems, where the goal is to infer the current state of a dynamical system, and I employ the resulting possibilistic filter in a localization problem, where a robot must determine its position by observing some landmarks, in order to demonstrate the practical applicability of the filter and of possibilistic inference in general.
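One of the established rules for combining independent p-values alluded to above is Fisher's method. A stdlib-only sketch (our illustration, not code from the thesis), using the closed-form chi-squared survival function available for even degrees of freedom:

```python
import math

def fisher_combined_p(pvals):
    # Fisher's method: X = -2 * sum(ln p_i) follows a chi-squared
    # distribution with 2k degrees of freedom under the global null.
    # For even df = 2k, P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!.
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    h = x / 2.0
    return math.exp(-h) * sum(h**i / math.factorial(i) for i in range(k))

print(round(fisher_combined_p([0.05, 0.05]), 4))  # 0.0175
```

Two marginally significant p-values of 0.05 combine to clear evidence, which is the behaviour such combination rules are designed to capture.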
The Aftermath

After reading my dissertation as part of my committee, Ryan gave me the biggest compliment that I have received in my entire (and, admittedly, brief) scientific career by following up on my ideas and developing a theory of statistical inference under partial prior information [6] with the Imprecise-Probability-to-Possibility Transformation as the (in my opinion) fundamental tool to construct the corresponding valid inferential models. To elaborate a little more on this, Ryan suggests viewing statistical inference in the context of a spectrum of prior information where frequentist inference (no prior information) lies on the one end and Bayesian inference (precise prior information) lies on the other end. In my dissertation, I focus on the former case. When he sent me his draft of the paper, I was struck by the beauty of this new idea and began spamming his inbox with questions, suggestions, and (being more familiar with the details of the implementation of the Imprecise-Probability-to-Possibility Transformation) results of example computations at a rate of approximately one mail every five minutes. I did not even give the poor guy time to respond to one mail before writing the next. Eventually, we joined forces and presented a joint paper [7] at the BELIEF 2022 conference, which I consider to be among the best I have been a part of. I would love to talk more about these ideas because I find them so very exciting, but this blog post is not the place for it, and I defer any future discussion to Ryan and you, dear readers, who I trust to give them the attention they deserve and make something great out of them.

During my time as an active researcher I was often asked why I was working on possibility theory instead of a more mature/powerful theory of imprecise probabilities. What I like about possibility theory is its simplicity and elegance because it describes uncertainty by just a single (elementary possibility) function.
In fact, the definition of possibility theory I adopt is not even its most general form since we can easily find possibility measures that are not induced by any elementary possibility function! Of course, this simplicity comes at the expense of expressiveness and generality; many of the readers of this blog post will be working on theories that are far more advanced. In fact, presenting my work on possibility theory at ISIPTA conferences etc. sometimes felt like being the toddler with a toy truck next to a big construction site. It is a testament to the kindness of the SIPTA community that I was never treated as such. My preference for possibility theory may best be explained by my introduction to the subject of imprecise probabilities, which was probably not the standard way—if there is such a thing. Being a student of Michael Hanss, who started his career as a researcher of fuzzy sets and fuzzy arithmetic, this was also the topic I started my PhD with. I got into imprecise probabilities—in particular, into possibility theory as a special case thereof—only by wanting to find objective criteria for choosing the fuzzy set membership function, a topic that never quite made sense to me in the fuzzy literature and was, in my opinion, carelessly neglected. After studying much, but by far not all, of the extensive work of Didier Dubois (Didier emphatically encouraged me to continue in this direction as he was trying to convince the established fuzzy community of striking new and unfamiliar paths with limited success; I witnessed this at the EUSFLAT 2019 conference) and his collaborators [8], I began to understand the connections between possibility theory and imprecise probabilities, and I suspected that there was a lot left to be discovered. In fact, I still do—just read the concluding chapter of my dissertation. By following that path, I found my playground. 
Michael, whose role in my doctoral endeavour cannot be overestimated, was very enthusiastic about me pursuing these new ideas and he gave me just the right amount of reinforcement and freedom to do so. Moreover, he never failed to challenge me with well-founded critical questions and to remind me that all the theory I was uncovering should be backed up by some practical applications. By providing my solutions to example problems from statistics and engineering, I aimed at answering his questions and demonstrating the practical relevance of possibility theory in my dissertation.

In conclusion, I firmly believe that there are very good reasons for possibility theory to be pursued and I hope to have convinced some of you that it deserves a spot among the many theories of imprecise probabilities. I will be following its future development from outside academia with great interest.

[1] Dominik Hose. Possibilistic Reasoning with Imprecise Probabilities: Statistical Inference and Dynamic Filtering. Shaker Verlag, 2022. https://dominikhose.github.io/dissertation/diss_dhose.pdf
[2] Dominik Hose and Michael Hanss. A universal approach to imprecise probabilities in possibility theory. International Journal of Approximate Reasoning, 133:133–158, 2021.
[3] Didier Dubois, Laurent Foulloy, Gilles Mauris, and Henri Prade. Probability-possibility transformations, triangular fuzzy sets, and probabilistic inequalities. Reliable Computing, 10(4):273–297.
[4] Ryan Martin and Chuanhai Liu. Inferential Models: Reasoning with Uncertainty. CRC Press, 2015.
[5] Ryan Martin. An imprecise-probabilistic characterization of frequentist statistical inference. arXiv preprint arXiv:2112.10904, 2021.
[6] Ryan Martin. Valid and efficient imprecise-probabilistic inference across a spectrum of partial prior information. arXiv preprint arXiv:2203.06703, 2022.
[7] Dominik Hose, Michael Hanss, and Ryan Martin. A practical strategy for valid partial prior-dependent possibilistic inference.
In International Conference on Belief Functions, pages 197–206. Springer, 2022.
[8] Didier Dubois. Possibility theory and statistical reasoning. Computational Statistics & Data Analysis, 51(1):47–69, 2006.
Differences between Doppler Centroid and frequency in an SLC image in dcEstimate in Original Product Metadata

Hi all, I am working on an SLC image and want to evaluate the Doppler centroid estimates. In the user guide of the "Sentinel Product Specification" I found the formula

DopplerCentroid = d0 + d1*(tsr - t0) + d2*(tsr - t0)^2.

If I plug the Doppler coefficients, the slant range times, and the t0 values into this formula, I get Doppler centroid values that differ from the frequency values (those in the fineDcList in SNAP, given with slant range time). What does this difference mean, and how can I interpret it? Moreover, should I use the Doppler coefficients of the geometry polynomial or of the data polynomial? And if these are three different types of frequencies (Doppler centroid geometry, Doppler centroid data, frequencies), how can I tell which one is right for my purposes? Thanks in advance
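Evaluating the quoted quadratic itself is straightforward; a sketch (the coefficient and time values below are made up for illustration, not taken from any real product annotation):

```python
def doppler_centroid(t_sr, t0, d0, d1, d2):
    # fDC(t) = d0 + d1*(t - t0) + d2*(t - t0)^2, the polynomial form
    # quoted above, with t_sr the slant range time and t0 the reference time.
    dt = t_sr - t0
    return d0 + d1 * dt + d2 * dt ** 2

# Hypothetical coefficients and slant-range time (seconds / Hz units assumed):
print(doppler_centroid(t_sr=5.4e-3, t0=5.3e-3, d0=35.0, d1=1.2e5, d2=-4.0e8))
```

Any remaining discrepancy with the fineDcList values would then come from the choice of coefficient set (geometry vs. data polynomial) or the reference time, not from the polynomial evaluation itself.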
A. G. Tyagunov (Ural Federal University)

UDC 519.8

A Diffusion Model of Cluster Evolution in a Heat-Resistant Nickel Alloy Metal Melt

doi: 10.18698/2309-3684-2023-2-332

In this work, a mathematical model of the thermo-temporal evolution of a cluster in the melt of the heat-resistant nickel alloy ZhS6U is constructed. An initial-boundary value problem with a moving boundary is formulated, which is solved numerically by the particle trajectory method, and a number of classical physical theories are used to describe the evolutionary processes. To check the accuracy of the model, a physical experiment is used to construct polytherms and isotherms of the electrical resistance of the alloy under consideration. It has been confirmed that the Brownian diffusion model and Drude's theory of conductivity are applicable to describing both the temporal and the temperature evolution of a cluster. The modeling approach based on "hard balls" also justified itself. According to the simulation results, in the temperature range from 1690 to 1752 K, the number of particles in the cluster varies from 5000 to 2000, and the average dynamic viscosity of the cluster varies from 3 × 10^10 to 2 × 10^10 Pa·s; it is assumed, however, that the central part is much denser than the periphery. The cluster radius varies from 24 to 18 Å, and the radius of the free zone around the cluster varies from 56 to 43 Å. Directions for further development of the model are determined.

Tyagunov A.G., Zeyde K.M., Milder O.B., Tarasov D.A. A diffusion model of cluster evolution in a metal melt of a heat-resistant nickel alloy. Mathematical Modeling and Numerical Methods, 2023, no. 2, pp. 3–32.
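As a rough illustration of the Brownian-diffusion ingredient mentioned in the abstract, the Stokes–Einstein relation links a spherical particle's diffusion coefficient to temperature, medium viscosity and particle radius. The numbers below are purely illustrative and are not taken from the paper:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(T, eta, r):
    # D = k_B * T / (6 * pi * eta * r) for a sphere of radius r (m)
    # in a medium of dynamic viscosity eta (Pa*s) at temperature T (K).
    return k_B * T / (6 * math.pi * eta * r)

# Illustrative values: T = 1700 K, an assumed melt viscosity of 5e-3 Pa*s,
# and a cluster radius of 2e-9 m (20 angstroms, the order quoted above).
print(f"D = {stokes_einstein_D(1700, 5e-3, 2e-9):.2e} m^2/s")
```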
Vedic Maths Benefits

India has given the world a great gift that is useful in many respects: Vedic Maths. We have all heard the term, but what is Vedic Maths all about? The name comes from the Sanskrit word Veda, which means knowledge. Shri Bharati Krishna Tirthaji is known as the father of Vedic Maths. Vedic Maths provides easy and convenient methods for difficult calculations, all of which can be carried out mentally. Learning Vedic Maths tricks and concepts at an early age helps your child build a strong foundation.

What are the benefits of Vedic Maths?

1. Helps in Simplifying Calculations
Maths is not always a child's favourite subject, as calculations can get tricky. With Vedic Maths, however, a child can use simple methods that help solve even complex calculations easily. It helps children complete their homework with ease and develop a liking for maths.

2. Enhances Ability with Numbers
Vedic Maths makes kids good with numbers. It helps them solve calculations such as roots, square roots, cubes and squares more easily. They become calm and confident when presented with calculations.

3. Improves Focus, Memory and Concentration
With Vedic Maths, a student performs most calculations mentally. This exercise improves focus, memory and concentration, and the development of these traits supports children's all-round growth.

Must-Read: How to Choose the Right Tutor for Your Child

4. Enables Faster Calculations
Vedic Maths enables quick calculations. Children learn formulae that help them calculate at a faster rate, with no use of calculators, which makes them even more proficient.

5. Better Performance in Competitive Exams
Competitive exams include maths sections that require quick calculations in order to complete the entire paper on time. With Vedic Maths, students can train themselves to use these tricks and skills in competitive exams and solve them more efficiently.
Vedic Maths can be learnt easily when you have the right tutors. It is a simplification of ordinary maths and can be learnt by anyone; it is recommended for all students above the age of 5. Vedic Maths can be applied in various areas such as technology, algebra and geometry. Keep in mind, though, that Vedic Maths does not teach the underlying philosophy or the background of the problem being solved. It is wrong to assume that Vedic Maths is a shortcut to learning maths; the process of learning maths is entirely different, and Vedic Maths should be used only after learning maths properly. Vedic Maths shows us how to make maths interesting for students. Maths can be a nightmare for students because of the nature of the calculations and the volume of numbers presented to them, and it is the subject in which students most often seek external tuition. With Vedic Maths, however, one can easily become proficient. Several students who achieved high ranks in maths Olympiads and scored well in maths exams testify to the value of Vedic Maths.
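To give one concrete flavour of such a trick: squaring a number ending in 5 (commonly filed under the sutra Ekadhikena Purvena, "by one more than the previous one"). A sketch in Python; the function name is ours, and the article itself does not present this code:

```python
def square_ending_in_5(n):
    # Vedic shortcut: for a number ending in 5, take the leading digits d,
    # compute d * (d + 1), and append 25.
    # e.g. 35^2 -> 3 * 4 = 12, append 25 -> 1225.
    assert n % 10 == 5
    d = n // 10
    return int(f"{d * (d + 1)}25")

print(square_ending_in_5(35))   # 1225
print(square_ending_in_5(105))  # 11025
```

The trick works because (10d + 5)^2 = 100·d·(d + 1) + 25, so the mental step is a single small multiplication.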
Eureka Math Grade 8 Module 6 End of Module Assessment Answer Key

Question 1.
The Kentucky Derby is a horse race held each year. The following scatter plot shows the speed of the winning horse at the Kentucky Derby each year between 1875 and 2012. Data Source: http://www.kentuckyderby.com/ (Note: Speeds were calculated based on times given on the website.)

a. Is the association between speed and year positive or negative? Give a possible explanation in the context of this problem for why the association behaves this way considering the variables involved.
The association is positive overall, as horses have been getting faster over time. This is perhaps due to improved training methods.

b. Comment on whether the association between speed and year is approximately linear, and then explain in the context of this problem why the form of the association (linear or not) makes sense considering the variables involved.
The association is not linear. There is probably a physical limit to how fast horses can go that we are approaching.

c. Circle an outlier in this scatter plot, and explain, in context, how and why the observation is unusual.
The winner that year was much slower than we could have predicted.

Question 2.
Students were asked to report their gender and how many times a day they typically wash their hands. Of the 738 males, 66 said they wash their hands at most once a day, 583 said two to seven times per day, and 89 said eight or more times per day. Of the 204 females, 2 said they wash their hands at most once a day, 160 said two to seven times per day, and 42 said eight or more times per day.

a. Summarize these data in a two-way table with rows corresponding to the three different frequency-of-hand-washing categories and columns corresponding to gender.

b.
Do these data suggest an association between gender and frequency of hand washing? Support your answer with appropriate calculations.
Males are more likely than females to wash their hands at most once per day. Females are more likely to wash eight or more times per day.

Question 3. Basketball players who score a lot of points also tend to be strong in other areas of the game, such as number of rebounds, number of blocks, number of steals, and number of assists. Below are scatter plots and linear models for professional NBA (National Basketball Association) players last season.

a. The line that models the association between points scored and number of rebounds is y = 21.54 + 3.833x, where y represents the number of points scored and x represents the number of rebounds. Give an interpretation, in context, of the slope of this line.
If the number of rebounds increases by one, we predict the number of points scored to increase by 3.833.

b. The equations on the previous page all show the number of points scored (y) as a function of the other variables. An increase in which of the variables (rebounds, blocks, steals, and assists) tends to have the largest impact on the predicted points scored by an NBA player?
Each additional block corresponds to 22.45 more points, the largest slope or rate of increase.

c. Which of the four linear models shown in the scatter plots on the previous page has the worst fit to the data? Explain how you know using the data.
Probably number of blocks, because the association is weaker. There is more scatter of the points away from the line.
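The "appropriate calculations" for Question 2 are conditional relative frequencies within each gender; a quick Python sketch (variable and category names are my own):

```python
# Two-way table from Question 2: hand-washing frequency by gender.
counts = {
    "male":   {"at_most_once": 66, "two_to_seven": 583, "eight_plus": 89},  # n = 738
    "female": {"at_most_once": 2,  "two_to_seven": 160, "eight_plus": 42},  # n = 204
}

# Conditional relative frequencies within each gender column.
for gender, row in counts.items():
    total = sum(row.values())
    for category, count in row.items():
        print(f"{gender:6s} {category:13s} {count / total:.3f}")
```

Roughly 8.9% of males but only about 1.0% of females report washing at most once a day, while about 20.6% of females versus 12.1% of males report eight or more times, which is exactly the association described in the answer.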
{"url":"https://ccssmathanswers.com/eureka-math-grade-8-module-6-end-of-module-assessment/","timestamp":"2024-11-07T02:41:12Z","content_type":"text/html","content_length":"252975","record_id":"<urn:uuid:7b030679-e229-4090-997a-ea92237f5c9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00775.warc.gz"}
Lesson 6 Estimating Probabilities Using Simulation 6.1: Which One Doesn’t Belong: Spinners (5 minutes) This warm-up prompts students to compare four images. It encourages students to explain their reasoning, hold mathematical conversations, and gives you the opportunity to hear how they use terminology and talk about characteristics of the images in comparison to one another. To allow all students to access the activity, each image has one obvious reason it does not belong. Encourage students to use appropriate terminology (e.g., the bottom left spinner is the only one with an outcome that has a probability greater than 0.5). During the discussion, listen for important ideas and terminology that will be helpful in upcoming work of the unit. Arrange students in groups of 2–4. Display the image for all to see. Ask students to indicate when they have noticed which image does not belong and can explain why. Give students 2 minutes of quiet think time and then time to share their thinking with their group. After everyone has conferred in groups, ask the group to offer at least one reason each image doesn’t belong. Follow with a whole-class discussion. Student Facing Which spinner doesn't belong? Activity Synthesis Ask each group to share one reason why a particular image does not belong. Record and display the responses for all to see. After each response, ask the class if they agree or disagree. Since there is no single correct answer to the question of which one does not belong, attend to students’ explanations and ensure the reasons given are correct. During the discussion, ask students to explain the meaning of any terminology they use, such as probability. Also, press students on unsubstantiated claims. 6.2: Diego’s Walk (20 minutes) In this activity, students estimate the probability of a real-world event by simulating the experience with a chance experiment (MP4).
Students see that multiple simulation methods can result in similar estimates for the probability of the actual event. Arrange students in groups of 3. Prepare each group with supplies for 1 type of simulation: choosing a slip from a bag, spinning a spinner, or rolling 2 number cubes. The supplies for each of these simulations include: • a bag containing a set of slips from the blackline master • a spinner cut from the blackline master, a pencil and a paper clip • 2 standard number cubes Set up the following simulation by telling the students: Diego must cross a busy intersection at a crosswalk on his way to school. Some days he is able to cross immediately or wait only a short while. Other days, he must wait for more than 1 minute for the signal to indicate he may cross the street. We will simulate his luck at this intersection using different methods and estimate his probability of waiting more than 1 minute. Teacher note: The bag of papers and spinner are designed to have a probability of 0.7 to wait more than 1 minute. The number cubes have a probability of approximately 0.72 to wait more than 1 minute. To the extent that the students are estimating the probabilities, these are close enough to give similar results. Give students 15 minutes for group work followed by a whole-class discussion. Representation: Internalize Comprehension. Check in with students after the first 3-5 minutes of work time. Check to make sure students have attended to all parts of the simulation to record one day on the graph. Supports accessibility for: Conceptual processing; Organization Student Facing Your teacher will give your group the supplies for one of the three different simulations. Follow these instructions to simulate 15 days of Diego’s walk. The first 3 days have been done for you. • Simulate one day: □ If your group gets a bag of papers, reach into the bag, and select one paper without looking inside.
□ If your group gets a spinner, spin the spinner, and see where it stops. □ If your group gets two number cubes, roll both cubes, and add the numbers that land face up. A sum of 2–8 means Diego has to wait. • Record in the table whether or not Diego had to wait more than 1 minute. • Calculate the total number of days and the cumulative fraction of days that Diego has had to wait so far. • On the graph, plot the number of days and the fraction that Diego has had to wait. Connect each point by a line. • If your group has the bag of papers, put the paper back into the bag, and shake the bag to mix up the papers. • Pass the supplies to the next person in the group. │ │Does Diego have│total number │ fraction │ │day│ to wait more │of days Diego│ of days Diego │ │ │than 1 minute? │ had to wait │ had to wait │ │1 │no │0 │\(\frac{0}{1} =\) 0.00 │ │2 │yes │1 │\(\frac{1}{2} =\) 0.50 │ │3 │yes │2 │\(\frac{2}{3} \approx\) 0.67 │ │4 │ │ │ │ │5 │ │ │ │ │6 │ │ │ │ │7 │ │ │ │ │8 │ │ │ │ │9 │ │ │ │ │10 │ │ │ │ │11 │ │ │ │ │12 │ │ │ │ │13 │ │ │ │ │14 │ │ │ │ │15 │ │ │ │ 1. Based on the data you have collected, do you think the fraction of days Diego has to wait after the 16th day will be closer to 0.9 or 0.7? Explain or show your reasoning. 2. Continue the simulation for 10 more days. Record your results in this table and on the graph from earlier. │ │Does Diego have │total number │ fraction │ │day│ to wait more │of days Diego│of days Diego│ │ │ than 1 minute? │ had to wait │ had to wait │ │16 │ │ │ │ │17 │ │ │ │ │18 │ │ │ │ │19 │ │ │ │ │20 │ │ │ │ │21 │ │ │ │ │22 │ │ │ │ │23 │ │ │ │ │24 │ │ │ │ │25 │ │ │ │ 3. What do you notice about the graph? 4. Based on the graph, estimate the probability that Diego will have to wait more than 1 minute to cross the crosswalk. Student Facing Are you ready for more? Let's look at why the values tend to not change much after doing the simulation many times. 1. After doing the simulation 4 times, a group finds that Diego had to wait 3 times. 
What is an estimate for the probability Diego has to wait based on these results? 1. If this group does the simulation 1 more time, what are the two possible outcomes for the fifth simulation? 2. For each possibility, estimate the probability Diego has to wait. 3. What are the differences between the possible estimates after 5 simulations and the estimate after 4 simulations? 2. After doing the simulation 20 times, this group finds that Diego had to wait 15 times. What is an estimate for the probability Diego has to wait based on these results? 1. If this group does the simulation 1 more time, what are the two possible outcomes for the twenty-first simulation? 2. For each possibility, estimate the probability Diego has to wait. 3. What are the differences between the possible estimates after 21 simulations and the estimate after 20 simulations? 3. Use these results to explain why a single result after many simulations does not affect the estimate as much as a single result after only a few simulations. Activity Synthesis The purpose of this discussion is for students to understand why simulations are useful in place of actual experiments. Select at least one group for each of the simulation methods to display the materials they used to run their simulation and explain the steps involved in using their materials. Ask students, "Why do you think these simulations are more useful than actually doing the experiment many times?" (It would take a lot of time and work for Diego to walk to school more than usual, but it is easy to do the simulation many times quickly.) Select students to share what they noticed about the graph of the fraction of days Diego had to wait as the simulated days went on. 6.3: Designing Experiments (10 minutes) In this activity, students have the opportunity to design their own simulations that could be used to estimate probabilities of real-life events (MP4). 
Students attend to precision (MP6) by assigning each possible outcome for the real-life experiment to a corresponding outcome in their simulation in such a way that the pair of outcomes have the same probability. In the discussion following the activity, students are asked to articulate how these simulations could be used to estimate probabilities of certain events. Keep students in groups of 3. Give students 5 minutes quiet work time to design their own experiments, followed by small-group discussion to compare answers for the situations and whole-class discussion. As students work, monitor for students who are using the same chance events for multiple scenarios (for example, always using a spinner) and encourage them to think about other ways to simulate the situations. Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Invite students to select 2–3 of the situations to complete. Supports accessibility for: Organization; Attention; Social-emotional skills Student Facing For each situation, describe a chance experiment that would fairly represent it. 1. Six people are going out to lunch together. One of them will be selected at random to choose which restaurant to go to. Who gets to choose? 2. After a robot stands up, it is equally likely to step forward with its left foot or its right foot. Which foot will it use for its first step? 3. In a computer game, there are three tunnels. Each time the level loads, the computer randomly selects one of the tunnels to lead to the castle. Which tunnel is it? 4. Your school is taking 4 buses of students on a field trip. Will you be assigned to the same bus that your math teacher is riding on? Anticipated Misconceptions Students may think that the number of outcomes in the sample space must be the same in the simulation as in the real-life situation. Ask students how we could use the results from the roll of a standard number cube to represent a situation with only two equally likely outcomes.
(By making use of some extra options to count as "roll again.") Activity Synthesis The purpose of this discussion is for students to think more deeply about the connections between the real-life experiment and the simulation. Select partners to share the simulations they designed for each of the situations. Some questions for discussion: • "How could a standard number cube be used to simulate the situation with the buses?" (Each bus is assigned a number 1 through 4. If the cube ends on 5 or 6, roll again.) • "If one of the buses was numbered with your math teacher's favorite number and you wanted to increase the probability of that bus being selected, how could you change the simulation to do this?" (Add more of the related outcome. For example, using the standard number cube as in the previous discussion question, the bus with the favorite number could be assigned numbers 4 and 5 while the other buses are still 1 through 3.) • Two of the tunnels in the video game lead to a swamp that ends the game. How could you use your simulation to estimate the probability of choosing one of those two tunnels? (Since all of the tunnels are equally likely to lead to the swamp, it can be assumed that "left" and "right" lead to the swamp. Spin the spinner many times and use the fraction of times it ends on "left" or "right" to estimate the probability of ending the game. It should happen \(\frac{2}{3}\) or about 67% of the time.) • "You and a friend are among the people going to lunch. How could you use the simulation you designed to estimate the probability that you or your friend will be the one to choose the restaurant?" (My friend and I will be represented by 1 and 2 on a number cube. Roll the number cube a lot of times and find the fraction of times 1 or 2 appear, then estimate the probability that we will be the ones selected.) Speaking: MLR8 Discussion Supports. Use this routine to support whole-class discussion. 
After students share the simulations they designed, display the following sentence frames to help students respond: "I agree because ….” or "I disagree because ….” Encourage students to use mathematical language to support their response. This will support rich and inclusive discussion about how to simulate a real-world situation using a simple experiment that reflects the probability of the actual event. Design Principle(s): Support sense-making, Cultivate conversation Lesson Synthesis Consider asking these discussion questions: • "What is a simulation?" • "Why might you want to run a simulation rather than the actual event?" (Simulations are easier and usually faster to do multiple times, so using them to get an estimate of the probability of an event is sometimes preferred.) • "If you conduct a few trial simulations of a situation and record the fraction of outcomes for which a particular event occurs, how might you know that you have done enough simulations to have a good estimate of the probability of that event happening?" (When the fractions seem to not be changing very much based on how accurate you want your estimate to be.) 6.4: Cool-down - Video Game Weather (5 minutes) Student Facing Sometimes it is easier to estimate a probability by doing a simulation. A simulation is an experiment that approximates a situation in the real world. Simulations are useful when it is hard or time-consuming to gather enough information to estimate the probability of some event. For example, imagine Andre has to transfer from one bus to another on the way to his music lesson. Most of the time he makes the transfer just fine, but sometimes the first bus is late and he misses the second bus. We could set up a simulation with slips of paper in a bag. Each paper is marked with a time when the first bus arrives at the transfer point. We select slips at random from the bag.
After many trials, we calculate the fraction of the times that he missed the bus to estimate the probability that he will miss the bus on a given day.
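The two-number-cube version of Diego's walk is also easy to run in code; a minimal Python sketch (the teacher note's 0.72 figure is the exact two-cube probability 26/36 of rolling a sum of 2–8):

```python
import random

def diego_waits(rng):
    """One simulated day: roll two number cubes; a sum of 2-8 means Diego waits."""
    return rng.randint(1, 6) + rng.randint(1, 6) <= 8

rng = random.Random(0)  # fixed seed so the run is reproducible
days = 10_000
waits = sum(diego_waits(rng) for _ in range(days))
estimate = waits / days
print(estimate)  # lands close to the exact value 26/36 = 0.722...
```

As in the lesson's graph, the running fraction wanders for small numbers of days and settles near the true probability as the number of simulated days grows.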
{"url":"https://curriculum.illustrativemathematics.org/MS/teachers/2/8/6/index.html","timestamp":"2024-11-07T21:58:06Z","content_type":"text/html","content_length":"103938","record_id":"<urn:uuid:645e6f34-ccb4-46ff-b82b-6ca1732b00cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00175.warc.gz"}
Thanos's teaching at UFRN Here you can find information regarding past, current, and sometimes near-future teaching of mine at Universidade Federal do Rio Grande do Norte. Information about previous teaching experiences can be found in my CV; I post reviews, grades & feedback from my students as soon as they become available to me. This semester (2024.2) List of courses taught (since 2016.1) FMC reformulation In collaboration with João Marcos and a few colleagues and students, we have created and defended this proposal, which was voted against by the committee of my department (NB: on that committee there were 0 profs working in related areas). Nevertheless, the modules we have designed for this proposal became part of the Computer Science programme of DIMAp (Department of Informatics and Applied Mathematics); and we have applied our work to our methodology for teaching the courses FMC1, FMC2, and FMC3 to computer science and IT students. My teaching of FMC1 & FMC2, since 2022.1, consists of the following sub-modules: • Introduction to Mathematical Proof (using the theory of integers) [IDMa] U1 of FMC1, taught 4h/week during the first half of the semester; • Introduction to Mathematical Proof (using the theory of real numbers) [IDMb] U2 of FMC1, taught 4h/week during the second half of the semester; • Introduction to Recursion and Induction (using functional programming) [IRI] U3 of FMC1, taught 2h/week during the whole semester; • Sets, Functions, Relations I [CFR1] U1 of FMC2, taught 4h/week during the first half of the semester; • Sets, Functions, Relations II [CFR2] U2 of FMC2, taught 4h/week during the second half of the semester; • Introduction to Algebraic Structures [IEA] U3 of FMC2, taught 2h/week during the whole semester. My FMC1 class of 2022.2 was the first for which I entirely adopted this work, and had by far the most successful results our course has seen so far. Same thing goes for my FMC2 class of 2023.1.
To help students who wish to self-study the material mentioned above—for any reason whatsoever—I have created sites (including playlists), better suited for this use: courses prepared for self-study. Teaching assistance projects Teaching assistance project for the theoretical computer science and pure math-oriented courses (notably Mathematical Foundations for Computation I, II, III). I created this project in 2017, and it has been successfully renewed with funding (scholarships) each year ever since (2017, 2018, 2019, 2020, 2021, 2022, 2023). (Info on my teaching assistants on a separate page.) Other related projects For students
{"url":"https://tsouanas.org/teaching/","timestamp":"2024-11-05T10:28:56Z","content_type":"text/html","content_length":"8247","record_id":"<urn:uuid:26499100-d4d4-49f1-a8f9-a034980155bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00797.warc.gz"}
Math Activities Math Activities and Teaching Resources for 8th Grade Eighth grade represents the end of an important chapter in every students’ educational journey. In math, these students are ready to tackle algebraic expressions, equations, and functions. They're solving for X like it's a treasure hunt, and mastering the art of graphing like they're Picasso with a pencil. From linear equations to quadratic formulas, these kids are learning to solve problems that are a little more complex than anything they’ve seen to date. With all these skills under their belts, they're well on their way to living their best prime number lives. Who knew math could be so fun? Enjoy this sampling of instructional videos, games, and activities for your 8th grade classroom! Some of the skills students will master in eSpark include: The Number System • Know that numbers that are not rational are called irrational. Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number. • Use rational approximations of irrational numbers to compare the size of irrational numbers, locate them approximately on a number line diagram, and estimate the value of expressions (e.g., pi²). Expressions and Equations • Know and apply the properties of integer exponents to generate equivalent numerical expressions. • Use square root and cube root symbols to represent solutions to equations of the form x² = p and x³ = p, where p is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that the square root of 2 is irrational. • Use numbers expressed in the form of a single digit times an integer power of 10 to estimate very large or very small quantities, and to express how many times as much one is than the other. 
• Perform operations with numbers expressed in scientific notation, including problems where both decimal and scientific notation are used. Use scientific notation and choose units of appropriate size for measurements of very large or very small quantities (e.g., use millimeters per year for seafloor spreading). Interpret scientific notation that has been generated by technology. • Graph proportional relationships, interpreting the unit rate as the slope of the graph. Compare two different proportional relationships represented in different ways. • Use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane; derive the equation y = mx for a line through the origin and the equation y = mx + b for a line intercepting the vertical axis at b. • Solve linear equations in one variable. • Analyze and solve pairs of simultaneous linear equations. Statistics and Probability • Construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describe patterns such as clustering, outliers, positive or negative association, linear association, and nonlinear association. • Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and informally assess the model fit by judging the closeness of the data points to the line. • Understand that patterns of association can also be seen in bivariate categorical data by displaying frequencies and relative frequencies in a two-way table. Construct and interpret a two-way table summarizing data on two categorical variables collected from the same subjects. Use relative frequencies calculated for rows or columns to describe possible association between the two variables. eSpark is truly unique in the world of online learning.
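One of the Number System standards listed above, converting a decimal expansion which repeats eventually into a rational number, can be illustrated in a few lines of Python (the helper name repeating_to_fraction is my own, and this sketch handles only purely repeating decimals of the form 0.d1...dk repeating):

```python
from fractions import Fraction

def repeating_to_fraction(repeating_digits: str) -> Fraction:
    """Convert 0.(d1 d2 ... dk) repeating to a fraction: digits / (10^k - 1)."""
    k = len(repeating_digits)
    return Fraction(int(repeating_digits), 10**k - 1)

print(repeating_to_fraction("45"))  # 0.454545... -> 5/11
print(repeating_to_fraction("3"))   # 0.333...    -> 1/3
```

This is the usual classroom algebra in code form: if x = 0.454545..., then 100x − x = 45, so x = 45/99 = 5/11, and Fraction performs the reduction automatically.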
Our holistic, student-centered approach blends the proven benefits of play-based learning with systematic, explicit, and direct instruction. It’s proof that learning can be fun, personalized, and effective, all at once! eSpark meets the criteria for evidence-based interventions under ESSA guidelines, and has been proven in multiple studies to improve student performance in math and reading. When you sign up for an eSpark account, your students experience these activities via adaptive, differentiated independent pathways and teacher-driven small group assignments. Teachers also have access to detailed usage and progress reports with valuable insights into standards mastery, student growth trends, and intervention opportunities. With the addition of the game-changing Choice Texts for the 2023-2024 school year, eSpark has cemented its status as the most loved supplemental instruction option for students and teachers alike. Claim your free account today and see the difference for yourself!
{"url":"https://www.esparklearning.com/activities/8th-grade/math/video-activities/teks/","timestamp":"2024-11-15T04:42:38Z","content_type":"text/html","content_length":"1049078","record_id":"<urn:uuid:31ad18ec-ec0c-4488-9ce3-f7697642d64e>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00781.warc.gz"}
I. Murakami, T. Kato, U.I. Safronova and A.A. Vasilyev Dielectronic Recombination Rate Coefficients to Excited States of Boronlike Oxygen and Dielectronic Satellite Lines Date of publication: May 2004 Key words: boronlike oxygen, dielectronic recombination rate coefficients, energy levels, radiative transition probabilities, autoionization rates, excited states, dielectronic satellite lines Energy levels, radiative transition probabilities, and autoionization rates for B-like oxygen (O^3+) including 1s^2 2s^2 nl, 1s^2 2s2p nl, and 1s^2 2p^2 nl (n=2-8, l ≤ n-1) states were calculated by the multiconfigurational Hartree-Fock method (Cowan code) and the relativistic many-body perturbation theory method (RMBPT code). Autoionizing levels above three thresholds (1s^2 2s^2 ^1S, 1s^2 2s2p ^3P, 1s^2 2s2p ^1P) were considered. Configuration mixing (2s^2 nl + 2p^2 nl) plays an important role for all atomic characteristics. Branching ratios relative to the first threshold and intensity factors were calculated for satellite lines, and dielectronic recombination rate coefficients were calculated for the 105 excited odd-parity and 94 even-parity states. The dielectronic recombination rate coefficients were calculated including 1s^2 2s^2 nl, 1s^2 2s2p nl, and 1s^2 2p^2 nl (n=2-8, l ≤ n-1) states. The contributions from excited states higher than n=8 were estimated by extrapolation of all atomic characteristics to derive the total dielectronic recombination rate coefficient. The orbital angular momentum quantum number l distribution of the rate coefficients shows a peak at l=5. The total dielectronic recombination rate coefficient was derived as a function of electron temperature. The dielectronic satellite lines were also obtained.
The state-selective dielectronic recombination rate coefficients to excited states of B-like oxygen were obtained, which are useful for modeling O IV spectral lines in a recombining plasma.

National Institute for Fusion Science (NIFS), 322-6 Oroshi-cho, Toki, Gifu 509-5292, Japan
{"url":"https://www.nifs.ac.jp/report/nifs-data085.html","timestamp":"2024-11-03T00:11:07Z","content_type":"text/html","content_length":"4145","record_id":"<urn:uuid:6c7c8d6f-386a-4639-ac5d-8fb81271692f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00380.warc.gz"}
Vladimir Boginski The celebrated Motzkin-Straus formulation for the maximum clique problem provides a nontrivial characterization of the clique number of a graph in terms of the maximum value of a nonconvex quadratic function over a standard simplex. It was originally developed as a way of proving Turán’s theorem in graph theory, but was later used to develop … Read more A Cutting Plane Method for Risk-constrained Traveling Salesman Problem with Random Arc Costs This paper considers the risk-constrained stochastic traveling salesman problem with random arc costs. In the context of stochastic arc costs, the deterministic traveling salesman problem’s optimal solutions would be ineffective because the selected route might be exposed to a greater risk where the actual cost can exceed the resource limit in extreme scenarios. We present … Read more
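For readers unfamiliar with the Motzkin-Straus formulation mentioned above: for a graph G with adjacency matrix A and clique number ω(G), the maximum of x^T A x over the standard simplex equals 1 − 1/ω(G), attained at the uniform distribution on a maximum clique. A small numerical check in Python (the triangle K3 is my own illustrative choice, not an example from the listed papers):

```python
def quadratic_form(A, x):
    """Evaluate x^T A x for a matrix A given as nested lists."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Adjacency matrix of the triangle K3, whose clique number is 3.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

# Motzkin-Straus: the simplex maximum is attained at the uniform
# distribution on a maximum clique, here x = (1/3, 1/3, 1/3).
x = [1/3, 1/3, 1/3]
value = quadratic_form(A, x)
print(value)  # 2/3, i.e. 1 - 1/omega with omega = 3
```

This is what makes the formulation useful for clique algorithms: the combinatorial quantity ω(G) is recovered from the optimum of a continuous (though nonconvex) program.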
{"url":"https://optimization-online.org/author/vladimir-boginski/","timestamp":"2024-11-04T15:00:51Z","content_type":"text/html","content_length":"86303","record_id":"<urn:uuid:49698c55-d601-4d62-9a73-4f385af2a72c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00488.warc.gz"}
How To Factor A Perfect Cube
A perfect cube is a number that can be written as a^3. When factoring a perfect cube, you would get a * a * a, where "a" is the base. Two common factoring procedures dealing with perfect cubes are factoring sums and differences of perfect cubes. To do this, you will need to factor the sum or difference into a binomial (two-term) and trinomial (three-term) expression. You can use the acronym "SOAP" to assist in factoring the sum or difference. SOAP refers to the signs of the factored expression from left to right, with the binomial first, and stands for "Same," "Opposite" and "Always Positive."
Step 1: Rewrite the terms so that they are both written in the form (x)^3, giving you an equation that looks like a^3 + b^3 or a^3 - b^3. For example, given x^3 - 27, rewrite this as x^3 - 3^3.
Step 2: Use SOAP to factor the expression into a binomial and trinomial. In SOAP, "same" refers to the fact that the sign between the two terms in the binomial portion of the factors will be positive if it is a sum and negative if it is a difference. "Opposite" refers to the fact that the sign between the first two terms of the trinomial portion of the factors will be the opposite of the sign of the unfactored expression. "Always positive" means that the last term in the trinomial will always be positive. If you had a sum a^3 + b^3, then this would become (a + b)(a^2 - ab + b^2), and if you had a difference a^3 - b^3, then this would be (a - b)(a^2 + ab + b^2). Using the example, you would get (x - 3)(x^2 + x*3 + 3^2).
Step 3: Clean up the expression. You may need to rewrite numerical terms with exponents without them and rewrite any coefficients, like the 3 in x*3, in the proper order. In the example, (x - 3)(x^2 + x*3 + 3^2) would become (x - 3)(x^2 + 3x + 9).
Cite This Article: Wedel, Kristy. "How To Factor A Perfect Cube." sciencing.com, 24 April 2017, https://www.sciencing.com/factor-perfect-cube-8240884/.
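The SOAP factorizations can be sanity-checked numerically; a short Python sketch (function names are my own) verifying the worked example and the difference-of-cubes identity at several sample values:

```python
def difference_of_cubes(a, b):
    """The unfactored expression a^3 - b^3."""
    return a**3 - b**3

def factored_form(a, b):
    # SOAP for a difference: Same sign (a - b), Opposite sign +ab, Always Positive +b^2.
    return (a - b) * (a**2 + a*b + b**2)

# The worked example x^3 - 27 = (x - 3)(x^2 + 3x + 9), checked at sample values of x:
for x in [-2, 0, 1, 5, 10]:
    assert difference_of_cubes(x, 3) == factored_form(x, 3)
print("identity holds at all sampled points")
```

The same check works for a sum of cubes by flipping the signs as SOAP prescribes: (a + b)(a^2 − ab + b^2) expands back to a^3 + b^3.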
{"url":"https://www.sciencing.com:443/factor-perfect-cube-8240884/","timestamp":"2024-11-07T18:48:43Z","content_type":"application/xhtml+xml","content_length":"70729","record_id":"<urn:uuid:b424b14c-d07e-4b76-a7b8-8c4f58a37a38>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00284.warc.gz"}
Quarantine Day 23: Facts Do Not Cease to Exist Because They Are Ignored For the past two weeks we’ve been using data put out by the World Health Organization and the Centers for Disease Control to chart not only how the COVID virus has spread, but also to understand projections being made by federal and academic models. Utilizing a simple exponential growth formula, we demonstrated that estimates inferred by federal and academic models do not seem to align with actual COVID data. Years ago I had the opportunity to participate in an intensive two-year Six Sigma training academy at Bechtel, the world’s largest and most smartly run engineering company. One of the first things they taught us was that to be a successful change agent you had to “talk truth to power,” and the way to do that is through data and going where analysis of data takes you, even when contrary to popular perceptions. For two weeks now Americans have been inundated with sensational governmental and academic claims about the number of anticipated COVID deaths on the mystical apex day of April 14th, while our analysis has led to a different conclusion. Our argument has relied on simple mathematical constructs and exclusively on past performance data. We’ve been upfront acknowledging that federal and academic models rely on more sophisticated mathematical formulations and incorporate additional informational clairvoyance we’re not privy to. In other words, their models should be of higher accuracy and fidelity. That being said, data is as data does, and you can’t outmaneuver data with gee-whiz mathematics and powerful persuasion. Today we reassess the data from a fresh perspective to see if we either validate federal and academic projections or further vindicate our assertions. To do this, however, we’ll have to dip our toes into the deep end of the pool. I recognize that my fellow Los Alamos PhDs and Bechtel Black Belts will have no trouble following my logic, but I’m not writing this for you.
Rather, I'm presenting this for the teenage girl who commented yesterday on how cool she thought our breakdown of the virus data has been and wondered what she could study in college to learn how to do stuff like this. While she could of course study math, I've never met an interesting mathematician, so I told her if she wanted a life of adventure surrounded by interesting people doing exciting stuff she should study engineering. The folks in academia should be able to follow our approach, but probably the only person in media who can keep up is Michael Savage, who has a PhD in epidemiology and, like me, studied at UC Berkeley. For the rest of you, I apologize in advance if things get wonky; I'll strive to prevent that from happening.

Our exploration will determine whether or not federal and academic models that project upwards of 250,000 US deaths from COVID by April 14th are realistic. To recap, recall our simple mathematical model based on exponential growth; namely, f(x) = a^x, which reads as "the function f(x) equals the base number a, raised to the x power." When doctors Fauci and Birx projected on April 1st that by April 14th the US would realize 250,000 deaths, we utilized our simple mathematical formula to derive the value that the base number a had to be in order for the Fauci/Birx projection to be valid, and found that a = 2.43. Combined with a value of x = 14, determined from the 14 days between April 1st and April 14th, this gave us f(x) = a^x = 2.43^14 = 250,316.

A few days later, while still trying to make sense of the Fauci/Birx model, we showed that if their value for the variable a were reduced 11% to a = 2.165, then f(x) is reduced to f(x) = 49,706, which we asserted, through existing data, would be the projected number of US deaths by April 14th if the US experienced a scenario similar to Italy.
We then discussed how future values of the variable a can never be known with certainty, but that data can be used to precisely calculate previous values. Based on those prior discussions, we're now ready to determine what the daily values of the variable a would have to have been each day between March 24th and April 7th in order for f(x) to equal the published number of COVID deaths on each day.

The basis for the value of the variable x in our equation f(x) = a^x is that the first US COVID fatality occurred on February 29th. So, for example, if we calculate what the value of the variable a was on April 7th, then x = 39, since April 7th is 39 days after February 29th.

The table above shows both the global and national values required for the variable a, from March 24th to April 7th, for f(x) to match the number of respective COVID deaths on each of those days. The rightmost column contains a projection of what the number of US deaths would be on April 14th if the value of a on each date continued until April 14th. For these calculations, x = 46, since there are 46 days between February 29th and April 14th.

Before discussing those projections, though, let's look at what the global and national values of the variable a from March 24th until April 7th indicate. First, notice in the above table that the global values of the variable a are constant to two decimal places for all dates. This suggests that the global death rate, at least for now, has achieved steady state. This is a vastly different conclusion than the ones being touted by federal, academic, and international models. If that weren't disconcerting enough, when we look at the national values of the variable a, not only are they not trending upward in an exponential manner as federal and academic models assert they should, they've been trending steadily downward since March 24th.
This finding is diametrically opposite of what federal and academic models contend and what powerful people in government and media expound. FYI, this is where truth to power requires fortitude.

The rightmost column in the table above projects the number of COVID deaths in the US by April 14th, applying our equation f(x) = a^x on each date using that day's value of a, with x extended out to April 14th. For example, if the value a = 1.3071 from March 24th were used on March 24th to project the number of COVID deaths in the US by April 14th, the estimate would be f(x) = a^x = 1.3071^46 = 224,195. Similarly, if the March 29th value of a = 1.2923 were applied on March 29th to estimate the number of COVID deaths in the US by April 14th, the result would be f(x) = a^x = 1.2923^46 = 132,531. Likewise, if the most recent value from April 7th, a = 1.2688, is used to estimate the number of COVID deaths in the US by April 14th, f(x) = a^x = 1.2688^46 = 57,078.

Do you feel the drama and excitement building from these three examples as we near our exciting conclusion? Let's raise the curtain and let Carol Merrill tell us what's behind door number three: this final estimate for f(x) is quickly approaching the number we first projected seven days ago.

To date, there have been 10,781 COVID deaths in the US. Federal and academic models project that the number will climb to 250,000 by April 14th, while our simple model suggests that, based on current virus performance, the number is more likely to be around 50,000. The results of the above analysis seem to invalidate federal and academic projections while at the same time validating the projections we first made on April 1st. As verification that our observation highlighted in the plot above concerning the downward trend in the variable a is correct, the University of Washington today revised their model projections downward from 200,000 to 80,000.
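The back-calculation and projections described above are easy to reproduce. A minimal Python sketch (the function names are mine; the figures are taken from the text):

```python
# Back-solve the daily base a from cumulative deaths, then project forward.
# f(x) = a**x, with x = days since the first US fatality on February 29th.

def implied_base(deaths, days_since_feb29):
    # Solve deaths = a**x for a.
    return deaths ** (1.0 / days_since_feb29)

def project(a, days):
    # Cumulative deaths after `days` days at a constant base a.
    return a ** days

# April 7th: 10,781 cumulative deaths, 39 days after February 29th.
a_apr7 = implied_base(10_781, 39)   # roughly 1.2688
apr14 = project(a_apr7, 46)         # roughly 57,000
```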
I applaud UW for their willingness to reconsider evidence. While your model is converging toward what the data indicates, you're not there yet. The federal model has yet to be revised downward, but smatterings of official backtracking suggest they know it should be. The question is, can government bring itself to admit the fallacy of earlier sensationalism? I predict that in the coming days their models will be revised downward alongside claims regarding their prowess in enacting mitigation measures. But then again, in politics pride truly does goeth before the fall.

To conclude, today's analysis of COVID-19 data continues to assert that we've been right from the start. Thanks for sticking with me through this morass of mathematics; I apologize if along our journey to this very interesting conclusion things got a bit convoluted. I encourage you at your leisure to study the table and plot above; hopefully then the logic we employed will make better sense.

Note: While the trend over the past two weeks in the rate of growth of COVID deaths in the US has been downward, the US can still experience an upward trajectory, i.e., we're not out of the woods yet. As I have repeatedly stated, we don't have access to the informational clairvoyance federal models use, so our analysis relies exclusively on past performance. But, as Aldous Huxley noted, "facts do not cease to exist because they are ignored."
Undergraduate | The Department of Mathematics | Columbian College of Arts & Sciences | The George Washington University Mathematics is a fascinating subject that offers a plethora of captivating intellectual challenges as well as valuable opportunities. It fosters powerful analytic skills and mental agility, and it opens up a wealth of career options. You can be a mathematics major, whether having it as your sole major or one of two majors. The connections between mathematics and technical fields such as physics, economics, engineering, computer science and biology present natural pairings that enrich both areas of study. A second major can also be an unrelated field. If it fits your constraints better, you can be a math minor. Whichever path best fits your plans and goals, you are welcome in our department. We invite and encourage you to discuss your academic and career goals with our advisors, who can help you plan a program of study in mathematics that best matches your interests. As a mathematics major or minor, you can choose from a wide selection of courses, including abstract algebra, real and complex analysis, topology, logic, set theory, combinatorics, number theory, differential equations, differential geometry, numerical analysis, financial mathematics and mathematical modeling. Advanced undergraduates may choose to extend the range of options by taking independent studies and graduate classes. To offer you more options, the undergraduate mathematics major has three concentrations: pure, applied and interdisciplinary. The three concentrations differ in their emphasis, but all are designed to give you a solid background in the theory and practice of modern mathematics. In each concentration, you can choose either a Bachelor of Science (BS), which requires 45 credit hours of approved coursework, or a Bachelor of Arts (BA), which requires 39 credit hours. 
Those who may want to pursue graduate study in mathematics would be best served by the more theoretical focus in the pure and applied concentrations, and by the greater coursework in the BS option. The interdisciplinary concentration is intended primarily for those who wish to enter the job market immediately after graduation; it provides preparation for careers as mathematicians in government and industrial settings where mathematical modeling and computation play a large role. In each concentration, the BA option makes it easier to complete the required courses in a second major or to pursue several minors.

Award-Winning Instruction

Students learn from professors like 2023 Writing in the Discipline Award Winner for Best Assignment Design Professor Joseph Bonin (pictured at center). Professor Bonin accepted his award from President Mark S. Wrighton (left) and Vice President for Academic Affairs Christopher Alan Bracey.
Collision detection I: The Gilbert-Johnson-Keerthi

It's fair to say that collision detection and resolution are arguably the most important parts of any physics library. A good collision system can make or break any library or game. Several algorithms exist for it, with GJK being one of the most popular ones. The collision detection system in my library has 3 steps to build a full manifold, with the first step being GJK.

1. Introduction
2. Terminology
3. Code
4. References

So what is GJK? It was first presented in 1988 as a distance algorithm between two convex shapes. When using it for collision detection we are essentially checking for 0 distance, which means the shapes are touching or penetrating each other, and therefore collide.

Advantages and disadvantages

Pros:
- Very fast, due to the early-outs it allows.
- Works with any convex shape.

Cons:
- By default, it only gives a yes/no answer. There are ways to get more information out of it, especially when the penetration is small, but that'll not be covered here.
- Might be hard to grasp at first.

What does GJK do exactly

- Creates a new shape from the two objects we're checking for collision.
- Does a set of dot and cross products to determine if the shape contains the origin.
- If it does, the objects collide.

Convex vs Concave shapes

Previously I mentioned convex shapes. In short, with convex shapes you cannot draw a line between any two points that goes outside the shape. In elementary school you might've learned it as "you cannot hide inside it". :)

Minkowski sum and difference

Throughout GJK, we will be working with points on the edge of the Minkowski difference. The simple definition of it is "subtracting all points of object B from all points of object A". This does not refer to all vertices but the infinite amount of points on each object. When the objects collide, the Minkowski difference contains the origin, which means they have at least one mutual point. This is because: (x1, y1, z1) - (x2, y2, z2) = (0, 0, 0).
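The "difference contains the origin" property is easy to sanity-check in one dimension, where every convex shape is just an interval. A small illustrative Python sketch (not from any library; the names are mine):

```python
# 1D Minkowski difference of intervals a and b: every point of b
# subtracted from every point of a gives [a_min - b_max, a_max - b_min].
def minkowski_difference(a, b):
    (a_min, a_max), (b_min, b_max) = a, b
    return (a_min - b_max, a_max - b_min)

# The intervals touch or overlap exactly when the difference contains 0.
def collide(a, b):
    lo, hi = minkowski_difference(a, b)
    return lo <= 0 <= hi
```

For example, collide((0, 2), (1, 3)) holds because the difference is (-3, 1), which straddles the origin, while collide((0, 1), (2, 3)) fails because the difference is (-3, -1).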
This is the above-mentioned new shape.

The simplex

In short, a simplex is the lowest amount of points that are needed in a given dimension. In n dimensions, the number is always n+1. Going from 1D to 3D, we first have a line, then a triangle, then a tetrahedron. As calculating the full Minkowski difference is expensive, we will try to build a simplex inside it and see if it encloses the origin. Every point of the simplex is a point on the Minkowski difference.

Support functions

Support functions give us "the farthest point on the object in a given direction". We will use this to get new points on the edge of the Minkowski difference and therefore extend our simplex. What shapes your GJK works with depends on what support functions you write. Below are two of the simplest shapes, but I will detail more at the end of the article.

The Minkowski difference is defined as A - B, therefore the points on it will also be defined as A.Support(direction) - B.Support(-direction). (We use -direction because while drawing the Minkowski difference we first flip one of the objects.) As a support point is the farthest point on an object, it'll be a vertex on the edge of the Minkowski difference, which is how we will build our simplex.

The Algorithm

The Goal

Our main goal is to calculate a simplex inside the Minkowski difference and see if it encloses the origin. While adding points to the simplex we'll be using dot products to make sure that each new point added to the simplex can still be "seen" from the origin, and therefore has a chance to help enclose the origin. If at one iteration we end up with a direction that has no chance of producing such points, the algorithm does an early-out. Otherwise (since we're working in 3D) it'll build a tetrahedron that encloses the origin.

The main loop

Every loop we add a new point to our simplex. Each support point added needs to be visible from the origin in the search direction.
In this picture, you can see that the direction (the arrow) points in the direction of the origin. This means that it has a positive dot product with the last added support point. If it is not visible (the dot product is negative), we have no intersection, as searching in that direction would not help us enclose the origin.

The first point

For this, we can use any normalized direction we want (e.g. (1, 0, 0)). The support point gets added to our simplex and the new search direction is the opposite of the support point. This is so we can check which "side" of the support point the origin is on in the next step.

The second point: a line

Now we have a line between A and B, with A being the latest point that was added. Now our world is divided up into 3 regions: the area past B (the first added point), the area between A and B, and the area past A. Our goal is to figure out which of these 3 the origin is in.

1. The area past B

Remember that for the first point, the new search direction was the opposite of B, and from the dot product in the main loop we know that A lies in that direction, toward the origin. Therefore the area past B cannot contain the origin.

2. The area between A and B

To see if this is the area that contains the origin, we need to dot AB with AO. As AO is pointing towards the origin, a positive dot product means that the origin is somewhere in this area. In this case, our new search direction is perpendicular to AB (the black arrow on the picture). The reason for this is that this area can now be divided up into 2 new regions using this vector. As our cross product includes AO, a vector that points towards the origin, we know that the origin will be in the resulting area.

3. The area past A

If the origin is not between A and B (and it is definitely not past B), it has to be past A. In this case, B is not in the area that encloses the origin and therefore is not useful for our simplex, so we discard it. Now we are back to our 1-simplex.
The third point: a triangle

We have 8 regions here: 3 past each side of the triangle, 3 past each point, and (since we're in 3D) the areas above and below the triangle. However, similarly to the line case, we can exclude some of them.

As before, A is once again the latest point added. This means that we can exclude the areas past B and C (as those were checked in the previous case) and the area past CB, as the cross products in the previous step determined which side of the line the origin is on. The normal of each side is determined by a cross product: ABC refers to the normal of the triangle, so crossing it with the sides gives us the normal vector perpendicular to each side.

1. The area between the first and the last point added

Here we have 2 areas to check: the area in the direction of ABCxAC and behind it. As previously, we dot the normal (ABCxAC) with AO, the vector from the last support point to the origin. There are two options here: the area past AC or the area towards B. Dotting AC and AO helps determine this. If it is in the area past AC, we discard B and our new search direction is perpendicular to AC. If not, we discard C and fall back to the line case with the same search direction.

2. The area between the second and last point added

Similar to the previous case, we have 2 areas to check: the one in the direction of ABxABC and the triangle's area. If ABxABC is in the direction of AO, C is discarded and we fall back to the line case. Otherwise the triangle's area contains the origin and we need to see which side of it the origin is on.

3. The areas above and below the triangle

In this case we once again have two areas to check: above and below the triangle. To figure out which side the origin is on, we simply dot ABC with AO. If it's above (a positive dot product), the new search direction is ABC. If it is below, the triangle is flipped, so the new simplex is (A, C, B) and the new search direction is -ABC. Now we can proceed to the last phase, which is a tetrahedron.
The fourth point: a tetrahedron

We have 4 regions here: the 4 sides of a tetrahedron. We can exclude the BCD triangle, as we already know that the origin is "above" it (that is, it points towards the latest support point). We need to check whether, when dotted with AO, all triangle normals point inside the tetrahedron. If any of the normals point away from the origin, we discard the point that is not part of that triangle and fall back to the triangle case. At the end, we'll end up with one of these situations: as we've learned before, if the tetrahedron encloses the origin (picture on the left), we have a collision. And that's it for the main algorithm! Now let's look at some code:

The Code

The code shown below is from my own physics library called Cinkes. Some lines were simplified for easier understanding.

Our simplex in 3D consists of 4 points, which is the space we reserve for it in the beginning. Note that when adding a new point we always add it as the first point of the array. This is because an important thing we need to know is the order the points were added to the simplex. Knowing that the one at index 0 is the latest (the one called A in the previous section) makes our work a lot easier.

```cpp
class CSimplex
{
public:
    std::vector<CVector3> m_Points;

    CSimplex() { m_Points.reserve(4); }

    void Push_Front(const CVector3& a_Vector3)
    {
        m_Points.insert(m_Points.begin(), a_Vector3);
    }
};
```

The main loop

We start the main loop by making sure our new search direction (next) is normalized, then add the new support point, returning false if we moved away from the origin. The NextPoint() function is essentially our algorithm. It keeps returning false until it gets to the tetrahedron case, where it can definitely determine whether or not there is a collision.
```cpp
// Inside GJK's main loop: get a new support point on the Minkowski
// difference and hand the simplex to NextPoint().
CVector3 A = a_Object1->Support(next);
CVector3 B = a_Object2->Support(next * -1);
CVector3 support = A - B;

if (support.Dot(next) <= 0)
{
    // We moved away from the origin: no intersection.
    a_Simplex = simplex;
    return false;
}

simplex.Push_Front(support); // add the new point to the front of the simplex

if (NextPoint(simplex, next))
{
    a_Simplex = simplex;
    return true;
}
```

This is a rather simple function and its only task is to determine which function to call, based on the size of the simplex:

```cpp
bool NextPoint(CSimplex& a_Simplex, CVector3& a_Direction)
{
    switch (a_Simplex.Size())
    {
    case 2:  return Line(a_Simplex, a_Direction);
    case 3:  return Triangle(a_Simplex, a_Direction);
    case 4:  return Tetrahedron(a_Simplex, a_Direction);
    default: return false;
    }
}
```

This is something you'll see often in the following functions. Its only task is to determine whether or not a vector is in the general direction of the new support point:

```cpp
bool SameDirection(const CVector3& a_Direction, const CVector3& a_AO)
{
    return a_Direction.Dot(a_AO) > 0;
}
```

The first point

For the first point you can use any search direction you want. In the case below I've chosen (1, 0, 0), but it can be any normalized vector.

```cpp
CVector3 next = CVector3(1, 0, 0);
CVector3 A = a_Object1->Support(next);
CVector3 B = a_Object2->Support(next * -1);
CVector3 support = A - B;

CSimplex simplex;
simplex.Push_Front(support); // the first point of our simplex
next = support * -1;
```

The second point: a line

For simplicity, we'll always shorten the simplex points to a, b, c, d variables. Here you can also see the effect of using Push_Front(), as we know that index 0 denotes a. We also use the SameDirection() function as a shorthand for the dot products. As this case (and the triangle) cannot determine whether or not there is a collision, it can only return false.
```cpp
bool Line(CSimplex& a_Simplex, CVector3& a_Direction)
{
    CVector3 a = a_Simplex[0];
    CVector3 b = a_Simplex[1];

    CVector3 ab = b - a;
    CVector3 ao = a * -1;

    if (SameDirection(ab, ao))
    {
        a_Direction = ab.Cross(ao).Cross(ab);
    }
    else
    {
        a_Simplex = { a };
        a_Direction = ao;
    }
    return false;
}
```

The third point: a triangle

We start by looking at the area perpendicular to AC, then the one perpendicular to AB. If either this, or the area with the normal ABxABC dotted with AO, returns true, one of the points is discarded.

```cpp
bool Triangle(CSimplex& a_Simplex, CVector3& a_Direction)
{
    CVector3 a = a_Simplex[0];
    CVector3 b = a_Simplex[1];
    CVector3 c = a_Simplex[2];

    CVector3 ab = b - a;
    CVector3 ac = c - a;
    CVector3 ao = a * -1;
    CVector3 abc = ab.Cross(ac);

    if (SameDirection(abc.Cross(ac), ao))
    {
        if (SameDirection(ac, ao))
        {
            a_Simplex = { a, c };
            a_Direction = ac.Cross(ao).Cross(ac);
        }
        else
        {
            a_Simplex = { a, b };
            return Line(a_Simplex, a_Direction);
        }
    }
    else
    {
        if (SameDirection(ab.Cross(abc), ao))
        {
            a_Simplex = { a, b };
            return Line(a_Simplex, a_Direction);
        }
        else
        {
            if (SameDirection(abc, ao))
            {
                a_Direction = abc;
            }
            else
            {
                a_Simplex = { a, c, b };
                a_Direction = abc * -1;
            }
        }
    }
    return false;
}
```
The fourth point: a tetrahedron

The tetrahedron is the only function that can return true, and therefore determine a positive collision. We are checking 3 sides of the shape (one we excluded already) and if all tests pass, we have a collision.

```cpp
bool Tetrahedron(CSimplex& a_Simplex, CVector3& a_Direction)
{
    CVector3 a = a_Simplex[0];
    CVector3 b = a_Simplex[1];
    CVector3 c = a_Simplex[2];
    CVector3 d = a_Simplex[3];

    CVector3 ab = b - a;
    CVector3 ac = c - a;
    CVector3 ad = d - a;
    CVector3 ao = a * -1;

    CVector3 abc = ab.Cross(ac);
    CVector3 acd = ac.Cross(ad);
    CVector3 adb = ad.Cross(ab);

    if (SameDirection(abc, ao))
    {
        a_Simplex = { a, b, c };
        return Triangle(a_Simplex, a_Direction);
    }
    if (SameDirection(acd, ao))
    {
        a_Simplex = { a, c, d };
        return Triangle(a_Simplex, a_Direction);
    }
    if (SameDirection(adb, ao))
    {
        a_Simplex = { a, d, b };
        return Triangle(a_Simplex, a_Direction);
    }
    return true;
}
```

Support functions

In the terminology section I mentioned the support functions for a sphere and a box. Now let's look at some other shapes.

Convex hulls

Maybe the simplest one is the convex hull. It's a polytope that consists of an arbitrary number of vertices that form a convex shape. A box is also a convex hull, but it has its own equation to speed up the process. The generalized form is dotting all vertices with the direction vector: the one with the largest dot product is the support point. There are some ways to speed this up, but I will not go into that here.

Cones and Cylinders

These are rather complicated and outside the scope of this article. I'll explain them as simply as I can in the images below.

Local to world space

You might've noticed that all these support functions work in local space, while we are working in global space. Therefore we need to pass the direction vector in local space and convert the result back to world space. In code, the conversion looks like this:

```cpp
CVector3 support = object.getRotation()
    * Support(object.getRotation().Inverse() * direction)
    + object.getPosition();
```

And that's it! You can see the full algorithm on my Github page. In the next article, we will discuss the contact normal, penetration depth and incremental manifolds.
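As a closing illustration, the sphere and box support functions mentioned in the terminology section can be sketched as follows. This is hypothetical Python, not the Cinkes C++ API, and the function names are mine:

```python
import math

# Sphere: the farthest point along d is the center pushed out
# by the radius in the (normalized) direction d.
def sphere_support(center, radius, d):
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c + radius * di / length for c, di in zip(center, d))

# Axis-aligned box: pick the corner whose sign pattern matches d,
# which maximizes the dot product component by component.
def box_support(center, half_extents, d):
    return tuple(c + (h if di >= 0 else -h)
                 for c, h, di in zip(center, half_extents, d))
```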
What is the equation of the parabola that has a vertex at (77, 7) and passes through point (82, 32)? | HIX Tutor

Answer 1

$y = (x - 77)^2 + 7$

The vertex form of a parabola is $y = a(x - h)^2 + k$, where the vertex is $(h, k)$. Since the vertex is at $(77, 7)$, $h = 77$ and $k = 7$. We can rewrite the equation as $y = a(x - 77)^2 + 7$.

However, we still need to find $a$. To do this, substitute the given point $(82, 32)$ in for the $x$- and $y$-values. Now, solve for $a$:

$32 = a(82 - 77)^2 + 7$
$32 = a(5)^2 + 7$
$32 = 25a + 7$
$25 = 25a$
$a = 1$

The final equation is $y = 1(x - 77)^2 + 7$, or $y = (x - 77)^2 + 7$.

Answer 2

The equation of the parabola with a vertex at (77, 7) and passing through point (82, 32) starts from the vertex form:

$y = a(x - h)^2 + k$

where $(h, k)$ is the vertex. Substitute the vertex coordinates:

$y = a(x - 77)^2 + 7$

To find the value of $a$, use the point (82, 32):

$32 = a(82 - 77)^2 + 7$

Solve for $a$:

$32 = a(5)^2 + 7$
$32 = 25a + 7$
$25a = 32 - 7$
$25a = 25$
$a = 1$

So, the equation of the parabola is:

$y = (x - 77)^2 + 7$
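Both answers are quick to verify numerically; here is a short Python check restating the two conditions (the variable names are mine):

```python
# Vertex form with the solved coefficient a = 1.
f = lambda x: (x - 77) ** 2 + 7

# The vertex condition and the given point both hold:
assert f(77) == 7
assert f(82) == 32

# Solving 32 = a * (82 - 77)^2 + 7 for a directly:
a = (32 - 7) / (82 - 77) ** 2
assert a == 1.0
```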
The Physicalization of Metamathematics and Its Implications for the Foundations of Mathematics—Stephen Wolfram Writings 1 | Mathematics and Physics Have the Same Foundations One of the many surprising (and to me, unexpected) implications of our Physics Project is its suggestion of a very deep correspondence between the foundations of physics and mathematics. We might have imagined that physics would have certain laws, and mathematics would have certain theories, and that while they might be historically related, there wouldn’t be any fundamental formal correspondence between them. But what our Physics Project suggests is that underneath everything we physically experience there is a single very general abstract structure—that we call the ruliad—and that our physical laws arise in an inexorable way from the particular samples we take of this structure. We can think of the ruliad as the entangled limit of all possible computations—or in effect a representation of all possible formal processes. And this then leads us to the idea that perhaps the ruliad might underlie not only physics but also mathematics—and that everything in mathematics, like everything in physics, might just be the result of sampling the ruliad. Of course, mathematics as it’s normally practiced doesn’t look the same as physics. But the idea is that they can both be seen as views of the same underlying structure. What makes them different is that physical and mathematical observers sample this structure in somewhat different ways. But since in the end both kinds of observers are associated with human experience they inevitably have certain core characteristics in common. And the result is that there should be “fundamental laws of mathematics” that in some sense mirror the perceived laws of physics that we derive from our physical observation of the ruliad. So what might those fundamental laws of mathematics be like? 
And how might they inform our conception of the foundations of mathematics, and our view of what mathematics really is? The most obvious manifestation of the mathematics that we humans have developed over the course of many centuries is the few million mathematical theorems that have been published in the literature of mathematics. But what can be said in generality about this thing we call mathematics? Is there some notion of what mathematics is like “in bulk”? And what might we be able to say, for example, about the structure of mathematics in the limit of infinite future development? When we do physics, the traditional approach has been to start from our basic sensory experience of the physical world, and of concepts like space, time and motion—and then to try to formalize our descriptions of these things, and build on these formalizations. And in its early development—for example by Euclid—mathematics took the same basic approach. But beginning a little more than a century ago there emerged the idea that one could build mathematics purely from formal axioms, without necessarily any reference to what is accessible to sensory experience. And in a way our Physics Project begins from a similar place. Because at the outset it just considers purely abstract structures and abstract rules—typically described in terms of hypergraph rewriting—and then tries to deduce their consequences. Many of these consequences are incredibly complicated, and full of computational irreducibility. But the remarkable discovery is that when sampled by observers with certain general characteristics that make them like us, the behavior that emerges must generically have regularities that we can recognize, and in fact must follow exactly known core laws of physics. And already this begins to suggest a new perspective to apply to the foundations of mathematics. But there’s another piece, and that’s the idea of the ruliad. 
We might have supposed that our universe is based on some particular chosen underlying rule, like an axiom system we might choose in mathematics. But the concept of the ruliad is in effect to represent the entangled result of “running all possible rules”. And the key point is then that it turns out that an “observer like us” sampling the ruliad must perceive behavior that corresponds to known laws of physics. In other words, without “making any choice” it’s inevitable—given what we’re like as observers—that our “experience of the ruliad” will show fundamental laws of physics. But now we can make a bridge to mathematics. Because in embodying all possible computational processes the ruliad also necessarily embodies the consequences of all possible axiom systems. As humans doing physics we’re effectively taking a certain sampling of the ruliad. And we realize that as humans doing mathematics we’re also doing essentially the same kind of thing. But will we see “general laws of mathematics” in the same kind of way that we see “general laws of physics”? It depends on what we’re like as “mathematical observers”. In physics, there turn out to be general laws—and concepts like space and motion—that we humans can assimilate. And in the abstract it might not be that anything similar would be true in mathematics. But it seems as if the thing mathematicians typically call mathematics is something for which it is—and where (usually in the end leveraging our experience of physics) it’s possible to successfully carve out a sampling of the ruliad that’s again one we humans can assimilate. When we think about physics we have the idea that there’s an actual physical reality that exists—and that we experience physics within this. But in the formal axiomatic view of mathematics, things are different. There’s no obvious “underlying reality” there; instead there’s just a certain choice we make of axiom system. But now, with the concept of the ruliad, the story is different. 
Because now we have the idea that “deep underneath” both physics and mathematics there’s the same thing: the ruliad. And that means that insofar as physics is “grounded in reality”, so also must mathematics be. When most working mathematicians do mathematics it seems to be typical for them to reason as if the constructs they’re dealing with (whether they be numbers or sets or whatever) are “real things”. But usually there’s a concept that in principle one could “drill down” and formalize everything in terms of some axiom system. And indeed if one wants to get a global view of mathematics and its structure as it is today, it seems as if the best approach is to work from the formalization that’s been done with axiom systems. In starting from the ruliad and the ideas of our Physics Project we’re in effect positing a certain “theory of mathematics”. And to validate this theory we need to study the “phenomena of mathematics”. And, yes, we could do this in effect by directly “reading the whole literature of mathematics”. But it’s more efficient to start from what’s in a sense the “current prevailing underlying theory of mathematics” and to begin by building on the methods of formalized mathematics and axiom systems. Over the past century a certain amount of metamathematics has been done by looking at the general properties of these methods. But most often when the methods are systematically used today, it’s to set up some particular mathematical derivation, normally with the aid of a computer. But here what we want to do is think about what happens if the methods are used “in bulk”. Underneath there may be all sorts of specific detailed formal derivations being done. But somehow what emerges from this is something higher level, something “more human”—and ultimately something that corresponds to our experience of pure mathematics. How might this work? We can get an idea from an analogy in physics. Imagine we have a gas.
Underneath, it consists of zillions of molecules bouncing around in detailed and complicated patterns. But most of our “human” experience of the gas is at a much more coarse-grained level—where we perceive not the detailed motions of individual molecules, but instead continuum fluid mechanics. And so it is, I think, with mathematics. All those detailed formal derivations—for example of the kind automated theorem proving might do—are like molecular dynamics. But most of our “human experience of mathematics”—where we talk about concepts like integers or morphisms—is like fluid dynamics. The molecular dynamics is what builds up the fluid, but for most questions of “human interest” it’s possible to “reason at the fluid dynamics level”, without dropping down to molecular dynamics. It’s certainly not obvious that this would be possible. It could be that one might start off describing things at a “fluid dynamics” level—say in the case of an actual fluid talking about the motion of vortices—but that everything would quickly get “shredded”, and that there’d soon be nothing like a vortex to be seen, only elaborate patterns of detailed microscopic molecular motions. And similarly in mathematics one might imagine that one would be able to prove theorems in terms of things like real numbers but actually find that everything gets “shredded” to the point where one has to start talking about elaborate issues of mathematical logic and different possible axiomatic foundations. But in physics we effectively have the Second Law of thermodynamics—which we now understand in terms of computational irreducibility—that tells us that there’s a robust sense in which the microscopic details are systematically “washed out” so that things like fluid dynamics “work”. Just sometimes—like in studying Brownian motion, or hypersonic flow—the molecular dynamics level still “shines through”. But for most “human purposes” we can describe fluids just using ordinary fluid dynamics. 
So what’s the analog of this in mathematics? Presumably it’s that there’s some kind of “general law of mathematics” that explains why one can so often do mathematics “purely in the large”. Just like in fluid mechanics there can be “corner-case” questions that probe down to the “molecular scale”—and indeed that’s where we can expect to see things like undecidability, as a rough analog of situations where we end up tracing the potentially infinite paths of single molecules rather than just looking at “overall fluid effects”. But somehow in most cases there’s some much stronger phenomenon at work—that effectively aggregates low-level details to allow the kind of “bulk description” that ends up being the essence of what we normally in practice call mathematics. But is such a phenomenon something formally inevitable, or does it somehow depend on us humans “being in the loop”? In the case of the Second Law it’s crucial that we only get to track coarse-grained features of a gas—as we humans with our current technology typically do. Because if instead we watched and decoded what every individual molecule does, we wouldn’t end up identifying anything like the usual bulk “Second-Law” behavior. In other words, the emergence of the Second Law is in effect a direct consequence of the fact that it’s us humans—with our limitations on measurement and computation—who are observing the gas. So is something similar happening with mathematics? At the underlying “molecular level” there’s a lot going on. But the way we humans think about things, we’re effectively taking just particular kinds of samples. And those samples turn out to give us “general laws of mathematics” that give us our usual experience of “human-level mathematics”. To ultimately ground this we have to go down to the fully abstract level of the ruliad, but we’ll already see many core effects by looking at mathematics essentially just at a traditional “axiomatic level”, albeit “in bulk”. 
The full story—and the full correspondence between physics and mathematics—requires in a sense “going below” the level at which we have recognizable formal axiomatic mathematical structures; it requires going to a level at which we’re just talking about making everything out of completely abstract elements, which in physics we might interpret as “atoms of space” and in mathematics as some kind of “symbolic raw material” below variables and operators and everything else familiar in traditional axiomatic mathematics. The deep correspondence we’re describing between physics and mathematics might make one wonder to what extent the methods we use in physics can be applied to mathematics, and vice versa. In axiomatic mathematics the emphasis tends to be on looking at particular theorems and seeing how they can be knitted together with proofs. And one could certainly imagine an analogous “axiomatic physics” in which one does particular experiments, then sees how they can “deductively” be knitted together. But our impression that there’s an “actual reality” to physics makes us seek broader laws. And the correspondence between physics and mathematics implied by the ruliad now suggests that we should be doing this in mathematics as well. What will we find? Some of it in essence just confirms impressions that working pure mathematicians already have. But it provides a definite framework for understanding these impressions and for seeing what their limits may be. It also lets us address questions like why undecidability is so comparatively rare in practical pure mathematics, and why it is so common to discover remarkable correspondences between apparently quite different areas of mathematics. And beyond that, it suggests a host of new questions and approaches both to mathematics and metamathematics—that help frame the foundations of the remarkable intellectual edifice that we call mathematics. 
2 | The Underlying Structure of Mathematics and Physics If we “drill down” to what we’ve called above the “molecular level” of mathematics, what will we find there? There are many technical details (some of which we’ll discuss later) about the historical conventions of mathematics and its presentation. But in broad outline we can think of there as being a kind of “gas” of “mathematical statements”—like 1 + 1 = 2 or x + y = y + x—represented in some specified symbolic language. (And, yes, Wolfram Language provides a well-developed example of what that language can be like.) But how does the “gas of statements” behave? The essential point is that new statements are derived from existing ones by “interactions” that implement laws of inference (like that q can be derived from the statement p and the statement “p implies q”). And if we trace the paths by which one statement can be derived from others, these correspond to proofs. And the whole graph of all these derivations is then a representation of the possible historical development of mathematics—with slices through this graph corresponding to the sets of statements reached at a given stage. By talking about things like a “gas of statements” we’re making this sound a bit like physics. But while in physics a gas consists of actual, physical molecules, in mathematics our statements are just abstract things. But this is where the discoveries of our Physics Project start to be important. Because in our project we’re “drilling down” beneath for example the usual notions of space and time to an “ultimate machine code” for the physical universe. And we can think of that ultimate machine code as operating on things that are in effect just abstract constructs—very much like in mathematics. In particular, we imagine that space and everything in it is made up of a giant network (hypergraph) of “atoms of space”—with each “atom of space” just being an abstract element that has certain relations with other elements.
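Returning to the “gas of statements”: the picture of statements interacting through laws of inference is simple enough to sketch in a few lines of Python. This is purely illustrative (the encoding of statements as strings and tuples is ours, not the actual symbolic setup); it just shows modus ponens acting as the “interaction” that derives new statements from existing ones.

```python
# Toy "gas of statements": statements are encoded as strings or tuples
# (our own illustrative encoding), and the single law of inference is
# modus ponens: from p and ("implies", p, q), derive q.

def modus_ponens_closure(statements):
    """Repeatedly apply modus ponens until no new statements appear."""
    known = set(statements)
    changed = True
    while changed:
        changed = False
        for s in list(known):
            if isinstance(s, tuple) and s[0] == "implies" and s[1] in known:
                if s[2] not in known:
                    known.add(s[2])
                    changed = True
    return known

axioms = {"p", ("implies", "p", "q"), ("implies", "q", "r")}
derived = modus_ponens_closure(axioms)
# "q" and "r" both get derived; the chains of applications are the proofs.
print("q" in derived, "r" in derived)  # True True
```

Tracing which application produced which statement would recover exactly the derivation graph described above, with proofs as paths through it.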
The evolution of the universe in time then corresponds to the application of computational rules that (much like laws of inference) take abstract relations and yield new relations—thereby progressively updating the network that represents space and everything in it. But while the individual rules may be very simple, the whole detailed pattern of behavior to which they lead is normally very complicated—and typically shows computational irreducibility, so that there’s no way to systematically find its outcome except in effect by explicitly tracing each step. But despite all this underlying complexity it turns out—much like in the case of an ordinary gas—that at a coarse-grained level there are much simpler (“bulk”) laws of behavior that one can identify. And the remarkable thing is that these turn out to be exactly general relativity and quantum mechanics (which, yes, end up being the same theory when looked at in terms of an appropriate generalization of the notion of space). But down at the lowest level, is there some specific computational rule that’s “running the universe”? I don’t think so. Instead, I think that in effect all possible rules are always being applied. And the result is the ruliad: the entangled structure associated with performing all possible computations. But what then gives us our experience of the universe and of physics? Inevitably we are observers embedded within the ruliad, sampling only certain features of it. But what features we sample are determined by the characteristics of us as observers. And what seem to be critical to have “observers like us” are basically two characteristics. First, that we are computationally bounded. 
And second, that we somehow persistently maintain our coherence—in the sense that we can consistently identify what constitutes “us” even though the detailed atoms of space involved are continually changing. But we can think of different “observers like us” as taking different specific samples, corresponding to different reference frames in rulial space, or just different positions in rulial space. These different observers may describe the universe as evolving according to different specific underlying rules. But the crucial point is that the general structure of the ruliad implies that so long as the observers are “like us”, it’s inevitable that their perception of the universe will be that it follows things like general relativity and quantum mechanics. It’s very much like what happens with a gas of molecules: to an “observer like us” there are the same gas laws and the same laws of fluid dynamics essentially independent of the detailed structure of the individual molecules. So what does all this mean for mathematics? The crucial and at first surprising point is that the ideas we’re describing in physics can in effect immediately be carried over to mathematics. And the key is that the ruliad represents not only all physics, but also all mathematics—and it shows that these are not just related, but in some sense fundamentally the same. In the traditional formulation of axiomatic mathematics, one talks about deriving results from particular axiom systems—say Peano Arithmetic, or ZFC set theory, or the axioms of Euclidean geometry. But the ruliad in effect represents the entangled consequences not just of specific axiom systems but of all possible axiom systems (as well as all possible laws of inference). But from this structure that in a sense corresponds to all possible mathematics, how do we pick out any particular mathematics that we’re interested in?
The answer is that just as we are limited observers of the physical universe, so we are also limited observers of the “mathematical universe”. But what are we like as “mathematical observers”? As I’ll argue in more detail later, we inherit our core characteristics from those we exhibit as “physical observers”. And that means that when we “do mathematics” we’re effectively sampling the ruliad in much the same way as when we “do physics”. We can operate in different rulial reference frames, or at different locations in rulial space, and these will correspond to picking out different underlying “rules of mathematics”, or essentially using different axiom systems. But now we can make use of the correspondence with physics to say that we can also expect there to be certain “overall laws of mathematics” that are the result of general features of the ruliad as perceived by observers like us. And indeed we can expect that in some formal sense these overall laws will have exactly the same structure as those in physics—so that in effect in mathematics we’ll have something like the notion of space that we have in physics, as well as formal analogs of things like general relativity and quantum mechanics. What does this mean? It implies that—just as it’s possible to have coherent “higher-level descriptions” in physics that don’t just operate down at the level of atoms of space, so also this should be possible in mathematics. And this in a sense is why we can expect to consistently do what I described above as “human-level mathematics”, without usually having to drop down to the “molecular level” of specific axiomatic structures (or below). Say we’re talking about the Pythagorean theorem. Given some particular detailed axiom system for mathematics we can imagine using it to build up a precise—if potentially very long and pedantic—representation of the theorem. But let’s say we change some detail of our axioms, say associated with the way they talk about sets, or real numbers. 
We’ll almost certainly still be able to build up something we consider to be “the Pythagorean theorem”—even though the details of the representation will be different. In other words, this thing that we as humans would call “the Pythagorean theorem” is not just a single point in the ruliad, but a whole cloud of points. And now the question is: what happens if we try to derive other results from the Pythagorean theorem? It might be that each particular representation of the theorem—corresponding to each point in the cloud—would lead to quite different results. But it could also be that essentially the whole cloud would coherently lead to the same results. And the claim from the correspondence with physics is that there should be “general laws of mathematics” that apply to “observers like us” and that ensure that there’ll be coherence between all the different specific representations associated with the cloud that we identify as “the Pythagorean theorem”. In physics it could have been that we’d always have to separately say what happens to every atom of space. But we know that there’s a coherent higher-level description of space—in which for example we can just imagine that objects can move while somehow maintaining their identity. And we can now expect that it’s the same kind of thing in mathematics: that just as there’s a coherent notion of space in physics where things can for example move without being “shredded”, so also this will happen in mathematics. And this is why it’s possible to do “higher-level mathematics” without always dropping down to the lowest level of axiomatic derivations. It’s worth pointing out that even in physical space a concept like “pure motion” in which objects can move while maintaining their identity doesn’t always work. For example, close to a spacetime singularity, one can expect to eventually be forced to see through to the discrete structure of space—and for any “object” to inevitably be “shredded”. 
But most of the time it’s possible for observers like us to maintain the idea that there are coherent large-scale features whose behavior we can study using “bulk” laws of physics. And we can expect the same kind of thing to happen with mathematics. Later on, we’ll discuss more specific correspondences between phenomena in physics and mathematics—and we’ll see the effects of things like general relativity and quantum mechanics in mathematics, or, more precisely, in metamathematics. But for now, the key point is that we can think of mathematics as somehow being made of exactly the same stuff as physics: they’re both just features of the ruliad, as sampled by observers like us. And in what follows we’ll see the great power that arises from using this to combine the achievements and intuitions of physics and mathematics—and how this lets us think about new “general laws of mathematics”, and view the ultimate foundations of mathematics in a different light. 3 | The Metamodeling of Axiomatic Mathematics Consider all the mathematical statements that have appeared in mathematical books and papers. We can view these in some sense as the “observed phenomena” of (human) mathematics. And if we’re going to make a “general theory of mathematics” a first step is to do something like we’d typically do in natural science, and try to “drill down” to find a uniform underlying model—or at least representation—for all of them. At the outset, it might not be clear what sort of representation could possibly capture all those different mathematical statements. 
But what’s emerged over the past century or so—with particular clarity in Mathematica and the Wolfram Language—is that there is in fact a rather simple and general representation that works remarkably well: a representation in which everything is a symbolic expression. One can view a symbolic expression such as f[g[x][y, h[z]], w] as a hierarchical or tree structure, in which at every level some particular “head” (like f) is “applied to” one or more arguments. Often in practice one deals with expressions in which the heads have “known meanings”—as in Times[Plus[2, 3], 4] in Wolfram Language. And with this kind of setup symbolic expressions are reminiscent of human natural language, with the heads basically corresponding to “known words” in the language. And presumably it’s this familiarity from human natural language that’s caused “human natural mathematics” to develop in a way that can so readily be represented by symbolic expressions. But in typical mathematics there’s an important wrinkle. One often wants to make statements not just about particular things but about whole classes of things. And it’s common to then just declare that some of the “symbols” (like, say, x) that appear in an expression are “variables”, while others (like, say, Plus) are not. But in our effort to capture the essence of mathematics as uniformly as possible it seems much better to burn the idea of an object representing a whole class of things right into the structure of the symbolic expression. And indeed this is a core idea in the Wolfram Language, where something like x or f is just a “symbol that stands for itself”, while x_ is a pattern (named x) that can stand for anything. (More precisely, _ on its own is what stands for “anything”, and x_—which can also be written x:_—just says that whatever _ stands for in a particular instance will be called x.)
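To make the idea of expressions-as-trees and patterns concrete, here is a minimal Python sketch of structural pattern matching, loosely modeled on Wolfram Language’s x_ patterns. The tuple encoding and function names are our own illustrative choices, not the actual Wolfram Language internals.

```python
# Expressions are nested tuples like ("f", "a", ("g", "b")), standing for
# f[a, g[b]]; a pattern variable x_ is encoded as ("var", "x"). The same
# variable must match the same thing everywhere, as in Wolfram Language.

def match(pattern, expr, bindings=None):
    """Try to match expr against pattern; returns a bindings dict or None."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, tuple) and pattern[0] == "var":
        name = pattern[1]
        if name in bindings:                      # variable seen before:
            return bindings if bindings[name] == expr else None
        b = dict(bindings)
        b[name] = expr                            # first occurrence: bind it
        return b
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        if len(pattern) != len(expr):
            return None
        for p, e in zip(pattern, expr):           # match head and arguments
            bindings = match(p, e, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == expr else None  # literal symbols must agree

# f[x_, y_] matched against f[a, g[b]]
pat = ("f", ("var", "x"), ("var", "y"))
expr = ("f", "a", ("g", "b"))
print(match(pat, expr))  # {'x': 'a', 'y': ('g', 'b')}
```

Note that a pattern like f[x_, x_] fails against f[a, b], since the two x_’s in one rule must stand for the same expression, which is exactly the point taken up in the discussion of generated variables later on.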
Then with this notation an example of a “mathematical statement” might be: In more explicit form we could write this as Equal[f[x_, y_], f[f[y_, x_],y_]]—where Equal () has the “known meaning” of representing equality. But what can we do with this statement? At a “mathematical level” the statement asserts that and should be considered equivalent. But thinking in terms of symbolic expressions there’s now a more explicit, lower-level, “structural” interpretation: that any expression whose structure matches can equivalently be replaced by (or, in Wolfram Language notation, just (y ∘ x) ∘ y) and vice versa. We can indicate this interpretation using the notation which can be viewed as a shorthand for the pair of Wolfram Language rules: OK, so let’s say we have the expression . Now we can just apply the rules defined by our statement. Here’s what happens if we do this just once in all possible ways: And here we see, for example, that can be transformed to . Continuing this we build up a whole multiway graph. After just one more step we get: Continuing for a few more steps we then get or in a different rendering: But what does this graph mean? Essentially it gives us a map of equivalences between expressions—with any pair of expressions that are connected being equivalent. So, for example, it turns out that the expressions and are equivalent, and we can “prove this” by exhibiting a path between them in the graph: The steps on the path can then be viewed as steps in the proof, where here at each step we’ve indicated where the transformation in the expression took place: In mathematical terms, we can then say that starting from the “axiom” we were able to prove a certain equivalence theorem between two expressions. We gave a particular proof. But there are others, for example the “less efficient” 35-step one corresponding to the path: For our later purposes it’s worth talking in a little bit more detail here about how the steps in these proofs actually proceed. 
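The construction just described (applying the axiom in all possible ways and searching the resulting multiway graph for a proof path) can also be sketched in Python. This toy version hardcodes the single axiom f[x_, y_] ↔ f[f[y_, x_], y_], caps expression size to keep the search finite, and uses breadth-first search; it is a sketch of the idea, not the machinery used for the figures above.

```python
# Toy multiway graph for the axiom f[x_,y_] <-> f[f[y_,x_],y_], applied in
# both directions at every position; a proof of an equivalence is a path
# in this graph, found here by breadth-first search.
from collections import deque

def rewrites(expr):
    """All expressions reachable by one application of the axiom."""
    out = set()
    if isinstance(expr, tuple) and expr[0] == "f":
        _, a, b = expr
        out.add(("f", ("f", b, a), b))                 # forward at the root
        if isinstance(a, tuple) and a[0] == "f" and a[1] == b:
            out.add(("f", a[2], b))                    # backward at the root
        out |= {("f", a2, b) for a2 in rewrites(a)}    # recurse into arguments
        out |= {("f", a, b2) for b2 in rewrites(b)}
    return out

def size(expr):
    return 1 if not isinstance(expr, tuple) else sum(size(x) for x in expr[1:])

def proof_path(start, goal, max_size=9):
    """Shortest proof path, exploring only expressions up to max_size leaves."""
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        e = frontier.popleft()
        if e == goal:
            path = []
            while e is not None:
                path.append(e)
                e = parent[e]
            return path[::-1]
        for n in rewrites(e):
            if n not in parent and size(n) <= max_size:
                parent[n] = e
                frontier.append(n)
    return None

start = ("f", "a", "b")
goal = ("f", ("f", "b", "a"), "b")
path = proof_path(start, goal)
print(len(path) - 1)  # 1 proof step: one forward application of the axiom
```

The size cap is what makes the search decidable here; as the text goes on to explain, in general intermediate expressions can grow without bound, so no fixed cap is guaranteed to suffice.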
Consider the expression: We can think of this as a tree: Our axiom can then be represented as: In terms of trees, our first proof becomes where we’re indicating at each step which piece of tree gets “substituted for” using the axiom. What we’ve done so far is to generate a multiway graph for a certain number of steps, and then to see if we can find a “proof path” in it for some particular statement. But what if we are given a statement, and asked whether it can be proved within the specified axiom system? In effect this asks whether if we make a sufficiently large multiway graph we can find a path of any length that corresponds to the statement. If our system was computationally reducible we could expect always to be able to find a finite answer to this question. But in general—with the Principle of Computational Equivalence and the ubiquitous presence of computational irreducibility—it’ll be common that there is no fundamentally better way to determine whether a path exists than effectively to try explicitly generating it. If we knew, for example, that the intermediate expressions generated always remained of bounded length, then this would still be a bounded problem. But in general the expressions can grow to any size—with the result that there is no general upper bound on the length of path necessary to prove even a statement about equivalence between small expressions. For example, for the axiom we are using here, we can look at statements of the form . Then this shows how many expressions expr of what sizes have shortest proofs of with progressively greater lengths. And for example if we look at the statement its shortest proof is where, as is often the case, there are intermediate expressions that are longer than the final result. 4 | Some Simple Examples with Mathematical Interpretations The multiway graphs in the previous section are in a sense fundamentally metamathematical. Their “raw material” is mathematical statements.
But what they represent are the results of operations—like substitution—that are defined at a kind of meta level, that “talks about mathematics” but isn’t itself immediately “representable as mathematics”. But to help understand this relationship it’s useful to look at simple cases where it’s possible to make at least some kind of correspondence with familiar mathematical concepts. Consider for example the axiom that we can think of as representing commutativity of the binary operator ∘. Now consider using substitution to “apply this axiom”, say starting from the expression . The result is the (finite) multiway graph: Conflating the pairs of edges going in opposite directions, the resulting graphs starting from any expression involving s ∘’s (and distinct variables) are: And these are just the Boolean hypercubes, each with 2^s nodes. If instead of commutativity we consider the associativity axiom then we get a simple “ring” multiway graph: With both associativity and commutativity we get: What is the mathematical significance of this object? We can think of our axioms as being the general axioms for a commutative semigroup. And if we build a multiway graph—say starting with —we’ll find out what expressions are equivalent to in any commutative semigroup—or, in other words, we’ll get a collection of theorems that are “true for any commutative semigroup”: But what if we want to deal with a “specific semigroup” rather than a generic one? We can think of our symbols a and b as generators of the semigroup, and then we can add relations, as in: And the result of this will be that we get more equivalences between expressions: The multiway graph here is still finite, however, giving a finite number of equivalences.
But let’s say instead that we add the relations: Then if we start from a we get a multiway graph that begins like but just keeps growing forever (here shown after 6 steps): And what this then means is that there are an infinite number of equivalences between expressions. We can think of our basic symbols and as being generators of our semigroup. Then our expressions correspond to “words” in the semigroup formed from these generators. The fact that the multiway graph is infinite then tells us that there are an infinite number of equivalences between words. But when we think about the semigroup mathematically we’re typically not so interested in specific words as in the overall “distinct elements” in the semigroup, or in other words, in those “clusters of words” that don’t have equivalences between them. And to find these we can imagine starting with all possible expressions, then building up multiway graphs from them. Many of the graphs grown from different expressions will join up. But what we want to know in the end is how many disconnected graph components are ultimately formed. And each of these will correspond to an element of the semigroup. As a simple example, let’s start from all words of length 2: The multiway graphs formed from each of these after 1 step are: But these graphs in effect “overlap”, leaving three disconnected components: After 2 steps the corresponding result has two components: And if we start with longer (or shorter) words, and run for more steps, we’ll keep finding the same result: that there are just two disconnected “droplets” that “condense out” of the “gas” of all possible initial words: And what this means is that our semigroup ultimately has just two distinct elements—each of which can be represented by any of the different (“equivalent”) words in each “droplet”. (In this particular case the droplets just contain respectively all words with an odd or even number of b’s.)
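A toy version of this “droplet” computation can be written directly in Python. Since the specific relations above are shown only graphically, the rewrite rules below are our own illustrative choice; they are picked so that every word reduces to a normal form determined by the parity of its b’s, giving exactly two droplets, matching the behavior just described.

```python
# Toy "droplet" computation: words in the generators a, b (written as
# strings, using associativity to flatten), merged according to rewrite
# relations. The relations here are our own illustrative choice: they
# preserve the parity of b's, so exactly two droplets survive.
from itertools import product

RULES = {"aa": "a", "bb": "a", "ab": "b", "ba": "b"}

def normal_form(word):
    """Apply the relations until the word stops shrinking."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES.items():
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

# All words of length 1..5, grouped into droplets by their normal form
words = ["".join(w) for n in range(1, 6) for w in product("ab", repeat=n)]
droplets = {}
for w in words:
    droplets.setdefault(normal_form(w), []).append(w)

print(sorted(droplets))      # ['a', 'b']: exactly two droplets survive
print(normal_form("babab"))  # odd number of b's, so it lands in 'b'
```

Each rule shortens a word by one symbol while preserving b-parity, so every word ends at the single-symbol representative of its droplet, which is the string analog of the multiway components merging into two pieces.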
In the mathematical analysis of semigroups (as well as groups), it’s common to ask what happens if one forms products of elements. In our setting what this means is in effect that one wants to “combine droplets using ∘”. The simplest words in our two droplets are respectively and . And we can use these as “representatives of the droplets”. Then we can see how multiplication by and by transforms words from each droplet: With only finite words the multiplications will sometimes not “have an immediate target” (so they are not indicated here). But in the limit of an infinite number of multiway steps, every multiplication will “have a target” and we’ll be able to summarize the effect of multiplication in our semigroup by the graph: More familiar as mathematical objects than semigroups are groups. And while their axioms are slightly more complicated, the basic setup we’ve discussed for semigroups also applies to groups. And indeed the graph we’ve just generated for our semigroup is very much like a standard Cayley graph that we might generate for a group—in which the nodes are elements of the group and the edges define how one gets from one element to another by multiplying by a generator. (One technical detail is that in Cayley graphs identity-element self-loops are normally dropped.) Consider the group (the “Klein four-group”). In our notation the axioms for this group can be written: Given these axioms we do the same construction as for the semigroup above. And what we find is that now four “droplets” emerge, corresponding to the four elements of and the pattern of connections between them in the limit yields exactly the Cayley graph for : We can view what’s happening here as a first example of something we’ll return to at length later: the idea of “parsing out” recognizable mathematical concepts (here things like elements of groups) from lower-level “purely metamathematical” structures.
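As a cross-check on the group case, here is a sketch that realizes the Klein four-group concretely as Z2 × Z2 with componentwise addition mod 2 (a standard model; the encoding is ours), verifies its defining property that every element is its own inverse, and builds Cayley-graph edges for a choice of two generators. The text derives the same Cayley graph purely from multiway “droplets”.

```python
# The Klein four-group as Z2 x Z2 with componentwise addition mod 2.
from itertools import product

elements = list(product([0, 1], repeat=2))  # (0, 0) is the identity

def mul(x, y):
    """Group operation: componentwise addition mod 2."""
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

identity = (0, 0)
# Klein four-group signature: abelian, every element its own inverse
assert all(mul(x, x) == identity for x in elements)
assert all(mul(x, y) == mul(y, x) for x in elements for y in elements)

# Cayley graph: an edge from x to x*g for each element x and generator g
generators = [(1, 0), (0, 1)]
cayley_edges = {(x, mul(x, g)) for x in elements for g in generators}
print(len(elements), len(cayley_edges))  # 4 8
```

With 4 elements and 2 generators there are 8 directed edges, and since the generators are not the identity there are no self-loops to drop.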
5 | Metamathematical Space In multiway graphs like those we’ve shown in previous sections we routinely generate very large numbers of “mathematical” expressions. But how are these expressions related to each other? And in some appropriate limit can we think of them all being embedded in some kind of “metamathematical space”? It turns out that this is the direct analog of what in our Physics Project we call branchial space, and what in that case defines a map of the entanglements between branches of quantum history. In the mathematical case, let’s say we have a multiway graph generated using the axiom: After a few steps starting from we have: Now—just as in our Physics Project—let’s form a branchial graph by looking at the final expressions here and connecting them if they are “entangled” in the sense that they share an ancestor on the previous step: There’s some trickiness here associated with loops in the multiway graph (which are the analog of closed timelike curves in physics) and what it means to define different “steps in evolution”. But just iterating once more the construction of the multiway graph, we get a branchial graph: After a couple more iterations the structure of the branchial graph is (with each node sized according to the size of expression it represents): Continuing another iteration, the structure becomes: And in essence this structure can indeed be thought of as defining a kind of “metamathematical space” in which the different expressions are embedded. But what is the “geography” of this space? This shows how expressions (drawn as trees) are laid out on a particular branchial graph, and we see that there is at least a general clustering of similar trees on the graph—indicating that “similar expressions” tend to be “nearby” in the metamathematical space defined by this axiom system. An important feature of branchial graphs is that effects are—essentially by construction—always local in the branchial graph.
For example, if one changes an expression at a particular step in the evolution of a multiway system, it can only affect a region of the branchial graph that essentially expands by one edge per step. One can think of the affected region—in analogy with a light cone in spacetime—as being the “entailment cone” of a particular expression. The edge of the entailment cone in effect expands at a certain “maximum metamathematical speed” in metamathematical (i.e. branchial) space—which one can think of as being measured in units of “expression change per multiway step”.

By analogy with physics one can start talking in general about motion in metamathematical space. A particular proof path in the multiway graph will progressively “move around” in the branchial graph that defines metamathematical space. (Yes, there are many subtle issues here, not least the fact that one has to imagine a certain kind of limit being taken so that the structure of the branchial graph is “stable enough” to “just be moving around” in something like a “fixed background space”.)

By the way, the shortest proof path in the multiway graph is the analog of a geodesic in spacetime. And later we’ll talk about how the “density of activity” in the branchial graph is the analog of energy in physics, and how it can be seen as “deflecting” the path of geodesics, just as gravity does in spacetime.

It’s worth mentioning just one further subtlety. Branchial graphs are in effect associated with “transverse slices” of the multiway graph—but there are many consistent ways to make these slices. In physics terms one can think of the foliations that define different choices of sequences of slices as being like “reference frames” in which one is specifying a sequence of “simultaneity surfaces” (here “branchtime hypersurfaces”).
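The branchial-graph construction described above can be sketched as follows, in Python, using the toy one-way string rules A → AB and A → BA as stand-ins (the text’s actual axiom is not reproduced here):

```python
from itertools import combinations

def successors(s):
    # toy one-way rules (assumed for illustration): A -> AB and A -> BA
    out = set()
    for i, c in enumerate(s):
        if c == "A":
            out.add(s[:i] + "AB" + s[i + 1:])
            out.add(s[:i] + "BA" + s[i + 1:])
    return out

def branchial_edges(frontier):
    # connect "entangled" expressions: those sharing an ancestor one step back
    edges = set()
    for parent in frontier:
        for u, v in combinations(sorted(successors(parent)), 2):
            edges.add((u, v))
    return edges

print(branchial_edges({"A"}))   # the single ancestor "A" entangles its children
```

Each parent on one slice contributes a clique among its children on the next, which is why branchial graphs tend to be quite dense.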
The particular branchial graphs we’ve shown here are ones associated with what in physics might be called the cosmological rest frame, in which every node is the result of the same number of updates since the beginning.

6 | The Issue of Generated Variables

A rule like defines transformations for any expressions and . So, for example, if we use the rule from left to right on the expression the “pattern variable” will be taken to be a while will be taken to be b ∘ a , and the result of applying the rule will be . But consider instead the case where our rule is: Applying this rule (from left to right) to we’ll now get . And applying the rule to we’ll get . But what should we make of those z_’s? And in particular, are they “the same”, or not?

A pattern variable like z_ can stand for any expression. But do two different z_’s have to stand for the same expression? In a rule like … we’re assuming that, yes, the two z_’s always stand for the same expression. But if the z_’s appear in different rules it’s a different story. Because in that case we’re dealing with two separate and unconnected z_’s—that can stand for completely different expressions.

To begin seeing how this works, let’s start with a very simple example. Consider the (for now, one-way) rule where is the literal symbol , and x_ is a pattern variable. Applying this to we might think we could just write the result as: Then if we apply the rule again both branches will give the same expression , so there’ll be a merge in the multiway graph: But is this really correct? Well, no. Because really those should be two different x_’s, that could stand for two different expressions. So how can we indicate this? One approach is just to give every “generated” x_ a new name: But this result isn’t really correct either. Because if we look at the second step we see the two expressions and . But what’s really the difference between these?
The names are arbitrary; the only constraint is that within any given expression they have to be different. But between expressions there’s no such constraint. And in fact and both represent exactly the same class of expressions: any expression of the form .

So in fact it’s not correct that there are two separate branches of the multiway system producing two separate expressions. Because those two branches produce equivalent expressions, which means they can be merged. And turning both equivalent expressions into the same canonical form we get:

It’s important to notice that this isn’t the same result as what we got when we assumed that every x_ was the same. Because then our final result was the expression which can match but not —whereas now the final result is which can match both and .

This may seem like a subtle issue. But it’s critically important in practice. Not least because generated variables are in effect what make up all “truly new stuff” that can be produced. With a rule like one’s essentially just taking whatever one started with, and successively rearranging the pieces of it. But with a rule like there’s something “truly new” generated every time z_ appears.

By the way, the basic issue of “generated variables” isn’t something specific to the particular symbolic expression setup we’ve been using here. For example, there’s a direct analog of it in the hypergraph rewriting systems that appear in our Physics Project. But in that case there’s a particularly clear interpretation: the analog of “generated variables” are new “atoms of space” produced by the application of rules. And far from being some kind of footnote, these “generated atoms of space” are what make up everything we have in our universe today.

The issue of generated variables—and especially their naming—is the bane of all sorts of formalism for mathematical logic and programming languages.
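The canonicalization step that makes such merges possible can be sketched concretely. Here expressions are modeled as nested tuples and pattern variables as strings with a trailing underscore (an illustrative encoding, not the one used in the text):

```python
def canonicalize(expr, mapping=None, names=None):
    # rename pattern variables (strings ending "_") to a_, b_, ... in
    # traversal order, so expressions standing for the same class coincide
    if mapping is None:
        mapping, names = {}, iter("abcdefghijklmnopqrstuvwxyz")
    if isinstance(expr, str):
        if expr.endswith("_"):
            if expr not in mapping:
                mapping[expr] = next(names) + "_"
            return mapping[expr]
        return expr
    return tuple(canonicalize(e, mapping, names) for e in expr)

print(canonicalize(("f", "x1_")), canonicalize(("f", "x2_")))
```

Two expressions that differ only in the arbitrary names of their variables, such as f[x1_] and f[x2_], canonicalize to the same form and can therefore be merged, while repeated occurrences of the same variable within one expression stay linked.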
As we’ll see later, it’s perfectly possible to “go to a lower level” and set things up with no names at all, for example using combinators. But without names, things tend to seem quite alien to us humans—and certainly if we want to understand the correspondence with standard presentations of mathematics it’s pretty necessary to have names. So at least for now we’ll keep names, and handle the issue of generated variables by uniquifying their names, and canonicalizing every time we have a complete expression.

Let’s look at another example to see the importance of how we handle generated variables. Consider the rule: If we start with a ∘ a and do no uniquification, we’ll get: With uniquification, but not canonicalization, we’ll get a pure tree: But with canonicalization this is reduced to:

A confusing feature of this particular example is that this same result would have been obtained just by canonicalizing the original “assume-all-x_’s-are-the-same” case. But things don’t always work this way. Consider the rather trivial rule starting from . If we don’t do uniquification, and don’t do canonicalization, we get: If we do uniquification (but not canonicalization), we get a pure tree: But if we now canonicalize this, we get: And this is now not the same as what we would get by canonicalizing, without uniquifying:

7 | Rules Applied to Rules

In what we’ve done so far, we’ve always talked about applying rules (like ) to expressions (like or ). But if everything is a symbolic expression there shouldn’t really need to be a distinction between “rules” and “ordinary expressions”. They’re all just expressions. And so we should just as well be able to apply rules to rules as to ordinary expressions. And indeed the concept of “applying rules to rules” is something that has a familiar analog in standard mathematics.
The “two-way rules” we’ve been using effectively define equivalences—which are very common kinds of statements in mathematics, though in mathematics they’re usually written with = rather than with ⟷. And indeed, many axioms and many theorems are specified as equivalences—and in equational logic one takes everything to be defined using equivalences. And when one’s dealing with theorems (or axioms) specified as equivalences, the basic way one derives new theorems is by applying one theorem to another—or in effect by applying rules to rules.

As a specific example, let’s say we have the “axiom”: We can now apply this to the rule to get (where since is equivalent to we’re sorting each two-way rule that arises) or after a few more steps: In this example all that’s happening is that the substitutions specified by the axiom are getting separately applied to the left- and right-hand sides of each rule that is generated.

But if we really take seriously the idea that everything is a symbolic expression, things can get a bit more complicated. Consider for example the rule: If we apply this to then if x_ “matches any expression” it can match the whole expression giving the result: Standard mathematics doesn’t have an obvious meaning for something like this—although as soon as one “goes metamathematical” it’s fine. But in an effort to maintain contact with standard mathematics we’ll for now have the “meta rule” that x_ can’t match an expression whose top-level operator is ⟷. (As we’ll discuss later, including such matches would allow us to do exotic things like encode set theory within arithmetic, which is again something usually considered to be “syntactically prevented” in mathematical logic.)

Another—still more obscure—meta rule we have is that x_ can’t “match inside a variable”. In Wolfram Language, for example, a_ has the full form Pattern[a,Blank[]], and one could imagine that x_ could match “internal pieces” of this.
But for now, we’re going to treat all variables as atomic—even though later on, when we “descend below the level of variables”, the story will be different.

When we apply a rule like to we’re taking a rule with pattern variables, and doing substitutions with it on a “literal expression” without pattern variables. But it’s also perfectly possible to apply pattern rules to pattern rules—and indeed that’s what we’ll mostly do below. But in this case there’s another subtle issue that can arise. Because if our rule generates variables, we can end up with two different kinds of variables with “arbitrary names”: generated variables, and pattern variables from the rule we’re operating on. And when we canonicalize the names of these variables, we can end up with identical expressions that we need to merge.

Here’s what happens if we apply the rule to the literal rule : If we apply it to the pattern rule but don’t do canonicalization, we’ll just get the same basic result: But if we canonicalize we get instead: The effect is more dramatic if we go to two steps. When operating on the literal rule we get: Operating on the pattern rule, but without canonicalization, we get while if we include canonicalization many rules merge and we get:

8 | Accumulative Evolution

We can think of “ordinary expressions” like as being like “data”, and rules as being like “code”. But when everything is a symbolic expression, it’s perfectly possible—as we saw above—to “treat code like data”, and in particular to generate rules as output. But this now raises a new possibility. When we “get a rule as output”, why not start “using it like code” and applying it to things?

In mathematics we might apply some theorem to prove a lemma, and then we might subsequently use that lemma to prove another theorem—eventually building up a whole “accumulative structure” of lemmas (or theorems) being used to prove other lemmas.
In any given proof we can in principle always just keep using the axioms over and over again—but it’ll be much more efficient to progressively build a library of more and more lemmas, and use these. And in general we’ll build up a richer structure by “accumulating lemmas” than always just going back to the axioms.

In the multiway graphs we’ve drawn so far, each edge represents the application of a rule, but that rule is always a fixed axiom. To represent accumulative evolution we need a slightly more elaborate structure—and it’ll be convenient to use token-event graphs rather than pure multiway graphs. Every time we apply a rule we can think of this as an event. And with the setup we’re describing, that event can be thought of as taking two tokens as input: one the “code rule” and the other the “data rule”. The output from the event is then some collection of rules, which can then serve as input (either “code” or “data”) to other events.

Let’s start with the very simple example of the rule where for now there are no patterns being used. Starting from this rule, we get the token-event graph (where now we’re indicating the initial “axiom” statement using a slightly different color): One subtlety here is that the rule is applied to itself—so there are two edges going into the event from the node representing the rule. Another subtlety is that there are two different ways the rule can be applied, with the result that there are two output rules generated.
Here’s another example, based on the two rules: Continuing for another step we get: Typically we will want to consider as “defining an equivalence”, so that means the same as , and can be conflated with it—yielding in this case:

Now let’s consider the rule: After one step we get: After 2 steps we get: The token-event graphs after 3 and 4 steps in this case are (where now we’ve deduplicated events):

Let’s now consider a rule with the same structure, but with pattern variables instead of literal symbols: Here’s what happens after one step (note that there’s canonicalization going on, so a_’s in different rules aren’t “the same”) and we see that there are different theorems from the ones we got without patterns. After 2 steps with the pattern rule we get where now the complete set of “theorems that have been derived” is (dropping the _’s for readability) or as trees: After another step one gets where now there are 2860 “theorems”, roughly exponentially distributed across sizes according to and with a typical “size-19” theorem being:

In effect we can think of our original rule (or “axiom”) as having initiated some kind of “mathematical Big Bang” from which an increasing number of theorems are generated. Early on we described having a “gas” of mathematical theorems that—a little like molecules—can interact and create new theorems. So now we can view our accumulative evolution process as a concrete example of this.

Let’s consider the rule from previous sections: After one step of accumulative evolution according to this rule we get: After 2 and 3 steps the results are: What is the significance of all this complexity? At a basic level, it’s just an example of the ubiquitous phenomenon in the computational universe (captured in the Principle of Computational Equivalence) that even systems with very simple rules can generate behavior as complex as anything.
But the question is whether—on top of all this complexity—there are simple “coarse-grained” features that we can identify as “higher-level mathematics”; features that we can think of as capturing the “bulk” behavior of the accumulative evolution of axiomatic mathematics.

9 | Accumulative String Systems

As we’ve just seen, the accumulative evolution of even very simple transformation rules for expressions can quickly lead to considerable complexity. And in an effort to understand the essence of what’s going on, it’s useful to look at the slightly simpler case not of rules for “tree-structured expressions” but of rules for strings of characters.

Consider the seemingly trivial case of the rule: After one step this gives while after 2 steps we get though treating as the same as this just becomes: Here’s what happens with the rule: After 2 steps we get and after 3 steps where now there are a total of 25 “theorems”, including (unsurprisingly) things like:

It’s worth noting that despite the “lexical similarity” of the string rule we’re now using to the expression rule from the previous section, these rules actually work in very different ways. The string rule can apply to characters anywhere within a string, but what it inserts is always of fixed size. The expression rule deals with trees, and only applies to “whole subtrees”, but what it inserts can be a tree of any size. (One can align these setups by thinking of strings as expressions in which characters are “bound together” by an associative operator, as in A·B·A·A. But if one explicitly gives associativity axioms these will lead to additional pieces in the token-event graph.)

A rule like also has the feature of involving patterns. In principle we could include patterns in strings too—both for single characters (as with _) and for sequences of characters (as with __)—but we won’t do this here. (We can also consider one-way rules, using → instead of ⟷.)
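A minimal sketch of such an accumulative string system, with each known rule applied as “code” to each known rule as “data” (the starting rule A ⟷ AB is an assumed example, and each two-way rule is conflated with its reverse by sorting):

```python
def one_way_rewrites(s, a, b):
    # every single replacement of a -> b somewhere in s
    out, i = set(), s.find(a)
    while i != -1:
        out.add(s[:i] + b + s[i + len(a):])
        i = s.find(a, i + 1)
    return out

def step(rules):
    # apply every rule ("code") to every rule ("data"), in both directions,
    # accumulating the results; u <-> v is conflated with v <-> u by sorting
    new = set(rules)
    for code in rules:
        for data in rules:
            for a, b in (code, code[::-1]):
                for side in (0, 1):
                    for t in one_way_rewrites(data[side], a, b):
                        pair = (t, data[1]) if side == 0 else (data[0], t)
                        new.add(tuple(sorted(pair)))
    return new

rules1 = step({("A", "AB")})
rules2 = step(rules1)
print(sorted(rules1))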
To get a general sense of the kinds of things that happen in accumulative (string) systems, we can consider enumerating all possible distinct two-way string transformation rules. With only a single character A, there are only two distinct cases because systematically generates all possible rules and at t steps gives a total number of rules equal to: With characters A and B the distinct token-event graphs generated starting from rules with a total of at most 5 characters are: Note that when the strings in the initial rule are the same length, only a rather trivial finite token-event graph is ever generated, as in the case of : But when the strings are of different lengths, there is always unbounded growth.

10 | The Case of Hypergraphs

We’ve looked at accumulative versions of expression and string rewriting systems. So what about accumulative versions of hypergraph rewriting systems of the kind that appear in our Physics Project? Consider the very simple hypergraph rule or pictorially: (Note that the nodes that are named 1 here are really like pattern variables, that could be named for example x_.)

We can now do accumulative evolution with this rule, at each step combining results that involve equivalent (i.e. isomorphic) hypergraphs: After two steps this gives: And after 3 steps:

How does all this compare to “ordinary” evolution by hypergraph rewriting? Here’s a multiway graph based on applying the same underlying rule repeatedly, starting from an initial condition formed from the rule: What we see is that the accumulative evolution in effect “shortcuts” the ordinary multiway evolution, essentially by “caching” the result of every piece of every transformation between states (which in this case are rules), and delivering a given state in fewer steps.

In our typical investigation of hypergraph rewriting for our Physics Project we consider one-way transformation rules. Inevitably, though, the ruliad contains rules that go both ways.
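The step of “combining results that involve equivalent (i.e. isomorphic) hypergraphs” can be sketched by brute-force canonicalization, which is adequate for very small hypergraphs (representing a hypergraph as a list of tuples of integer node IDs is an assumption for illustration):

```python
from itertools import permutations

def canonical(hyperedges):
    # smallest relabeling over all node permutations: equal results
    # mean isomorphic hypergraphs (brute force, fine for small cases)
    nodes = sorted({n for e in hyperedges for n in e})
    best = None
    for perm in permutations(range(len(nodes))):
        relab = dict(zip(nodes, perm))
        form = tuple(sorted(tuple(relab[n] for n in e) for e in hyperedges))
        if best is None or form < best:
            best = form
    return best

print(canonical([(1, 2), (2, 3)]) == canonical([(7, 5), (5, 9)]))   # prints True
```

Two hypergraphs are merged in the accumulative evolution exactly when their canonical forms coincide; real implementations would of course use a proper graph-isomorphism algorithm rather than trying all permutations.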
And here, in an effort to understand the correspondence with our metamodel of mathematics, we can consider two-way hypergraph rewriting rules. An example is the two-way version of the rule above: Now the token-event graph becomes or after 2 steps (where now the transformations from “later states” to “earlier states” have started to fill in):

Just like in ordinary hypergraph evolution, the only way to get hypergraphs with additional hyperedges is to start with a rule that involves the addition of new hyperedges—and the same is true for the addition of new elements. Consider the rule: After 1 step this gives while after 2 steps it gives:

The general appearance of this token-event graph is not much different from what we saw with string rewrite or expression rewrite systems. So what this suggests is that it doesn’t matter much whether we’re starting from our metamodel of axiomatic mathematics or from any other reasonably rich rewriting system: we’ll always get the same kind of “large-scale” token-event graph structure. And this is an example of what we’ll use to argue for general laws of metamathematics.

11 | Proofs in Accumulative Systems

In an earlier section, we discussed how paths in a multiway graph can represent proofs of “equivalence” between expressions (or the “entailment” of one expression by another). For example, with the rule (or “axiom”) this shows a path that “proves” that “BA entails AAB”:

But once we know this, we can imagine adding this result (as what we can think of as a “lemma”) to our original rule: And now (the “theorem”) “BA entails AAB” takes just one step to prove—and all sorts of other proofs are also shortened:

It’s perfectly possible to imagine evolving a multiway system with a kind of “caching-based” speed-up mechanism where every new entailment discovered is added to the list of underlying rules.
And, by the way, it’s also possible to use two-way rules throughout the multiway system: But accumulative systems provide a much more principled way to progressively “add what’s discovered”. So what do proofs look like in such systems?

Consider the rule: Running it for 2 steps we get the token-event graph: Now let’s say we want to prove that the original “axiom” implies (or “entails”) the “theorem” . Here’s the subgraph that demonstrates the result: And here it is as a separate “proof graph” where each event takes two inputs—the “rule to be applied” and the “rule to apply to”—and the output is the derived (i.e. entailed or implied) new rule or rules.

If we run the accumulative system for another step, we get: Now there are additional “theorems” that have been generated. An example is: And now we can find a proof of this theorem: This proof exists as a subgraph of the token-event graph:

The proof just given has the fewest events—or “proof steps”—that can be used. But altogether there are 50 possible proofs, other examples being: These correspond to the subgraphs:

How much has the accumulative character of these token-event graphs contributed to the structure of these proofs? It’s perfectly possible to find proofs that never use “intermediate lemmas” but always “go back to the original axiom” at every step. In this case examples are which all in effect require at least one more “sequential event” than our shortest proof using intermediate lemmas. A slightly more dramatic example occurs for the theorem where now without intermediate lemmas the shortest proof is but with intermediate lemmas it becomes:

What we’ve done so far here is to generate a complete token-event graph for a certain number of steps, and then to see if we can find a proof in it for some particular statement. The proof is a subgraph of the “relevant part” of the full token-event graph.
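Finding a shortest proof of this kind by exhaustive search can be sketched as a breadth-first search over a string multiway system (the one-way rule A → AB and the cap on string length are illustrative assumptions, not the rule used in the text):

```python
from collections import deque

def successors(s, lhs, rhs):
    # all single-replacement results of lhs -> rhs somewhere in s
    out, i = set(), s.find(lhs)
    while i != -1:
        out.add(s[:i] + rhs + s[i + len(lhs):])
        i = s.find(lhs, i + 1)
    return out

def proof_path(start, goal, lhs, rhs, max_len=12):
    # breadth-first search: the first path found is a shortest proof
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        s, path = queue.popleft()
        if s == goal:
            return path
        for t in successors(s, lhs, rhs):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                queue.append((t, path + [t]))
    return None

print(proof_path("A", "ABB", "A", "AB"))   # ['A', 'AB', 'ABB']
```

Breadth-first order guarantees the first path found has the fewest steps, which makes it the analog of the geodesics mentioned earlier.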
Often—in analogy to the simpler case of finding proofs of equivalences between expressions in a multiway graph—we’ll call this subgraph a “proof path”. But in addition to just “finding a proof” in a fully constructed token-event graph, we can ask whether, given a statement, we can directly construct a proof for it.

As discussed in the context of proofs in ordinary multiway graphs, computational irreducibility implies that in general there’s no “shortcut” way to find a proof. In addition, for any statement, there may be no upper bound on the length of proof that will be required (or on the size or number of intermediate “lemmas” that will have to be used). And this, again, is the shadow of undecidability in our systems: that there can be statements whose provability may be arbitrarily difficult to determine.

12 | Beyond Substitution: Cosubstitution and Bisubstitution

In making our “metamodel” of mathematics we’ve been discussing the rewriting of expressions according to rules. But there’s a subtle issue that we’ve so far avoided, that has to do with the fact that the expressions we’re rewriting are often themselves patterns that stand for whole classes of expressions. And this turns out to allow for additional kinds of transformations that we’ll call cosubstitution and bisubstitution.

Let’s talk first about cosubstitution. Imagine we have the expression f[a]. The rule a → b would do a substitution for a to give f[b]. But if we have the expression f[c] the rule will do nothing. Now imagine that we have the expression f[x_]. This stands for a whole class of expressions, including f[a], f[c], etc. For most of this class of expressions, the rule a → b will do nothing. But in the specific case of f[a], it applies, and gives the result f[b].

If our rule is f[x_] → s then this will apply as an ordinary substitution to f[a], giving the result s. But if the rule is f[b] → s this will not apply as an ordinary substitution to f[a].
However, it can apply as a cosubstitution to f[x_] by picking out the specific case where x_ stands for b, then using the rule to give s. In general, the point is that ordinary substitution specializes patterns that appear in rules—while what one can think of as the “dual operation” of cosubstitution specializes patterns that appear in the expressions to which the rules are being applied. If one thinks of the rule that’s being applied as like an operator, and the expression to which the rule is being applied as an operand, then in effect substitution is about making the operator fit the operand, and cosubstitution is about making the operand fit the operator.

It’s important to realize that as soon as one’s operating on expressions involving patterns, cosubstitution is not something “optional”: it’s something that one has to include if one is really going to interpret patterns—wherever they occur—as standing for classes of expressions. When one’s operating on a literal expression (without patterns) only substitution is ever possible, as in corresponding to this fragment of a token-event graph:

Let’s say we have the rule f[a] → s (where f[a] is a literal expression). Operating on f[b] this rule will do nothing. But what if we apply the rule to f[x_]? Ordinary substitution still does nothing. But cosubstitution can do something. In fact, there are two different cosubstitutions that can be done in this case: What’s going on here? In the first case, f[x_] has the “special case” f[a], to which the rule applies (“by cosubstitution”)—giving the result s. In the second case, however, it’s x_ on its own which has the special case f[a], that gets transformed by the rule to s, giving the final cosubstitution result f[s].

There’s an additional wrinkle when the same pattern (such as x_) appears multiple times: In all cases, x_ is matched to a. But which of the x_’s is actually replaced is different in each case.
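The two cosubstitutions for f[x_] under the rule f[a] → s can be reproduced mechanically: cosubstitution is just matching run with the roles reversed, so that the pattern variables of the target expression (rather than of the rule) do the matching. A sketch, with expressions modeled as nested tuples and a trailing underscore marking pattern variables (an illustrative encoding, not the one used in the text):

```python
def match(pat, expr, binds=None):
    # ordinary one-way matching: pattern variables end in "_"
    binds = {} if binds is None else binds
    if isinstance(pat, str) and pat.endswith("_"):
        if pat in binds:
            return binds if binds[pat] == expr else None
        binds[pat] = expr
        return binds
    if isinstance(pat, str) or isinstance(expr, str):
        return binds if pat == expr else None
    if len(pat) != len(expr):
        return None
    for p, e in zip(pat, expr):
        binds = match(p, e, binds)
        if binds is None:
            return None
    return binds

def substitute(expr, binds):
    if isinstance(expr, str):
        return binds.get(expr, expr)
    return tuple(substitute(e, binds) for e in expr)

def subterms(expr, pos=()):
    yield pos, expr
    if not isinstance(expr, str):
        for i, e in enumerate(expr):
            yield from subterms(e, pos + (i,))

def replace_at(expr, pos, new):
    if not pos:
        return new
    return tuple(replace_at(e, pos[1:], new) if i == pos[0] else e
                 for i, e in enumerate(expr))

def cosubstitutions(target, lhs, rhs):
    # specialize the target's own variables so some subterm becomes lhs,
    # then rewrite that subterm to rhs: matching with the roles reversed
    results = set()
    for pos, sub in subterms(target):
        binds = match(sub, lhs, {})
        if binds is not None:
            results.add(replace_at(substitute(target, binds), pos, rhs))
    return results

print(cosubstitutions(("f", "x_"), ("f", "a"), "s"))   # the two cases: s and f[s]
```

The whole-expression match gives s (with x_ specialized to a), while matching the bare x_ against f[a] gives f[s], exactly the two cases described above.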
Here’s a slightly more complicated example: In ordinary substitution, replacements for patterns are in effect always made “locally”, with each specific pattern separately being replaced by some expression. But in cosubstitution, a “special case” found for a pattern will get used throughout when the replacement is done.

Let’s see how this all works in an accumulative axiomatic system. Consider the very simple rule: One step of substitution gives the token-event graph (where we’ve canonicalized the names of pattern variables to a_ and b_): But one step of cosubstitution gives instead: Here are the individual transformations that were made (with the rule at least nominally being applied only in one direction): The token-event graph above is then obtained by canonicalizing variables, and combining identical expressions (though for clarity we don’t merge rules of the form and ).

If we go another step with this particular rule using only substitution, there are additional events (i.e. transformations) but no new theorems produced: Cosubstitution, however, produces another 27 theorems or altogether or as trees:

We’ve now seen examples of both substitution and cosubstitution in action. But in our metamodel for mathematics we’re ultimately dealing not with each of these individually, but rather with the “symmetric” concept of bisubstitution, in which both substitution and cosubstitution can be mixed together, and applied even to parts of the same expression.

In the particular case of , bisubstitution adds nothing beyond cosubstitution. But often it does. Consider the rule: Here’s the result of applying this to three different expressions using substitution, cosubstitution and bisubstitution (where we consider only matches for “whole ∘ expressions”, not subparts): Cosubstitution very often yields substantially more transformations than substitution—bisubstitution then yielding modestly more than cosubstitution.
For example, for the axiom system the number of theorems derived after 1 and 2 steps is given by: In some cases there are theorems that can be produced by full bisubstitution, but not—even after any number of steps—by substitution or cosubstitution alone. However, it is also common to find that theorems can in principle be produced by substitution alone, but that this just takes more steps (and sometimes vastly more) than when full bisubstitution is used. (It’s worth noting, however, that the notion of “how many steps” it takes to “reach” a given theorem depends on the foliation one chooses to use in the token-event graph.)

The various forms of substitution that we’ve discussed here represent different ways in which one theorem can entail others. But our overall metamodel of mathematics—based as it is purely on the structure of symbolic expressions and patterns—implies that bisubstitution covers all entailments that are possible.

In the history of metamathematics and mathematical logic, a whole variety of “laws of inference” or “methods of entailment” have been considered. But with the modern view of symbolic expressions and patterns (as used, for example, in the Wolfram Language), bisubstitution emerges as the fundamental form of entailment, with other forms of entailment corresponding to the use of particular types of expressions or the addition of further elements to the pure substitutions we’ve used here. It should be noted, however, that when it comes to the ruliad different kinds of entailments correspond merely to different foliations—with the form of entailment that we’re using representing just a particularly straightforward case.

The concept of bisubstitution has arisen in the theory of term rewriting, as well as in automated theorem proving (where it is often viewed as a particular “strategy”, and called “paramodulation”).
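The core operation behind such bisubstitution steps, finding an assignment of pattern variables that makes two terms identical, is syntactic unification, which can be sketched as follows (variables are modeled as strings with a trailing underscore, and an occurs check rejects cyclic assignments):

```python
def walk(t, s):
    # follow variable bindings in substitution s
    while isinstance(t, str) and t.endswith("_") and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    return not isinstance(t, str) and any(occurs(v, x, s) for x in t)

def unify(a, b, s=None):
    # most general unifier of two terms, or None if they don't unify
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.endswith("_"):
        return None if occurs(a, b, s) else {**s, a: b}
    if isinstance(b, str) and b.endswith("_"):
        return None if occurs(b, a, s) else {**s, b: a}
    if isinstance(a, str) or isinstance(b, str) or len(a) != len(b):
        return None
    for x, y in zip(a, b):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

print(unify(("f", "x_", "b"), ("f", "a", "y_")))
```

Note that unlike one-way matching, unification lets variables on both sides acquire values, which is what makes it the natural primitive for the “symmetric” bisubstitution described above.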
In term rewriting, bisubstitution is closely related to the concept of unification—which essentially asks what assignment of values to pattern variables is needed in order to make different subterms of an expression be identical.

13 | Some First Metamathematical Phenomenology

Now that we’ve finished describing the many technical issues involved in constructing our metamodel of mathematics, we can start looking at its consequences. We discussed above how multiway graphs formed from expressions can be used to define a branchial graph that represents a kind of “metamathematical space”. We can now use a similar approach to set up a metamathematical space for our full metamodel of the “progressive accumulation” of mathematical statements.

Let’s start by ignoring cosubstitution and bisubstitution and considering only the process of substitution—and beginning with the axiom: Doing accumulative evolution from this axiom we get the token-event graph or after 2 steps: From this we can derive an “effective multiway graph” by directly connecting all input and output tokens involved in each event: And then we can produce a branchial graph, which in effect yields an approximation to the “metamathematical space” generated by our axiom: Showing the statements produced in the form of trees we get (with the top node representing ⟷):

If we do the same thing with full bisubstitution, then even after one step we get a slightly larger token-event graph: After two steps, we get which contains 46 statements, compared to 42 if only substitution is used. The corresponding branchial graph is: The adjacency matrices for the substitution and bisubstitution cases are then which have 80% and 85% respectively of the number of edges in complete graphs of these sizes. Branchial graphs are usually quite dense, but they nevertheless do show definite structure.
Here are some results after 2 steps:

14 | Relations to Automated Theorem Proving

We’ve discussed at some length what happens if we start from axioms and then build up an “entailment cone” of all statements that can be derived from them. But in the actual practice of mathematics people often want to just look at particular target statements, and see if they can be derived (i.e. proved) from the axioms. But what can we say “in bulk” about this process? The best source of potential examples we have right now comes from the practice of automated theorem proving—as for example implemented in the Wolfram Language function FindEquationalProof.

As a simple example of how this works, consider the axiom and the theorem: Automated theorem proving (based on FindEquationalProof) finds the following proof of this theorem: Needless to say, this isn’t the only possible proof. And in this very simple case, we can construct the full entailment cone—and determine that there aren’t any shorter proofs, though there are two more of the same length: All three of these proofs can be seen as paths in the entailment cone:

How “complicated” are these proofs? In addition to their lengths, we can for example ask how big the successive intermediate expressions they involve become, where here we are including not only the proofs already shown, but also some longer ones:

In the setup we’re using here, we can find a proof of by starting with lhs, building up an entailment cone, and seeing whether there’s any path in it that reaches rhs. In general there’s no upper bound on how far one will have to go to find such a path—or how big the intermediate expressions may need to get. One can imagine all kinds of optimizations, for example where one looks at multistep consequences of the original axioms, and treats these as “lemmas” that we can “add as axioms” to provide new rules that jump multiple steps on a path at a time. Needless to say, there are lots of tradeoffs in doing this.
(Is it worth the memory to store the lemmas? Might we “jump” past our target? etc.) But actual automated theorem provers typically work in a way that is much closer to our accumulative rewriting systems—in which the “raw material” on which one operates is statements rather than expressions. Once again, we can in principle always construct a whole entailment cone, and then look to see whether a particular statement occurs there. But then to give a proof of that statement it’s sufficient to find the subgraph of the entailment cone that leads to that statement. For example, starting with the axiom we get the entailment cone (shown here as a token-event graph, and dropping _’s): After 2 steps the statement shows up in this entailment cone where we’re indicating the subgraph that leads from the original axiom to this statement. Extracting this subgraph we get which we can view as a proof of the statement within this axiom system. But now let’s use traditional automated theorem proving (in the form of FindEquationalProof) to get a proof of this same statement. Here’s what we get: This is again a token-event graph, but its structure is slightly different from the one we “fished out of” the entailment cone. Instead of starting from the axiom and “progressively deriving” our statement, we start from both the statement and the axiom and then show that together they lead “merely via substitution” to a statement of the form , which we can take as an “obviously derivable tautology”. Sometimes the minimal “direct proof” found from the entailment cone can be considerably simpler than the one found by automated theorem proving. For example, for the statement the minimal direct proof is while the one found by FindEquationalProof is: But the great advantage of automated theorem proving is that it can “directedly” search for proofs instead of just “fishing them out of” the entailment cone that contains all possible exhaustively generated proofs.
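Extracting a proof subgraph from an entailment cone amounts to tracing back the events a target statement depends on. A minimal Python sketch, with a hypothetical token-event structure recorded as a mapping from each derived token to the inputs of the event that produced it:

```python
def proof_subgraph(events, target, axioms):
    """Trace back just the events needed to derive `target` from `axioms`.
    events: dict mapping each derived token to the inputs of its deriving event."""
    needed, seen, stack = [], set(), [target]
    while stack:
        t = stack.pop()
        if t in seen or t in axioms:
            continue
        seen.add(t)
        needed.append((events[t], t))
        stack.extend(events[t])
    return needed[::-1]   # events ordered roughly from axioms toward the target

# hypothetical cone: 'junk' is derived too, but plays no role in proving 'thm'
events = {
    "lemma1": ["ax"],
    "lemma2": ["ax", "lemma1"],
    "thm":    ["lemma1", "lemma2"],
    "junk":   ["ax"],
}
proof = proof_subgraph(events, "thm", {"ax"})
print(proof)
```

The rest of the cone ("junk" here) is simply never visited—which is exactly why a proof can be so much smaller than the entailment cone it is fished out of.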
To use automated theorem proving you have to “know where you want to go”—and in particular identify the theorem you want to prove. Consider the axiom and the statement: This statement doesn’t show up in the first few steps of the entailment cone for the axiom, even though millions of other theorems do. But automated theorem proving finds a proof of it—and rearranging the “prove-a-tautology proof” so that we just have to feed in a tautology somewhere in the proof, we get: The model-theoretic methods we’ll discuss a little later allow one effectively to “guess” theorems that might be derivable from a given axiom system. So, for example, for the axiom system here’s a “guess” at a theorem and here’s a representation of its proof found by automated theorem proving—where now the length of an intermediate “lemma” is indicated by the size of the corresponding node and in this case the longest intermediate lemma is of size 67 and is: In principle it’s possible to rearrange token-event graphs generated by automated theorem proving to have the same structure as the ones we get directly from the entailment cone—with axioms at the beginning and the theorem being proved at the end. But typical strategies for automated theorem proving don’t naturally produce such graphs. In principle automated theorem proving could work by directly searching for a “path” that leads to the theorem one’s trying to prove. But usually it’s much easier instead to have as the “target” a simple tautology. At least conceptually automated theorem proving must still try to “navigate” through the full token-event graph that makes up the entailment cone. And the main issue in doing this is that there are many places where one does not know “which branch to take”. But here there’s a crucial—if at first surprising—fact: at least so long as one is using full bisubstitution it ultimately doesn’t matter which branch one takes; there’ll always be a way to “merge back” to any other branch. 
This is a consequence of the fact that the accumulative systems we’re using automatically have the property of confluence, which says that every branch is accompanied by a subsequent merge. There’s an almost trivial way in which this is true by virtue of the fact that for every edge the system also includes the reverse of that edge. But there’s a more substantial reason as well: that given any two statements on two different branches, there’s always a way to combine them using a bisubstitution to get a single statement. In our Physics Project, the concept of causal invariance—which effectively generalizes confluence—is an important one that leads, among other things, to ideas like relativistic invariance. Later on we’ll discuss the idea that “regardless of what order you prove theorems in, you’ll always get the same math”, and its relationship to causal invariance and to the notion of relativity in metamathematics. But for now the importance of confluence is that it has the potential to simplify automated theorem proving—because in effect it says one can never ultimately “make a wrong turn” in getting to a particular theorem, or, alternatively, that if one keeps going long enough every path one might take will eventually be able to reach every theorem. And indeed this is exactly how things work in the full entailment cone. But the challenge in automated theorem proving is to generate only a tiny part of the entailment cone, yet still “get to” the theorem we want. And in doing this we have to carefully choose which “branches” we should try to merge using bisubstitution events. In automated theorem proving these bisubstitution events are typically called “critical pair lemmas”, and there are a variety of strategies for defining an order in which critical pair lemmas should be tried.
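The claim that branches can always merge back can be probed by brute force on a toy system. The following illustrative Python check (with hypothetical string rules, not anything from the text) tests that for every short string, every pair of one-step branches shares a descendant within a few steps; this is an empirical test of joinability on small cases, not a proof of confluence:

```python
from itertools import product

def step(s, rules):
    """All one-step rewrites of s."""
    out = set()
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def descendants(s, rules, k):
    """Everything reachable from s in at most k steps."""
    seen, frontier = {s}, {s}
    for _ in range(k):
        frontier = {t for u in frontier for t in step(u, rules)}
        seen |= frontier
    return seen

def branches_rejoin(rules, alphabet="AB", maxlen=5, k=4):
    """Check that every pair of branches from every short string can merge."""
    for n in range(1, maxlen + 1):
        for s in map("".join, product(alphabet, repeat=n)):
            succ = sorted(step(s, rules))
            for a in succ:
                for b in succ:
                    if a < b and not (descendants(a, rules, k)
                                      & descendants(b, rules, k)):
                        return False
    return True

print(branches_rejoin([("AB", "A"), ("BA", "A")]))   # True: branches always merge
print(branches_rejoin([("AB", "X"), ("AB", "Y")]))   # False: X and Y never rejoin
```

The first rule set always deletes a B next to an A, so every branch funnels to the same all-A normal form; the second makes an irrevocable choice, which is exactly the "wrong turn" that confluence rules out.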
It’s worth pointing out that there’s absolutely no guarantee that such procedures will find the shortest proof of any given theorem (or in fact that they’ll find a proof at all with a given amount of computational effort). One can imagine “higher-order proofs” in which one attempts to transform not just statements of the form , but full proofs (say represented as token-event graphs). And one can imagine using such transformations to try to simplify proofs. A general feature of the proofs we’ve been showing is that they are accumulative, in the sense that they continually introduce lemmas which are then reused. But in principle any proof can be “unrolled” into one that just repeatedly uses the original axioms (and in fact, purely by substitution)—and never introduces other lemmas. The necessary “cut elimination” can effectively be done by always recreating each lemma from the axioms whenever it’s needed—a process which can become exponentially complex. As an example, from the axiom above we can generate the proof where for example the first lemma at the top is reused in four events. But now by cut elimination we can “unroll” this whole proof into a “straight-line” sequence of substitutions on expressions done just using the original axiom—and we see that our final theorem is the statement that the first expression in the sequence is equivalent under the axiom to the last one. As is fairly evident in this example, a feature of automated theorem proving is that its result tends to be very “non-human”. Yes, it can provide incontrovertible evidence that a theorem is valid. But that evidence is typically far away from being any kind of “narrative” suitable for human consumption. In the analogy to molecular dynamics, an automated proof gives detailed “turn-by-turn instructions” that show how a molecule can reach a certain place in a gas. Typical “human-style” mathematics, on the other hand, operates on a higher level, analogous to talking about overall motion in a fluid.
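The potential exponential cost of cut elimination is easy to see in a toy model. In this hypothetical Python sketch each lemma is used twice by the next one, so inlining ("recreating each lemma from the axioms whenever it's needed") doubles the number of axiom invocations at each level:

```python
def unrolled_size(proof, name):
    """Count axiom invocations after fully inlining all lemmas."""
    return sum(1 if ref == "axiom" else unrolled_size(proof, ref)
               for ref in proof[name])

# hypothetical accumulative proof: each step reuses the previous lemma twice
proof = {
    "lemma1":  ["axiom", "axiom"],
    "lemma2":  ["lemma1", "lemma1"],
    "lemma3":  ["lemma2", "lemma2"],
    "theorem": ["lemma3", "lemma3"],
}
print(unrolled_size(proof, "theorem"))   # 16: doubling at every level
```

The accumulative proof has only 4 events, but its unrolled, lemma-free version invokes the axiom 2^4 = 16 times; with n levels of reuse the blowup is 2^n.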
And a core part of what’s achieved by our physicalization of metamathematics is understanding why it’s possible for mathematical observers like us to perceive mathematics as operating at this higher level. 15 | Axiom Systems of Present-Day Mathematics The axiom systems we’ve been talking about so far were chosen largely for their axiomatic simplicity. But what happens if we consider axiom systems that are used in practice in present-day mathematics? The simplest common example is the axiom system (actually, a single axiom) of semigroup theory, stated in our notation as: Using only substitution, all we ever get after any number of steps is the token-event graph (i.e. “entailment cone”): But with bisubstitution, even after one step we already get the entailment cone which contains such theorems as: After 2 steps, the entailment cone becomes which contains 1617 theorems such as with sizes distributed as follows: Looking at these theorems we can see that—in fact by construction—they are all just statements of the associativity of ∘. Or, put another way, they state that under this axiom all expression trees that have the same sequence of leaves are equivalent. What about group theory? The standard axioms can be written where ∘ is interpreted as the binary group multiplication operation, overbar as the unary inverse operation, and 1 as the constant identity element (or, equivalently, zero-argument function). One step of substitution already gives: It’s notable that in this picture one can already see “different kinds of theorems” ending up in different “metamathematical locations”.
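Returning to the semigroup case: the observation that under associativity all expression trees with the same leaf sequence are equivalent can be illustrated directly, since every bracketing flattens to the same canonical leaf sequence. An illustrative Python sketch:

```python
def bracketings(leaves):
    """All binary-tree groupings of a fixed leaf sequence (Catalan many)."""
    if len(leaves) == 1:
        return [leaves[0]]
    out = []
    for i in range(1, len(leaves)):
        for left in bracketings(leaves[:i]):
            for right in bracketings(leaves[i:]):
                out.append((left, right))
    return out

def flatten(t):
    """Canonical form under associativity: the left-to-right leaf sequence."""
    return (t,) if isinstance(t, str) else flatten(t[0]) + flatten(t[1])

trees = bracketings(["a", "b", "c", "d"])
print(len(trees))                    # 5 (the Catalan number C_3)
print({flatten(t) for t in trees})   # a single canonical form for all of them
```

All 5 bracketings of a∘b∘c∘d collapse to one canonical form, which is the semantic content of the 1617 syntactic associativity theorems mentioned above.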
One also sees some “obvious” tautological “theorems”, like and If we use full bisubstitution, we get 56 rather than 27 theorems, and many of the theorems are more complicated: After 2 steps of pure substitution, the entailment cone in this case becomes which includes 792 theorems with sizes distributed according to: But among all these theorems, do straightforward “textbook theorems” appear, like: The answer is no. It’s inevitable that in the end all such theorems must appear in the entailment cone. But it turns out that it takes quite a few steps. And indeed with automated theorem proving we can find “paths” that can be taken to prove these theorems—involving significantly more than two steps: So how about logic, or, more specifically Boolean algebra? A typical textbook axiom system for this (represented in terms of And ∧, Or ∨ and Not) is: After one step of substitution from these axioms we get or in our more usual rendering: So what happens here with “named textbook theorems” (excluding commutativity and distributivity, which already appear in the particular axioms we’re using)? Once again none of these appear in the first step of the entailment cone. But at step 2 with full bisubstitution the idempotence laws show up where here we’re only operating on theorems with leaf count below 14 (of which there are a total of 27,953). And if we go to step 3—and use leaf count below 9—we see the law of excluded middle and the law of noncontradiction show up: How are these reached? 
Here’s the smallest fragment of token-event graph (“shortest path”) within this entailment cone from the axioms to the law of excluded middle: There are actually many possible “paths” (476 in all with our leaf count restriction); the next smallest ones with distinct structures are: Here’s the “path” for this theorem found by automated theorem proving: Most of the other “named theorems” involve longer proofs—and so won’t show up until much later in the entailment cone: The axiom system we’ve used for Boolean algebra here is by no means the only possible one. For example, it’s stated in terms of And, Or and Not—but one doesn’t need all those operators; any Boolean expression (and thus any theorem in Boolean algebra) can also be stated just in terms of the single operator Nand. And in terms of that operator the very simplest axiom system for Boolean algebra contains (as I found in 2000) just one axiom (where here ∘ is now interpreted as Nand): Here’s one step of the substitution entailment cone for this axiom: After 2 steps this gives an entailment cone with 5486 theorems with size distribution: When one’s working with Nand, it’s less clear what one should consider to be “notable theorems”. But an obvious one is the commutativity of Nand: Here’s a proof of this obtained by automated theorem proving (tipped on its side for readability): Eventually it’s inevitable that this theorem must show up in the entailment cone for our axiom system. But based on this proof we would expect it only after something like 102 steps. And with the entailment cone growing exponentially this means that by the time shows up, perhaps other theorems would have done so—though most vastly more complicated. We’ve looked at axioms for group theory and for Boolean algebra. But what about other axiom systems from present-day mathematics? 
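The single Nand axiom above can be verified against the two-element model by a direct truth-table check. Here is a Python sketch; the form ((a∘b)∘c)∘(a∘((a∘c)∘a)) = c used below is quoted from memory of the published 2000 result rather than from the elided display, and the check establishes validity in the model, not the (much harder) property of being complete for Boolean algebra:

```python
from itertools import product

def nand(x, y):
    return not (x and y)

def axiom_holds(op):
    """Check ((a∘b)∘c) ∘ (a∘((a∘c)∘a)) == c for all a, b, c in {False, True}."""
    return all(
        op(op(op(a, b), c), op(a, op(op(a, c), a))) == c
        for a, b, c in product([False, True], repeat=3)
    )

print(axiom_holds(nand))                   # True: Nand satisfies the axiom
print(axiom_holds(lambda x, y: x and y))   # False: And does not
```

Commutativity of Nand itself is equally immediate semantically; what the 100-odd-step proof in the text shows is how far away that fact is syntactically, when derived from the single axiom alone.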
In a sense it’s remarkable how few of these there are—and indeed I was able to list essentially all of them in just two pages in A New Kind of Science: The longest axiom system listed here is a precise version of Euclid’s original axioms where we are listing everything (even logic) in explicit (Wolfram Language) functional form. Given these axioms we should now be able to prove all theorems in Euclidean geometry. As an example (that’s already complicated enough) let’s take Euclid’s very first “proposition” (Book 1, Proposition 1) which states that it’s possible “with a ruler and compass” (i.e. with lines and circles) to construct an equilateral triangle based on any line segment—as in: We can write this theorem by saying that given the axioms together with the “setup” it’s possible to derive: We can now use automated theorem proving to generate a proof and in this case the proof takes 272 steps. But the fact that it’s possible to generate this proof shows that (up to various issues about the “setup conditions”) the theorem it proves must eventually “occur naturally” in the entailment cone of the original axioms—though along with an absolutely immense number of other theorems that Euclid didn’t “call out” and write down in his books. Looking at the collection of axiom systems from A New Kind of Science (and a few related ones) for many of them we can just directly start generating entailment cones—here shown after one step, using substitution only: But if we’re going to make entailment cones for all axiom systems there are a few other technical wrinkles we have to deal with. The axiom systems shown above are all “straightforwardly equational” in the sense that they in effect state what amount to “algebraic relations” (in the sense of universal algebra) universally valid for all choices of variables. But some axiom systems traditionally used in mathematics also make other kinds of statements. 
In the traditional formalism and notation of mathematical logic these can look quite complicated and abstruse. But with a metamodel of mathematics like ours it’s possible to untangle things to the point where these different kinds of statements can also be handled in a streamlined way. In standard mathematical notation one might write which we can read as “for all a and b, equals ”—and which we can interpret in our “metamodel” of mathematics as the (two-way) rule: What this says is just that any time we see an expression that matches the pattern we can replace it by (or in Wolfram Language notation just ), and vice versa, so that in effect can be said to entail . But what if we have axioms that involve not just universal statements (“for all…”) but also existential statements (“there exists…”)? In a sense we’re already dealing with these. Whenever we write —or in explicit functional form, say o[a_, b_]—we’re effectively asserting that there exists some operator o that we can do operations with. It’s important to note that once we introduce o (or ∘) we imagine that it represents the same thing wherever it appears (in contrast to a pattern variable like a_ that can represent different things in different instances). Now consider an “explicit existential statement” like which we can read as “there exists something a for which equals a”. To represent the “something” we just introduce a “constant”, or equivalently an expression with head, say, α, and zero arguments: α []. Now we can write our existential statement as We can operate on this using rules like , with α[] always “passing through” unchanged—but with its mere presence asserting that “it exists”. A very similar setup works even if we have both universal and existential quantifiers. For example, we can represent as just where now there isn’t just a single object, say β[], that we assert exists; instead there are “lots of different β’s”, “parametrized” in this case by a. 
We can apply our standard accumulative bisubstitution process to this statement—and after one step we get: Note that this is a very different result from the one for the “purely universal” statement: In general, we can “compile” any statement in terms of quantifiers into our metamodel, essentially using the standard technique of Skolemization from mathematical logic. Thus for example can be “compiled into” can be compiled into: If we look at the actual axiom systems used in current mathematics there’s one more issue to deal with—which doesn’t affect the axioms for logic or group theory, but does show up, for example, in the Peano axioms for arithmetic. And the issue is that in addition to quantifying over “variables”, we also need to quantify over “functions”. Or formulated differently, we need to set up not just individual axioms, but a whole “axiom schema” that can generate an infinite sequence of “ordinary axioms”, one for each possible “function”. In our metamodel of mathematics, we can think of this in terms of “parametrized functions”, or in Wolfram Language, just as having functions whose heads are themselves patterns, as in f[n_][a_]. Using this setup we can then “compile” the standard induction axiom of Peano Arithmetic into the (Wolfram Language) metamodel form where the “implications” in the original axiom have been converted into one-way rules, so that what the axiom can now be seen to do is to define a transformation for something that is not an “ordinary mathematical-style expression” but rather an expression that is itself a rule. But the important point is that our whole setup of doing substitutions in symbolic expressions—like Wolfram Language—makes no fundamental distinction between dealing with “ordinary expressions” and with “rules” (in Wolfram Language, for example, is just Rule[a,b]). And as a result we can expect to be able to construct token-event graphs, build entailment cones, etc. 
just as well for axiom systems like Peano Arithmetic, as for ones like Boolean algebra and group theory. The actual number of nodes that appear even in what might seem like simple cases can be huge, but the whole setup makes it clear that exploring an axiom system like this is just another example—that can be uniformly represented with our metamodel of mathematics—of a form of sampling of the ruliad. 16 | The Model-Theoretic Perspective We’ve so far considered something like just as an abstract statement about arbitrary symbolic variables x and y, and some abstract operator ∘. But can we make a “model” of what x, y, and ∘ could “explicitly be”? Let’s imagine for example that x and y can take 2 possible values, say 0 or 1. (We’ll use numbers for notational convenience, though in principle the values could be anything we want.) Now we have to ask what ∘ can be in order to have our original statement always hold. It turns out in this case that there are several possibilities, that can be specified by giving possible “multiplication tables” for ∘: (For convenience we’ll often refer to such multiplication tables by numbers FromDigits[Flatten[m],k], here 0, 1, 5, 7, 10, 15.) Using let’s say the second multiplication table we can then “evaluate” both sides of the original statement for all possible choices of x and y, and verify that the statement always holds: If we allow, say, 3 possible values for x and y, there turn out to be 221 possible forms for ∘. The first few are: As another example, let’s consider the simplest axiom for Boolean algebra (that I discovered in 2000): Here are the “size-2” models for this and these, as expected, are the truth tables for Nand and Nor respectively. (In this particular case, there are no size-3 models, 12 size-4 models, and in general models of size 2^n—and no finite models of any other size.) Looking at this example suggests a way to talk about models for axiom systems. 
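The enumeration of size-2 models can be made concrete: try all 16 possible binary "multiplication tables" on {0, 1} against the axiom and keep those that satisfy it. An illustrative Python sketch, again using the ((a∘b)∘c)∘(a∘((a∘c)∘a)) = c form of the single Nand axiom (quoted from memory of the published result):

```python
from itertools import product

def models_of_size(k=2):
    """All k-element multiplication tables satisfying the axiom, as flat tuples
    (op(0,0), op(0,1), op(1,0), op(1,1), ...)."""
    found = []
    for flat in product(range(k), repeat=k * k):
        op = lambda x, y, f=flat: f[k * x + y]
        if all(op(op(op(a, b), c), op(a, op(op(a, c), a))) == c
               for a, b, c in product(range(k), repeat=3)):
            found.append(flat)
    return found

models = models_of_size(2)
print(models)   # [(1, 0, 0, 0), (1, 1, 1, 0)]: the Nor and Nand truth tables
```

Exactly 2 of the 16 candidate tables survive, matching the statement that the size-2 models of this axiom are precisely Nand and Nor.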
We can think of an axiom system as defining a collection of abstract constraints. But what can we say about objects that might satisfy those constraints? A model is in effect telling us about these objects. Or, put another way, it’s telling us what “things” the axiom system “describes”. And in the case of my axiom for Boolean algebra, those “things” would be Boolean variables, operated on using Nand or Nor. As another example, consider the axioms for group theory. Here are the models up to size 3 in this case: Is there a mathematical interpretation of these? Well, yes. They essentially correspond to (representations of) particular finite groups. The original axioms define constraints to be satisfied by any group. These models now correspond to particular groups with specific finite numbers of elements (and in fact specific representations of these groups). And just like in the Boolean algebra case this interpretation now allows us to start saying what the models are “about”. The first three, for example, correspond to cyclic groups, which can be thought of as being “about” addition of integers mod k. For axiom systems that haven’t traditionally been studied in mathematics, there typically won’t be any such preexisting identification of what they’re “about”. But we can still think of models as being a way that a mathematical observer can characterize—or summarize—an axiom system. And in a sense we can see the collection of possible finite models for an axiom system as being a kind of “model signature” for the axiom system. But let’s now consider what models tell us about “theorems” associated with a given axiom system. Take for example the axiom: Here are the size-2 models for this axiom system: Let’s now pick the last of these models.
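The correspondence between small models of the group axioms and particular finite groups can be illustrated by checking the textbook axioms directly. A Python sketch verifying that addition mod k (the cyclic group Z_k) satisfies associativity, identity and inverses:

```python
from itertools import product

def is_group_model(k, op, inv, e):
    """Check the textbook group axioms over the elements 0..k-1."""
    elems = range(k)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elems, repeat=3))
    ident = all(op(e, a) == a and op(a, e) == a for a in elems)
    inver = all(op(inv(a), a) == e and op(a, inv(a)) == e for a in elems)
    return assoc and ident and inver

# cyclic group Z_k: addition mod k, matching the text's first few small models
for k in (1, 2, 3):
    print(k, is_group_model(k, lambda a, b: (a + b) % k, lambda a: (-a) % k, 0))
```

Each check passes, showing these tables really are models of the axioms; enumerating all tables of a given size (as in the Boolean case) would recover them among the size-k models.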
Then we can take any symbolic expression involving ∘, and say what its values would be for every possible choice of the values of the variables that appear in it. The last row here gives an “expression code” that summarizes the values of each expression in this particular model. And if two expressions have different codes in the model then this tells us that these expressions cannot be equivalent according to the underlying axiom system. But if the codes are the same, then it’s at least possible that the expressions are equivalent in the underlying axiom system. So as an example, let’s take the equivalences associated with pairs of expressions that have code 3 (according to the model we’re using): So now let’s compare with an actual entailment cone for our underlying axiom system (where to keep the graph of modest size we have dropped expressions involving more than 3 variables): So far this doesn’t establish equivalence between any of our code-3 expressions. But if we generate a larger entailment cone (here using a different initial expression) we get where the path shown corresponds to the statement demonstrating that this is an equivalence that holds in general for the axiom system. But let’s take another statement implied by the model, such as: Yes, it’s valid in the model. But it’s not something that’s generally valid for the underlying axiom system, or could ever be derived from it. And we can see this for example by picking another model for the axiom system, say the second-to-last one in our list above, and finding out that the values for the two expressions here are different in that model: The definitive way to establish that a particular statement follows from a particular axiom system is to find an explicit proof for it, either directly by picking it out as a path in the entailment cone or by using automated theorem proving methods. But models in a sense give one a way to “get an approximate result”.
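The "expression code" idea is straightforward to reproduce: evaluate an expression under the model's multiplication table for every assignment of model elements to its variables, and collect the results into a tuple. A Python sketch with a hypothetical size-2 table (the text's actual model is in an elided figure):

```python
from itertools import product

table = {(0, 0): 1, (0, 1): 0, (1, 0): 1, (1, 1): 1}   # hypothetical size-2 model
def op(x, y):
    return table[(x, y)]

def code(expr, nvars):
    """Expression code: the tuple of values over all variable assignments."""
    return tuple(expr(*vals) for vals in product((0, 1), repeat=nvars))

c1 = code(lambda x, y: op(x, y), 2)
c2 = code(lambda x, y: op(op(x, y), op(x, y)), 2)
print(c1, c2)   # (1, 0, 1, 1) vs (1, 1, 1, 1)
```

Since the two codes differ, the two expressions cannot be equivalent under any axiom system for which this table is a valid model; equal codes would only have left equivalence open.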
As an example of how this works, consider a collection of possible expressions, with pairs of them joined whenever they can be proved equal in the axiom system we’re discussing: Now let’s indicate what codes two models of the axiom system assign to the expressions: The expressions within each connected graph component are equal according to the underlying axiom system, and in both models they are always assigned the same codes. But sometimes the models “overshoot”, assigning the same codes to expressions not in the same connected component—and therefore not equal according to the underlying axiom system. The models we’ve shown so far are ones that are valid for the underlying axiom system. If we use a model that isn’t valid we’ll find that even expressions in the same connected component of the graph (and therefore equal according to the underlying axiom system) will be assigned different codes (note the graphs have been rearranged to allow expressions with the same code to be drawn in the same patch): We can think of our graph of equivalences between expressions as corresponding to a slice through an entailment graph—and essentially being “laid out in metamathematical space”, like a branchial graph, or what we’ll later call an “entailment fabric”. And what we see is that when we have a valid model different codes yield different patches that in effect cover metamathematical space in a way that respects the equivalences implied by the underlying axiom system. But now let’s see what happens if we make an entailment cone, tagging each node with the code corresponding to the expression it represents, first for a valid model, and then for non-valid ones: With the valid model, the whole entailment cone is tagged with the same code (and here also same color). But for the non-valid models, different “patches” in the entailment cone are tagged with different codes. Let’s say we’re trying to see if two expressions are equal according to the underlying axiom system.
The definitive way to tell this is to find a “proof path” from one expression to the other. But as an “approximation” we can just “evaluate” these two expressions according to a model, and see if the resulting codes are the same. Even if it’s a valid model, though, this can only definitively tell us that two expressions aren’t equal; it can’t confirm that they are. In principle we can refine things by checking in multiple models—particularly ones with more elements. But without essentially pre-checking all possible equalities we can’t in general be sure that this will give us the complete story. Of course, generating explicit proofs from the underlying axiom system can also be hard—because in general the proof can be arbitrarily long. And in a sense there is a tradeoff. Given a particular equivalence to check we can either search for a path in the entailment graph, often effectively having to try many possibilities. Or we can “do the work up front” by finding a model or collection of models that we know will correctly tell us whether the equivalence is correct. Later we’ll see how these choices relate to how mathematical observers can “parse” the structure of metamathematical space. In effect observers can either explicitly try to trace out “proof paths” formed from sequences of abstract symbolic expressions—or they can “globally predetermine” what expressions “mean” by identifying some overall model. In general there may be many possible choices of models—and what we’ll see is that these different choices are essentially analogous to different choices of reference frames in physics. One feature of our discussion of models so far is that we’ve always been talking about making models for axioms, and then applying these models to expressions. But in the accumulative systems we’ve discussed above (and that seem like closer metamodels of actual mathematics), we’re only ever talking about “statements”—with “axioms” just being statements we happen to start with. 
So how do models work in such a context? Here’s the beginning of the token-event graph starting with produced using one step of entailment by substitution: For each of the statements given here, there are certain size-2 models (indicated here by their multiplication tables) that are valid—or in some cases all models are valid: We can summarize this by indicating in a 4×4 grid which of the 16 possible size-2 models are consistent with each statement generated so far in the entailment cone: Continuing one more step we get: It’s often the case that statements generated on successive steps in the entailment cone in essence just “accumulate more models”. But—as we can see from the right-hand edge of this graph—it’s not always the case—and sometimes a model valid for one statement is no longer valid for a statement it entails. (And the same is true if we use full bisubstitution rather than just substitution.) Everything we’ve discussed about models so far here has to do with expressions. But there can also be models for other kinds of structures. For strings it’s possible to use something like the same setup, though it doesn’t work quite so well. One can think of transforming the string and then trying to find appropriate “multiplication tables” for ∘, but here operating on the specific elements A and B, not on a collection of elements defined by the model. Defining models for a hypergraph rewriting system is more challenging, if interesting. One can think of the expressions we’ve used as corresponding to trees—which can be “evaluated” as soon as definite “operators” associated with the model are filled in at each node. If we try to do the same thing with graphs (or hypergraphs) we’ll immediately be thrust into issues of the order in which we scan the graph. At a more general level, we can think of a “model” as being a way that an observer tries to summarize things. 
And we can imagine many ways to do this, with differing degrees of fidelity, but always with the feature that if the summaries of two things are different, then those two things can’t be transformed into each other by whatever underlying process is being used. Put another way, a model defines some kind of invariant for the underlying transformations in a system. The raw material for computing this invariant may be operators at nodes, or may be things like overall graph properties (like cycle counts). 17 | Axiom Systems in the Wild We’ve talked about what happens with specific, sample axiom systems, as well as with various axiom systems that have arisen in present-day mathematics. But what about “axiom systems in the wild”—say just obtained by random sampling, or by systematic enumeration? In effect, each possible axiom system can be thought of as “defining a possible field of mathematics”—just in most cases not one that’s actually been studied in the history of human mathematics. But the ruliad certainly contains all such axiom systems. And in the style of A New Kind of Science we can do ruliology to explore them. As an example, let’s look at axiom systems with just one axiom, one binary operator and one or two variables. Here are the smallest few: For each of these axiom systems, we can then ask what theorems they imply. And for example we can enumerate theorems—just as we have enumerated axiom systems—then use automated theorem proving to determine which theorems are implied by which axiom systems. This shows the result, with possible axiom systems going down the page, possible theorems going across, and a particular square being filled in (darker for longer proofs) if a given theorem can be proved from a given axiom system: The diagonal on the left is axioms “proving themselves”. The lines across are for axiom systems like that basically say that any two expressions are equal—so that any theorem that is stated can be proved from the axiom system. 
But what if we look at the whole entailment cone for each of these axiom systems? Here are a few examples of the first two steps: With our method of accumulative evolution the axiom doesn’t on its own generate a growing entailment cone (though if combined with any axiom containing ∘ it does, and so does on its own). But in all the other cases shown the entailment cone grows rapidly (typically at least exponentially)—in effect quickly establishing many theorems. Most of those theorems, however, are “not small”—and for example after 2 steps here are the distributions of their sizes: So let’s say we generate only one step in the entailment cone. This is the pattern of “small theorems” we establish: And here is the corresponding result after two steps: Superimposing this on our original array of theorems we get: In other words, there are many small theorems that we can establish “if we look for them”, but which won’t “naturally be generated” quickly in the entailment cone (though eventually it’s inevitable that they will be generated). (Later we’ll see how this relates to the concept of “entailment fabrics” and the “knitting together of pieces of mathematics”.) In the previous section we discussed the concept of models for axiom systems. So what models do typical “axiom systems from the wild” have? The number of possible models of a given size varies greatly for different axiom systems: But for each model we can ask what theorems it implies are valid. And for example combining all models of size 2 yields the following “predictions” for what theorems are valid (with the actual theorems indicated by dots): Using instead models of size 3 gives “more accurate predictions”: As expected, looking at a fixed number of steps in the entailment cone “underestimates” the number of valid theorems, while looking at finite models overestimates it. 
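The model-based "prediction" of theorems can be sketched as follows: enumerate all size-2 models of an axiom, then call a candidate statement predicted-valid if it holds in every one of them. A hypothetical Python example using commutativity as the axiom (not one of the text's elided axiom systems):

```python
from itertools import product

def models(axiom, k=2):
    """All k-element multiplication tables for which the axiom always holds."""
    out = []
    for flat in product(range(k), repeat=k * k):
        op = lambda x, y, f=flat: f[k * x + y]
        if all(axiom(op, *v) for v in product(range(k), repeat=2)):
            out.append(flat)
    return out

def predicted(statement, tables, k=2):
    """'Predict' a statement valid if it holds in every one of the given models."""
    return all(
        all(statement(lambda x, y, f=flat: f[k * x + y], *v)
            for v in product(range(k), repeat=2))
        for flat in tables
    )

commutativity = lambda op, x, y: op(x, y) == op(y, x)
ms = models(commutativity)
print(len(ms))   # 8 of the 16 size-2 tables are commutative

# holds in every commutative model, and is in fact derivable from the axiom
print(predicted(lambda op, x, y: op(op(x, y), x) == op(x, op(y, x)), ms))   # True
# fails in some commutative model (e.g. the constant table), so not derivable
print(predicted(lambda op, x, y: op(x, x) == x, ms))                        # False
```

As the text notes, such predictions only overestimate: passing every small model is necessary but not sufficient for derivability, while a single failing model is a definitive refutation.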
So how does our analysis for “axiom systems from the wild” compare with what we’d get if we considered axiom systems that have been explicitly studied in traditional human mathematics? Here are some examples of “known” axiom systems that involve just a single binary operator and here’s the distribution of theorems they give: As must be the case, all the axiom systems for Boolean algebra yield the same theorems. But axiom systems for “different mathematical theories” yield different collections of theorems. What happens if we look at entailments from these axiom systems? Eventually all theorems must show up somewhere in the entailment cone of a given axiom system. But here are the results after one step of entailment: Some theorems have already been generated, but many have not: Just as we did above, we can try to “predict” theorems by constructing models. Here’s what happens if we ask what theorems hold for all valid models of size 2: For several of the axiom systems, the models “perfectly predict” at least the theorems we show here. And for Boolean algebra, for example, this isn’t surprising: the models just correspond to identifying ∘ as Nand or Nor, and to say this gives a complete description of Boolean algebra. But in the case of groups, “size-2 models” just capture particular groups that happen to be of size 2, and for these particular groups there are special, extra theorems that aren’t true for groups in general. If we look at models specifically of size 3 there aren’t any examples for Boolean algebra so we don’t predict any theorems. But for group theory, for example, we start to get a slightly more accurate picture of what theorems hold in general: Based on what we’ve seen here, is there something “obviously special” about the axiom systems that have traditionally been used in human mathematics? 
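The group-theory point above, that size-2 models satisfy "special, extra theorems", is easy to check directly. A minimal sketch, using the standard Cayley tables of Z2 and Z3 and the illustrative extra equation x∘x = e:

```python
# Sketch: the equation x∘x = e holds in the (unique) size-2 group Z2,
# but fails already in Z3, so it is not a theorem of group theory.

Z2 = {(a, b): (a + b) % 2 for a in range(2) for b in range(2)}  # identity e = 0
Z3 = {(a, b): (a + b) % 3 for a in range(3) for b in range(3)}  # identity e = 0

def holds_x_squared_is_e(table, n):
    """Does x∘x = e hold for every element of the group?"""
    return all(table[(x, x)] == 0 for x in range(n))

print(holds_x_squared_is_e(Z2, 2), holds_x_squared_is_e(Z3, 3))
```

So "predicting" theorems from size-2 models alone would wrongly certify x∘x = e; including the size-3 model filters it out, which is the "slightly more accurate picture" described above.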
There are cases like Boolean algebra where the axioms in effect constrain things so much that we can reasonably say that they’re “talking about definite things” (like Nand and Nor). But there are plenty of other cases, like group theory, where the axioms provide much weaker constraints, and for example allow an infinite number of possible specific groups. But both situations occur among axiom systems “from the wild”. And in the end what we’re doing here doesn’t seem to reveal anything “obviously special” (say in the statistics of models or theorems) about “human” axiom systems. And what this means is that we can expect that conclusions we draw from looking at the “general case of all axiom systems”—as captured in general by the ruliad—can be expected to hold in particular for the specific axiom systems and mathematical theories that human mathematics has studied. 18 | The Topology of Proof Space In the typical practice of pure mathematics the main objective is to establish theorems. Yes, one wants to know that a theorem has a proof (and perhaps the proof will be helpful in understanding the theorem), but the main focus is on theorems and not on proofs. In our effort to “go underneath” mathematics, however, we want to study not only what theorems there are, but also the process by which the theorems are reached. We can view it as an important simplifying assumption of typical mathematical observers that all that matters is theorems—and that different proofs aren’t relevant. But to explore the underlying structure of metamathematics, we need to unpack this—and in effect look directly at the structure of proof space. Let’s consider a simple system based on strings. Say we have the rewrite rule and we want to establish the theorem . To do this we have to find some path from A to ABA in the multiway system (or, effectively, in the entailment cone for this axiom system): But this isn’t the only possible path, and thus the only possible proof. 
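The idea of a proof as a path in a multiway system can be made concrete with a small sketch. Since the text's specific rewrite rule isn't reproduced here, the rules A→AB and B→A are an illustrative stand-in; the code enumerates every distinct rewrite path (i.e. proof) from "A" to "ABA".

```python
# Sketch: enumerate all rewrite paths (proofs) between two strings in a
# toy multiway system. Rules and target are illustrative assumptions.

RULES = [("A", "AB"), ("B", "A")]

def successors(s):
    """All strings reachable from s by one rewrite, at any position."""
    out = set()
    for lhs, rhs in RULES:
        for i in range(len(s)):
            if s.startswith(lhs, i):
                out.add(s[:i] + rhs + s[i + len(lhs):])
    return out

def proofs(start, target, max_len):
    """All rewrite paths from start to target, bounded by string length.
    (With these rules no rewrite shortens a string, so the bound ensures
    termination.)"""
    if start == target:
        return [[start]]
    paths = []
    for nxt in sorted(successors(start)):
        if len(nxt) <= max_len:
            paths += [[start] + p for p in proofs(nxt, target, max_len)]
    return paths

all_proofs = proofs("A", "ABA", max_len=3)
print(all_proofs)
```

For these rules there happen to be exactly two proofs (via "AA" and via "ABB"); with richer rules the same enumeration exposes the path structure whose deformability, or lack of it, is what the topology discussion below is about.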
In this particular case, there are 20 distinct paths, each corresponding to at least a slightly different proof: But one feature here is that all these different proofs can in a sense be “smoothly deformed” into each other, in this case by progressively changing just one step at a time. So this means that in effect there is no nontrivial topology to proof space in this case—and no “distinctly inequivalent” collections of proofs: But consider instead the rule . With this “axiom system” there are 15 possible proofs for the theorem : Pulling out just the proofs we get: And we see that in a sense there’s a “hole” in proof space here—so that there are two distinctly different kinds of proofs that can be done. One place it’s common to see a similar phenomenon is in games and puzzles. Consider for example the Towers of Hanoi puzzle. We can set up a multiway system for the possible moves that can be made. Starting from all disks on the left peg, we get after 1 step: After 2 steps we have: And after 8 steps (in this case) we have the whole “game graph”: The corresponding result for 4 disks is: And in each case we see the phenomenon of nontrivial topology. What fundamentally causes this? In a sense it reflects the possibility for distinctly different strategies that lead to the same result. Here, for example, different sides of the “main loop” correspond to the “foundational choice” of whether to move the biggest disk first to the left or to the right. And the same basic thing happens with 4 disks on 4 pegs, though the overall structure is more complicated there: If two paths diverge in a multiway system it could be that it will never be possible for them to merge again. But whenever the system has the property of confluence, it’s guaranteed that eventually the paths will merge. And, as it turns out, our accumulative evolution setup guarantees that (at least ignoring generation of new variables) confluence will always be achieved. But the issue is how quickly.
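Returning to the Towers of Hanoi example above, the "game graph" can be constructed directly: a state records which peg each disk is on, and an edge is a legal single-disk move. The nontrivial topology shows up as cycles in this graph. A minimal sketch for the 3-disk, 3-peg case:

```python
# Sketch: build the Towers of Hanoi state graph (3 disks, 3 pegs) and
# count its vertices and edges. Cycles (edges > vertices - 1) reflect
# the nontrivial topology discussed in the text.
from itertools import product

def legal_moves(state, pegs=3):
    """state[d] = peg of disk d (0 = smallest). A disk may move if no
    smaller disk sits on its peg, onto any peg holding no smaller disk."""
    for d, src in enumerate(state):
        if any(state[e] == src for e in range(d)):
            continue  # a smaller disk is on top of disk d
        for dst in range(pegs):
            if dst != src and not any(state[e] == dst for e in range(d)):
                yield state[:d] + (dst,) + state[d + 1:]

states = list(product(range(3), repeat=3))   # 3^3 = 27 states
# Every move is reversible, so collect undirected edges:
edges = {frozenset((s, t)) for s in states for t in legal_moves(s)}
print(len(states), len(edges))
```

The graph has 27 vertices and 39 edges, so it is far from being a tree: the surplus edges are exactly the loops (like the "main loop" around the choice of where to move the biggest disk first) that give the proof-space-like structure its holes.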
If branches always merge after just one step, then in a sense there’ll always be topologically trivial proof space. But if the merging can take a while (and in a continuum limit, arbitrarily long) then there’ll in effect be nontrivial topology. And one consequence of the nontrivial topology we’re discussing here is that it leads to disconnection in branchial space. Here are the branchial graphs for the first 3 steps in our original 3-disk 3-peg case: For the first two steps, the branchial graphs stay connected; but on the third step there’s disconnection. For the 4-disk 4-peg case the sequence of branchial graphs begins: At the beginning (and also the end) there’s a single component, that we might think of as a coherent region of metamathematical space. But in the middle it breaks into multiple disconnected components—in effect reflecting the emergence of multiple distinct regions of metamathematical space with something like event horizons temporarily existing between them. How should we interpret this? First and foremost, it’s something that reveals that there’s structure “below” the “fluid dynamics” level of mathematics; it’s something that depends on the discrete “axiomatic infrastructure” of metamathematics. And from the point of view of our Physics Project, we can think of it as a kind of metamathematical analog of a “quantum effect”. In our Physics Project we imagine different paths in the multiway system to correspond to different possible quantum histories. The observer is in effect spread over multiple paths, which they coarse grain or conflate together. An “observable quantum effect” occurs when there are paths that can be followed by the system, but that are somehow “too far apart” to be immediately coarse-grained together by the observer. Put another way, there is “noticeable quantum interference” when the different paths corresponding to different histories that are “simultaneously happening” are “far enough apart” to be distinguished by the observer.
“Destructive interference” is presumably associated with paths that are so far apart that to conflate them would effectively require conflating essentially every possible path. (And our later discussion of the relationship between falsity and the “principle of explosion” then suggests a connection between destructive interference in physics and falsity in mathematics.) In essence what determines the extent of “quantum effects” is then our “size” as observers in branchial space relative to the size of features in branchial space such as the “topological holes” we’ve been discussing. In the metamathematical case, the “size” of us as observers is in effect related to our ability (or choice) to distinguish slight differences in axiomatic formulations of things. And what we’re saying here is that when there is nontrivial topology in proof space, there is an intrinsic dynamics in metamathematical entailment that leads to the development of distinctions at some scale—though whether these become “visible” to us as mathematical observers depends on how “strong a metamathematical microscope” we choose to use relative to the scale of the “topological holes”. 19 | Time, Timelessness and Entailment Fabrics A fundamental feature of our metamodel of mathematics is the idea that a given set of mathematical statements can entail others. But in this picture what does “mathematical progress” look like? In analogy with physics one might imagine it would be like the evolution of the universe through time. One would start from some limited set of axioms and then—in a kind of “mathematical Big Bang”—these would lead to a progressively larger entailment cone containing more and more statements of mathematics. And in analogy with physics, one could imagine that the process of following chains of successive entailments in the entailment cone would correspond to the passage of time. But realistically this isn’t how most of the actual history of human mathematics has proceeded.
Because people—and even their computers—basically never try to extend mathematics by axiomatically deriving all possible valid mathematical statements. Instead, they come up with particular mathematical statements that for one reason or another they think are valid and interesting, then try to prove these. Sometimes the proof may be difficult, and may involve a long chain of entailments. Occasionally—especially if automated theorem proving is used—the entailments may approximate a geodesic path all the way from the axioms. But the practical experience of human mathematics tends to be much more about identifying “nearby statements” and then trying to “fit them together” to deduce the statement one’s interested in. And in general human mathematics seems to progress not so much through the progressive “time evolution” of an entailment graph as through the assembly of what one might call an “entailment fabric” in which different statements are being knitted together by entailments. In physics, the analog of the entailment graph is basically the causal graph which builds up over time to define the content of a light cone (or, more accurately, an entanglement cone). The analog of the entailment fabric is basically the (more-or-less) instantaneous state of space (or, more accurately, branchial space). In our Physics Project we typically take our lowest-level structure to be a hypergraph—and informally we often say that this hypergraph “represents the structure of space”. But really we should be deducing the “structure of space” by taking a particular time slice from the “dynamic evolution” represented by the causal graph—and for example we should think of two “atoms of space” as “being connected” in the “instantaneous state of space” if there’s a causal connection between them defined within the slice of the causal graph that occurs within the time slice we’re considering. 
In other words, the “structure of space” is knitted together by the causal connections represented by the causal graph. (In traditional physics, we might say that space can be “mapped out” by looking at overlaps between lots of little light cones.) Let’s look at how this works out in our metamathematical setting, using string rewrites to simplify things. If we start from the axiom this is the beginning of the entailment cone it generates: But instead of starting with one axiom and building up a progressively larger entailment cone, let’s start with multiple statements, and from each one generate a small entailment cone, say applying each rule at most twice. Here are entailment cones started from several different statements: But the crucial point is that these entailment cones overlap—so we can knit them together into an “entailment fabric”: Or with more pieces and another step of entailment: And in a sense this is a “timeless” way to imagine building up mathematics—and metamathematical space. Yes, this structure can in principle be viewed as part of the branchial graph obtained from a slice of an entailment graph (and technically this will be a useful way to think about it). But a different view—closer to the practice of human mathematics—is that it’s a “fabric” formed by fitting together many different mathematical statements. It’s not something where one’s tracking the overall passage of time, and seeing causal connections between things—as one might in “running a program”. Rather, it’s something where one’s fitting pieces together in order to satisfy constraints—as one might in creating a tiling. Underneath everything is the ruliad. And entailment cones and entailment fabrics can be thought of just as different samplings or slicings of the ruliad. The ruliad is ultimately the entangled limit of all possible computations. 
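The "knitting together" of overlapping entailment cones described above can be sketched in miniature: grow a small bounded rewrite closure (a toy entailment cone) from each of several starting statements, and join any two cones that share statements. The rule A→AB and the starting strings are illustrative assumptions, not the essay's examples.

```python
# Sketch: small "entailment cones" from several statements, knitted
# together wherever their sets of derived statements overlap.

def cone(start, steps):
    """Strings reachable from start in at most `steps` rewrites of A→AB."""
    frontier, seen = {start}, {start}
    for _ in range(steps):
        frontier = {s[:i] + "AB" + s[i + 1:]
                    for s in frontier for i in range(len(s)) if s[i] == "A"}
        seen |= frontier
    return seen

pieces = {s: cone(s, 2) for s in ["A", "AB", "BA"]}
overlaps = {(a, b) for a in pieces for b in pieces
            if a < b and pieces[a] & pieces[b]}
print(overlaps)
```

Here the cones from "A" and "AB" overlap (both contain "AB" and "ABB") and so can be stitched into one fabric piece, while the cone from "BA" stays a separate patch until further entailment steps connect it.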
But one can think of it as being built up by starting from all possible rules and initial conditions, then running them for an infinite number of steps. An entailment cone is essentially a “slice” of this structure where one’s looking at the “time evolution” from a particular rule and initial condition. An entailment fabric is an “orthogonal” slice, looking “at a particular time” across different rules and initial conditions. (And, by the way, rules and initial conditions are essentially equivalent, particularly in an accumulative system.) One can think of these different slices of the ruliad as being what different kinds of observers will perceive within the ruliad. Entailment cones are essentially what observers who persist through time but are localized in rulial space will perceive. Entailment fabrics are what observers who ignore time but explore more of rulial space will perceive. Elsewhere I’ve argued that a crucial part of what makes us perceive the laws of physics we do is that we are observers who consider ourselves to be persistent through time. But now we’re seeing that in the way human mathematics is typically done, the “mathematical observer” will be of a different character. And whereas for a physical observer what’s crucial is causality through time, for a mathematical observer (at least one who’s doing mathematics the way it’s usually done) what seems to be crucial is some kind of consistency or coherence across metamathematical space. In physics it’s far from obvious that a persistent observer would be possible. It could be that with all those detailed computationally irreducible processes happening down at the level of atoms of space there might be nothing in the universe that one could consider consistent through time. But the point is that there are certain “coarse-grained” attributes of the behavior that are consistent through time. And it is by concentrating on these that we end up describing things in terms of the laws of physics we know. 
There’s something very analogous going on in mathematics. The detailed branchial structure of metamathematical space is complicated, and presumably full of computational irreducibility. But once again there are “coarse-grained” attributes that have a certain consistency and coherence across it. And it is on these that we concentrate as human “mathematical observers”. And it is in terms of these that we end up being able to do “human-level mathematics”—in effect operating at a “fluid dynamics” level rather than a “molecular dynamics” one. The possibility of “doing physics in the ruliad” depends crucially on the fact that as physical observers we assume that we have certain persistence and coherence through time. The possibility of “doing mathematics (the way it’s usually done) in the ruliad” depends crucially on the fact that as “mathematical observers” we assume that the mathematical statements we consider will have a certain coherence and consistency—or, in effect, that it’s possible for us to maintain and grow a coherent body of mathematical knowledge, even as we try to include all sorts of new mathematical statements. 20 | The Notion of Truth Logic was originally conceived as a way to characterize human arguments—in which the concept of “truth” has always seemed quite central. And when logic was applied to the foundations of mathematics, “truth” was also usually assumed to be quite central. But the way we’ve modeled mathematics here has been much more about what statements can be derived (or entailed) than about any kind of abstract notion of what statements can be “tagged as true”. In other words, we’ve been more concerned with “structurally deriving” that “1 + 1 = 2” than with saying that “1 + 1 = 2 is true”. But what is the relation between this kind of “constructive derivation” and the logical notion of truth? We might just say that “if we can construct a statement then we should consider it true”.
And if we’re starting from axioms, then in a sense we’ll never have an “absolute notion of truth”—because whatever we derive is only “as true as the axioms we started from”. One issue that can come up is that our axioms might be inconsistent—in the sense that from them we can derive two obviously inconsistent statements. But to get further in discussing things like this we really need not only to have a notion of truth, but also a notion of falsity. In traditional logic it has tended to be assumed that truth and falsity are very much “the same kind of thing”—like 1 and 0. But one feature of our view of mathematics here is that actually truth and falsity seem to have a rather different character. And perhaps this is not surprising—because in a sense if there’s one true statement about something there are typically an infinite number of false statements about it. So, for example, the single statement is true, but the infinite collection of statements for any other are all false. There is another aspect to this, discussed since at least the Middle Ages, often under the name of the “principle of explosion”: that as soon as one assumes any statement that is false, one can logically derive absolutely any statement at all. In other words, introducing a single “false axiom” will start an explosion that will eventually “blow up everything”. So within our model of mathematics we might say that things are “true” if they can be derived, and are “false” if they lead to an “explosion”. But let’s say we’re given some statement. How can we tell if it’s true or false? One thing we can do to find out if it’s true is to construct an entailment cone from our axioms and see if the statement appears anywhere in it. Of course, given computational irreducibility there’s in general no upper bound on how far we’ll need to go to determine this. 
But now to find out if a statement is false we can imagine introducing the statement as an additional axiom, and then seeing if the entailment cone that’s now produced contains an explosion—though once again there’ll in general be no upper bound on how far we’ll have to go to guarantee that we have a “genuine explosion” on our hands. So is there any alternative procedure? Potentially the answer is yes: we can just try to see if our statement is somehow equivalent to “true” or “false”. But in our model of mathematics where we’re just talking about transformations on symbolic expressions, there’s no immediate built-in notion of “true” and “false”. To talk about these we have to add something. And for example what we can do is to say that “true” is equivalent to what seems like an “obvious tautology” such as , or in our computational notation, , while “false” is equivalent to something “obviously explosive”, like (or in our particular setup something more like ). But even though something like “Can we find a way to reach from a given statement?” seems like a much more practical question for an actual theorem-proving system than “Can we fish our statement out of a whole entailment cone?”, it runs into many of the same issues—in particular that there’s no upper limit on the length of path that might be needed. Soon we’ll return to the question of how all this relates to our interpretation of mathematics as a slice of the ruliad—and to the concept of the entailment fabric perceived by a mathematical observer. But to further set the context for what we’re doing let’s explore how what we’ve discussed so far relates to things like Gödel’s theorem, and to phenomena like incompleteness. From the setup of basic logic we might assume that we could consider any statement to be either true or false. 
Or, more precisely, we might think that given a particular axiom system, we should be able to determine whether any statement that can be syntactically constructed with the primitives of that axiom system is true or false. We could explore this by asking whether every statement is either derivable or leads to an explosion—or can be proved equivalent to an “obvious tautology” or to an “obvious explosion”. But as a simple “approximation” to this, let’s consider a string rewriting system in which we define a “local negation operation”. In particular, let’s assume that given a statement like the “negation” of this statement just exchanges A and B, in this case yielding . Now let’s ask what statements are generated from a given axiom system. Say we start with . After one step of possible substitutions we get while after 2 steps we get: And in our setup we’re effectively asserting that these are “true” statements. But now let’s “negate” the statements, by exchanging A and B. And if we do this, we’ll see that there’s never a statement where both it and its negation occur. In other words, there’s no obvious inconsistency being generated within this axiom system. But if we consider instead the axiom then this gives: And since this includes both and its “negation” , by our criteria we must consider this axiom system to be inconsistent. In addition to inconsistency, we can also ask about incompleteness. For all possible statements, does the axiom system eventually generate either the statement or its negation? Or, in other words, can we always decide from the axiom system whether any given statement is true or false? With our simple assumption about negation, questions of inconsistency and incompleteness become at least in principle very simple to explore. Starting from a given axiom system, we generate its entailment cone, then we ask within this cone what fraction of possible statements, say of a given length, occur. 
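The consistency test just described, generating statements and checking whether any statement appears together with its A↔B "negation", can be sketched directly. Since the text's particular axioms aren't reproduced here, the two rule sets below are illustrative: one that never produces a statement alongside its negation (within the steps explored), and one that does so immediately.

```python
# Sketch: generate statements from a toy string-rewrite axiom system and
# look for a statement occurring together with its A<->B "negation".
# Note: a "consistent" verdict here is only provisional, since deeper
# entailment steps could still produce an inconsistency.

def generate(rules, start, steps):
    """Statements reachable from `start` in at most `steps` rewrites."""
    seen, frontier = {start}, {start}
    for _ in range(steps):
        nxt = set()
        for s in frontier:
            for lhs, rhs in rules:
                for i in range(len(s)):
                    if s.startswith(lhs, i):
                        nxt.add(s[:i] + rhs + s[i + len(lhs):])
        frontier = nxt - seen
        seen |= nxt
    return seen

negate = lambda s: s.translate(str.maketrans("AB", "BA"))  # swap A and B

def inconsistent(stmts):
    """True if some statement and its negation are both generated."""
    return any(negate(s) in stmts for s in stmts)

print(inconsistent(generate([("A", "AB")], "A", 3)),
      inconsistent(generate([("A", "AB"), ("A", "BA")], "A", 3)))
```

The first system generates only {A, AB, ABB, ...}, none of whose negations appear; the second generates both "AB" and its negation "BA" after one step, so by the criterion in the text it must be considered inconsistent.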
If the answer is more than 50% we know there’s inconsistency, while if the answer is less than 50% that’s evidence of incompleteness. So what happens with different possible axiom systems? Here are some results from A New Kind of Science, in each case showing both what amounts to the raw entailment cone (or, in this case, multiway system evolution from “true”), and the number of statements of a given length reached after progressively more steps: At some level this is all rather straightforward. But from the pictures above we can already get a sense that there’s a problem. For most axiom systems the fraction of statements reached of a given length changes as we increase the number of steps in the entailment cone. Sometimes it’s straightforward to see what fraction will be achieved even after an infinite number of steps. But often it’s not. And in general we’ll run into computational irreducibility—so that in effect the only way to determine whether some particular statement is generated is just to go to ever more steps in the entailment cone and see what happens. In other words, there’s no guaranteed-finite way to decide what the ultimate fraction will be—and thus whether or not any given axiom system is inconsistent, or incomplete, or neither. For some axiom systems it may be possible to tell. But for some axiom systems it’s not, in effect because we don’t in general know how far we’ll have to go to determine whether a given statement is true or not. A certain amount of additional technical detail is required to reach the standard versions of Gödel’s incompleteness theorems. (Note that these theorems were originally stated specifically for the Peano axioms for arithmetic, but the Principle of Computational Equivalence suggests that they’re in some sense much more general, and even ubiquitous.)
But the important point here is that given an axiom system there may be statements that either can or cannot be reached—but there’s no upper bound on the length of path that might be needed to reach them even if one can. OK, so let’s come back to talking about the notion of truth in the context of the ruliad. We’ve discussed axiom systems that might show inconsistency, or incompleteness—and the difficulty of determining if they do. But the ruliad in a sense contains all possible axiom systems—and generates all possible statements. So how then can we ever expect to identify which statements are “true” and which are not? When we talked about particular axiom systems, we said that any statement that is generated can be considered true (at least with respect to that axiom system). But in the ruliad every statement is generated. So what criterion can we use to determine which we should consider “true”? The key idea is any computationally bounded observer (like us) can perceive only a tiny slice of the ruliad. And it’s a perfectly meaningful question to ask whether a particular statement occurs within that perceived slice. One way of picking a “slice” is just to start from a given axiom system, and develop its entailment cone. And with such a slice, the criterion for the truth of a statement is exactly what we discussed above: does the statement occur in the entailment cone? But how do typical “mathematical observers” actually sample the ruliad? As we discussed in the previous section, it seems to be much more by forming an entailment fabric than by developing a whole entailment cone. And in a sense progress in mathematics can be seen as a process of adding pieces to an entailment fabric: pulling in one mathematical statement after another, and checking that they fit into the fabric. So what happens if one tries to add a statement that “isn’t true”? 
The basic answer is that it produces an “explosion” in which the entailment fabric can grow to encompass essentially any statement. From the point of view of underlying rules—or the ruliad—there’s really nothing wrong with this. But the issue is that it’s incompatible with an “observer like us”—or with any realistic idealization of a mathematician. Our view of a mathematical observer is essentially an entity that accumulates mathematical statements into an entailment fabric. But we assume that the observer is computationally bounded, so in a sense they can only work with a limited collection of statements. So if there’s an explosion in an entailment fabric that means the fabric will expand beyond what a mathematical observer can coherently handle. Or, put another way, the only kind of entailment fabrics that a mathematical observer can reasonably consider are ones that “contain no explosions”. And in such fabrics, it’s reasonable to take the generation or entailment of a statement as a signal that the statement can be considered true. The ruliad is in a sense a unique and absolute thing. And we might have imagined that it would lead us to a unique and absolute definition of truth in mathematics. But what we’ve seen is that that’s not the case. And instead our notion of truth is something based on how we sample the ruliad as mathematical observers. But now we must explore what this means about what mathematics as we perceive it can be like. 21 | What Can Human Mathematics Be Like? The ruliad in a sense contains all structurally possible mathematics—including all mathematical statements, all axiom systems and everything that follows from them. But mathematics as we humans conceive of it is never the whole ruliad; instead it is always just some tiny part that we as mathematical observers sample. 
We might imagine, however, that this would mean that there is in a sense a complete arbitrariness to our mathematics—because in a sense we could just pick any part of the ruliad we want. Yes, we might want to start from a specific axiom system. But we might imagine that that axiom system could be chosen arbitrarily, with no further constraint. And that the mathematics we study can therefore be thought of as an essentially arbitrary choice, determined by its detailed history, and perhaps by cognitive or other features of humans. But there is a crucial additional issue. When we “sample our mathematics” from the ruliad we do it as mathematical observers and ultimately as humans. And it turns out that even very general features of us as mathematical observers turn out to put strong constraints on what we can sample, and how. When we discussed physics, we said that the central features of observers are their computational boundedness and their assumption of their own persistence through time. In mathematics, observers are again computationally bounded. But now it is not persistence through time that they assume, but rather a certain coherence of accumulated knowledge. We can think of a mathematical observer as progressively expanding the entailment fabric that they consider to “represent mathematics”. And the question is what they can add to that entailment fabric while still “remaining coherent” as observers. In the previous section, for example, we argued that if the observer adds a statement that can be considered “logically false” then this will lead to an “explosion” in the entailment fabric. Such a statement is certainly present in the ruliad. 
But if the observer were to add it, then they wouldn’t be able to maintain their coherence—because, whimsically put, their mind would necessarily “explode”. In thinking about axiomatic mathematics it’s been standard to say that any axiom system that’s “reasonable to use” should at least be consistent (even though, yes, for a given axiom system it’s in general ultimately undecidable whether this is the case). And certainly consistency is one criterion that we now see is necessary for a “mathematical observer like us”. But one can expect that it’s not the only criterion. In other words, although it’s perfectly possible to write down any axiom system, and even start generating its entailment cone, only some axiom systems may be compatible with “mathematical observers like us”. And so, for example, something like the Continuum Hypothesis—which is known to be independent of the “established axioms” of set theory—may well have the feature that, say, it has to be assumed to be true in order to get a metamathematical structure compatible with mathematical observers like us. In the case of physics, we know that the general characteristics of observers lead to certain key perceived features and laws of physics. In statistical mechanics, we’re dealing with “coarse-grained observers” who don’t trace and decode the paths of individual molecules, and therefore perceive the Second Law of thermodynamics, fluid dynamics, etc. And in our Physics Project we’re also dealing with coarse-grained observers who don’t track all the details of the atoms of space, but instead perceive space as something coherent and effectively continuous. And it seems as if in metamathematics there’s something very similar going on. As we began to discuss in the very first section above, mathematical observers tend to “coarse grain” metamathematical space.
In operational terms, one way they do this is by talking about something like the Pythagorean theorem without always going down to the detailed level of axioms, and for example saying just how real numbers should be defined. And something related is that they tend to concentrate more on mathematical statements and theorems than on their proofs. Later we’ll see how in the context of the ruliad there’s an even deeper level to which one can go. But the point here is that in actually doing mathematics one tends to operate at the “human scale” of talking about mathematical concepts rather than the “molecular-scale details” of axioms. But why does this work? Why is one not continually “dragged down” to the detailed axiomatic level—or below? How come it’s possible to reason at what we described above as the “fluid dynamics” level, without always having to go down to the detailed “molecular dynamics” level? The basic claim is that this works for mathematical observers for essentially the same reason as the perception of space works for physical observers. With the “coarse-graining” characteristics of the observer, it’s inevitable that the slice of the ruliad they sample will have the kind of coherence that allows them to operate at a higher level. In other words, mathematics can be done “at a human level” for the same basic reason that we have a “human-level experience” of space in physics. The fact that it works this way depends both on necessary features of the ruliad—and in general of multicomputation—as well as on characteristics of us as observers. Needless to say, there are “corner cases” where what we’ve described starts to break down. In physics, for example, the “human-level experience” of space breaks down near spacetime singularities. And in mathematics, there are cases where for example undecidability forces one to take a lower-level, more axiomatic and ultimately more metamathematical view. 
But the point is that there are large regions of physical space—and metamathematical space—where these kinds of issues don’t come up, and where our assumptions about physical—and mathematical—observers can be maintained. And this is what ultimately allows us to have the “human-scale” views of physics and mathematics that we do.

22 | Going below Axiomatic Mathematics

In the traditional view of the foundations of mathematics one imagines that axioms—say stated in terms of symbolic expressions—are in some sense the lowest level of mathematics. But thinking in terms of the ruliad suggests that in fact there is a still-lower “ur level”—a kind of analog of machine code in which everything, including axioms, is broken down into ultimate “raw computation”.

Take an axiom like , or, in more precise computational language: Compared to everything we’re used to seeing in mathematics this looks simple. But actually it’s already got a lot in it. For example, it assumes the notion of a binary operator, which it’s in effect naming “∘”. And for example it also assumes the notion of variables, and has two distinct pattern variables that are in effect “tagged” with the names x and y.

So how can we define what this axiom ultimately “means”? Somehow we have to go from its essentially textual symbolic representation to a piece of actual computation. And, yes, the particular representation we’ve used here can immediately be interpreted as computation in the Wolfram Language. But the ultimate computational concept we’re dealing with is more general than that. And in particular it can exist in any universal computational system.

Different universal computational systems (say particular languages or CPUs or Turing machines) may have different ways to represent computations. But ultimately any computation can be represented in any of them—with the differences in representation being like different “coordinatizations of computation”.
And however we represent computations there is one thing we can say for sure: all possible computations are somewhere in the ruliad. Different representations of computations correspond in effect to different coordinatizations of the ruliad. But all computations are ultimately there.

For our Physics Project it’s been convenient to use a “parametrization of computation” that can be thought of as being based on rewriting of hypergraphs. The elements in these hypergraphs are ultimately purely abstract, but we tend to talk about them as “atoms of space” to indicate the beginnings of our interpretation. It’s perfectly possible to use hypergraph rewriting as the “substrate” for representing axiom systems stated in terms of symbolic expressions. But it’s a bit more convenient (though ultimately equivalent) to instead use systems based on expression rewriting—or in effect tree rewriting.

At the outset, one might imagine that different axiom systems would somehow have to be represented by “different rules” in the ruliad. But as one might expect from the phenomenon of universal computation, it’s actually perfectly possible to think of different axiom systems as just being specified by different “data” operated on by a single set of rules. There are many rules and structures that we could use. But one system that has the benefit of a century of history is that of the S, K combinators. The basic concept is to represent everything in terms of “combinator expressions” containing just the two objects S and K. (It’s also possible to have just one fundamental object, and indeed S alone may be enough.)

It’s worth saying at the outset that when we go this “far down” things get pretty non-human and obscure. Setting things up in terms of axioms may already seem pedantic and low level. But going to a substrate below axioms—that we can think of as getting us to raw “atoms of existence”—will lead us to a whole other level of obscurity and complexity.
But if we’re going to understand how mathematics can emerge from the ruliad this is where we have to go. And combinators provide us with a more-or-less-concrete example. Here’s an example of a small combinator expression which corresponds to the “expression tree”: We can write the combinator expression without explicit “function application” [ ... ] by using a (left) application operator • and it’s always unambiguous to omit this operator, yielding the compact representation: By mapping S, K and the application operator to codewords it’s possible to represent this as a simple binary sequence: But what does our combinator expression mean? The basic combinators are defined to have the rules: These rules on their own don’t do anything to our combinator expression. But if we form the expression which we can write as then repeated application of the rules gives: We can think of this as “feeding” c, x and y into our combinator expression, then using the “plumbing” defined by the combinator expression to assemble a particular expression in terms of c, x and y. But what does this expression now mean? Well, that depends on what we think c, x and y mean. We might notice that c always appears in the configuration c[_][_]. And this means we can interpret it as a binary operator, which we could write in infix form as ∘ so that our expression becomes: And, yes, this is all incredibly low level. But we need to go even further. Right now we’re feeding in names like c, x and y. But in the end we want to represent absolutely everything purely in terms of S and K. So we need to get rid of the “human-readable names” and just replace them with “lumps” of S, K combinators that—like the names—get “carried around” when the combinator rules are applied. We can think about our ultimate expressions in terms of S and K as being like machine code. “One level up” we have assembly language, with the same basic operations, but explicit names. 
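The figures that originally showed the basic combinator rules are not reproduced above, but the rules themselves are standard: K[x][y] → x and S[x][y][z] → x[z][y[z]]. As a minimal sketch, assuming only these standard definitions, the rules can be modeled directly as curried Python functions:

```python
# Standard S, K combinator rules, modeled as curried Python functions:
#   K[x][y]    -> x
#   S[x][y][z] -> x[z][y[z]]
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# The identity combinator is derivable rather than primitive:
#   I = S[K][K], since S[K][K][z] -> K[z][K[z]] -> z
I = S(K)(K)
print(I(42))    # -> 42
```

Any lambda expression can be translated into a composition of just S and K by the standard technique of bracket abstraction, which is what makes this tiny basis universal.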
And the idea is that things like axioms—and the laws of inference that apply to them—can be “compiled down” to this assembly language. But ultimately we can always go further, to the very lowest-level “machine code”, in which only S and K ever appear. Within the ruliad as “coordinatized” by S, K combinators, there’s an infinite collection of possible combinator expressions. But how do we find ones that “represent something recognizably mathematical”? As an example let’s consider a possible way in which S, K can represent integers, and arithmetic on integers. The basic idea is that an integer n can be input as the combinator expression which for n = 5 gives: But if we now apply this to [S][K] what we get reduces to which contains 4 S’s. But with this representation of integers it’s possible to find combinator expressions that represent arithmetic operations. For example, here’s a representation of an addition operator: At the “assembly language” level we might call this plus, and apply it to integers i and j using: But at the “pure machine code” level can be represented simply by which when applied to [S][K] reduces to the “output representation” of 3: As a slightly more elaborate example represents the operation of raising to a power. Then becomes: Applying this to [S][K] repeated application of the combinator rules gives eventually yielding the output representation of 8: We could go on and construct any other arithmetic or computational operation we want, all just in terms of the “universal combinators” S and K. But how should we think about this in terms of our conception of mathematics? Basically what we’re seeing is that in the “raw machine code” of S, K combinators it’s possible to “find” a representation for something we consider to be a piece of mathematics. Earlier we talked about starting from structures like axiom systems and then “compiling them down” to raw machine code. 
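The specific integer encoding used in the text can't be reconstructed from the omitted figures, but the general idea can be sketched with the standard Church-numeral encoding, in which the integer n is represented by n-fold function application. Each lambda below can be mechanically compiled down to pure S, K combinators by bracket abstraction, so this is in effect the "assembly language" view of the same kind of machine-code arithmetic:

```python
# Church numerals: the integer n is the function f -> f composed n times.
# "plus" and "power" are the standard Church encodings, not necessarily
# the exact combinator expressions shown in the original figures.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
power = lambda m: lambda n: n(m)          # m raised to the power n

def church(n):
    """Build the Church numeral for a nonnegative integer n."""
    c = zero
    for _ in range(n):
        c = succ(c)
    return c

def decode(c):
    """Read a Church numeral back out as an ordinary integer."""
    return c(lambda k: k + 1)(0)

print(decode(plus(church(1))(church(2))))    # -> 3
print(decode(power(church(2))(church(3))))   # -> 8
```

The 2^3 = 8 example mirrors the power computation described in the text, just expressed one level above the raw S, K "machine code".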
But what about just “finding mathematics” in a sense “naturally occurring” in “raw machine code”? We can think of the ruliad as containing “all possible machine code”. And somewhere in that machine code must be all the conceivable “structures of mathematics”. But the question is: in the wildness of the raw ruliad, what structures can we as mathematical observers successfully pick out? The situation is quite directly analogous to what happens at multiple levels in physics. Consider for example a fluid full of molecules bouncing around. As we’ve discussed several times, observers like us usually aren’t sensitive to the detailed dynamics of the molecules. But we can still successfully pick out large-scale structures—like overall fluid motions, vortices, etc. And—much like in mathematics—we can talk about physics just at this higher level. In our Physics Project all this becomes much more extreme. For example, we imagine that space and everything in it is just a giant network of atoms of space. And now within this network we imagine that there are “repeated patterns”—that correspond to things like electrons and quarks and black holes. In a sense it is the big achievement of natural science to have managed to find these regularities so that we can describe things in terms of them, without always having to go down to the level of atoms of space. But the fact that these are the kinds of regularities we have found is also a statement about us as physical observers. And the point is that even at the level of the raw ruliad our characteristics as physical observers will inevitably lead us to such regularities. The fact that we are computationally bounded and assume ourselves to have a certain persistence will lead us to consider things that are localized and persistent—that in physics we identify for example as particles. And it’s very much the same thing in mathematics. 
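The idea of persistent, localized structures riding on top of simple underlying rules can be illustrated with a minimal cellular automaton simulation. Rule 110 is a standard example of "class 4" behavior; the specific automaton shown in the original figures may differ:

```python
# Rule 110 cellular automaton: a minimal system in which localized,
# persistent structures emerge from simple underlying rules—an analog
# of the "repeated patterns" a coarse-grained observer picks out.
def rule110_step(cells):
    """Apply one step of rule 110 on a cyclic row of 0/1 cells."""
    n = len(cells)
    # Neighborhoods mapping to 1 under rule 110 (binary 01101110)
    alive = {(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 1, 0), (0, 0, 1)}
    return [1 if (cells[(i - 1) % n], cells[i], cells[(i + 1) % n]) in alive
            else 0
            for i in range(n)]

cells = [0] * 40
cells[20] = 1                      # single-cell initial condition
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = rule110_step(cells)
```

Running this prints a spacetime diagram in which recognizable structures propagate—much as particles do in physics, or as robust constructs do in metamathematical space.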
As mathematical observers we’re interested in picking out from the raw ruliad “repeated patterns” that are somehow robust. But now instead of identifying them as particles, we’ll identify them as mathematical constructs and definitions. In other words, just as a repeated pattern in the ruliad might in physics be interpreted as an electron, in mathematics a repeated pattern in the ruliad might be interpreted as an integer. We might think of physics as something “emergent” from the structure of the ruliad, and now we’re thinking of mathematics the same way. And of course not only is the “underlying stuff” of the ruliad the same in both cases, but also in both cases it’s “observers like us” that are sampling and perceiving things. There are lots of analogies to the process we’re describing of “fishing constructs out of the raw ruliad”. As one example, consider the evolution of a (“class 4”) cellular automaton in which localized structures emerge: Underneath, just as throughout the ruliad, there’s lots of detailed computation going on, with rules repeatedly getting applied to each cell. But out of all this underlying computation we can identify a certain set of persistent structures—which we can use to make a “higher-level description” that may capture the aspects of the behavior that we care about. Given an “ocean” of S, K combinator expressions, how might we set about “finding mathematics” in them? One straightforward approach is just to identify certain “mathematical properties” we want, and then go searching for S, K combinator expressions that satisfy these. For example, if we want to “search for (propositional) logic” we first need to pick combinator expressions to symbolically represent “true” and “false”. There are many pairs of expressions that will work. As one example, let’s pick: Now we can just search for combinator expressions which, when applied to all possible pairs of “true” and “false” give truth tables corresponding to particular logical functions. 
And if we do this, here are examples of the smallest combinator expressions we find: Here’s how we can then reproduce the truth table for And: If we just started picking combinator expressions at random, then most of them wouldn’t be “interpretable” in terms of this representation of logic. But if we ran across for example we could recognize in it the combinators for And, Or, etc. that we identified above, and in effect “disassemble” it to give: It’s worth noting, though, that even with the choices we made above for “true” and “false”, there’s not just a single possible combinator, say for And. Here are a few possibilities: And there’s also nothing unique about the choices for “true” and “false”. With the alternative choices here are the smallest combinator expressions for a few logical functions:

So what can we say in general about the “interpretability” of an arbitrary combinator expression? Obviously any combinator expression does what it does at the level of raw combinators. But the question is whether it can be given a “higher-level”—and potentially “mathematical”—interpretation. And in a sense this is directly an issue of what a mathematical observer “perceives” in it. Does it contain some kind of robust structure—say a kind of analog for mathematics of a particle in physics?

Axiom systems can be viewed as a particular way to “summarize” certain “raw machine code” in the ruliad. But from the point of view of a “raw coordinatization of the ruliad” like combinators there doesn’t seem to be anything immediately special about them. At least for us humans, however, they do seem to be an obvious “waypoint”. Because by distinguishing operators and variables, establishing arities for operators and introducing names for things, they reflect the kind of structure that’s familiar from human language. But now that we think of the ruliad as what’s “underneath” both mathematics and physics there’s a different path that’s suggested.
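The kind of search for logical functions described above can be sketched concretely. Below is a minimal normal-order S, K reducer, together with the standard Church-style choices TRUE = K and FALSE = S[K]—not necessarily the pair chosen in the omitted figures—used to check the truth table for And:

```python
# Minimal normal-order reducer for S, K combinator expressions,
# represented as nested ("app", f, x) tuples over the atoms "S", "K".
S, K = "S", "K"

def app(f, x):
    return ("app", f, x)

def step(e):
    """One leftmost-outermost reduction step; returns (expr, changed)."""
    spine, head = [], e
    while isinstance(head, tuple):      # unwind the application spine
        spine.append(head[2])
        head = head[1]
    args = spine[::-1]                  # head's arguments, left to right
    if head == K and len(args) >= 2:    # K[x][y] -> x
        new, rest = args[0], args[2:]
    elif head == S and len(args) >= 3:  # S[x][y][z] -> x[z][y[z]]
        x, y, z = args[:3]
        new, rest = app(app(x, z), app(y, z)), args[3:]
    else:
        if isinstance(e, tuple):        # no head redex: recurse inward
            f, changed = step(e[1])
            if changed:
                return app(f, e[2]), True
            a, changed = step(e[2])
            if changed:
                return app(e[1], a), True
        return e, False
    for a in rest:                      # reattach any leftover arguments
        new = app(new, a)
    return new, True

def reduce_expr(e, limit=1000):
    for _ in range(limit):
        e, changed = step(e)
        if not changed:
            return e
    raise RuntimeError("no normal form found within limit")

TRUE, FALSE = K, app(S, K)     # K[x][y] -> x ;  S[K][x][y] -> y

def AND(p, q):                 # Church-style And: p[q][p]
    return app(app(p, q), p)

for p_name, p in [("T", TRUE), ("F", FALSE)]:
    for q_name, q in [("T", TRUE), ("F", FALSE)]:
        result = reduce_expr(AND(p, q))
        print(p_name, q_name, "->", "T" if result == TRUE else "F")
```

With these choices And(p, q) is just p applied to q and then to p, and a brute-force search over small combinator expressions—of the kind the text describes—would rediscover equivalent forms.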
With the axiomatic approach we’re effectively trying to leverage human language as a way of summarizing what’s going on. But an alternative is to leverage our direct experience of the physical world, and our perception and intuition about things like space. And as we’ll discuss later, this is likely in many ways a better “metamodel” of the way pure mathematics is actually practiced by us humans. In some sense, this goes straight from the “raw machine code” of the ruliad to “human-level mathematics”, sidestepping the axiomatic level. But given how much “reductionist” work has already been done in mathematics to represent its results in axiomatic form, there is definitely still great value in seeing how the whole axiomatic setup can be “fished out” of the “raw ruliad”. And there’s certainly no lack of complicated technical issues in doing this. As one example, how should one deal with “generated variables”? If one “coordinatizes” the ruliad in terms of something like hypergraph rewriting this is fairly straightforward: it just involves creating new elements or hypergraph nodes (which in physics would be interpreted as atoms of space). But for something like S, K combinators it’s a bit more subtle. In the examples we’ve given above, we have combinators that, when “run”, eventually reach a fixed point. But to deal with generated variables we probably also need combinators that never reach fixed points, making it considerably more complicated to identify correspondences with definite symbolic expressions. Another issue involves rules of entailment, or, in effect, the metalogic of an axiom system. In the full axiomatic setup we want to do things like create token-event graphs, where each event corresponds to an entailment. But what rule of entailment should be used? The underlying rules for S, K combinators, for example, define a particular choice—though they can be used to emulate others. But the ruliad in a sense contains all choices. 
And, once again, it’s up to the observer to “fish out” of the raw ruliad a particular “slice”—which captures not only the axiom system but also the rules of entailment used. It may be worth mentioning a slightly different existing “reductionist” approach to mathematics: the idea of describing things in terms of types. A type is in effect an equivalence class that characterizes, say, all integers, or all functions from tuples of reals to truth values. But in our terms we can interpret a type as a kind of “template” for our underlying “machine code”: we can say that some piece of machine code represents something of a particular type if the machine code matches a particular pattern of some kind. And the issue is then whether that pattern is somehow robust “like a particle” in the raw ruliad. An important part of what made our Physics Project possible is the idea of going “underneath” space and time and other traditional concepts of physics. And in a sense what we’re doing here is something very similar, though for mathematics. We want to go “underneath” concepts like functions and variables, and even the very idea of symbolic expressions. In our Physics Project a convenient “parametrization” of what’s “underneath” is a hypergraph made up of elements that we often refer to as “atoms of space”. In mathematics we’ve discussed using combinators as our “parametrization” of what’s “underneath”. But what are these “made of”? We can think of them as corresponding to raw elements of metamathematics, or raw elements of computation. But in the end, they’re “made of” whatever the ruliad is “made of”. And perhaps the best description of the elements of the ruliad is that they are “atoms of existence”—the smallest units of anything, from which everything, in mathematics and physics and elsewhere, must be made. The atoms of existence aren’t bits or points or anything like that. 
They’re something fundamentally lower level that’s come into focus only with our Physics Project, and particularly with the identification of the ruliad. And for our purposes here I’ll call such atoms of existence “emes” (pronounced “eemes”, like phonemes etc.). Everything in the ruliad is made of emes. The atoms of space in our Physics Project are emes. The nodes in our combinator trees are emes. An eme is a deeply abstract thing. And in a sense all it has is an identity. Every eme is distinct. We could give it a name if we wanted to, but it doesn’t intrinsically have one. And in the end the structure of everything is built up simply from relations between emes.

23 | The Physicalized Laws of Mathematics

The concept of the ruliad suggests there is a deep connection between the foundations of mathematics and physics. And now that we have discussed how some of the familiar formalism of mathematics can “fit into” the ruliad, we are ready to use the “bridge” provided by the ruliad to start exploring how to apply some of the successes and intuitions of physics to mathematics.

A foundational part of our everyday experience of physics is our perception that we live in continuous space. But our Physics Project implies that at sufficiently small scales space is actually made of discrete elements—and it is only because of the coarse-grained way in which we experience it that we perceive it as continuous.

In mathematics—unlike physics—we’ve long thought of the foundations as being based on things like symbolic expressions that have a fundamentally discrete structure. Normally, though, the elements of those expressions are, for example, given human-recognizable names (like 2 or Plus). But what we saw in the previous section is that these recognizable forms can be thought of as existing in an “anonymous” lower-level substrate made of what we can call atoms of existence or emes. But the crucial point is that this substrate is directly based on the ruliad.
And its structure is identical between the foundations of mathematics and physics. In mathematics the emes aggregate up to give us our universe of mathematical statements. In physics they aggregate up to give us our physical universe.

But now the commonality of underlying “substrate” makes us realize that we should be able to take our experience of physics, and apply it to mathematics. So what is the analog in mathematics of our perception of the continuity of space in physics? We’ve discussed the idea that we can think of mathematical statements as being laid out in a metamathematical space—or, more specifically, in what we’ve called an entailment fabric. We initially talked about “coordinatizing” this using axioms, but in the previous section we saw how to go “below axioms” to the level of “pure emes”. When we do mathematics, though, we’re sampling this on a much higher level. And just as, as physical observers, we coarse grain the emes (that we usually call “atoms of space”) that make up physical space, so too, as “mathematical observers”, we coarse grain the emes that make up metamathematical space.

Foundational approaches to mathematics—particularly over the past century or so—have almost always been based on axioms and on their fundamentally discrete symbolic structure. But by going to a lower level and seeing the correspondence with physics we are led to consider what we might think of as a higher-level “experience” of mathematics—operating not at the “molecular dynamics” level of specific axioms and entailments, but rather at what one might call the “fluid dynamics” level of larger-scale concepts.

At the outset one might not have any reason to think that this higher-level approach could consistently be applied. But this is the first big place where ideas from physics can be used.
If both physics and mathematics are based on the ruliad, and if our general characteristics as observers apply in both physics and mathematics, then we can expect that similar features will emerge. And in particular, we can expect that our everyday perception of physical space as continuous will carry over to mathematics, or, more accurately, to metamathematical space. The picture is that we as mathematical observers have a certain “size” in metamathematical space. We identify concepts—like integers or the Pythagorean theorem—as “regions” in the space of possible configurations of emes (and ultimately of slices of the ruliad). At an axiomatic level we might think of ways to capture what a typical mathematician might consider “the same concept” with slightly different formalism (say, different large cardinal axioms or different models of real numbers). But when we get down to the level of emes there’ll be vastly more freedom in how we capture a given concept—so that we’re in effect using a whole region of “emic space” to do so. But now the question is what happens if we try to make use of the concept defined by this “region”? Will the “points in the region” behave coherently, or will everything be “shredded”, with different specific representations in terms of emes leading to different conclusions? The expectation is that in most cases it will work much like physical space, and that what we as observers perceive will be quite independent of the detailed underlying behavior at the level of emes. Which is why we can expect to do “higher-level mathematics”, without always having to descend to the level of emes, or even axioms. And this we can consider as the first great “physicalized law of mathematics”: that coherent higher-level mathematics is possible for us for the same reason that physical space seems coherent to observers like us. 
We’ve discussed several times before the analogy to the Second Law of thermodynamics—and the way it makes possible a higher-level description of things like fluids for “observers like us”. There are certainly cases where the higher-level description breaks down. Some of them may involve specific probes of molecular structure (like Brownian motion). Others may be slightly more “unwitting” (like hypersonic flow). In our Physics Project we’re very interested in where similar breakdowns might occur—because they’d allow us to “see below” the traditional continuum description of space. Potential targets involve various extreme or singular configurations of spacetime, where in effect the “coherent observer” gets “shredded”, because different atoms of space “within the observer” do different things. In mathematics, this kind of “shredding” of the observer will tend to be manifest in the need to “drop below” higher-level mathematical concepts, and go down to a very detailed axiomatic, metamathematical or even eme level—where computational irreducibility and phenomena like undecidability are rampant. It’s worth emphasizing that from the point of view of pure axiomatic mathematics it’s not at all obvious that higher-level mathematics should be possible. It could be that there’d be no choice but to work through every axiomatic detail to have any chance of making conclusions in mathematics. But the point is that we now know there could be exactly the same issue in physics. Because our Physics Project implies that at the lowest level our universe is effectively made of emes that have all sorts of complicated—and computationally irreducible—behavior. Yet we know that we don’t have to trace through all the details of this to make conclusions about what will happen in the universe—at least at the level we normally perceive it. 
In other words, the fact that we can successfully have a “high-level view” of what happens in physics is something that fundamentally has the same origin as the fact that we can successfully have a high-level view of what happens in mathematics. Both are just features of how observers like us sample the ruliad that underlies both physics and mathematics.

24 | Uniformity and Motion in Metamathematical Space

We’ve discussed how the basic concept of space as we experience it in physics leads us to our first great physicalized law of mathematics—and how this provides for the very possibility of higher-level mathematics. But this is just the beginning of what we can learn from thinking about the correspondences between physical and metamathematical space implied by their common origin in the structure of the ruliad.

A key idea is to think of a limit of mathematics in which one is dealing with so many mathematical statements that one can treat them “in bulk”—as forming something we could consider a continuous metamathematical space. But what might this space be like?

Our experience of physical space is that at our scale and with our means of perception it seems to us for the most part quite simple and uniform. And this is deeply connected to the concept that pure motion is possible in physical space—or, in other words, that it’s possible for things to move around in physical space without fundamentally changing their character.

Looked at from the point of view of the atoms of space it’s not at all obvious that this should be possible. After all, whenever we move we’ll almost inevitably be made up of different atoms of space. But it’s fundamental to our character as observers that the features we end up perceiving are ones that have a certain persistence—so that we can imagine that we, and objects around us, can just “move unchanged”, at least with respect to those aspects of the objects that we perceive.
And this is why, for example, we can discuss laws of mechanics without having to “drop down” to the level of the atoms of space. So what’s the analog of all this in metamathematical space? At the present stage of our physical universe, we seem to be able to experience physical space as having features like being basically three-dimensional. Metamathematical space probably doesn’t have such familiar mathematical characterizations. But it seems very likely (and we’ll see some evidence of this from empirical metamathematics below) that at the very least we’ll perceive metamathematical space as having a certain uniformity or homogeneity. In our Physics Project we imagine that we can think of physical space as beginning “at the Big Bang” with what amounts to some small collection of atoms of space, but then growing to the vast number of atoms in our current universe through the repeated application of particular rules. But with a small set of rules being applied a vast number of times, it seems almost inevitable that some kind of uniformity must result. But then the same kind of thing can be expected in metamathematics. In axiomatic mathematics one imagines the mathematical analog of the Big Bang: everything starts from a small collection of axioms, and then expands to a huge number of mathematical statements through repeated application of laws of inference. And from this picture (which gets a bit more elaborate when one considers emes and the full ruliad) one can expect that at least after it’s “developed for a while” metamathematical space, like physical space, will have a certain uniformity. The idea that physical space is somehow uniform is something we take very much for granted, not least because that’s our lifelong experience. But the analog of this idea for metamathematical space is something we don’t have immediate everyday intuition about—and that in fact may at first seem surprising or even bizarre. 
But actually what it implies is something that increasingly rings true from modern experience in pure mathematics. Because by saying that metamathematical space is in a sense uniform, we’re saying that different parts of it somehow seem similar—or in other words that there’s parallelism between what we see in different areas of mathematics, even if they’re not “nearby” in terms of entailments. But this is exactly what, for example, the success of category theory implies. Because it shows us that even in completely different areas of mathematics it makes sense to set up the same basic structures of objects, morphisms and so on. As such, though, category theory defines only the barest outlines of mathematical structure. But what our concept of perceived uniformity in metamathematical space suggests is that there should in fact be closer correspondences between different areas of mathematics. We can view this as another fundamental “physicalized law of mathematics”: that different areas of mathematics should ultimately have structures that are in some deep sense “perceived the same” by mathematical observers. For several centuries we’ve known there’s a certain correspondence between, for example, geometry and algebra. But it’s been a major achievement of recent mathematics to identify more and more such correspondences or “dualities”. Often the existence of these has seemed remarkable, and surprising. But what our view of metamathematics here suggests is that this is actually a general physicalized law of mathematics—and that in the end essentially all different areas of mathematics must share a deep structure, at least in some appropriate “bulk metamathematical limit” when enough statements are considered. But it’s one thing to say that two places in metamathematical space are “similar”; it’s another to say that “motion between them” is possible. Once again we can make an analogy with physical space. 
We’re used to the idea that we can move around in space, maintaining our identity and structure. But this in a sense requires that we can maintain some kind of continuity of existence on our path between two positions. In principle it could have been that we would have to be “atomized” at one end, then “reconstituted” at the other end. But our actual experience is that we perceive ourselves to continually exist all the way along the path. In a sense this is just an assumption about how things work that physical observers like us make; but what’s nontrivial is that the underlying structure of the ruliad implies that this will always be consistent. And so we expect it will be in metamathematics. Like a physical observer, the way a mathematical observer operates, it’ll be possible to “move” from one area of mathematics to another “at a high level”, without being “atomized” along the way. Or, in other words, that a mathematical observer will be able to make correspondences between different areas of mathematics without having to go down to the level of emes to do so. It’s worth realizing that as soon as there’s a way of representing mathematics in computational terms the concept of universal computation (and, more tightly, the Principle of Computational Equivalence) implies that at some level there must always be a way to translate between any two mathematical theories, or any two areas of mathematics. But the question is whether it’s possible to do this in “high-level mathematical terms” or only at the level of the underlying “computational substrate”. And what we’re saying is that there’s a general physicalized law of mathematics that implies that higher-level translation should be possible. Thinking about mathematics at a traditional axiomatic level can sometimes obscure this, however. 
For example, in axiomatic terms we usually think of Peano Arithmetic as not being as powerful as ZFC set theory (for example, it lacks transfinite induction)—and so nothing like "dual" to it. But Peano Arithmetic can perfectly well support universal computation, so inevitably a "formal emulator" for ZFC set theory can be built in it. But the issue is that to do this essentially requires going down to the "atomic" level and operating not in terms of mathematical constructs but instead directly in terms of "metamathematical" symbolic structure (and, for example, explicitly emulating things like equality predicates). The issue, it seems, is that if we think at the traditional axiomatic level, we're not dealing with a "mathematical observer like us". In the analogy we've used above, we're operating at the "molecular dynamics" level, not at the human-scale "fluid dynamics" level. And so we see all sorts of details and issues that ultimately won't be relevant in typical approaches to actually doing pure mathematics. It's somewhat ironic that our physicalized approach shows this by going below the axiomatic level—to the level of emes and the raw ruliad. But in a sense it's only at this level that there's the uniformity and coherence to conveniently construct a general picture that can encompass observers like us. Much as with ordinary matter we can say that "everything is made of atoms", we're now saying that everything is "made of computation" (and its structure and behavior is ultimately described by the ruliad). But the crucial idea that emerged from our Physics Project—and that is at the core of what I'm calling the multicomputational paradigm—is that when we ask what observers perceive there is a whole additional level of inexorable structure. And this is what makes it possible to do both human-scale physics and higher-level mathematics—and for there to be what amounts to "pure motion", whether in physical or metamathematical space.
There’s another way to think about this, that we alluded to earlier. A key feature of an observer is to have a coherent identity. In physics, that involves having a consistent thread of experience in time. In mathematics, it involves bringing together a consistent view of “what’s true” in the space of mathematical statements. In both cases the observer will in effect involve many separate underlying elements (ultimately, emes). But in order to maintain the observer’s view of having a coherent identity, the observer must somehow conflate all these elements, effectively treating them as “the same”. In physics, this means “coarse-graining” across physical or branchial (or, in fact, rulial) space. In mathematics, this means “coarse-graining” across metamathematical space—or in effect treating different mathematical statements as “the same”. In practice, there are several ways this happens. First of all, one tends to be more concerned about mathematical results than their proofs, so two statements that have the same form can be considered the same even if the proofs (or other processes) that generated them are different (and indeed this is something we have routinely done in constructing entailment cones here). But there’s more. One can also imagine that any statements that entail each other can be considered “the same”. In a simple case, this means that if and then one can always assume . But there’s a much more general version of this embodied in the univalence axiom of homotopy type theory—that in our terms can be interpreted as saying that mathematical observers consider equivalent things the same. There’s another way that mathematical observers conflate different statements—that’s in many ways more important, but less formal. As we mentioned above, when mathematicians talk, say, about the Pythagorean theorem, they typically think they have a definite concept in mind. 
But at the axiomatic level—and even more so at the level of emes—there are a huge number of different “metamathematical configurations” that are all “considered the same” by the typical working mathematician, or by our “mathematical observer”. (At the level of axioms, there might be different axiom systems for real numbers; at the level of emes there might be different ways of representing concepts like addition or equality.) In a sense we can think of mathematical observers as having a certain “extent” in metamathematical space. And much like human-scale physical observers see only the aggregate effects of huge numbers of atoms of space, so also mathematical observers see only the “aggregate effects” of huge numbers of emes of metamathematical space. But now the key question is whether a “whole mathematical observer” can “move in metamathematical space” as a single “rigid” entity, or whether it will inevitably be distorted—or shredded—by the structure of metamathematical space. In the next section we’ll discuss the analog of gravity—and curvature—in metamathematical space. But our physicalized approach tends to suggest that in “most” of metamathematical space, a typical mathematical observer will be able to “move around freely”, implying that there will indeed be paths or “bridges” between different areas of mathematics, that involve only higher-level mathematical constructs, and don’t require dropping down to the level of emes and the raw ruliad. 25 | Gravitational and Relativistic Effects in Metamathematics If metamathematical space is like physical space, does that mean that it has analogs of gravity, and relativity? The answer seems to be “yes”—and these provide our next examples of physicalized laws of mathematics. In the end, we’re going to be able to talk about at least gravity in a largely “static” way, referring mostly to the “instantaneous state of metamathematics”, captured as an entailment fabric. 
But in leveraging ideas from physics, it's important to start off formulating things in terms of the analog of time for metamathematics—which is entailment. As we've discussed above, the entailment cone is the direct analog of the light cone in physics. Starting with some mathematical statement (or, more accurately, some event that transforms it) the forward entailment cone contains all statements (or, more accurately, events) that follow from it. Any possible "instantaneous state of metamathematics" then corresponds to a "transverse slice" through this entailment cone—with the slice in effect being laid out in metamathematical space. An individual entailment of one statement by another corresponds to a path in the entailment cone, and this path (or, more accurately for accumulative evolution, subgraph) can be thought of as a proof of one statement given another. And in these terms the shortest proof can be thought of as a geodesic in the entailment cone. (In practical mathematics, it's very unlikely one will find—or care about—the strictly shortest proof. But even having a "fairly short proof" will be enough to give the general conclusions we'll discuss here.) Given a path in the entailment cone, we can imagine projecting it onto a transverse slice, i.e. onto an entailment fabric. Being able to consistently do this depends on having a certain uniformity in the entailment cone, and in the sequence of "metamathematical hypersurfaces" that are defined by whatever "metamathematical reference frame" we're using. But assuming, for example, that underlying computational irreducibility successfully generates a kind of "statistical uniformity" that cannot be "decoded" by the observer, we can expect to have meaningful paths—and geodesics—on entailment fabrics. But what these geodesics are like then depends on the emergent geometry of entailment fabrics. In physics, the limiting geometry of the analog of this for physical space is presumably a fairly simple 3D manifold.
For branchial space, it’s more complicated, probably for example being “exponential dimensional”. And for metamathematics, the limiting geometry is also undoubtedly more complicated—and almost certainly exponential dimensional. We’ve argued that we expect metamathematical space to have a certain perceived uniformity. But what will affect this, and therefore potentially modify the local geometry of the space? The basic answer is exactly the same as in our Physics Project. If there’s “more activity” somewhere in an entailment fabric, this will in effect lead to “more local connections”, and thus effective “positive local curvature” in the emergent geometry of the network. Needless to say, exactly what “more activity” means is somewhat subtle, especially given that the fabric in which one is looking for this is itself defining the ambient geometry, measures of “area”, etc. In our Physics Project we make things more precise by associating “activity” with energy density, and saying that energy effectively corresponds to the flux of causal edges through spacelike hypersurfaces. So this suggests that we think about an analog of energy in metamathematics: essentially defining it to be the density of update events in the entailment fabric. Or, put another way, energy in metamathematics depends on the “density of proofs” going through a region of metamathematical space, i.e. involving particular “nearby” mathematical statements. There are lots of caveats, subtleties and details. But the notion that “activity AKA energy” leads to increasing curvature in an emergent geometry is a general feature of the whole multicomputational paradigm that the ruliad captures. And in fact we expect a quantitative relationship between energy density (or, strictly, energy-momentum) and induced curvature of the “transversal space”—that corresponds exactly to Einstein’s equations in general relativity. 
It’ll be more difficult to see this in the metamathematical case because metamathematical space is geometrically more complicated—and less familiar—than physical space. But even at a qualitative level, it seems very helpful to think in terms of physics and spacetime analogies. The basic phenomenon is that geodesics are deflected by the presence of “energy”, in effect being “attracted to it”. And this is why we can think of regions of higher energy (or energy-momentum/mass)—in physics and in metamathematics—as “generating gravity”, and deflecting geodesics towards them. (Needless to say, in metamathematics, as in physics, the vast majority of overall activity is just devoted to knitting together the structure of space, and when gravity is produced, it’s from slightly increased activity in a particular region.) (In our Physics Project, a key result is that the same kind of dependence of “spatial” structure on energy happens not only in physical space, but also in branchial space—where there’s a direct analog of general relativity that basically yields the path integral of quantum mechanics.) What does this mean in metamathematics? Qualitatively, the implication is that “proofs will tend to go through where there’s a higher density of proofs”. Or, in an analogy, if you want to drive from one place to another, it’ll be more efficient if you can do at least part of your journey on a freeway. One question to ask about metamathematical space is whether one can always get from any place to any other. In other words, starting from one area of mathematics, can one somehow derive all others? A key issue here is whether the area one starts from is computation universal. Propositional logic is not, for example. 
So if one starts from it, one is essentially trapped, and cannot reach other areas of mathematics. But results in mathematical logic have established that most traditional areas of axiomatic mathematics are in fact computation universal (and the Principle of Computational Equivalence suggests that this will be ubiquitous). And given computation universality there will at least be some "proof path". (In a sense this is a reflection of the fact that the ruliad is unique, so everything is connected in "the same ruliad".) But a big question is whether the "proof path" is "big enough" to be appropriate for a "mathematical observer like us". Can we expect to get from one part of metamathematical space to another without the observer being "shredded"? Will we be able to start from any of a whole collection of places in metamathematical space that are considered "indistinguishably nearby" to a mathematical observer and have all of them "move together" to reach our destination? Or will different specific starting points follow quite different paths—preventing us from having a high-level ("fluid dynamics") description of what's going on, and instead forcing us to drop down to the "molecular dynamics" level? In practical pure mathematics, this tends to be an issue of whether there is an "elegant proof using high-level concepts", or whether one has to drop down to a very detailed level that's more like low-level computer code, or the output of an automated theorem proving system. And indeed there's a very visceral sense of "shredding" in cases where one's confronted with a proof that consists of page after page of "machine-like details". But there's another point here as well. If one looks at an individual proof path, it can be computationally irreducible to find out where the path goes, and the question of whether it ever reaches a particular destination can be undecidable.
But in most of the current practice of pure mathematics, one’s interested in “higher-level conclusions”, that are “visible” to a mathematical observer who doesn’t resolve individual proof paths. Later we’ll discuss the dichotomy between explorations of computational systems that routinely run into undecidability—and the typical experience of pure mathematics, where undecidability is rarely encountered in practice. But the basic point is that what a typical mathematical observer sees is at the “fluid dynamics level”, where the potentially circuitous path of some individual molecule is not relevant. Of course, by asking specific questions—about metamathematics, or, say, about very specific equations—it’s still perfectly possible to force tracing of individual “low-level” proof paths. But this isn’t what’s typical in current pure mathematical practice. And in a sense we can see this as an extension of our first physicalized law of mathematics: not only is higher-level mathematics possible, but it’s ubiquitously so, with the result that, at least in terms of the questions a mathematical observer would readily formulate, phenomena like undecidability are not generically seen. But even though undecidability may not be directly visible to a mathematical observer, its underlying presence is still crucial in coherently “knitting together” metamathematical space. Because without undecidability, we won’t have computation universality and computational irreducibility. But—just like in our Physics Project—computational irreducibility is crucial in producing the low-level apparent randomness that is needed to support any kind of “continuum limit” that allows us to think of large collections of what are ultimately discrete emes as building up some kind of coherent geometrical space. And when undecidability is not present, one will typically not end up with anything like this kind of coherent space. 
An extreme example occurs in rewrite systems that eventually terminate—in the sense that they reach a “fixed-point” (or “normal form”) state where no more transformations can be applied. In our Physics Project, this kind of termination can be interpreted as a spacelike singularity at which “time stops” (as at the center of a non-rotating black hole). But in general decidability is associated with “limits on how far paths can go”—just like the limits on causal paths associated with event horizons in physics. There are many details to work out, but the qualitative picture can be developed further. In physics, the singularity theorems imply that in essence the eventual formation of spacetime singularities is inevitable. And there should be a direct analog in our context that implies the eventual formation of “metamathematical singularities”. In qualitative terms, we can expect that the presence of proof density (which is the analog of energy) will “pull in” more proofs until eventually there are so many proofs that one has decidability and a “proof event horizon” is formed. In a sense this implies that the long-term future of mathematics is strangely similar to the long-term future of our physical universe. In our physical universe, we expect that while the expansion of space may continue, many parts of the universe will form black holes and essentially be “closed off”. (At least ignoring expansion in branchial space, and quantum effects in general.) The analog of this in mathematics is that while there can be continued overall expansion in metamathematical space, more and more parts of it will “burn out” because they’ve become decidable. In other words, as more work and more proofs get done in a particular area, that area will eventually be “finished”—and there will be no more “open-ended” questions associated with it. 
In physics there’s sometimes discussion of white holes, which are imagined to effectively be time-reversed black holes, spewing out all possible material that could be captured in a black hole. In metamathematics, a white hole is like a statement that is false and therefore “leads to an explosion”. The presence of such an object in metamathematical space will in effect cause observers to be shredded—making it inconsistent with the coherent construction of higher-level mathematics. We’ve talked at some length about the “gravitational” structure of metamathematical space. But what about seemingly simpler things like special relativity? In physics, there’s a notion of basic, flat spacetime, for which it’s easy to construct families of reference frames, and in which parallel trajectories stay parallel. In metamathematics, the analog is presumably metamathematical space in which “parallel proof geodesics” remain “parallel”—so that in effect one can continue “making progress in mathematics” by just “keeping on doing what you’ve been doing”. And somehow relativistic invariance is associated with the idea that there are many ways to do math, but in the end they’re all able to reach the same conclusions. Ultimately this is something one expects as a consequence of fundamental features of the ruliad—and the inevitability of causal invariance in it resulting from the Principle of Computational Equivalence. It’s also something that might seem quite familiar from practical mathematics and, say, from the ability to do derivations using different methods—like from either geometry or algebra—and yet still end up with the same So if there’s an analog of relativistic invariance, what about analogs of phenomena like time dilation? In our Physics Project time dilation has a rather direct interpretation. To “progress in time” takes a certain amount of computational work. 
But motion in effect also takes a certain amount of computational work—in essence to continually recreate versions of something in different places. But from the ruliad on up there is ultimately only a certain amount of computational work that can be done—and if computational work is being “used up” on motion, there is less available to devote to progress in time, and so time will effectively run more slowly, leading to the experience of time dilation. So what is the metamathematical analog of this? Presumably it’s that when you do derivations in math you can either stay in one area and directly make progress in that area, or you can “base yourself in some other area” and make progress only by continually translating back and forth. But ultimately that translation process will take computational work, and so will slow down your progress—leading to an analog of time dilation. In physics, the speed of light defines the maximum amount of motion in space that can occur in a certain amount of time. In metamathematics, the analog is that there’s a maximum “translation distance” in metamathematical space that can be “bridged” with a certain amount of derivation. In physics we’re used to measuring spatial distance in meters—and time in seconds. In metamathematics we don’t yet have familiar units in which to measure, say, distance between mathematical concepts—or, for that matter, “amount of derivation” being done. But with the empirical metamathematics we’ll discuss in the next section we actually have the beginnings of a way to define such things, and to use what’s been achieved in the history of human mathematics to at least imagine “empirically measuring” what we might call “maximum metamathematical speed”. It should be emphasized that we are only at the very beginning of exploring things like the analogs of relativity in metamathematics. One important piece of formal structure that we haven’t really discussed here is causal dependence, and causal graphs. 
We’ve talked at length about statements entailing other statements. But we haven’t talked about questions like which part of which statement is needed for some event to occur that will entail some other statement. And—while there’s no fundamental difficulty in doing it—we haven’t concerned ourselves with constructing causal graphs to represent causal relationships and causal dependencies between events. When it comes to physical observers, there is a very direct interpretation of causal graphs that relates to what a physical observer can experience. But for mathematical observers—where the notion of time is less central—it’s less clear just what the interpretation of causal graphs should be. But one certainly expects that they will enter in the construction of any general “observer theory” that characterizes “observers like us” across both physics and mathematics. 26 | Empirical Metamathematics We’ve discussed the overall structure of metamathematical space, and the general kind of sampling that we humans do of it (as “mathematical observers”) when we do mathematics. But what can we learn from the specifics of human mathematics, and the actual mathematical statements that humans have published over the centuries? We might imagine that these statements are just ones that—as “accidents of history”—humans have “happened to find interesting”. But there’s definitely more to it—and potentially what’s there is a rich source of “empirical data” relevant to our physicalized laws of mathematics, and to what amounts to their “experimental validation”. The situation with “human settlements” in metamathematical space is in a sense rather similar to the situation with human settlements in physical space. If we look at where humans have chosen to live and build cities, we’ll find a bunch of locations in 3D space. The details of where these are depend on history and many factors. 
But there’s a clear overarching theme, that’s in a sense a direct reflection of underlying physics: all the locations lie on the more-or-less spherical surface of the Earth. It’s not so straightforward to see what’s going on in the metamathematical case, not least because any notion of coordinatization seems to be much more complicated for metamathematical space than for physical space. But we can still begin by doing “empirical metamathematics” and asking questions about for example what amounts to where in metamathematical space we humans have so far established ourselves. And as a first example, let’s consider Boolean algebra. Even to talk about something called “Boolean algebra” we have to be operating at a level far above the raw ruliad—where we’ve already implicitly aggregated vast numbers of emes to form notions of, for example, variables and logical operations. But once we’re at this level we can “survey” metamathematical space just by enumerating possible symbolic statements that can be created using the operations we’ve set up for Boolean algebra (here And ∧, Or ∨ and Not): But so far these are just raw, structural statements. To connect with actual Boolean algebra we must pick out which of these can be derived from the axioms of Boolean algebra, or, put another way, which of them are in the entailment cone of these axioms: Of all possible statements, it’s only an exponentially small fraction that turn out to be derivable: But in the case of Boolean algebra, we can readily collect such statements: We’ve typically explored entailment cones by looking at slices consisting of collections of theorems generated after a specified number of proof steps. But here we’re making a very different sampling of the entailment cone—looking in effect instead at theorems in order of their structural complexity as symbolic expressions. In doing this kind of systematic enumeration we’re in a sense operating at a “finer level of granularity” than typical human mathematics. 
Yes, these are all “true theorems”. But mostly they’re not theorems that a human mathematician would ever write down, or specifically “consider interesting”. And for example only a small fraction of them have historically been given names—and are called out in typical logic textbooks: The reduction from all “structurally possible” theorems to just “ones we consider interesting” can be thought of as a form of coarse graining. And it could well be that this coarse graining would depend on all sorts of accidents of human mathematical history. But at least in the case of Boolean algebra there seems to be a surprisingly simple and “mechanical” procedure that can reproduce it. Go through all theorems in order of increasing structural complexity, in each case seeing whether a given theorem can be proved from ones earlier in the list: It turns out that the theorems identified by humans as “interesting” coincide almost exactly with “root theorems” that cannot be proved from earlier theorems in the list. Or, put another way, the “coarse graining” that human mathematicians do seems (at least in this case) to essentially consist of picking out only those theorems that represent “minimal statements” of new information—and eliding away those that involve “extra ornamentation”. But how are these “notable theorems” laid out in metamathematical space? Earlier we saw how the simplest of them can be reached after just a few steps in the entailment cone of a typical textbook axiom system for Boolean algebra. 
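The "root theorem" filtering just described can be approximated programmatically with a small countermodel search: a theorem counts as new information if some algebra (here just on a 2-element domain) satisfies all the earlier theorems in the list but violates it. Finding such a countermodel definitively shows the theorem is not entailed by the earlier ones; failing to find one at this domain size only suggests entailment. The encoding, the domain size and the example theorems below are all my illustrative choices, not the procedure actually used in the text:

```python
from itertools import product

DOM = (0, 1)  # search for countermodels over a 2-element domain

def ev(e, env, ops):
    """Evaluate an expression under variable bindings and operator tables."""
    if isinstance(e, str):
        return env[e]
    if e[0] == "not":
        return ops["not"][ev(e[1], env, ops)]
    return ops[e[0]][(ev(e[1], env, ops), ev(e[2], env, ops))]

def holds(eq, ops, vs=("a", "b")):
    """Does the equation hold for every assignment of the variables?"""
    lhs, rhs = eq
    return all(ev(lhs, dict(zip(vs, v)), ops) == ev(rhs, dict(zip(vs, v)), ops)
               for v in product(DOM, repeat=len(vs)))

def models():
    """Every interpretation of and/or/not as tables on the 2-element domain."""
    pairs = list(product(DOM, repeat=2))
    for and_t in product(DOM, repeat=4):
        for or_t in product(DOM, repeat=4):
            for not_t in product(DOM, repeat=2):
                yield {"and": dict(zip(pairs, and_t)),
                       "or": dict(zip(pairs, or_t)),
                       "not": dict(zip(DOM, not_t))}

def entailed(earlier, candidate):
    """False iff some small model satisfies all earlier theorems but not the
    candidate (a definite disproof of entailment); True only means that no
    countermodel was found at this domain size."""
    for ops in models():
        if all(holds(e, ops) for e in earlier) and not holds(candidate, ops):
            return False
    return True

idem_and = (("and", "a", "a"), "a")
idem_or = (("or", "a", "a"), "a")
# idem_or is a "root theorem" relative to idem_and alone: there is a model
# in which ∧ is idempotent but ∨ is not, so neither entails the other.
```

Walking a complexity-ordered theorem list and keeping only those for which `entailed(seen_so_far, t)` is `False` then gives a rough mechanical analog of the coarse graining attributed to human mathematicians.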
The full entailment cone rapidly gets unmanageably large but we can get a first approximation to it by generating individual proofs (using automated theorem proving) of our notable theorems, and then seeing how these “knit together” through shared intermediate lemmas in a token-event graph: Looking at this picture we see at least a hint that clumps of notable theorems are spread out across the entailment cone, only modestly building on each other—and in effect “staking out separated territories” in the entailment cone. But of the 11 notable theorems shown here, 7 depend on all 6 axioms, while 4 depend only on various different sets of 3 axioms—suggesting at least a certain amount of fundamental interdependence or coherence. From the token-event graph we can derive a branchial graph that represents a very rough approximation to how the theorems are “laid out in metamathematical space”: We can get a potentially slightly better approximation by including proofs not just of notable theorems, but of all theorems up to a certain structural complexity. The result shows separation of notable theorems both in the multiway graph and in the branchial graph: In doing this empirical metamathematics we’re including only specific proofs rather than enumerating the whole entailment cone. We’re also using only a specific axiom system. And even beyond this, we’re using specific operators to write our statements in Boolean algebra. In a sense each of these choices represents a particular “metamathematical coordinatization”—or particular reference frame or slice that we’re sampling in the ruliad. For example, in what we’ve done above we’ve built up statements from And, Or and Not. But we can just as well use any other functionally complete sets of operators, such as the following (here each shown representing a few specific Boolean expressions): For each set of operators, there are different axiom systems that can be used. And for each axiom system there will be different proofs. 
Here are a few examples of axiom systems with a few different sets of operators—in each case giving a proof of the law of double negation (which has to be stated differently for different operators): Boolean algebra (or, equivalently, propositional logic) is a somewhat desiccated and thin example of mathematics. So what do we find if we do empirical metamathematics on other areas? Let’s talk first about geometry—for which Euclid’s Elements provided the very first large-scale historical example of an axiomatic mathematical system. The Elements started from 10 axioms (5 “postulates” and 5 “common notions”), then gave 465 theorems. Each theorem was proved from previous ones, and ultimately from the axioms. Thus, for example, the “proof graph” (or “theorem dependency graph”) for Book 1, Proposition 5 (which says that angles at the base of an isosceles triangle are equal) is: One can think of this as a coarse-grained version of the proof graphs we’ve used before (which are themselves in turn “slices” of the entailment graph)—in which each node shows how a collection of “input” theorems (or axioms) entails a new theorem. Here’s a slightly more complicated example (Book 1, Proposition 48) that ultimately depends on all 10 of the original axioms: And here’s the full graph for all the theorems in Euclid’s Elements: Of the 465 theorems here, 255 (i.e. 55%) depend on all 10 axioms. (For the much smaller number of notable theorems of Boolean algebra above we found that 64% depended on all 6 of our stated axioms.) And the general connectedness of this graph in effect reflects the idea that Euclid’s theorems represent a coherent body of connected mathematical knowledge. The branchial graph gives us an idea of how the theorems are “laid out in metamathematical space”: One thing we notice is that theorems about different areas—shown here in different colors—tend to be separated in metamathematical space. 
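Statistics like "255 of the 465 theorems depend on all 10 axioms" come from a transitive closure over the theorem dependency graph, which is straightforward to compute. The miniature graph below is hypothetical (invented names, not Euclid's actual structure), purely to show the bookkeeping:

```python
# Hypothetical miniature "theorem dependency graph": each theorem lists its
# direct prerequisites; axioms have none.
deps = {
    "ax1": [], "ax2": [], "ax3": [],
    "thm1": ["ax1", "ax2"],
    "thm2": ["ax3", "thm1"],
    "thm3": ["thm1", "thm2"],
}

def axioms_used(node):
    """Set of axioms a theorem ultimately rests on, by following
    prerequisites down to the leaves of the graph."""
    if not deps[node]:          # no prerequisites: it's an axiom
        return {node}
    out = set()
    for d in deps[node]:
        out |= axioms_used(d)
    return out

# Which theorems depend on every axiom?
axioms = {n for n, d in deps.items() if not d}
theorems = [n for n, d in deps.items() if d]
full = [t for t in theorems if axioms_used(t) == axioms]
```

Running the same computation on the real Elements graph would yield the 55% figure quoted in the text.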
And in a sense the seeds of this separation are already evident if we look “textually” at how theorems in different books of Euclid’s Elements refer to each other: Looking at the overall dependence of one theorem on others in effect shows us a very coarse form of entailment. But can we go to a finer level—as we did above for Boolean algebra? As a first step, we have to have an explicit symbolic representation for our theorems. And beyond that, we have to have a formal axiom system that describes possible transformations between these. At the level of “whole theorem dependency” we can represent the entailment of Euclid’s Book 1, Proposition 1 from axioms as: But if we now use the full, formal axiom system for geometry that we discussed in a previous section we can use automated theorem proving to get a full proof of Book 1, Proposition 1: In a sense this is “going inside” the theorem dependency graph to look explicitly at how the dependencies in it work. And in doing this we see that what Euclid might have stated in words in a sentence or two is represented formally in terms of hundreds of detailed intermediate lemmas. (It’s also notable that whereas in Euclid’s version, the theorem depends only on 3 out of 10 axioms, in the formal version the theorem depends on 18 out of 20 axioms.) How about for other theorems? Here is the theorem dependency graph from Euclid’s Elements for the Pythagorean theorem (which Euclid gives as Book 1, Proposition 47): The theorem depends on all 10 axioms, and its stated proof goes through 28 intermediate theorems (i.e. about 6% of all theorems in the Elements). In principle we can “unroll” the proof dependency graph to see directly how the theorem can be “built up” just from copies of the original axioms. 
Doing a first step of unrolling we get: And “flattening everything out” so that we don’t use any intermediate lemmas but just go back to the axioms to “re-prove” everything, we can derive the theorem from a “proof tree” with the following number of copies of each axiom (and a certain “depth” to reach that axiom): So how about a more detailed and formal proof? We could certainly in principle construct this using the axiom system we discussed above. But an important general point is that the thing we in practice call “the Pythagorean theorem” can actually be set up in all sorts of different axiom systems. And as an example let’s consider setting it up in the main actual axiom system that working mathematicians typically imagine they’re (usually implicitly) using, namely ZFC set theory. Conveniently, the Metamath formalized math system has accumulated about 40,000 theorems across mathematics, all with hand-constructed proofs based ultimately on ZFC set theory. And within this system we can find the theorem dependency graph for the Pythagorean theorem: Altogether it involves 6970 intermediate theorems, or about 18% of all theorems in Metamath—including ones from many different areas of mathematics. But how does it ultimately depend on the axioms? First, we need to talk about what the axioms actually are. In addition to “pure ZFC set theory”, we need axioms for (predicate) logic, as well as ones that define real and complex numbers. And the way things are set up in Metamath’s “set.mm” there are (essentially) 49 basic axioms (9 for pure set theory, 15 for logic and 25 related to numbers). And much as in Euclid’s Elements we found that the Pythagorean theorem depended on all the axioms, so now here we find that the Pythagorean theorem depends on 48 of the 49 axioms—with the only missing axiom being the Axiom of Choice. Just as in the Euclid’s Elements case, we can imagine “unrolling” things to see how many copies of each axiom are used.
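Mechanically, such "unrolling" is just a recursion over the proof DAG: counting how many copies of each axiom appear when every lemma is re-proved from scratch, and at what minimum depth each axiom is reached. A sketch with a hypothetical toy graph (not the actual Euclid or Metamath data):

```python
from collections import Counter

# Hypothetical toy proof DAG: each node lists what it directly uses.
deps = {
    "ax1": [], "ax2": [],
    "lem1": ["ax1", "ax2"],
    "lem2": ["lem1", "ax2"],
    "thm":  ["lem1", "lem2"],
}

def axiom_copies(node):
    """Copies of each axiom in the fully unrolled proof tree (shared
    lemmas are re-counted once per use, hence the rapid blowup)."""
    if not deps[node]:
        return Counter([node])
    return sum((axiom_copies(d) for d in deps[node]), Counter())

def axiom_depths(node, depth=0, best=None):
    """Minimum depth at which each axiom is first reached."""
    best = {} if best is None else best
    if not deps[node]:
        best[node] = min(best.get(node, depth), depth)
    for d in deps[node]:
        axiom_depths(d, depth + 1, best)
    return best

print(axiom_copies("thm"))   # Counter({'ax2': 3, 'ax1': 2})
print(axiom_depths("thm"))   # {'ax1': 2, 'ax2': 2}
```

The re-counting of shared lemmas is what makes the axiom counts in a flattened proof tree so much larger than the number of distinct intermediate theorems.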
Here are the results—together with the “depth” to reach each axiom: And, yes, the numbers of copies of most of the axioms required to establish the Pythagorean theorem are extremely large. There are several additional wrinkles that we should discuss. First, we’ve so far only considered overall theorem dependency—or in effect “coarse-grained entailment”. But the Metamath system ultimately gives complete proofs in terms of explicit substitutions (or, effectively, bisubstitutions) on symbolic expressions. So, for example, while the first-level “whole-theorem-dependency” graph for the Pythagorean theorem is comparatively simple, the full first-level entailment structure based on the detailed proof is far more elaborate (with the black vertices indicating “internal structural elements” in the proof—such as variables and class specifications). Another important wrinkle has to do with the concept of definitions. The Pythagorean theorem, for example, refers to squaring numbers. But what is squaring? What are numbers? Ultimately all these things have to be defined in terms of the “raw data structures” we’re using. In the case of Boolean algebra, for example, we could set things up just using Nand (say denoted ∘), but then we could define And and Or in terms of Nand (say as (x ∘ y) ∘ (x ∘ y) and (x ∘ x) ∘ (y ∘ y) respectively). We could still write expressions using And and Or—but with our definitions we’d immediately be able to convert these to pure Nands. Axioms—say about Nand—give us transformations we can use repeatedly to make derivations. But definitions are transformations we use “just once” (like macro expansion in programming) to reduce things to the point where they involve only constructs that appear in the axioms. In Metamath’s “set.mm” there are about 1700 definitions that effectively build up from “pure set theory” (as well as logic, structural elements and various axioms about numbers) to give the mathematical constructs one needs.
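The macro-expansion view of definitions can be made concrete. The sketch below (in our own notation, using the standard Nand identities x ∧ y = (x ⊼ y) ⊼ (x ⊼ y) and x ∨ y = (x ⊼ x) ⊼ (y ⊼ y)) expands And and Or away in a single pass, leaving expressions built purely from Nand:

```python
# Sketch: expand And/Or "definitions" into pure Nand, macro-style.
# Standard identities: x AND y = (x NAND y) NAND (x NAND y),
#                      x OR  y = (x NAND x) NAND (y NAND y).

def expand(expr):
    """Recursively rewrite ('and', …) and ('or', …) into pure 'nand'."""
    if isinstance(expr, str):            # a variable
        return expr
    op, x, y = expr
    x, y = expand(x), expand(y)
    if op == "nand":
        return ("nand", x, y)
    if op == "and":
        n = ("nand", x, y)
        return ("nand", n, n)
    if op == "or":
        return ("nand", ("nand", x, x), ("nand", y, y))
    raise ValueError(f"unknown operator: {op}")

result = expand(("or", "a", ("and", "b", "c")))
print(result)
```

After this one-shot expansion, axioms stated for Nand can be applied repeatedly to the result; the definitions themselves never need to be invoked again, just as with macro expansion in programming.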
So, for example, here is the definition dependency graph for addition (“+” or Plus): At the bottom are the basic constructs of logic and set theory—in terms of which things like order relations, complex numbers and finally addition are defined. The definition dependency graph for GCD, for example, is somewhat larger, though has considerable overlap at lower levels: Different constructs have definition dependency graphs of different sizes—in effect reflecting their “definitional distance” from set theory and the underlying axioms being used: In our physicalized approach to metamathematics, though, something like set theory is not our ultimate foundation. Instead, we imagine that everything is eventually built up from the raw ruliad, and that all the constructs we’re considering are formed from what amount to configurations of emes in the ruliad. We discussed above how constructs like numbers and logic can be obtained from a combinator representation of the ruliad. We can view the definition dependency graph above as being an empirical example of how somewhat higher-level definitions can be built up. From a computer science perspective, we can think of it as being like a type hierarchy. From a physics perspective, it’s as if we’re starting from atoms, then building up to molecules and beyond. It’s worth pointing out, however, that even the top of the definition hierarchy in something like Metamath is still operating very much at an axiomatic kind of level. In the analogy we’ve been using, it’s still for the most part “formulating math at the molecular dynamics level” not at the more human “fluid dynamics” level. We’ve been talking about “the Pythagorean theorem”. But even on the basis of set theory there are many different possible formulations one can give. In Metamath, for example, there is the pythag version (which is what we’ve been using), and there is also a (somewhat more general) pythi version. So how are these related? 
Here’s their combined theorem dependency graph (or at least the first two levels in it)—with red indicating theorems used only in deriving pythag, blue indicating ones used only in deriving pythi, and purple indicating ones used in both: And what we see is there’s a certain amount of “lower-level overlap” between the derivations of these variants of the Pythagorean theorem, but also some discrepancy—indicating a certain separation between these variants in metamathematical space. So what about other theorems? Here’s a table of some famous theorems from all over mathematics, sorted by the total number of theorems on which proofs of them formulated in Metamath depend—giving also the number of axioms and definitions used in each case: The Pythagorean theorem (here the pythi formulation) occurs solidly in the second half. Some of the theorems with the fewest dependencies are in a sense very structural theorems. But it’s interesting to see that theorems from all sorts of different areas soon start appearing, and then are very much mixed together in the remainder of the list. One might have thought that theorems involving “more sophisticated concepts” (like Ramsey’s theorem) would appear later than “more elementary” ones (like the sum of angles of a triangle). But this doesn’t seem to be true. There’s a distribution of what amount to “proof sizes” (or, more strictly, theorem dependency sizes)—from the Schröder–Bernstein theorem which relies on less than 4% of all theorems, to Dirichlet’s theorem that relies on 25%: If we look not at “famous” theorems, but at all theorems covered by Metamath, the distribution becomes broader, with many short-to-prove “glue” or essentially “definitional” lemmas appearing: But using the list of famous theorems as an indication of the “math that mathematicians care about” we can conclude that there is a kind of “metamathematical floor” of results that one needs to reach before “things that we care about” start appearing. 
It’s a bit like the situation in our Physics Project—where the vast majority of microscopic events that happen in the universe seem to be devoted merely to knitting together the structure of space, and only “on top of that” can events which can be identified with things like particles and motion appear. And if we look at the “prerequisites” for different famous theorems, we indeed find that there is a large overlap (indicated by lighter colors)—supporting the impression that in a sense one first has to “knit together metamathematical space” and only then can one start generating “interesting theorems”: Another way to see “underlying overlap” is to look at what axioms different theorems ultimately depend on (the colors indicate the “depth” at which the axioms are reached): The theorems here are again sorted in order of “dependency size”. The “very-set-theoretic” ones at the top don’t depend on any of the various number-related axioms. And quite a few “integer-related theorems” don’t depend on complex number axioms. But otherwise, we see that (at least according to the proofs in set.mm) most of the “famous theorems” depend on almost all the axioms. The only axiom that’s rarely used is the Axiom of Choice—on which only things like “analysis-related theorems” such as the Fundamental Theorem of Calculus depend. If we look at the “depth of proof” at which axioms are reached, there’s a definite distribution: And this may be about as robust a “statistical characteristic” as any of the sampling of metamathematical space corresponding to mathematics that is “important to humans”. If we were, for example, to consider all possible theorems in the entailment cone, we’d get a very different picture. But potentially what we see here may be a characteristic signature of what’s important to a “mathematical observer like us”. Going beyond “famous theorems” we can ask, for example, about all the 42,000 or so identified theorems in the Metamath set.mm collection.
Here’s a rough rendering of their theorem dependency graph, with different colors indicating theorems in different fields of math (and with explicit edges removed): There’s some evidence of a certain overall uniformity, but we can see definite “patches of metamathematical space” dominated by different areas of mathematics. And here’s what happens if we zoom in on the central region, and show where famous theorems lie: A bit like we saw for the named theorems of Boolean algebra, clumps of famous theorems appear to somehow “stake out their own separate metamathematical territory”. But notably the famous theorems seem to show some tendency to congregate near “borders” between different areas of mathematics. To get more of a sense of the relation between these different areas, we can make what amounts to a highly coarsened branchial graph, effectively laying out whole areas of mathematics in metamathematical space, and indicating their cross-connections: We can see “highways” between certain areas. But there’s also a definite “background entanglement” between areas, reflecting at least a certain background uniformity in metamathematical space, as sampled with the theorems identified in Metamath. It’s not the case that all these areas of math “look the same”—and for example there are differences in their distributions of theorem dependency sizes: In areas like algebra and number theory, most proofs are fairly long, as revealed by the fact that they have many dependencies. But in set theory there are plenty of short proofs, and in logic all the proofs of theorems that have been included in Metamath are short. What if we look at the overall dependency graph for all theorems in Metamath? Here’s the adjacency matrix we get: The results are triangular because theorems in the Metamath database are arranged so that later ones only depend on earlier ones. And while there’s considerable patchiness visible, there still seems to be a certain overall background level of uniformity.
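The triangularity of that adjacency matrix is exactly the statement that each theorem cites only earlier ones. A toy illustration (hypothetical data):

```python
# Sketch: the dependency matrix of a theorem list is strictly lower
# triangular exactly when every theorem depends only on earlier entries.

def dependency_matrix(order, deps):
    idx = {t: i for i, t in enumerate(order)}
    n = len(order)
    m = [[0] * n for _ in range(n)]
    for t, ds in deps.items():
        for d in ds:
            m[idx[t]][idx[d]] = 1        # row = theorem, column = dependency
    return m

def strictly_lower_triangular(m):
    n = len(m)
    return all(m[i][j] == 0 for i in range(n) for j in range(i, n))

deps = {"t1": [], "t2": ["t1"], "t3": ["t1", "t2"]}
m = dependency_matrix(["t1", "t2", "t3"], deps)
print(strictly_lower_triangular(m))      # True: later cites only earlier
```

Conversely, any acyclic dependency graph can be put in this form by listing its theorems in a topological order.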
In doing this empirical metamathematics we’re sampling metamathematical space just through particular “human mathematical settlements” in it. But even from the distribution of these “settlements” we potentially begin to see evidence of a certain background uniformity in metamathematical space. Perhaps in time, as more connections between different areas of mathematics are found, human mathematics will gradually become more “uniformly settled” in metamathematical space—and closer to what we might expect from entailment cones and ultimately from the raw ruliad. But it’s interesting to see that even with fairly basic empirical metamathematics—operating on a current corpus of human mathematical knowledge—it may already be possible to see signs of some features of physicalized metamathematics. One day, no doubt, we’ll be able to do experiments in physics that take our “parsing” of the physical universe in terms of things like space and time and quantum mechanics—and reveal “slices” of the raw ruliad underneath. But perhaps something similar will also be possible in empirical metamathematics: to construct what amounts to a metamathematical microscope (or telescope) through which we can see aspects of the ruliad.

27 | Invented or Discovered? How Mathematics Relates to Humans

It’s an old and oft-asked question: is mathematics ultimately something that is invented, or something that is discovered? Or, put another way: is mathematics something arbitrarily set up by us humans, or something inevitable and fundamental and in a sense “preexisting”, that we merely get to explore? In the past it’s seemed as if these were two fundamentally incompatible possibilities. But the framework we’ve built here in a sense blends them both into a rather unexpected synthesis. The starting point is the idea that mathematics—like physics—is rooted in the ruliad, which is a representation of formal necessity.
Actual mathematics as we “experience” it is—like physics—based on the particular sampling we make of the ruliad. But then the crucial point is that very basic characteristics of us as “observers” are sufficient to constrain that experience to be our general mathematics—or our physics. At some level we can say that “mathematics is always there”—because every aspect of it is ultimately encoded in the ruliad. But in another sense we can say that the mathematics we have is all “up to us”—because it’s based on how we sample the ruliad. But the point is that that sampling is not somehow “arbitrary”: if we’re talking about mathematics for us humans then it’s us ultimately doing the sampling, and the sampling is inevitably constrained by general features of our nature. A major discovery from our Physics Project is that it doesn’t take much in the way of constraints on the observer to deeply constrain the laws of physics they will perceive. And similarly we posit here that for “observers like us” there will inevitably be general (“physicalized”) laws of mathematics, that make mathematics inevitably have the general kinds of characteristics we perceive it to have (such as the possibility of doing mathematics at a high level, without always having to drop down to an “atomic” level). Particularly over the past century there’s been the idea that mathematics can be specified in terms of axiom systems, and that these axiom systems can somehow be “invented at will”. But our framework does two things. First, it says that “far below” axiom systems is the raw ruliad, which in a sense represents all possible axiom systems. And second, it says that whatever axiom systems we perceive to be “operating” will be ones that we as observers can pick out from the underlying structure of the ruliad. 
At a formal level we can “invent” an arbitrary axiom system (and it’ll be somewhere in the ruliad), but only certain axiom systems will be ones that describe what we as “mathematical observers” can perceive. In a physics setting we might construct some formal physical theory that talks about detailed patterns in the atoms of space (or molecules in a gas), but the kind of “coarse-grained” observations that we can make won’t capture these. Put another way, observers like us can perceive certain kinds of things, and can describe things in terms of these perceptions. But with the wrong kind of theory—or “axioms”—these descriptions won’t be sufficient—and only an observer who’s “shredded” down to a more “atomic” level will be able to track what’s going on. There’s lots of different possible math—and physics—in the ruliad. But observers like us can only “access” a certain type. Some putative alien not like us might access a different type—and might end up with both a different math and a different physics. Deep underneath they—like us—would be talking about the ruliad. But they’d be taking different samples of it, and describing different aspects of it. For much of the history of mathematics there was a close alignment between the mathematics that was done and what we perceive in the world. For example, Euclidean geometry—with its whole axiomatic structure—was originally conceived just as an idealization of geometrical things that we observe about the world. But by the late 1800s the idea had emerged that one could create “disembodied” axiomatic systems with no particular grounding in our experience in the world. And, yes, there are many possible disembodied axiom systems that one can set up. And in doing ruliology and generally exploring the computational universe it’s interesting to investigate what they do. But the point is that this is something quite different from mathematics as mathematics is normally conceived. 
Because in a sense mathematics—like physics—is a “more human” activity that’s based on what “observers like us” make of the raw formal structure that is ultimately embodied in the ruliad. When it comes to physics there are, it seems, two crucial features of “observers like us”. First, that we’re computationally bounded. And second, that we have the perception that we’re persistent—and have a definite and continuous thread of experience. At the level of atoms of space, we’re in a sense constantly being “remade”. But we nevertheless perceive it as always being the “same us”. This single seemingly simple assumption has far-reaching consequences. For example, it leads us to experience a single thread of time. And from the notion that we maintain a continuity of experience from each successive moment to the next we are inexorably led to the idea of a perceived continuum—not only in time, but also for motion and in space. And when combined with intrinsic features of the ruliad and of multicomputation in general, what comes out in the end is a surprisingly precise description of how we’ll perceive our universe to operate—that seems to correspond exactly with known core laws of physics. What does that kind of thinking tell us about mathematics? The basic point is that—since in the end both relate to humans—there is necessarily a close correspondence between physical and mathematical observers. Both are computationally bounded. And the assumption of persistence in time for physical observers becomes for mathematical observers the concept of maintaining coherence as more statements are accumulated. And when combined with intrinsic features of the ruliad and multicomputation this then turns out to imply the kind of physicalized laws of mathematics that we’ve described. In a formal axiomatic view of mathematics one just imagines that one invents axioms and sees their consequences.
But what we’re describing here is a view of mathematics that is ultimately just about the ways that we as mathematical observers sample and experience the ruliad. And if we use axiom systems it has to be as a kind of “intermediate language” that helps us make a slightly higher-level description of some corner of the raw ruliad. But actual “human-level” mathematics—like human-level physics—operates at a higher level. Our everyday experience of the physical world gives us the impression that we have a kind of “direct access” to many foundational features of physics, like the existence of space and the phenomenon of motion. But our Physics Project implies that these are not concepts that are in any sense “already there”; they are just things that emerge from the raw ruliad when you “parse” it in the kinds of ways observers like us do. In mathematics it’s less obvious (at least to all but perhaps experienced pure mathematicians) that there’s “direct access” to anything. But in our view of mathematics here, it’s ultimately just like physics—and ultimately also rooted in the ruliad, but sampled not by physical observers but by mathematical ones. So from this point of view there’s just as much that’s “real” underneath mathematics as there is underneath physics. The mathematics is sampled slightly differently (though very similarly)—but we should not in any sense consider it “fundamentally more abstract”. When we think of ourselves as entities within the ruliad, we can build up what we might consider a “fully abstract” description of how we get our “experience” of physics. And we can basically do the same thing for mathematics. So if we take the commonsense point of view that physics fundamentally exists “for real”, we’re forced into the same point of view for mathematics. In other words, if we say that the physical universe exists, so must we also say that in some fundamental sense, mathematics also exists. 
It’s not something we as humans “just make”, but it is something that is made through our particular way of observing the ruliad, which is ultimately defined by our particular characteristics as observers, with our particular core assumptions about the world, our particular kinds of sensory experience, and so on. So what can we say in the end about whether mathematics is “invented” or “discovered”? It is neither. Its underpinnings are the ruliad, whose structure is a matter of formal necessity. But its perceived form for us is determined by our intrinsic characteristics as observers. We neither get to “arbitrarily invent” what’s underneath, nor do we get to “arbitrarily discover” what’s already there. The mathematics we see is the result of a combination of formal necessity in the underlying ruliad, and the particular forms of perception that we—as entities like us—have. Putative aliens could have quite different mathematics, not because the underlying ruliad is any different for them, but because their forms of perception might be different. And it’s the same with physics: even though they “live in the same physical universe”, their perception of the laws of physics could be quite different.

28 | What Axioms Can There Be for Human Mathematics?

When they were first developed in antiquity the axioms of Euclidean geometry were presumably intended basically as a kind of “tightening” of our everyday impressions of geometry—that would aid in being able to deduce what was true in geometry. But by the mid-1800s—between non-Euclidean geometry, group theory, Boolean algebra and quaternions—it had become clear that there was a range of abstract axiom systems one could in principle consider. And by the time of Hilbert’s program around 1900 the pure process of deduction was in effect being viewed as an end in itself—and indeed the core of mathematics—with axiom systems being seen as “starter material” pretty much just “determined by convention”.
In practice even today very few different axiom systems are ever commonly used—and indeed in A New Kind of Science I was able to list essentially all of them comfortably on a couple of pages. But why these axiom systems and not others? Despite the idea that axiom systems could ultimately be arbitrary, the concept was still that in studying some particular area of mathematics one should basically have an axiom system that would provide a “tight specification” of whatever mathematical object or structure one was trying to talk about. And so, for example, the Peano axioms are what became used for talking about arithmetic-style operations on integers. In 1931, however, Gödel’s theorem showed that actually these axioms weren’t strong enough to constrain one to be talking only about integers: there were also other possible models of the axiom system, involving all sorts of exotic “non-standard arithmetic”. (And moreover, there was no finite way to “patch” this issue.) In other words, even though the Peano axioms had been invented—like Euclid’s axioms for geometry—as a way to describe a definite “intuitive” mathematical thing (in this case, integers), their formal axiomatic structure “had a life of its own” that extended (in some sense, infinitely) beyond its original intended purpose. Both geometry and arithmetic in a sense had foundations in everyday experience. But for set theory dealing with infinite sets there was never an obvious intuitive base rooted in everyday experience. Some extrapolations from finite sets were clear. But in covering infinite sets various axioms (like the Axiom of Choice) were gradually added to capture what seemed like “reasonable” mathematical statements. But one example whose status for a long time wasn’t clear was the Continuum Hypothesis—which asserts that the “next distinct possible cardinality” after ℵ₀ (the cardinality of the integers) is 2^ℵ₀: the cardinality of the real numbers (i.e. of “the continuum”).
Was this something that followed from previously accepted axioms of set theory? And if it was added, would it even be consistent with them? In the early 1960s it was established that actually the Continuum Hypothesis is independent of the other axioms. With the axiomatic view of the foundations of mathematics that’s been popular for the past century or so it seems as if one could, for example, just choose at will whether to include the Continuum Hypothesis (or its negation) as an axiom in set theory. But with the approach to the foundations of mathematics that we’ve developed here, this is no longer so clear. Recall that in our approach, everything is ultimately rooted in the ruliad—with whatever mathematics observers like us “experience” just being the result of the particular sampling we do of the ruliad. And in this picture, axiom systems are a particular representation of fairly low-level features of the sampling we do of the raw ruliad. If we could do any kind of sampling we want of the ruliad, then we’d presumably be able to get all possible axiom systems—as intermediate-level “waypoints” representing different kinds of slices of the ruliad. But in fact by our nature we are observers capable of only certain kinds of sampling of the ruliad. We could imagine “alien observers” not like us who could for example make whatever choice they want about the Continuum Hypothesis. But given our general characteristics as observers, we may be forced into a particular choice. Operationally, as we’ve discussed above, the wrong choice could, for example, be incompatible with an observer who “maintains coherence” in metamathematical space. Let’s say we have a particular axiom stated in standard symbolic form. “Underneath” this axiom there will typically be at the level of the raw ruliad a huge cloud of possible configurations of emes that can represent the axiom. 
But an “observer like us” can only deal with a coarse-grained version in which all these different configurations are somehow considered equivalent. And if the entailments from “nearby configurations” remain nearby, then everything will work out, and the observer can maintain a coherent view of what’s going on, for example just in terms of symbolic statements about axioms. But if instead different entailments of raw configurations of emes lead to very different places, the observer will in effect be “shredded”—and instead of having definite coherent “single-minded” things to say about what happens, they’ll have to separate everything into all the different cases for different configurations of emes. Or, as we’ve put it before, the observer will inevitably end up getting “shredded”—and not be able to come up with definite mathematical conclusions. So what specifically can we say about the Continuum Hypothesis? It’s not clear. But conceivably we can start by thinking of ℵ₀ as characterizing the “base cardinality” of the ruliad, while 2^ℵ₀ characterizes the base cardinality of a first-level hyperruliad that could for example be based on Turing machines with oracles for their halting problems. And it could be that for us to conclude that the Continuum Hypothesis is false, we’d have to somehow be straddling the ruliad and the hyperruliad, which would be inconsistent with us maintaining a coherent view of mathematics. In other words, the Continuum Hypothesis might somehow be equivalent to what we’ve argued before is in a sense the most fundamental “contingent fact”—that, just as we live in a particular location in physical space, so also we live in the ruliad and not the hyperruliad. We might have thought that whatever we might see—or construct—in mathematics would in effect be “entirely abstract” and independent of anything about physics, or our experience in the physical world.
But particularly insofar as we’re thinking about mathematics as done by humans, we’re dealing with “mathematical observers” that are “made of the same stuff” as physical observers. And this means that whatever general constraints or features exist for physical observers we can expect these to carry over to mathematical observers—so it’s no coincidence that both physical and mathematical observers have the same core characteristics, of computational boundedness and “assumption of coherence”. And what this means is that there’ll be a fundamental correlation between things familiar from our experience in the physical world and what shows up in our mathematics. We might have thought that the fact that Euclid’s original axioms were based on our human perceptions of physical space would be a sign that in some “overall picture” of mathematics they should be considered arbitrary and not in any way central. But the point is that in fact our notions of space are central to our characteristics as observers. And so it’s inevitable that “physical-experience-informed” axioms like those for Euclidean geometry will be what appear in mathematics for “observers like us”.

29 | Counting the Emes of Mathematics and Physics

How does the “size of mathematics” compare to the size of our physical universe? In the past this might have seemed like an absurd question that tries to compare something abstract and arbitrary with something real and physical. But with the idea that both mathematics and physics as we experience them emerge from our sampling of the ruliad, it begins to seem less absurd. At the lowest level the ruliad can be thought of as being made up of atoms of existence that we call emes. As physical observers we interpret these emes as atoms of space, or in effect the ultimate raw material of the physical universe. And as mathematical observers we interpret them as the ultimate elements from which the constructs of mathematics are built.
As the entangled limit of all possible computations, the whole ruliad is infinite. But we as physical or mathematical observers sample only limited parts of it. And that means we can meaningfully ask questions like how the number of emes in these parts compares—or, in effect, how big physics as we experience it is compared to mathematics. In some ways an eme is like a bit. But the concept of emes is that they’re “actual atoms of existence”—from which “actual stuff” like the physical universe and its history are made—rather than just “static informational representations” of it. As soon as we imagine that everything is ultimately computational we are immediately led to start thinking of representing it in terms of bits. But the ruliad is not just a representation. It’s in some way something lower level. It’s the “actual stuff” that everything is made of. And what defines our particular experience of physics or of mathematics is the particular samples we as observers take of what’s in the ruliad. So the question is now how many emes there are in those samples. Or, more specifically, how many emes “matter to us” in building up our experience. Let’s return to an analogy we’ve used several times before: a gas made of molecules. In the volume of a room there might be individual molecules, each on average colliding every seconds. So that means that our “experience of the room” over the course of a minute or so might sample collisions. Or, in terms closer to our Physics Project, we might say that there are perhaps “collision events” in the causal graph that defines what we experience. But these “collision events” aren’t something fundamental; they have what amounts to “internal structure” with many associated parameters about location, time, molecular configuration, etc.
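The gas analogy is just order-of-magnitude arithmetic, and can be sketched directly. Every input figure below (room volume, number density, collision rate) is an assumed textbook-style placeholder rather than a value taken from the text:

```python
import math

# Order-of-magnitude count of molecular "collision events" sampled while
# experiencing a room for a minute. All input figures are rough assumed
# values of the usual textbook sort, not values from the text.
n_density = 2.5e25       # molecules per m^3 of air near room conditions (assumed)
room_volume = 30.0       # m^3, a modest room (assumed)
collision_rate = 1e10    # collisions per molecule per second (assumed order)
duration = 60.0          # seconds: "a minute or so"

n_molecules = n_density * room_volume
collision_events = n_molecules * collision_rate * duration

print(f"molecules in the room: ~10^{round(math.log10(n_molecules))}")      # ~10^27
print(f"collision events:      ~10^{round(math.log10(collision_events))}")  # ~10^39
```

The point is only the shape of the estimate: a count of entities, times an event rate, times a duration.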
Our Physics Project, however, suggests that—far below for example our usual notions of space and time—we can in fact have a truly fundamental definition of what’s happening in the universe, ultimately in terms of emes. We don’t yet know the “physical scale” for this—and in the end we presumably need experiments to determine that. But rather rickety estimates based on a variety of assumptions suggest that the elementary length might be around meters, with the elementary time being around seconds. And with these estimates we might conclude that our “experience of a room for a minute” would involve sampling perhaps update events, that create about this number of atoms of space. But it’s immediately clear that this is in a sense a gross underestimate of the total number of emes that we’re sampling. And the reason is that we’re not accounting for quantum mechanics, and for the multiway nature of the evolution of the universe. We’ve so far only considered one “thread of time” at one “position in branchial space”. But in fact there are many threads of time, constantly branching and merging. So how many of these do we experience? In effect that depends on our size in branchial space. In physical space “human scale” is of order a meter—or perhaps elementary lengths. But how big is it in branchial space? The fact that we’re so large compared to the elementary length is the reason that we consistently experience space as something continuous. And the analog in branchial space is that if we’re big compared to the “elementary branchial distance between branches” then we won’t experience the different individual histories of these branches, but only an aggregate “objective reality” in which we conflate together what happens on all the branches. Or, put another way, being large in branchial space is what makes us experience classical physics rather than quantum mechanics. Our estimates for branchial space are even more rickety than for physical space. 
But conceivably there are on the order of “instantaneous parallel threads of time” in the universe, and encompassed by our instantaneous experience—implying that in our minute-long experience we might sample a total of on the order of close to emes. But even this is a vast underestimate. Yes, it tries to account for our extent in physical space and in branchial space. But then there’s also rulial space—which in effect is what “fills out” the whole ruliad. So how big are we in that space? In essence that’s like asking how many different possible sequences of rules there are that are consistent with our experience. The total conceivable number of sequences associated with emes is roughly the number of possible hypergraphs with nodes—or around . But the actual number consistent with our experience is smaller, in particular as reflected by the fact that we attribute specific laws to our universe. But when we say “specific laws” we have to recognize that there is a finiteness to our efforts at inductive inference which inevitably makes these laws at least somewhat uncertain to us. And in a sense that uncertainty is what represents our “extent in rulial space”. But if we want to count the emes that we “absorb” as physical observers, it’s still going to be a huge number. Perhaps the base may be lower—say —but there’s still a vast exponent, suggesting that if we include our extent in rulial space, we as physical observers may experience numbers of emes like . But let’s say we go beyond our “everyday human-scale experience”. For example, let’s ask about “experiencing” our whole universe. In physical space, the volume of our current universe is about times larger than “human scale” (while human scale is perhaps times larger than the “scale of the atoms of space”). In branchial space, conceivably our current universe is times larger than “human scale”. But these differences absolutely pale in comparison to the sizes associated with rulial space. 
We might try to go beyond “ordinary human experience” and for example measure things using tools from science and technology. And, yes, we could then think about “experiencing” lengths down to meters, or something close to “single threads” of quantum histories. But in the end, it’s still the rulial size that dominates, and that’s where we can expect most of the vast number of emes that form our experience of the physical universe to come from. OK, so what about mathematics? When we think about what we might call human-scale mathematics, and talk about things like the Pythagorean theorem, how many emes are there “underneath”? “Compiling” our theorem down to typical traditional mathematical axioms, we’ve seen that we’ll routinely end up with expressions containing, say, symbolic elements. But what happens if we go “below that”, compiling these symbolic elements—which might include things like variables and operators—into “pure computational elements” that we can think of as emes? We’ve seen a few examples, say with combinators, that suggest that for the traditional axiomatic structures of mathematics, we might need another factor of maybe roughly . These are incredibly rough estimates, but perhaps there’s a hint that there’s “further to go” to get from human-scale for a physical observer down to atoms of space that correspond to emes, than there is to get from human-scale for a mathematical observer down to emes. Just like in physics, however, this kind of “static drill-down” isn’t the whole story for mathematics. When we talk about something like the Pythagorean theorem, we’re really referring to a whole cloud of “human-equivalent” points in metamathematical space. The total number of “possible points” is basically the size of the entailment cone that contains something like the Pythagorean theorem. The “height” of the entailment cone is related to typical lengths of proofs—which for current human mathematics might be perhaps hundreds of steps.
And this would lead to overall sizes of entailment cones of very roughly theorems. But within this “how big” is the cloud of variants corresponding to particular “human-recognized” theorems? Empirical metamathematics could provide additional data on this question. But if we very roughly imagine that half of every proof is “flexible”, we’d end up with things like variants. So if we asked how many emes correspond to the “experience” of the Pythagorean theorem, it might be, say, . To give an analogy of “everyday physical experience” we might consider a mathematician thinking about mathematical concepts, and maybe in effect pondering a few tens of theorems per minute—implying according to our extremely rough and speculative estimates that while typical “specific human-scale physics experience” might involve emes, specific human-scale mathematics experience might involve emes (a number comparable, for example, to the number of physical atoms in our universe). What if instead of considering “everyday mathematical experience” we consider all humanly explored mathematics? On the scales we’re describing, the factors are not large. In the history of human mathematics, only a few million theorems have been published. If we think about all the computations that have been done in the service of mathematics, it’s a somewhat larger factor. I suspect Mathematica is the dominant contributor here—and we can estimate that the total number of Wolfram Language operations corresponding to “human-level mathematics” done so far is perhaps . But just like for physics, all these numbers pale in comparison with those introduced by rulial sizes. We’ve talked essentially about a particular path from emes through specific axioms to theorems. But the ruliad in effect contains all possible axiom systems. 
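The cone-size and variant-cloud estimates are simple exponentials. A sketch, in which the branching factor b and proof depth d are purely illustrative placeholders:

```python
import math

# Rough combinatorics of an entailment cone. Both parameters are assumed
# for illustration: b is an average number of distinct entailments per
# step, d a typical proof length ("hundreds of steps").
b = 10
d = 200

cone_size = b ** d               # theorems reachable within depth d
flexible_steps = d // 2          # "half of every proof is flexible"
variants = b ** flexible_steps   # cloud of human-equivalent variants

print(f"entailment cone size: ~10^{round(d * math.log10(b))}")
print(f"variant cloud size:   ~10^{round(flexible_steps * math.log10(b))}")
```

With these placeholder numbers the variant cloud is the square root of the cone, which is all the "half of every proof is flexible" assumption amounts to.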
And if we start thinking about enumerating these—and effectively “populating all of rulial space”—we’ll end up with exponentially more. But as with the perceived laws of physics, in mathematics as done by humans it’s actually just a narrow slice of rulial space that we’re sampling. It’s like a generalization of the idea that something like arithmetic as we imagine it can be derived from a whole cloud of possible axiom systems. It’s not just one axiom system; but it’s also not all possible axiom systems. One can imagine doing some combination of ruliology and empirical metamathematics to get an estimate of “how broad” human-equivalent axiom systems (and their construction from emes) might be. But the answer seems likely to be much smaller than the kinds of sizes we have been estimating for physics. It’s important to emphasize that what we’ve discussed here is extremely rough—and speculative. And indeed I view its main value as being to provide an example of how to imagine thinking through things in the context of the ruliad and the framework around it. But on the basis of what we’ve discussed, we might make the very tentative conclusion that “human-experienced physics” is bigger than “human-experienced mathematics”. Both involve vast numbers of emes. But physics seems to involve a lot more. In a sense—even with all its abstraction—the suspicion is that there’s “less ultimately in mathematics” as far as we’re concerned than there is in physics. Though by any ordinary human standards, mathematics still involves absolutely vast numbers of emes. 30 | Some Historical (and Philosophical) Background The human activity that we now call “mathematics” can presumably trace its origins into prehistory. What might have started as “a single goat”, “a pair of goats”, etc. became a story of abstract numbers that could be indicated purely by things like tally marks.
In Babylonian times the practicalities of a city-based society led to all sorts of calculations involving arithmetic and geometry—and basically everything we now call “mathematics” can ultimately be thought of as a generalization of these ideas. The tradition of philosophy that emerged in Greek times saw mathematics as a kind of reasoning. But while much of arithmetic (apart from issues of infinity and infinitesimals) could be thought of in explicit calculational ways, precise geometry immediately required an idealization—specifically the concept of a point having no extent, or equivalently, the continuity of space. And in an effort to reason on top of this idealization, there emerged the idea of defining axioms and making abstract deductions from them. But what kind of a thing actually was mathematics? Plato talked about things we sense in the external world, and things we conceptualize in our internal thoughts. But he considered mathematics to be at its core an example of a third kind of thing: something from an abstract world of ideal forms. And with our current thinking, there is an immediate resonance between this concept of ideal forms and the concept of the ruliad. But for most of the past two millennia of the actual development of mathematics, questions about what it ultimately was lay in the background. An important step was taken in the late 1600s when Newton and others “mathematicized” mechanics, at first presenting what they did in the form of axioms similar to Euclid’s. Through the 1700s mathematics as a practical field was viewed as some kind of precise idealization of features of the world—though with an increasingly elaborate tower of formal derivations constructed in it. Philosophy, meanwhile, typically viewed mathematics—like logic—mostly as an example of a system in which there was a formal process of derivation with a “necessary” structure not requiring reference to the real world. 
But in the first half of the 1800s there arose several examples of systems where axioms—while inspired by features of the world—ultimately seemed to be “just invented” (e.g. group theory, curved space, quaternions, Boolean algebra, …). A push towards increasing rigor (especially for calculus and the nature of real numbers) led to more focus on axiomatization and formalization—which was still further emphasized by the appearance of a few non-constructive “purely formal” proofs. But if mathematics was to be formalized, what should its underlying primitives be? One obvious choice seemed to be logic, which had originally been developed by Aristotle as a kind of catalog of human arguments, but two thousand years later felt basic and inevitable. And so it was that Frege, followed by Whitehead and Russell, tried to start “constructing mathematics” from “pure logic” (along with set theory). Logic was in a sense a rather low-level “machine code”, and it took hundreds of pages of unreadable (if impressive-looking) “code” for Whitehead and Russell, in their 1910 Principia Mathematica, to get to 1 + 1 = 2. Meanwhile, starting around 1900, Hilbert took a slightly different path, essentially representing everything with what we would now call symbolic expressions, and setting up axioms as relations between these. But what axioms should be used? Hilbert seemed to feel that the core of mathematics lay not in any “external meaning” but in the pure formal structure built up from whatever axioms were used. And he imagined that somehow all the truths of mathematics could be “mechanically derived” from axioms, a bit, as he said in a certain resonance with our current views, like the “great calculating machine, Nature” does it for physics. Not all mathematicians, however, bought into this “formalist” view of what mathematics is. 
And in 1931 Gödel managed to prove from inside the formal axiom system traditionally used for arithmetic that this system had a fundamental incompleteness that prevented it from ever having anything to say about certain mathematical statements. But Gödel seems to have maintained a more Platonic belief about mathematics: that even though the axiomatic method falls short, the truths of mathematics are in some sense still “all there”, and it’s potentially possible for the human mind to have “direct access” to them. And while this is not quite the same as our picture of the mathematical observer accessing the ruliad, there’s again some definite resonance here. But, OK, so how has mathematics actually conducted itself over the past century? Typically there’s at least lip service paid to the idea that there are “axioms underneath”—usually assumed to be those from set theory. There’s been significant emphasis placed on the idea of formal deduction and proof—but not so much in terms of formally building up from axioms as in terms of giving narrative expositions that help humans understand why some theorem might follow from other things they know. There’s been a field of “mathematical logic” concerned with using mathematics-like methods to explore mathematics-like aspects of formal axiomatic systems. But (at least until very recently) there’s been rather little interaction between this and the “mainstream” study of mathematics. And for example phenomena like undecidability that are central to mathematical logic have seemed rather remote from typical pure mathematics—even though many actual long-unsolved problems in mathematics do seem likely to run into it. But even if formal axiomatization may have been something of a sideshow for mathematics, its ideas have brought us what is without much doubt the single most important intellectual breakthrough of the twentieth century: the abstract concept of computation. 
And what’s now become clear is that computation is in some fundamental sense much more general than mathematics. At a philosophical level one can view the ruliad as containing all computation. But mathematics (at least as it’s done by humans) is defined by what a “mathematical observer like us” samples and perceives in the ruliad. The most common “core workflow” for mathematicians doing pure mathematics is first to imagine what might be true (usually through a process of intuition that feels a bit like making “direct access to the truths of mathematics”)—and then to “work backwards” to try to construct a proof. As a practical matter, though, the vast majority of “mathematics done in the world” doesn’t follow this workflow, and instead just “runs forward”—doing computation. And there’s no reason for at least the innards of that computation to have any “humanized character” to it; it can just involve the raw processes of computation. But the traditional pure mathematics workflow in effect depends on using “human-level” steps. Or if, as we described earlier, we think of low-level axiomatic operations as being like molecular dynamics, then it involves operating at a “fluid dynamics” level. A century ago efforts to “globally understand mathematics” centered on trying to find common axiomatic foundations for everything. But as different areas of mathematics were explored (and particularly ones like algebraic topology that cut across existing disciplines) it began to seem as if there might also be “top-down” commonalities in mathematics, in effect directly at the “fluid dynamics” level. And within the last few decades, it’s become increasingly common to use ideas from category theory as a general framework for thinking about mathematics at a high level. But there’s also been an effort to progressively build up—as an abstract matter—formal “higher category theory”.
A notable feature of this has been the appearance of connections to both geometry and mathematical logic—and for us a connection to the ruliad and its features. The success of category theory has led in the past decade or so to interest in other high-level structural approaches to mathematics. A notable example is homotopy type theory. The basic concept is to characterize mathematical objects not by using axioms to describe properties they should have, but instead to use “types” to say “what the objects are” (for example, “mapping from reals to integers”). Such type theory has the feature that it tends to look much more “immediately computational” than traditional mathematical structures and notation—as well as making explicit proofs and other metamathematical concepts. And in fact questions about types and their equivalences wind up being very much like the questions we’ve discussed for the multiway systems we’re using as metamodels for mathematics. Homotopy type theory can itself be set up as a formal axiomatic system—but with axioms that include what amount to metamathematical statements. A key example is the univalence axiom which essentially states that things that are equivalent can be treated as the same. And now from our point of view here we can see this being essentially a statement of metamathematical coarse graining—and a piece of defining what should be considered “mathematics” on the basis of properties assumed for a mathematical observer. When Plato introduced ideal forms and their distinction from the external and internal world the understanding of even the fundamental concept of computation—let alone multicomputation and the ruliad—was still more than two millennia in the future. But now our picture is that everything can in a sense be viewed as part of the world of ideal forms that is the ruliad—and that not only mathematics but also physical reality are in effect just manifestations of these ideal forms. 
But a crucial aspect is how we sample the “ideal forms” of the ruliad. And this is where the “contingent facts” about us as human “observers” enter. The formal axiomatic view of mathematics can be viewed as providing one kind of low-level description of the ruliad. But the point is that this description isn’t aligned with what observers like us perceive—or with what we will successfully be able to view as human-level mathematics. A century ago there was a movement to take mathematics (as well, as it happens, as other fields) beyond its origins in what amount to human perceptions of the world. But what we now see is that while there is an underlying “world of ideal forms” embodied in the ruliad that has nothing to do with us humans, mathematics as we humans do it must be associated with the particular sampling we make of that underlying structure. And it’s not as if we get to pick that sampling “at will”; the sampling we do is the result of fundamental features of us as humans. And an important point is that those fundamental features determine our characteristics both as mathematical observers and as physical observers. And this fact leads to a deep connection between our experience of physics and our definition of mathematics. Mathematics historically began as a formal idealization of our human perception of the physical world. Along the way, though, it began to think of itself as a more purely abstract pursuit, separated from both human perception and the physical world. But now, with the general idea of computation, and more specifically with the concept of the ruliad, we can in a sense see what the limit of such abstraction would be. And interesting though it is, what we’re now discovering is that it’s not the thing we call mathematics. And instead, what we call mathematics is something that is subtly but deeply determined by general features of human perception—in fact, essentially the same features that also determine our perception of the physical world. 
The intellectual foundations and justification are different now. But in a sense our view of mathematics has come full circle. And we can now see that mathematics is in fact deeply connected to the physical world and our particular perception of it. And we as humans can do what we call mathematics for basically the same reason that we as humans manage to parse the physical world to the point where we can do science about it. 31 | Implications for the Future of Mathematics Having talked a bit about historical context let’s now talk about what the things we’ve discussed here mean for the future of mathematics—both in theory and in practice. At a theoretical level we’ve characterized the story of mathematics as being the story of a particular way of exploring the ruliad. And from this we might think that in some sense the ultimate limit of mathematics would be to just deal with the ruliad as a whole. But observers like us—at least doing mathematics the way we normally do it—simply can’t do that. And in fact, with the limitations we have as mathematical observers we can inevitably sample only tiny slices of the ruliad. But as we’ve discussed, it is exactly this that leads us to experience the kinds of “general laws of mathematics” that we’ve talked about. And it is from these laws that we get a picture of the “large-scale structure of mathematics”—that turns out to be in many ways similar to the picture of the large-scale structure of our physical universe that we get from physics. As we’ve discussed, what corresponds to the coherent structure of physical space is the possibility of doing mathematics in terms of high-level concepts—without always having to drop down to the “atomic” level. Effective uniformity of metamathematical space then leads to the idea of “pure metamathematical motion”, and in effect the possibility of translating at a high level between different areas of mathematics. 
And what this suggests is that in some sense “all high-level areas of mathematics” should ultimately be connected by “high-level dualities”—some of which have already been seen, but many of which remain to be discovered. Thinking about metamathematics in physicalized terms also suggests another phenomenon: essentially an analog of gravity for metamathematics. As we discussed earlier, in direct analogy to the way that “larger densities of activity” in the spatial hypergraph for physics lead to a deflection in geodesic paths in physical space, so also larger “entailment density” in metamathematical space will lead to deflection in geodesic paths in metamathematical space. And when the entailment density gets sufficiently high, it presumably becomes inevitable that these paths will all converge, leading to what one might think of as a “metamathematical singularity”. In the spacetime case, a typical analog would be a place where all geodesics have finite length, or in effect “time stops”. In our view of metamathematics, it corresponds to a situation where “all proofs are finite”—or, in other words, where everything is decidable, and there is no more “fundamental difficulty” left. Absent other effects we might imagine that in the physical universe the effects of gravity would eventually lead everything to collapse into black holes. And the analog in metamathematics would be that everything in mathematics would “collapse” into decidable theories. But among the effects not accounted for is continued expansion—or in effect the creation of new physical or metamathematical space, formed in a sense by underlying raw computational processes. What will observers like us make of this, though? In statistical mechanics an observer who does coarse graining might perceive the “heat death of the universe”. But at a molecular level there is all sorts of detailed motion that reflects a continued irreducible process of computation. 
And inevitably there will be an infinite collection of possible “slices of reducibility” to be found in this—just not necessarily ones that align with any of our current capabilities as observers. What does this mean for mathematics? Conceivably it might suggest that there’s only so much that can fundamentally be discovered in “high-level mathematics” without in effect “expanding our scope as observers”—or in essence changing our definition of what it is we humans mean by doing mathematics. But underneath all this is still raw computation—and the ruliad. And this we know goes on forever, in effect continually generating “irreducible surprises”. But how should we study “raw computation”? In essence we want to do unfettered exploration of the computational universe, of the kind I did in A New Kind of Science, and that we now call the science of ruliology. It’s something we can view as more abstract and more fundamental than mathematics—and indeed, as we’ve argued, it’s for example what’s underneath not only mathematics but also physics. Ruliology is a rich intellectual activity, important for example as a source of models for many processes in nature and elsewhere. But it’s one where computational irreducibility and undecidability are seen at almost every turn—and it’s not one where we can readily expect “general laws” accessible to observers like us, of the kind we’ve seen in physics, and now see in mathematics. We’ve argued that with its foundation in the ruliad mathematics is ultimately based on structures lower level than axiom systems. But given their familiarity from the history of mathematics, it’s convenient to use axiom systems—as we have done here—as a kind of “intermediate-scale metamodel” for mathematics. But what is the “workflow” for using axiom systems? One possibility in effect inspired by ruliology is just to systematically construct the entailment cone for an axiom system, progressively generating all possible theorems that the axiom system implies. 
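The "systematically construct the entailment cone" workflow can be sketched concretely for a toy string-rewrite system standing in for an axiom system; the rules here are invented purely for illustration:

```python
from collections import deque

# A toy "entailment cone": starting from one axiom, apply string-rewrite
# rules everywhere they match, breadth-first, collecting every derivable
# string up to a depth limit. Rules and axiom are invented for illustration.
rules = [("A", "AB"), ("B", "A")]
axiom = "A"

def entailments(s, rules):
    """All strings obtained by one rewrite anywhere in s."""
    out = set()
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def entailment_cone(axiom, rules, depth):
    """Every string derivable from the axiom in at most `depth` steps."""
    seen = {axiom}
    frontier = deque([axiom])
    for _ in range(depth):
        next_frontier = deque()
        for s in frontier:
            for t in entailments(s, rules):
                if t not in seen:
                    seen.add(t)
                    next_frontier.append(t)
        frontier = next_frontier
    return seen

print(sorted(entailment_cone(axiom, rules, 3)))
# → ['A', 'AA', 'AAB', 'AB', 'ABA', 'ABB', 'ABBB']
```

Even in this tiny system the number of derivable strings grows rapidly with depth, which is the practical obstacle to reaching familiar results by brute enumeration.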
But while doing this is of great theoretical interest, it typically isn’t something that will in practice reach much in the way of (currently) familiar mathematical results. But let’s say one’s thinking about a particular result. A proof of this would correspond to a path within the entailment cone. And the idea of automated theorem proving is to systematically find such a path—which, with a variety of tricks, can usually be done vastly more efficiently than just by enumerating everything in the entailment cone. In practice, though, despite half a century of history, automated theorem proving has seen very little use in mainstream mathematics. Of course it doesn’t help that in typical mathematical work a proof is seen as part of the high-level exposition of ideas—but automated proofs tend to operate at the level of “axiomatic machine code” without any connection to human-level narrative. But if one doesn’t already know the result one’s trying to prove? Part of the intuition that comes from A New Kind of Science is that there can be “interesting results” that are still simple enough that they can conceivably be found by some kind of explicit search—and then verified by automated theorem proving. But so far as I know, only one significant unexpected result has ever been found in this way with automated theorem proving: my 2000 result on the simplest axiom system for Boolean algebra. And the fact is that when it comes to using computers for mathematics, the overwhelming fraction of the time they’re used not to construct proofs, but instead to do “forward computations” and “get results” (yes, often with Mathematica). Of course, within those forward computations, there are many operations—like Reduce, SatisfiableQ, PrimeQ, etc.—that essentially work by internally finding proofs, but their output is “just results” not “why-it’s-true explanations”. (FindEquationalProof—as its name suggests—is a case where an actual proof is generated.)
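The proof-as-path idea can be sketched in the same toy setting: a breadth-first search from an axiom to a target string, with the sequence of rewrites playing the role of a proof. The rules are again invented for illustration, and real automated theorem provers use far better strategies than this blind search:

```python
from collections import deque

# A "proof" as a shortest path through the entailment cone of a toy
# string-rewrite system (rules invented for illustration).
rules = [("A", "AB"), ("B", "A")]

def one_step(s):
    """Yield every string reachable from s by one rewrite."""
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            yield s[:i] + rhs + s[i + len(lhs):]
            i = s.find(lhs, i + 1)

def find_proof(axiom, target, max_depth=10):
    """Shortest rewrite path from axiom to target, or None if not found."""
    parent = {axiom: None}
    frontier = deque([axiom])
    for _ in range(max_depth):
        next_frontier = deque()
        for s in frontier:
            for t in one_step(s):
                if t not in parent:
                    parent[t] = s
                    next_frontier.append(t)
        if target in parent:
            break
        frontier = next_frontier
    if target not in parent:
        return None
    path, s = [], target
    while s is not None:
        path.append(s)
        s = parent[s]
    return path[::-1]

print(find_proof("A", "AAA"))  # a 4-step rewrite path from "A" to "AAA"
```

The breadth-first frontier is exactly a layer of the entailment cone; the saving comes from stopping as soon as the target appears rather than enumerating everything.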
Whether one’s thinking in terms of axioms and proofs, or just in terms of “getting results”, one’s ultimately always dealing with computation. But the key question is how that computation is “packaged”. Is one dealing with arbitrary, raw, low-level constructs, or with something higher level and more “humanized”? As we’ve discussed, at the lowest level, everything can be represented in terms of the ruliad. But when we do both mathematics and physics what we’re perceiving is not the raw ruliad, but rather just certain high-level features of it. But how should these be represented? Ultimately we need a language that we humans understand, that captures the particular features of the underlying raw computation that we’re interested in. From our computational point of view, mathematical notation can be thought of as a rough attempt at this. But the most complete and systematic effort in this direction is the one I’ve worked towards for the past several decades: what’s now the full-scale computational language that is the Wolfram Language (and Mathematica). Ultimately the Wolfram Language can represent any computation. But the point is to make it easy to represent the computations that people care about: to capture the high-level constructs (whether they’re polynomials, geometrical objects or chemicals) that are part of modern human thinking. The process of language design (on which, yes, I’ve spent immense amounts of time) is a curious mixture of art and science, that requires both drilling down to the essence of things, and creatively devising ways to make those things accessible and cognitively convenient for humans. At some level it’s a bit like deciding on words as they might appear in a human language—but it’s something more structured and demanding. 
And it’s our best way of representing “high-level” mathematics: mathematics not at the axiomatic (or below) “machine code” level, but instead at the level human mathematicians typically think about. We’ve definitely not “finished the job”, though. Wolfram Language currently has around 7000 built-in primitive constructs, of which at least a couple of thousand can be considered “primarily mathematical”. But while the language has long contained constructs for algebraic numbers, random walks and finite groups, it doesn’t (yet) have built-in constructs for algebraic topology or K-theory. In recent years we’ve been slowly adding more kinds of pure-mathematical constructs—but to reach the frontiers of modern human mathematics might require perhaps a thousand more. And to make them useful all of them will have to be carefully and coherently designed. The great power of the Wolfram Language comes not only from being able to represent things computationally, but also from being able to compute with things, and get results. And it’s one thing to be able to represent some pure mathematical construct—but quite another to be able to broadly compute with it. The Wolfram Language in a sense emphasizes the “forward computation” workflow. Another workflow that’s achieved some popularity in recent years is the proof assistant one—in which one defines a result and then as a human one tries to fill in the steps to create a proof of it, with the computer verifying that the steps correctly fit together. If the steps are low level then what one has is something like typical automated theorem proving—though now being attempted with human effort rather than being done automatically. In principle one can build up to much higher-level “steps” in a modular way.
But now the problem is essentially the same as in computational language design: to create primitives that are both precise enough to be immediately handled computationally, and “cognitively convenient” enough to be usefully understood by humans. And realistically once one’s done the design (which, after decades of working on such things, I can say is hard), there’s likely to be much more “leverage” to be had by letting the computer just do computations than by expending human effort (even with computer assistance) to put together proofs. One might think that a proof would be important in being sure one’s got the right answer. But as we’ve discussed, that’s a complicated concept when one’s dealing with human-level mathematics. If we go to a full axiomatic level it’s very typical that there will be all sorts of pedantic conditions involved. Do we have the “right answer” if underneath we assume that 1/0=0? Or does this not matter at the “fluid dynamics” level of human mathematics? One of the great things about computational language is that—at least if it’s written well—it provides a clear and succinct specification of things, just like a good “human proof” is supposed to. But computational language has the great advantage that it can be run to create new results—rather than just being used to check something. It’s worth mentioning that there’s another potential workflow beyond “compute a result” and “find a proof”. It’s “here’s an object or a set of constraints for creating one; now find interesting facts about this”. Type into Wolfram|Alpha something like sin^4(x) (and, yes, there’s “natural math understanding” needed to translate something like this to precise Wolfram Language). There’s nothing obvious to “compute” here. But instead what Wolfram|Alpha does is to “say interesting things” about this—like what its maximum or its integral over a period is. 
In principle this is a bit like exploring the entailment cone—but with the crucial additional piece of picking out which entailments will be “interesting to humans”. (And implementationally it’s a very deeply constrained exploration.) It’s interesting to compare these various workflows with what one can call experimental mathematics. Sometimes this term is basically just applied to studying explicit examples of known mathematical results. But the much more powerful concept is to imagine discovering new mathematical results by “doing experiments”. Usually these experiments are not done at the level of axioms, but rather at a considerably higher level (e.g. with things specified using the primitives of Wolfram Language). But the typical pattern is to enumerate a large number of cases and to see what happens—with the most exciting result being the discovery of some unexpected phenomenon, regularity or irregularity. This type of approach is in a sense much more general than mathematics: it can be applied to anything computational, or anything described by rules. And indeed it is the core methodology of ruliology, and what it does to explore the computational universe—and the ruliad. One can think of the typical approach in pure mathematics as representing a gradual expansion of the entailment fabric, with humans checking (perhaps with a computer) statements they consider adding. Experimental mathematics effectively strikes out in some “direction” in metamathematical space, potentially jumping far away from the entailment fabric currently within the purview of some mathematical observer. And one feature of this—very common in ruliology—is that one may run into undecidability. The “nearby” entailment fabric of the mathematical observer is in a sense “filled in enough” that it doesn’t typically have infinite proof paths of the kind associated with undecidability. But something reached by experimental mathematics has no such guarantee.
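The experimental-mathematics pattern described above, run simple rules and look for unexpected regularities, can be illustrated with a classic ruliological example. This sketch (illustrative Python; function names are my own) runs elementary cellular automaton rule 90 from a single black cell, and the "experimental discovery" is that its rows reproduce Pascal's triangle mod 2, the nested Sierpinski pattern.

```python
from math import comb

def ca_step(cells, rule):
    """One step of an elementary cellular automaton, padding with 0s so the
    pattern can grow by one cell on each side per step."""
    cells = [0, 0] + cells + [0, 0]
    # Bit (4*left + 2*center + right) of the rule number gives the new cell.
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
            for i in range(1, len(cells) - 1)]

def ca_run(rule, steps):
    """Evolve from a single 1 cell; returns the list of rows (row t has 2t+1 cells)."""
    row = [1]
    rows = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        rows.append(row)
    return rows

def matches_pascal_mod2(rows):
    """Check the 'discovered' regularity: cell i of row t equals C(t, i//2) mod 2
    at even offsets, and 0 at odd offsets (rule 90 is left XOR right)."""
    return all(cell == (comb(t, i // 2) % 2 if i % 2 == 0 else 0)
               for t, row in enumerate(rows)
               for i, cell in enumerate(row))
```

The same enumerate-and-look workflow scales up to scanning all 256 elementary rules, or far larger rule spaces, for unexpected behavior.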
What’s good of course is that experimental mathematics can discover phenomena that are “far away” from existing mathematics. But (like in automated theorem proving) there isn’t necessarily any human-accessible “narrative explanation” (and if there’s undecidability there may be no “finite explanation” at all). So how does this all relate to our whole discussion of new ideas about the foundations of mathematics? In the past we might have thought that mathematics must ultimately progress just by working out more and more consequences of particular axioms. But what we’ve argued is that there’s a fundamental infrastructure even far below axiom systems—whose low-level exploration is the subject of ruliology. But the thing we call mathematics is really something higher level. Axiom systems are some kind of intermediate modeling layer—a kind of “assembly language” that can be used as a wrapper above the “raw ruliad”. In the end, we’ve argued, the details of this language won’t matter for typical things we call mathematics. But in a sense the situation is very much like in practical computing: we want an “assembly language” that makes it easiest to do the typical high-level things we want. In practical computing that’s often achieved with RISC instruction sets. In mathematics we typically imagine using axiom systems like ZFC. But—as reverse mathematics has tended to indicate—there are probably much more accessible axiom systems that could be used to reach the mathematics we want. (And ultimately even ZFC is limited in what it can reach.) But if we could find such a “RISC” axiom system for mathematics it has the potential to make practical more extensive exploration of the entailment cone. It’s also conceivable—though not guaranteed—that it could be “designed” to be more readily understood by humans. But in the end actual human-level mathematics will typically operate at a level far above it. 
And now the question is whether the “physicalized general laws of mathematics” that we’ve discussed can be used to make conclusions directly about human-level mathematics. We’ve identified a few features—like the very possibility of high-level mathematics, and the expectation of extensive dualities between mathematical fields. And we know that basic commonalities in structural features can be captured by things like category theory. But the question is what kinds of deeper general features can be found, and used. In physics our everyday experience immediately makes us think about “large-scale features” far above the level of atoms of space. In mathematics our typical experience so far has been at a lower level. So now the challenge is to think more globally, more metamathematically and, in effect, more like in physics. In the end, though, what we call mathematics is what mathematical observers perceive. So if we ask about the future of mathematics we must also ask about the future of mathematical observers. If one looks at the history of physics there was already much to understand just on the basis of what we humans could “observe” with our unaided senses. But gradually as more kinds of detectors became available—from microscopes to telescopes to amplifiers and so on—the domain of the physical observer was expanded, and the perceived laws of physics with it. And today, as the practical computational capability of observers increases, we can expect that we’ll gradually see new kinds of physical laws (say associated with hitherto “it’s just random” molecular motion or other features of systems). As we’ve discussed above, we can see our characteristics as physical observers as being associated with “experiencing” the ruliad from one particular “vantage point” in rulial space (just as we “experience” physical space from one particular vantage point in physical space). 
Putative “aliens” might experience the ruliad from a different vantage point in rulial space—leading them to have laws of physics utterly incoherent with our own. But as our technology and ways of thinking progress, we can expect that we’ll gradually be able to expand our “presence” in rulial space (just as we do with spacecraft and telescopes in physical space). And so we’ll be able to “experience” different laws of physics. We can expect the story to be very similar for mathematics. We have “experienced” mathematics from a certain vantage point in the ruliad. Putative aliens might experience it from another point, and build their own “paramathematics” utterly incoherent with our mathematics. The “natural evolution” of our mathematics corresponds to a gradual expansion in the entailment fabric, and in a sense a gradual spreading in rulial space. Experimental mathematics has the potential to launch a kind of “metamathematical space probe” which can discover quite different mathematics. At first, though, this will tend to be a piece of “raw ruliology”. But, if pursued, it potentially points the way to a kind of “colonization of rulial space” that will gradually expand the domain of the mathematical observer. The physicalized general laws of mathematics we’ve discussed here are based on features of current mathematical observers (which in turn are highly based on current physical observers). What these laws would be like with “enhanced” mathematical observers we don’t yet know. Mathematics as it is today is a great example of the “humanization of raw computation”. Two other examples are theoretical physics and computational language. And in all cases there is the potential to gradually expand our scope as observers. It’ll no doubt be a mixture of technology and methods along with expanded cognitive frameworks and understanding. We can use ruliology—or experimental mathematics—to “jump out” into the raw ruliad.
But most of what we’ll see is “non-humanized” computational irreducibility. But perhaps somewhere there’ll be another slice of computational reducibility: a different “island” on which “alien” general laws can be built. But for now we exist on our current “island” of reducibility. And on this island we see the particular kinds of general laws that we’ve discussed. We saw them first in physics. But there we discovered that they could emerge quite generically from a lower-level computational structure—and ultimately from the very general structure that we call the ruliad. And now, as we’ve discussed here, we realize that the thing we call mathematics is actually based on exactly the same foundations—with the result that it should show the same kinds of general laws. It’s a rather different view of mathematics—and its foundations—than we’ve been able to form before. But the deep connection with physics that we’ve discussed allows us to now have a physicalized view of metamathematics, which informs both what mathematics really is now, and what the future can hold for the remarkable pursuit that we call mathematics. Some Personal History: The Evolution of These Ideas It’s been a long personal journey to get to the ideas described here—stretching back nearly 45 years. Parts have been quite direct, steadily building over the course of time. But other parts have been surprising—even shocking. And to get to where we are now has required me to rethink some very long-held assumptions, and adopt what I had believed was a rather different way of thinking—even though, ironically, I’ve realized in the end that many aspects of this way of thinking pretty much mirror what I’ve done all along at a practical and technological level. Back in the late 1970s as a young theoretical physicist I had discovered the “secret weapon” of using computers to do mathematical calculations. By 1979 I had outgrown existing systems and decided to build my own. But what should its foundations be? 
A key goal was to represent the processes of mathematics in a computational way. I thought about the methods I’d found effective in practice. I studied the history of mathematical logic. And in the end I came up with what seemed to me at the time the most obvious and direct approach: that everything should be based on transformations for symbolic expressions. I was pretty sure this was actually a good general approach to computation of all kinds—and the system we released in 1981 was named SMP (“Symbolic Manipulation Program”) to reflect this generality. History has indeed borne out the strength of the symbolic expression paradigm—and it’s from that we’ve been able to build the huge tower of technology that is the modern Wolfram Language. But all along mathematics has been an important use case—and in effect we’ve now seen four decades of validation that the core idea of transformations on symbolic expressions is a good metamodel of mathematics. When Mathematica was first released in 1988 we called it “A System for Doing Mathematics by Computer”, where by “doing mathematics” we meant doing computations in mathematics and getting results. People soon did all sorts of experiments on using Mathematica to create and present proofs. But the overwhelming majority of actual usage was for directly computing results—and almost nobody seemed interested in seeing the inner workings, presented as a proof or otherwise. But in the 1980s I had started my work on exploring the computational universe of simple programs like cellular automata. And doing this was all about looking at the ongoing behavior of systems—or in effect the (often computationally irreducible) history of computations. And even though I sometimes talked about using my computational methods to do “experimental mathematics”, I don’t think I particularly thought about the actual progress of the computations I was studying as being like mathematical processes or proofs.
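The core idea, that everything is transformations on symbolic expressions, can be sketched in a few dozen lines. This is a toy illustration (in Python, with my own hypothetical representation), not SMP's or the Wolfram Language's actual pattern matcher: expressions are nested tuples, pattern variables are strings beginning with "?", and evaluation repeatedly applies the first applicable rule until a fixed point is reached.

```python
def match(pattern, expr, bindings):
    """Try to match expr against pattern; '?'-prefixed strings are pattern variables.
    Returns an extended bindings dict, or None on failure."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:  # repeated variable must match the same subexpression
            return bindings if bindings[pattern] == expr else None
        return {**bindings, pattern: expr}
    if isinstance(pattern, str) or isinstance(expr, str):
        return bindings if pattern == expr else None
    if len(pattern) != len(expr):
        return None
    for p, e in zip(pattern, expr):
        bindings = match(p, e, bindings)
        if bindings is None:
            return None
    return bindings

def substitute(template, bindings):
    """Fill pattern variables in a template with their bound subexpressions."""
    if isinstance(template, str):
        return bindings.get(template, template)
    return tuple(substitute(t, bindings) for t in template)

def rewrite_once(expr, rules):
    """Apply the first matching rule, trying the whole expression before subparts."""
    for lhs, rhs in rules:
        b = match(lhs, expr, {})
        if b is not None:
            return substitute(rhs, b)
    if not isinstance(expr, str):
        for i, sub in enumerate(expr):
            new = rewrite_once(sub, rules)
            if new is not sub:
                return expr[:i] + (new,) + expr[i + 1:]
    return expr

def simplify(expr, rules, max_steps=100):
    """Apply rewrites until a fixed point (or a step budget) is reached."""
    for _ in range(max_steps):
        new = rewrite_once(expr, rules)
        if new == expr:
            return expr
        expr = new
    return expr
```

With rules like ("Plus", "?x", "0") → "?x", this tiny evaluator already does recognizable algebraic simplification, and the same skeleton underlies the much richer pattern languages of real symbolic systems.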
In 1991 I started working on what became A New Kind of Science, and in doing so I tried to systematically study possible forms of computational processes—and I was soon led to substitution systems and symbolic systems, which I viewed in their different ways as being minimal idealizations of what would become Wolfram Language, as well as to multiway systems. There were some areas to which I was pretty sure the methods of A New Kind of Science would apply. Three that I wasn’t sure about were biology, physics and mathematics. But by the late 1990s I had worked out quite a bit about the first two, and started looking at mathematics. I knew that Mathematica and what would become Wolfram Language were good representations of “practical mathematics”. But I assumed that to understand the foundations of mathematics I should look at the traditional low-level representation of mathematics: axiom systems. And in doing this I was soon able to simplify to multiway systems—with proofs being paths. I had long wondered what the detailed relationships between things like my idea of computational irreducibility and earlier results in mathematical logic were. And I was pleased at how well many things could be clarified—and explicitly illustrated—by thinking in terms of multiway systems. My experience in exploring simple programs in general had led to the conclusion that computational irreducibility and therefore undecidability were quite ubiquitous. So I considered it quite a mystery why undecidability seemed so rare in the mathematics that mathematicians typically did. I suspected that in fact undecidability was lurking close at hand—and I got some evidence of that by doing experimental mathematics. But why weren’t mathematicians running into this more?
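The multiway-systems picture, with proofs being paths, can be sketched for a simple string substitution system (illustrative Python with my own names, not the actual formalism): nodes are expressions, edges are single rewrite events, and a derivation corresponds to a path in the graph, with converging paths showing how distinct derivations can reach the same statement.

```python
def multiway_graph(initial, rules, steps):
    """Build a multiway graph for a string substitution system.

    Nodes are strings; a directed edge (s, t) records that t is obtained
    from s by applying one rule at one position. Returns (nodes, edges)."""
    nodes = set(initial)
    edges = set()
    frontier = set(initial)
    for _ in range(steps):
        next_frontier = set()
        for s in frontier:
            for lhs, rhs in rules:
                start = 0
                while (i := s.find(lhs, start)) != -1:
                    t = s[:i] + rhs + s[i + len(lhs):]
                    edges.add((s, t))
                    if t not in nodes:
                        nodes.add(t)
                        next_frontier.add(t)
                    start = i + 1
        frontier = next_frontier
    return nodes, edges
```

Running this for a rule set like {"A" → "AB", "B" → "A"} quickly produces branching, and also reconvergence: the same string reached along different paths, which is the structural germ of both proof equivalence in metamathematics and branchial structure in the Physics Project.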
I came to suspect that it had something to do with the history of mathematics, and with the idea that mathematics had tended to expand its subject matter by asking “How can this be generalized while still having such-and-such a theorem be true?” But I also wondered about the particular axiom systems that had historically been used for mathematics. They all fit easily on a couple of pages. But why these and not others? Following my general “ruliological” approach of exploring all possible systems I started just enumerating possible axiom systems—and soon found out that many of them had rich and complicated implications. But where among these possible systems did the axiom systems historically used in mathematics lie? I did searches, and at about the 50,000th axiom was able to find the simplest axiom system for Boolean algebra. Proving that it was correct gave me my first serious experience with automated theorem proving. But what kind of a thing was the proof? I made some attempt to understand it, but it was clear that it wasn’t something a human could readily understand—and reading it felt a bit like trying to read machine code. I recognized that the problem was in a sense a lack of “human connection points”—for example of intermediate lemmas that (like words in a human language) had a contextualized significance. I wondered how one could find lemmas that “humans would care about”. And I was surprised to discover that at least for the “named theorems” of Boolean algebra a simple criterion could reproduce them. Quite a few years went by. Off and on I thought about two ultimately related issues. One was how to represent the execution histories of Wolfram Language programs. And the other was how to represent proofs. In both cases there seemed to be all sorts of detail, and it seemed difficult to have a structure that would capture what would be needed for further computation—or any kind of general understanding. Meanwhile, in 2009, we released Wolfram|Alpha.
One of its features was that it had “step-by-step” math computations. But these weren’t “general proofs”: rather they were narratives synthesized in very specific ways for human readers. Still, a core concept in Wolfram|Alpha—and the Wolfram Language—is the idea of integrating in knowledge about as many things as possible in the world. We’d done this for cities and movies and lattices and animals and much more. And I thought about doing it for mathematical theorems as well. We did a pilot project—on theorems about continued fractions. We trawled through the mathematical literature assessing the difficulty of extending the “natural math understanding” we’d built for Wolfram|Alpha. I imagined a workflow which would mix automated theorem generation with theorem search—in which one would define a mathematical scenario, then say “tell me interesting facts about this”. And in 2014 we set about engaging the mathematical community in a large-scale curation effort to formalize the theorems of mathematics. But try as we might, only people already involved in math formalization seemed to care; with few exceptions working mathematicians just didn’t seem to consider it relevant to what they did. We continued, however, to push slowly forward. We worked with proof assistant developers. We curated various kinds of mathematical structures (like function spaces). I had estimated that we’d need more than a thousand new Wolfram Language functions to cover “modern pure mathematics”, but without a clear market we couldn’t motivate the huge design (let alone implementation) effort that would be needed—though, partly in a nod to the intellectual origins of mathematics, we did for example do a project that has succeeded in finally making Euclid-style geometry computable. Then in the latter part of the 2010s a couple more “proof-related” things happened. Back in 2002 we’d started using equational logic automated theorem proving to get results in functions like FullSimplify. 
But we hadn’t figured out how to present the proofs that were generated. In 2018 we finally introduced FindEquationalProof—allowing programmatic access to proofs, and making it feasible for me to explore collections of proofs in bulk. I had for decades been interested in what I’ve called “symbolic discourse language”: the extension of the idea of computational language to “everyday discourse”—and to the kind of thing one might want for example to express in legal contracts. And between this and our involvement in the idea of computational contracts, and things like blockchain technology, I started exploring questions of AI ethics and “constitutions”. At this point we’d also started to introduce machine-learning-based functions into the Wolfram Language. And—with my “human incomprehensible” Boolean algebra proof as “empirical data”—I started exploring general questions of explainability, and in effect proof. And not long after that came the surprise breakthrough of our Physics Project. Extending my ideas from the 1990s about computational foundations for fundamental physics it suddenly became possible finally to understand the underlying origins of the main known laws of physics. And core to this effort—and particularly to the understanding of quantum mechanics—were multiway systems. At first we just used the knowledge that multiway systems could also represent axiomatic mathematics and proofs to provide analogies for our thinking about physics (“quantum observers might in effect be doing critical-pair completions”, “causal graphs are like higher categories”, etc.) But then we started wondering whether the phenomenon of the emergence that we’d seen for the familiar laws of physics might also affect mathematics—and whether it could give us something like a “bulk” version of metamathematics. 
I had long studied the transition from discrete “computational” elements to “bulk” behavior, first following my interest in the Second Law of thermodynamics, which stretched all the way back to age 12 in 1972, then following my work on cellular automaton fluids in the mid-1980s, and now with the emergence of physical space from underlying hypergraphs in our Physics Project. But what might “bulk” metamathematics be like? One feature of our Physics Project—in fact shared with thermodynamics—is that certain aspects of its observed behavior depend very little on the details of its components. But what did they depend on? We realized that it all had to do with the observer—and their interaction (according to what I’ve described as the 4th paradigm for science) with the general “multicomputational” processes going on underneath. For physics we had some idea what characteristics an “observer like us” might have (and actually they seemed to be closely related to our notion of consciousness). But what might a “mathematical observer” be like? In its original framing we talked about our Physics Project as being about “finding the rule for the universe”. But right around the time we launched the project we realized that that wasn’t really the right characterization. And we started talking about rulial multiway systems that instead “run every rule”—but in which an observer perceives only some small slice, that in particular can show emergent laws of physics. But what is this “run every rule” structure? In the end it’s something very fundamental: the entangled limit of all possible computations—that I call the ruliad. The ruliad basically depends on nothing: it’s unique and its structure is a matter of formal necessity. So in a sense the ruliad “necessarily exists”—and, I argued, so must our universe. But we can think of the ruliad not only as the foundation for physics, but also as the foundation for mathematics. 
And so, I concluded, if we believe that the physical universe exists, then we must conclude—a bit like Plato—that mathematics exists too. But how did all this relate to axiom systems and ideas about metamathematics? I had two additional pieces of input from the latter half of 2020. First, following up on a note in A New Kind of Science, I had done an extensive study of the “empirical metamathematics” of the network of the theorems in Euclid, and in a couple of math formalization systems. And second, in celebration of the 100th anniversary of their invention essentially as “primitives for mathematics”, I had done an extensive ruliological and other study of combinators. I began to work on this current piece in the fall of 2020, but felt there was something I was missing. Yes, I could study axiom systems using the formalism of our Physics Project. But was this really getting at the essence of mathematics? I had long assumed that axiom systems really were the “raw material” of mathematics—even though I’d long gotten signals they weren’t really a good representation of how serious, aesthetically oriented pure mathematicians thought about things. In our Physics Project we’d always had as a target to reproduce the known laws of physics. But what should the target be in understanding the foundations of mathematics? It always seemed like it had to revolve around axiom systems and processes of proof. And it felt like validation when it became clear that the same concepts of “substitution rules applied to expressions” seemed to span my earliest efforts to make math computational, the underlying structure of our Physics Project, and “metamodels” of axiom systems. But somehow the ruliad—and the idea that if physics exists so must math—made me realize that this wasn’t ultimately the right level of description. And that axioms were some kind of intermediate level, between the “raw ruliad”, and the “humanized” level at which pure mathematics is normally done.
At first I found this hard to accept; not only had axiom systems dominated thinking about the foundations of mathematics for more than a century, but they also seemed to fit so perfectly into my personal “symbolic rules” paradigm. But gradually I got convinced that, yes, I had been wrong all this time—and that axiom systems were in many respects missing the point. The true foundation is the ruliad, and axiom systems are a rather-hard-to-work-with “machine-code-like” description below the inevitable general “physicalized laws of metamathematics” that emerge—and that imply that for observers like us there’s a fundamentally higher-level approach to mathematics. At first I thought this was incompatible with my general computational view of things. But then I realized: “No, quite the opposite!” All these years I’ve been building the Wolfram Language precisely to connect “at a human level” with computational processes—and with mathematics. Yes, it can represent and deal with axiom systems. But it’s never felt particularly natural. And it’s because they’re at an awkward level—neither at the level of the raw ruliad and raw computation, nor at the level where we as humans define mathematics. But now, I think, we begin to get some clarity on just what this thing we call mathematics really is. What I’ve done here is just a beginning. But between its explicit computational examples and its conceptual arguments I feel it’s pointing the way to a broad and incredibly fertile new understanding that—even though I didn’t see it coming—I’m very excited is now here. Notes & Thanks For more than 25 years Elise Cawley has been telling me her thematic (and rather Platonic) view of the foundations of mathematics—and that basing everything on constructed axiom systems is a piece of modernism that misses the point. From what’s described here, I now finally realize that, yes, despite my repeated insistence to the contrary, what she’s been telling me has been on the right track all along! 
I’m grateful for extensive help on this project from James Boyd and Nik Murzin, with additional contributions by Brad Klee and Mano Namuduri. Some of the early core technical ideas here arose from discussions with Jonathan Gorard, with additional input from Xerxes Arsiwalla and Hatem Elshatlawy. (Xerxes and Jonathan have now also been developing connections with homotopy type theory.) I’ve had helpful background discussions (some recently and some longer ago) with many people, including Richard Assar, Jeremy Avigad, Andrej Bauer, Kevin Buzzard, Mario Carneiro, Greg Chaitin, Harvey Friedman, Tim Gowers, Tom Hales, Lou Kauffman, Maryanthe Malliaris, Norm Megill, Assaf Peretz, Dana Scott, Matthew Szudzik, Michael Trott and Vladimir Voevodsky. I’d like to recognize Norm Megill, creator of the Metamath system used for some of the empirical metamathematics here, who died in December 2021. (Shortly before his death he was also working on simplifying the proof of my axiom for Boolean algebra.) Most of the specific development of this report has been livestreamed or otherwise recorded, and is available—along with archives of working notebooks—at the Wolfram Physics Project website. The Wolfram Language code to produce all the images here is directly available by clicking each image. And I should add that this project would have been impossible without the Wolfram Language, both its practical manifestation, and the ideas that it has inspired and clarified. So thanks to everyone involved in the 40+ years of its development and gestation!
Glossary
A glossary of terms that are either new here, or used in unfamiliar ways:
• Accumulative system: A system in which states are rules and rules update rules. Successive steps in the evolution of such a system are collections of rules that can be applied to each other.
• Axiomatic mathematics: The traditional foundational way to represent mathematics using axioms, viewed here as being intermediate between the raw ruliad and human-scale mathematics.
• Bisubstitution: The combination of substitution and cosubstitution that corresponds to the complete set of possible transformations to make on expressions containing patterns.
• Branchial space: Space corresponding to the limit of a branchial graph that provides a map of common ancestry (or entanglement) in a multiway graph.
• Cosubstitution: The dual operation to substitution, in which a pattern expression that is to be transformed is specialized to allow a given rule to match it.
• Eme: The smallest element of existence according to our framework. In physics it can be identified as an “atom of space”, but in general it is an entity whose only internal attribute is that it is distinct from others.
• Entailment cone: The expanding region of a multiway graph or token-event graph affected by a particular node. The entailment cone is the analog in metamathematical space of a light cone in physical space.
• Entailment fabric: A piece of metamathematical space constructed by knitting together many small entailment cones. An entailment fabric is a rough model for what a mathematical observer might effectively perceive.
• Entailment graph: A combination of entailment cones starting from a collection of initial nodes.
• Expression rewriting: The process of rewriting (tree-structured) symbolic expressions according to rules for symbolic patterns. (Called “operator systems” in A New Kind of Science. Combinators are a special case.)
• Mathematical observer: An entity sampling the ruliad as a mathematician might effectively do it. Mathematical observers are expected to have certain core human-derived characteristics in common with physical observers.
• Metamathematical space: The space in which mathematical expressions or mathematical statements can be considered to lie. The space can potentially acquire a geometry as a limit of its construction through a branchial graph.
• Multiway graph: A graph that represents an evolution process in which there are multiple outcomes from a given state at each step. Multiway graphs are central to our Physics Project and to the multicomputational paradigm in general.
Parallel analogs of mathematics corresponding to different samplings of the ruliad by putative aliens or others. A symbolic expression that involves pattern variables (x_ etc. in Wolfram Language, or ∀ quantifiers in mathematical logic). The concept of treating metamathematical constructs like elements of the physical universe. Another term for the entailment cone. The subgraph in a token-event graph that leads from axioms to a given statement. The path in a multiway graph that shows equivalence between expressions, or the subgraph in a token-event graph that shows the constructibility of a given statement. The entangled limit of all possible computational processes, that is posited to be the ultimate foundation of both physics and mathematics. The limit of rulelike slices taken from a foliation of the ruliad in time. The analog in the rulelike “direction” of branchial space or physical space. The process by which an observer who has aggregated statements in a localized region of metamathematical space is effectively pulled apart by trying to cover consequences of these statements. A symbolic expression, often containing a two-way rule, and often derivable from axioms, and thus representing a lemma or theorem. An update event in which a symbolic expression (which may be a rule) is transformed by substitution according to a given rule. A graph indicating the transformation of expressions or statements (“tokens”) through updating events. A transformation rule for pattern expressions that can be applied in both directions (indicated with <->). The process of giving different names to variables generated through different events.

4 comments

1. How is this related to Max Tegmark’s mathematical universe hypothesis?

2.
“Because by saying that metamathematical space is in a sense uniform, we’re saying that different parts of it somehow seem similar—or in other words that there’s parallelism between what we see in different areas of mathematics, even if they’re not “nearby” in terms of entailments.” I wanted to ask if the word parallelism is the best word to use in this sentence. I’m having a hard time understanding this word used in this way. Thanks in advance.

3. As a layperson with a longstanding desire to understand the philosophy of mathematics and its relationship to the physical world, I would like to thank you very much for the effort you put into this treatise. Despite years of off-and-on-again reading on the topic, this is the first time I have started to experience the sense of satisfaction and understanding expected when one ‘gets to the bottom of’ a complex subject.

4. “(In principle one might think it should be possible to set up a state that will ‘behave antithermodynamically’—but the point is that to do so would require predicting a computationally irreducible process, which a computationally bounded observer can’t do.)” Some human experience sees that differently.

Separating space and time is useful, especially your parenthetical expression that “time represents ‘computational progress’ in the universe, while space represents the ‘layout of its data’.”

“And at some rough level we might imagine that we’re sensing time passing by the rate at which we add to those internal perceptions. If we’re not adding to the perceptions, then in effect time will stop for us—as happens if we’re asleep, anesthetized or dead.” Yet, however we perceive it, time continues.

“And it is only because of the way we as observers sample things that we experience time as a single thread.” But that is solipsism on a grand scale, an anthroposism, or just anthrosism for short. However, “many paths of history” is not the same thing as many threads of time.
Generally, there is only one thread or sweep variously measured. Neither space (in any form) nor time (in any measure) are digital in nature, but analogic. Discreteness merely eases computation, doesn’t it?

“But computationally bounded observers like us have to equivalence most of those details to wind up with something that ‘fits in our finite minds’”. But isn’t a significant part of the time problem that we have so much trouble transcending our anthropocentric view of the universe, especially regarding measurement, whether microscopically or macroscopically?

Funny in a way that, while constantly changing, we believe that we “are the same as” we were before. When we look at other people are we able to perceive the child they were? “Yes,” to the extent they retain their connection with youth (even in more advanced age), and “no” to the extent that youthful thread has diminished to the point of not being recognizable.

The varieties of time, as you especially point to regarding black holes, vary for all of us essentially equally. Time changes, but it is still what we would all experience. Try to imagine a larger view or slice of time, one that encompasses, say, an entire galaxy, group of galaxies, or all of existence. How might they differ from the more localized view(s) of time you discuss? Perhaps not at all. That’s what you discuss with your notion of a “ruliad,” is it not, although our perceptual limitations and the formation of our ideas are limited? Useful concept, the ruliad, thanks. (Meanwhile, emes, memes and temes, oh, my!)

“…computational irreducibility is what tells us that computationally bounded observers like us can’t in general ever “jump ahead”; we just have to follow a linear chain of steps.” Yet, at times we do “jump ahead.”

The “arrows of time” are aligned because in both cases we are in effect “requiring the past to be simpler”. Great point, and one in which I’m hearing Feynman. Thoughtful article.
I enjoyed reading it, despite not being a mathematician or a physicist.
Non-parametric back-projection of incidence cases to exposure cases using a known incubation time as in Becker et al (1991) — backprojNP

The function is an implementation of the non-parametric back-projection of incidence cases to exposure cases described in Becker et al. (1991). The method back-projects exposure times from a univariate time series containing the number of symptom onsets per time unit. Here, the delay between exposure and symptom onset for an individual is seen as a realization of a random variable governed by a known probability mass function. The back-projection function calculates the expected number of exposures \(\lambda_t\) for each time unit under the assumption of a Poisson distribution, but without any parametric assumption on how the \(\lambda_t\) evolve in time.

Furthermore, the function contains a bootstrap based procedure, as given in Yip et al (2011), which allows an indication of uncertainty in the estimated \(\lambda_t\). The procedure is equivalent to the suggestion in Becker and Marschner (1993). However, the present implementation in backprojNP allows only a univariate time series, i.e. simultaneous age groups as in Becker and Marschner (1993) are not possible.

The method in Becker et al. (1991) was originally developed for the back-projection of AIDS incidence, but it is equally useful for analysing the epidemic curve in outbreak situations of a disease with long incubation time, e.g. in order to qualitatively investigate the effect of intervention measures.

Usage:

backprojNP(sts, incu.pmf,
           control = list(k = 2, eps = rep(0.005,2), Tmark = nrow(sts),
                          B = -1, alpha = 0.05, verbose = FALSE, lambda0 = NULL,
                          eq3a.method = c("R","C"), hookFun = function(stsbp) {}),
           ...)

Arguments:

sts: an object of class "sts" (or one that can be coerced to that class): contains the observed number of symptom onsets as a time series.
incu.pmf: Probability mass function (PMF) of the incubation time. The PMF is specified as a vector or matrix with the value of the PMF evaluated at \(0,\ldots,d_{max}\), i.e. note that the support includes zero. The value of \(d_{max}\) is automatically calculated as length(incu.pmf)-1 or nrow(incu.pmf)-1. Note that if the sts object has more than one column, then for the backprojection the incubation time is either recycled for all components or, if it is a matrix with the same number of columns as the sts object, the \(k\)'th column of incu.pmf is used for the backprojection of the \(k\)'th series.

control: A list with named arguments controlling the functionality of the non-parametric back-projection.

k: An integer representing the smoothing parameter to use in the smoothing step of the EMS algorithm. Needs to be an even number.

eps: A vector of length two representing the convergence threshold \(\epsilon\) of the EMS algorithm, see Details for further information. The first value is the threshold to use in the \(k=0\) loop, which forms the values for the parametric bootstrap. The second value is the threshold to use in the actual fit and bootstrap fitting using the specified k. If eps is only of length one, then this number is replicated twice.

Tmark: Numeric with \(T' \leq T\). Upper time limit on which to base convergence, i.e. only the values \(\lambda_1,\ldots,\lambda_{T'}\) are monitored for convergence. See details.

iter.max: The maximum number of EM iterations to do before stopping.

B: Number of parametric bootstrap samples to perform from an initial k=0 fit. For each sample a back projection is performed. See Becker and Marschner (1993) for details.

alpha: (1-\(\alpha\))*100% confidence intervals are computed based on the percentile method.

verbose: (boolean). If true show extra progress and debug information.

lambda0: Start values for lambda. Vector needs to be of the length nrow(sts).

eq3a.method: A single character being either "R" or "C" depending on whether the three nested loops of equation 3a in Becker et al.
(1991) are to be executed as safe R code (can be extremely slow, however the implementation is not optimized for speed) or as C code (can be more than 200 times faster!). However, the C implementation is experimental and can hang R if, e.g., the time series does not go far enough back.

hookFun: Hook function called for each iteration of the EM algorithm. The function should take a single argument stsbp of class "stsBP". It will have its lambda set to the current value of lambda. If no action is desired just leave the function body empty (default). Additional arguments are possible.

...: Additional arguments are sent to the hook function.

Details:

Becker et al. (1991) specify a non-parametric back-projection algorithm based on the Expectation-Maximization-Smoothing (EMS) algorithm. In the present implementation the algorithm iterates until $$\frac{||\lambda^{(k+1)} - \lambda^{(k)}||}{||\lambda^{(k)}||} < \epsilon$$ This is a slight adaptation of the proposals in Becker et al. (1991). If \(T\) is the length of \(\lambda\) then one can avoid instability of the algorithm near the end by considering only the \(\lambda\)'s with index \(1,\ldots,T'\). See the references for further information.

Value:

backprojNP returns an object of class "stsBP".

References:

Becker NG, Watson LF and Carlin JB (1991), A method for non-parametric back-projection and its application to AIDS data, Statistics in Medicine, 10:1527-1542.

Becker NG and Marschner IC (1993), A method for estimating the age-specific relative risk of HIV infection from AIDS incidence data, Biometrika, 80(1):165-178.

Yip PSF, Lam KF, Xu Y, Chau PH, Xu J, Chang W, Peng Y, Liu Z, Xie X and Lau HY (2011), Reconstruction of the Infection Curve for SARS Epidemic in Beijing, China Using a Back-Projection Method, Communications in Statistics - Simulation and Computation, 37(2):425-433.
Werber D, King LA, Müller L, Follin P, Buchholz U, Bernard H, Rosner BM, Ethelberg S, de Valk H and Höhle M (2013), Associations of Age and Sex on Clinical Outcome and Incubation Period of Shiga toxin-producing Escherichia coli O104:H4 Infections, 2011, American Journal of Epidemiology, 178(6):984-992.

Note: The method is still experimental. A proper plot routine for stsBP objects is currently missing.

Examples:

#Generate an artificial outbreak of size n starting at time t0 and being of length l
n <- 1e3 ; t0 <- 23 ; l <- 10

#PMF of the incubation time is an interval censored gamma distribution
#with mean 15 truncated at 25.
dmax <- 25
inc.pmf <- c(0,(pgamma(1:dmax,15,1.4) - pgamma(0:(dmax-1),15,1.4))/pgamma(dmax,15,1.4))

#Function to sample from the incubation time
rincu <- function(n) {
  sample(0:dmax, size=n, replace=TRUE, prob=inc.pmf)
}

#Sample time of exposure and length of incubation time
exposureTimes <- t0 + sample(x=0:(l-1),size=n,replace=TRUE)
symptomTimes <- exposureTimes + rincu(n)

#Time series of exposure (truth) and symptom onset (observed)
X <- table( factor(exposureTimes,levels=1:(max(symptomTimes)+dmax)))
Y <- table( factor(symptomTimes,levels=1:(max(symptomTimes)+dmax)))

#Convert Y to an sts object
Ysts <- sts(Y)

#Plot the outbreak
plot(Ysts, xaxis.labelFormat=NULL, legend=NULL)
#Add true number of exposures to the plot

#Helper function to show the EM step
plotIt <- function(cur.sts) {
  plot(cur.sts,xaxis.labelFormat=NULL, legend.opts=NULL,ylim=c(0,140))
}

#Call non-parametric back-projection function with hook function but
#without bootstrapped confidence intervals
bpnp.control <- list(k=0,eps=rep(0.005,2),iter.max=rep(250,2),B=-1,hookFun=plotIt,verbose=TRUE)

#Fast C version (use argument: eq3a.method="C")!
sts.bp <- backprojNP(Ysts, incu.pmf=inc.pmf,
    control=modifyList(bpnp.control,list(eq3a.method="C")), ylim=c(0,max(X,Y)))
#Show result

#Do the convolution for the expectation
mu <- matrix(0,ncol=ncol(sts.bp),nrow=nrow(sts.bp))
#Loop over all series
for (j in 1:ncol(sts.bp)) {
  #Loop over all time points
  for (t in 1:nrow(sts.bp)) {
    #Convolution, note support of inc.pmf starts at zero (move idx by 1)
    i <- seq_len(t)
    mu[t,j] <- sum(inc.pmf[t-i+1] * upperbound(sts.bp)[i,j],na.rm=TRUE)
  }
}
#Show the fit

#Non-parametric back-projection including bootstrap CIs
bpnp.control2 <- modifyList(bpnp.control,
                            list(hookFun=NULL, k=2,
                                 B=10)) # in practice, use B >= 1000 !
sts.bp2 <- backprojNP(Ysts, incu.pmf=inc.pmf, control=bpnp.control2)

# Plot the result. This is currently a manual routine.
# ToDo: Need to specify a plot method for stsBP objects which also
# shows the CI.
# Parameters:
#  stsBP - object of class stsBP which is to be plotted.
plot.stsBP <- function(stsBP) {
  maxy <- max(observed(stsBP),upperbound(stsBP),stsBP@ci,na.rm=TRUE)
  plot(upperbound(stsBP),type="n",ylim=c(0,maxy), ylab="Cases",xlab="time")
  if (!all(is.na(stsBP@ci))) {
    polygon( c(1:nrow(stsBP),rev(1:nrow(stsBP))),
             ...) #remaining arguments truncated in the source
  }
}

#Plot the result of k=0 and add truth for comparison. No CIs available
#Same for k=2
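The nested convolution loop in the example computes \(\mu_t = \sum_d f(d)\,\lambda_{t-d}\), where \(f\) is the incubation-time PMF and \(\lambda\) the back-projected exposures. As a language-neutral sketch of just this step (Python rather than R, and not part of the surveillance package):

```python
def expected_onsets(lam, f):
    """mu[t] = sum_{d=0}^{t} f[d] * lam[t-d]; f's support starts at delay 0."""
    T = len(lam)
    mu = [0.0] * T
    for t in range(T):
        for d in range(min(len(f), t + 1)):
            mu[t] += f[d] * lam[t - d]
    return mu

# Toy check: 10 exposures at time 0, incubation delay of 1 day (prob 0.4)
# or 2 days (prob 0.6) -- the expected onsets appear 1-2 days later.
print(expected_onsets([10, 0, 0, 0], [0.0, 0.4, 0.6]))  # [0.0, 4.0, 6.0, 0.0]
```

This is the "forward" direction; backprojNP solves the harder inverse problem of recovering \(\lambda\) from the observed onsets.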
How to Correctly Sum Up Numbers One of the first examples for loops in probably every programming language course is taking a list of numbers and calculating the sum of them. And in every programming language, it’s trivial to write up a solution: If you compile this with a modern optimizing compiler, you will even get very efficient vectorized code, that sums millions of integers per second. Nice! Unfortunately, as you might have guessed, this is not the end of the story. There are several issues with this solution if used in any serious application where you want correct results, e.g., a database system: In this post, we are exploring the world of handling exceptional cases in numerical computations. We will focus only on the initial example of adding up a list of numbers as even this simple example comes with more than enough edge cases. Obviously, when we talk about edge cases in arithmetic on a computer, “numerical overflows” are the first thing most people will think of. Since most hardware only supports fixed-size values for the arithmetic operation, there is always a limit to the numbers the hardware can represent. Most CPU architectures support only up to 64-bit integers in their registers, for example.
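The post's original code sample did not survive extraction here. As a stand-in sketch (Python, with fixed-width 64-bit behavior simulated by masking; the function names are mine, not the post's), the contrast between a silently wrapping sum and a checked sum looks like this:

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def wrap64(x):
    """Reduce x to a signed 64-bit value, as fixed-width hardware would."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

def naive_sum(xs):
    """Sums like unchecked 64-bit code: overflow wraps around silently."""
    total = 0
    for x in xs:
        total = wrap64(total + x)
    return total

def checked_sum(xs):
    """Sums but raises instead of producing a silently wrong result."""
    total = 0
    for x in xs:
        total += x  # exact, since Python ints are unbounded
        if not INT64_MIN <= total <= INT64_MAX:
            raise OverflowError("int64 overflow while summing")
    return total

print(naive_sum([INT64_MAX, 1]))  # -9223372036854775808: wrapped, silently wrong
```

In C or C++ the equivalent check is typically done per addition, e.g. with compiler builtins, rather than by masking.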
First published Wed Jul 16, 2008; substantive revision Tue Oct 13, 2015 The big news about chaos is supposed to be that the smallest of changes in a system can result in very large differences in that system’s behavior. The so-called butterfly effect has become one of the most popular images of chaos. The idea is that the flapping of a butterfly’s wings in Argentina could cause a tornado in Texas three weeks later. By contrast, in an identical copy of the world sans the Argentinian butterfly, no such storm would have arisen in Texas. The mathematical version of this property is known as sensitive dependence. However, it turns out that sensitive dependence is somewhat old news, so some of the implications flowing from it are perhaps not such “big news” after all. Still, chaos studies have highlighted these implications in fresh ways and led to thinking about other implications as well. In addition to exhibiting sensitive dependence, chaotic systems possess two other properties: they are deterministic and nonlinear (Smith 2007). This entry discusses systems exhibiting these three properties and what their philosophical implications might be for theories and theoretical understanding, confirmation, explanation, realism, determinism, free will and consciousness, and human and divine action. The mathematical phenomenon of chaos is studied in sciences as diverse as astronomy, meteorology, population biology, economics and social psychology. While there are few (if any) causal mechanisms such diverse disciplines have in common, the phenomenological behavior of chaos—e.g., sensitivity to the tiniest changes in initial conditions or seemingly random and unpredictable behavior that nevertheless follows precise rules—appears in many of the models in these disciplines. Observing similar chaotic behavior in such diverse fields certainly presents a challenge to our understanding of chaos as a phenomenon. 
Arguably, one can say that Aristotle was already aware of something similar to what we now call sensitive dependence. Writing about methodology and epistemology, he observed that “the least initial deviation from the truth is multiplied later a thousandfold” (Aristotle OTH, 271b8). Nevertheless, thinking about how small disturbances might grow explosively to produce substantial effects on a physical system’s behavior became a phenomenon of ever-intensifying investigation beginning with a famous paper by Edward Lorenz (1963). He noted that a particular meteorological model could exhibit exquisitely sensitive dependence on small changes in initial conditions. French mathematician Jacques Hadamard had already developed the framework for partial differential equations exhibiting both continuous and discontinuous dependence on initial conditions by 1922. Any equations exhibiting sensitive but continuous dependence are well-posed problems under his framework; however, he raised the possibility that any solution to equations for a physical system exhibiting such sensitive dependence could indicate that the target system obeyed no laws (Hadamard 1922, p. 38). Lorenz’s pioneering work demonstrated that such sensitive dependence was not a matter of mathematical misdescription; rather, there was something interesting in the mathematical model exhibiting chaos. Moreover, Lorenz’s and subsequent work indicated that there seemed to be no issue of the law-likeness of target systems whose models exhibited sensitive dependence. Though some other scientists and mathematicians prior to Lorenz had examined such phenomena, these were basically isolated investigations never producing a recognizable, sustained field of inquiry as happened after the publication of Lorenz’s seminal paper. Sensitive dependence on initial conditions (SDIC) for some systems had already been identified by James Clerk Maxwell (1876, p. 13).
He described such phenomena as being cases where the “physical axiom” that from like antecedents flow like consequences is violated. Like others, Maxwell recognized this kind of behavior could be found in systems with a sufficiently large number of variables (possessing a sufficient level of complexity in this numerical sense). But he also argued that such sensitive dependence could happen in the case of two spheres colliding (1860). Henri Poincaré (1913), on the other hand, later recognized that this same kind of behavior could be realized in systems with a small number of variables (simple systems exhibiting very complicated behavior). Pierre Duhem, relying on work by Hadamard and Poincaré, further articulated the practical consequences of SDIC for the scientist interested in deducing mathematically precise consequences from mathematical models (1982, pp. 138–142). Poincaré discussed examples that, in hindsight, we can view as raising doubts about taking explosive growth of small effects to be a sufficient condition for defining chaos. First, consider a perfectly symmetric cone precisely balanced on its tip with only the force of gravity acting on it. In the absence of any impressed forces, the cone would maintain this unstable equilibrium forever. It is unstable because the smallest nudge, from an air molecule, say, will cause the cone to tip over, but it could tip over in any direction due to the slight differences in various perturbations arising from suffering different collisions with different molecules. Here, variations in the slightest causes issue forth in dramatically different effects (a violation of Maxwell’s physical axiom). If we were to plot the tipping over of the unstable cone, we would see that from a small ball of starting conditions, a number of different trajectories issuing forth from this ball would quickly diverge from each other.
The concept of nearby trajectories diverging or growing away from each other plays an important role in discussions of chaos. Three useful benchmarks for characterizing trajectory divergence are linear, exponential and geometric growth rates. Linear growth can be represented by the simple expression \(y = ax+b\), where \(a\) is an arbitrary positive constant and \(b\) is an arbitrary constant. A special case of linear growth is illustrated by stacking pennies on a checkerboard \((a = 1, b = 0)\). If we use the rule of placing one penny on the first square, two pennies on the second square, three pennies on the third square, and so forth, we will end up with 64 pennies stacked on the last square. The total number of pennies on the checkerboard will be 2080. Exponential growth can be represented by the expression \(y = n_{0}e^{ax}\), where \(n_{0}\) is some initial quantity (say the initial number of pennies to be stacked) and \(a\) is an arbitrary positive constant. (\(n_{0}\) is called ‘initial’ because when \(x = 0\) (the ‘initial time’), we get \(y = n_{0}\).) Going back to our penny stacking analogy \((a = 1)\), we again start with placing 1 penny on the first square, but now about 2.7 pennies are stacked on the second square, about 7.4 pennies on the third square, and so forth, and we finally end up with about \(2.3 \times 10^{27}\) pennies stacked on the last square! Clearly, exponential growth outpaces linear very rapidly. Finally, we have geometric growth, which can be represented by the expression \(y = a^{bx}\), where \(a\) and \(b\) are arbitrary positive constants. Note that in the case \(a = e\) and \(b = 1\), we recover the exponential case. Many authors consider an important mark of chaos to be trajectories issuing from nearby points diverging from one another exponentially quickly. However, it is also possible for trajectory divergence to be faster than exponential. Take Poincaré’s example of a molecule in a gas of \(N\) molecules.
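The checkerboard figures above can be verified in a few lines (a quick Python check of mine, not from the entry; note that \(e^{63} \approx 2.3 \times 10^{27}\)):

```python
import math

# Linear growth y = a*x + b with a = 1, b = 0: square x holds x pennies.
linear = list(range(1, 65))
print(linear[-1], sum(linear))   # 64 on the last square, 2080 in total

# Exponential growth y = n0 * e^(a*x) with n0 = 1, a = 1: square k holds e^(k-1).
exponential = [math.exp(k - 1) for k in range(1, 65)]
print(round(exponential[1], 1), round(exponential[2], 1))  # 2.7 and 7.4
print(f"{exponential[-1]:.2e}")  # 2.29e+27 on the last square
```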
If this molecule suffered the slightest of deviations from its initial starting point and you compared the molecule’s trajectories from these two slightly different starting points, the resulting trajectories would diverge at a geometric rate, to the \(n\)th power, due to the \(n\) subsequent collisions, each being different than what it would have been had there been no slight change in the initial condition. A third example discussed by Poincaré is of a man walking on a street on his way to his business. He starts out at a particular time. Meanwhile, unknown to him, there is a tiler working on the roof of a building on the same street. The tiler accidentally drops a tile, killing the businessman. Had the businessman started out at a slightly earlier or later time, the outcome of his trajectory would have been vastly different! Many intuitively think that the example of the businessman is qualitatively different from Poincaré’s other two examples and has nothing to do with chaos at all. However, the cone unstably balanced on its tip that begins to fall also is not a chaotic system, as it has no other identifying features usually picked out as belonging to chaotic dynamics, such as nonlinear behavior (see below). Furthermore, it only has one unstable point—the tip—whereas chaos usually requires instability at nearly all points in a region (see below). To be able to identify systems as chaotic or not, we need a definition or a list of distinguishing characteristics. But coming up with a workable, broadly applicable definition of chaos has been problematic. To begin, chaos is typically understood as a mathematical property of a dynamical system. A dynamical system is a deterministic mathematical model, where time can be either a continuous or a discrete variable. Such models may be studied as mathematical objects or may be used to describe a target system (some kind of physical, biological or economic system, say).
I will return to the question of using mathematical models to represent actual-world systems throughout this article. For our purposes, we will consider a mathematical model to be deterministic if it exhibits unique evolution: (Unique Evolution) A given state of a model is always followed by the same history of state transitions. A simple example of a dynamical system would be the equations describing the motion of a pendulum. The equations of a dynamical system are often referred to as dynamical or evolution equations describing the change in time of variables taken to adequately describe the target system (e.g., the velocity as a function of time for a pendulum). A complete specification of the initial state of such equations is referred to as the initial conditions for the model, while a characterization of the boundaries for the model domain are known as the boundary conditions. An example of a dynamical system with a boundary condition would be the equation modeling the flight of a rubber ball fired at a wall by a small cannon. The boundary condition might be that the wall absorbs no kinetic energy (energy of motion) so that the ball is reflected off the wall with no loss of energy. The initial conditions would be the position and velocity of the ball as it left the mouth of the cannon. The dynamical system would then describe the flight of the ball to and from the wall. Although some popularized discussions of chaos have claimed that it invalidates determinism, there is nothing inconsistent about systems having the property of unique evolution while exhibiting chaotic behavior (much of the confusion over determinism derives from equating determinism with predictability—see below). While it is true that apparent randomness can be generated if the state space (see below) one uses to analyze chaotic behavior is coarse-grained, this produces only an epistemic form of nondeterminism. The underlying equations are still fully deterministic. 
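As a minimal illustration of state, initial conditions and a boundary condition (a toy sketch of mine, ignoring gravity, not from the entry), the cannon-ball-and-wall system can be coded directly:

```python
# A toy dynamical system with a boundary condition: a ball fired toward a
# wall at x = 0 that reflects it with no loss of kinetic energy. The state
# is (position, velocity); the initial conditions are x0 and v0.

def trajectory(x0, v0, dt=0.01, steps=1000):
    x, v = x0, v0
    path = []
    for _ in range(steps):
        x += v * dt               # free flight
        if x <= 0.0:              # boundary condition: elastic reflection
            x, v = -x, -v         # |v| unchanged, so kinetic energy is conserved
        path.append(x)
    return path

# Unique evolution: the same initial state always yields the same history.
p1 = trajectory(5.0, -2.0)
p2 = trajectory(5.0, -2.0)
assert p1 == p2
```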
If there is a breakdown of determinism in chaotic systems, that can only occur if there is some kind of indeterminism introduced such that the property of unique evolution is rendered false (e.g., §4 below). The dynamical systems of interest in chaos studies are nonlinear, such as the Lorenz model equations for convection in fluids: \[\begin{align*} \frac{dx}{dt} &= -\sigma x + \sigma y; \\ \tag{Lorenz} \frac{dy}{dt} &= rx - y - xz; \\ \frac{dz}{dt} &= xy - bz.\\ \end{align*}\] A dynamical system is characterized as linear or nonlinear depending on the nature of the equations of motion describing the target system. Consider a differential equation system, such as \(d\mathbf{x}/dt = \mathbf{F}\mathbf{x}\) for a set of variables \(\mathbf{x} = x_1, x_2, \ldots, x_n\). These variables might represent positions, momenta, chemical concentration or other key features of the target system, and the system of equations tells us how these key variables change with time. Suppose that \(\mathbf{x}_{1}(t)\) and \(\mathbf{x}_{2}(t)\) are solutions of the equation system \(d\mathbf{x}/dt = \mathbf{F}\mathbf{x}\). If the system of equations is linear, it can easily be shown that \(\mathbf{x}_{3}(t) = a\mathbf{x}_{1}(t) + b\mathbf{x}_{2}(t)\) is also a solution, where \(a\) and \(b\) are constants. This is known as the principle of linear superposition. So if the matrix of coefficients \(\mathbf{F}\) does not contain any of the variables \(\mathbf{x}\) or functions of them, then the principle of linear superposition holds. If the principle of linear superposition holds, then, roughly, a system behaves linearly: any multiplicative change in a variable, by a factor \(\alpha\) say, implies a multiplicative or proportional change of its output by \(\alpha\). For example, if you start with your stereo at low volume and turn the volume control one unit, the volume increases by one unit. If you now turn the control two units, the volume increases two units. This is an example of a linear response.
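Superposition can be checked numerically. The sketch below (my code, using simple Euler integration with standard Lorenz parameter values, not from the entry) evolves two solutions of a linear oscillator and confirms their sum is again a solution, then repeats the test for the nonlinear Lorenz equations, where it fails:

```python
def euler(deriv, state, dt, n):
    """Integrate d(state)/dt = deriv(state) with n Euler steps of size dt."""
    for _ in range(n):
        state = [s + dt * d for s, d in zip(state, deriv(state))]
    return state

def linear(s):
    # A linear system dx/dt = F x with F = [[0, 1], [-1, 0]] (an oscillator).
    x, y = s
    return [y, -x]

def lorenz(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # The (nonlinear) Lorenz equations.
    x, y, z = s
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

dt, n = 0.001, 2000

# Linear case: evolve u, v and u+v separately; the sum of solutions is a solution.
u = euler(linear, [1.0, 0.0], dt, n)
v = euler(linear, [0.0, 1.0], dt, n)
uv = euler(linear, [1.0, 1.0], dt, n)
lin_res = max(abs(a + b - c) for a, b, c in zip(u, v, uv))

# Nonlinear case: the same test fails for the Lorenz system.
p = euler(lorenz, [1.0, 1.0, 1.0], dt, n)
q = euler(lorenz, [2.0, 1.0, 1.0], dt, n)
pq = euler(lorenz, [3.0, 2.0, 2.0], dt, n)
nonlin_res = max(abs(a + b - c) for a, b, c in zip(p, q, pq))

print(lin_res)     # essentially zero (rounding error only): superposition holds
print(nonlin_res)  # large: superposition fails
```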
In a nonlinear system, such as (Lorenz), linear superposition fails and a system need not change proportionally to the change in a variable. If you turn your volume control too far, the volume may not only increase more than the number of units of the turn, but whistles and various other distortions occur in the sound. These are examples of a nonlinear response. Much of the modeling of physical systems takes place in what is called state space, an abstract mathematical space of points where each point represents a possible state of the system. An instantaneous state is taken to be characterized by the instantaneous values of the variables considered crucial for a complete description of the state. One advantage of working in state space is that it often allows us to study useful geometric properties of the trajectories of the target system without knowing the exact solutions to the dynamical equations. When the state of the system is fully characterized by position and momentum variables, the resulting space is often called phase space. A model can be studied in state space by following its trajectory from the initial state to some chosen final state. The evolution equations govern the path—the history of state transitions—of the system in state space. However, note that some crucial assumptions are being made here. We are assuming, for instance, that a state of a system is characterized by the values of the crucial variables and that a physical state corresponds via these values to a point in state space. These assumptions allow us to develop mathematical models for the evolution of these points in state space and such models are taken to represent (perhaps through an isomorphism or some more complicated relation) the physical systems of interest. In other words, we assume that our mathematical models are faithful representations of physical systems and that the state spaces employed faithfully represent the space of actual possibilities of target systems. 
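The earlier idea of trajectories issuing from nearby points in state space can be seen directly by integrating the Lorenz equations from two initial conditions differing by \(10^{-8}\) (a rough Euler sketch of mine, with the standard parameter values, rather than anything from the entry):

```python
def lorenz_step(s, dt, sigma=10.0, r=28.0, b=8.0 / 3.0):
    # One Euler step of the Lorenz equations.
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (r * x - y - x * z),
            z + dt * (x * y - b * z))

def run(s, dt=0.001, n=25000):
    for _ in range(n):
        s = lorenz_step(s, dt)
    return s

# Two state-space trajectories whose starting points differ by 1e-8 in x.
s1 = run((1.0, 1.0, 1.0))
s2 = run((1.0 + 1e-8, 1.0, 1.0))
sep = sum((p - q) ** 2 for p, q in zip(s1, s2)) ** 0.5
print(sep)  # of the order of the attractor itself: 1e-8 amplified enormously
```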
This package of assumptions is known as the faithful model assumption (e.g., Bishop 2005), and, in its idealized limit—the perfect model scenario—it can license the (perhaps sloppy) slide between model talk and system talk (i.e., whatever is true of the model is also true of the target system and vice versa). In the context of nonlinear models, faithfulness appears to be inadequate (§3). The question of defining chaos is basically the question of what makes a dynamical system such as (Lorenz) chaotic rather than nonchaotic. But this turns out to be a hard question to answer! Stephen Kellert defines chaos theory as “the qualitative study of unstable aperiodic behavior in deterministic nonlinear dynamical systems” (1993, p. 2). This definition restricts chaos to being a property of nonlinear dynamical systems (although in his (1993), Kellert is sometimes ambiguous as to whether chaos is only a behavior of mathematical models or of actual-world systems). That is, chaos is chiefly a property of particular types of mathematical models. Furthermore, Kellert’s definition picks out two key features that are simultaneously present: instability and aperiodicity. Unstable systems are those exhibiting SDIC (sensitive dependence on initial conditions). Aperiodic behavior means that the system variables never repeat any values in any regular fashion. I take it that the “theory” part of his definition has much to do with the “qualitative study” of such systems, so let’s leave that part for §2. Chaos, then, appears to be unstable aperiodic behavior in nonlinear dynamical systems. This definition is both qualitative and restrictive. It is qualitative in that there are no mathematically precise criteria given for the unstable and aperiodic nature of the behavior in question, although there are some ways of characterizing these aspects (the notions of dynamical system and nonlinearity have precise mathematical meanings).
Of course one can add mathematically precise definitions of instability and aperiodicity, but this precision may not actually lead to useful improvements in the definition (see below). The definition is restrictive in that it limits chaos to being a property of mathematical models, so the import for actual physical systems becomes tenuous. At this point we must invoke the faithful model assumption—namely, that our mathematical models and their state spaces have a close correspondence to target systems and their possible behaviors—to forge a link between this definition and chaos in actual systems. Immediately we face two related questions here:

1. How faithful are our models? How strong is the correspondence with target systems? This relates to issues in realism and explanation (§5) as well as confirmation (§3).
2. Do features of our mathematical analyses, e.g., characterizations of instability, turn out to be oversimplified or problematic, such that their application to physical systems may not be useful?

Furthermore, Kellert’s definition may also be too broad to pick out only chaotic behaviors. For instance, take the iterative map \(x_{n + 1} = cx_{n}\). This map obviously exhibits only orbits that are unstable and aperiodic. For instance, choosing the values \(c = 1.1\) and \(x_{0} = .5\), successive iterations will continue to increase and never return near the original value of \(x_{0}\). So Kellert’s definition would classify this map as chaotic, but the map does not have any other properties qualifying it as chaotic. This suggests Kellert’s definition of chaos would pick out a much broader set of behaviors than what is normally accepted as chaotic. Part of Robert Batterman’s (1993) essay discusses problematic definitions of chaos, namely, those that focus on notions of unpredictability. Unpredictability certainly is neither necessary nor sufficient to distinguish chaos from sheer random behavior. Batterman does not actually specify an alternative definition of chaos.
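The putative counterexample above, the map \(x_{n+1} = cx_n\) with \(c = 1.1\) and \(x_0 = 0.5\), is easy to check numerically. A minimal Python sketch showing monotone, unbounded growth with no recurrence:

```python
c, x = 1.1, 0.5
orbit = [x]
for _ in range(50):
    x = c * x          # each step multiplies the state by c > 1
    orbit.append(x)

print(orbit[1])    # 0.55: already moving away from x0
print(orbit[-1])   # roughly 58.7 (= 0.5 * 1.1**50): unbounded growth, no recurrence
```

The orbit is unstable and aperiodic in the sense of Kellert’s definition, yet it simply runs off to infinity rather than exhibiting anything like chaotic mixing.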
He suggests that exponential instability—the exponential divergence of two trajectories issuing forth from neighboring initial conditions (taken by many as the defining feature of SDIC)—is a necessary condition, but leaves it open as to whether it is sufficient.

Figure 1: The Lorenz Attractor

However, what does appear to pass as a crucial feature of chaos for Batterman—a definition if you will—is the presence of a kind of “stretching and folding” mechanism in the dynamics (see the discussion on p. 49 and figure 5 of his essay). Basically such a mechanism will cause some trajectories to converge rapidly while causing other trajectories to diverge rapidly. Such a mechanism would tend to cause trajectories issuing from various points in some small neighborhood of state space to mix and separate in rather dramatic ways. For instance, some initially neighboring trajectories on the Lorenz attractor (Figure 1) become separated, where some end up on one wing while others end up on the other wing rather rapidly. This stretching and folding is part of what leads to definitions of the distance between trajectories in state space as increasing (diverging) on average. The presence of such a mechanism in the dynamics, Batterman believes, is a necessary condition for chaos. As such, this defining characteristic could be applied to both mathematical models and actual-world systems, though the identification of such mechanisms in target systems may be rather tricky. Let us start with the property of SDIC and distinguish weak and strong forms of sensitive dependence (somewhat following Smith 1998). Weak sensitive dependence can be characterized as follows. Consider the propagator, \(\bJ(\bx(t))\), a function that evolves trajectories \(\bx(t)\) in time (an example of a propagator is given in the Appendix). Let \(\bx(0)\) and \(\by(0)\) be initial conditions for two different trajectories.
Then, weak sensitive dependence (WSD) can be defined as follows: A system characterized by \(\bJ(\bx(t))\) has the property of weak sensitive dependence on its initial conditions if and only if \(\exists \varepsilon \gt 0\) \(\forall \bx(0)\) \(\forall \delta \gt 0\) \(\exists t\gt 0\) \(\exists \by(0)\), \(\abs{\bx(0) - \by(0)} \lt \delta\) and \(\abs{\bJ(\bx(t)) - \bJ(\by(t))} \gt \varepsilon.\) The essential idea is that the propagator acts so that no matter how close together \(\bx(0)\) and \(\by(0)\) are, the trajectory initiating from \(\by(0)\) will eventually diverge by \(\varepsilon\) from the trajectory initiating from \(\bx(0)\). However, WSD does not specify the rate of divergence (it is compatible with linear rates of divergence) nor does it specify how many points surrounding \(\bx(0)\) will give rise to diverging trajectories (it could be a set of any measure, e.g., zero). On the other hand, chaos is usually characterized by a strong form of sensitive dependence (SD): \(\exists \lambda\) such that for almost all points \(\bx(0)\), \(\forall \delta \gt 0\) \(\exists t\gt 0\) such that for almost all points \(\by(0)\) in a small neighborhood \((\delta)\) around \(\bx(0)\), \(\abs{\bx(0) - \by(0)}\lt \delta\) and \(\abs{\bJ(\bx(t)) - \bJ(\by(t))} \approx \abs{\bJ(\bx(0)) - \bJ(\by(0))}e^{\lambda t}\), where the “almost all” caveat is understood as applying for all points in state space except a set of measure zero. Here, \(\lambda\) is interpreted as the largest global Lyapunov exponent (see the Appendix) and is taken to represent the average rate of divergence of neighboring trajectories issuing forth from some small neighborhood centered around \(\bx(0)\). Exponential growth is implied if \(\lambda \gt 0\) (convergence if \(\lambda \lt 0\)). In general, such growth cannot go on forever. If the system is bounded in space and in momentum, there will be limits as to how far nearby trajectories can diverge from one another.
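The exponential divergence in SD can be illustrated numerically. The Python sketch below uses the logistic map \(f(x) = 4x(1 - x)\), whose largest Lyapunov exponent is known to be \(\ln 2\), as a stand-in for a chaotic system (the particular map, seed, and separation are illustrative choices):

```python
import math

f = lambda x: 4.0 * x * (1.0 - x)    # logistic map at r = 4, a standard chaotic example

delta = 1e-12                         # tiny initial separation between two trajectories
x, y = 0.3, 0.3 + delta
seps = []
for _ in range(30):
    x, y = f(x), f(y)
    seps.append(abs(x - y))

# Average per-step stretching over 30 iterations, while separations are still small:
lam_est = math.log(seps[-1] / delta) / 30
print(lam_est)   # typically close to ln 2 = 0.693...
```

While the separation remains tiny, it grows roughly like \(\delta e^{\lambda t}\); once it becomes comparable to the size of the attractor the growth must stop, which is the boundedness point just made.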
Note that according to SD, Poincaré’s first two examples would fail to qualify as characterizing a chaotic system (the first one exhibits an entire range of growth rates from zero to larger than exponential, while the second one exhibits growth larger than exponential). On the other hand, these examples do satisfy WSD. One strategy for devising a definition for chaos is to begin with discrete maps and then generalize to the continuous case. For example, if one begins with a continuous system, by using a Poincaré surface of section—roughly, a two-dimensional plane is defined and one plots the intersections of trajectories with this plane—a discrete map can be generated. If the original continuous system exhibits chaotic behavior, then the discrete map generated by the surface of section will also be chaotic because the surface of section will have the same topological properties as the continuous system. Robert Devaney’s influential definition of chaos (1989) was proposed in this fashion. Let \(f\) be a function defined on some state space \(S\). In the continuous case, \(f\) would vary continuously on \(S\) and we might have a differential equation specifying how \(f\) varies. In the discrete case, \(f\) can be thought of as a mapping that can be iterated or reapplied a number of times. To indicate this, we can write \(f^{n}(x)\), meaning \(f\) is applied iteratively \(n\) times. For instance, \(f^{3}(x)\) would indicate \(f\) has been applied three times, thus \(f^{3}(x) = f(f(f(x)))\) (Robert May’s classic 1976 review article has a nice discussion of this for the logistic map, \(x_{n + 1} = rx_{n}(1 - x_{n})\), which arises in modeling the dynamics of predator-prey relations, for instance.). Furthermore, let \(K\) be a subset of \(S\). Then \(f(K)\) represents \(f\) applied to the set of points \(K\), that is, \(f\) maps the set \(K\) into \(f(K)\). If \(f(K) = K\), then \(K\) is an invariant set under \(f\). 
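The iteration notation \(f^{n}(x)\) is straightforward to express in code. A minimal Python sketch, using May’s logistic map with an arbitrary illustrative parameter value \(r = 3.9\):

```python
def iterate(f, x, n):
    """Apply the map f to x a total of n times, i.e. compute f^n(x)."""
    for _ in range(n):
        x = f(x)
    return x

# May's logistic map; r = 3.9 is an illustrative parameter choice.
logistic = lambda x: 3.9 * x * (1.0 - x)

x0 = 0.2
print(iterate(logistic, x0, 3))            # f^3(x0)
print(logistic(logistic(logistic(x0))))    # the same value, written out explicitly
```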
Now Devaney’s definition of chaos can be stated as follows: A continuous map \(f\) is chaotic if \(f\) has an invariant set \(K\subseteq S\) such that

1. \(f\) satisfies WSD on \(K\),
2. the set of points initiating periodic orbits is dense in \(K\), and
3. \(f\) is topologically transitive on \(K\).

Topological transitivity is the following notion: consider open sets \(U\) and \(V\) around the points \(u\) and \(v\) respectively. Regardless of how small \(U\) and \(V\) are, some trajectory initiating from \(U\) eventually visits \(V\). This condition roughly guarantees that trajectories starting from points in \(U\) will eventually fill \(S\) densely. Taken together, these three conditions represent an attempt to precisely characterize the kind of irregular, aperiodic behavior we expect chaotic systems to exhibit. Devaney’s definition has the virtues of being precise and compact. However, objections have been raised against it. Since the time he proposed his definition, it has been shown that (2) and (3) imply (1) if the set \(K\) has an infinite number of elements (see Banks et al. 1992), although this result does not hold for sets with finite elements. More to the point, the definition seems counterintuitive in that it emphasizes periodic orbits rather than aperiodicity, but the latter seems a much better characterization of chaos. After all, it is precisely the lack of periodicity that is characteristic of chaos. To be fair to Devaney, however, he casts his definition in terms of unstable periodic points, the kind of points where trajectories issuing forth from neighboring points would exhibit WSD. If the set of unstable periodic points is dense in \(K\), then we have a guarantee that the kinds of aperiodic orbits characteristic of chaos will be abundant. Some have argued that (2) is not even necessary for characterizing chaos (e.g., Robinson 1995, pp. 83–4).
Furthermore, nothing in Devaney’s definition hints at the stretching and folding of trajectories, which appears to be a necessary condition for chaos from a qualitative perspective. Peter Smith (1998, pp. 176–7) suggests that Chaos\(_{d}\) is, perhaps, a consequence rather than a mark of chaos. Another possibility for capturing the concept of the folding and stretching of trajectories so characteristic of chaotic dynamics is the following: A discrete map \(f\) is chaotic if, for some iteration \(n \ge 1\), it maps the unit interval \(I\) into a horseshoe (see Figure 2).

Figure 2: The Smale Horseshoe

To construct the Smale horseshoe map (Figure 2), start with the unit square (indicated in yellow). First, stretch it in the \(y\) direction by more than a factor of two. Then compress it in the \(x\) direction by more than a factor of two. Now, fold the resulting rectangle and lay it back onto the square so that the construction overlaps and leaves the middle and vertical edges of the initial unit square uncovered. Repeating these stretching and folding operations leads to the Smale attractor. This definition has at least two virtues. First, it can be proven that Chaos\(_{h}\) implies Chaos\(_{d}\). Second, it yields exponential divergence, so we get SD, which is what many people expect for chaotic systems. However, it has a significant disadvantage in that it cannot be applied to invertible maps, the kinds of maps characteristic of many systems exhibiting Hamiltonian chaos. A Hamiltonian system is one where the total kinetic energy plus potential energy is conserved; in contrast, dissipative systems lose energy through some dissipative mechanism such as friction or viscosity. Hamiltonian chaos, then, is chaotic behavior in a Hamiltonian system. Other possible definitions have been suggested in the literature. For instance (Smith 1998, pp.
181–2), A discrete map is chaotic just in case it exhibits positive topological entropy: Let \(f\) be a discrete map and \(\{W_{i}\}\) be a partition of a bounded region \(W\) containing a probability measure which is invariant under \(f\). Then the topological entropy of \(f\) is defined as \(h_{T}(f) = \sup_{\{W_i\}} h(f,\{W_i\})\), where the supremum is taken over all partitions \(\{W_{i}\}\). Roughly, given the points in a neighborhood \(N\) around \(\bx(0)\) less than \(\varepsilon\) away from each other, after \(n\) iterates of \(f\) the trajectories initiating from the points in \(N\) will differ by \(\varepsilon\) or greater, where more and more trajectories will differ by at least \(\varepsilon\) as \(n\) increases. In the case of one-dimensional maps, however, it can be shown that Chaos\(_{h}\) implies Chaos\(_{te}\). So this does not look to be a basic definition, though it is often more useful for proving theorems relative to the other definitions. Another candidate, often found in the physics literature, is A discrete map is chaotic if it has a positive global Lyapunov exponent. The meaning of positivity here is that a global Lyapunov exponent is positive for almost all points in the specified set \(S\). This definition certainly is directly connected to SD and is one physicists often use to characterize systems as chaotic. Furthermore, it offers practical advantages when it comes to calculations and can often be “straightforwardly” related to experimental data in the sense of examining data sets generated from physical systems for global Lyapunov exponents.^[2] One might think that SD, Chaos\(_{te}\) or Chaos\(_{\lambda}\) could be sufficient for defining chaos, but these characterizations run into problems from simple counterexamples.
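Chaos\(_{\lambda}\) lends itself to direct computation. For the logistic map at \(r = 4\) (an illustrative choice; its global Lyapunov exponent is known analytically to be \(\ln 2\)), the exponent can be estimated by averaging \(\ln\lvert f'(x_n)\rvert\) along an orbit, a standard numerical recipe:

```python
import math

r  = 4.0
f  = lambda x: r * x * (1.0 - x)        # logistic map
df = lambda x: r * (1.0 - 2.0 * x)      # its derivative

x, total, n = 0.2, 0.0, 10_000
for _ in range(n):
    x = f(x)
    total += math.log(abs(df(x)))       # accumulate local stretching rates along the orbit

lam = total / n                          # estimate of the global Lyapunov exponent
print(lam)   # approaches ln 2 = 0.693... as n grows
```

A positive value of \(\lambda\) is exactly what Chaos\(_{\lambda}\) demands; the counterexamples that follow show why this by itself is not enough.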
For instance, consider a discrete dynamical system with \(S = [0, \infty)\), the absolute value as a metric (i.e., as the function that defines the distance between two points) on \(\mathbf{R}\), and a mapping \(f: [0, \infty) \rightarrow [0,\infty)\), \(f(x) = cx\), where \(c \gt 1\). In this dynamical system, all neighboring trajectories diverge exponentially fast, but all accelerate off to infinity. However, chaotic dynamics is usually characterized as being confined to some attractor—a strange attractor (see sec. 5.1 below) in the case of dissipative systems, the energy surface in the case of Hamiltonian systems. This confinement need not be due to physical walls of some container. If, in the case of Hamiltonian chaos, the dynamics is confined to an energy surface (by the action of a force like gravity), this surface could be spatially unbounded. So at the very least some additional conditions are needed (e.g., conditions that guarantee trajectories in state space are dense). In much physics and philosophy literature, something like the following set of conditions seems to be assumed as adequately defining chaos:

a. Trajectories are confined due to some kind of stretching and folding mechanism.
b. Some trajectory orbits are aperiodic, meaning that they do not repeat themselves on any time scales.
c. Trajectories exhibit SD or Chaos\(_{\lambda}\).

Of these three features, (c) is often taken to be crucial to defining SDIC and is often suspected as being related to the other two. That is to say, exponential growth in the separation of neighboring trajectories characterized by \(\lambda\) is taken to be a property of a particular kind of dynamics that can only exist in nonlinear systems and models. Though the favored approaches to defining chaos involve global Lyapunov exponents, there are problems with this way of defining SDIC (and, hence, characterizing chaos).
First, the definition of global Lyapunov exponents involves the infinite time limit (see the Appendix), so, strictly speaking, \(\lambda\) only characterizes growth in uncertainties as \(t\) increases without bound, not for any finite \(t\). So the combination \(\exists \lambda\) and \(\exists t\gt 0\) in SD is inconsistent. At best, SD can only hold in the large time limit and this implies that chaos as a phenomenon can only arise in this limit, contrary to what we take to be our best evidence. Furthermore, neither our models nor physical systems run for infinite time, but an infinitely long time is required to verify the presumed exponential divergence of trajectories issuing from infinitesimally close points in state space. One might try to get around these problems by invoking the standard physicist’s assumption that an infinite-time limit can be used to effectively represent some large but finite elapsed time. However, one reason to doubt this assumption in the context of chaos is that the calculation of finite-time Lyapunov exponents does not usually lead to on-average exponential growth as characterized by global Lyapunov exponents (e.g., Smith, Ziehmann and Fraedrich 1999). In general, for finite times the propagator varies from point to point in state space (i.e., it is a function of the position \(\bx(t)\) in state space and only approaches a constant in the infinite time limit), implying that the local finite-time Lyapunov exponents vary from point to point. Therefore, trajectories diverge and converge from each other at various rates as they evolve in time—the uncertainty does not vary uniformly in the chaotic region of state space (Smith, Ziehmann and Fraedrich 1999; Smith 2000). This is in contrast to global Lyapunov exponents, which are on-average global measures of trajectory divergence and which imply that uncertainty grows uniformly (for \(\lambda \gt 0\)), but such uniform growth rarely occurs outside a few simple mathematical models.
For instance, the Lorenz, Moore-Spiegel, Rössler, Henon and Ikeda attractors all possess regions dominated by decreasing uncertainties in time, where uncertainties associated with different trajectories issuing forth from some small neighborhood shrink for the amount of time trajectories remain within such regions (e.g., Smith, Ziehmann and Fraedrich 1999, pp. 2870–9; Ziehmann, Smith and Kurths 2000, pp. 273–83). Hence, on-average exponential growth in trajectory divergence is not guaranteed for chaotic dynamics. Linear stability analysis can indicate when nonlinearities can be expected to dominate the dynamics, and local finite-time Lyapunov exponents can indicate regions on an attractor where these nonlinearities will cause all uncertainties to decrease—cause trajectories to converge rather than diverge—so long as trajectories remain in those regions. To summarize, the folklore that trajectories issuing forth from neighboring points will diverge on-average exponentially in a chaotic region of state space is false in any sense other than for infinitesimal uncertainties in the infinite time limit for simple mathematical models. The second problem with the standard account is that there simply is no implication that finite uncertainties will exhibit an on-average growth rate characterized by any Lyapunov exponents, local or global. For example, the linearized dynamics used to derive global Lyapunov exponents presupposes infinitesimal uncertainties (Appendix (A1)–(A5)). But when uncertainties are finite, such dynamics do not apply and no valid conclusions can be drawn about the dynamics of finite uncertainties from the dynamics of infinitesimal uncertainties. Certainly infinitesimal uncertainties never become finite in finite time (barring super exponential growth). 
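The point-to-point variation of local stretching rates is visible even in a one-dimensional example. In this Python sketch (the logistic map again; the attractors named above are multidimensional, so this is only illustrative), the one-step finite-time exponent \(\ln\lvert f'(x)\rvert\) is recorded along an orbit and takes both signs:

```python
import math

f  = lambda x: 4.0 * x * (1.0 - x)
df = lambda x: 4.0 * (1.0 - 2.0 * x)

x = 0.2
exps = []
for _ in range(1000):
    exps.append(math.log(abs(df(x))))   # one-step local exponent at the current state
    x = f(x)

# Negative entries mark states (x near 1/2) where nearby trajectories converge:
print(min(exps), max(exps))
```

The negative entries correspond to regions where uncertainties shrink while trajectories pass through, in line with the convergence behavior described above.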
Even if infinitesimal uncertainties became finite after a finite time, that would presuppose the dynamics is unconfined, whereas the interesting features of nonlinear dynamics usually take place in subregions of state space. Presupposing an unconfined dynamics would be inconsistent with the features we are typically trying to capture. Can the on-average exponential growth rate characterizing SD ever be attributed legitimately to diverging trajectories if their separation is no longer infinitesimal? Examining simple models (e.g., the Baker’s transformation) might seem to indicate yes. However, answering this question requires some care for more complex systems like the Lorenz or Moore-Spiegel attractors. It may turn out that the rate of divergence in the finite separation between two nearby trajectories in a chaotic region changes character numerous times over the course of their winding around in state space, sometimes faster, sometimes slower than that calculated from global Lyapunov exponents, sometimes contracting, sometimes diverging (Smith, Ziehmann and Fraedrich 1999; Ziehmann, Smith and Kurths 2000). But in the long run, some of these trajectories could effectively diverge as if there were on-average exponential growth in uncertainties as characterized by global Lyapunov exponents. However, it is conjectured that the set of initial points in the state space exhibiting this behavior is a set of measure zero, meaning, in this context, that although there are an infinite number of points exhibiting this behavior, this set represents zero percent of the number of points composing the attractor. The details of the kinds of divergence (convergence) neighboring trajectories undergo turn on the detailed structure of the dynamics (i.e., it is determined point-by-point by local growth and convergence of finite uncertainties and not by any Lyapunov exponents). But as a practical matter, all finite uncertainties saturate at the diameter of the attractor.
This is to say that the uncertainty reaches some maximum amount of spreading after a finite time and is not well quantified by global measures derived from Lyapunov exponents (e.g., Lorenz 1965). So the folklore—that on-average exponential divergence of trajectories characterizes chaotic dynamics—is misleading for nonlinear models and systems, in particular the ones we want to label as chaotic. Therefore, drawing an inference from the presence of positive global Lyapunov exponents to the existence of on-average exponentially diverging trajectories is invalid. This has implications for defining chaos because exponential growth parametrized by global Lyapunov exponents turns out not to be an appropriate measure. Hence, SD or Chaos\(_{\lambda}\) turn out to be misleading definitions of chaos. Finally, I want to briefly draw attention to the observer-dependent nature of global Lyapunov exponents in the special theory of relativity. As has been recently demonstrated (Zheng, Misra and Atmanspacher 2003), global Lyapunov exponents change in magnitude under Lorentz transformations, though not in sign—e.g., positive Lyapunov exponents are always positive under Lorentz transformations. Moreover, under Rindler transformations, global Lyapunov exponents are not invariant, so that a system characterized as chaotic under SD or Chaos\(_{\lambda}\) for an accelerated Rindler observer turns out to be nonchaotic for an inertial Minkowski observer, and any system that is chaotic for an inertial Minkowski observer is nonchaotic for an accelerated Rindler observer. So along with the simultaneity subtleties raised for observers by Einstein’s theory of special relativity (see the entry on conventionality of simultaneity), chaos, at least under SD or Chaos\(_{\lambda}\), turns out to also have observer-dependent features for pairs of observers in different reference frames. What these features mean for our understanding of the phenomenon of chaos remains largely unexplored.
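The saturation of finite uncertainties at the attractor’s diameter, discussed above, is easy to exhibit numerically. In this Python sketch (logistic map at \(r = 4\), illustrative as before, with state space \([0, 1]\) so the “diameter” is 1), an initially tiny separation grows and then merely fluctuates below that bound:

```python
f = lambda x: 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12       # two trajectories starting almost together
history = []
for _ in range(200):
    x, y = f(x), f(y)
    history.append(abs(x - y))

print(history[0])             # still tiny: the orbits have barely separated
print(max(history))           # bounded by the diameter of the state space (1 here)
```

After roughly 40 iterations the separation is no longer small; from then on it wanders irregularly at order of the attractor size, no longer described by any exponential growth law.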
There is no consensus regarding a precise definition of chaotic behavior among mathematicians and physicists, although physicists often prefer Chaos\(_{h}\) or Chaos\(_{\lambda}\). The latter definitions, however, are trivially false for finite uncertainties in real systems and of limited applicability for mathematical models. It also appears to be the case that there is no one “right” or “correct” definition, but that varying definitions have varying strengths and weaknesses regarding tradeoffs on generality, theorem-generation, calculation ease and so forth. The best candidates for necessary conditions for chaos still appear to be (1) WSD, which is rather weak, or (2) the presence of stretching and folding mechanisms (“pulls trajectories apart” in one dimension while “compressing them” in another). The other worry is that the definitions we have been considering may only hold for our mathematical models, but may not be applicable to actual target systems. The formal definitions seek to fully characterize chaotic behavior in mathematical models, but we are also interested in capturing chaotic behavior in physical and biological systems as well. Phenomenologically, the kinds of chaotic behaviors we see in actual-world systems exhibit features such as SDIC, aperiodicity, unpredictability, instability under small perturbations and apparent randomness. However, given that target systems run for only a finite amount of time and that the uncertainties are always larger than infinitesimal, such systems violate the assumptions necessary for deriving SD. In other words, even if we have good statistical measures that yield on average exponential growth in uncertainties for a physical data set, what guarantee do we have that this corresponds with the exponential growth of SD? After all, any growth in uncertainties (alternatively, any growth in distance between neighboring trajectories) can be fitted with an exponential. 
If there is no physical significance to global Lyapunov exponents (because they only apply to infinitesimal uncertainties), then one is free to choose any parameter to fit an exponential for the growth in uncertainties. So where does this leave us regarding a definition of chaos? Are all our attempts at definitions inadequate? Is there only one definition for chaos, and if so, is it only a mathematical property or also a physical one? Do we, perhaps, need multiple definitions (some of which are nonequivalent) to adequately characterize such complex and intricate behavior? Is it reasonable to expect that the phenomenological features of chaos of interest to physicists and applied mathematicians can be captured in precise mathematical definitions given that there may be irreducible vagueness in the characterization of these features? From a physical point of view, isn’t a phenomenological characterization sufficient for the purpose of identifying and exploring the underlying mechanisms responsible for the stretching and folding of trajectories? The answers to these questions largely lie in our purposes for the kinds of inquiry in which we are engaged (e.g., proving rigorous mathematical theorems vs. detecting chaotic behavior in physical data vs. designing systems to control such behavior). Sitting in the background for all of these discussions is nonlinearity. Chaos only exists in nonlinear systems (at least for classical macroscopic systems; see sec. 6 for subtleties regarding quantum chaos). Nonlinearity appears to be a necessary condition for the stretching and folding mechanisms, and so would seem to be a necessary condition for chaotic behavior. However, there is an alternative way to characterize the systems in which such stretching and folding takes place: nonseparability. As discussed in Section 1.2.2, linear systems always obey the principle of linear superposition. This implies that the Hamiltonians for such systems are always separable.
A separable Hamiltonian can always be transformed into a sum of separate Hamiltonians with one element in the sum corresponding to each subsystem. In effect, a separable system is one where the interactions among subsystems can be transformed away, leaving the subsystems independent of each other. The whole is the sum of the parts, as it were. Chaos is impossible for separable Hamiltonians. For nonlinear systems, by contrast, Hamiltonians are never separable. There are no transformation techniques that can turn a nonseparable Hamiltonian into the sum of separate Hamiltonians. In other words, the interactions in a nonlinear system cannot be decomposed into individual independent subsystems, nor can the whole system and its environment be ignored (Bishop 2010a). Nonseparable classical systems are the kinds of systems where chaotic behavior can manifest itself. So alternatively one could say that nonseparability of a Hamiltonian is a necessary condition for stretching and folding mechanisms and, hence, for chaos (e.g., Kronz 1998). One often finds references in the literature to “chaos theory.” For instance, Kellert characterizes chaos theory as “the qualitative study of unstable aperiodic behavior in deterministic nonlinear dynamical systems” (Kellert 1993, p. 2). In what sense is chaos a theory? Is it a theory in the same sense that electrodynamics or quantum mechanics are theories? Answering such questions is difficult if for no other reason than that there is no consensus about what a theory is. Scientists often treat theories as systematic bodies of knowledge that provide explanations and predictions for actual-world phenomena. But trying to get more specific or precise than this generates significant differences for how to conceptualize theories.
Options here range from the axiomatic or syntactic view of the logical positivists and empiricists (see Vienna Circle) to the semantic or model-theoretic view (see models in science), to Kuhnian (see Thomas Kuhn) and less rigorous conceptions of theories. The axiomatic view of theories appears to be inapplicable to chaos. There are no axioms—no laws—no deductive structures, no linking of observational statements to theoretical statements whatsoever in the literature on chaotic dynamics. Kellert’s (1993) focus on chaos models is suggestive of the semantic view of theories, and many texts and articles on chaos focus on models (e.g., logistic map, Henon map, Lorenz attractor). Briefly, on the semantic view, a theory is characterized by (1) some set of models and (2) the hypotheses linking these models with idealized physical systems. The mathematical models discussed in the literature are concrete and fairly well understood, but what about the hypotheses linking chaos models with idealized physical systems? In the chaos literature, there is a great deal of discussion of various robust or universal patterns and the kinds of predictions that can and cannot be made using chaotic models. Moreover, there is a lot of emphasis on qualitative predictions, geometric “mechanisms” and patterns, but this all comes up short of spelling out hypotheses linking chaos models with idealized physical systems. One possibility is to look for hypotheses about how such models are deployed when studying actual physical systems. Chaos models seem to be deployed to ascertain various kinds of information about bifurcation points, period doubling sequences, the onset of chaotic dynamics, strange attractors and other denizens of the chaos zoo of behaviors. The hypotheses connecting chaos models to physical systems would have to be filled in if we are to employ the semantic conception fully. 
I take it these would be hypotheses about, for example, how strange attractors reconstructed from physical data relate to the physical system from which the data were originally recorded. Or about how a one-dimensional map for a particular full nonlinear model (idealized physical system) developed using, say, Poincaré surface of section techniques, relates to the target system being modeled. Such an approach does seem consistent with the semantic view as illustrated with classical mechanics. There we have various models such as the harmonic oscillator and hypotheses about how these models apply to idealized physical systems, including specifications of spring constants and their identification with mathematical terms in a model, small oscillation limits, and so forth. But in classical mechanics there is a clear association between the models of a theory and the state spaces definable over the variables of those models, with a further hypothesis about the relationship between the model state space and that of the physical system being modeled (the faithful model assumption, §1.2.3). One can translate between the state spaces and the models and, in the case of classical mechanics, can read the laws off as well (e.g., Newton’s laws of motion are encoded in the possibilities allowed in the state spaces of classical mechanics). Unfortunately, the connection between state spaces, chaotic models and laws is less clear. Indeed, there currently are no good candidates for laws of chaos over and above the laws of classical mechanics, and some, such as Kellert, explicitly deny that chaos modeling is getting at laws at all (1993, ch. 4). 
Furthermore, the relationship between the state spaces of chaotic models and the spaces of idealized physical systems is quite delicate, which seems to be a dissimilarity between classical mechanics and “chaos theory.” In the former case, we seem to be able to translate between models and state spaces.^[3] In the latter, we can derive a state space for chaotic models from the full nonlinear model, but we cannot reverse the process and get back to the nonlinear model state space from that of the chaotic model. One might expect the hypotheses connecting chaos models with idealized physical systems to piggyback on the hypotheses connecting classical mechanics models with their corresponding idealized physical systems. But it is neither clear how this would work in the case of nonlinear systems in classical mechanics, nor how this would work for chaotic models in biology, economics and other disciplines.^[4] Additionally, there is another potential problem that arises from thinking about the faithful model assumption, namely what is the relationship or mapping between model and target system? Is it one-to-one as we standardly assume? Or is it a one-to-many relation (several different nonlinear models of the same target system or, potentially, vice versa) or a many-to-many relationship?^[5] For many classical mechanics problems—namely, where linear models or force functions are used in Newton’s second law—the mapping or translation between model and target system appears to be straightforwardly one-to-one. However, in nonlinear contexts, where one might be constructing a model from a data set generated by observing a system, there are potentially many nonlinear models that can be constructed, where each model is as empirically adequate to the system behavior as any other. Is there really only one unique model for each target system and we simply do not know which is the “true” one (say, because of underdetermination problems—see scientific realism)?
Or is there really no one-to-one relationship between our mathematical models and target systems? Moreover, an important feature of the semantic view is that models are only intended to capture the crucial features of target systems and always involve various forms of abstraction and idealization (see models in science). These caveats are potentially deadly in the context of nonlinear dynamics. Any errors in our models for such systems, no matter how accurate our initial data, will lead to errors in predicting actual systems as these errors will grow (perhaps rapidly) with time. This brings out one of the problems with the faithful model assumption that is hidden, so to speak, in the context of linear systems. In the latter context, models can be erroneous by leaving out “negligible” factors and, at least for reasonable times, our model predictions do not differ significantly from the behavior of the target systems we are modeling (wait long enough, however, and such predictions will differ significantly). In nonlinear contexts, by contrast, it is not so clear there are any “negligible” factors. Even the smallest omission in a nonlinear model can lead to disastrous effects because the differences these terms would have made versus their absence potentially can be rapidly amplified as the model evolves (see §3). Another possibility is to drop hypotheses connecting models with target systems and simply focus on the defining models of the semantic view of theories. This is very much the spirit of the mathematical theory of dynamical systems. There the focus is on models and their relations, but there is no emphasis on hypotheses connecting these models with actual systems, idealized or otherwise. Unfortunately, this would mean that chaos theory would be only a mathematical theory and not a physical one. Both the syntactic and semantic views of theories focus on the formal structure of theoretical bodies, and their “fit” with theorizing about chaotic dynamics seems quite problematic.
In contrast, perhaps one should conceive of chaos theory in a more informal or paradigmatic way, say along the lines of Kuhn’s (1996) analysis of scientific paradigms. There is no emphasis on the precise structure of scientific theories in Kuhn’s picture of science. Rather, theories are cohesive, systematic bodies of knowledge defined mainly by the roles they play in normal science practice within a dominant paradigm. There is a very strong sense in the literature about chaos that a “new paradigm” has emerged out of chaos research with its emphasis on unstable rather than stable behavior, on dynamical patterns rather than on mechanisms, on universal features (e.g., Feigenbaum’s number) rather than laws, and on qualitative understanding rather than on precise prediction. Whether or not chaotic dynamics represents a genuine scientific paradigm, the use of the term ‘chaos theory’ in much of the scientific and philosophical literature has the definite flavor of characterizing and understanding complex behavior rather than an emphasis on the formal structure of principles and hypotheses. Given a target system to be modeled, and invoking the faithful model assumption, there are two basic approaches to model confirmation discussed in the philosophical literature on modeling following a strategy known as piecemeal improvement (I will ignore bootstrapping approaches as they suffer similar problems, but only complicate the discussion). These piecemeal strategies are also found in the work of scientists modeling actual-world systems and represent competing approaches vying for government funding (for an early discussion, see Thompson 1957). The first basic approach is to focus on successive refinements to the accuracy of the initial data used by the model while keeping the model itself fixed (e.g., Laymon 1989, p. 359).
The idea here is that if a model is faithful in reproducing the behavior of the target system to some degree, refining the precision of the initial data fed to the model will lead to its behavior monotonically converging to the target system’s behavior. This is to say that as the uncertainty in the initial data is reduced, a faithful model’s behavior is expected to converge to the target system’s behavior. The import of the faithful model assumption is that if one were to plot the trajectory of the target system in an appropriate state space, the model trajectory in the same state space would monotonically become more like the system trajectory on some measure as the data is refined (I will ignore difficulties regarding appropriate measures for discerning similarity in trajectories; see Smith 2000). The second basic approach is to focus on successive refinements of the model while keeping the initial data fixed (e.g., Wimsatt 1987). The idea here is that if a model is faithful in reproducing the behavior of the target system, refining the model will produce an even better fit with the target system’s behavior. This is to say that if a model is faithful, successive improvements will lead to its behavior monotonically converging to the target system’s behavior. Again, the import of the faithful model assumption is that if one were to plot the trajectory of the target system in an appropriate state space, the model trajectory in the same state space would monotonically become more like the system trajectory as the model is made more realistic. What both of these basic approaches have in common is that piecemeal monotonic convergence of model behavior to target system behavior is a mark for confirmation of the model (Koperski 1998). 
By either improving the quality of the initial data or improving the quality of the model, the model in question reproduces the target system’s behavior monotonically better and yields predictions of the future states of the target system that show monotonically less deviation with respect to the behavior of the target system. In this sense, monotonic convergence to the behavior of the target system is a key criterion for whether the model is confirmed. If monotonic convergence to the target system behavior is not found by pursuing either of these basic approaches, then the model is considered to be disconfirmed. For linear models it is easy to see the intuitive appeal of such piecemeal strategies. After all, for linear systems of equations a small change in the magnitude of a variable is guaranteed to yield a proportional change in the output of the model. So by making piecemeal refinements to the initial data or to the linear model only proportional changes in model output are expected. If the linear model is faithful, then making small improvements “in the right direction” in either the initial data or the model itself can be tracked by improved model performance. The qualifier “in the right direction,” drawing upon the faithful model assumption, means that the data quality really is increased or that the model really is more realistic (captures more features of the target system in an increasingly accurate way), and is signified by the model’s monotonically improved performance with respect to the target system. However, both of these basic approaches to confirming models encounter serious difficulties when applied to nonlinear models, where the principle of linear superposition no longer holds. In the first approach, successive small refinements in the initial data used by nonlinear models are not guaranteed to lead to any convergence between model behavior and target system behavior.
Any small refinements in initial data can lead to non-proportional changes in model behavior rendering this piecemeal convergence strategy ineffective as a means for confirming the model. A refinement of the quality of the data “in the right direction” is not guaranteed to lead to a nonlinear model monotonically improving in capturing the target system’s behavior. The small refinement in data quality may very well lead to the model behavior diverging away from the system’s behavior.^[6] In the second approach, keeping the data fixed but making successive refinements in nonlinear models is also not guaranteed to lead to any convergence between model behavior and target system behavior. With the loss of linear superposition, any small changes in the model can lead to non-proportional changes in model behavior again rendering the convergence strategy ineffective as a means for confirming the model. Even if a small refinement to the model is made “in the right direction,” there is no guarantee that the nonlinear model will monotonically improve in capturing the target system’s behavior. The small refinement in the model may very well lead to the model behavior diverging away from the system’s behavior. So whereas for linear models piecemeal strategies might be expected to lead to better confirmed models (presuming the target system exhibits only stable linear behavior), no such expectation is justified for nonlinear models deployed in the characterization of nonlinear target systems. Even for a faithful nonlinear model, the smallest changes in either the initial data or the model itself may result in non-proportional changes in model output, an output that is not guaranteed to “move in the right direction” even if the small changes are made “in the right direction” (of course, this lack of guarantee of monotonic improvement also raises questions about what “in the right direction” means, but I will ignore these difficulties here). 
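The contrast between proportional and non-proportional response to a small data refinement can be made concrete with a toy computation. This is a sketch of my own; the logistic map with parameter r = 3.9, the linear growth factor 1.05, and the refinement of size 10^-4 are illustrative stand-ins, not models discussed in the text:

```python
# Track how the gap between an original and a "refined" initial datum
# evolves under a linear model versus a nonlinear (chaotic) one.

def gap_history(x, y, steps, f):
    gaps = []
    for _ in range(steps):
        x, y = f(x), f(y)
        gaps.append(abs(x - y))
    return gaps

x0, x0_refined = 0.4, 0.4001   # initial data differing by 1e-4

# Linear model: each step multiplies the gap by the same factor, so a
# small refinement in the data yields a proportionally small change in output.
lin = gap_history(x0, x0_refined, 50, lambda x: 1.05 * x)

# Nonlinear model (logistic map in its chaotic regime): the same small
# refinement is amplified until the two runs become uncorrelated.
chaos = gap_history(x0, x0_refined, 50, lambda x: 3.9 * x * (1.0 - x))

print(max(lin))    # ~1e-3: still proportional to the 1e-4 refinement
print(max(chaos))  # order one: the tiny refinement has been amplified
```

The linear run’s gap stays proportional to the initial 10^-4 difference, while the logistic run’s gap grows until it is as large as the state space itself — the non-proportional response the text describes.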
Intuitively, piecemeal convergence strategies look to be dependent on the perfect model scenario. Given a perfect model, refining the quality of the data should lead to monotonic convergence of the model behavior to the target system’s behavior, but even this expectation is not always justifiable for perfect models (cf. Judd and Smith 2001; Smith 2003). On the other hand, given good data, perfecting a model intuitively should also lead to monotonic convergence of the model behavior to the target system’s behavior. By making small changes to a nonlinear model, hopefully based on improved understanding of relevant features of the target system (e.g., the physics of weather systems or the structures of economies), there is no guarantee that such changes will produce monotonic improvement in the model’s performance with respect to the target system’s behavior. The loss of linear superposition, then, leads to a similar lack of guarantee of a continuous path of improvement as the lack of guarantee of piecemeal confirmation. And without such a guaranteed path of improvement, there is no guarantee that a faithful nonlinear model can be perfected by piecemeal means. Of course, we do not have perfect models. But even if we did, they are unlikely to live up to our intuitions about them (Judd and Smith 2001; Judd and Smith 2004). For example, no matter how many observations of a system are made, there still will be a set of trajectories in the model state space that are indistinguishable from the actual trajectory of the target system. Indeed, even for infinite past observations, we cannot eliminate the uncertainty in the epistemic states given some unknown ontological state of the target system. One important reason for this difficulty follows from the faithful model assumption. Suppose the nonlinear model state space is a faithful representation of the possibilities lying in the physical space of the target system. 
No matter how fine-grained we make our model state space, it will still be the case that there are many different states of the actual target system (ontological states) that are mappable into the same state of the model state space (epistemic states). This means that there will always be many more target system states than there are model states for any computational models since the equations have to be discretized. In principle, in those cases where we can develop a fully analytical model, we could get an exact match between the number of possible model states and the number of target system states. However, such analytical models are rare in complexity studies (many of the analytical models are toy models, like the baker’s map, which, while illustrative of techniques, are misleading when it comes to metaphysical and ontological conclusions due to their simplicity). Therefore, whether there is a perfect model or not for a target system, there is no guarantee of monotonic improvement with respect to the target system’s behavior. Traditional piecemeal confirmation strategies fail. This is the upshot of the failure of the principle of linear superposition. No matter how faithful the model, no guarantee of piecemeal monotonic improvement of a nonlinear model’s behavior with respect to the target system can be made (of course, if one waits for long enough times piecemeal confirmation strategies will also fail for linear systems). Furthermore, problems with these confirmation strategies will arise whether one is seeking to model point-valued trajectories in state space or one is using probability densities defined on state space. One possible response to the piecemeal confirmation problems discussed here is to turn to a Bayesian framework for confirmation, but similar problems arise here for nonlinear models. 
Given that there are no perfect models in the model class to which we would apply a Bayesian scheme and given the fact that imperfect models will fail to reproduce or predict target system behavior over time scales that may be short compared to our interests, there again is no guarantee that monotonic improvement can be achieved for our nonlinear models (I leave aside the problem that having no perfect model in our model class renders many Bayesian confirmation schemes ill-defined). For nonlinear models, faithfulness can fail and piecemeal perfectibility cannot be guaranteed, raising questions about scientific modeling practices and our understanding of them. However, the implications of the loss of linear superposition reach farther than this. Policy assessment often utilizes model forecasts and if the models and systems lying at the core of policy deliberations are nonlinear, then policy assessment will be affected by the same lack of guarantee as model confirmation. Suppose administrators are using a nonlinear model in the formulation of economic policies designed to keep GDP ever increasing while minimizing unemployment (along with achieving other socio-economic goals). While it is true that there will be some uncertainty generated by running the model several times over slightly different data sets, assume that policies taking these uncertainties into account to some degree can be fashioned. Once in place, the policies need assessment regarding their effectiveness and potential adverse effects, but such assessment will not involve merely looking at monthly or quarterly reports on GDP and employment data to see if targets are being met. The nonlinear economic model driving the policy decisions will need to be rerun to check if trends are indeed moving “in the right direction” with respect to the earlier forecasts.
But, of course, data for the model now has changed and there is no guarantee that the model will produce a forecast with this new data that fits well with the old forecasts used to craft the original policies. Nor is there a guarantee of any fit between the new runs of the nonlinear model and the economic data being gathered as part of ongoing monitoring of the economic policies. How, then, are policy makers to make reliable assessments of policies? The same problem—that small changes in data or model in nonlinear contexts are not guaranteed to yield proportionate model outputs or monotonically improved model performance—also plagues policy assessment using nonlinear models. Such problems remain largely unexplored. One of the exciting features of SDIC is that there is no lower limit on just how small some change or perturbation can be—the smallest of effects will eventually be amplified up affecting the behavior of any system exhibiting SDIC. A number of authors have argued that chaos through SDIC opens a door for quantum mechanics to “infect” chaotic classical mechanics systems (e.g., Hobbs 1991; Barone et al. 1993; Kellert 1993; Bishop 2008). ^[7] The essential point is that the nature of particular kinds of nonlinear dynamics—those which exhibit stretching and folding (confinement) of trajectories, where there are no trajectory crossings, and which exhibit aperiodic orbits—apparently open the door for quantum effects to change the behavior of chaotic macroscopic systems. The central argument runs as follows and is known as the sensitive dependence argument (SD argument for short): A. For systems exhibiting SDIC, trajectories starting out in a highly localized region of state space will diverge on-average exponentially fast from one another. B. 
Quantum mechanics limits the precision with which physical systems can be specified to a neighborhood in phase space of no less than \(1/(2\pi/h)^{N}\), where \(h\) is Planck’s constant (with units of action) and \(N\) is the dimension of the system in question. C. Given enough time and the quantum mechanical bound on the neighborhood \(\varepsilon\) for the initial conditions, two trajectories of the same chaotic system will have future states localizable to a much larger region \(\delta\) in phase space (from (A) and (B)). D. Therefore, quantum mechanics will influence the outcomes of chaotic systems leading to a violation of unique evolution. Premise (A) makes clear that SD is the operative definition for characterizing chaotic behavior in this argument, invoking exponential growth characterized by the largest global Lyapunov exponent. Premise (B) expresses the precision limit for the state of minimum uncertainty for momentum and position pairs in an \(N\)-dimensional quantum system (note, the exponent is \(2N\) in the case of uncorrelated electrons).^[8] The conclusion of the argument in the form given here is actually stronger than that quantum mechanics can influence a macroscopic system exhibiting SDIC; determinism fails for such systems because of such influences. Briefly, the reasoning runs as follows. Because of SDIC, nonlinear chaotic systems whose initial states can be located only within a small neighborhood \(\varepsilon\) of state space will have future states that can be located only within a much larger patch \(\delta\). For example, two isomorphic nonlinear systems of classical mechanics exhibiting SDIC, whose initial states are localized within \(\varepsilon\), will have future states that can be localized only within \(\delta\). Since quantum mechanics sets a lower bound on the size of the patch of initial conditions, unique evolution must fail for nonlinear chaotic systems.
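The quantitative core of premises (A)–(C) can be sketched as a back-of-the-envelope computation. The values of \(\varepsilon\), \(\delta\) and the Lyapunov exponent below are my own arbitrary stand-ins, not figures from the text: if an initial uncertainty of size \(\varepsilon\) grows on average as \(\varepsilon e^{\lambda t}\), it reaches a scale \(\delta\) after roughly \(t = \ln(\delta/\varepsilon)/\lambda\), and because the dependence on \(\varepsilon\) is only logarithmic, even an extremely small \(\varepsilon\) is amplified quickly:

```python
import math

# Time for an uncertainty of size eps to grow to size delta under
# on-average exponential divergence with largest Lyapunov exponent lam:
#     delta = eps * exp(lam * t)   =>   t = ln(delta / eps) / lam

def amplification_time(eps, delta, lam):
    return math.log(delta / eps) / lam

# Illustrative stand-in values (not taken from the text):
eps = 1e-34    # a quantum-scale bound on initial precision
delta = 1.0    # a macroscopic scale
lam = 1.0      # largest global Lyapunov exponent, per unit time

t = amplification_time(eps, delta, lam)
print(t)  # ~78 time units; halving eps again buys only ~0.7 more
```

The logarithm is the point: shrinking the initial patch by another factor of ten delays macroscopic divergence by only \(\ln 10 \approx 2.3\) characteristic times, which is why no finite refinement of initial precision escapes the argument.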
The SD argument does not go through as smoothly as some of its advocates have thought, however. There are difficult issues regarding the appropriate version of quantum mechanics (e.g., von Neumann, Bohmian or decoherence theories; see entries under quantum mechanics), the nature of quantum measurement theory (collapse vs. non-collapse theories; see the section on the measurement problem in the entry on philosophical issues in quantum theory), and the selection of the initial state characterizing the system that must be resolved before one can say clearly whether or not unique evolution is violated. For instance, just because quantum effects might influence macroscopic chaotic systems doesn’t guarantee that determinism fails for such systems. Whether quantum interactions with nonlinear macroscopic systems exhibiting SDIC contribute indeterministically to the outcomes of such systems depends on the currently undecidable question of indeterminism in quantum mechanics and the measurement problem as well as on how one chooses to place the system-measurement apparatus cut (Bishop 2008). To expand on one issue, there is a serious open question as to whether the indeterminism in quantum mechanics is simply the result of ignorance due to epistemic limitations or if it is an ontological feature of the quantum world. Suppose that quantum mechanics is ultimately deterministic, but that there is some additional factor, a hidden variable as it is often called, such that if this variable were available to us, our description of quantum systems would be fully deterministic. Another possibility is that there is an interaction with the broader environment that accounts for how the probabilities in quantum mechanics arise (physicists call this approach “decoherence”). Under either of these possibilities, we would interpret the indeterminism observed in quantum mechanics as an expression of our ignorance, and, hence, indeterminism would not be a fundamental feature of the quantum domain.
It would be merely epistemic in nature due to our lack of knowledge or access to quantum systems. So if the indeterminism in QM is not ontologically genuine, then whatever contribution quantum effects make to macroscopic systems exhibiting SDIC would not violate unique evolution. In contrast, suppose it is the case that quantum mechanics is genuinely indeterministic; that is, all the relevant factors of quantum systems do not fully determine their behavior at any given moment. Then the possibility exists that not all physical systems traditionally thought to be in the domain of classical mechanics can be described using strictly deterministic models, leading to the need to approach the modeling of such nonlinear systems differently. Moreover, the possible constraints of nonlinear classical mechanics systems on the amplification of quantum effects must be considered on a case-by-case basis. For instance, damping due to friction can place constraints on how quickly amplification of quantum effects can take place before they are completely washed out (Bishop 2008). And one has to investigate the local finite-time dynamics for each system because these may not yield any on-average growth in uncertainties (e.g., Smith, Ziehmann, Fraedrich 1999). In summary, there is no abstract, a priori reasoning establishing the truth of the SD argument; the argument can only be demonstrated on a case-by-case basis. Perhaps detailed examination of several cases would enable us to make some generalizations about how widespread the possibilities for the amplification of quantum effects are. Two traditional topics in philosophy of science are realism and explanation. Although not well explored in the context of chaos, there are interesting questions regarding both topics deserving of further exploration. Chaos raises a number of questions about scientific realism (see scientific realism) only some of which will be touched on here.
First and foremost, scientific realism is usually formulated as a thesis about the status of unobservable terms in scientific theories and their relationship to entities, events and processes in the actual world. In other words, theories make various claims about features of the world and these claims are at least approximately true. But as we saw in §2, there are serious questions about formulating a theory of chaos, let alone determining how this theory fares under scientific realism. It seems more reasonable, then, to discuss some less ambitious realist questions regarding chaos: Is chaos an actual phenomenon? Do the various denizens of chaos, like fractals, actually exist? This leads us back to the faithful model assumption (§1.2.3). Recall this assumption maintains that our model equations faithfully capture target system behavior and that the model state space faithfully represents the actual possibilities of the target system. Is the sense of faithfulness here that of actual correspondence between mathematical models and features of actual systems? Or can faithfulness be understood in terms of empirical adequacy alone, a primarily instrumentalist construal of faithfulness? Is a realist construal of faithfulness threatened by the mapping between models and systems potentially being one-to-many or many-to-many? A related question is whether or not our mathematical models are simulating target systems or merely mimicking their behavior. To be simulating a system suggests that there is some actual correspondence between the model and the target system it is designed to capture. On the other hand, if a mathematical model is merely mimicking the behavior of a target system, there is no guarantee that the model has any genuine correspondence to the actual properties of the target system. The model merely imitates behavior. 
These issues become crucial for modern techniques of building nonlinear dynamical models from large time series data sets (e.g., Smith 1992), for example the sunspot record or the daily closing value of a particular stock for some specific period of time. In such cases, after performing some tests on the data set, the modeler sets to work constructing a mathematical model that reproduces the time series as its output. Do such models only mimic behavior of target systems? Where does realism come into the picture? A further question regarding chaos and realism is the following: Is chaos only a feature of our mathematical models or is it a genuine feature of actual systems in our world? This question is well illustrated by a peculiar geometric structure of dissipative chaotic models called a strange attractor, which can form based upon the stretching and folding of trajectories in state space. Strange attractors normally only occupy a subregion of state space, but once a trajectory wanders close enough to the attractor, it is caught near the surface of the attractor for the rest of its future. One of the characteristic features of strange attractors is that they possess self-similar structure. Magnify any small portion of the attractor and you would find that the magnified portion would look identical to the regular-sized region. Magnify the magnified region and you would see the identical structure repeated again. Continuous repetition of this process would yield the same results. The self-similar structure is repeated on arbitrarily small scales. An important geometric implication of self-similarity is that there is no inherent size scale so that we can take as large a magnification of as small a region of the attractor as we want and a statistically similar structure will be repeated (Hilborn 1994, p. 56). In other words, strange attractors for chaotic models have an infinite number of layers of repetitive structure.
This type of structure allows trajectories to remain within a bounded region of state space by folding and intertwining with one another without ever intersecting or repeating themselves exactly. Strange attractors also are often characterized as possessing noninteger or fractal dimension (though not all strange attractors have such dimensionality). The type of dimensionality we usually meet in physics as well as in everyday experience is characterized by integers. A point has dimension zero; a line has dimension one; a square has dimension two; a cube has dimension three and so on. As a generalization of our intuitions regarding dimensionality, consider a large square. Suppose we fill this large square with smaller squares each having an edge length of \(\varepsilon\). The number of small squares needed to completely fill the space inside the large square is \(N(\varepsilon)\). Now repeat this process of filling the large square with small squares, but each time let the length \(\varepsilon\) get smaller and smaller. In the limit as \(\varepsilon\) approaches zero, we would find that the ratio \(\ln N(\varepsilon)/\ln(1 / \varepsilon)\) equals two just as we would expect for a 2-dimensional square. You can imagine the same exercise of filling a large 3-dimensional cube (a room, say) with smaller cubes and in the limit of \(\varepsilon\) approaching zero, we would arrive at a dimension of three. When we apply this generalization of dimensionality to the geometric structure of strange attractors, we often find noninteger dimensionality. Roughly this means that if we try to apply the same procedure of “filling” the structure formed by the strange attractor with small squares or cubes, in the limit as \(\varepsilon\) approaches zero the result is noninteger.
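The limiting ratio just described is the box-counting dimension, and the counting procedure can be sketched numerically. The example below is my own: it uses the middle-thirds Cantor set, a standard fractal with dimension \(\ln 2/\ln 3 \approx 0.631\), rather than any attractor discussed in the text:

```python
import math

# Box-counting estimate of ln N(eps) / ln(1/eps) for the middle-thirds
# Cantor set, approximated by the midpoints of its depth-10 construction.

def cantor_midpoints(depth):
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        # Keep the left and right thirds of every interval.
        intervals = [piece
                     for a, b in intervals
                     for piece in ((a, a + (b - a) / 3.0),
                                   (b - (b - a) / 3.0, b))]
    return [(a + b) / 2.0 for a, b in intervals]

def box_count(points, eps):
    # Number of boxes of side eps needed to cover the set of points.
    return len({math.floor(p / eps) for p in points})

points = cantor_midpoints(10)   # 2**10 points approximating the set
estimates = []
for k in (2, 4, 6):
    eps = 3.0 ** (-k)
    estimates.append(math.log(box_count(points, eps)) / math.log(1.0 / eps))

print(estimates)  # each estimate is close to ln 2 / ln 3, about 0.631
```

Applied to a filled square the same ratio tends to 2, as the text notes; applied to a self-similar set like this one it settles on a noninteger value, which is what “fractal dimension” records.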
Whether one is examining a set of nonlinear mathematical equations or analyzing the time series data from an experiment, the presence of self-similarity or noninteger dimension is an indication that the chaotic behavior of the system under study is dissipative (nonconservative, doesn’t conserve energy) rather than Hamiltonian (conserves energy). Although there is no universally accepted definition for strange attractors or fractal dimension among mathematicians, the more serious question is whether strange attractors and fractal dimensions are properties of our models only or also of actual-world systems. For instance, empirical investigations of a number of actual-world systems indicate that there is no infinitely repeating self-similar structure like that of strange attractors (Avnir, et al. 1998; see also Shenker 1994). At most, one finds self-similar structure repeated on two or three spatial scales in the reconstructed state space and that is it. This appears to be more like a prefractal, where self-similar structure exists on only a finite number of length scales. That is to say, prefractals repeat their structure under magnification only a finite number of times rather than infinitely as in the case of a fractal. So this seems to indicate that there are no genuine strange attractors with fractal dimension in actual systems, but possibly only attractors having prefractal geometries with self-similarity on a limited number of spatial scales. On the other hand, the dissipative chaotic models used to characterize some actual-world systems all exhibit strange attractors with fractal geometries. So it looks like fractal geometries in chaotic model state spaces bear no relationship to the pre-fractal features of actual-world systems.
In other words, these fractal features of many of our models are clearly false of the target systems, though the models themselves may still be useful for helping scientists locate interesting dynamics of target systems characterized by prefractal properties. Scientific realism and usefulness look to part ways here. At least many of the strange attractors of our models play the role of useful fictions. There are caveats to this line of thinking, however. First, the prefractal character of the analyzed data sets (e.g., by Avnir, et al. 1998) could be an artifact of the way data is massaged before it is analyzed, or due to the analog-to-digital conversion that must take place before data analysis can begin. Reducing real-number-valued data to a finite string of digits would destroy fractal structure. If so, the infinitely self-similar structures of fractals in our models might not be such a bad approximation after all. A different reason, though, to suspect that physical systems cannot have such self-repeating structures “all the way down” is that at some point the classical world gives way to the quantum world, where things change so drastically that there cannot be a strange attractor because the state space changes. Hence, we are applying a model carrying a tremendous amount of excess, fictitious structure to understand features of physical systems. This looks like a problem because one of the key structures playing a crucial role in chaos explanations—the infinitely intricate structure of the strange attractor—would then be absent from the corresponding physical system. According to Peter Smith (1998, ch. 3), one might be justified in employing obviously false chaos models because the infinitely intricate structure of strange attractors (1) is the result of relatively simple stretching and folding mechanisms and (2) many of the points in the state space of interest are invariant under this stretching and folding mechanism.
These features represent kinds of simplicity that can be had at the (perhaps exorbitant!) cost of fictitious infinite structure. The strange attractor exhibits this structure, and the attractor is a sign of some stretching and folding process. The infinite structure is merely geometric extra baggage, but the robust properties like period-doubling sequences, onset of chaos, and so forth are real enough. This has the definite flavor of being antirealist about some key elements of explanation in chaos (§5.2) and has been criticized as such (Koperski 2001). Instead of trying to squeeze chaos into scientific realism’s mold, then, perhaps it is better to turn to an alternative account of realism, structural realism. Roughly, the idea is that realism in scientific practices hinges on the structural relations of phenomena. So structural realism tends to focus on the causal structures in well-confirmed scientific hypotheses and theories. The kinds of universal structural features identified in chaotic phenomena in realms as diverse as physics, biology and economics are very suggestive of some form of structural realism and, indeed, look to play key roles in chaos explanations (see below). Though, again, there are significant worries that infinitely repeating self-similar structure might not be realized in physical systems. On a structural approach to realism regarding chaos models, one faces the difficulty that strange attractors are at best too gross an approximation to the structure of physical attractors and at worst terribly misleading. Perhaps other kinds of geometric structures associated with chaos would qualify on a structural realist view. After all, it also seems to be the case that realism for chaos models has more to do with processes—namely, the stretching and folding mechanisms at work in target systems.
But here the connection between realism and chaos models would come indirectly via an appeal to the causal processes at work in the full nonlinear models taken to represent physical systems. Perhaps the fractal character of strange attractors is an artifact introduced through the various idealizations and approximations used to derive such chaotic models. If so, then perhaps there is another way to arrive at more realistic chaos models that have prefractal attractors. Chaos has been invoked as an explanation for, or as contributing substantially to explanations of, actual-world behaviors. Some examples are epileptic seizures, heart fibrillation, neural processes, chemical reactions, weather, industrial control processes and even forms of message encryption. Aside from irregular behavior of actual-world systems, chaos is also invoked to explain features such as the actual trajectories exhibited in a given state space or the sojourn times of trajectories in particular regions of state space. But what, exactly, is the role chaos plays in these various explanations? More succinctly, what are chaos explanations? The nature of scientific explanation (see the entry on scientific explanation) in the literature on chaos is thoroughly under-discussed, to put it mildly. Traditional accounts of scientific explanation such as covering-law, causal-mechanical and unification models all present various kinds of drawbacks when applied to chaotic phenomena. For instance, if there are no universal laws lying at the heart of chaos explanations—and it does not seem credible that such laws could really play a role in chaos explanations—covering-law models do not look promising as candidates for chaos explanations. Roughly speaking, the causal-mechanical model of explanation maintains that science provides understanding of diverse facts and events by showing how these fit into the causal structure of the world.
If chaos is a behavior exhibited by nonlinear systems (mathematical and physical), then it seems reasonable to think that there might be some mechanisms or processes standing behind this behavior. After all, chaos is typically understood to be a property of the dynamics of such systems, and dynamics often reflects the processes at work and their interactions. The links between causal mechanisms and behaviors in the causal-mechanical model are supposed to be reliable links along the following lines: If mechanism \(C\) is present, behavior \(B\) typically follows. In this sense, chaos explanations, understood on the causal-mechanical model, are envisioned as providing reliable connections between mechanisms and the chaotic behavior exhibited by systems containing such mechanisms. On the other hand, the basic idea of unification accounts of explanation is that science provides understanding of diverse facts and events by showing how these may be unified by a much smaller set of factors (e.g., laws or causes). Perhaps one can argue that chaos supplies a limited set of patterns and tools for explaining/understanding a set of characteristic behaviors found in diverse phenomena spread across physics, chemistry, biology, economics, social psychology, and so forth. In this sense the set of patterns or structures (e.g., “stretching and folding”) might make up the explanatory store unifying our understanding of all the diverse phenomena behaving chaotically. Both causal and unification accounts, as typically conceived, assume that theories are in place and that the models of those theories play some role in explanation. In causal accounts, causal processes are key components of the models. In unification accounts, laws might be the ultimate explanatory factors, but we often connect laws with physical systems via models.
To be explanatory, however, such accounts must make the faithful model assumption; namely, that our models (and their state spaces) are faithful in what they say about actual systems. Recall that SD—exponential divergence of neighboring trajectories—is taken by many to be a necessary condition for chaos. As we saw in §3, it is not straightforward to confirm when we have a model serving as a good explanation because, for instance, the slightest refinement of initial conditions can lead to wildly differing behavior. So on many standard approaches to confirmation and models, it would be difficult to say when we had a good explanation. Even if we push the faithful model assumption to its extreme limit—i.e., assuming the model is perfect—we run into tricky questions regarding confirmation since there are too many states indistinguishable from the actual state of the system yielding empirically indistinguishable trajectories in the model state space (Judd and Smith 2001). Perhaps with chaos explanation we should either search for a process yielding the “stretching and folding” in the dynamics (causal form of explanation) or we should search for the common properties such behavior exhibits (unification form of explanation) underlying the behavior of the nonlinear systems of interest. In other words, we want to be able to understand why systems exhibit SDIC, aperiodicity, randomness, and so forth. But these are the properties characterizing chaotic behavior, so the unification account of explanation sounds like it may ultimately involve appealing to the properties in need of explanation. The explanatory picture becomes more complicated by shifting away from SD as characterized by a positive global Lyapunov exponent and settling for what may be more realistic, namely the effects of divergence/contraction characterized by finite-time Lyapunov exponents. 
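The exponential divergence at issue here (SD) is easy to exhibit in a toy model. The following sketch is mine rather than the source's; it uses the logistic map \(x \mapsto 4x(1-x)\), a standard chaotic map, to show how two trajectories starting \(10^{-12}\) apart become macroscopically different within a few dozen iterations, which is why the slightest refinement of initial conditions can lead to wildly differing behavior.

```python
def logistic(x: float) -> float:
    """One step of the logistic map at r = 4, a standard chaotic map."""
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12          # two initial conditions 1e-12 apart
separations = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    separations.append(abs(x - y))

# The gap roughly doubles per step (Lyapunov exponent ln 2) until it
# saturates at the size of the attractor itself.
print(separations[0])    # still tiny, of order 1e-12
print(max(separations))  # order 1: all memory of initial closeness is lost
```

Refining the initial condition from \(10^{-12}\) to \(10^{-15}\) only buys about ten more iterations before the trajectories again disagree completely.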
However, even in this case, it appears that the properties to which one appeals on a unification account pick out the patterns of chaos that we want to understand: How do these properties arise? It seems that unification accounts are still at a disadvantage in characterizing chaos explanations. Suppose we appealed to strange attractors in our models or in state space reconstruction techniques. Would this be evidence that there is a strange attractor in the target system’s behavior? Modulo worries raised in §5.1, even if the presence of a strange attractor in the state space was both a necessary and sufficient condition for the model being chaotic, this would not amount to an explanation of chaotic behavior in the target system. First, the strange attractor is an object in state space, which is not the same as saying that the actual system behaves as if there is a strange attractor in the physical space of its activity. A trajectory in a state space is a way of gaining useful information about the target system (via the faithful model assumption), but it is different from trajectories developed by looking at how an actual system’s properties change with respect to time. Just because a trajectory of a system in state space is spiraling ever closer to the strange attractor does not imply that the target system’s behavior in physical space is somehow approaching that attractor (except possibly under the perfect model scenario). Second, but related, the presence of a strange attractor would only be a mark of chaos, not an explanation for why chaotic properties are being exhibited by a system. It seems we still need to appeal to processes and interactions causing the dynamics to have the characteristic properties we associate with chaos. At this point, a question implied at the end of the previous subsection arises, namely what is effecting the unification in chaos explanations? 
Unification models of explanation typically posit an explanatory store of a relatively small number of laws or mechanisms that serve to explain or unify a diverse set of phenomena. A standard example is that of Newtonian mechanics providing a small set of principles that could serve to explain phenomena as diverse as projectile motions, falling bodies, tides, planetary orbits and pendula. In this way, we say that Newtonian mechanics unified a diverse set of phenomena by showing that they all were governed by a small set of physical principles. Now, if a unification construal of chaos explanations only focuses on the mathematical similarities in behaviors of diverse phenomena (e.g., the period-doubling route to chaos or SDIC), then one can legitimately question whether the relevant sense of unification is in play in chaos explanations. The “explanatory store” of chaos explanations is indeed a small set of mathematical and geometrical features, but is this the wrong store (compare with the physical principles of Newtonian mechanics)? However, if unification is supposed to be achieved through underlying mechanisms producing these mathematical and geometrical features, then the explanatory store appears to be very large and heterogeneous—the mechanisms in physics are different from those in biology, which are different from those in ecology, which are different from those in economics, which are different from those in social psychology, and so on. Once again, the causal-mechanical model appears to make more sense for characterizing the nature of chaos explanations. If this were all there was to the story of chaos explanations, then a causal account of explanation looks more promising. But it would also be the case that there is nothing special about such explanations: There are processes and interactions that cause the dynamics to have chaotic properties. But Stephen Kellert (1993, ch.
4) maintains that there is something new about chaotic dynamics, forcing us to rethink explanation when it comes to chaos models. His proposal for chaos explanations as yielding qualitative understanding of system behavior suggests that causal accounts, at least, do not fit well with what is going on in chaos research. Kellert first focuses on one of the key intuitions driving many views in debates on scientific explanation: namely, that the sciences provide understanding or insight into phenomena. Chaos explanations, according to Kellert, achieve understanding by constructing, elaborating and applying simple dynamical models. He gives three points of contrast between this approach to understanding and what he takes to be the standard approach to understanding in the sciences. The first is that chaos explanations involve models that are holistic rather than microreductionist. Models of the latter type seek to break systems down into their constituent parts and look for law-like relations among the parts. In contrast, many of the mathematical tools of chaotic dynamics are holistic in that they extract or reveal information about the behavior of the model system as a whole that is not readily apparent from the nonlinear equations of the model themselves. Methods such as state space reconstruction and surface-of-section techniques can reveal information implicit in the nonlinear equations. Developing one- and two-dimensional maps from the model equations can also provide this kind of information directly, and these maps are much simpler than the full model equations. Whereas the first point of contrast is drawn from the practice of physics, the second is logical. After reducing the system to its parts, the next step in the standard approach to understanding, according to Kellert, is to construct a “deductive scheme, which yields a rigorous proof of the necessity (or expectability) of the situation at hand” (1993, p. 91).
What Kellert is referring to, here, is the deductive-nomological account of explanation (see Section 2 of the entry on scientific explanation, on the DN model). The approach in chaotic dynamics makes no use of deductive inferences. Specifically, instead of looking at basic principles, propositions, and so on, and making deductive inferences, chaos explanations appeal to computer simulations because of the difficulty or even impossibility of deducing the chaotic behavior of the system from the model equations (e.g., no proof of SD for the Lorenz model based on the governing equations). The third point of contrast is historical. In contexts where linear superposition holds, a full specification of the instantaneous state plus the equations of motion yields all the information about the system there is to know (e.g., pendulum and projectile motion). Although a full specification of such states is impossible, very small errors in specifying such states lead to very small deviations between model and target system behaviors, at least for short times and good models. By contrast, in nonlinear contexts, where linear superposition fails, a full specification of an instantaneous state of the system plus the equations of motion does not yield all the information there is about the system, for example, if there are memory effects (hysteresis), or the act of measurement introduces disturbances that SDIC can amplify. In the former case, we also need to know the history of the system as well (whether it started out below the critical point or above the critical point, say). So chaos explanations must also take model histories into account. What kind of understanding is achieved in chaos explanations? Kellert argues we get (1) predictions of qualitative behavior rather than quantitative detail, (2) geometric mechanisms rather than causal processes, and (3) patterns rather than law-like necessity.
Regarding (1), detailed predictions regarding individual trajectories fail rather rapidly for chaotic models when there is any error in the specification of the initial state. So, says Kellert, instead we predict global behaviors of models and have an account of limited predictability in chaotic models. But many of these behaviors can be precisely predicted (e.g., the control parameter values^[9] at which various bifurcations occur, the onset of chaos, the return of n-periodic orbits). (1) amounts to important, but limited, insight. On this view we are able to predict when to expect qualitative features of the nonlinear dynamics to undergo a sudden change, but chaos models do not yield precise values of system variables. We get the latter values by running full-blown computer simulations on the full nonlinear model equations, provided the number of degrees of freedom is reasonable. In this sense, chaos explanations are complementary to the full model simulation because the former can tell us when/where to expect dynamical changes such as the onset of complicated dynamics in the latter. Regarding (2), chaos explanation is not a species of causal explanation. That is to say, chaos explanations do not focus on or reveal processes and interactions giving rise to the dynamics; rather, they reveal large-scale geometric features of the dynamics. Kellert argues the kinds of mechanisms on which chaos explanations focus are not causal, but geometric. Part of the reason why he puts things this way is that he views typical causal accounts of explanation as operating in a reductive mode: trace the individual causal processes and their interactions to understand the behavior of the system. But chaos explanations, according to Kellert, eschew this approach, focusing instead on the behavior of systems as wholes. Indeed, chaos explanations tend to group models and systems together as exhibiting similar patterns of behavior without regard for their underlying causal differences.
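The claim that control parameter values for bifurcations can be predicted precisely can be illustrated with the logistic map \(x \mapsto rx(1-x)\). This is a sketch based on standard textbook results, not an example from the source: the fixed point loses stability exactly at \(r = 3\), and the resulting 2-cycle loses stability exactly at \(r = 1 + \sqrt{6} \approx 3.449\).

```python
import math

# The nonzero fixed point of x -> r x (1 - x) is x* = 1 - 1/r, with
# multiplier f'(x*) = 2 - r; stability is lost when |2 - r| = 1, i.e. r = 3.
def fixed_point_multiplier(r: float) -> float:
    return 2.0 - r

print(abs(fixed_point_multiplier(3.0)))   # -> 1.0: first period-doubling at r = 3

# The 2-cycle's multiplier works out to 4 + 2r - r**2; it reaches -1
# (the next period-doubling) exactly at r = 1 + sqrt(6).
r2 = 1.0 + math.sqrt(6.0)
print(4.0 + 2.0 * r2 - r2 * r2)           # ≈ -1.0
```

These are exactly the kinds of precise qualitative predictions (where a sudden dynamical change occurs) that Kellert contrasts with precise predictions of individual trajectory values.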
Causal processes are ignored; instead, universal patterns of behavior are the focus. And it is the qualitative information about the geometric features of the model that is key to chaos explanations for Kellert. Regarding (3), if scientific understanding is only to be achieved via appeal to universal laws expressing nomic necessity—still a strong intuition among many philosophers—then chaos explanations definitely do not measure up. Chaos explanations do not rely on nomic considerations at all; rather, they rely on patterns of behavior and various properties characterizing this behavior. In brief, chaos studies search for patterns rather than laws. But suppose we change the notion of laws from universal statements of nomic necessity to phenomenological regularities (e.g., Cartwright 1999; Dupré 1993). Could chaos explanations then be understood as a search for such phenomenological laws at the level of wholes? After all, chaos as a field is not proposing any revisions to physical laws the way relativity and quantum mechanics did. Rather, if it is proposing anything, it is new levels of analysis and techniques for this analysis. Perhaps it is at the level of wholes that interesting phenomenological regularities exist that cannot be probed by microreductionist approaches. But this feature, at least at first blush, may not count against the microreductionist in anything other than an epistemological sense, that is, holistic methodologies are more effective for answering some questions chaos raises. This dynamical understanding, as Kellert terms it, achieved by chaos models would suggest that typical causal accounts of explanation are aimed at a different level of understanding. In other words, causal accounts look much more consonant with studying the full nonlinear model. Chaos explanation, by contrast, pursues understanding by using reduced equations derived through various techniques, though still based on the full nonlinear equations.
This way of viewing things suggests that there is a kind of unification going on in chaos explanation after all. A set of behavior patterns serves as the explanatory or unificatory features bringing together the appearance of similar features across a very diverse set of phenomena and disciplines (note: Kellert does not discuss unification accounts). This, in turn, suggests a further possibility: A causal account of explanation is more appropriate at the level of the full model, while a unification account perhaps is more appropriate at the level of the chaotic model. The approaches would be complementary rather than competing. Furthermore, the claim is that study of such chaotic models can give us understanding of the behavior in corresponding actual-world systems. Not because the model trajectories are isomorphic to the system trajectories; rather, because there is a topological or geometric similarity or correspondence between the models and the systems being modeled. This is a different version of the faithful model assumption in that now the topological/geometric features of target systems are taken to be faithfully represented by our chaotic models. In contrast to Kellert, Peter Smith makes it clear that he thinks there is nothing particularly special about chaos explanations in comparison with explanation in mathematical physics in general (1998, ch. 7). Perhaps it simply is the case that mathematical physics explanations are not well captured by philosophical accounts of explanation and this mismatch—peculiarly highlighted in a catchy field such as chaos—could provide some of the reason for why people have taken chaos explanations to pose radical challenges to traditional philosophical accounts of explanation. In particular, Smith takes issue with Kellert’s view that chaos explanations are, in the main, qualitative rather than quantitative. 
He points out that we can calculate Lyapunov exponents, bifurcation points as control parameters change, and even use chaos models to predict the values of evolving dynamical variables—the “individual trajectory picture”—at least for some short time horizon. So perhaps there is more quantitative information to be gleaned from chaos models than Kellert lets on (this is particularly true if we turn to statistical methods of prediction). Furthermore, Smith argues that standard physics explanations, along with quantitative results, always emphasize the qualitative features of models as well. We might agree, then, that there is nothing particularly special or challenging about chaos explanations relative to other kinds of explanation in physics regarding qualitative/quantitative understanding. What does seem to be the case is that chaos models—and nonlinear dynamics models generally—make the extraction of useful quantitative information more difficult. What is exhibited by methodological approaches in chaos is not that different from what happens in other areas of mathematical physics, where the mathematics is intractable and the physical insight comes with a struggle. Moreover, there is no guarantee that in the future we will not make some kind of breakthrough placing chaos models on a much sounder first-principles footing, so there does not seem to be much substance to the claim that chaos explanations are different in kind from other modes of explanation in mathematical physics.
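Smith's point that Lyapunov exponents are calculable quantities can be made concrete. The sketch below (the function name and parameter choices are mine, not the source's) estimates the exponent of the logistic map as the long-run orbit average of \(\ln|f'(x)|\); at \(r = 4\) the known value is \(\ln 2 \approx 0.693\).

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.2,
                      n: int = 200_000, burn: int = 1_000) -> float:
    """Estimate the Lyapunov exponent of x -> r x (1 - x) as the
    long-run average of ln|f'(x)| = ln|r(1 - 2x)| along an orbit."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic(4.0))   # ≈ 0.693 = ln 2: positive, hence chaotic
print(lyapunov_logistic(3.2))   # negative: a stable period-2 orbit, no chaos
```

A positive estimate is precisely the quantitative signature of SD that Smith has in mind, alongside the qualitative reading of the dynamics.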
Kellert’s discussion of “dynamic understanding” and Peter Smith’s critical remarks both overlap in their agreement that various robust or universal features of chaos are important for chaos studies. The idea of focusing on universal features such as patterns, critical numbers, and so forth suggests that some form of unification account of explanation is what is at work in chaos explanations: group together all examples of chaotic behavior via universal patterns and other features (e.g., period-doubling sequences). There is disagreement on the extent to which the methodologies of current chaos research present any radically new challenges to the project of scientific explanation. Even if there is not something radically new here regarding scientific explanation, the kind of understanding provided by chaos models is challenging to clarify. One problem is that this “dynamic understanding” appears to be descriptive only. That is, Kellert seems to be saying we understand how chaos arises when we can point to a period-doubling sequence or to the presence of a strange attractor, for instance. But this appears to be only providing distinguishing marks for chaos rather than yielding genuine insight into what lies behind the behavior, i.e., the cause of the behavior. Kellert eschews causes regarding chaos explanations, and there is a fairly straightforward reason for this: The simplified models of chaos appear to be just mathematics (e.g., one-dimensional maps) based on the original nonlinear equations. In other words, it looks as if the causes have been squeezed out! So the question of whether causal, unificationist or some other approach to scientific explanation best captures chaos research remains open. Moreover, since all these simplified models use roughly the same mathematics, why should we think it is surprising that we see the same patterns arise over and over again in disparate models?
After all, if all the traces of processes and interactions—the causes—have been removed from chaos models, as Kellert suggests, why should it be surprising that chaos models in physics, biology, economics and social psychology exhibit similar behavior? If it really boils down to the same mathematics in all the models, then what is it we are actually coming to understand by using these models? On the other hand, perhaps chaos studies are uncovering universal patterns that exist in the actual world, not just in mathematics. Identifying these universal patterns is one thing; explaining them is another. Quantum chaos, or quantum chaology as it is better called, is the study of the relationship between chaos in the macroscopic or classical domain and the quantum domain. The implications of chaos in classical physics for quantum systems have received some intensely focused study, with questions raised about the actual existence of chaos in the quantum domain and the viability of the correspondence principle between classical and quantum mechanics, to name the most provocative. Before looking at these questions, there is the thorny problem of defining quantum chaos. Establishing an agreed definition of quantum chaos is actually more challenging than doing so for classical chaos (§1). Recall that there were several subtleties involved in attempting to arrive at a consensus definition of classical chaos. One important proposal for a necessary condition is the presence of some form of stretching and folding mechanism associated with a nonlinearity in the system. However, since Schrödinger’s equation is linear, quantum mechanics is a linear theory. This means that quantum states starting out initially close remain just as close (in Hilbert space norm) throughout their evolution. So in contrast to chaos in classical physics, there is no separation (exponential or otherwise) between quantum states under Schrödinger evolution.
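The linearity point can be made concrete with a minimal two-level example. The Hamiltonian \(H = \sigma_x\) with \(\hbar = 1\) is my illustrative choice, not the source's: because Schrödinger evolution is unitary, the Hilbert-space distance between any two states is preserved exactly, so "nearby" quantum states never diverge, however long we evolve them.

```python
import numpy as np

SX = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # Pauli sigma_x

def evolve(psi: np.ndarray, t: float) -> np.ndarray:
    """U(t) = exp(-i H t) for H = sigma_x; since sigma_x**2 = I this
    equals cos(t) I - i sin(t) sigma_x."""
    U = np.cos(t) * np.eye(2, dtype=complex) - 1j * np.sin(t) * SX
    return U @ psi

psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([np.sqrt(1.0 - 1e-6), np.sqrt(1e-6)], dtype=complex)  # nearby state

d0 = np.linalg.norm(psi1 - psi2)
for t in (1.0, 100.0, 10_000.0):
    dt = np.linalg.norm(evolve(psi1, t) - evolve(psi2, t))
    print(abs(dt - d0))   # ≈ 0: the distance never grows
```

Contrast this with the classical case, where nearby initial conditions of a chaotic map separate exponentially.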
The best candidates for a necessary condition for chaos appear to be missing from the quantum domain. Instead researchers study the quantization of classical chaotic systems, and these studies are known as quantum chaology: “the study of semiclassical, but nonclassical, phenomena characteristic of systems whose classical counterparts exhibit chaos” (Berry 1989, p. 335). It turns out that there are a number of remarkable behaviors exhibited by such quantized systems that are interesting in their own right. It is these behaviors that raise questions about what form chaotic dynamics might take in the quantum domain (if any) and the validity of the correspondence principle. Moreover, these studies reveal further evidence that the relationship between the quantum and classical domains is subtle indeed. Researchers in quantum chaology have focused on universal statistical properties that are independent of the particular quantum systems under investigation. Furthermore, studies focus on so-called simple quantum systems (i.e., those that can be described by a finite number of parameters or a finite amount of information). The kinds of statistical properties studied in such systems include the statistics of energy levels and semi-classical structures of wave functions. These statistical properties are relevant for quantum state transitions, ionization and other quantum phenomena found in atomic and nuclear physics, the solid state physics of mesoscopic systems and even quantum information. Some typical systems studied are quantum billiards (particles restricted to two-dimensional motions), the quantum kicked rotor, a single periodically driven spin and coupled spins. Often, iterated maps are used in investigating quantum chaos just as in classical chaos (§1.2.5 above). Billiards are a particularly well-studied family of models. Think of a perfectly flat billiard table and assume that the billiard balls bounce off the edges of the table elastically.
Such a model table at the macroscopic scale of our experience, where the balls and edges are characterized by classical mechanics, is called a classical billiard. Lots of analytic results have been worked out for classical billiards, so this makes billiards a very attractive model to study. A chaotic billiard is a classical billiard where the conditions lead to chaotic behavior of the balls. There is a wealth of results for chaotic billiards, too. Such analytical and computational riches have made quantum versions of billiards workhorses for studying quantum chaology, as will be seen below. One can produce quantum billiards by using Schrödinger’s equation to describe particles reflecting off the boundaries (where one specifies that the wave function for the particles is zero at a boundary), or one can start with the equations describing a classical billiard and quantize the observables (e.g., position and momentum), yielding quantized billiards. To organize the discussion, isolated systems, where the energy spectra are discrete, will be treated first, followed by interacting systems, where energy spectra are continuous. Although whether the energy spectra are discrete or not is not crucial to quantum chaology, whether a quantum system is isolated or not has been argued to be potentially important to whether chaos exists in the quantum domain. One difference between classical chaotic dynamics and quantum dynamics is that the state space of the former supports fractal structure while the state space of the latter does not. A second difference is that classical chaotic dynamics has a continuous energy spectrum associated with its motion. As previously noted, classical chaos is considered to be a property of bounded macroscopic systems. In comparison, the quantum dynamics in bounded, isolated systems has a discrete energy spectrum associated with its motion.
Moreover, phenomena such as SDIC could only be possible in quantum systems that appropriately mirror classical system behaviors. From semi-classical considerations, Berry et al. (1979) showed that semi-classical quantum systems (see below for how such systems are constructed) could be expected to mirror the behavior of their corresponding classical systems only up to the Ehrenfest time \(t_{E}\), of the order \(\ln(2\pi/h)\) secs, an estimate also known as the log time, reflecting the exponential instability of classical chaotic trajectories. In these semi-classical studies, \(h/2\pi\) often is treated as a parameter that is reduced in magnitude as the classical domain is approached. On this view, the smaller \(h/2\pi\), the more “classical” the system’s behavior becomes. For instance, assuming the value of Planck’s constant in MKS units, \(t_{E} \sim 80\) secs. As Planck’s constant decreases, \(t_{E}\) grows. In nonchaotic classical systems the orbits in state space are well isolated and everything is well behaved for very long times. In contrast, for bound chaotic systems the orbits start coalescing in increasing numbers on the scale of \(t_{E}\), implying that the semi-classical approximation fails by \(t_{E}\). On the other hand, a Gaussian wave packet centered on a classical trajectory is thought to be able to shadow that trajectory up to \(t_{E}\) before becoming too spread out over the energy surface, since \(t_{E}\) is a measure of when quantum wave packets have spread too much to mimic classical trajectories and the Ehrenfest theorem breaks down. So there are two effects at work in semi-classical systems over time: (1) the coalescing of classical chaotic trajectories and (2) the spreading of quantum wave packets. Between the lack of nonlinearity in quantum mechanics and the latter two effects, things look rather bleak for finding close quantum analogs of classical chaos.
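The numbers quoted above are easy to reproduce. The following sketch (not from the source; units are suppressed, and the prefactor fixed by the classical Lyapunov exponent is set to 1, both assumptions made purely for illustration) evaluates the estimate \(t_{E} \sim \ln(2\pi/h)\) and shows how slowly it grows as \(h\) is treated as a shrinking parameter:

```python
import math

# Planck's constant in MKS (SI) units, J·s
H_PLANCK = 6.626e-34

def ehrenfest_time(h):
    """Log-time estimate t_E ~ ln(2*pi/h), as quoted in the text.

    Units are suppressed; the prefactor fixed by the classical
    Lyapunov exponent is set to 1 here (an illustrative assumption).
    """
    return math.log(2 * math.pi / h)

print(ehrenfest_time(H_PLANCK))  # ~78, i.e. of the order of 80 secs

# Treating h as a parameter shrinking toward the classical domain:
# a millionfold reduction in h adds only ~14 to t_E.
for scale in (1.0, 1e-3, 1e-6):
    print(scale, ehrenfest_time(H_PLANCK * scale))
```

The logarithm is the whole story here: even drastic reductions of \(h\) buy only modest extensions of the shadowing time, which is why the Ehrenfest time is so short compared to classical expectations.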
While \(t_{E}\) represents an important limit for how long quantum state vectors can be expected to shadow classical trajectories, there are interesting behaviors in the semi-classical quantum models corresponding to classical chaotic systems on longer time scales. By performing some more detailed analysis, Tomsovic and Heller (1993) showed that comparing the full quantum solutions with suitably chosen semi-classical solutions for some billiards problems provided excellent agreement well after \(t_{E}\), including fine details of the energy spectra. For their techniques, semi-classical mechanics remains accurate for modeling quantum systems up to a time that scales with \((h/2\pi)^{-1/2}\). The vast majority of these quantum chaology studies focus on three questions:

1. Can classically chaotic systems be quantized?
2. Are there any quantum mechanical manifestations, “precursors,” of classical chaos?
3. Is there a rigorous distinction between chaotic and non-chaotic quantum systems?^[10]

The first two questions focus on different directions of research, both related to what is known as semi-classical mechanics. In the first, investigation starts with a classical chaotic system and seeks to quantize it to study its quantum behavior. To quantize a classical model, one replaces functions in the equations of motion with their corresponding quantum operators. Here, there are various results demonstrating that strongly ergodic classical billiards, when quantized, exhibit quantum ergodicity. But this is not the same as showing that a classical chaotic system, when quantized, exhibits chaotic behavior. There are no examples of the latter due to the reasons listed at the beginning of this subsection. Furthermore, there are interesting numerical results on quantum interference in quantized classical billiards (Casati 2005). Consider a double slit with the source enclosed in a two-dimensional wave resonator with the shape of a classical billiard.
Adjust the Gaussian wave packet’s initial average energy to be that of the 1600th excited state of the quantized billiard and send it toward the double slit opening of the resonator. Let the slit width be three de Broglie wavelengths, and suppose that the wave packet is sharply peaked in momentum so that its spatial spread, by the Heisenberg relations, is the width of the resonator. If the shape of the resonator corresponds to a classical chaotic billiard, then there is almost no quantum interference. In the classical case, the multiply reflected waves would become randomized in phase. On the other hand, if the shape of the resonator corresponds to a classical regular billiard, then the well-known interference patterns emerge. So whether the classical billiard is chaotic or not determines whether the quantized quantum analogue exhibits interference. The second question starts with a quantum system that has some relationship with a classical chaotic system via an appropriate semi-classical limit. The classical-to-quantum direction often follows the pioneering work of Martin Gutzwiller (1971) in quantizing the classical chaotic system. The quantum-to-classical direction is much more difficult and fraught with conceptual problems. Standard approaches, here, are to start with a quantum analogue to a classical chaotic system and then derive a semi-classical system that represents the quantum system in some kind of classical limit (Berry 1987 and 2001; Bokulich 2008). This work results in statistics of suitably normalized energy levels for the semi-classical systems with universal features. For classical systems that behave non-chaotically, the energy levels of the semi-classical system approximate a Poisson distribution, where small spacings dominate.
In contrast, when the classical system behaves chaotically, the energy levels of the semi-classical system take on a distribution originally derived by Eugene Wigner (1951) to describe nuclear energy spectra (for discussion see Guhr et al. 1998). These latter distributions depend only on some symmetry properties (e.g., the presence or absence of time-reversal symmetry in the system).^[11] Moreover, the presence of periodic orbits in the analog classical systems largely determines the properties of semi-classical systems (Berry 1977). Interestingly, many classically chaotic model systems also display universal energy level fluctuations that are well described by Wigner’s methods (Casati, Guarneri and Valz-Gris 1980; Bohigas, Giannoni and Schmit 1984). This has led to the quantum chaos conjecture:

(Quantum Chaos Conjecture) The short-range correlations in the energy spectra of semi-classical quantum systems which are strongly chaotic in the classical limit obey universal fluctuation laws based on ensembles of random matrices without free parameters.

This conjecture is motivated by the accumulated evidence over the decades that the energy spectra of very simple non-integrable classically chaotic systems contain universal level fluctuations described by random matrix theory. Given random matrix theory’s successful application to nuclear spectra and these classical results, the question of whether there are analogous results for quantum systems having classical chaotic systems as an appropriate limit seems reasonable. The conjecture basically means that the energy spectra for the semi-classical analogues of classical chaotic systems are structurally the same as those classical systems. This conjecture remains unproven, though it appears to hold for the case of classical chaotic billiards and their semi-classical counterparts.
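The Poisson-versus-Wigner contrast is easy to see numerically. The sketch below (an illustration not taken from the source; serious studies first “unfold” the spectrum to unit mean density, a step skipped here for brevity) compares nearest-neighbour spacings of a random matrix from the Gaussian orthogonal ensemble (GOE), where level repulsion suppresses small spacings, with spacings of uncorrelated (Poissonian) levels, where small spacings dominate:

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_spacings(n):
    """Nearest-neighbour spacings of an n x n GOE random matrix,
    normalized to unit mean (crude: no spectral unfolding)."""
    a = rng.normal(size=(n, n))
    h = (a + a.T) / 2.0                       # real symmetric (GOE)
    s = np.diff(np.sort(np.linalg.eigvalsh(h)))
    return s / s.mean()

def poisson_spacings(n):
    """Spacings of n uncorrelated levels, normalized to unit mean."""
    s = np.diff(np.sort(rng.uniform(0, n, size=n)))
    return s / s.mean()

goe = goe_spacings(400)
poi = poisson_spacings(400)

# Level repulsion: tiny spacings are rare for GOE but common for
# the uncorrelated (Poisson) spectrum.
print((goe < 0.1).mean(), (poi < 0.1).mean())
```

The fraction of spacings below one tenth of the mean comes out far smaller for the GOE spectrum than for the Poisson one, which is the signature the conjecture trades on.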
Since this is a conjecture about semi-classical systems, this means that the structure of the energy spectra of semi-classical systems is strictly dependent on chaos in the corresponding classical systems, not on any chaotic behavior in quantum or semi-classical systems. One can raise serious questions about these quantum-to-classical studies, however. The semi-classical systems are derived using various asymptotic procedures (Berry 1987 and 2001), but these procedures do not yield the actual classical systems that are supposed to be the limiting cases of the quantum systems. More importantly, the actual kinds of limiting relations between the quantum and classical domains are different than are typically considered in semi-classical approaches (§6.3 below). The relationship between these mathematical results and actual quantum and classical physical systems is tenuous at best, leaving us, again, with the worry that chaos is an artifact of the mathematics (§5). One of the reasons the quantum chaos conjecture remains unproven is likely that inappropriate notions of “the classical limit” are being used. Even though the energy level statistics for quantum billiards, the semi-classical counterparts to classical billiard systems, share universal properties, the actual behavior of the trajectories in the two systems is substantially different (under Schrödinger evolution Hilbert space vectors never diverge from one another). Another fundamental problem is that classical chaos is a function of nonlinearities whereas Schrödinger’s equation describing quantum systems is linear. Empirical investigation of quantum chaology, hence, usually focuses on externally driven quantum systems (see next section) and scattering processes (e.g., quantum billiards). The focus in these studies is on the unpredictability of the time evolution of such systems and processes.
Although unpredictability is a feature of classical chaotic systems, there are many reasons why the time evolution of quantum systems may be just as unpredictable (e.g., if commuting observables undergo complicated dynamics). It is not clear that unpredictability in externally driven quantum systems and scattering processes is due to any form of chaos. Quantum systems do sometimes exhibit bifurcations. For instance, rotating molecules under some circumstances will undergo several consecutive qualitative changes that are interpreted as bifurcations (Zhilinskií 2001). Whether there is a series of bifurcations in such systems that could eventually lead to a transition to some form of quantum chaotic behavior is currently unknown. At best, quantum chaology in isolated systems has produced results that have interesting relationships with integrable and non-integrable classical systems and some important experimental results (e.g., Bayfield and Koch 1974; Casati, Chirikov, Izrailev and Ford 1979; Fishman, Grempel and Prange 1982; Casati, Chirikov and Shepelyanski 1984; Berry 2001). These relationships are all statistical as indicated. One issue with studying isolated, closed quantum systems is that the state spaces of these systems do not allow the formation of the state-space structures typically associated with classical chaotic systems. There are some exceptions discussed in the literature, but it is unclear whether these are genuine cases of chaos. One example discussed in the literature is a quantum Hamiltonian operator for an \(N\)-dimensional torus: \(\frac{1}{2}(g_{k} n_{k} + n_{k} g_{k})\), where \(n_{k} = -i\partial/\partial\theta_{k}\), \(\theta_{k}\) is an angle variable, and \(d\theta_{i}/dt = g_{i}(\theta_{k})\) for \(i, k = 1, 2, 3, \ldots, N\) (Chirikov, Izrailev and Shepelyanski 1988, p. 79). The probability density for momentum grows exponentially fast, which seems to parallel SDIC for trajectories in the classical case.
Again, it is far from clear that this is chaos; there is no principled reason for considering the exponential growth in some quantity as a mark of chaos (recall the example in the first paragraph of §1.2.6 above). Building on the numerical results of the double-slit/billiard wave resonator described above, it may be possible to apply quantum chaology to the quantum measurement problem. Typically, models for quantum measurement describe the destruction of coherent quantum states as an effect of external noise or the environment. These quantum chaology results could allow the development of a dynamical theory of quantum decoherence due to the interaction between a classical chaotic (or at least non-integrable) system and coherent quantum states, producing the incoherent mixtures observed in measurement devices. These considerations lead us to interacting systems. The failure to find the features of classical chaos in quantum systems is usually diagnosed as being due to the linear nature of Schrödinger’s equation (classical chaos appears to require nonlinearity as a necessary condition). And the evidence from isolated quantum systems substantiates this diagnosis, as just discussed. What about interacting quantum systems (which sometimes get called open quantum systems)? At first glance, one can argue that the linearity of Schrödinger’s equation implies that nearby quantum states will always remain nearby as they evolve in time. However, some alternative possibilities for chaotic behavior have been proposed for interacting quantum systems. Fred Kronz (1998, 2000) has argued that focusing on the separable/nonseparable Hamiltonian distinction is more appropriate than nonlinearity for the question of quantum chaos (§1.2.7 above). Although Schrödinger’s equation is linear, there are many examples of nonseparable Hamiltonians in quantum mechanics.
A prime example would be the Hamiltonian describing an interaction between a measurement device and a quantum system. In such situations, the quantum system-measurement apparatus compound system can evolve from a tensor product state to a nonseparable entangled state represented by an irreducible superposition of tensor product states. A second ubiquitous example would be the famous Einstein-Podolsky-Rosen correlations. Although many, such as Robert Hilborn (1994, 549–569), have argued that the unitary evolution of quantum systems makes SDIC impossible for quantum mechanics, these arguments do not take into account that interacting quantum systems typically have nonseparable Hamiltonians. For interacting quantum systems, Schrödinger’s equation is no longer valid, and one typically turns to so-called master equations to describe evolution (Davies 1976). Such equations typically have nonseparable Hamiltonians. In general, the time evolution of the components of such interacting systems is not unitary, meaning that there is no formal prohibition against SDIC. Moreover, an important contrast between isolated and interacting quantum systems is that while the former have discrete energy spectra, the latter have continuous spectra. A continuous energy spectrum is characteristic of classical systems. Nevertheless, work in interacting quantum systems largely has only uncovered the same kinds of universal statistical characteristics of energy spectra and fluctuations as found in isolated systems (e.g., Guhr, Müller-Groeling and Weidenmüller 1998; Ponomarenko, et al. 2008; Filikhin, Matinyan and Vlahovic 2011). It is often the case that the quantum chaology literature uses a broader notion of chaos as behavior that “cannot be described as a superposition of independent one-dimensional motions” (Ponomarenko, et al. 2008, p. 357); in other words, a form of inseparability.
Still, the chaology in interacting quantum systems looks to be the same as in isolated systems: “Quantum mechanically, chaotic systems are characterized by distinctive statistics of their energy levels, which must comply with one of the Gaussian random ensembles, in contrast to the level statistics for the nonchaotic systems described by the Poisson distribution” (Ponomarenko, et al. 2008, p. 357). This is largely due to the fact that quantum chaology is closely tied to universal statistical patterns in quantum systems that share some relationship with classical chaotic counterpart systems. One of the measures used to detect chaotic behavior in classical systems is a positive Kolmogorov entropy, which can be related to Lyapunov exponents (e.g., Atmanspacher and Scheingraber 1987). Unfortunately, there are no appropriate analogues of Lyapunov exponents in quantum systems. There are alternative entropy measures that could be used, for instance, the von Neumann or Connes-Narnhofer-Thirring entropies. However, there are currently many open questions about which, if any, of these entropy measures is the appropriate quantum analog (likely they are each appropriate for particular research purposes). Moreover, while these measures have a relationship to the statistics of energy levels and states characteristic of quantum chaology, there currently are no other known features of quantum systems that these measures could relate to chaotic behavior observed in classical trajectories. There is an interesting physical model of a charged particle in a unit square with periodic boundary conditions, subject to an external electromagnetic field that occasionally gives it a kick (it turns on and off). Mathematically, this model is a generalization of the quantized Arnold cat map (Arnold and Avez 1968; Weigert 1990; Weigert 1993). Physically, it represents a charged particle confined to an energy surface shaped like a torus that receives kicks from an external field.
The classical model has trajectories that exhibit the stretching and folding process that seems to be a necessary condition for chaos, has positive Lyapunov exponents, and is algorithmically complex^[12], one of the measures used to detect classical chaos. Its trajectories have many of the marks of chaos. For the quantum model, the kick of the electromagnetic field has the effect of mapping the quantum labels of state vectors that are initially close together to labels which do not necessarily ever come close again. This is somewhat reminiscent of the divergence of classical chaotic trajectories, except that it is the change in the state labels that plays the role of the classical trajectories. This leads to an absolutely continuous quasi-energy spectrum (the quasi-energy is defined as the set of numbers representing the “energy” in the evolution operator acting on state vector labels). The expectation value of the particle position becomes unpredictable with respect to the initial state label after long times and one can show that the sequence of shifts of the quantum state labels is algorithmically complex. Moreover, a “distance” between the labels can be defined that increases exponentially with time. This is the most convincing example in quantum chaology of behavior analogous to classical chaos. However, there are issues that raise questions about whether the behavior of the sequences of quantum state labels is enough to qualify the system as chaotic. For one thing, the quantum chaos conjecture is inapplicable to this system due to the continuous spectrum of the quasi-energy. More importantly, as pointed out above, exponential divergence is neither necessary nor sufficient to characterize a system as chaotic, and neither is algorithmic complexity. There are many examples of systems that are algorithmically complex but are not chaotic.
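For contrast with the quantum state-label behavior just described, the classical side of the comparison is easy to exhibit. The sketch below (an illustration, not from the source) iterates the classical Arnold cat map on the unit torus and shows two nearby initial conditions separating at the rate set by the map’s Lyapunov exponent, \(\ln((3+\sqrt{5})/2) \approx 0.96\):

```python
import math

# Classical Arnold cat map on the unit torus:
#   (x, y) -> (2x + y, x + y)  (mod 1)
def cat(p):
    x, y = p
    return ((2 * x + y) % 1.0, (x + y) % 1.0)

def torus_dist(p, q):
    """Shortest distance between two points on the unit torus."""
    dx = abs(p[0] - q[0]); dx = min(dx, 1 - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, 1 - dy)
    return math.hypot(dx, dy)

# Lyapunov exponent: log of the largest eigenvalue of [[2, 1], [1, 1]].
lyap = math.log((3 + math.sqrt(5)) / 2)   # ~0.9624

# Two initial conditions separated by 1e-9:
a, b = (0.3, 0.2), (0.3 + 1e-9, 0.2)
for _ in range(15):
    a, b = cat(a), cat(b)

d = torus_dist(a, b)
# After 15 steps the separation has grown by roughly exp(15 * lyap),
# about a millionfold: from 1e-9 up to the order of 1e-3.
print(d)
```

This exponential separation of genuinely close trajectories is what the exponentially growing “distance” between quantum state labels is only loosely reminiscent of: in the quantum model it is discrete labels, not points in a continuous state space, that diverge.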
Long randomly generated bit strings, no matter how they were obtained, are algorithmically complex but need not have any relationship to chaos. The behavior of the quantum labels for the kicked particle is irregular to be sure, but the actual temporal evolution of the state vectors is algorithmically compressible, so it is not irregular in any way. The kind of behavior observed in quantum chaology involves the statistics of energy states in quantum systems that have some kind of relationship to classical chaotic systems (e.g., by quantizing the latter systems). Important features of classical chaos, such as SDIC and the period doubling route to chaos, appear to be absent from quantum systems. This situation has led to arguments that the correspondence principle between quantum and classical mechanics fails and that the former may be incomplete (Ford 1992). The correspondence principle can be understood broadly to mean that as a quantum system is scaled up to macroscopic size, its behavior should become more like a classical system. Alternatively, the behavior of a quantum model should reproduce the behavior of macroscopic classical models in the limit of large quantum numbers. The correspondence principle is sometimes conceived as letting Planck’s constant go to zero. Nevertheless, all these conceptions are terribly inadequate. Since \(h\) is a constant of nature, it can never change value, much less go to zero. One always has to speak of relevant limits of ratios of the classical to quantum actions, for example, which always involve Planck’s constant. Moreover, these limits are singular, meaning that the smooth behavior of a quantity or pattern is disrupted, often by becoming infinite (Friedrichs 1955; Dingle 1973; Primas 1998). So there is no straightforward sense in which quantum models become increasingly similar to macroscopic systems as quantum numbers get large.
Joseph Ford offers a different construal of the correspondence principle: “any two valid physical theories which have an overlap in their domains of validity must, to relevant accuracy, yield the same predictions for physical observations.” In the case of quantum and Newtonian mechanics, this means that “quantum mechanics must, in general agree with the predictions of Newtonian mechanics when the systems under study are macroscopic” (1992, p. 1087). Unfortunately, he gives no discussion of what “domains of validity” means or in what sense quantum and Newtonian mechanics have some overlap in their domains of validity. What he claims in his American Journal of Physics article is that “The very essence of correspondence lies in the notion that quantum mechanics can describe events in the macroscopic world without any limit taking. Were this not the case, then there would be no overlap in the quantal and classical regions of validity” (1992, p. 1088). Sir Michael Berry is even more direct: “all systems,” even our orbiting moon, “obey the laws of quantum mechanics” (Berry 2001, p. 42). The upshot for chaos is that “if there is chaos (however defined) in the macroscopic world, quantum mechanics must also exhibit precisely the same chaos, else quantum mechanics is not as general a theory as popularly supposed” (Ford 1992, p. 1088). As seen above, classical chaotic behavior is not recovered in quantum chaology, and this leads to a dilemma: Either the correspondence principle is false or quantum mechanics is incomplete. Ford, as would most physicists, rejects the first horn of the dilemma. Therefore, the problem must lie with quantum mechanics: Its lack of chaos reveals some incompleteness in the theory. Something is missing.

This dilemma is false, however. The way Ford (and to some degree Berry among others) describes things bespeaks a common misconception of the relationship between the quantum and classical domains.
Much as he makes of the subtlety of limiting relations—and they are much more subtle than he realizes—his discussion of the correspondence principle actually turns on an overly simple relationship between the quantum and classical domains. That overly simple relationship presupposes that quantum mechanics fully explains classical phenomena, or, alternatively, that quantum mechanics reduces the classical domain in an appropriate limit. Under such a presupposition, if classical chaos either does not exist in quantum mechanics or if the latter cannot explain or reproduce classical chaos, then it appears that there is some inadequacy with quantum mechanics. The relationship between the quantum and classical domains is nontrivial. First, it does not involve a “classical limit,” but a series of limits of the ratio of quantum observables involving Planck’s constant and other physical observables going to zero (e.g., relevant classical and quantum actions), or limits involving the separation of nuclear and electronic frames of motion (in the case of chemistry), among others. All of these limits involve singular asymptotic series; hence, the relationship between quantum phenomena and classical phenomena is not one involving anything like bridge laws relating the two domains as Nagelian and other forms of reduction would require. There is a change in the character of the states and observables going from the quantum to the classical domains (Bishop 2010b). The classical states and observables are neither a function of nor straightforwardly related to intrinsic states and observables in quantum mechanics. Second, even starting with the quantum domain, there are different classical worlds that result from taking these various limits in different orders. Since these limits correspond to different physical transitions, changing the order of the limits changes the order of physical transitions, yielding physically inequivalent macroscopic domains.
Given the physical incompatibility among these different macroscopic worlds, the actual physical transitions between the quantum and classical must occur in a particular order to recover the classical domain of our experience. Of course, there is much discussion of the “approximately classical” or “quasi-classical” trajectories for quantum systems that can be derived from semi-classical considerations (Berry 1987 and 2001). But such quasi-classical behavior is exhibited only for limited times (except for overly idealized models) and under very special initial conditions (Pauli 1933, p. 166) for ground states only (excited energy eigenstates never show classical behavior). Appeal to Ehrenfest’s theorem is of no help here, because all this theorem guarantees for such very special, short-lived dynamics is that the usual physics practice of averaging the values of the quantum-mechanical observables tends to wash out the errors or differences between the classical and quantum calculations for contextually relevant situations and times. Moreover, the theorem is neither necessary nor sufficient for classical behavior. For instance, applying Ehrenfest’s theorem to a quantum harmonic oscillator yields average quantities for the position and momentum that track with the classical quantities for some brief time. Yet, the quantum oscillator’s discrete states yield thermodynamic properties very different from a classical oscillator. So satisfying the theorem is insufficient to guarantee classical behavior. Third, the emergence of our classical world is not merely a matter of environmental decoherence (e.g., Omnès 1994; Berry 2001; Wallace 2012). For one thing, there is no context-free limit of infinitely many degrees of freedom because this limit always has uncountably infinitely many physically inequivalent representations.
Moreover, it is simply false that an improper mixture of quantum states “allows one to interpret the state of the [quantum] system in terms of a classical probability distribution,” such that “it is useful to regard ‘mixed states’ as effectively classical,” so that “one can interpret the system described by [a nonpure density operator] in terms of a classical ‘mixture’ with the exact state of the system unknown to the observer” (Zurek 1991, 46–47). Impure quantum states can be interpreted as classical mixtures if and only if their components are described by disjoint states. For a classical mixture of two pure states (e.g., water and oil), the pure states are disjoint if and only if there exists a classical observable such that the expectation values with respect to these states are different. It is this disjointness that makes it possible to distinguish states in a classical manner. In summary, there is nothing in the quantum domain by itself that determines the character of the classical domain (though the former provides some necessary conditions for the latter). Hence, classical chaos, along with many other classical features, is emergent in a more complex, subtle sense than Ford and others allow. The correspondence principle must reflect emergent classicality if it is to be a viable principle, which means that the implicit assumption of reductionism in Ford’s discussions of quantum chaology should be abandoned. Once the reductionist assumption is removed, the disparity between quantum chaology and classical chaos no longer calls an appropriately formulated correspondence principle into question. This resolves the first horn of the dilemma. The second horn of the dilemma likewise is resolved. There is no reason to suspect that there is some kind of inadequacy in quantum mechanics if features such as chaos are emergent in the classical domain. Neither the generality nor the validity of quantum mechanics is in question. 
Nor does the complex, subtle emergent relationship between the quantum and classical domains imply that the two domains are nonoverlapping or disjoint. Rather, the overlap between the quantum and classical is partial and nontrivial. Quantum mechanics is universally applicable, but this in no way implies that it alone universally governs classical behavior. It contributes some of the necessary conditions for classical properties and behaviors, but no sufficient conditions. One indicator of this is that classical mechanics is formulated in terms of continuous trajectories of individual particles through spacetime, while quantum mechanics is formulated in terms of probabilities and wave functions. There are deep conceptual differences between the classical and the quantum.^[13] This suggests that we should not expect individual continuous trajectories to result from quantum mechanics in contextually inappropriate limits nor that quantum mechanics should exhibit the full range of classical behaviors, contrary to Ford and others. Instead, we should expect that quantum probabilities recover the classical probabilities in the contextually appropriate situations and that there should be some interesting relationships between quantum and classical properties and behaviors. The interesting statistical regularities discovered in quantum chaology fit with this emergent, nontrivial overlapping relationship nicely.

There have been some discussions regarding the wider implications of chaos for other domains of philosophical inquiry. Three of the more thought-provoking ones will be surveyed here. Recall that mathematically, chaos is a property of dynamical systems which are deterministic (§1.2.1). Since the 18th century, the best models of and support for metaphysical determinism were thought to be the determinism of theories and models in physics.
But this strategy is more problematic and subtle than has been typically realized (e.g., due to the difficulties with faithful models, §3). So perhaps it is not so surprising that some have argued that chaos reflects some form of indeterminism; hence, the world is not metaphysically deterministic. Of course, chaotic systems are notorious for their unpredictability, and some, such as Karl Popper (1950), have argued that unpredictability implies indeterminism. Yet this is to identify determinism (an ontological property) with predictability (an epistemic property). An example of someone who has pushed the claim that chaotic behavior implies that determinism fails for our world is physicist turned Anglican priest, John Polkinghorne: “The apparently deterministic proves to be intrinsically unpredictable. It is suggested that the natural interpretation of this exquisite sensitivity is to treat it, not merely as an epistemological barrier, but as an indication of the ontological openness of the world of complex dynamical systems” (1989, p. 43). Giving a critical realist reading of epistemology and ontology, Polkinghorne seeks to link the epistemological barrier with an ontological failure of determinism because of ontological openness to influences not fully accounted for in our physics descriptions. Nevertheless, the mathematical properties of dynamical systems (e.g., their deterministic character) present a serious problem with this line of reasoning. Determinism as unique evolution appears to be preserved in our mathematical models of chaos, which serve as our ontic descriptions of chaotic systems.^[14] What would it take to raise questions about the determinism of actual-world systems? For nonlinear dynamical systems, their presumed connection with target systems is one place to start. Mathematical modeling of actual-world systems requires distinctions between variables and parameters as well as between systems and their boundaries.
However, where linear superposition is lost, such distinctions become problematic (Bishop 2010a). This situation not only raises questions about our epistemic access to systems and models in the investigation of complex systems, but also about inferring the supposed determinism of target systems based on these models. Moreover, if the system in question is nonlinear, then the faithful model assumption (§1.2.3) raises difficulties for inferring the determinism of the target system from the deterministic character of the model. Consider the problem of the mapping between the model and the target system. There is no guarantee that this mapping is one-to-one even for the most faithful model. The mapping may actually be a many-to-one or a many-to-many relation. A one-to-one relationship between a deterministic model and target system would make the inference from the deterministic character of our mathematical model to the deterministic character of the target system more secure. However, a many-to-one mapping raises problems. One might think this can be resolved by requiring that the entire model class in a many-to-one relation be deterministic. Such a requirement is nontrivial, though. For instance, it is not uncommon for different modeling groups to submit proposals for the same project, where some propose deterministic models and others propose nondeterministic models. Nonlinear models render any inferences from physics to metaphysical determinism shaky at best. A number of authors have looked to quantum mechanics to help explain consciousness and free will (e.g., Compton 1935; Eccles 1970; Penrose 1991, 1994 and 1997; Beck and Eccles 1992; Stapp 1993; Kane 1996; quantum consciousness). Still, it has been less clear to many that quantum mechanics is relevant to consciousness and free will. For example, an early objection to quantum effects influencing human volitions was offered by philosopher J. J. C. Smart (1963, pp.
123–4).^[15] Even if indeterminism were true at the quantum level, Smart argued, the brain would remain deterministic in its operations because quantum events are insignificant by comparison. After all, a single neuron is known to be excited by on the order of a thousand molecules, each molecule consisting of ten to twenty atoms. Quantum effects, though substantial when focusing on single atoms, are presumed negligible when focusing on systems involving large numbers of molecules. So it looks like quantum effects would be too insignificant in comparison to the effects of thousands of molecules to play any possible role in consciousness or deliberation. Arguments such as Smart’s do not take into consideration the possibility of amplifying quantum effects through the interplay between SDIC at the level of the macroscopic world on the one hand and quantum effects on the other (see §4). SD arguments purport to demonstrate that chaos in classical systems can amplify quantum fluctuations due to sensitivity to the smallest changes in initial conditions. Along these lines, suppose (somewhat simplistically) that the patterns of neural firings in the brain correspond to decision states. The idea is that chaos could amplify quantum events, causing a single neuron to fire that would not have fired otherwise. If the brain (a macroscopic object) is also in a chaotic dynamical state, making it sensitive to small disturbances, this additional neural firing, small as it is, would then be further amplified to the point where the brain states would evolve differently than if the neuron had not fired. In turn, these altered neural firings and brain states would carry forward such quantum effects, affecting the outcomes of human choices. There are several objections to this line of argument. First, the presence of chaos in the brain and its operations is an empirical matter that is hotly debated (Freeman and Skarda 1987; Freeman 1991, 2000; Kaneko, Tsuda and Ikegami 1994 pp.
103–189; Vandervert 1997; Diesmann, Gewaltig and Aertsen 1999; Lehnertz 2000; Van Orden, Holden and Turvey 2003 and 2005; Aihara 2008; Rajan, Abbott and Sompolinsky 2010). It should be pointed out, however, that these discussions typically assume SD or Chaos\(_{\lambda}\) as the definition of chaos. All that is really needed for sensitivity to and amplification of quantum effects in the brain is the loss of the principle of superposition found in nonlinear systems. Second, these kinds of sensitivity arguments depend crucially on how quantum mechanics itself and measurements are interpreted, as well as on the status of indeterminism (§4). Third, although in the abstract sensitivity arguments seem to lead to the conclusion that the smallest of effects can be amplified, applying such arguments to concrete physical systems shows that the amplification process may be severely constrained. In the case of the brain, we currently do not know what constraints on amplification exist. An alternative possibility, avoiding many of the difficulties exhibited in the chaos+quantum mechanics approach, is suggested by the research on far-from-equilibrium systems by Ilya Prigogine and his Brussels-Austin Group (Bishop 2004). Their work purports to offer reasons to search for a different type of indeterminism in both the micro- and macrophysical domains. Consider a system of particles. If the particles are distributed uniformly in position in a region of space, the system is said to be in thermodynamic equilibrium (e.g., cream uniformly distributed throughout a cup of coffee). In contrast, if the system is far from equilibrium (nonequilibrium), the particles are arranged so that highly ordered structures might appear (e.g., a cube of ice floating in tea). The following properties characterize nonequilibrium statistical systems: large numbers of particles, a high degree of structure and order, collective behavior, irreversibility, and emergent properties.
The brain possesses all these properties, so the brain can be considered a nonequilibrium system (an equilibrium brain is a dead brain!). Let me quickly sketch a simplified version of the approach to point out why the developments of the Brussels-Austin Group offer an alternative for investigating the connections between physics, consciousness and free will. Conventional approaches in physics describe systems using particle trajectories as a fundamental explanatory element of their models, meaning that the behavior of a model is derivable from the trajectories of the particles composing the model. The equations governing the motion of these particles are reversible with respect to time (they can be run backwards and forwards like a film). When there are too many particles involved to make these types of calculations feasible (as in gases or liquids), coarse-grained averaging procedures are used to develop a statistical picture of how the system behaves rather than focusing on the behavior of individual particles. In contrast, the Brussels-Austin approach views nonequilibrium systems in terms of nonlinear models whose fundamental explanatory elements are distributions; that is to say, the arrangements of the particles are the fundamental explanatory elements, not the individual particles and their trajectories.^[16] The equations governing the behavior of these distributions are generally irreversible with respect to time. In addition, focusing exclusively on distribution functions opens the possibility that macroscopic nonequilibrium models are irreducibly indeterministic, an indeterminism that has nothing to do with ignorance about the system. If so, this would mean probabilities are as much an ontologically fundamental element of the macroscopic world as they are of the microscopic, and are free of the interpretive difficulties found in conventional quantum mechanics.
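The trajectory-versus-distribution contrast can be made concrete with a toy ensemble computation. This sketch is emphatically not the Brussels-Austin formalism itself; it merely uses the logistic map (an assumed stand-in for some nonlinear dynamics) to show that a distribution of many points can settle into a stable shape even though every underlying trajectory is erratic:

```python
import random

def step(x, r=4.0):
    # One iteration of the logistic map; an illustrative stand-in for the dynamics.
    return r * x * (1.0 - x)

random.seed(1)
ensemble = [random.random() for _ in range(50_000)]  # start roughly uniform on (0, 1)

# Push the whole distribution forward instead of tracking any one trajectory.
for _ in range(25):
    ensemble = [step(x) for x in ensemble]

# The ensemble settles toward the map's invariant density, which piles up mass
# near the endpoints of [0, 1]; no single trajectory shows any such stability.
edge = sum(1 for x in ensemble if x < 0.1) / len(ensemble)
middle = sum(1 for x in ensemble if 0.45 <= x < 0.55) / len(ensemble)
print(edge, middle)
```

After a couple dozen iterations the histogram is essentially stationary (an edge bin holds several times the mass of a same-width middle bin), while any individual orbit keeps wandering unpredictably; the distribution, not the trajectory, is the stable explanatory object.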
One important insight of the Brussels-Austin Group’s shift away from trajectories to distributions as fundamental elements is that explanation also shifts from a local context (set of particle trajectories) to a global context (distribution of the entire set of particles). Systems acting as a whole may produce collective effects that are not reducible to a summation of the trajectories and subelements composing the system (Bishop 2004 and 2012). The brain exhibits this type of collective behavior in many circumstances (Engel et al. 1997), and the work of Prigogine and his colleagues gives us another tool for trying to understand that behavior. Moreover, nonlinear nonequilibrium models also exhibit SDIC, so there are a number of possibilities in such approaches for very rich dynamical descriptions of brain operations and cognitive phenomena (e.g., Juarrero 1999; Chemero and Silberstein 2008). Though the Brussels-Austin approach to nonequilibrium statistical mechanics is still speculative and contains some open technical questions, it offers an alternative for exploring the relationship between physics, consciousness and free will as well as pointing to a new possible source for indeterminism to be explored in free will theories. Whether approaches applying chaotic dynamics to understanding the nature of consciousness and free will represent genuine advances remains an open question. For example, if the world is deterministic, then the invocation of SDIC in cognitive dynamics (e.g., Kane 1996) may provide a sophisticated framework for exploring deliberative processes, but would not be sufficient for incompatibilist notions of freedom.
On the other hand, if indeterminism (quantum mechanical or otherwise) is operative in the brain, the challenge still remains for indeterminists such as Robert Kane (1996) to demonstrate that agents can effectively harness such indeterminism by utilizing the exquisite sensitivity provided by nonlinear dynamics to ground and explain free will. Questions about realism and explanation in chaotic dynamics (§5) are relevant here as well as the faithful model assumption. There has also been much recent work applying the perspective of dynamical systems to cognition and action, drawing explicitly on such properties as attractors, bifurcations, SDIC and other denizens of the nonlinear zoo (e.g., van Gelder 1995; Kelso 1995; Port and van Gelder 1995; Juarrero 1999; Tsuda 2001). The basic idea is to deploy the framework of nonlinear dynamics for interpreting neural and cognitive activity as well as complex behavior. It is then hoped that patterns of neural, cognitive and human activity can be explained as the results of nonlinear dynamical processes involving causal interactions and constraints at multiple levels (e.g., neurons, brains, bodies, physical environments). Such approaches are highly suggestive, but also face challenges. For instance, as mentioned in the previous section, the nature of neural and cognitive dynamics is still much disputed. Ultimately, it is an empirical matter whether these dynamics are largely nonlinear or not. Moreover, the explanatory power of dynamical systems approaches relative to rival computational approaches has been challenged (e.g., Clark 1998). Again, questions about realism and explanation in chaotic dynamics (§5) are relevant here as well as the faithful model assumption. Furthermore, Polkinghorne (among others), as previously noted, has proposed interpreting the randomness in macroscopic chaotic models and systems as representing a genuine indeterminism rather than merely a measure of our ignorance (1991, pp. 34–48).
The idea is that such openness or indeterminism is important not only for the free will and action that we experience (pp. 40–1), but also for divine action in the world (e.g., Polkinghorne 1989; §7.1). In essence, the sensitivity to small changes exhibited by the systems and models studied in chaotic dynamics, complexity theory and nonequilibrium statistical mechanics is taken to represent an ontological opening in the physical order for divine activity. However, the sensitivity upon which Polkinghorne relies would also be open to quantum influences, whether deterministic or indeterministic. Furthermore, as mentioned previously in connection with the Brussels-Austin program, much rides on whether a source for indeterminism in chaotic behavior can be found. If Polkinghorne’s suggestion amounts to simply viewing the world as if chaos harbors indeterminism, then it seems that this suggestion doesn’t yield the kind of divine action he seeks. Chaos and nonlinear dynamics are not only rich areas for scientific investigation, but also raise a number of interesting philosophical questions. The majority of these questions, however, remain thoroughly understudied by philosophers. • Aihara, K. (2008), “Chaos in Neurons”, Scholarpedia, 3(5): 1768, available online, referenced on 31 July 2014. • Anderson, M. L. (2010), “Neural Re-use as a Fundamental Organizational Principle of the Brain”, Behavioral and Brain Sciences, 33: 45–313. • Aristotle (1985) [OTH], On the Heavens, in J. Barnes (ed.), The Complete Works of Aristotle: The Revised Oxford Translation, Vol. 1. Princeton: Princeton University Press. • Arnold, V. I. and Avez, A. (1968), Ergodic Problems of Classical Mechanics. Reading, MA: W. A. Benjamin. • Atmanspacher, H. and Scheingraber, H. (1987), “A Fundamental Link between System Theory and Statistical Mechanics”, Foundations of Physics, 17: 939–963. • Avnir, D., Biham, O., Lidar, D. and Malcai, O. (1998), “Is the Geometry of Nature Fractal?” Science, 279: 39–40.
• Banks, J., Brooks, J. Cairns, G. Davis, G. and Stacey, P. (1992), “On Devaney’s Definition of Chaos”, American Mathematical Monthly, 99: 332–334. • Barone, S. R., Kunhardt, E. E., Bentson, J. and Syljuasen (1993), “Newtonian Chaos + Heisenberg Uncertainty = Macroscopic Indeterminacy”, American Journal Of Physics, 61: 423–7. • Bayfield , J. E. and Koch, P. M. (1974), “Multiphoton Ionization of Highly Excited Hydrogen Atoms”, Physical Review Letters, 33: 258–261. • Batterman, R. W. (1993), “Defining Chaos”, Philosophy of Science, 60: 43–66. • Beck, F. and Eccles, J. (1992), “Quantum Aspects of Brain Activity and the Role of Consciousness”, in Proceedings of the National Academy of Science (United States), 89: 11357–11361. • Berry, M. V. (1977), “Regular and Irregular Semiclassical Wavefunctions”, Journal of Physics A, 10: 2083–198. • Berry, M. V. (1987), “Quantum Chaology”, Proceedings of the Royal Society A, 413: 183–2091. • Berry, M. V. (1989), “Quantum Chaology, Not Quantum Chaos,” Physica Scripta, 40: 335–336. • Berry, M. V. (2001), “Chaos and the Semiclassical Limit of Quantum Mechanics (Is the Moon There When Somebody Looks?)”, R. J. Russell, P. Clayton, K. Wegter-McNelly, and J. Polkinghorne (eds.), Quantum Mechanics: Scientific Perspectives on Divine Action, Vatican Observatory: CTNS Publications, pp. 41–54. • Berry, M. V., Balazs, N. L., Tabor, M. and Voros, A. (1979), “Quantum Maps,” Annals of Physics, 122: 26–63. • Berkovitz, J, Frigg, R. and Kronz, F. (2006), “The Ergodic Hierarchy, Randomness and Chaos”, Studies in History and Philosophy of Modern Physics, 37: 661–691. • Bishop, R. C. (2002a), “Chaos, Indeterminism, and Free Will”, in R. Kane (ed.), The Oxford Handbook of Free Will. Oxford: Oxford University Press, pp. 111–124. • Bishop, R. C. (2002b), “Deterministic and Indeterministic Descriptions”, in H. Atmanspacher and R. Bishop (eds.), Between Chance and Choice: Interdisciplinary Perspectives on Determinism. Thorverton: Imprint Academic, pp. 
5–31. • Bishop, R. C. (2003), “On Separating Prediction from Determinism”, Erkenntnis, 58: 169–188. • Bishop, R. C. (2004), “Nonequilibrium Statistical Mechanics Brussels-Austin Style”, Studies in History and Philosophy of Modern Physics, 35: 1–30. • Bishop, R. C. (2005), “Anvil or Onion? Determinism as a Layered Concept”, Erkenntnis, 63: 55–71. • Bishop, R. C. (2008), “What Could Be Worse than the Butterfly Effect?”, Canadian Journal of Philosophy 38: 519–548. • Bishop, R. C. (2010a), “Metaphysical and Epistemological Issues in Complex Systems”, in C. Hooker (ed.) Philosophy of Complex Systems, vol 10, Handbook of the Philosophy of Science, Amsterdam: North Holland. pp. 119–150. • Bishop, R. C. (2010b), “Whence Chemistry? Reductionism and Neoreductionism”, Studies in History and Philosophy of Modern Physics 41: 171–177. • Bishop, R. C. (2012), “Fluid Convection, Constraint and Causation”, Interface Focus, 2: 4–12. • Bricmont , J. (1995), “Science of Chaos or Chaos in Science?”, Physicalia Magazine 17: 159–208. • Bohigas , O., Giannoni, M. J. and Schmit, C. (1984), “Characterization of Chaotic Quantum Spectra and Universality of Level Fluctuation Laws”, Physical Review Letters 52: 1–4. • Bohm, D. (1951), Quantum Mechanics. Englewood Cliffs, NJ: Prentice-Hall. • Bohm, D. and Hiley, B. J. (1993), The Undivided Universe. New York: Routledge. • Bokulich, A. (2008), Reexamining the Quantum-Classical Relation: Beyond Reductionism and Pluralism. Cambridge: Cambridge University Press. • Cartwright, N. (1999), The Dappled World: A study of the Boundaries of Science. Cambridge: Cambridge University Press. • Casati, G., Chirikov, B. V., Izrailev, F. M. and Ford, J. (1979), “Stochastic Behavior of a Quantum Pendulum Under a Periodic perturbation”, in G. Casati and J. Ford (eds.) Stochastic Behavior in Classical and Quantum Hamiltonian Systems. Lecture Notes in Physics, Vol. 93. Berlin: Springer. pp. 334–352. • Casati, G., Chirikov, B. V. and Shepelyanski, D. 
(1984), “Classical Billiards and Double-slit Quantum Interference”, Physical Review A, 72: 032111. • Casati, G. and Prosen, T. (2005), “Quantum Limitations for Chaotic Excitation of the Hydrogen Atom in a Monochromatic Field”, Physical Review Letters, 53: 2525–2528. • Casati, G., Valz-Gris, F. and Guarneri, I. (1980) , “On the Connection Between Quantization of Nonintegrable Systems and Statistical Theory of Spectra”, Lettere Al Nuovo Cimento Series 2, 28: • Chemero A. and Silberstein M. (2008), “After the Philosophy of Mind: Replacing Scholasticism with Science”, Philosophy of Science, 75: 1–27. • Chirikov, B. V., Izrailev, F. M. and Shepelyanski, D. (1988), “Quantum Chaos: Localization vs. Ergodicity”, Physica D, 33: 77–88. • Clark, A. (1998), “Time and Mind”, Journal of Philosophy, 95: 354–376. • Compton, A.(1935), The Freedom of Man. New Haven: Yale University Press. • Davies, E. B. (1976), Quantum Theory of Open Systems. Waltham, MA: Academic Press. • Devaney, R. (1989), “Dynamics of Simple Maps”, Proceedings of Symposia in Applied Mathematics, 39: 1–24. • Diesmann M., Gewaltig M-O., Aertsen A. (1999), “Stable Propagation of Synchronous Spiking in Cortical Neural Networks”, Nature, 402: 529–533. • Dingle, R. (1973), Asymptotic Expansions: Their Derivation and Interpretation. New York: Academic Press. • Duhem, P. (1982), The Aim and Structure of Physical Theory. Princeton: Princeton University Press. • Dupré, J. (1993), The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Cambridge, MA: Harvard University Press. • Eccles, J. (1970), Facing Reality. New York: Springer. • Engel, A., Roelfsema, P., König, P. and Singer, W. (1997), “Neurophysiological Relevance of Time”, in H. Atmanspacher and E. Ruhnau (eds.), Time, Temporality, Now: Experiencing Time and Concepts of Time in an Interdisciplinary Perspective. Berlin: Springer, pp. 133–157. • Filikhin, I., Matinyan, S. and Vlahovic, B. 
(2011), “Disappearance of Quantum Chaos in Coupled Chaotic Quantum Dots”, Physical Letters A, 375: 620–623. • Fishman, S., Grempel, D. R. and Prange, R. E. (1982), “Chaos, Quantum Recurrences, and Anderson Localization” Physical Review Letters, 49: 509–512. • Ford, J. and Mantica, G. (1992), “Does Quantum Mechanics Obey the Correspondence Principle? Is It Complete?” American Journal of Physics, 60: 1086–1097. • Fox, R. F. (1990), “Chaos, Molecular Fluctuations, and the Correspondence Limit”, Physical Review A, 41: 2969–2976. • Freeman, W. J. (1991), “Nonlinear Dynamics in Olfactory Information Processing”, in J. Davis and H. Eichenbaum (eds.) Olfaction. Cambridge, MA: MIT Press. • Freeman, W. J. (2000), How Brains Make Up Their Minds. New York: Columbia University Press. • Freeman, W. J. and Skarda, C. A. (1987), “How Brains Make Chaos in Order to Make Sense of the World”, Behavioral and Brain Sciences, 10: 161–195. • Friedrichs, K. (1955), “Asymptotic Phenomena in Mathematical Physics”, Bulletin of the American Mathematics Society, 61: 485–504. • van Gelder, T. (1995), “What Might Cognition Be if not Computation?”, Journal of Philosophy, 92: 345–381. • Guhr, T., Müller-Groeling, A. and Weidenmüller, H. A. (1998), “Random-matrix Theories in Quantum Physics: Common Concepts”, Physics Reports, 299: 189–425. • Gutzwiller, M. C. (1971), “Periodic Orbits and Classical Quantization Conditions”, Journal of Mathematical Physics, 91: 343–358. • Gutzwiller, M. C. (1992), “Quantum Chaos”, Scientific American, 266 (January): 78–84. • Hadamard, J. (1922), Lectures on Cauchy’s Problem in Linear Partial Differential Equations, New Haven: Yale University Press. • Hilborn, R. C. (1994), Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers, Oxford: Oxford University Press. • Hobbs, J. (1991), “Chaos and Indeterminism”, Canadian Journal of Philosophy, 21: 141–164. • Hunt, B. R. and Yorke, J. A. (1993), “Maxwell on Chaos”, Nonlinear Science Today, 3(1): 1–4. 
• Jensen, R. V. (1987), “Classical Chaos”, American Scientist, 75 (March-April): 168–181. • Jensen, R. V. (1992), “Quantum Chaos”, Nature, 355: 311–318. • Jones, R. (1990), “Determinism in Deterministic Chaos”, in A. Fine, M. Forbes, and L. Wessels (eds.) PSA 1990, Volume 2, East Lansing: Philosophy of Science Association, pp. 537–549. • Juarrero, A. (1999), Dynamics in Action: Intentional Behavior as a Complex System, Cambridge, MA: MIT Press. • Judd, K., and Smith, L. (2001), “Indistinguishable States I: Perfect Model Scenario”, Physica D, 151: 125–141. • Judd, K., and Smith, L. (2004), “Indistinguishable States II: Imperfect Model Scenarios”, Physica D, 196: 224–242. • Kane, R. (1996), The Significance of Free Will, Oxford: Oxford University Press. • Kaneko, K. and Tsuda, I. (2000), Complex Systems: Chaos and Beyond, Berlin: Springer. • Kaneko, K., Tsuda, I. and Ikegami, T. (eds.), (1994), “Constructive Complexity and Artificial Reality: Proceedings of the Oji International Seminar on Complex Systems—from Complex Dynamical Systems to Sciences of Artificial Reality”, Physica D, 75: 1–448. • Kellert, S. (1993), In the Wake of Chaos, Chicago: University of Chicago Press. • Kelso, J. A. S. (1995), Dynamical Patterns: The Self-Organization of Brain and Behavior, Cambridge, MA: MIT Press. • King, C. C. (1995), “Fractal Neurodynamics and Quantum Chaos: Resolving the Mind-Brain Paradox Through Novel Biophysics”, in E. MacCormack and M. I. Stamenov (eds.) Fractals of Brain, Fractals of Mind. Amsterdam and Philadelphia: John Benjamins. • Koperski, J. (1998), “Models, Confirmation and Chaos”, Philosophy of Science, 65: 624–648. • Koperski, J. (2001), “Has Chaos Been Explained?”, British Journal for the Philosophy of Science, 52: 683–700. • Kronz, F. (1998), “Nonseparability and Quantum Chaos”, Philosophy of Science, 65: 50–75. • Kronz, F. (2000), “Chaos in a Model of an Open Quantum System”, Philosophy of Science, 67 (Proceedings): S446–S453. • Kuhn, T.
(1996), The Structure of Scientific Revolutions, Chicago: University of Chicago Press, 3rd edition. • Laymon, R. (1989), “Cartwright and the Lying Laws of Physics”, Journal of Philosophy, 86: 353–372. • Lehnertz, K., Elger, C., Arnhold, J. and Grassberger, P. (eds.) (2000), Chaos in Brain?: Proceedings of the Workshop, Singapore: World Scientific. • Lorenz, E. N. (1963), “Deterministic Nonperiodic Flow”, Journal of Atmospheric Science, 20: 131–40. • Lorenz, E. N. (1965), “A Study of the Predictability of a 28-Variable Atmospheric Model”, Tellus, 17: 321–33. • Mar, G. and Patrick, G. (1991), “Pattern and Chaos: New Images in the Semantics of Paradox”, Noûs, 25: 659–693. • Maxwell, J. C. [1860] (1965), “Illustrations of the dynamical theory of gases”, Philosophical Magazine, in W. D. Nivens (ed.) The Scientific Papers of James Clerk Maxwell, New York: Dover, pp. • Maxwell, J. C. [1876] (1992), Matter and Motion, New York: Dover. • May, R. M. (1976), “Simple Mathematical Models with very Complicated Dynamics”, Nature, 261: 459–467. • Omnès, R. (1994), The Interpretation of Quantum Mechanics, Princeton, NJ: Princeton University Press. • Oseledec, V. I. (1969), “A Multiplicative Ergodic Theorem. Lyapunov Characteristic Numbers for Dynamical Systems”, Transactions of the Moscow Mathematical Society, 19: 197–232. • Ott, E. (2002), Chaos in Dynamical Systems, Cambridge: Cambridge University Press, 2nd edition. • Packard, N. H., Crutchfield, J. P., Farmer, J. D., and Shaw, R. S. (1980), “Geometry from a Time Series”, Physical Review Letters, 45: 712–716. • Pauli, W. (1993), “Allgemeinen Prinzipien der Wellenmechanik”, in H. Geiger and K. Scheel (eds.), Handbuch der Physik, Vol. 24, Berlin: Springer Verlag, pp. 83–272. • Penrose, R. (1991), The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics, New York: Penguin Books. • Penrose, R. (1994), Shadows of the Mind, Oxford: Oxford University Press. • Penrose, R. (1997).
The Large, the Small and the Human Mind, Cambridge: Cambridge University Press. • Poincaré, H. (1913), The Foundations of Science: Science and Method, Lancaster: The Science Press. • Polkinghorne, J. (1989), Science and Creation: The Search for Understanding, Boston: Shambhala Publications. • Polkinghorne, J. (1991), Reason and Reality: The Relationship between Science and Theology, Valley Forge, PA: Trinity Press. • Ponomarenko, L. A., Schedin, F., Katsnelson, M. I., Yang, R., Hill, E. W., Novoselov, K. S. and Geim, K. A. (2008), “Chaotic Dirac Billiard in Graphene Quantum Dots”, Science 320: 356–358. • Popper, K. (1950), “Indeterminism in Quantum Physics and in Classical Physics”, The British Journal for the Philosophy of Science, 1: 117–133. • Port, R. and van Gelder, T. (eds.) (1995), Mind as Motion, Cambridge, MA: MIT Press. • Rajan, K., Abbott, L. F., and Sompolinsky, H. (2010), “Stimulus-dependent Suppression of Chaos in Recurrent Neural Networks”, Physical Review E, 82: 011903. • Primas, H. (1998), “Emergence in Exact Natural Sciences”, Acta Polytechnica Scandinavia, 91: 83–98. • Redhead, M. G. L. (1980), “Models in Physics”, British Journal for the Philosophy of Science, 31: 145–163. • Robinson, C. (1995), Dynamical Systems: Stability, Symbol Dynamics and Chaos, London: CRC Press. • Rueger, A. and Sharp, D. (1996), “Simple Theories of a Messy World: Truth and Explanatory Power in Nonlinear Dynamics”, British Journal for the Philosophy of Science, 47: 93–112. • Ruhla, C. (1992), “Poincaré, or Deterministic Chaos (Sensitivity to Initial Conditions)”, in C. Ruhla, The Physics of Chance: From Blaise Pascal to Niels Bohr, translated from the French by G. Barton, Oxford: Oxford University Press. • St. Denis, P. and Patrick, G. (1997), “Fractal Images of Formal Systems”, Journal of Philosophical Logic, 26: 181–222. • Shaw, R. S. (1981), “Modeling Chaotic Systems”, in H. Haken (ed.) , Chaos and Order in Nature, New York: Springer, pp. 218–231. • Shenker, O. 
(1994), “Fractal Geometry Is not the Geometry of Nature”, Studies in the History and Philosophy of Modern Physics, 25: 147–82. • Sklar, L. (1995), Physics and Chance: Philosophical Issues in the Foundations of Statistical Mechanics, Cambridge: Cambridge University Press. • Smart, J. (1963), Philosophy and Scientific Realism, New York: The Humanities Press. • Smith, L. A. (1992), “Identification and Prediction of Low Dimensional Dynamics”, Physica D, 58: 50–76. • Smith, L. A. (2000), “Disentangling Uncertainty and Error: On the Predictability of Nonlinear Systems”, in A. Mees (ed.) Nonlinear Dynamics and Statistics, Boston: Birkhauser, pp. 31–64. • Smith, L. A. (2003), “Predictability Past Predictability Present”, in Seminar on Predictability of Weather and Climate, Reading, UK: ECMWF Proceedings, pp. 219–242. • Smith, L. A. (2007), Chaos: A Very Short Introduction, Oxford: Oxford University Press. • Smith, L. A., Ziehmann, C. and Fraedrich, K. (1999), “Uncertainty Dynamics and Predictability in Chaotic Systems”, Quarterly Journal of the Royal Meteorological Society, 125: 2855–86. • Smith, P. (1998), Explaining Chaos, Cambridge: Cambridge University Press. • Stapp, H. (1993) Mind, Matter, and Quantum Mechanics. Berlin: Springer. • Stone, M. A. (1989), “Chaos, Prediction and Laplacian Determinism”, American Philosophical Quarterly, 26: 123–31. • Takens, F. (1981), “Detecting Strange Attractors in Turbulence”, in D. Rand and L.-S. Young (eds.), Lecture Notes in Mathematics, Vol. 898. Berlin: Springer, pp. 366–381. • Thompson, P. D. (1957), “Uncertainty of Initial State as a Factor in the Predictability of Large Scale Atmospheric Flow Patterns”, Tellus, 9: 275–295. • Tomsovic, S. and Heller, E. J. (1993), “Long-time Semiclassical Dynamics of Chaos: The Stadium Billiard”, Physical Review E, 47: 282–300. • Tsuda, I (2001). “Towards an Interpretation of Dynamic Neural Activity in Terms of Chaotic Dynamical Systems”, Behavioral and Brain Sciences, 24: 793–847. 
• Vandervert, L. (ed.) (1997), Understanding Tomorrow’s Mind: Advances in Chaos Theory, Quantum Theory, and Consciousness in Psychology, New York: Journal of Mind and Behavior, Volume 18, Numbers • Van Orden, G., Holden, J. and Turvey, M. T. (2003), “Self-Organization of Cognitive Performance”, Journal of Experimental Psychology: General, 132: 331–351. • Van Orden, G., Holden, J. and Turvey, M. T. (2005), “Human Cognition and 1/\(f\) Scaling”, Journal of Experimental Psychology: General, 134: 117–123. • Wallace, D. (2012), The Emergent Multiverse: Quantum Theory according to the Everett Interpretation, Oxford: Oxford University Press. • Weigert, S. (1990), “The Configurational Quantum Cat Map”, Zeitschrift für Physik B, 80: 3–4. • Weigert, S. (1992), “The Problem of Quantum Integrability”, Physica D, 56: 107–119. • Weigert, S. (1993), “Quantum Chaos in the Configurational Quantum Cat Map”, Physical Review A, 48: 1780–1798. • Wigner, E. P. (1951), “On the Statistical Distribution of the Widths and Spacings of Nuclear Resonance Levels”, Mathematical Proceedings of the Cambridge Philosophical Society, 47: 790–798. • Wimsatt, W. C. (1987), “False Models as Means to Truer Theories”, in M. Nitecki and A. Hoffmann (eds.), Neutral Models in Biology, New York: Oxford University Press, pp. 3–55. • Zheng, Z., Misra, B. and Atmanspacher, H. (2003), “Observer-Dependence of Chaos Under Lorentz and Rindler Transformations”, International Journal of Theoretical Physics, 42: 869–878. • Ziehmann, C., Smith, L. A., and Kurths, J. (2000), “Localized Lyapunov Exponents and the Prediction of Predictability”, Physics Letters A, 271: 237–51. • Zhilinskií, B. I. (2001), “Symmetry, Invariants, and Topology, Vol. II: Symmetry, Invariants, and Topology in Molecular Models”, Physics Reports, 341: 85–171. • Zurek, W. H. (1991), “Quantum Measurements and the Environment-Induced Transition from Quantum to Classical”, in A. Ashtekar and J.
Stachel (eds.), Conceptual Problems of Quantum Gravity, Boston: Birkhäuser, pp. 43–62.
27 Lines on the Clebsch Surface In algebraic geometry, there is a remarkable theorem that states that any smooth cubic surface contains exactly 27 lines. In collaboration with postdoc Nathan Fieldsteel, we managed to 3D print the Clebsch surface, a cubic surface on which all 27 lines are real. Using Sage to generate the surface and Blender to make it look a little more refined, we used the University of Kentucky Math Lab's Form 2 3D printer to print it. After a couple of test prints, we finally got a fantastic model to hold and display. Photos are courtesy of Nathan Fieldsteel and Dave Jensen. The Twisted Cubic The twisted cubic can be expressed parametrically as \( t \rightarrow (t, t^2, t^3) \). Considered as a curve in \( \mathbb{R}^3 \), projecting the twisted cubic along each coordinate direction yields a different plane polynomial curve. The tangent variety of the twisted cubic is the union of the lines tangent to the twisted cubic. The designer of this model, Nathan Fieldsteel, has a fantastic blog post here that describes what this model is visualizing. We printed two versions of the tangent variety. The first is solid, whereas the second is skeletal and accentuates the tangent lines more. Calculus III For many students, the higher dimensionality of multivariate calculus can be a challenge to handle. By creating interactive and tangible objects related to some of these concepts, the goal is to make them more accessible and to provide some intuition for students. Because much of Calculus III begins in \( \mathbb{R}^3 \), many of the surfaces can actually be 3D-printed. Photos are courtesy of Christina Osborne.
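The three coordinate-plane projections of the twisted cubic mentioned above can be checked numerically on sample points; here is a quick Python sketch (the sample window of t-values is an arbitrary choice for illustration):

```python
def twisted_cubic(t):
    # The parametrization t -> (t, t^2, t^3) from the post.
    return (t, t ** 2, t ** 3)

# Sample the curve over an arbitrary window of parameter values.
points = [twisted_cubic(i / 10) for i in range(-20, 21)]

# Each coordinate-plane projection is a different plane curve:
#   onto (x, y): the parabola          y = x^2
#   onto (x, z): the cubic             z = x^3
#   onto (y, z): the semicubical curve z^2 = y^3  (cuspidal at the origin)
for x, y, z in points:
    assert abs(y - x ** 2) < 1e-9
    assert abs(z - x ** 3) < 1e-9
    assert abs(z ** 2 - y ** 3) < 1e-9

# The tangent lines (whose union is the tangent variety) are s -> p(t) + s * p'(t),
# with tangent direction p'(t) = (1, 2t, 3t^2).
print("all three projections verified on", len(points), "sample points")
```

So the same space curve casts three genuinely different shadows, which is exactly what the printed model makes tangible.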
Project Euler is a fun way to become a better geek | Swizec Teller

By 1772, Leonhard Euler had proved that 2,147,...

Yesterday I was a bit bored at the Theoretical Basis of Computer Science class and the obvious solution was to try doing a bit of coding on an algorithm I'm trying to develop that aims to learn the proper syllabification for a language by reading it. Because I'm all sorts of cool I want to develop this thing in Clojure. Naturally, the moment I started, my severe lack of proficiency in Clojure started showing through and I found myself spending more time online than coding. Somehow I ended up on Stack Overflow, where some guy suggested to another guy that they go through the Project Euler problem set to get better at Clojure. And so I did just that. Have only managed to solve the first three problems so far, but by god this crap is fun. Maybe I'm just being way too dorky, but it's #WINNING! Me being me, I've decided to motivate myself to solving these by posting a solution on this blog every time I solve a problem. Here are the first three, probably less than elegant, but they're mine and I love them!

```clojure
;; If we list all the natural numbers below 10 that are multiples of 3 or 5,
;; we get 3, 5, 6 and 9. The sum of these multiples is 23.
;; Find the sum of all the multiples of 3 or 5 below 1000.

(defn nums [max n]
  (loop [cnt (+ n n) acc [n]]
    (if (>= cnt max)
      acc
      (recur (+ cnt n) (concat acc [cnt])))))

(defn answer [max]
  (println (reduce + (set (concat (nums max 3) (nums max 5))))))

(answer 1000)
```

```clojure
;; Each new term in the Fibonacci sequence is generated by adding the
;; previous two terms. By starting with 1 and 2, the first 10 terms will be:
;; 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
;; By considering the terms in the Fibonacci sequence whose values do not
;; exceed four million, find the sum of the even-valued terms.

(defn term [p-2 p-1]
  (+ p-2 p-1))

(defn fib [max p-2 p-1 acc]
  (if (>= p-1 max)
    acc
    (fib max p-1 (term p-2 p-1) (concat acc [p-1]))))

(defn fibonacci [max p-2 p-1]
  (fib max p-2 p-1 [1]))

(defn bla [a] ; unused helper
  (if (even? a) a 0))

(defn answer [max]
  (reduce + (map #(if (even? %1) %1 0) (fibonacci max 1 2))))

(println (answer 4000000))
```

```clojure
;; The prime factors of 13195 are 5, 7, 13 and 29.
;; What is the largest prime factor of the number 600851475143?

(defn any? [l]
  (reduce #(or %1 %2) l))

(defn prime? [n known]
  (loop [cnt (dec (count known)) acc []]
    (if (< cnt 0)
      (not (any? acc))
      (recur (dec cnt) (concat acc [(zero? (mod n (nth known cnt)))])))))

(defn next-prime [primes]
  (let [n  (inc (count primes))
        lk (if (even? (inc (last primes)))
             (+ 2 (last primes))
             (inc (last primes)))]
    (loop [cnt lk p primes]
      (if (>= (count p) n)
        (last p)
        (recur (+ cnt 2) (if (prime? cnt p) (concat p [cnt]) p))))))

(memoize next-prime) ; note: this does not rebind next-prime, so it has no effect

(defn n-primes [n]
  (loop [cnt 1 p [2]]
    (if (>= cnt n)
      p
      (recur (inc cnt) (concat p [(next-prime p)])))))

;; (parentheses fixed here so factor recurses with the primes list p)
(defn factor [n factors primes]
  (if (== n 1)
    factors
    (loop [p primes]
      (if (== 0 (mod n (last p)))
        (factor (/ n (last p)) (concat [(last p)] factors) p)
        (recur (concat p [(next-prime p)]))))))

(println (factor 600851475143 [] (n-primes 1)))
```

The third one took particularly long to figure out because I was going about it all wrong. What I later realized was that I don't at all have to spend time finding the next prime for the sequence of already known primes, because factorization doesn't have to be that complicated: I could just have divided through the number, and anything it divided by would already be a guaranteed prime.

Published on March 8th, 2011 in Clojure, Computer Science, Fibonacci number, Mathematics, Prime factor, Prime number, Project Euler, Technical
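For comparison, and as a sanity check on the answers, here is the first problem translated to Python (my addition, not Swizec's):

```python
# Project Euler problem 1: sum of all multiples of 3 or 5 below a limit.
def sum_multiples(limit):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(sum_multiples(10))    # the worked example from the problem: 23
print(sum_multiples(1000))  # → 233168
```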
Existence theorem for solitary waves on lattices
In this article we give an existence theorem for localized travelling wave solutions on one-dimensional lattices with Hamiltonian \( H = \sum_{n} \left( \tfrac{1}{2} p_n^2 + V(q_{n+1} - q_n) \right) \), where V(·) is the potential energy due to nearest-neighbour interactions. Until now, apart from rare integrable lattices like the Toda lattice \( V(f) = a b^{-1} (e^{-bf} + bf - 1) \), the only evidence for the existence of such solutions has been numerical. Our result in particular recovers existence of solitary waves in the Toda lattice; establishes for the first time existence of solitary waves in the (nonintegrable) cubic and quartic lattices \( V(f) = \tfrac{1}{2} f^2 + \tfrac{1}{3} a f^3 \), \( V(f) = \tfrac{1}{2} f^2 + \tfrac{1}{4} b f^4 \), thereby confirming the numerical findings in [1] and shedding new light on the recurrence phenomena in these systems observed first by Fermi, Pasta and Ulam [2]; and shows that, contrary to widespread belief, the presence of exact solitary waves is not a peculiarity of integrable systems, but "generic" in this class of nonlinear lattices. The approach presented here is new and quite general, and should also be applicable to other forms of lattice equations: the travelling waves are sought as minimisers of a naturally associated variational problem (obtained via Hamilton's principle), and existence of minimisers is then established using modern methods in the calculus of variations (the concentration-compactness principle of P.-L. Lions [3]). © 1994 Springer-Verlag.
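To make the three potentials concrete, here is a small numerical sketch (my own illustration with a = b = 1, not taken from the paper) checking that each potential vanishes at f = 0 and reduces to the harmonic potential f²/2 for small relative displacements, so the lattices differ only in their anharmonic terms:

```python
import math

# Illustrative values a = b = 1: the three interaction potentials from
# the abstract, as functions of the relative displacement f.
def toda(f, a=1.0, b=1.0):
    return (a / b) * (math.exp(-b * f) + b * f - 1)

def cubic(f, a=1.0):
    return 0.5 * f**2 + (a / 3) * f**3

def quartic(f, b=1.0):
    return 0.5 * f**2 + (b / 4) * f**4

# Each potential vanishes at f = 0 and agrees with the harmonic
# potential f^2/2 to leading order for small f.
for V in (toda, cubic, quartic):
    assert abs(V(0.0)) < 1e-12
    f = 1e-4
    assert abs(V(f) - 0.5 * f**2) < 1e-9
print("harmonic limit checked")
```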
Suggested Solutions: 2019 GCE O Level A Math Paper 2 - Emily Learning
Here, you will find the suggested answers for the 2019 GCE O Level A Math exam Paper 2. Click on the link below to go to the question directly.
Question 7 (Absolute functions are no longer tested in O Level A Math exams from 2021 onwards)
Question 10 (Sum and product of roots of a quadratic equation is no longer tested from 2021 onwards)
Question 1 on Trigonometry (differentiation and integration as the reverse of differentiation)
Question 2 on Trigonometry: This question involves proving identities, then using the identity to manipulate an equation and solve the trigonometric equation.
Question 3 on Polynomials (factor theorem and partial fractions)
Question 4 on Logarithms
Question 5 on Integration
Question 6 on Coordinate geometry of circles
Question 7 on Absolute Functions (no longer tested in A Math exams from 2021 onwards)
Question 8 on Linear Law
Question 9 on Finding the equation of a tangent using differentiation, and finding the area under two graphs using integration
Question 10 on Quadratic functions (sum and product of roots; no longer tested in O Level A Math from 2021 onwards)
Question 11 on Trigonometry and the R-formula, and applications of differentiation to maximum and minimum problems
Learn On-demand – A Math Courses
If you want to learn a particular topic in detail, check out our on-demand O Level A Math courses here.
Suggested Answers for other A Math Papers:
Suggested Answers for O Level Add Math Specimen Paper 1
Suggested Answers for O Level Add Math Specimen Paper 2
Suggested Answers for O Level Add Math 2020 Paper 1
Suggested Answers for O Level Add Math 2020 Paper 2
Suggested Answers for O Level Add Math 2019 Paper 1
Suggested Answers for O Level Add Math 2019 Paper 2
Suggested Answers for N Level Add Math Specimen Paper 1
DES vs AES | Top 9 Amazing Differences You Should Learn
Updated March 14, 2023
Difference Between DES and AES
In this topic, we will learn about the difference between DES and AES, with an introduction, key differences, and a head-to-head comparison table provided below. DES (Data Encryption Standard) and AES (Advanced Encryption Standard) are symmetric block ciphers. Before comparing DES and AES, let's understand what a block cipher is. A block cipher is a cryptographic algorithm that encrypts plain text to produce encrypted text (also called ciphertext), in which a cryptographic key is applied to a whole block of data rather than to individual bits. The algorithm always works on fixed-length blocks using a shared secret key. The same secret key is used both to encrypt and decrypt the text; it is shared with both parties, protecting the data from external attacks. What is DES? It is a symmetric block cipher that was introduced by the National Institute of Standards and Technology (NIST) in 1977. It is an implementation of the Feistel structure (a multi-round cipher that divides the whole text into two parts and works on each part individually). It operates on 64-bit input blocks and uses a 56-bit key to produce 64-bit ciphertext. In DES, each 64-bit block of plain text is divided into two 32-bit halves before processing. The halves undergo 16 rounds of operations, after which a final permutation is applied to obtain the 64-bit ciphertext. The functions involved in the rounds are expansion, permutation, substitution, and an XOR operation with a round key. Decryption follows the same process as encryption, but with the round keys applied in reverse order.
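To see why Feistel decryption is "the same process in reverse order", here is a deliberately toy 2-round Feistel cipher in Python. This illustrates only the structure DES shares; the round function, key size and 16-bit block size here are invented and bear no relation to real DES:

```python
# Toy 2-round Feistel network on 16-bit blocks (illustration only).
def round_fn(half, key):
    return (half * 31 + key) & 0xFF  # stand-in for DES's f-function

def encrypt(block, round_keys):
    left, right = block >> 8, block & 0xFF
    for k in round_keys:
        left, right = right, left ^ round_fn(right, k)
    return (left << 8) | right

def decrypt(block, round_keys):
    # Same structure, round keys applied in reverse order.
    left, right = block >> 8, block & 0xFF
    for k in reversed(round_keys):
        left, right = right ^ round_fn(left, k), left
    return (left << 8) | right

keys = [0x3A, 0xC5]
ct = encrypt(0xBEEF, keys)
assert decrypt(ct, keys) == 0xBEEF
print(f"0xBEEF -> {ct:#06x} -> 0xBEEF")
```

Note that decryption never inverts `round_fn` itself; the XOR structure makes the network invertible even though the round function is not, which is why DES can use a non-invertible f-function.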
Although DES made encryption widely available, it came to be considered too insecure for highly confidential government data because of its small key. To overcome this, triple DES was introduced, but it was not considered a good algorithm either, as it turned out to be very slow at encrypting data. Even a small change in the input text produces a completely different ciphertext in DES. What is AES? AES came into the picture after triple DES was found to be slow. It is one of the most widely used symmetric block cipher algorithms today. It was introduced by the National Institute of Standards and Technology in 2001 and is at least six times faster than triple DES. Unlike DES, it works on the principle of substitution and permutation, and it follows an iterative approach. AES works on bytes rather than bits. In AES, the plain text block is 128 bits, equivalent to 16 bytes, which forms a 4×4 matrix of bytes (4 rows and 4 columns). With a 128-bit key, AES then performs 10 rounds. Each round has its subprocesses: 9 rounds consist of SubBytes, ShiftRows, MixColumns and AddRoundKey, and the 10th round includes all of the above operations except MixColumns, producing the 128-bit ciphertext. In AES the number of rounds depends on the size of the key: 10 rounds for a 128-bit key, 12 rounds for a 192-bit key and 14 rounds for a 256-bit key. AES is used in many protocols, such as TLS and SSL, and in various modern applications that require strong encryption. AES is also used in hardware that requires high throughput. Head-to-Head Comparison Between DES and AES (Infographics) Below are the top 9 differences between DES and AES. Key Differences Between DES and AES Let us discuss some of the major differences between DES and AES. 1. The main difference between DES and AES is the process of encrypting.
In DES, the plaintext block is divided into two halves before further processing, whereas in AES there is no such division and the whole block is processed together to produce the ciphertext. 2. AES is much faster than DES and can encrypt large files in a fraction of the time DES takes. 3. Because of the small key size used in DES, it is considered less secure than AES. DES is considered more vulnerable to brute-force attacks, whereas AES has not encountered any serious practical attacks. 4. Evaluated on flexibility, AES is comparatively more flexible than DES, as it allows keys of various lengths (128, 192 or 256 bits), whereas DES uses a fixed 56-bit key on 64-bit blocks. 5. The functions used in DES rounds are expansion, permutation, substitution and an XOR operation with the round key, whereas the functions used in AES rounds are SubBytes, ShiftRows, MixColumns and AddRoundKey. 6. AES is efficient in both hardware and software implementations, unlike DES, which was initially efficient only in hardware.
DES vs AES Comparison Table
Below is the topmost comparison between DES and AES:

Basis of Comparison | DES | AES
Developed | 1977 | 2001
Full form | Data Encryption Standard | Advanced Encryption Standard
Principle | Feistel structure | Substitution and permutation
Plaintext block | 64 bits | 128 bits
Ciphertext block | 64 bits | 128 bits
Key length | 56 bits | 128, 192 or 256 bits
Rounds | Fixed: 16 rounds | Variable by key size: 10 (128-bit), 12 (192-bit), 14 (256-bit)
Security | Less secure; hardly used now | Much more secure; widely used today
Speed | Comparatively slower | Faster than DES

Both DES and AES are used to encrypt data and are useful in their own way. AES came as the successor of DES to overcome its drawbacks. AES is also accepted by the U.S. government as a reliable algorithm to secure classified information. Although DES made great contributions in the field of data security, it has now been replaced by AES in areas requiring high security.
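The security gap in the table comes down to key length. A couple of lines of Python (my own back-of-the-envelope arithmetic, not from the article) make the brute-force numbers concrete:

```python
# Brute-force keyspace sizes behind the security comparison.
des_keys = 2**56          # effective DES key length
aes_min_keys = 2**128     # smallest AES key length

print(f"DES keyspace: {des_keys:.2e}")        # ≈ 7.21e16
print(f"AES keyspace: {aes_min_keys:.2e}")    # ≈ 3.40e38
print(f"ratio: 2^72 = {aes_min_keys // des_keys:.2e}")
```

Even the smallest AES key gives 2^72 times as many keys to search as DES, which is why exhaustive search is feasible against DES but not against AES.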
Reinforcing Key Stage 5 Maths Teacher Guide These are a series of eleven videos with accompanying question sheets and solutions, produced by MSP Wales in Summer 2020, to help students transition from GCSE to A-level maths. A: Problem Solving B: Reinforcing AS-level Maths B1: Algebra (25 minutes) Surds, indices, factorising, algebraic fractions, completing the square, remainder and factor theorems, simultaneous equations B2: Coordinate Geometry (35 minutes) Midpoints, distance, gradient, parallel and perpendicular lines, equation of line given gradient and a point, intersection a quadratic and line B3: Algebraic Proof (50 minutes) Worked examples of deductive algebraic proofs, including using the method of exhaustion. B4: Differential Calculus (45 minutes) Differentiating simple expressions, equation of a tangent to a curve, finding and identifying stationary points, differentiation from first principles. B5: Integral Calculus (25 minutes) Integrating simple expressions, finding equation of a curve from the gradient function and a point, finding the area under a graph. B6: Introduction to Applied Maths (40 minutes) Statistics: variance and standard deviation, Venn diagrams, distributions, hypothesis testing; Mechanics: vectors, motion equations with constant acceleration, Newton’s second law, planar motion C: Tasting Further Mathematics C1: FM Pure Maths taster (55 minutes) Manipulating matrices, transformations using matrices, summing series, Euler’s formula. C2: Complex number taster (45 minutes) The idea of i =√(-1), the Argand diagram and manipulating complex expressions in Cartesian x+iy form C3: FM Applied Maths taster (35 minutes) Regression lines and collisions Any questions or feedback, please contact rhgmc-mspw@swansea.ac.uk
What is the least number of solid metallic spheres of 6 cm diameter that should be melted and recast to form a solid metal right circular cone whose height is 135 cm and diameter is 4 cm?
Volume of a sphere
• We are given solid metallic spheres of 6 cm diameter that are melted and recast to form a solid metal right circular cone of height 135 cm and diameter 4 cm.
• We have to find the least number of such spheres.
Step 1 of 1:
Let the number of solid metallic spheres be n.
Volume of one sphere: \( \frac{4}{3}\pi (3)^3 = 36\pi \ \text{cm}^3 \) (radius = 6/2 = 3 cm).
Volume of the cone: \( \frac{1}{3}\pi (2)^2 (135) = 180\pi \ \text{cm}^3 \) (radius = 4/2 = 2 cm).
The melted volume must equal the cone's volume: \( n \cdot 36\pi = 180\pi \Rightarrow n = 5 \).
So, the least number of spheres needed to form the cone is 5.
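A quick Python check of the arithmetic above:

```python
import math

r_sphere = 6 / 2                       # radius from 6 cm diameter
r_cone, h_cone = 4 / 2, 135            # radius from 4 cm diameter, height

v_sphere = (4 / 3) * math.pi * r_sphere**3        # 36π cm³
v_cone = (1 / 3) * math.pi * r_cone**2 * h_cone   # 180π cm³

n = v_cone / v_sphere
print(round(n))  # → 5
```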
Complex Analysis by Christian Berg
Publisher: Kobenhavns Universitet, 2012
Number of pages: 192
From the table of contents: Introduction; Holomorphic functions; Contour integrals and primitives; The theorems of Cauchy; Applications of Cauchy's integral formula; Argument, Logarithm, Powers; Zeros and isolated singularities; The calculus of residues; The maximum modulus principle; Moebius transformations.
Download or read it online for free here: Download link (1.1MB, PDF)
Similar books
Introduction to Complex Analysis, by W W L Chen (Macquarie University). Introduction to some of the basic ideas in complex analysis: complex numbers; foundations of complex analysis; complex differentiation; complex integrals; Cauchy's integral theorem; Cauchy's integral formula; Taylor series; Laurent series; etc.
Calculus of Residua: Complex Functions Theory a-2, by Leif Mejlbro (BookBoon). This is the second part in the series of books on complex functions theory. From the table of contents: Introduction; Power Series; Harmonic Functions; Laurent Series and Residua; Applications of the Calculus of Residua; Index.
On Riemann's Theory of Algebraic Functions and their Integrals, by Felix Klein (Macmillan and Bowes). In his scholarly supplement to Riemann's complex mathematical theory, rather than offer proofs in support of the theorem, Klein chose to offer this exposition and annotation, first published in 1893, in an effort to broaden and deepen understanding.
Several Complex Variables, by Michael Schneider and Yum-Tong Siu (Cambridge University Press). Several Complex Variables is a central area of mathematics with interactions with partial differential equations, algebraic geometry and differential geometry. This text emphasizes these interactions and concentrates on problems of current interest.
Optimization Problems II | JustToThePoint
"For every problem there is always, at least, a solution which seems quite plausible and reasonable. It is simple and clean, direct, neat, and very nice, and yet it is plainly wrong." #Anawim
The derivative of a function at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. It is the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Definition. A function f(x) is differentiable at a point "a" of its domain, if its domain contains an open interval containing "a", and the limit $\lim _{h \to 0}{\frac {f(a+h)-f(a)}{h}}$ exists, f’(a) = L = $\lim _{h \to 0}{\frac {f(a+h)-f(a)}{h}}$. More formally, for every positive real number ε, there exists a positive real number δ, such that for every h satisfying 0 < |h| < δ, then |L - $\frac {f(a+h)-f(a)}{h}$| < ε.
1. Power Rule: $\frac{d}{dx}(x^n) = nx^{n-1}$.
2. Sum Rule: $\frac{d}{dx}(f(x) + g(x)) = \frac{d}{dx}(f(x)) + \frac{d}{dx}(g(x))$
3. Product Rule: $\frac{d}{dx}(f(x) \cdot g(x)) = f’(x)g(x) + f(x)g’(x)$.
4. Quotient Rule: $\frac{d}{dx}\left(\frac{f(x)}{g(x)}\right) = \frac{f’(x)g(x) - f(x)g’(x)}{(g(x))^2}$
5. Chain Rule: $\frac{d}{dx}(f(g(x))) = f’(g(x)) \cdot g’(x)$
6. $\frac{d}{dx}(e^x) = e^x$, $\frac{d}{dx}(\ln(x)) = \frac{1}{x}$, $\frac{d}{dx}(\sin(x)) = \cos(x)$, $\frac{d}{dx}(\cos(x)) = -\sin(x)$, $\frac{d}{dx}(\tan(x)) = \sec^2(x)$, $\frac{d}{dx}(\arcsin(x)) = \frac{1}{\sqrt{1 - x^2}}$, $\frac{d}{dx}(\arccos(x)) = -\frac{1}{\sqrt{1 - x^2}}$, $\frac{d}{dx}(\arctan(x)) = \frac{1}{1 + x^2}$.
The critical points of a function f are the x-values, within the domain (D) of f, for which f’(x) = 0 or where f’ is undefined. Notice that the sign of f’ must stay constant between two consecutive critical points.
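The differentiation rules above can be spot-checked numerically with a central finite difference (a quick Python sketch, not part of the original notes):

```python
import math

# Central finite difference: (f(a+h) - f(a-h)) / (2h) approximates f'(a).
def numderiv(f, a, h=1e-6):
    return (f(a + h) - f(a - h)) / (2 * h)

a = 2.0
assert abs(numderiv(lambda x: x**3, a) - 3 * a**2) < 1e-6     # power rule
assert abs(numderiv(math.exp, a) - math.exp(a)) < 1e-5        # (e^x)' = e^x
assert abs(numderiv(math.sin, a) - math.cos(a)) < 1e-6        # (sin x)' = cos x
assert abs(numderiv(lambda x: math.sin(x**2), a)
           - math.cos(a**2) * 2 * a) < 1e-4                   # chain rule
print("rules verified at a = 2")
```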
If the derivative of a function changes sign around a critical point, the function is said to have a local or relative extremum (maximum or minimum) at that point. If f’ changes sign from positive (increasing function) to negative (decreasing function), the function has a local or relative maximum at that critical point. Similarly, if f’ changes sign from negative to positive, the function has a local or relative minimum. Steps to solve an optimization problem. 1. Understand the Problem. The first step is to thoroughly read the problem statement multiple times to ensure a clear understanding, including the quantity that needs to be optimized (maximized or minimized) and any constraints that must be satisfied. 2. Represent the problem, make a diagram. 3. Find the Objective Function. Express the quantity to be optimized as a function of one or more variables. This function could represent a cost, profit, area, volume, or any other relevant quantity. Using the problem constraints, one variable could be expressed in terms of the others, resulting in a function of a single variable that can be optimized. 4. Find critical points. This step involves calculating the derivative of the function with respect to the variable that has been expressed in terms of the others and setting the derivative equal to zero to find the function’s critical points. 5. Determine whether critical points correspond to local maxima, local minima, or saddle points of the graph by using the first and second derivative tests. 6. Interpret the results in the context of the problem to determine the optimal solution and check that the optimal solution being found makes sense. Optimization problems II • A rectangular garden is to be constructed using a rock wall as one side of the garden and wire fencing for the other three sides. Given that there are 100 meters of fencing available, determine the dimensions that would create the garden of maximum area. 
You may enter an exact answer or round it to the nearest hundredth. Understand/Represent the problem (Figure 4). Let x and y denote the width and length of the rectangular garden; y is the side of the rectangle built against the rock wall. Find the Objective Function. Area of the rectangular garden, A = x · y. The full perimeter of the rectangle is 2·x + 2·y, but the rock wall replaces one of the two y-sides, so only three sides (2x + y) need wire fencing ⇒ [Constraint: there are 100 meters of fencing available] 2x + y = 100 ⇒ y = 100 - 2x ⇒ A = x · y = x·(100 - 2x) = 100x - 2x^2. Find critical points: $\frac{dA}{dx} = 100 - 4x = 0 ⇒ x = \frac{-100}{-4} = 25.$ Determine whether critical points correspond to local maxima, local minima, or saddle points. Recall the Second Derivative Test. Let f be a function defined on a closed interval I that is twice differentiable at a point "a" (obviously, a ∈ I). 1. f has a local maximum at a if f’(a) = 0 and f’’(a) < 0. 2. f has a local minimum at a if f’(a) = 0 and f’’(a) > 0. 3. The test fails if f’(a) = f’’(a) = 0. $\frac{d^2A}{dx^2} = -4 < 0$ ⇒ A has a local maximum at x = 25. Interpret the results. To optimize (maximize) the area of the garden, let x = 25 m and y = 100 - 2·25 = 50 m; the area of the garden is 25·50 = 1250 m^2. • A box with a square base and open top must have a volume of 42,592 m^3. Find the dimensions of the box that minimize the amount of material used. Understand/Represent the problem; the diagram is shown in Figure 1.c. V = x^2y = 42,592 ⇒ y = $\frac{V}{x^{2}} = \frac{42,592}{x^{2}}$ Find the Objective Function. A = x^2 (base) + 4xy (4 sides) + 0 (no top) = $x^2+4·x·\frac{42,592}{x^{2}} = x^2+\frac{170,368}{x}$ Find critical points: $\frac{dA}{dx} = 2x -\frac{170,368}{x^2}, \frac{dA}{dx} = 0 ⇒ 2x -\frac{170,368}{x^2} = 0 ⇒ 0 = \frac{2x^{3}-170,368}{x^{2}}$ ⇒ $x = \sqrt[3]{\frac{170,368}{2}} = 44$ Determine whether critical points correspond to local maxima, local minima, or saddle points. 1.
(0, 44): $\frac{dA}{dx}$ < 0 ⇒ A is decreasing. 2. (44, ∞): $\frac{dA}{dx}$ > 0 ⇒ A is increasing ⇒ x = 44 is a minimum. Alternative method: $\frac{d^2A}{dx^2} = 2 + \frac{2·170,368}{x^3} > 0$ ⇒ concave upward ⇒ 44 is a minimum. Interpret the results. y = $\frac{V}{x^{2}} = \frac{42,592}{x^{2}} = \frac{42,592}{44^{2}} = 22$ ⇒ Dimensions: 44 m × 44 m × 22 m. A = x^2 (base) + 4xy (4 sides) + 0 (no top) = 44^2 + 4·44·22 = 5,808 m^2. • What is the maximum volume you can get for an open box constructed by removing squares of size x from each corner of a paper that is 6 m by 6 m and folding up the sides? Understand/Represent the problem (Figure iv). Length and breadth of the open box are l = w = (6 - 2x) m and height = x m. Find the Objective Function: V(x) = (6 - 2x)·(6 - 2x)·x m^3 = (36 - 24x + 4x^2)·x = 36x - 24x^2 + 4x^3. Find critical points: $\frac{dV}{dx} = 36 - 48x + 12x^2 = 0 ↭ x^2 - 4x + 3 = 0 ⇒ x = \frac{4±\sqrt{4^2-4·3·1}}{2·1} = \frac{4±\sqrt{4}}{2} = \frac{4±2}{2} =$ 3 m (not possible, since x = 3 leaves no base: 6 - 2·3 = 0) or 1 m. Determine whether critical points correspond to local maxima, local minima, or saddle points. $\frac{d^2V}{dx^2} = -48 + 24x$; at x = 1, $\frac{d^2V}{dx^2}\bigg|_{1} = -48 + 24 = -24 < 0$ ⇒ there exists a maximum. Interpret the results. To maximize the volume, take x = 1 meter; the volume is V(1) = (6 - 2·1)·(6 - 2·1)·1 = 4·4·1 = 16 m^3. • A rectangular flower garden with an area of 30 m^2 is surrounded by a fenced border 1 m wide on two sides and 2 m wide on the other two sides. What dimensions of the garden minimize the combined area of the garden and borders? Understand/Represent the problem (Figure 3). The flower garden's area is 30 = x·y (i). The combined area of the garden and border is A = (x + 4)·(y + 2) (ii). Find the Objective Function. 30 = x·y (i) ⇒ y = $\frac{30}{x}$ ⇒ [Replacing y into (ii)] A = $(x + 4)·(\frac{30}{x}+2) = (x + 4)(\frac{30+2x}{x}) = \frac{(x+4)(30+2x)}{x} = \frac{2x^2+38x+120}{x} = 2x + 38 + \frac{120}{x}$ Find critical points.
$\frac{dA}{dx} = 2 -\frac{120}{x^2} = 0 ↭ \frac{120}{x^2} = 2 ↭ x^2 = 60 ↭ x = \sqrt{60} ≈ 7.746$ Determine whether critical points correspond to local maxima, local minima, or saddle points. $\frac{d^2A}{dx^2} = \frac{240}{x^3}$; at x = $\sqrt{60}$, $\frac{d^2A}{dx^2}=\frac{240}{x^3}\bigg|_{\sqrt{60}}$ > 0 ⇒ there exists a minimum. Interpret the results. The dimensions of the garden that minimize the combined area of the garden and borders are x = $\sqrt{60} ≈ 7.746$ m and y = $\frac{30}{x} = \frac{30}{\sqrt{60}} ≈ 3.87$ m. • Find the area of the largest rectangle that can be inscribed in the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2} = 1.$ Understand/Represent the problem. For a rectangle to be inscribed in the ellipse, the sides of the rectangle must be parallel to the axes, so its vertices are (±a·cos(θ), ±b·sin(θ)) (Figure 1). Recall that the parametric equations of an ellipse in standard form are x(t) = a cos(t), y(t) = b sin(t), where a is the length of the semi-major axis, b is the length of the semi-minor axis, and t is the parameter ranging from 0 to 2π. Find the Objective Function. The rectangle's area is A(θ) = l·w = [l = 2·a·cos(θ), w = 2·b·sin(θ)] 4·a·b·cos(θ)·sin(θ) = 2·a·b·sin(2θ). Find critical points. $\frac{dA}{dθ} = 4·a·b·cos(2θ) = 0 ↭$ [a ≠ 0, b ≠ 0] cos(2θ) = 0 ⇒ 2θ = $\frac{π}{2} ⇒ θ = \frac{π}{4}.$ Determine whether critical points correspond to local maxima, local minima, or saddle points. $\frac{d^2A}{dθ^2} = -8·a·b·sin(2θ), \frac{d^2A}{dθ^2}\bigg|_{\frac{π}{4}} = -8·a·b·sin(\frac{π}{2}) = -8·a·b < 0 ⇒ \frac{π}{4}$ is a maximum. Interpret the results.
The maximum area is A = 4·a·b·$cos(\frac{π}{4})·sin(\frac{π}{4}) = \frac{4·a·b}{\sqrt{2}·\sqrt{2}} = 2ab.$
Example: Let the ellipse be $\frac{x^2}{4}+y^2 = 1$, so a = 2, b = 1, Area = 2·2·1 = 4, θ = $\frac{π}{4}$, and its vertices are (±a·cos(θ), ±b·sin(θ)) = $(±2·cos(\frac{π}{4}), ±1·sin(\frac{π}{4})) = (±2·\frac{1}{\sqrt{2}}, ±\frac{1}{\sqrt{2}}) = (±\sqrt{2}, ±\frac{1}{\sqrt{2}})$.
• A piece of wire 10 m long is cut into two pieces. One piece is bent into a square and the other is bent into an equilateral triangle. How should the wire be cut so that the total area enclosed is (a) a minimum and (b) a maximum?
Understand/Represent the problem (Figure 2). The first piece will have length x, which we'll bend into a square; each side of the square will have length $\frac{x}{4}$. The second piece will have length 10 - x, and we will bend it into an equilateral triangle (a triangle that has three sides that are all the same length and three angles that are all the same size, namely 60° = $\frac{π}{3}$) ⇒ each side of the equilateral triangle will have length $\frac{1}{3}·(10-x)$. Its height h satisfies $sin(\frac{π}{3}) = \frac{\sqrt{3}}{2} = \frac{h}{\frac{1}{3}(10-x)} ⇒ h = \frac{\sqrt{3}(10-x)}{6}$.
Find the Objective Function. A = [A₁ = l², A₂ = $\frac{base·h}{2}$] A₁ + A₂ = $(\frac{x}{4})^2 + \frac{1}{2}·\frac{1}{3}(10-x)·\frac{\sqrt{3}(10-x)}{6} = \frac{x^2}{16}+\frac{\sqrt{3}(10-x)^2}{36}$.
Find critical points. $\frac{dA}{dx} = \frac{x}{8}-\frac{\sqrt{3}(10-x)}{18} = 0 ↭ (\frac{1}{8}+\frac{\sqrt{3}}{18})x = \frac{\sqrt{3}·10}{18} ↭ \frac{18 +\sqrt{3}·8}{8·18}x = \frac{\sqrt{3}·10}{18} ↭ \frac{18 +\sqrt{3}·8}{8}x = \sqrt{3}·10 ↭ x = \frac{\sqrt{3}·80}{18 +\sqrt{3}·8} ≈ 4.35$.
Determine whether critical points correspond to local maxima or local minima. $\frac{d^2A}{dx^2} = \frac{1}{8}+\frac{\sqrt{3}}{18} > 0 ⇒ x = \frac{\sqrt{3}·80}{18 +\sqrt{3}·8} ≈ 4.35$ is a local minimum.
Interpret the results.
The area is minimized when x = $\frac{\sqrt{3}·80}{18 +\sqrt{3}·8}$ ≈ 4.35 (that is, ≈ 4.35 m of wire is used to make the square and ≈ 5.65 m to make the triangle), and A(4.35) ≈ 2.72. The area is maximized at the boundary, more precisely when x = 10, that is, when all of the wire is used to make the square: A = $\frac{10^2}{16} = 6.25$ (we have previously calculated A(x = 0) = $\frac{\sqrt{3}·10^2}{36} ≈ 4.81 < 6.25$).
• A box with a square base and open top must have a fixed volume. Find the dimensions of the box that minimize the amount of material used.
Let x represent the base's side and y the height. V = x²y (fixed) ⇒ y = $\frac{V}{x^{2}}$, so A = x² + 4xy = $x^2 + \frac{4V}{x}$.
$\frac{dA}{dx} = 2x - \frac{4V}{x^{2}}$. $\frac{dA}{dx} = 0 ⇒ 0 = \frac{2x^{3}-4V}{x^{2}} ⇒ 0 = 2x^3 - 4V ⇒ x = \sqrt[3]{2V}$.
1. On (0, $\sqrt[3]{2V}$), $\frac{dA}{dx}$ < 0 ⇒ A is decreasing. 2. On ($\sqrt[3]{2V}$, ∞), $\frac{dA}{dx}$ > 0 ⇒ A is increasing.
Therefore, x = $\sqrt[3]{2V}$ is a minimum, and y = $\frac{V}{x^2} = \frac{V}{(2V)^{2/3}} = 2^{\frac{-2}{3}}V^{\frac{1}{3}}$. Furthermore, $\frac{x}{y} = \frac{2^{\frac{1}{3}}V^{\frac{1}{3}}}{2^{\frac{-2}{3}}V^{\frac{1}{3}}} = 2$, i.e. the optimal base side is twice the height.
This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
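The closed-form optimum for the open-top box can be sanity-checked numerically. A minimal Python sketch (mine, not part of the original article), using V = 42,592 from the earlier worked example, where the expected answer is x = 44 and y = 22:

```python
# Numeric check (not from the article) of the open-top box result:
# for fixed volume V, the surface area A(x) = x^2 + 4V/x is minimized
# at x = (2V)^(1/3), with y = V/x^2 and ratio x/y = 2.

def surface_area(x, V):
    """Square base x*x plus four sides of area x*y, with y = V/x^2 (no top)."""
    return x * x + 4.0 * V / x

V = 42_592                      # volume from the worked example above
x_opt = (2 * V) ** (1 / 3)      # critical point of A(x); expect 44
y_opt = V / x_opt ** 2          # expect 22

# Brute-force scan around the critical point: no nearby x beats x_opt.
candidates = [x_opt + step / 100.0 for step in range(-200, 201)]
best = min(candidates, key=lambda x: surface_area(x, V))

print(round(x_opt, 6), round(y_opt, 6), round(x_opt / y_opt, 6))
```

The scan is only a check on the calculus, not a solution method; the grid step (0.01) and window (±2) are arbitrary choices of mine.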
Digital Plumber — Python — #adventofcode Day 12
Today's challenge has us helping a village of programs who are unable to communicate. We have a list of the communication channels between their houses, and need to sort them out into groups such that we know that each program can communicate with others in its own group but not any others. Then we have to calculate the size of the group containing program 0 and the total number of groups.
!!! commentary This is one of those problems where I'm pretty sure that my algorithm isn't close to being the most efficient, but it definitely works! For the sake of solving the challenge that's all that matters, but it still bugs me.
By now I've become used to using fileinput to transparently read data either from files given on the command-line or standard input if no arguments are given. First we make an initial pass through the input data, creating a group for each line representing the programs on that line (which can communicate with each other). We store this as a Python set.

import fileinput as fi

groups = []
for line in fi.input():
    head, rest = line.split(' <-> ')
    group = set([int(head)])
    group.update([int(x) for x in rest.split(', ')])
    groups.append(group)

Now we iterate through the groups, starting with the first, and merging any we find that overlap with our current group.

i = 0
while i < len(groups):
    current = groups[i]

Each pass through the groups brings more programs into the current group, so we have to go through and check their connections too. We make several merge passes, until we detect that no more merges took place.

    num_groups = len(groups) + 1
    while num_groups > len(groups):
        j = i + 1
        num_groups = len(groups)

This inner loop does the actual merging, and deletes each group as it's merged in.

        while j < len(groups):
            if len(current & groups[j]) > 0:
                current.update(groups[j])  # absorb the overlapping group
                del groups[j]
            else:
                j += 1
    i += 1

All that's left to do now is to display the results.
print("Number in group 0:", len([g for g in groups if 0 in g][0]))
print("Number of groups:", len(groups))
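For comparison, the same grouping can be computed with a disjoint-set (union-find) structure, which avoids the repeated merge passes. This is my own sketch, not part of the original post, and the miniature input below is made up for illustration (it is not the real puzzle data):

```python
# Union-find alternative (my sketch) to the set-merging loop above.

def find(parent, x):
    """Return the root of x, compressing the path as we go."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # point x at its grandparent
        x = parent[x]
    return x

def union(parent, a, b):
    """Join the groups containing a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def build(lines):
    """Parse 'head <-> a, b, ...' lines into a parent map."""
    parent = {}
    for line in lines:
        head, rest = line.strip().split(' <-> ')
        nodes = [int(head)] + [int(x) for x in rest.split(', ')]
        for n in nodes:
            parent.setdefault(n, n)
        for n in nodes[1:]:
            union(parent, nodes[0], n)
    return parent

# Made-up miniature input, not the real puzzle data:
lines = ["0 <-> 1", "1 <-> 0, 2", "3 <-> 3"]
parent = build(lines)
group0 = [n for n in parent if find(parent, n) == find(parent, 0)]
roots = {find(parent, n) for n in parent}

print("Number in group 0:", len(group0))  # 3 on this toy input
print("Number of groups:", len(roots))    # 2 on this toy input
```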
Which Scripts?
There are six numbers written in five different scripts. Can you sort out which is which? Write $51$ in each script.
[Thank you to the SMILE Centre for permission to use this puzzle.]
Getting Started
You might find it helpful to print out this sheet of the numbers.
Which numbers do you know? Can you see any similarities between any of the numbers? Which numbers are the 'shortest' and the 'longest'?
Student Solutions
Well done to everybody who worked out which numbers belong to which scripts! Poppy from Acomb First School in the UK shared this strategy with us:
First of all I found the numbers that I recognised. Then I found the numbers that looked a little bit like the numbers that I recognised and looked for one-digit numbers and three-digit numbers because I knew that one-digit numbers would be 2 and that three-digit numbers would be 100. Then I grouped the numbers into 5 groups of similar scripts. To work out the trickier numbers, I was able to work out the 1 from the 100 and the 2 from the 2. Then I could work out the 13 and the 25. Once I knew the 13 and 25, I could find the 58 from the 5 and then the 83 from the 3 and the 8.
Good ideas, Poppy - lots of children found it easiest to start by looking for the digit 2. James from Co-op Academy New Islington, Manchester in the UK started by sorting the numbers into the scripts based on which numbers looked similar:
I cut the number grid into squares so I could move them around. I found out what the six numbers in English were first because I know what they look like. Then, I started to look for ones that looked similar. I put them in rows and columns for the same number and the same scripts to be organised.
Next, I looked for numbers in other scripts that looked very similar and matched them up with the correct number. Because I knew 2 it was easier to work the rest of them out because after you've found 2 you can work out 25. Once you've worked out 25 you can work out 58. Once you've worked out 58 you can work out 83. After you've worked out 83 you can work out 13. For the 100s you can use the 1 from 13 and there are three digits or two zeroes or two dots on some of them.
This is a good step-by-step method, James! The numbers we use in English are also the same numbers that are used in many other languages. I wonder if anyone knows the name for these numbers?
We received a lot of solutions from the children at St. Helen's School in Abbotsham, England. Amelia-May and Frances explained their strategy, which was similar to James's:
First we looked for all the 2s (which were kind of easy), then found the 25s because we knew what the 2 would look like. After that we got the 58s because we knew what the 5 looked like. Then we found that it had an 8 like the 83 so we moved on to that looking for the 83s. We realised that the 83 linked to 13 because of the 3. So we found all of the 13s. Last but not least we did the 100!
Thank you as well to Edgar, Will, Amber, Grace, Kacie, Fraser, Oliver, Myles, Albie, Charlie and Lucy from St. Helen's School who also sent in some similar solutions. Gabe and Muhammad from Wembrook Primary also used similar reasoning, and explained their thinking very clearly:
Dhruv from The Glasgow Academy in the UK used their prior knowledge of two of the scripts to solve this problem:
First, I separated these numbers into different groups based on their writing pattern. Secondly, I knew numbers that were in English and Hindi because I am from India. Thirdly, I arranged the numbers in ascending order. Finally, for the first row in Chinese I took a guess for the first number and then linked it to find the other numbers.
Example: For the Chinese group I took a guess that the two lines were the number 2 and then found the same two lines in another number and so on. Then I arranged the numbers in ascending order. I followed the same strategy for the two scripts which were unknown to me. Well explained, Dhruv - this looks similar to some of the other strategies above, but the pictures make it really clear how you got from one solution to the next. I wonder if anybody has worked out what the other scripts might be? Thank you as well to the following children for sending in their ideas about which numbers belonged to which scripts: Zoe from Canada; Chloe, Meriam, Sophia, Milan, Josh, Thomas, Hogan, Henry and Harry from Banstead Prep; Blossom, Gabe, Kinel and Lilah from Onchan Primary School on the Isle of Man; Freddie; and the children at Ganit Kreeda in Vicharvatika, India. The second part of this problem involved writing the number 51 in each script. Only the next four groups of children sent in a correct solution to this, as lots of children made a mistake with writing 51 in the Chinese script. Isobelle, Edie and James from Richmond Methodist School in the UK explained: We took the 5 and the 1 from each script to make 51 in the different scripts. I wonder if you had actually had to use a slightly different strategy with the Chinese script? Sophie from Glenfall in the UK explained: The trickiest was the Chinese style numbers because they actually used a 'plus' symbol which wasn't a digit. Elliott from Richmond Methodist School in the UK had an idea about what the plus symbol might mean: The second step is to write 51 in all of the scripts. You just need to take the 5 and 1 from all the scripts except script E (the Chinese script). I saw that a + equals x10. If one of the symbols has lines going up, one line is 1, two are 2 and 3 are 3. So adding a plus after a symbol multiplies the previous symbol by 10. Good ideas! 
I wonder how this works with the number 13 as there isn't a digit before the plus symbol? Junior Maths Club at Caulfield Grammar School in Wheelers Hill, Australia sent in these full solutions: Thank you all for sharing your ideas with us. Teachers' Resources Why do this problem? This problem consolidates understanding of place value in a demanding but intriguing context. In order to tackle the problem, learners will have to organise and sort the information given. We hope they are curious enough to keep going, even when it gets tricky! Possible approach Show the image of the numbers to the group and ask them to talk to a partner about what they notice. Gather some suggestions and explain what the image shows, if this has not already come up in discussion. Invite learners to suggest ways of beginning the problem and then set them off in pairs to work together, using this copy of the image and providing squared paper. As they work, encourage them to develop a good way to record their findings. In the plenary, it might be helpful for you to enlarge this sheet and cut out the numbers so they can be moved around on the board. (If these were laminated, they would make a useful set of cards to be used again.) You could invite pairs of children to explain how they reached their conclusions and recorded the results. This could lead into a discussion of the place value system (compared with, for example, Roman numerals). Key questions Which numbers do you know? Can you see any similarities between any of the numbers? Which numbers are the 'shortest' and the 'longest'? Possible extension Learners could write hints which might help others work on the task without giving away the solution. You could also encourage children to find out the name of each script. Another idea would be to include Roman Numeral versions of the numbers: XIII, II, LVIII, XXV, LXXXIII, C. This sheet includes six cards which could be printed off to accompany the original numbers. 
Alternatively, challenge children to create calculations and their answers in one of the scripts. Possible support Some children may find it useful to cut out the individual numbers so they can be sorted more easily. Below you can see some pictures of children at Lancasterian Primary School in Haringey working on this task:
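Elliott's rule above (a symbol placed before the "plus"-shaped ten sign multiplies it, while 13 uses the bare ten sign with no multiplier) matches how standard Chinese numerals work. A small Python sketch of that rule; the characters and the 1-99 scope are my own illustration, not taken from the puzzle page:

```python
# Sketch (mine, not from the puzzle page) of the rule the children noticed:
# standard Chinese numerals write 13 as 十三 ("ten three") and 51 as 五十一
# ("five ten one"): a digit *before* 十 multiplies it, a digit *after* adds.

DIGITS = "一二三四五六七八九"  # the digits 1..9

def chinese(n):
    """Write 1 <= n <= 99 in standard Chinese numerals."""
    if not 1 <= n <= 99:
        raise ValueError("sketch only covers 1-99")
    tens, units = divmod(n, 10)
    out = ""
    if tens >= 2:
        out += DIGITS[tens - 1]  # multiplier before the ten sign...
    if tens >= 1:
        out += "十"               # ...but a bare 十 for 10-19
    if units:
        out += DIGITS[units - 1]
    return out

print(chinese(13))  # 十三: no digit before the ten, as Elliott wondered
print(chinese(51))  # 五十一
```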
Finantick Demo – What is Free Margin
What is Free Margin?
What does "Free Margin" mean? Margin can be classified as either "used" or "free". Used Margin, which is just the aggregate of all the Required Margin from all open positions, was discussed in a previous lesson. Free Margin is the difference between Equity and Used Margin.
Free Margin refers to the Equity in a trader's account that is NOT tied up in margin for current open positions. Free Margin is also known as "Usable Margin" because it's margin that you can "use"… it's "usable".
Free Margin can be thought of as two things:
1. The amount available to open NEW positions.
2. The amount that EXISTING positions can move against you before you receive a Margin Call or Stop Out.
Don't worry about what a Margin Call and Stop Out are. They will be discussed later. For now, just know they're bad things. Like acne breakouts, you don't want to experience them.
Free Margin is also known as Usable Margin, Usable Maintenance Margin, Available Margin, and "Available to Trade".
How to Calculate Free Margin
Here's how to calculate Free Margin:
Free Margin = Equity - Used Margin
If you have open positions, and they are currently profitable, your Equity will increase, which means that you will have more Free Margin as well. Floating profits increase Equity, which increases Free Margin. If your open positions are losing money, your Equity will decrease, which means that you will also have less Free Margin. Floating losses decrease Equity, which decreases Free Margin.
Example: No Open Positions
Let's start with an easy example. You deposit $1,000 in your trading account. You don't have any open positions; what is your Free Margin?
Step 1: Calculate Equity
If you don't have any open positions, calculating the Equity is easy.
Equity = Account Balance + Floating Profits (or Losses)
$1,000 = $1,000 + $0
The Equity would be the SAME as your Balance.
Since you don't have any open positions, you don't have any floating profits or losses.
Step 2: Calculate Free Margin
If you don't have any open positions, then the Free Margin is the SAME as the Equity.
Free Margin = Equity - Used Margin
$1,000 = $1,000 - $0
Since you don't have any open positions, there is no margin being "used". This means that your Free Margin will be the same as your Balance and Equity.
Example: Open a Long USD/JPY Position
Now let's make it a bit more complicated by entering a trade! Let's say you have an account balance of $1,000.
Step 1: Calculate Required Margin
You want to go long USD/JPY and want to open a 1 mini lot (10,000 units) position. The Margin Requirement is 4%. How much margin (Required Margin) will you need to open the position?
Since USD is the base currency, this mini lot is 10,000 dollars, which means the position's Notional Value is $10,000.
Required Margin = Notional Value x Margin Requirement
$400 = $10,000 x .04
Assuming your trading account is denominated in USD, since the Margin Requirement is 4%, the Required Margin will be $400.
Step 2: Calculate Used Margin
Aside from the trade we just entered, there aren't any other trades open. Since we just have a SINGLE position open, the Used Margin will be the same as the Required Margin.
Step 3: Calculate Equity
Let's assume that the price has moved slightly in your favor and your position is now trading at breakeven. This means that your floating P/L is $0. Let's calculate your Equity:
Equity = Account Balance + Floating Profits (or Losses)
$1,000 = $1,000 + $0
The Equity in your account is now $1,000.
Step 4: Calculate Free Margin
Now that we know the Equity, we can now calculate the Free Margin:
Free Margin = Equity - Used Margin
$600 = $1,000 - $400
As you can see, another way to look at Equity is that it is the sum of your Used and Free Margin.
Equity = Used Margin + Free Margin In this lesson, we learned about the following: • Free Margin is the money that is NOT “locked up” due to an open position and can be used to open new positions. • When Free Margin is at zero or less, additional positions cannot be opened.
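The worked example condenses to two lines of arithmetic. A small Python sketch (my own; the function names are mine, not Finantick's):

```python
# Sketch of the lesson's formulas, applied to the USD/JPY example above:
# $1,000 balance, one 10,000-unit position, 4% margin requirement, $0 floating P/L.

def required_margin(notional, margin_requirement):
    """Required Margin = Notional Value x Margin Requirement."""
    return notional * margin_requirement

def free_margin(balance, floating_pl, used_margin):
    """Free Margin = Equity - Used Margin, with Equity = Balance + Floating P/L."""
    equity = balance + floating_pl
    return equity - used_margin

used = required_margin(10_000, 0.04)  # $400; the only open position, so Used = Required
free = free_margin(1_000, 0.0, used)  # $600

print(used)  # 400.0
print(free)  # 600.0
```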
How to Perform One-Way ANOVA in R (With Example Dataset)
The one-way ANOVA (Analysis of Variance) is used for determining statistical differences in more than two groups by comparing their group means. The one-way ANOVA is also known as one-factor ANOVA, as there is only one independent variable (factor or group variable) to analyze.
A one-way ANOVA tests the null hypothesis that group means are equal against the alternative hypothesis that group means are not equal (i.e. there is a significant difference between at least one group and the others).
You can use the following code to perform one-way ANOVA in R:
# model
model <- aov(y ~ x, data = df)
# view ANOVA summary
summary(model)
Parameter | Description
y | Response variable (should be a continuous variable)
x | Group variable
df | Data frame containing the group and response variables
The following example illustrates how to use one-way ANOVA for analyzing the group differences.
How to Perform One-Way ANOVA in R
For example, a researcher wants to analyze whether plant height differs among plant genotypes. The researcher collects plant height data for four plant genotypes. The researcher has the following null and alternative hypotheses:
Null Hypothesis: The plant height is equal among plant genotypes, i.e. the mean of plant height is equal.
Alternative hypothesis: The plant height is not equal among plant genotypes, i.e. the mean of plant height is significantly different.
Here, the alternative hypothesis is two-sided, as the plant height can be lesser or greater in one plant genotype than in the other genotypes.
The following ANOVA code shows how to perform one-way ANOVA in R:
Load and view the dataset,
# load dataset
df <- read.csv("https://reneshbedre.github.io/assets/posts/anova/one_way_anova.csv")
# view first rows of data frame
head(df)
  genotype height
1        A      5
2        A      6
3        A      7
4        A      8
5        A      8
6        B     12
Check descriptive statistics (mean and variance) for each plant genotype,
# load package
library(dplyr)
# get descriptive statistics
df %>% group_by(genotype) %>% summarise(mean = mean(height), var = var(height))
# A tibble: 4 × 3
  genotype  mean   var
  <fct>    <dbl> <dbl>
1 A          6.8   1.7
2 B         13.6   2.3
3 C          7     3.5
4 D          7.2   1.7
From the descriptive statistics, we can see that plant height is highest for genotype B and lowest for genotype A. The variance is roughly similar for all genotypes. Now, we will perform a one-way ANOVA to check whether these differences in plant height are statistically significant.
Perform a one-way ANOVA and summarise the results using the summary() function,
# fit model
model <- aov(height ~ genotype, data = df)
# summary statistics
summary(model)
            Df Sum Sq Mean Sq F value   Pr(>F)
genotype     3  163.8   54.58   23.73 3.93e-06 ***
Residuals   16   36.8    2.30
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The one-way ANOVA analysis reports the following important statistics for interpretation,
Parameter | Value
F | 23.73
p value | 3.93e-06
Degrees of freedom | 3 and 16
According to the one-way ANOVA results, the p value is significant [F(3, 16) = 23.73, p < 0.05]. Hence, we reject the null hypothesis and conclude that plant height among genotypes is significantly different.
This work is licensed under a Creative Commons Attribution 4.0 International License.
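As a cross-check (my own sketch, in Python rather than the post's R), the reported F value can be recomputed from the published summary statistics alone, using the group means, group variances, and n = 5 per group, since the design is balanced:

```python
# Recompute the one-way ANOVA F statistic from the post's summary statistics.
# With equal group sizes n, SS_between = n * sum((mean_i - grand_mean)^2) and
# MS_within is the plain average of the group variances (pooled variance).

means = [6.8, 13.6, 7.0, 7.2]       # per-genotype means from the tibble above
variances = [1.7, 2.3, 3.5, 1.7]    # per-genotype variances
n = 5                               # observations per genotype (20 total, df_resid = 16)
k = len(means)                      # number of groups

grand_mean = sum(means) / k                              # balanced design
ss_between = n * sum((m - grand_mean) ** 2 for m in means)
ms_between = ss_between / (k - 1)                        # df_between = 3
ms_within = sum(variances) / k                           # pooled within-group variance
f_stat = ms_between / ms_within

print(ss_between)          # ≈ 163.75, i.e. the reported genotype Sum Sq 163.8
print(ms_within)           # ≈ 2.3, the residual Mean Sq
print(round(f_stat, 2))    # 23.73, matching the reported F value
```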
SAT Practice Test #5 Answer Explanations
© 2016 The College Board. College Board, SAT, and the acorn logo are registered trademarks of the College Board. K-5MSA04
Section 1: Reading Test
QUESTION 1
Choice D is the best answer. The passage begins with the main character, Lymie, sitting in a restaurant and reading a history book. The first paragraph describes the book in front of him ("Blank pages front and back were filled in with maps, drawings, dates, comic cartoons, and organs of the body," lines 11-13). The second paragraph reveals what Lymie is reading about (the Peace of Paris and the Congress of Vienna) and suggests his intense concentration on the book ("sometimes he swallowed whole the food that he had no idea he was eating," lines 23-24). In the third paragraph, the focus of the passage shifts to a description and discussion of others in the restaurant, namely "A party of four, two men and two women . . ." (lines 42-43).
Choice A is incorrect because the passage does not provide observations made by other characters, only offering Lymie's and the narrator's observations. Choice B is incorrect because the beginning of the passage focuses on Lymie as he reads by himself and the end of the passage focuses on the arrival of Lymie's father, with whom Lymie's relationship seems somewhat strained. Choice C is incorrect because the setting is described in the beginning of the first paragraph but is never the main focus of the passage.
QUESTION 2
Choice C is the best answer. The main purpose of the first paragraph is to establish the passage's setting by describing a place and an object.
The place is the Alcazar Restaurant, which is described as being "long and narrow" and decorated with "art moderne," murals, and plants (lines 2-6), and the object is the history book Lymie is reading.
Choice A is incorrect because rather than establishing what Lymie does every night, the first paragraph describes what Lymie is doing on one night. Choice B is incorrect because nothing in the first paragraph indicates when the passage takes place, as the details provided (such as the restaurant and the book) are not specific to one era. Choice D is incorrect because nothing in the first paragraph clearly foreshadows a later event.
QUESTION 3
Choice C is the best answer. The passage states that "when Lymie put down his fork and began to count . . . the waitress, whose name was Irma, thought he was through eating and tried to take his plate away" (lines 34-38). It is reasonable to assume that Irma thinks Lymie is finished eating because he is no longer holding his fork.
Choice A is incorrect because Lymie has already been reading his book while eating for some time before Irma thinks he is finished eating. Choice B is incorrect because the passage doesn't state that Lymie's plate is empty, and the fact that Lymie stops Irma from taking his plate suggests that it is not empty. Choice D is incorrect because the passage gives no indication that Lymie asks Irma to clear the table.
QUESTION 4
Choice A is the best answer. The passage makes it clear that Lymie finds the party of four who enter the restaurant to be loud and bothersome, as their entrance means he is no longer able to concentrate on his book: "They laughed more than there seemed any occasion for . . . and their laughter was too loud. But it was the women's voices . . .
which caused Lymie to skim over two whole pages without knowing what was on them" (lines 52-59).
Choices B, C, and D are incorrect because lines 55-59 make clear that Lymie is annoyed by the party of four, not that he finds their presence refreshing (choice B), thinks they resemble the people he is reading about (choice C), or thinks they represent glamour and youth (choice D).
QUESTION 5
Choice C is the best answer. The previous question asks about Lymie's impression of the party of four who enter the restaurant, with the correct answer being that he finds them noisy and distracting. This is supported in lines 55-59: "But it was the women's voices, the terrible not quite sober pitch of the women's voices, which caused Lymie to skim over two whole pages without knowing what was on them."
Choices A, B, and D are incorrect because the lines cited do not support the answer to the previous question about Lymie's impression of the party of four who enter the restaurant. Rather than showing that Lymie finds the group of strangers noisy and distracting, the lines simply describe how two of the four people look (choices A and B) and indicate what Lymie does when his father joins him in the restaurant (choice D).
QUESTION 6
Choice A is the best answer. In the passage, Lymie closes his book only after "a coat that he recognized as his father's was hung on the hook next to his chair" (lines 67-68). It is Lymie's father's arrival that causes him to close the book.
Choices B, C, and D are incorrect because lines 67-70 of the passage clearly establish that Lymie closes his book because his father has arrived, not that he does so because the party of four is too loud (choice B), because he has finished reading a section of the book (choice C), or because he is getting ready to leave (choice D).
QUESTION 7
Choice D is the best answer. In lines 74-79, the narrator describes Mr. Peters as "gray" and balding, noting that he has "lost weight" and his color is "poor." This description suggests Mr.
Peters is aging and losing strength and vigor.
Choices A, B, and C are incorrect because the description of Mr. Peters in lines 74-79 suggests he is a person who is wan and losing vitality, not someone who is healthy and in good shape (choice A), angry and intimidating (choice B), or emotionally anxious (choice C).
QUESTION 8
Choice B is the best answer. In the last paragraph of the passage, Mr. Peters is described as being unaware "that there had been any change" in his appearance since he was younger (lines 80-81). Later in the paragraph, the passage states that "the young man" Mr. Peters once was "had never for one second deserted" him (lines 90-91). The main idea of the last paragraph is that Mr. Peters still thinks of himself as young, or at least acts as if he is a younger version of himself.
Choice A is incorrect because Mr. Peters is spending time with Lymie, his son, and there is no indication that he generally does not spend time with his family. Choice C is incorrect because although there are brief mentions of a diamond ring and manicured fingers, the paragraph focuses on Mr. Peters's overall appearance, not on his awareness of status symbols. Choice D is incorrect because the last paragraph clearly states that Mr. Peters is "not aware that there had been any change" and thinks of himself as young.
QUESTION 9
Choice B is the best answer. In lines 81-85, Mr. Peters is described as having "straightened his tie self-consciously" and gestured with a menu "so that the two women at the next table would notice the diamond ring on the fourth finger of his right hand." Mr. Peters's actions are those of someone who wants to attract attention and be noticed.
Choices A, C, and D are incorrect because the lines cited do not support the idea that Mr. Peters wants to attract attention to himself. Choices A and C address Mr. Peters's view of himself. Choice D indicates that Mr.
Peters's view of himself affects his behavior but does not reveal that he acts in a way meant to draw attention.
QUESTION 10
Choice B is the best answer. The last sentence of the passage states that Mr. Peters's mischaracterization of himself makes him act in ways that are not "becoming" for a man of his age. In this context, "becoming" suggests behavior that is appropriate or fitting.
Choices A, C, and D are incorrect because in the context of describing one's behavior, "becoming" means appropriate or fitting, not becoming known (choice A), becoming more advanced (choice C), or simply occurring (choice D).
QUESTION 11
Choice B is the best answer. In Passage 1, Beecher makes the point that even if women in her society are perceived as being inferior to men, they are still able to effect considerable influence on that society: "But while woman holds a subordinate relation in society to the other sex, it is not because it was designed that her duties or her influence should be any the less important, or all-pervading" (lines 6-10).
Choice A is incorrect because Beecher describes the dynamic between men and women in terms of the way they can change society, not in terms of security and physical safety. Choice C is incorrect because even though Beecher implies that women have fewer rights in society than men do, she doesn't say that women have fewer responsibilities. Choice D is incorrect because Beecher does not assert that women are superior to men.
QUESTION 12
Choice A is the best answer. The previous question asks what point Beecher makes regarding the relationship between men and women in her society, with the answer being that women are considered
This is supported in lines 6-10: “But while woman holds a subordinate relation in society to the other sex, it is not because it was designed that her duties or her influence should be any the less important, or all-pervading.”

Choices B, C, and D are incorrect because the lines cited do not support the answer to the previous question about the point Beecher makes regarding the relationship between men and women in her society. Instead, they describe ways men can affect society (choices B and C) and explain how certain actions undertaken by a woman can be viewed negatively (choice D).

QUESTION 13

Choice B is the best answer. In the third paragraph (lines 22-37), Beecher suggests that women can be “so much respected, esteemed and loved” by those around them that men will accede to their wishes: “then, the fathers, the husbands, and the sons, will find an influence thrown around them, to which they will yield not only willingly but proudly . . . .” These lines show that Beecher believes women can influence society by influencing the men around them; in other words, women have an indirect influence on public life.

Choices A, C, and D are incorrect because lines 34-37 make it clear that Beecher believes women do have an effect on society, even if it is an indirect effect. Beecher does not indicate that women’s effect on public life is ignored because most men are not interested (choice A), unnecessary because men do not need help governing society (choice C), or merely symbolic because women tend to be idealistic (choice D).

QUESTION 14

Choice D is the best answer. Regarding the dynamic of men and women in society, Beecher says that one sex is given “the subordinate station” while the other is given the “superior” station (lines 1-2).
In the context of how one gender exists in comparison to the other, the word “station” suggests a standing or rank.

Choices A, B, and C are incorrect because in the context of the relative standing of men and women in Beecher’s society, the word “station” suggests a standing or rank, not a physical location or area (choices A, B, and C).

QUESTION 15

Choice C is the best answer. When describing how men and women can influence society, Beecher says the ways they can do so “should be altogether different and peculiar” (lines 11-12). In the context of the “altogether different” ways men and women can influence society, the word “peculiar” implies being unique or distinctive.

Choices A, B, and D are incorrect because in the context of the “altogether different” ways men and women can influence society, the word “peculiar” suggests something unique or distinctive, not something unusual and odd (choice A), unexpected (choice B), or rare (choice D).

QUESTION 16

Choice A is the best answer. In Passage 2, Grimké makes the main point that people have rights because they are human, not because of their gender or race. This is clear in lines 58-60, when Grimké states that “human beings have rights, because they are moral beings: the rights of all men grow out of their moral nature” and lines 65-68, when Grimké writes, “Now if rights are founded in the nature of our moral being, then the mere circumstance of sex does not give to man higher rights and responsibilities, than to woman.”

Choices B, C, and D are incorrect because Grimké primarily emphasizes that all men and women inherently have the same rights (“rights are founded in the nature of our moral being,” lines 65-66). Her central claim is not that men and women need to work together to change society (choice B), that moral rights are the distinguishing characteristic separating humans from animals (choice C), or that there should be equal opportunities for men and women to advance and succeed (choice D).

QUESTION 17

Choice B is the best answer.
In Passage 2, Grimké makes the point that human rights are not fleeting or changeable but things that remain, regardless of the circumstances, because they are tied to humans’ moral nature. She emphasizes that human rights exist even if societal laws attempt to contradict or override them, citing slavery as an example: “These rights may be wrested from the slave, but they cannot be alienated: his title to himself is as perfect now, as is that of Lyman Beecher: it is stamped on his moral being, and is, like it, imperishable” (lines 61-65).

Choices A and D are incorrect because in Passage 2, Grimké makes the point that human rights are inherent and unchanging, not that they are viewed differently in different societies (choice A) or that they have changed and developed over time (choice D). Choice C is incorrect because Grimké doesn’t describe a clash between human rights and moral responsibilities; instead, she says that humans have rights “because they are moral beings” (lines 58-59).

QUESTION 18

Choice B is the best answer. The previous question asks what point Grimké makes about human rights in Passage 2, with the answer being that they exist and have moral authority whether or not they are established by societal law. This is supported in lines 61-65: “These rights may be wrested from the slave, but they cannot be alienated: his title to himself is as perfect now, as is that of Lyman Beecher: it is stamped on his moral being, and is, like it, imperishable.”

Choices A, C, and D are incorrect because the lines cited do not support the answer to the previous question about the point Grimké makes about human rights in Passage 2. Instead, they explain the source of all people’s human rights (choice A), indicate what would happen if rights were determined by gender (choice C), and discuss why gender is irrelevant to rights (choice D).

QUESTION 19

Choice B is the best answer.
In Passage 1, Beecher asserts that men and women naturally have different positions in society: “Heaven has appointed to one sex the superior, and to the other the subordinate station” (lines 1-2). She goes on to argue that a woman should act within her subordinate role to influence men but should not “exert coercive influences” that would put her “out of her appropriate sphere” (lines 44-46). In Passage 2, Grimké takes issue with the idea that men and women have different rights and roles. She asserts that as moral beings all people have the same inherent rights and states that “the mere circumstance of sex does not give to man higher rights and responsibilities, than to woman” (lines 66-68).

Choice A is incorrect because Passage 2 does not discuss the practical difficulties of something that is proposed in Passage 1 but rather argues against the main point of Passage 1. Choice C is incorrect because Passage 2 does not provide historical context for the view expressed in Passage 1; the passages were published at around the same time and both discuss contemporary society. Choice D is incorrect because Passage 2 does not elaborate on implications found in Passage 1 as much as it disputes the ideas explicitly expressed in Passage 1.

QUESTION 20

Choice A is the best answer. While Beecher and Grimké clearly disagree regarding a woman’s role in society, the passages suggest that both authors share the belief that women do have moral duties and responsibilities in society. In Passage 1, Beecher writes that “while woman holds a subordinate relation in society to the other sex, it is not because it was designed that her duties or her influence should be any the less important, or all-pervading” (lines 6-10). She suggests that women do have an obligation to use their influence to bring about beneficial changes in society. In Passage 2, Grimké asserts that all people “are moral beings” (lines 58-59) and that both men and women have “rights and responsibilities” (line 68).
She concludes that “whatever it is morally right for man to do, it is morally right for woman to do” (lines 81-83).

Choice B is incorrect because neither author suggests that when men work to bring about political changes, they often do so out of consideration for others rather than for themselves. Choice C is incorrect because neither passage discusses the value given to women’s ethical obligations, although both authors suggest that women do have ethical and moral obligations. Choice D is incorrect because in Passage 1 Beecher argues that women should avoid direct political activism, cautioning against actions that would put them outside their “appropriate sphere” (line 46).

QUESTION 21

Choice D is the best answer. In lines 65-68 of Passage 2, Grimké writes, “Now if rights are founded in the nature of our moral being, then the mere circumstance of sex does not give to man higher rights and responsibilities, than to woman.” In other words, gender does not make men’s rights and duties superior to women’s. Beecher, on the other hand, begins Passage 1 by stating that “heaven has appointed to one sex the superior, and to the other the subordinate station,” suggesting that men and women have fundamentally different natures. Therefore, Beecher most likely would have disagreed with Grimké’s assertion.

Choices A and B are incorrect because Beecher fundamentally disagrees with Grimké regarding the basic nature and societal roles of men and women, making it very unlikely that she would have viewed Grimké’s statement in lines 65-68 with either sympathy or agreement. Choice C is incorrect because Beecher wouldn’t necessarily have been dismayed by Grimké’s belief as much as she would have simply disagreed with it, and she does not indicate that the role of women in society is more difficult to play than is that of men.

QUESTION 22

Choice A is the best answer.
In line 14, the passage states that industrial agriculture has become “incredibly efficient on a simple land to food basis.” In this context, “simple” suggests something basic or straightforward.

Choices B, C, and D are incorrect because in the context of a land to food dynamic, the word “simple” suggests something basic or straightforward, not something humble (choice B), something without any decoration or ornamentation (choice C), or something that requires little effort (choice D).

QUESTION 23

Choice B is the best answer. The passage clearly states that conventional agriculture is very efficient, especially when compared to organic farming: “organic farming yields 25% fewer crops on average than conventional agriculture” (lines 40-42) and in a study “organic farming delivered a lower yield for every crop type” (lines 51-52). It can therefore be understood from the passage that conventional agriculture does a good job maximizing the output of the land that is farmed.

Choice A is incorrect because the passage states how efficient conventional agriculture is in regard to the amount of food it can produce but does not indicate that it produces a significantly wide variety of fruits and vegetables. Choice C is incorrect because even if the passage does say that each American farmer can produce crops to feed “over 155 people worldwide” (lines 16-17), it never claims that conventional agriculture can satisfactorily feed everyone in the world. Choice D is incorrect because the passage states that conventional agriculture uses a great deal of nitrogen, not that it changes the need for nitrogen in plant growth.

Answer Explanations
Section 1: Reading Test

QUESTION 1

Choice D is the best answer. The passage begins with the main character, Lymie, sitting in a restaurant and reading a history book. The first paragraph describes the book in front of him (“lank pages front
Probability Distributions Multiple Choice Questions (MCQs) — Chapter 7, BBA Business Statistics, Quiz 1

MCQ 1: In the binomial probability distribution, the standard deviation depends on

1. probability of q
2. probability of p
3. trials
4. all of the above

MCQ 2: The formula to calculate the standardized normal random variable is

1. (x − μ) ⁄ σ
2. (x + μ) ⁄ σ
3. (x − σ) ⁄ μ
4. (x + σ) ⁄ μ

MCQ 3: In a random experiment, the observations of a random variable are classified as

1. events
2. compositions
3. trials
4. functions

MCQ 4: In the binomial distribution, the formula for calculating the standard deviation is

1. square root of p
2. square root of pq
3. square root of npq
4. square root of np

MCQ 5: The variance of a random variable x of the gamma distribution can be calculated as

1. Var(x) = (n + 2) ⁄ μ²
2. Var(x) = n ⁄ μ²
3. Var(x) = (n × 2) ⁄ μ²
4. Var(x) = (n − 2) ⁄ μ³
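The binomial formulas quoted in MCQs 2 and 4 can be checked numerically. The short Python sketch below (not part of the original quiz; the parameter values n = 20, p = 0.3 and x = 8 are illustrative choices) compares the closed-form standard deviation √(npq) with the value computed directly from the binomial probability mass function, and then standardizes a value with z = (x − μ)/σ:

```python
from math import comb, sqrt

# Illustrative binomial parameters (not taken from the quiz)
n, p = 20, 0.3
q = 1 - p

# Standard deviation from the closed-form expression in MCQ 4
sd_formula = sqrt(n * p * q)

# Mean and variance computed directly from the probability mass function
pmf = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
sd_direct = sqrt(var)

print(sd_formula, sd_direct)  # both are approximately 2.049

# MCQ 2: standardized normal random variable z = (x - mean) / sd
x = 8
z = (x - mean) / sd_formula
```

Both routes agree because Var = npq is exact for the binomial distribution, not an approximation.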
Effect of Infrared Radiation on the Hydrogen in Thin-Film Double Barriers Based on Silicon-Germanium Melt

Received Date: December 08, 2018; Published Date: December 17, 2018

Possibilities of plasma-chemical deposition of a-Si[1-x]Ge[x]:H (x = 0–1) films, undoped and doped with PH[3] or B[2]H[6], have been analyzed from the viewpoint of their application in p-i-n structures of solar cells. The optical properties are considered, and the amount of hydrogen contained in those films is determined. The film properties are found to depend strongly on the film composition and the hydrogenation level. The number of hydrogen atoms in the films is varied by changing the gas-mixture composition, and IR absorption in a-Si:H and a-Ge:H films is measured. The a-Si:H and a-Si[0.88]Ge[0.12]:H films were used to fabricate a three-layer solar cell with an element area of 1.3 cm² and an efficiency (η) of 9.5%.

Keywords: Effect of infrared on thin films; Amorphous silicon; Solar cells; Efficiency; Optical properties; Oscillator; Absorption coefficient; Effusion of hydrogen; Deposition rate

Introduction

Si films and their alloys are characterized by various structural phases. The most interesting of them are crystals embedded in an amorphous matrix. Such alloys are produced by different methods under different technological regimes. For films of amorphous hydrogenated silicon, a-Si:H, formed by cyclic deposition and annealed in hydrogen plasma, the Staebler-Wronski effect is weakly expressed [1]. The authors of [2] note the absence of the Staebler-Wronski effect in nanostructured a-Si:H films. Crystallization of a-Si:H is carried out by various methods: long annealing in vacuum at 600 °C, rapid heat treatment [3], laser annealing [4], and ion implantation [5].
The charge-carrier mobility, doping efficiency, and optical absorption coefficient in a-Si:H films are higher than in crystalline silicon. a-Si[1-x]Ge[x]:H films are an effective and inexpensive material for making solar cells and other electronic devices [6, 7]. In this regard, obtaining the aforementioned films and changing their conductivity type are topical tasks. Reference [8] shows that as the substrate temperature increases, the nanocrystals grow. It was found that with increasing PH[3] concentration the average grain size (d) decreases, as does the fraction of crystalline particle volume (Vc). When doping with boron, with increasing B[2]H[6] concentration the value of (d) does not change, while Vc is reduced. The photoconductivity and efficiency of a-Si[1-x]Ge[x]:H films are somewhat lower than in a-Si:H [9, 10]. Depending on the technological conditions and parameters, hydrogenated films are deposited in various structural phases: microcrystalline, polycrystalline, nanocrystalline, etc. The energy conversion efficiency of Schottky-barrier cells on a-Si:H films was 5.5%. Attempts have been made to obtain a high-efficiency solar cell (~9.0%) on the basis of a-Si[1-x]Ge[x]:H [11]. Most literature data show that when amorphous silicon and silicon-germanium alloys are used in solar cells with a multilayered or cascade structure, the greatest efficiency is ~8.5% [12]. Based on the above, the purpose of this work is to determine the amount of hydrogen in amorphous films of the solid solution a-Si[1-x]Ge[x]:H (x = 0–1) by an optical method and to manufacture solar cells based on them.

The Experimental Part

Thin films of a-Si[1-x]Ge[x]:H (x = 0–1) were obtained by the plasma-chemical deposition method using gas mixtures of H[2] + SiH[4] and H[2] + GeH[4] in various proportions. Details on obtaining the films are given in [11, 12]. The plasma RF field was created mainly through inductive coupling. Film thickness was 0.1–1.0 μm.
The absorption coefficient (α), refractive index (n), reflection (R), transmission (T), and band-gap width (E0) were measured for each sample, using appropriate models [13, 14]. Optical absorption at room temperature was studied by the method of [13-16] on an X-29 spectrometer.

Result and Discussion

The hydrogen concentration in a-Si[1-x]Ge[x]:H (x = 0–1) films is determined using the method of Brodsky et al. [14-17], where N is Avogadro's number and (ξ) is the integral strength of the hydride band with units cm²/mole, (g/ξ) = 3.5. If the width of the absorption band is denoted Δω and its center frequency ω[0], then when Δω/ω[0] ≤ 0.1, after approximation with a tolerance of ±2%, equation (1) can be written as equation (2). If the pre-integral expression in equation (2) is relabeled A[S], the coefficient A[S] for a-Si:H films in the region of the stretching mode is 1.4·10²⁰ cm⁻². The absorption coefficient (α) for these films (at 2100 cm⁻¹) is 8·10⁻¹–3·10² cm⁻¹, whereupon N[H] = 7·10²¹–2.1·10²² cm⁻³. For a-Ge:H films, A[S] = 1.7·10²⁰ cm⁻². In a-Si:H and a-Ge:H films, absorption at the frequencies 2000 and 1980 cm⁻¹ is caused by stretching-type vibrations, and absorption at the frequencies 630 and 570 cm⁻¹ by bending-type vibrations (Fig. 1a and 1c). Thus, for a-Si[1-x]Ge[x]:H significant overlap takes place, which is observed in the IR absorption spectrum both for the stretching bands of Ge:H (1980 cm⁻¹) and Si:H (2000 cm⁻¹) and for the bending bands around the frequency 600 cm⁻¹ (Fig. 1b) [5, 12]. Equation (3) thus characterizes the stretching-mode bonds in a-Si:H, a-Ge:H, and a-Si[1-x]Ge[x]:H films. We assess the relative hydrogen bonding in hydrogenated amorphous a-Si[1-x]Ge[x]:H, where N[Si-H] and N[Ge-H] are the hydrogen concentrations in a-Si:H and a-Ge:H (in cm⁻³). Equation (3) can be rewritten for the wagging mode of a-Si:H and a-Ge:H films.
Thus the values of N[Si-H] and N[Ge-H] are determined from equation (3) for the rocking mode, where A[w] = …·10¹⁹ cm⁻² and A[w] = 1.1·10¹⁹ cm⁻³, respectively. Knowing N[Ge-H] (where, for a-Ge:H films, A[w] = 1.6·10¹⁹ cm⁻² and α = 5·10¹ cm⁻¹), the hydrogen concentration N[H] in the a-Si[1-x]Ge[x]:H film is calculated. From these data it is possible to evaluate the oscillator effect in the a-Si[1-x]Ge[x]:H film through the ratio Q = J[S]/J[W]. Table 1 shows the characteristic parameters of amorphous a-Si[0.60]Ge[0.40]:H films. Figure 2 shows the distribution of hydrogen over the film thickness d, determined by (1) the proton-recoil method and (2) the IR absorption spectrum method. As can be seen, the hydrogen distribution is sufficiently uniform. Unlike other methods, the proton-recoil method bombards the sample with a beam of protons. When studying a-Si:H and its alloys, this allows obtaining the hydrogen distribution over a thickness of ~40–100 Å. The calibration accuracy of the method is limited only by the largest hydrogen concentration (N[H]); the values found by IR spectroscopy match to within 2–3%. This method provides information about the total content of hydrogen, both bonded and not bonded to Si. As for the precise determination of the hydrogen content in the film volume, the IR absorption band at 630 cm⁻¹ was analyzed. To clarify the amount of hydrogen embedded in the amorphous matrix, the structural parameter (R) is determined as follows:

Table 1: Characteristic parameters of amorphous a-Si[0.60]Ge[0.40]:H films.

where J[2000] and J[2100] are the intensities of the absorption bands at 2000 and 2100 cm⁻¹. Using equation (3), the hydrogen concentration is determined from this ratio. An increase of R occurs simultaneously with a decrease in the hydrogen concentration.
The highest value of R (up to 0.8) was observed for a-Si:H films deposited by the plasma-chemical deposition method at T[S] = 300 °C and a discharge power of W = 100 W. For the films studied in the present work, at T[S] = 200–300 °C the microstructure parameter varied in the range R = 0.1–0.8. After annealing for 30 minutes in vacuum, the value of R reaches 1.0. Accordingly, in this case C[H] is 24.5–14.0 at.%. From the number of Si-H bonds one can define the specific concentration of hydrogen-containing bonds, [Si-H]/[Si]. The specific concentration of hydrogen-containing silicon bonds reaches a maximum value of 0.58 [10-12]. The hydrogen concentration (N[H]) determined by the effusion method correlates with the hydrogen concentration calculated using the integrated intensity I[W] of the 600 cm⁻¹ rocking mode (Fig. 3). The number of hydrogen atoms in at.% (C[H]) found by the effusion method is defined for the given bands and compared with the number of hydrogen atoms N[A] (Avogadro's number).

- film obtained at a hydrogen pressure of 0.6 mTorr
- film obtained at a hydrogen pressure of 1.2 mTorr
- film obtained at a hydrogen pressure of 1.8 mTorr
- film obtained at a hydrogen pressure of 2.4 mTorr
- film obtained at a hydrogen pressure of 3.0 mTorr

Therefore, for a-Si[0.60]Ge[0.40]:H, as the partial pressure rises from 0.6 to 3.0 mTorr the oscillator strength increases [5, 12]. This is due to the hydrogen-containing Ge:H and Si:H bonds at the indicated pressures. Heating the sample in a closed volume causes the material to decompose almost completely into its constituent elements in the crystallization temperature range 350–650 °C, which causes hydrogen effusion and leads to increased pressure. The pressure was measured with a capacitive pressure gauge to a precision of 0.1%. To determine the effusion of other gases, a quantitative mass-spectrometric analysis of the gas composition should be undertaken. Note that the hydrogen inside the film is characterized in several ways: at.%, N[H], P[H2], and P.
To determine these parameters, one must record the IR absorption spectra at the vibration frequencies associated with hydrogen absorption.

Optical properties of thin films

The dependence of (αhν)^(1/2) on hν was used to determine the width of the forbidden zone [14, 16] for each film. In all the studied films, the optical absorption edge is described by relation (7), where α = 5·10⁴ cm⁻¹, E0 is the optical band-gap width for each film, and B is the coefficient of proportionality. The value of B is determined by extrapolation of the dependence of (αhν)^(1/2) on hν for each sample. The quadratic dependence (7), derived in the theoretical model [13, 14], describes the density of states in the mobility gap. For a-Si[1-x]Ge[x]:H films with x = 0–1 at.% Ge, the value of B changes from 527 to 343 eV⁻¹cm^(1/2), with E0 = 1.86 eV and E0 = 1.14 eV, respectively. This means that with increasing germanium content, E0 decreases. The carrier mobility and photoconductivity in a-Si[1-x]Ge[x]:H films also diminish when the germanium content exceeds 40 at.% [11, 12]. We use the known relation for the absorption coefficient α, determined from equation (8) [14-17]; here it is assumed that, for the weakly absorbing spectral region, k[0]² ≤ (n − 1.5), where k[0] describes the light attenuation in the substrate. Note that the film thickness d is defined in this case by the relevant extreme interference fringes of the transmission or reflection spectrum. From equation (8), the absorption coefficient (α) is defined by equation (11), which is the working formula for determining the optical absorption coefficient of films in the weakly absorbing spectral region. In the strongly absorbing spectral region, R[3] = 0, R[2] = R[1] = R, n(λ) = const, and n = n[1] = 1.5 for glass substrates, while n = n[1] = 3.42 for silicon substrates. Then equation (8) can be rewritten as equation (12), which can be used to determine the optical absorption coefficient in the strongly absorbing spectral region.
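The extrapolation of (αhν)^(1/2) versus hν described in the text can be sketched numerically. The Python example below is a minimal illustration, not the authors' procedure: it generates synthetic absorption-edge data from assumed values of E0 and B (the a-Si:H values quoted above), then recovers both by a least-squares straight-line fit, which is exactly what the graphical extrapolation does:

```python
# Tauc-style extrapolation: (alpha*h*nu)^(1/2) = B*(h*nu - E0).
# Synthetic data generated from assumed E0 and B, used only to
# illustrate the straight-line extrapolation step.
E0_true, B_true = 1.86, 527.0   # eV and eV^-1 cm^(1/2), values quoted for a-Si:H

hv = [E0_true + 0.05 * i for i in range(1, 11)]   # photon energies above the gap
y = [B_true * (e - E0_true) for e in hv]          # (alpha*h*nu)^(1/2)

# Least-squares straight line y = m*hv + c; then B = m and E0 = -c/m
n = len(hv)
mx, my = sum(hv) / n, sum(y) / n
m = sum((a - mx) * (b - my) for a, b in zip(hv, y)) / sum((a - mx) ** 2 for a in hv)
c = my - m * mx
B_fit, E0_fit = m, -c / m
print(E0_fit, B_fit)  # recovers 1.86 and 527 (up to floating-point error)
```

With real measured spectra, the fit would be restricted to the linear portion of the (αhν)^(1/2) curve before extrapolating to the hν axis.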
Accordingly, the refractive index is defined using relation (13), or by formula (14), where λ[m] and λ[m-1] are the wavelengths corresponding to neighbouring extrema of the transmission or reflection spectra (with the corresponding frequencies), and c is the speed of light. The refractive index may also be defined by formula (15) [15], where T[max] and T[min] are functions of the wavelength λ and n[1] is the refractive index of the substrate, defined by expression (16), where T[1] is the transmission of the substrate, which is almost constant in the region of transparency. For glass substrates T[1] = 0.91, so n[1] = 1.554. Accordingly, the film thickness is calculated by formula (17), where λ[1] and λ[2] are the wavelengths corresponding to neighbouring extreme points on the transmission spectrum, A = 1 for two extrema of the same type (max-max, min-min), and A = 0.5 for two adjacent extrema of the opposite type (max-min, min-max).

Creation of solar cells

Studies show that a-Si[1-x]Ge[x]:H films (x ≥ 0.20) can be used as a quality material in semiconductor electronics [12]. For this purpose, we have developed a three-layer element based on two cascade-type elements. The three-layer element is made from a two-layer element consisting of two a-Si:H elements with p-i-n junctions plus a p-i-n element whose i-layer is an a-Si[0.88]Ge[0.12]:H film. The thicknesses of the i-layers of the top two junctions were selected so that the condition of equal short-circuit current with the lower element was respected. The short-circuit current was about half the value for a single element with a p-i-n junction. The open-circuit voltage and short-circuit current decrease with an increasing number of superimposed layers. In this way multiple layers can be built up (creating an n-layer element). Note that for each element an i-layer 0.5 μm thick was produced. The area of each element was 1.3 cm². When fabricating three-layer solar cells, uniform thickness and area must be maintained for each element.
The substrate material was steel, and ZrO[2] with 80% light transmission was used as the cover. The ZrO[2] cover simultaneously plays the role of the upper (front) contact. The thicknesses of the p- and n-type a-Si:H layers were ~300 and 400 Å, respectively. For doping the films, the amounts of B[2]H[6] and PH[3] in the gas mixtures were varied within 0.1 and 0.5%, respectively. After the deposition of the amorphous semiconducting layers, a ZrO[2] film ~500 Å thick was deposited by evaporation. Ni/Ag was used for the upper contacts and the stainless-steel substrate for the lower. The elements were illuminated by a sunlight source providing AM-1 (100 mW/cm²). The short-circuit current for the three-layer elements was 8.5 mA/cm², the open-circuit voltage ~2.25 V, the fill factor ~0.50, and the efficiency ~9.5% (Fig. 4). The efficiency for the single-layer and double-layer elements is 7% and 8.9%, respectively. The carrier collection efficiency at different wavelengths is defined by formula (18), where J[ph](λ) is the photocurrent density (10 mA/cm²), N(λ) the number of photons incident per unit surface per second, and e the free-carrier charge. For elements with these structures, the short-circuit current is calculated under the assumption of complete depletion of all layers in the absence of forward bias. Thus, the short-circuit currents for the first, second, and third elements are given by expressions in which W[i], W[n], and W[p] are the field distributions inside the i, n, and p layers, respectively, N[ph] is the number of photons incident on the surface of the elements, R is the reflectivity of the film, and α is the absorption coefficient of each layer. The open-circuit voltage for cascade elements with two and three junctions is presented correspondingly. The fill factor for all elements is set at 0.5. The short-circuit current of a cascade element with two junctions, I[sc](II), is set by the lower of the values I[sc1] and I[sc2].
The short-circuit current of a cascade element with three junctions is determined by the smallest of I[sc1], I[sc2], or I[sc3]. The efficiency of multi-junction cascade elements is given by an expression in which i = 2 and 3 gives the number of layers, P[in] is the power of the light incident on the surface of the elements (its value is 100 mW/cm²), and E[01], E[02], E[03] are, accordingly, the band-gap widths of each i-layer. To raise η of a solar cell, one should increase the number of layers, reduce the element area, choose the metal wiring so as to reduce the resistance of the metal contacts, etc. Measurement of the spectral sensitivity is usually performed under constant illumination with white light whose intensity corresponds to normal working conditions (AM-1, ~100 mW/cm²), while modulated, calibrated monochromatic radiation falls on the element at the same time. The photocurrent and its dependence on the wavelength of the monochromatic radiation are measured in short-circuit mode using a lock-in amplifier. To determine the collection efficiency, knowledge of the electric field applied to the element is important. It has been noticed that, depending on the device configuration, the collection efficiency is shifted from red light toward the blue part of the spectrum. It is known that the energy and momentum of a photon of an electromagnetic wave with a given frequency and wavelength in vacuum are given by equation (22), where h is Planck's constant. At small frequencies ν the wave properties play the predominant role; at large ν, the particle properties of light. If P* is the electromagnetic radiation energy incident normally on a surface of unit area in 1 s, c is the speed of propagation of light waves in vacuum, and R is the reflectivity of the surface, then the light pressure p on this surface is as follows: the light pressure (P) is defined by equation (20) and takes the form in which N is the number of incident photons, W the photon energy falling at all wavelengths on the body surface, and P* the momentum of the light falling on the surface in 1 s.
Then the pressure of the incoming light is defined in a form in which F is the force of light pressure (F = 10⁻⁸ N) on the surface (S = 1 cm²), λ the incident wavelength, and t the time of incidence of the light (1 s). The energy P* corresponds to Nhν photons, with the momentum of each photon equal to hν/c. With radiation reflection R, the number of incident photons is 10¹⁷–10¹⁸ m⁻²s⁻¹ for λ = 300–900 nm (Fig. 4).

Conclusion

Thin films of a-Si[1-x]Ge[x]:H (x = 0–1) were obtained by the plasma-chemical deposition method using gas mixtures of H[2] + SiH[4] and H[2] + GeH[4] in various proportions. It was determined that the highest R value (up to 0.8) is observed for a-Si:H films deposited by this method at a temperature of 300 °C and a discharge power of W = 100 W. Data on the composition ratio of a-Si[1-x]Ge[x]:H were obtained. Based on a-Si:H and a-Si[0.88]Ge[0.12]:H films, solar cells with single-layer, double-layer, and three-layer structures were manufactured, and their characteristics were measured. It was found that for single-layer, double-layer, and three-layer structures with an element area of 1.3 cm², η is 7%, 8.9%, and 9.5%, respectively. For the three-layer element the collection-efficiency maxima shift toward longer wavelengths. Under illumination in the wavelength interval 0.3–1.1 μm for 120 hours, no degradation of these structures was observed. It is shown that multilayer solar-cell structures based on a-Si[0.88]Ge[0.12]:H and a-Si:H are effective, and improving their efficiency is a topical task.

Conflict of Interest

No conflict of interest.
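The three-layer cell figures quoted in the paper (short-circuit current 8.5 mA/cm², open-circuit voltage ~2.25 V, fill factor ~0.50, AM-1 illumination of 100 mW/cm², efficiency ~9.5%) are mutually consistent, which can be checked with the standard photovoltaic relation η = Jsc·Voc·FF / Pin (a textbook formula, not one the paper states explicitly):

```python
# Consistency check of the reported three-layer cell parameters
J_sc = 8.5    # short-circuit current density, mA/cm^2
V_oc = 2.25   # open-circuit voltage, V
FF = 0.50     # fill factor
P_in = 100.0  # incident power density (AM-1), mW/cm^2

# mA/cm^2 * V = mW/cm^2, so the units cancel directly
eta = J_sc * V_oc * FF / P_in * 100  # efficiency in percent
print(round(eta, 2))  # 9.56, consistent with the reported ~9.5%
```

The small difference from 9.5% is within the rounding of the quoted fill factor and voltage.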
Equilibrium Solutions

Section 2.8 : Equilibrium Solutions

In the previous section we modeled a population based on the assumption that the growth rate would be a constant. However, in reality this doesn’t make much sense. Clearly a population cannot be allowed to grow forever at the same rate. The growth rate of a population needs to depend on the population itself. Once a population reaches a certain point the growth rate will start to reduce, often drastically. A much more realistic model of a population growth is given by the logistic growth equation. Here is the logistic growth equation.

\[P' = r\left( {1 - \frac{P}{K}} \right)P\]

In the logistic growth equation \(r\) is the intrinsic growth rate and is the same \(r\) as in the last section. In other words, it is the growth rate that will occur in the absence of any limiting factors. \(K\) is called either the saturation level or the carrying capacity. Now, we claimed that this was a more realistic model for a population. Let’s see if that in fact is correct. To allow us to sketch a direction field let’s pick a couple of numbers for \(r\) and \(K\). We’ll use \(r = \frac{1}{2}\) and \(K = 10\). For these values the logistics equation is.

\[P' = \frac{1}{2}\left( {1 - \frac{P}{{10}}} \right)P\]

If you need a refresher on sketching direction fields go back and take a look at that section. First notice that the derivative will be zero at \(P = 0\) and \(P = 10\). Also notice that these are in fact solutions to the differential equation.
These two values are called equilibrium solutions since they are constant solutions to the differential equation. We’ll leave the rest of the details on sketching the direction field to you. Here is the direction field as well as a couple of solutions sketched in as well. Note that we included a small portion of negative \(P\)’s in here even though they really don’t make any sense for a population problem. The reason for this will be apparent down the road. Also, notice that a population of say 8 doesn’t make all that much sense so let’s assume that population is in thousands or millions so that 8 actually represents 8,000 or 8,000,000 individuals in a population. Notice that if we start with a population of zero, there is no growth and the population stays at zero. So, the logistic equation will correctly figure that out. Next, notice that if we start with a population in the range \(0 < P\left( 0 \right) < 10\) then the population will grow, but start to level off once we get close to a population of 10. If we start with a population of 10, the population will stay at 10. Finally, if we start with a population that is greater than 10, then the population will actually die off until we start nearing a population of 10, at which point the population decline will start to slow down. Now, from a realistic standpoint this should make some sense. Populations can’t just grow forever without bound. Eventually the population will reach such a size that the resources of an area are no longer able to sustain the population and the population growth will start to slow as it comes closer to this threshold. Also, if you start off with a population greater than what an area can sustain there will actually be a die off until we get near to this threshold. In this case that threshold appears to be 10, which is also the value of \(K\) for our problem. That should explain the name that we gave \(K\) initially.
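The long-term behavior just described is easy to confirm numerically. Here is a small sketch (my own, not part of the original notes) that integrates the logistic equation with Euler's method from several starting populations:

```python
# Euler integration of the logistic equation P' = r(1 - P/K)P
# with r = 1/2, K = 10, from several starting populations.
def logistic_rhs(P, r=0.5, K=10.0):
    return r * (1 - P / K) * P

def euler(P0, dt=0.01, steps=5000):
    P = P0
    for _ in range(steps):
        P += dt * logistic_rhs(P)
    return P

for P0 in (0.0, 2.0, 10.0, 15.0):
    print(P0, "->", round(euler(P0), 4))
# P = 0 and P = 10 stay put (they are equilibrium solutions);
# populations started at 2 or 15 both settle toward K = 10.
```

The step size and horizon here are arbitrary choices for illustration; any reasonable values show the same qualitative picture.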
The carrying capacity or saturation level of an area is the maximum sustainable population for that area. So, the logistics equation, while still quite simplistic, does a much better job of modeling what will happen to a population. Now, let’s move on to the point of this section. The logistics equation is an example of an autonomous differential equation. Autonomous differential equations are differential equations that are of the form. \[\frac{{dy}}{{dt}} = f\left( y \right)\] The only place that the independent variable, \(t\) in this case, appears is in the derivative. Notice that if \(f\left( {{y_0}} \right) = 0\) for some value \(y = {y_0}\) then this will also be a solution to the differential equation. These values are called equilibrium solutions or equilibrium points. What we would like to do is classify these solutions. By classify we mean the following. If solutions start “near” an equilibrium solution will they move away from the equilibrium solution or towards the equilibrium solution? Upon classifying the equilibrium solutions we can then know what all the other solutions to the differential equation will do in the long term simply by looking at which equilibrium solutions they start near. So, just what do we mean by “near”? Go back to our logistics equation. \[P' = \frac{1}{2}\left( {1 - \frac{P}{{10}}} \right)P\] As we pointed out there are two equilibrium solutions to this equation \(P = 0\) and \(P = 10\). If we ignore the fact that we’re dealing with population these points break up the \(P\) number line into three distinct regions. \[ - \infty < P < 0\hspace{0.25in}\hspace{0.25in}\hspace{0.25in}\,\,\,\,0 < P < 10\hspace{0.25in}\hspace{0.25in}\hspace{0.25in}\,10 < P < \infty \] We will say that a solution starts “near” an equilibrium solution if it starts in a region that is on either side of that equilibrium solution. 
So, solutions that start “near” the equilibrium solution \(P = 10\) will start in either \[0 < P < 10\hspace{0.25in}{\mbox{OR}}\hspace{0.25in}\,10 < P < \infty \] and solutions that start “near” \(P = 0\) will start in either \[ - \infty < P < 0\hspace{0.25in}\,\,\,{\mbox{OR}}\hspace{0.25in}\,\,\,\,\,\,0 < P < 10\] For regions that lie between two equilibrium solutions we can think of any solutions starting in that region as starting “near” either of the two equilibrium solutions as we need to. Now, solutions that start “near” \(P = 0\) all move away from the solution as \(t\) increases. Note that moving away does not necessarily mean that they grow without bound as they move away. It only means that they move away. Solutions that start out greater than \(P = 0\) move away but do stay bounded as \(t\) grows. In fact, they move in towards \(P = 10\). Equilibrium solutions in which solutions that start “near” them move away from the equilibrium solution are called unstable equilibrium points or unstable equilibrium solutions. So, for our logistics equation, \(P = 0\) is an unstable equilibrium solution. Next, solutions that start “near” \(P = 10\) all move in toward \(P = 10\) as \(t\) increases. Equilibrium solutions in which solutions that start “near” them move toward the equilibrium solution are called asymptotically stable equilibrium points or asymptotically stable equilibrium solutions. So, \(P = 10\) is an asymptotically stable equilibrium solution. There is one more classification, but I’ll wait until we get an example in which this occurs to introduce it. So, let’s take a look at a couple of examples. Example 1 Find and classify all the equilibrium solutions to the following differential equation. \[y' = {y^2} - y - 6\] Show Solution First, find the equilibrium solutions. This is generally easy enough to do. \[{y^2} - y - 6 = \left( {y - 3} \right)\left( {y + 2} \right) = 0\] So, it looks like we’ve got two equilibrium solutions. 
Both \(y = -2\) and \(y = 3\) are equilibrium solutions. Below is the sketch of some integral curves for this differential equation. A sketch of the integral curves or direction fields can simplify the process of classifying the equilibrium solutions. From this sketch it appears that solutions that start “near” \(y = -2\) all move towards it as \(t\) increases and so \(y = -2\) is an asymptotically stable equilibrium solution and solutions that start “near” \(y = 3\) all move away from it as \(t\) increases and so \(y = 3\) is an unstable equilibrium solution. This next example will introduce the third classification that we can give to equilibrium solutions. Example 2 Find and classify the equilibrium solutions of the following differential equation. \[y' = \left( {{y^2} - 4} \right){\left( {y + 1} \right)^2}\] Show Solution The equilibrium solutions to this differential equation are \(y = -2\), \(y = 2\), and \(y = -1\). Below is the sketch of the integral curves. From this it is clear (hopefully) that \(y = 2\) is an unstable equilibrium solution and \(y = -2\) is an asymptotically stable equilibrium solution. However, \(y = -1\) behaves differently from either of these two. Solutions that start above it move towards \(y = -1\) while solutions that start below \(y = -1\) move away as \(t\) increases. In cases where solutions on one side of an equilibrium solution move towards the equilibrium solution and on the other side of the equilibrium solution move away from it we call the equilibrium solution semi-stable. So, \(y = -1\) is a semi-stable equilibrium solution.
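The classification can also be done without a sketch: sample the sign of \(f(y)\) just to either side of each equilibrium point, since \(f(y) > 0\) pushes solutions up and \(f(y) < 0\) pushes them down. A small sketch of the idea (mine, not from the original notes), applied to Example 2:

```python
# Classify an equilibrium y0 of y' = f(y) from the sign of f just
# to either side: f > 0 pushes solutions up, f < 0 pushes them down.
def classify(f, y0, eps=1e-4):
    left, right = f(y0 - eps), f(y0 + eps)
    if left > 0 and right < 0:
        return "asymptotically stable"
    if left < 0 and right > 0:
        return "unstable"
    return "semi-stable"

f = lambda y: (y**2 - 4) * (y + 1)**2  # Example 2
for y0 in (-2.0, -1.0, 2.0):
    print(y0, classify(f, y0))
# -2.0 asymptotically stable, -1.0 semi-stable, 2.0 unstable
```

This sign test assumes the sampled points are closer to the equilibrium than to any neighboring one, and it reports "semi-stable" for any remaining sign pattern.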
We’re living inside a huge black hole! According to Niayesh Afshordi, Perimeter Institute Associate Faculty member, we are all living in the event horizon of a huge higher dimensional black hole. In 2000 Gia Dvali, Gregory Gabadadze, and Massimo Porrati published a paper, “4D Gravity on a Brane in 5D Minkowski Space”, in which they wrote: The observed weakness of gravity may be due to the fact that we live on a brane embedded in space with large extra dimensions. The universe appears to us to exist in three dimensions of space. This is a three-dimensional (3D) universe. Imagine that our 3D universe is a subuniverse or brane embedded in a bulk universe that has four spatial dimensions (4D). All known forms of matter and energy are limited to our brane and cannot move to the bulk. It is like flatland, where two-dimensional figures live in a two-dimensional world. Only gravity can propagate in the bulk universe, which in the brane model of Dvali, Gabadadze and Porrati (DGP) is a 5D Minkowski space. In 2013 three Perimeter Institute researchers, Razieh Pourhasan, Niayesh Afshordi and Robert B. Mann, carried out calculations and argued that it is possible to track the beginning of the universe back to an era before the Big Bang, and that we can even avoid the Big Bang singularity. They published their findings in a paper under the title, “Out of the white hole: a holographic origin for the Big Bang”, in which they have written: … our universe emerges from the collapse of a 5D “star” into a black hole, reminiscent of an astrophysical core-collapse supernova. In this scenario, there is no big bang singularity in our causal past, and the only singularity is shielded by a black hole horizon.
Thus a 5D black hole (in four dimensions of space and one dimension of time) could have a 4D event horizon (in three dimensions of space and one dimension of time), which could spawn a whole new universe as it forms; that is to say, our entire universe came into being during a stellar implosion that created a brane around a black hole. This suggestion avoids the Big Bang singularity. In the standard story, the Big Bang began with a singularity where the laws of physics break down. Instead, the scholars postulate that the universe began when a star in a 5D universe collapsed to form a black hole. Our universe would be protected from the singularity at the heart of this black hole by the 4D event horizon. The scholars first define the brane subuniverse and the bulk superuniverse: … one way to describe our four-dimensional universe is through embedding it in a higher dimensional spacetime — with at least one more dimension — and investigate its gravitational and/or cosmological properties. This is known as the “brane world” scenario, where the brane refers to our 4D universe embedded in a bulk space-time with 5 or more dimensions, where only gravitational forces dare to venture. Well-known (and well-studied) examples of such scenarios are the Randall-Sundrum (RS) model [bulk universe in a 5D anti-de Sitter space] where 4D gravity is recovered through a compact volume bulk, or the Dvali-Gabadadze-Porrati (DGP) construction… And then they add the requirement of the holographic cosmology: Here we study the DGP model around a 5D black hole… We find that viable solutions are indeed possible, leading us to propose a holographic description for the Big Bang, that avoids the Big Bang singularity. … We then give our proposal for a holographic Big Bang as emergence from a collapsing 5D black hole.
The event horizon of a 4D black hole (in four dimensions of space) would be a 3D hypersphere (in three dimensions of space) and it indicates that: the radius of our 4 dimensional [brane] universe [four-dimensions of space-time] coincides with the black hole horizon in the 5 dimensional bulk [five-dimensions of space-time] and the radius of our holographic universe < the horizon radius. When the above team of scholars modelled the death of a 5D star (in five-dimensions of space-time), they found that the ejected material (of the collapsing star) would form a 4D brane (in four dimensions of space-time) surrounding a 4D event horizon (in four dimensions of space-time), and slowly expand. The authors postulate that the 4D Universe we live in might be just such a brane — and that we detect the brane’s growth as cosmic expansion. They therefore explain: For [… ejected material] chosen to be above […the horizon] the radius of our holographic universe is larger than the horizon radius, meaning that our present cosmos lies outside the horizon of the black hole in the bulk, i.e. [radius of our holographic universe > the horizon radius]. Let us assume that the universe today has its radius larger than the horizon in the bulk black hole. Moving backwards to early times [… back to Big Bang Nucleosynthesis, BBN], as the radius of the universe… decreases, it may or may not cross the […event horizon]. Indeed, crossing the […event horizon] means that at some early time the radius of the universe was smaller than the horizon radius. Since nothing can escape the horizon of a black hole, one would exclude […the ejected material] for which [… its radius of holographic universe…] at some [later times] cross the [event horizon]. Consequently, one may interpret the crossing [radius of our holographic universe = the horizon radius] before BBN as the emergence of the holographic universe out of a “collapsing star”: this scenario replaces the Big Bang singularity.
Surface vs Volume Formats in Tools

The long-established default in the modeling space has been the transition from pushing pixels to manipulating points in space. Originally this meant a lightpen scanning physical models, or points hand-written in code, gathered into point clouds. Over time, the ambiguity of points in space led to the dominance of the triangle, or surface, representation. Today, we explore whether this is still the best format for our tools to work in. What are the advantages and challenges of moving to volumetric tools?

Note: This is a large topic so there will be a second part next week

Key Takeaways

• Triangles have superior mathematical properties for rendering efficiency.
• Volumes are better sources of truth and easier to sample.
• All volumes have a surface, but not all surfaces have a valid volume.
• GPU advancements now enable direct work with volumetric data in creative workflows.
• Future 3D modeling likely shifts to volume-first tools, with triangles remaining in final rendering.
• Hybrid approaches will bridge the gap between volumetric and surface representations.

The Fundamentals: Surface vs. Volume Representations

Comparing Cubes and Spheres: A Tale of Two Geometries

Before we delve into why triangles have become the dominant format, we need to explore the more fundamental problem space of surface vs volume representation. In the ideal case, a cube can easily be defined with 12 triangles with no loss of data. However, in the case of a sphere using purely planar triangles, it is effectively impossible to avoid data loss, so we always have some degree of loss or must use more implicit surface representations. The issue is that volumes scale cubically in the worst case, while surfaces are commonly a squared progression, though a worst-case surface is unbounded. In practical terms this is avoidable for surfaces; in general, volumes are the more predictable, though heavier-weight, data format.
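The cubic-versus-squared claim can be made concrete with a toy count. A sketch of mine (an illustrative proxy, not from the article), comparing dense voxel occupancy of a cube against a triangulation of its surface at resolution n:

```python
# Toy count: dense voxel occupancy of an n x n x n cube versus a
# triangulation of its surface (2 triangles per face cell).
def voxel_count(n):
    return n ** 3

def surface_triangles(n):
    return 6 * n * n * 2  # 6 faces, n*n cells each, 2 tris per cell

for n in (8, 64, 512):
    print(n, voxel_count(n), surface_triangles(n))
# The volume/surface ratio grows linearly with n, so surface data
# stays comparatively light as resolution increases.
```

The exact triangle count depends on the meshing scheme; what matters is the n^3 versus n^2 growth.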
The Spectrum of 3D Data Formats

As you can see, the space of surfaces is DOMINATED by triangles, but it's important to realize the breadth of the ecosystem out there. Here is a brief overview of most of the formats that come to mind:

The Reign of Triangles: Understanding the Status Quo

Mathematical Superiority of Triangles

The mathematical superiority of triangles is hard to argue against:

• Given any three points (not collinear, i.e. not in a line) you get a valid triangle
• Barycentric coordinates for interpolation are easy and fast
• Given a fixed 2D perspective it is easy to sort and organize
• Line-to-triangle intersection is fast, solid and predictable

The first two are strong arguments which always hold, but interestingly, the third point is only relevant when moving data from 3D to a framebuffer. The last one has wide application to creative tooling as we often want to project to or select a surface.

Historical Context: The Triangle Limit

Another significant reason for the dominance of triangles, which we touched on in the intro, is the data compression factor. Any artist of a certain age will be strongly aware of the triangle limit. Working with any abstraction away from the real final data loses artistic control, which was very obvious when our limits were so low. It is my firm belief that when talking about graphics engines and final render pipelines, triangles will remain a dominant format long into the future. Even with the increased usage of ray tracing, any format will always need a good strong option for exporting to triangles.

The Volume Advantage: Challenging the Surface Paradigm

The False Equivalence: Surfaces vs. Volumes

For any volume there exists a valid surface, but not every surface represents a valid volume.

This is the key argument for why our toolchains should maintain volumetrics for as long as possible.
Even real-time graphics engines often need volumetrics for lighting, physics, or other systems.

Conversion Challenges: From Surface to Volume and Back

Due to the dominance of surface representations in our tool chain, the methods for generating volumes from surfaces are very mature and well-established. However, they often require assumptions, or are fragile to non-watertight meshes and other open cases which break the conversion. The algorithms are often very expensive for high-quality volumes. The conversion of volumes to surfaces is less well represented. The most well-understood method, marching cubes, was locked behind a software patent for a key period of graphics development, stunting the growth of these methods. It is no longer patented, and superior methods like dual contouring now exist.

The Complexity Spectrum: From Explicit to Implicit Representations

Explicit Representations: Triangles and Voxels

Triangles and dense voxel grids are at the most explicit end of the spectrum. There is a one-to-one mapping of data which is very predictable to process. Though as covered in earlier articles, our bottleneck on modern hardware is typically related to I/O. So even when a format takes a little more time to process, it is worth the tradeoff in many cases.

Complex Explicit Representations: Textures and UV Mapping

This moves into more complex formats, with things like textured surfaces using displacement maps and other UV-mapped 2D data. This is a complex explicit representation but is still explicit; it requires even more data lookup, as well as some calculations. UV mapping also adds an additional fragile quality beyond the scope of this article to discuss.

Implicit Representations: NURBS and Signed Distance Fields

Another approach, which requires calculation rather than additional data lookup, covers methods like NURBS and 2D signed distance fields. NURBS (Non-Uniform Rational B-Splines) are mathematical surfaces with a high degree of precision and flexibility.
They are defined by control points, weights, and knot vectors, allowing for smooth, easily manipulable surfaces. Car manufacturers and people looking for precision surfaces really like these models, but they tend to be very computationally expensive.

Game Engine Approaches: BSP Trees and CSG

Game engines have traditionally preferred BSP trees and CSG, typically building BSP from CSG. Constructive Solid Geometry (CSG) combines simple shapes to create complex 3D models, while Binary Space Partitioning (BSP) trees efficiently subdivide space for rendering and collision detection. In Quake, CSG was used to design levels, and BSP trees generated from these designs enabled real-time rendering of complex 3D environments on mid-1990s hardware by quickly determining visible polygons from any viewpoint.

Advanced Volumetric Representations: Sparse Voxel Grids

While CSG works well for coarse level design, it does not scale well to artistic shape and form. Another approach, taken by photogrammetry and simulation formats like OpenVDB, is sparse voxel grids. This represents volumetric data efficiently by storing only the relevant, non-empty voxels in a hierarchical data structure. Sparse voxel grids or level sets divide space into a grid but only allocate memory and compute resources for areas containing actual data or near the surface of objects. This allows for highly detailed and complex shapes to be represented without the memory overhead of storing empty space, making it possible to handle much higher resolution volumes than traditional dense grids. Adaptive grids are also really interesting: by storing data at each graph point you are able to reduce the node count, and you can adaptively sample the grid for the desired level of detail. This maps well to certain GPU texture optimisation modes. Typically a blend of approaches is used, with a low-level lookup versus a high-level grid, and finally a scene tree for the highest level of sparse data.
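The implicit end of this spectrum is compact enough to sketch directly. A minimal illustration of mine (not code from the article): signed distance functions, with CSG-style booleans reduced to min/max over distance values:

```python
# Signed distance functions: negative inside, positive outside.
# CSG booleans over SDFs reduce to min/max of the distance values.
import math

def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.sqrt((x-cx)**2 + (y-cy)**2 + (z-cz)**2) - r

def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def subtract(a, b):  # a with b carved out
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

shape = subtract(sphere(0, 0, 0, 1.0), sphere(0.8, 0, 0, 0.5))
print(shape(0, 0, 0) < 0)    # True: inside the remaining solid
print(shape(0.8, 0, 0) < 0)  # False: inside the carved-out region
```

Evaluating such a function anywhere in space is all a renderer or sculpting tool needs; no vertex data is stored at all.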
Beyond Binary: Occupancy and Density in 3D Representation

Occupancy: The Traditional Approach

The traditional triangle surface or voxel grid works on the concept of one-bit occupancy. There is either a surface or volume there, or there is not.

Density: Adding a New Dimension to 3D Data

Density doesn't really exist in most surface representations, though some volumetric capture data from medical imaging and volume simulation software creates it. Density is great for VFX use cases like clouds and fire simulation. It also has pretty awesome physics and sculpting properties. For physics simulation, light transport, and even gameplay, this is an interesting property. In the case of tools, you can squish and pull while maintaining volume, like real materials would. This dynamic manipulation property is one I really wanted to explore in the future, though I am now unlikely to dedicate the time.

Material Properties: Transparency and Surface Interactions

There is a concept of material transparency. In the world of surfaces, we can use those additional data channels to lay on alpha or transparency, but it is fundamentally not a concept which maps well to our real-world understanding of light transport and physics. In most cases when we need to calculate surface interactions, we fake it or we infer a volume to calculate thickness, sometimes assuming a simple in-and-out model based on surface facing. In the world of volumes, we just say this volume or sub-volume has an index of refraction of X.

Performance and Practicality: Navigating the Tradeoffs

The Cache Compromise: Balancing Speed and Memory

When working with volumetric data, caching strategies become crucial for maintaining performance. Both volume-to-surface and surface-to-volume conversions can benefit from caching, reducing uncertainties and improving overall performance. However, caching comes with its own set of challenges:

1. Cached data is heavy to move, potentially causing I/O bottlenecks.
2.
Cached data often needs reconstruction, which can be computationally expensive.
3. Caches consume valuable video RAM, a limited resource on many systems.

Surface Regeneration: The Volumetric Editing Challenge

One of the key challenges when working with volumetric data is the need to regenerate surface representations when editing volumes. This process can be computationally intensive and may introduce latency in interactive editing scenarios. Several strategies can be employed to mitigate this issue, including incremental updates and multi-resolution schemes. In Dreams we had fast-path surface generation for responsive sculpting, then a more correct slow pass done in the background as the stroke completed. Most of the time a user is idle in creative applications. That sounds strange, but even in a flow state, from a computer's point of view the time between brush strokes is massive. An alternative approach is to avoid caching entirely and work directly with ray casting. I think in realtime modelling applications the computer graphics hardware is heading fast in that direction. Though when not working directly with surfaces, a caching approach is still likely to be superior in render times.

GPU-Centric Workflows: New Horizons in 3D Processing

Recent advancements in GPU technology and memory management have opened up new possibilities for volumetric workflows:

1. Direct Memory Access on GPUs: Modern GPUs can load memory directly from the hard drive into GPU memory using Direct Storage APIs. This capability alleviates many issues related to data transfer and management.
2. Keeping calculations in video RAM: By performing most calculations directly in GPU memory, we can avoid expensive host-to-device memory transfers.
3. GPU-based volumetric operations: Implementing volume editing and surface generation algorithms directly on the GPU can significantly improve performance for interactive workflows.
However, working entirely on the GPU comes with its own set of challenges:

1. Limited memory: GPU memory is typically more expensive and limited in size compared to system memory.
2. Complexity: All operations must be written in GPU code, which can increase development complexity.
3. CPU-GPU synchronization: Managing data coherence between CPU and GPU can introduce additional complexity.

Looking Forward: Emerging Solutions and Future Directions

Lightweight Volumetric Formats: Addressing the Data Challenge

To address the challenges of storing and transferring volumetric data, several lightweight formats have emerged:

1. Implicit Signed Distance Functions (SDFs): These provide a compact representation of complex volumetric data.
2. Constructive Solid Geometry (CSG): Allows for efficient representation of certain types of geometry through boolean operations.

These formats offer reduced storage requirements compared to explicit voxel grids and can be more efficient for certain types of geometric operations and queries. However, they may require more computation to evaluate compared to explicit representations, and certain operations might be more complex or time-consuming. In a future article I do want to talk about some of the data advantages of implicit SDFs over CSG, though again this article is already long.

Hybrid Approaches: Combining Surface and Volumetric Strengths

As we look to the future, it's likely that hybrid approaches, leveraging the strengths of both surface and volumetric representations, will play an increasingly important role in creative tools. These hybrid methods could potentially offer the best of both worlds: the editing flexibility of volumetric data with the rendering efficiency of surface representations.

Conclusion: Shaping the Future of 3D Creative Tools

The choice between surface and volumetric representations in creative tools is not a simple one.
Each approach offers distinct advantages and challenges, and the optimal choice often depends on the specific requirements of the application at hand. Surface representations, particularly triangles, have long dominated the field due to their mathematical properties and rendering efficiency. However, volumetric approaches offer advantages in terms of representing complex geometries, handling multi-scale detail, and providing a more intuitive editing experience in some scenarios. Though, due to the imbalance of the false equivalence, volumes are fundamentally a better source of truth than surfaces. Computers are now fast enough to work directly with volumes, which opens up new possibilities. As hardware capabilities continue to evolve, particularly in the realm of GPU computing and memory management, we may see a shift towards more volumetric workflows. The ability to work directly with volumetric data on the GPU, coupled with advanced caching strategies and lightweight volumetric formats, could potentially overcome many of the traditional barriers to volumetric adoption. Ultimately, the ongoing evolution of 3D representation methods continues to shape the field of creative tools and computer graphics. As we push the boundaries of what's possible in 3D modeling and rendering, we can expect to see continued innovation in data structures, algorithms, and workflows that bridge the gap between surface and volumetric approaches. The future of 3D modeling for realtime applications is complex, but I believe it is time for a fundamental shift to a volume-first creative tools workflow. Dreams, Modeller and their cohorts are the first in a wave of change to come. Though triangles will still often be the last geometry before rasterisation.
Zero Mean Normalized Cross-Correlation

An image from Tsukuba University. This is one of hundreds of images that you can use to test your algorithms. Link is below.

Zero Mean Normalized Cross-Correlation, or shorter ZNCC, is a number you can get when you compare two grayscale images. Let's say you have a webcam at a fixed position for security. It takes images all the time, but most of the time the room is empty. So quite a lot of images will not be interesting. They only waste space. So you want to get rid of those redundant images. BUT those images are not identical! Even if the scenery didn't change, your sensor will produce slightly different results. A human will not notice them, but you can't simply compare images bit by bit. Even if you could, the images will be different because the sun moved (and so do shadows) and perhaps you have a clock in the image. Now you can solve this problem with various techniques. I want to describe those techniques in a very general way. As the images in other scenarios might have different sizes and you probably don't want to compare whole images, I'll assume you have a part of both images of size \((2n+1) \times (2n+1)\). The pixel in the center has coordinates \((u_1, v_1)\) for the part of the first image and \((u_2, v_2)\) for the second image.

Sum of squared differences

Go through all pixels, get the difference of both and add up the squares:

\(\displaystyle SSD(Img_1, Img_2, u_1, v_1, u_2, v_2, n) := \sum_{i=-n}^n \sum_{j=-n}^n \left ( Img_1(u_1+i, v_1+j) - Img_2(u_2 + i, v_2 + j) \right )^2\)

When SSD is small, both images are very similar. When SSD is 0, the images are identical.
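The SSD formula translates directly into code. A sketch in the same nested-loop style as the ZNCC implementation further down (my own helper, not part of the original post):

```python
# Sum of squared differences over a (2n+1) x (2n+1) window centered
# at (u1, v1) in img1 and (u2, v2) in img2. Zero means identical.
def ssd(img1, img2, u1, v1, u2, v2, n):
    s = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            d = img1[u1 + i][v1 + j] - img2[u2 + i][v2 + j]
            s += d * d
    return s

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 2, 3], [4, 5, 6], [7, 8, 7]]
print(ssd(A, A, 1, 1, 1, 1, 1))  # 0: identical windows
print(ssd(A, B, 1, 1, 1, 1, 1))  # 4: one pixel differs by 2
```

Note that SSD, unlike ZNCC, is sensitive to global brightness changes: adding a constant to every pixel of one image makes SSD grow even though the scene is unchanged.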
Zero Mean Normalized Cross-Correlation

The average gray value is:

\(\displaystyle \overline{Img}(u, v, n) := \frac{1}{(2n+1)^2} \sum_{i=-n}^n \sum_{j=-n}^n Img(u+i, v+j)\)

The standard deviation is:

\(\displaystyle \sigma(u, v, n) := \sqrt{\frac{1}{(2n+1)^2} \sum_{i=-n}^n \sum_{j=-n}^n \left (Img(u +i, v+j)-\overline{Img}(u, v, n) \right )^2 }\)

The ZNCC is defined as:

\(\displaystyle ZNCC(Img_1, Img_2, u_1, v_1, u_2, v_2, n) := \frac{\frac{1}{(2n+1)^2}\sum_{i=-n}^n \sum_{j=-n}^n \prod_{t=1}^2 \left (Img_t (u_t+i,v_t+j) - \overline{Img}(u_t, v_t, n) \right )}{\sigma_1(u_1, v_1, n) \cdot \sigma_2(u_2, v_2, n)}\)

The higher the ZNCC, the more correlated the two images are. (By the Cauchy-Schwarz inequality, the value is always in \([-1, 1]\).)

Here is some Python code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-


def getAverage(img, u, v, n):
    """img as a square matrix of numbers"""
    s = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            s += img[u + i][v + j]
    return float(s) / (2 * n + 1) ** 2


def getStandardDeviation(img, u, v, n):
    s = 0
    avg = getAverage(img, u, v, n)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            s += (img[u + i][v + j] - avg) ** 2
    return (s ** 0.5) / (2 * n + 1)


def zncc(img1, img2, u1, v1, u2, v2, n):
    stdDeviation1 = getStandardDeviation(img1, u1, v1, n)
    stdDeviation2 = getStandardDeviation(img2, u2, v2, n)
    avg1 = getAverage(img1, u1, v1, n)
    avg2 = getAverage(img2, u2, v2, n)

    s = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            s += (img1[u1 + i][v1 + j] - avg1) * (img2[u2 + i][v2 + j] - avg2)
    return float(s) / ((2 * n + 1) ** 2 * stdDeviation1 * stdDeviation2)


if __name__ == "__main__":
    A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    B1 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    B2 = [[1, 2, 3], [4, 5, 6], [7, 8, 7]]

    print(zncc(A, B1, 1, 1, 1, 1, 1))
    print(zncc(A, B2, 1, 1, 1, 1, 1))
Optimal Control for Thermal Management of Li-ion Batteries via Temperature-Responsive Coolant Flow

Volume 11 - Year 2024 - Pages 29-35
DOI: 10.11159/jffhmt.2024.004

Aaditya Sakrikar^1, Jacob Thomas Sony^1, Pranav Singla^1, Aniruddh Baranwal^1
^1 Indian Institute of Technology Bombay, Department of Mechanical Engineering, IIT Bombay, Mumbai-400076, India
200100002@iitb.ac.in; 200100072@iitb.ac.in; 200040102@iitb.ac.in; 200100025@iitb.ac.in

Abstract - A Battery Thermal Management System (BTMS) is responsible for the cooling, heating, insulation, and ventilation of the battery pack to ensure safe, reliable, and long-lasting operation of the battery pack under controlled optimal conditions. Cold plates are one of the most popular active thermal control methods used in a BTMS. Most cold-plate-based cooling strategies use a constant coolant flow rate; a possible improvement is a temperature-responsive coolant flow strategy. This paper focuses on the optimal control of coolant flow for an active hydraulic BTMS with mini-channel cooling plates. An appropriate cost function was formulated to optimise the coolant flow, capturing the trade-off between thermal degradation and pumping power. A software framework was created to obtain the optimal solution for flow rate as a function of time. The results obtained after optimisation showed that an optimal cooling strategy for high heat dissipation generally has three phases - i) increasing flow rate, ii) saturated flow rate, and iii) decreasing flow rate - which gives a lower cost when compared to a constant flow rate.

Keywords: Optimal control, BTMS, electric vehicles, cold plate, thermal degradation.

© Copyright 2024 Authors - This is an Open Access article published under the Creative Commons Attribution License terms.
Unrestricted use, distribution, and reproduction in any medium are permitted, provided the original work is properly cited.

Date Received: 2023-09-16
Date Revised: 2024-01-12
Date Accepted: 2024-02-02
Date Published: 2024-02-07

1. Introduction

Lithium-ion (Li-ion) batteries have become increasingly popular in recent years due to their long life span, low maintenance requirements, and high reliability in various devices, including laptops, smartphones, and electric vehicles (EVs). An important component integrated with most Li-ion battery packs in EVs is the Battery Thermal Management System (BTMS) [1]. A BTMS is responsible for the cooling, heating, insulation and ventilation of the battery pack to ensure safe, reliable and long-lasting operation under controlled optimal conditions. Cold plates are one of the most popular active thermal control methods used in a BTMS [2]. Most cold-plate-based cooling strategies use a constant coolant flow rate. The current literature has explored the topology optimisation of cold plate channels [3-4]. The effect of an oscillating flow was investigated by Li et al. [5], which showed that pulsating flow requires a lower pumping power than constant-flow-rate cold plates for the same thermal performance. However, a temperature-based optimal coolant flow strategy has not been explored in the literature. At lower system temperatures, the constant flow rate used is higher than the value required for maintaining the optimal temperature range of the batteries, resulting in higher pumping power. A possible way to improve this is a temperature-responsive coolant flow strategy. This paper focuses on the optimal control of coolant flow for an active hydraulic BTMS with mini-channel cooling plates.
A serpentine channel-shaped, single inlet-outlet cold plate was chosen to analyse the coolant flow control strategy, as serpentine channels are among the most commonly used cold plate geometries in the literature. An appropriate cost function was formulated to optimise the coolant flow, capturing the trade-off between thermal degradation and pumping power [6]. Two models, the Equilibrium Temperature Model (ETM) and the Linear Thermal Degradation Model (LTDM), have been proposed to model the thermal degradation costs of the batteries. The ETM considers the equilibrium temperature during steady-state operation, such that the cost increases with increasing temperature; it is useful for the steady-state behaviour of the battery, which usually occurs under constant external power consumption. The LTDM, on the other hand, reduces the battery's thermal life linearly when temperatures exceed a critical threshold, making it suitable for systems with fluctuating power consumption and transient behaviour. A software framework was created to obtain the optimal solution for flow rate as a function of time. The continuous-time setting was converted to a discrete-time setting to perform all operations numerically. A cascaded optimisation approach was utilised, in which a close-to-optimum solution was obtained first and then used as the initial point for the next optimisation step, ensuring more efficient and faster convergence as the first-order optimality value approached zero more quickly. The software framework was made to support variable external power consumption, so the robustness of the cooling strategy to fluctuations in the power drawn can also be seen.

2. Methodology

2.1. Design of the BTMS

A serpentine channel cold plate was considered as the thermal management system for the Li-ion battery. The cross-section area of a channel was taken as 2 mm x 10 mm (the battery face area is 160 mm x 250 mm).
The design with the optimal number of turns, considering the maximum temperature of the battery, the thermal gradient and the pumping power required, was chosen for further analysis. The optimised serpentine cold plate design is shown in Figure 1 and its temperature contour in Figure 2. The heat transfer coefficient and the pumping power of the cold plate at different inlet velocities were obtained using Ansys Fluent, since the analytical modelling of a serpentine channel is complex. These values were used for calculating the overall cost during the control optimisation.

Figure 1. Design of the optimised serpentine channel
Figure 2. Temperature contour of the optimised channel

2.2. Thermal degradation models

To optimise the coolant flow, an appropriate cost/objective function needs to be determined that accurately captures the trade-off between thermal degradation and pumping power. The effect of the pumping power/pumping energy consumed to circulate the coolant can be easily incorporated, since the monetary cost associated with pumping energy increases linearly. Mathematically, the average power consumed E(τ) during the period of operation can be represented as follows:

E(τ) = (1/τ) ∫₀^τ Pow[p](t) dt

where Pow[p](t) is the instantaneous power consumed as a function of time and τ is the duration of operation. Incorporating the cost associated with thermal degradation is trickier and highly scenario-dependent. Various models can be used to incorporate the cost associated with thermal degradation, among them:
1. Equilibrium temperature model - In this model, the equilibrium temperature T[eq] of the battery pack in steady-state operation is incorporated into the cost function along with the pumping energy consumed, in such a way that the cost increases as the equilibrium temperature increases.
2. Linear thermal degradation model - In this model, the battery thermal life L(τ) reduces linearly when the temperature exceeds a critical value.
Mathematically,

L(τ) = L[0] − ∫₀^τ K(T(t)) (T(t) − T[crit]) dt

where:
L[0] represents the initial battery life in the absence of any thermal degradation
T(t) is the battery temperature at some time 0 ≤ t ≤ τ
T[crit] is the critical temperature above which thermal degradation starts to occur
K(T) is the thermal degradation coefficient, which represents the reduction in the lifetime of the battery when the battery temperature exceeds the critical temperature by 1 unit for a unit time duration, with K(T) = 0 when T < T[crit] and K(T) = K[0] when T > T[crit]

The total thermal damage Dmg(τ) to the battery is then

Dmg(τ) = ∫₀^τ K(T(t)) (T(t) − T[crit]) dt = L[0] − L(τ)

The equilibrium temperature model is useful for optimising the steady-state behaviour of the battery, which usually occurs under constant external power consumption. However, if the external power consumption is varying or fluctuating due to external factors like noise, then the thermal degradation model is more suitable, since it takes into account the transient behaviour of the battery. The equilibrium temperature model, on the other hand, has the advantage of being computationally less complex than the thermal degradation model.

2.3. Control optimisation

The decision variable is the mass flow rate of the coolant as a function of time during the operation period of the battery, represented as f(t) in a continuous-time setting. In a discrete-time setting, f(t) becomes an array/vector of flow rates, f[1:N], where 1, 2, 3, ..., N represent the time instances; thus there are N decision variables in total. The size of the time step depends on the characteristic time/time constant of the battery. For a given external power consumption Pow[ext](t) and a decided flow rate f(t), the dynamics of the system follow from an energy balance, in which D(t) is an intermediate variable that emerges from the energy balance and I[drawn] is the current drawn, which is also calculated from the energy balance.
It is assumed that the power drawn from the battery is used by the pump, Pow[p](f(t)), and for external use, Pow[ext](t), without any loss of energy; Vol represents the voltage of the battery during operation.

The temperature as a function of time is determined on the basis of a lumped-mass analysis, in which I[drawn]²R represents the heat dissipated by the battery due to internal processes and h(f(t)) A (T(t) − T[cool]) represents the heat absorbed by the coolant, where:
R - effective resistance of the battery
A - surface area for heat transfer to the coolant
h(f(t)) - flow-rate-dependent heat transfer coefficient
T[cool] - temperature of the coolant, which is maintained constant during operation (the temperature maintenance aspect is incorporated in Pow[p](f(t)) using a suitable correction factor)

For a constant flow rate, balancing the generated and absorbed heat gives the equilibrium temperature:

T[eq] = T[cool] + I[drawn]²R / (h(f) A)

Based on the equilibrium temperature model, the cost function C can be defined as:

C = C[1] (T[eq] − T[crit]) + C[2] E(τ)

where C[1] is the cost per unit increase in the equilibrium temperature above the critical temperature and C[2] is the cost per unit increase in average pumping power consumed. Based on the thermal degradation model, a corresponding cost function C[deg] can be defined; C[deg] is the cost of operating the BTMS per unit duration of service of the battery.

After choosing the appropriate cost function based on the context, the optimisation problem can be formulated in continuous time as follows. Minimize C subject to the following constraints:
1) D(t) > 0, 0 ≤ t ≤ τ
2) 0 ≤ f(t) ≤ f[max](t)

In discrete time, the optimisation problem becomes: minimize C subject to
1) D[n] > 0, 1 ≤ n ≤ N
2) 0 ≤ f[n] ≤ f[max]

3. Results & Discussions

A software framework was created in MATLAB to obtain the optimal solution for flow rate as a function of time. The continuous-time setting had to be converted to a discrete-time setting since all the operations needed to be performed numerically.
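Discretising the lumped-mass energy balance with a forward-Euler step can be sketched in a few lines. This is an illustrative Python sketch, not the authors' MATLAB implementation; the names (`h_of_f`, `m_c`) and all numbers are assumptions:

```python
def simulate_temperature(T0, I_drawn, R, A, T_cool, h_of_f, flow, m_c, dt, steps):
    """Forward-Euler integration of the lumped-mass energy balance:
    thermal mass m_c times dT/dt equals I^2 R minus h(f) A (T - T_cool).
    h_of_f: callable mapping flow rate -> heat transfer coefficient
    flow:   list of flow rates, one per time step
    m_c:    thermal mass (mass times specific heat) of the pack
    """
    T = T0
    history = [T]
    for k in range(steps):
        q_gen = I_drawn ** 2 * R                    # internal heat generation
        q_out = h_of_f(flow[k]) * A * (T - T_cool)  # heat absorbed by coolant
        T = T + dt * (q_gen - q_out) / m_c
        history.append(T)
    return history
```

With a constant flow rate the simulated temperature settles at the equilibrium value T[cool] + I²R / (hA), which matches the steady-state balance of generated and absorbed heat.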
The time step taken in this paper was 0.1 s, ensuring a balance between the accuracy of the solution obtained and the computational time required. Since the optimisation problem is non-linear, the solver used for obtaining the optimal solution was fmincon, which utilises the interior-point algorithm. A cascaded optimisation approach was utilised in which a close-to-optimum solution was obtained first and then used as the starting point for the next optimisation step. This made the process more efficient and led to faster convergence to the optimal solution, as the first-order optimality value approached zero more quickly. The results below are based on the thermal degradation model for different external power consumption modes. The graph corresponding to a uniform flow rate assumes that the flow rate cannot vary with time, and the optimum constant flow rate is solved for. The graph for a non-uniform flow rate relaxes that assumption, so that the costs obtained in both cases can be compared. The cost in the case of a non-uniform flow rate should be lower than in the uniform case, since a uniform flow rate is technically a special case of a non-uniform flow rate.

Figure 3. (a) Flow rate and (b) Battery pack temperature for constant heat dissipation

The graphs above show the optimal flow rate (in g/s) and battery pack temperature (K) as a function of time (in units of battery time constant) for a constant external power consumption corresponding to a heat dissipation of 10 W/cell. This was done to observe the effect of thermal damage on the optimal solution more evidently, as the peak temperature reached is greater than ideal. In general, the optimal cooling strategy for high constant heat dissipation has three phases - i) increasing flow rate, ii) saturated flow rate and iii) decreasing flow rate. Thus, the flow rate as a function of time takes the shape of an upside-down U.
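A minimal Python sketch of the discrete-time objective that such a solver minimises (the paper uses MATLAB's fmincon; here the cost is only evaluated, and any NLP solver could then search over `flows` subject to 0 ≤ f[n] ≤ f[max]). All parameter names and values are illustrative assumptions, not values from the paper:

```python
def total_cost(flows, dt, params):
    """Degradation-plus-pumping cost of a discrete flow schedule f[1:N]:
    cost = C1 * total thermal damage + C2 * pumping energy,
    with the temperature advanced by the lumped-mass update at each step."""
    T = params["T0"]
    damage = 0.0
    pump_energy = 0.0
    for f in flows:
        q_gen = params["I"] ** 2 * params["R"]                  # internal heating
        q_out = params["h"](f) * params["A"] * (T - params["T_cool"])
        T += dt * (q_gen - q_out) / params["m_c"]               # lumped-mass update
        if T > params["T_crit"]:                                # K(T) = K0 above T_crit
            damage += params["K0"] * (T - params["T_crit"]) * dt
        pump_energy += params["pump_power"](f) * dt
    return params["C1"] * damage + params["C2"] * pump_energy
```

Evaluating this cost for a zero-flow schedule versus a moderate constant flow shows the trade-off the optimiser exploits: no cooling accumulates thermal damage, while a modest flow pays a small pumping cost and avoids the damage entirely.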
It has been observed that not only is the overall objective value reduced for a non-uniform flow rate, but the total thermal damage and the pumping energy consumed are each reduced by 1-5% over the operating duration.

Figure 4. (a) Flow rate and (b) Battery pack temperature for fluctuating heat dissipation

Our software framework also supports variable external power consumption, so the robustness of the cooling strategy to variations/fluctuations in the external power drawn can be seen as well. The graphs above depict the case of 10 W/cell subject to heat dissipation fluctuations. It is observed that the overall shape of the graph does not change much. The graphs below depict the flow rate and battery temperature variation with time for linearly increasing power consumption (ramp power consumption) and sinusoidal power consumption, respectively.

Figure 5. (a) Flow rate and (b) Battery pack temperature for linearly increasing heat dissipation
Figure 6. (a) Flow rate and (b) Battery pack temperature for sinusoidal heat dissipation

For the ramp power consumption scenario, since steady state is not really achieved when a constant flow rate is used, it is important to vary the flow rate during the period of operation so that the temperature does not shoot up significantly. It can be seen that for a variable flow rate, the graph of temperature is close to what it would have been for steady-state operation. The cost obtained in the various cases depends on the values of the parameters in the definition of the cost function, especially K(T) (thermal degradation coefficient), C[1] (cost per unit increase in the temperature above critical temperature), C[2] (cost per unit increase in average pumping power) and L[0] (initial battery life). The cost was evaluated for different plausible values of these parameters, and an improvement in the range of 1-5% of the cost was obtained across the different sets of parameter values.
The results obtained for a particular set of values (K(T) = 5, C[1] = 1, C[2] = 0.1 & L[0] = 1000) are tabulated below. Although the percentage difference observed is ~1%, over a long duration and multiple charging cycles the advantage of the optimal flow control becomes significant in absolute terms.

Table 1. Cost values for different cases for the above-mentioned parameter values.
│Heat dissipation type│Cost for constant flow rate│Cost for optimal flow rate│Percentage difference (%)│
│Constant │5.46E+03 │5.40E+03 │1.13 │
│Fluctuating │6.29E+03 │6.24E+03 │0.794 │
│Ramped │7.39E+03 │7.33E+03 │0.819 │
│Sinusoidal │4.81E+03 │4.76E+03 │1.05 │

4. Conclusion

Based on the results and the discussions, the following conclusions can be drawn. In general, the optimal cooling strategy for high constant heat dissipation has three phases during the operating duration - i) increasing flow rate, ii) saturated flow rate and iii) decreasing flow rate. The optimal cooling strategy is not greatly altered even in the presence of fluctuations, which implies that the cooling strategy is robust to external disturbances. The equilibrium temperature model is useful for optimising the steady-state behaviour of the battery, while the thermal degradation model is more suitable if the external power consumption varies due to factors like noise. Although we are able to obtain lower costs by varying the flow rate, the cost of the controller required to vary the flow rate properly has not been accounted for in this approach. Also, lumped analysis has been used for the dynamics of the system, so the effect of thermal gradients in the battery is not captured. The requirement for computational power increases greatly as the operating duration of the battery increases; thus, the current optimal control framework cannot be used for very long operating durations.

5.
Acknowledgements

We would first like to express our sincere gratitude to IIT Bombay for giving us the opportunity and all required resources and guidance for carrying out this work. We would like to thank Prof. Avinash Bhardwaj for his unwavering support and invaluable guidance throughout the course of this work. His expert advice, valuable insights, and dedicated mentorship significantly contributed to the successful completion of this paper.

References

[1] J. Kim, J. Oh, and H. Lee, "Review on battery thermal management system for electric vehicles," Applied Thermal Engineering, vol. 149, pp. 192-212, Feb. 2019. doi: 10.1016/j.applthermaleng.2018.12.020
[2] P. R. Tete, M. M. Gupta, and S. S. Joshi, "Developments in battery thermal management systems for electric vehicles: A technical review," Journal of Energy Storage, vol. 35, Mar. 2021. doi: 10.1016/j.est.2021.102255
[3] H. Ji, T. Luo, L. Dai, Z. He, and Q. Wang, "Topology design of cold plates for pouch battery thermal management considering heat distribution characteristics," Applied Thermal Engineering, vol. 224, 119940, 2023. doi: 10.1016/j.applthermaleng.2022.119940
[4] X. Mo, H. Zhi, X. Ye, H. Hua, and L. He, "Topology optimization of cooling plates for battery thermal management," International Journal of Heat and Mass Transfer, vol. 178, 121612, 2021. doi: 10.1016/j.ijheatmasstransfer.2021.121612
[5] D. Li, W. Zuo, Q. Li, G. Zhang, K. Zhou, and E. Jiaqiang, "Effects of pulsating flow on the performance of multi-channel cold plate for thermal management of lithium-ion battery pack," Energy, vol. 273, 127250, 2023. doi: 10.1016/j.energy.2023.127250
[6] S. Ma, M. Jiang, P. Tao, C. Song, J. Wu, J. Wang, T. Deng, and W. Shang, "Temperature effect and thermal impact in lithium-ion batteries: A review," Progress in Natural Science: Materials International, vol. 28, no. 6.
Elsevier B.V., pp. 653-666, Dec. 2018. doi: 10.1016/j.pnsc.2018.11.002
Reinforcement: weight and length, ratio and calculations in construction work

In the capital construction of country houses from monolithic concrete, you cannot do without reinforced structures. At the same time, most of the costs in the process of purchasing materials fall on the reinforcement. The weight of the material, calculated accurately and correctly, will help to realistically estimate not only the costs of organizing construction work, but also an important part of the cost of the entire building.

The need to calculate the weight of reinforcement: tables of correspondence between weight and length

Reinforcement is a building material consisting of metal elements intended for the construction of a monolithic structure together with cement mortar. It serves as a support that carries tensile stress and strengthens the concrete structure in the compression zone. Reinforcement components are mainly used in the construction of foundations and the erection of walls of cast-concrete buildings. A significant part of the time, effort and material costs during the construction of a concrete building is spent precisely on creating the reinforced frame, which is made from reinforcing rods and meshes. To avoid unnecessary costs, you should calculate the required amount of material as accurately as possible, and for that you need to know the weight of a meter of reinforcement. A table of ratios of weight and length for different types of bars will help make the correct calculations. To calculate the weight of the reinforcement, add up the total length of all the bars and multiply it by the mass of one meter. All the necessary data, taking into account the steel class and the diameter of the rods, are given in the calculation tables. The grade of the steel from which the bars are made is also taken into account.
Reinforcement weight table: GOST regulating the quality of goods

The standard mass of reinforcement of a given diameter is regulated by the developed standards - GOST 5781-82 and GOST R 52544-2006. The following table of the weight of a running meter of reinforcement, with the length and diameter of the rod, will help to perform the correct calculations:

│Reinforcement section, mm│Running meter weight, g│Total length of reinforcement in ton of material, m │
│6 │222 │4505 │
│8 │395 │2532 │
│10 │617 │1620 │
│12 │888 │1126 │
│14 │1210 │826 │
│16 │1580 │633 │
│18 │2000 │500 │
│20 │2470 │405 │
│22 │2980 │336 │
│25 │3850 │260 │
│28 │4830 │207 │
│32 │6310 │158 │
│36 │7990 │125 │
│40 │9870 │101 │
│45 │12480 │80 │
│50 │15410 │65 │
│55 │18650 │54 │
│60 │22190 │45 │
│70 │30210 │33 │
│80 │39460 │25 │

This table is fairly straightforward to use. The first column contains the diameter of the bar, the second the mass of a running meter of that bar, and the third the total length of reinforcing elements in one ton. Examining the table, you can see a pattern: the larger the diameter of the reinforcement, the greater the weight per meter of material, while the total length in one ton is, on the contrary, inversely proportional to the thickness of the rods.

Helpful advice! The diameter should be checked with the manufacturer. If you measure it yourself, errors will creep into the calculations, since the surface of reinforcing bars has a ribbed structure.

Thus, knowing the weight of the reinforcement in accordance with GOST 5781-82, it is easy to calculate the reinforcement coefficient of the whole structure and to determine the weight of the reinforcement relative to the required volume of concrete.
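The table-lookup calculation is easy to script. A minimal Python sketch, with the dictionary holding the GOST 5781-82 per-meter masses quoted in the table above (only a few common diameters included):

```python
# Mass per running meter (kg) for common bar diameters, from the GOST 5781-82 table
MASS_PER_METER = {6: 0.222, 8: 0.395, 10: 0.617, 12: 0.888,
                  14: 1.21, 16: 1.58, 18: 2.0, 20: 2.47, 25: 3.85}

def rebar_weight_kg(diameter_mm, total_length_m):
    """Total rebar weight: table mass per meter times total footage."""
    return MASS_PER_METER[diameter_mm] * total_length_m

def meters_per_ton(diameter_mm):
    """Total length of bars in one ton of material."""
    return 1000.0 / MASS_PER_METER[diameter_mm]
```

For example, `rebar_weight_kg(14, 2300)` gives 2783 kg, and `meters_per_ton(6)` reproduces the roughly 4505 m per ton listed in the table.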
With this data available, it is easy to calculate the total amount of materials required for the construction of a particular structure, be it a foundation or a monolithic building. Material consumption is calculated per cubic meter of concrete.

Specific gravity of reinforcement: tables of correspondence per running meter

A running meter of a profiled bar is a piece of material one meter long. It can have either a smooth or a ribbed surface. The weight of the bars is accordingly determined by their diameter; GOST establishes diameters from 6 to 80 millimeters. The material is periodic-profile steel. The mass of a mesh made of reinforcing wire for plaster, of a reinforcing cage for a reinforced concrete foundation, or of a reinforcing mesh for brickwork depends on the dimensions of the sheet, the cell size and the diameter of the rods in millimeters. Reinforcing steel produced on the domestic market is widely used in construction, has high quality characteristics and meets all the GOST requirements for rolled metal products. Calculations are performed using the reinforcement table shown above. The weight of 1 running meter depends on the external structure of the profile, which is either corrugated or smooth. Ribs and corrugations on the outside provide more reliable adhesion of the rods to the concrete, so the concrete structure itself has higher quality characteristics. The technological process of manufacturing reinforcing steel determines the product range: steel is either hot-rolled rod or cold-drawn wire. Fittings produced in accordance with GOST 5781-82 are rods with a smooth surface of class A, as well as periodic-profile bars of classes A-II to A-VI. GOST R 52544-2006 covers profiles of classes A500C and B500C made of periodic-profile steel and intended for welding.
The letter A marks hot-rolled and heat-strengthened reinforcement, the letter B marks cold-deformed material, and the letter C denotes weldable steel.

Material marking, weight of 1 meter: assortment table

If we take the mechanical characteristics of reinforcing steel, such as strength, as a basis, then the material is subdivided into separate assortment classes with the corresponding designations from A-I to A-VI. At the same time, the weight of a meter of hot-rolled steel reinforcement does not depend on the class, only on the diameter. The correspondence between class, diameter and grade is shown in the table:

│Steel class according to GOST 5781-82│Rod diameter, mm│Steel class according to GOST R 52544-2006 │Rod diameter, mm│Rebar brand │
│A-I │6-40 │A240 │6-40 │St3kp, St3ps, St3sp │
│A-II │ │ │ │St5sp, St5ps │
│ │10-40 │A300 │40-80 │ │
│ │ │ │ │18G2S │
│Ac-II │10-32 │Ac300 │36-40 │10GT │
│ │ │ │ │35GS, 25G2S │
│A-III │6-40 │A400 │6-22 │ │
│ │ │ │ │32G2Rps │
│ │ │ │ │80C │
│A-IV │10-32 │A600 │6-8 36-40 │ │
│ │ │ │ │20 * 2HZ │
│A-V │6-8 and 10-32 │A800 │36-40 │23 * 2G2T │
│ │ │ │ │22 * 2G2AYU, 22 * 2G2R,│
│A-VI │10-22 │A1000 │10-22 │ │
│ │ │ │ │20 * 2G2SR │

If we take, for example, class A-III reinforcement, it is used to strengthen the foundations of concrete buildings erected in a short time. The weight of the reinforcement in this case is the weight of the entire steel frame, including the foundation, walls and concrete floors, as well as the weight of the welded meshes poured with concrete. Rebar diameters ranging from 8 to 25 mm are considered the most popular profile sizes on the construction market. All domestic fittings go through quality control stages before they reach metal depots, which guarantees their compliance with GOST.

Reference! The volume of a steel bar is calculated by multiplying the footage by the geometric area of the circle, 3.14 * D * D / 4, where D is the diameter. The specific weight of the reinforcement is 7850 kg/m³.
If you multiply it by the volume, you get the total weight of one meter of reinforcement.

Reinforcement: weight and the various options for calculating it

Reinforcement weight is calculated in different ways:
• according to the data on the standard weight;
• taking the specific gravity as a basis;
• using an online calculator.

The required number of rods according to the standard weight is determined using the weight table above in relation to the running meter. This is the simplest calculation option. For example, let's calculate the weight of reinforcement 14. The main condition for such calculations is having the appropriate table at hand. The calculation process itself (when drawing up a construction plan that includes a reinforcing mesh) consists of the following stages:
• choose the appropriate diameter of the rods;
• calculate the footage of the required reinforcement;
• multiply the weight of one meter of reinforcement of the corresponding diameter by the footage required.

For example, suppose 2300 meters of reinforcement 14 will be used for construction. The weight of 1 meter of such rods is 1.21 kg. We carry out the calculation: 2300 * 1.21 = 2783 kilograms. Thus, this volume of work will require 2783 kg (about 2.8 tonnes) of steel rods. The number of bars of the corresponding diameter in one ton is calculated in a similar way, with the data taken from the table.

Calculation by specific gravity, using the example of the weight of a meter of reinforcement 12

The method of calculating by specific gravity requires special skills and knowledge. It is based on the formula for determining mass from the volume of an object and its specific gravity. This is the most difficult and time-consuming way to calculate weight.
It is applicable only in cases where no table of norms is available and it is impossible to use an online calculator.

You can consider these calculations using the example of determining the weight of 1 meter of 12 mm reinforcement. First, recall the formula for calculating weight from a physics course: mass equals the volume of an object multiplied by its density, that is, its specific gravity. For steel, this figure is 7850 kg/m³. The volume is determined independently, taking into account that a reinforcement bar has a cylindrical shape. Here knowledge of geometry is useful: the volume of a cylinder is calculated by multiplying its cross-sectional area by its height. In a cylinder, the cross-section is a circle, and its area is calculated using another formula, where the constant Pi (3.14) is multiplied by the radius squared. The radius is, as you know, half the diameter.

The procedure for calculating the weight of 12 mm reinforcement per meter and for a whole bar

The diameter of the reinforcing bars is taken from the plans and calculations of the construction site. It is better not to measure it yourself, to avoid errors. Let us determine how much one meter of 12 mm reinforcement weighs. The radius is 6 mm, or 0.006 m.

Helpful advice! The easiest way to calculate is to use special programs (or an online calculator). To do this, enter the mass of the reinforcement in tons, the number of the corresponding profile and the length of the rod in millimeters into the appropriate cells. The standard length of the rods is 6000 or 12000 mm.

The sequence of independent calculations using the formula is as follows:
1. Determination of the area of the circle: 3.14 * 0.006² = 0.00011304 m².
2. Calculation of the volume of a meter of rod: 0.00011304 * 1 = 0.00011304 m³.
3.
Weight of 1 meter of 12 mm reinforcement: 0.00011304 m³ * 7850 kg/m³ = 0.887 kg. Checking this result against the table shows that it complies with the state standards. If the mass of a particular rod is needed, the area of the circle is multiplied by its full length; the algorithm is otherwise the same. Written out as a single mathematical expression, the calculation of the weight of 1 meter of 12 mm reinforcement looks like this: 1 m * (3.14 * 0.012 m * 0.012 m / 4) * 7850 kg/m³ = 0.887 kg. The result is identical to the previous one. Depending on the length of the reinforcement, the corresponding value is substituted into the formula and the weight follows from it. The weight of an entire mesh can be found by multiplying the value obtained for 1 m² by the number of square meters in the reinforced frame. Calculating the weight of reinforcing wire per square meter. Reinforcing wire meets the requirements of GOST 6727-80 and is made from low-carbon steel. The usual wire diameters are 3, 4 and 5 mm, in two classes: B-I with a smooth surface and Bp-1 with a periodic profile. The wire weight is calculated from the relevant standards and the data given in the table:

Wire diameter, mm | Weight of one meter, g
3                 | 52
4                 | 92
5                 | 144

For a specific case the weight is calculated as follows. To determine the mass of one hundred meters of 4 mm reinforcing wire, multiply the weight per meter by the footage: 92 * 100 = 9200 g (i.e. 9 kg 200 g). The reverse calculation can also be performed.
For example, a coil of 4 mm wire weighs 10 kg. To determine the footage, divide the total mass by the weight per meter: 10 / 0.092 = 108.69 meters. The weight of a reinforcement mesh is calculated as follows. Take, for example, a mesh of size 50x50x4. A square meter of it contains 18 one-meter rods, i.e. a total of 18 m of reinforcement 6, which weighs 0.222 kg/m. The wire in one square meter of the structure therefore weighs 18 * 0.222 = 3.996 kg/m². Adding approximately 1% for the welding tolerance gives a full 4 kilograms. Characteristics, dimensions and weight per meter of 8 mm reinforcement. Reinforcing bars with a diameter of 8 mm are considered thin; at first glance they look like simple wire. Their manufacture is governed by GOST 5781, and the surface of 8 mm reinforcement is either corrugated or smooth. Helpful advice! In any calculation of reinforcement mass, do not forget the permissible error, which ranges from 1 to 6%. This is especially important to take into account when a large amount of welding is expected. The main technical characteristics of the material are as follows: • steel grades 25G2S and 35GS are used for manufacture; • ribbed profiles A400 and A500; • reinforcement class A3. Rods of 8 mm are most appropriate where excess weight is unacceptable but additional strength is needed. The weight of 1 meter of 8 mm reinforcement is 394.6 grams, and one ton contains 2534.2 m of material. Calculating the weight of 1 meter of 8 mm reinforcement by the formula above, using the specific gravity of the corresponding steel: 1 m * (3.14 * 0.008 m * 0.008 m / 4) * 7850 kg/m³ = 0.394 kg.
It is this value for the weight of 8 mm reinforcement that appears in the table of correspondence between the weight and length of the rods. Scope of application and weight per meter of 10 mm reinforcement. Rod with a diameter of 10 millimeters is among the most popular in construction. Like rods of other thicknesses, such reinforcement is produced by hot-rolling or cold-rolling; these are metal rods of medium thickness with a high degree of strength. Calculating the total weight of reinforcement 10 is quite simple: sum the total length and multiply it by the mass of a running meter of the material. The required data can be found in the general table. The general characteristics of 10 mm reinforcement are as follows: • rod diameter - 10 mm; • one ton contains 1622 m of rolled metal; • weight of 1 meter of 10 mm reinforcement - 616.5 g; • permissible error in the weight calculation - +6%; • steel classes used in producing this rolled product: At-400, At-500S, At-600, At-600K, At-800K, At-1000, At-1000K, At-1200. With these parameters it is easy to find the required quantity and weight of the building material. An independent calculation by the now-familiar formula is easy to do and looks like this: 1 m * (3.14 * 0.01 m * 0.01 m / 4) * 7850 kg/m³ = 0.617 kg. The same figure for the weight of 1 meter of 10 mm reinforcement appears in the table of diameter versus mass per meter. Versatility and the ideal weight of reinforcement 12. Reinforcement with a diameter of 12 mm is rightfully considered the most popular and most in-demand rolled metal product. Its dimensions are optimal for many kinds of construction work: it combines strength, flexibility and rather low weight, and at the same time bonds very well to concrete.
Reinforcement frames and structures built with it last a very long time and are practically indestructible. It is 12 mm reinforcement that construction standards recommend for strip foundations of cottages and private houses. Characteristics of reinforcement 12: • rod diameter - 12 mm; • one ton contains 1126 m of rolled stock; • ovality of the rod - no more than 1.2 mm; • pitch of the transverse ribs - from 0.55 to 0.75 * dH; • weight of 1 meter - 887.8 g; • rolled length - from 6 to 12 m. A tolerance is allowed only upwards, by no more than 10 cm, and the curvature must not exceed 0.6%. Important! Each type of reinforcement has its own characteristics, and a large diameter does not necessarily guarantee good strength; the same goes for weight. Rebar 20, for example, is more vulnerable to corrosion but is ideal for welding, so the choice of material is individual. It was 12 mm reinforcement on which the example of calculating the weight of a running meter was worked through above, and the result coincided with the table of reinforcement weight per meter: in all cases it was 887.8 g. Weight of 16 mm rebar per meter: features and specifications. Reinforcement 16 ranks among high-quality rolled metal. Its weight and quality ensure its reliability, so builders describe it as strong, reliable, wear-resistant and environmentally friendly. In addition, it is affordable and easy to install, and it is also used in other areas of production. Most often, reinforcement 16 is used for high-quality reinforcement of concrete structures: it withstands high flexural and tensile loads, spreading them evenly over the entire surface. 16 mm rods are widely used in welded metal structures, reinforcement of concrete structures, and the construction of roads, bridges and spans. Production uses high-quality steel in accordance with GOST 5781-82.
The main characteristics are as follows: • smooth or corrugated profile; • steel grades used in production: 35GS, 25G2S, 32G2Rps, A400; • weight of 1 meter of 16 mm reinforcement - 1580 g; • cross-sectional area - 2.010 cm²; • rod length - from 2 to 12 m. By analogy with the previous grades of reinforcement, and in accordance with the table of diameter versus mass per meter, the weight of 16 mm reinforcement per 1 meter is 1.580 kg. The weight of the reinforcement must be known at the design stage of a construction project. Correct calculations help in budgeting and avoid unnecessary spending on materials. Thus, by accurately calculating the mass and footage of the reinforcing bars, you can save significantly during construction and, conversely, avoid running short of rods once erection of the reinforced structure has begun.
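The specific-gravity formula applied repeatedly in this article (mass = length × cross-sectional area × 7850 kg/m³) is easy to automate. A minimal Python sketch follows; the function and constant names are my own, not from any standard:

```python
import math

STEEL_DENSITY = 7850.0  # kg/m³, the value used throughout this article

def bar_weight_kg(diameter_mm: float, length_m: float = 1.0) -> float:
    """Weight of a round steel bar: cylinder volume times steel density."""
    radius_m = diameter_mm / 2000.0      # mm to m, then halve the diameter
    area_m2 = math.pi * radius_m ** 2    # cross-sectional circle area
    return area_m2 * length_m * STEEL_DENSITY

for d in (8, 10, 12, 16):
    print(d, "mm:", round(bar_weight_kg(d), 3), "kg/m")
```

With exact π the 16 mm figure comes out at about 1.578 kg against the table's 1.580 kg, comfortably inside the 1 to 6% tolerance the article mentions.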
How do you count the distance traveled using GPS? Hi. I'm trying to build an application that will show me how many meters I have traveled. I'm thinking of counting the distance between two coordinates every 5 seconds. However, I don't know how to implement it in the form of blocks. Or maybe you have another idea how to calculate the distance traveled? Distance traveled or distance between two points, Adrian? They are different; see Hi SteveJG. Distance traveled. For example: I'm driving a car and clicking a start button which will start counting the distance traveled. Then every 5 or 10 seconds I will receive the updated distance in meters traveled from the point where the start button was activated. If it matters, I'm using the KIO4_LocationSensor1 extension in my app. • use the Navigate control • request the distance to the destination using GotDirections(directions, points, distance, duration) • post the distance in a Label perhaps • request directions again using a Clock event handler; set the Clock interval to perhaps 30 seconds (5 to 10 is too often) and call GotDirections again. The distance to destination (enrouteDistance) might work for you. See one of the example links which describe something similar. I've been fighting with this for a few hours without any result. I don't know exactly how to build these blocks. If it makes things easier, I want to count the distance in just a straight line. My blocks below: This example might help, or this one. See post number 2 and use a Great Circle algorithm for kilometers or miles. Instead of counting the distance every 5 seconds, consider using the Haversine formula to calculate the total distance traveled between two points (initial and final). This formula accounts for the Earth's curvature and provides a more accurate measurement than a straight-line distance.
You can implement this formula using blocks by first storing the initial coordinates and then periodically (e.g., every 5 seconds) calculating the distance between the current and initial coordinates using the Haversine formula. Finally, sum up all these individual distances to get the total traveled distance. How can we do that in blocks, i.e. using the Haversine formula? This is how I would solve this question: 1. search "Haversine Formula javascript", 2. you will find a JavaScript function for it; I got the code below from here: Remember to remove the comments; if there are // comments in the JavaScript, the RunJavascript function will not work.

let computedDistance = function getDistanceFromLatLonInKm(lat1, lon1, lat2, lon2) {
  var deg2rad = function (deg) { return deg * (Math.PI / 180); };
  var R = 6371;
  var dLat = deg2rad(lat2 - lat1);
  var dLon = deg2rad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(deg2rad(lat1)) * Math.cos(deg2rad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
  var d = R * c;
  return d;
};

3. use WebViewer.RunJavascript to get the result. I'll try to grasp this. Using App Inventor blocks, as already posted in the thread: I'll try this. You can also use the Marker component. You can compute the distance between any two Map-based components using the appropriate functions, and these use a built-in implementation that should be much faster than implementing it in blocks. @SteveJG, yesterday when I checked I saw something here about using the Haversine formula with MIT blocks, but today I am not able to find it. My purpose is to know when a vehicle has started to move (beginning time), so the app will give a warning message such as 'Drive safely' or 'Wear your seat belt'. What about following the link @SteveJG posted earlier? And what about trying the tip from @ewpatton?
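For anyone who prefers to prototype the math outside App Inventor first, here is the same Haversine calculation in Python, plus the running-sum idea discussed earlier in the thread. This is a plain sketch of the formula, not App Inventor code:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.atan2(math.sqrt(a), math.sqrt(1 - a))

def track_distance_km(fixes):
    """Cumulative distance over a list of (lat, lon) GPS fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# One degree of longitude at the equator is roughly 111.19 km.
print(round(haversine_km(0, 0, 0, 1), 2))  # 111.19
```

Storing each periodic fix in a list and calling the summing function reproduces the "add up the legs" approach; note that GPS jitter while standing still will inflate the total, so a minimum-leg threshold is worth adding in practice.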
Vibrating tube densitometers

The invention provides a method of calibrating a vibrating tube densitometer intended to operate at combined elevated temperatures and pressures. This Application is a Section 371 National Stage Application of International Application No. PCT/GB2011/000154, filed 4 Feb. 2011 and published as WO 2011/095784 on Aug. 11, 2011, in English, the contents of which are hereby incorporated by reference in their entirety. This invention relates to vibrating tube densitometers. Vibrating tube densitometers are a well-known form of apparatus for measuring the density of a flowing medium. One example of this form of apparatus is described in British Patent 2 062 865. In operation, a vibrating tube densitometer is excited so as to vibrate, in a particular mode, at its resonant frequency. This resonant frequency is affected by changes in the density of the fluid contained in, or passing through, the tube. The indicated density is also affected by the fluid temperature and/or fluid pressure to which the vibrating tube is subjected. This requires each densitometer to be calibrated, as can be more readily understood with reference to the following. The resonant frequency of a vibrating tube densitometer with fluid contained in it can be expressed as:

f = (1 / (2π)) · √( k / (m_r + V_f · ρ_f) )   (Equation 1)

where:
• f is the resonant frequency of the vibrating tube densitometer containing a fluid
• m_r is the mass of the resonant element within the vibrating tube densitometer
• V_f is the volume of the fluid contained in the resonant element
• ρ_f is the density of the fluid contained in the resonant element
• k is the stiffness of the resonant element

Among the above parameters, m_r is a constant. All the other parameters vary with measurement conditions, i.e.
mainly temperature (t) and pressure (p). We therefore have V_f(t, p), ρ_f(t, p) and k(t, p): fluid volume, fluid density and resonant element stiffness as functions of temperature and pressure respectively. At measurement conditions, the resonant frequency (f) of a vibrating tube densitometer containing a fluid varies not only with the fluid density ρ_f(t, p), but also with the volume of the fluid V_f(t, p) and the stiffness of the resonant element k(t, p), both of which are subject to the temperature/pressure effects of the vibrating tube densitometer. Equation 1 can be rewritten in terms of fluid density as:

ρ_f = K_0 + K_2 · τ²   (Equation 2)

where K_0 = −m_r / V_f, K_2 = k / (4π² V_f), and τ = 1/f is the period of oscillation. As Equation 1 is only a first order approximation to the actual behavior of a vibrating tube densitometer containing a fluid, more generic equations have been developed for use in the calibration of specific vibrating tube densitometers. One example of such a generic equation is:

D = K_0 + K_1 · τ + K_2 · τ²   (Equation 3)

in which K_0, K_1 and K_2 are density coefficients to be calibrated, D is the indicated fluid density, and τ is the period of oscillation. One way to calibrate such a densitometer is to determine K_0, K_1 and K_2 across the full operational temperature and pressure range, with fluids of known density at those conditions. The relationships between K_0, K_1, K_2, and temperature and pressure can then be derived. This method requires numerous calibration points. A more conventional way to calibrate such a densitometer is to first determine the density coefficients K_0, K_1 and K_2 at a reference temperature and pressure condition, such as temperature t_0 = 20° C. and atmospheric pressure p_0 = 1 BarA; then determine the temperature effects of the densitometer at the reference pressure condition; and then determine the pressure effects of the densitometer at the reference temperature condition.
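Equation 3 is straightforward to evaluate once the three coefficients are known. A Python sketch follows; the coefficient and period values used here are purely illustrative, not taken from any real calibration certificate:

```python
def indicated_density(tau, k0, k1, k2):
    """Equation 3: D = K0 + K1*tau + K2*tau^2, with tau the oscillation period."""
    return k0 + k1 * tau + k2 * tau ** 2

# Illustrative numbers only: a period of 60 (arbitrary units) with made-up
# coefficients K0 = -1000, K1 = 0, K2 = 0.5 gives an indicated density of 800.
print(indicated_density(60.0, -1000.0, 0.0, 0.5))  # 800.0
```

In practice K_0, K_1 and K_2 come from fitting the calibration-fluid data described in the text, and the result D is only the uncorrected density that the subsequent temperature and pressure corrections act on.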
In other words, the temperature effects of the densitometer are calibrated at the reference pressure condition, and the pressure effects are calibrated at the reference temperature condition. When a densitometer so calibrated operates at other temperatures and elevated pressures, the indicated fluid density is calculated first and then corrected for the temperature effects characterized at the reference pressure condition and for the pressure effects characterized at the reference temperature condition. For example, one form of temperature correction is:

D_t = D · (1 + K_18 · (t − t_0)) + K_19 · (t − t_0)   (Equation 4)

where t is the operating temperature, t_0 is the reference temperature, and K_18 and K_19 are temperature correction coefficient constants, generally calibrated at atmospheric pressure p_0 = 1 BarA. If necessary or desired in a complex situation, K_18 and K_19 can be expressed as functions of temperature. One form of pressure correction is:

D_p = D_t · (1 + K_20 · (p − p_0)) + K_21 · (p − p_0)   (Equation 5)
K_20 = K_20A + K_20B · (p − p_0) + K_20C · (p − p_0)²   (Equation 6)
K_21 = K_21A + K_21B · (p − p_0) + K_21C · (p − p_0)²   (Equation 7)

where p is the operating pressure, p_0 is the reference pressure, and K_20A, K_20B, K_20C, K_21A, K_21B and K_21C are pressure correction coefficient constants, generally calibrated at a reference temperature t_0 = 20° C. K_20 and K_21 can, if necessary or desired, be expanded as higher order polynomial functions of pressure, or expressed as other functions of pressure. A problem with the above-described calibration is that, at combined elevated pressure and temperature, measurement errors may be observed between the corrected density value D_p and the true density of the fluid under measurement. By way of example, at a combined condition of 80° C.
and 100 BarG on a fluid of base density 826.8 kg/m³, the measurement error can be as great as 0.25%, or 2 kg/m³. This may exceed the error-acceptance level of many applications, particularly fiscal metering applications. It is an object of this invention to provide a method of calibrating a vibrating tube densitometer which will go at least some way towards addressing the problem described, or which will at least provide a novel and useful addition to the art. Accordingly, the invention provides a method of calibrating a vibrating tube densitometer including the steps of: • establishing density coefficients at a reference temperature and pressure condition; establishing temperature effects correction coefficients at the reference pressure condition; and establishing pressure effects correction coefficients at the reference temperature condition; • said method being characterized in that it includes establishing one or more further correction coefficients to compensate for the temperature-pressure coupling effects arising at combined elevated temperature and pressure conditions. Said further correction coefficients may be determined by calibrating the densitometer using two fluids of densities at substantially the opposite ends of the range of specified densities to be accommodated, each fluid being at a combined elevated temperature and pressure. Alternatively, a single further correction coefficient is derived by calibrating the densitometer using a single fluid of a density substantially at the mid-point of the range of specified densities to be accommodated, said single fluid being at a combined elevated temperature and pressure. Many variations in the way the invention may be performed will present themselves to those skilled in the art upon reading the following description. The description should not be regarded as limiting but rather as an illustration, only, of one manner of performing the invention.
Where appropriate any element or component should be taken as including any or all equivalents thereof whether or not specifically mentioned. One working embodiment of the invention will now be described with reference to the accompanying drawings in which: FIG. 1: shows a cross-sectional view of an example of vibrating tube densitometer to which the invention may be applied; FIG. 2: shows one mode of vibration of the densitometer shown in FIG. 1; FIG. 3: shows the performance of a vibrating tube densitometer as currently calibrated, on a first fluid; FIG. 4: shows the performance of the same densitometer used in the FIG. 3 example as currently calibrated, on a second fluid; FIG. 5: shows the performance on the first fluid of the densitometer as used in the FIG. 3 example but calibrated in accordance with a first method according to the invention; FIG. 6: shows the performance on the second fluid of the densitometer as used in the FIG. 3 example but calibrated in accordance with a first method according to the invention; and FIG. 7: shows the performance on the second fluid of the densitometer as used in the FIG. 3 example but calibrated on the first fluid in accordance with a second method according to the invention. As will be described in greater detail below, the invention provides a method of calibrating a vibrating tube densitometer to take into account the temperature-pressure coupling effects which arise at combined elevated temperatures and pressures. Referring to FIG. 1, a vibrating tube densitometer 10 will be well known to those skilled in the art. A vibrating tube 11 is held between a pair of flanges 12 which, in use, are connected between like flanges on a pipe carrying the fluid whose density is to be measured. Sleeves 14 surround the ends of the tube 11 and carry coils 15 which are located adjacent to the points of maximum lateral displacement of the tube 11 as seen in FIG. 1. 
In use the coils are powered to cause the tube to vibrate, in the mode shown in FIG. 2, at its natural frequency. An outer cover 17 is fixed between collars attached to opposite ends of the tube 11. A more thorough description of this form of apparatus can, for example, be found in British Patent 2 062 865. Whilst the description provided herein assumes the lateral mode of vibration shown in FIG. 2, it will be appreciated by those skilled in the art that the general calibration methods herein described are equally applicable to vibrating tube densitometers configured to vibrate in other modes. Conventionally, vibrating tube densitometers are not calibrated at combined elevated temperature and pressure conditions. By way of example, the density coefficients K_0, K_1 and K_2 mentioned above are determined at reference conditions of 20° C. and 1 BarA; the temperature correction coefficients K_18 and K_19 are determined at a reference pressure of 1 BarA; and the pressure correction coefficients K_20A, K_20B, K_20C, K_21A, K_21B and K_21C are determined at a reference temperature of 20° C. The invention proposes methods to calibrate and correct a vibrating tube densitometer for the residual temperature-pressure coupling effects at combined elevated temperatures and pressures according to the following expressions:

D_pt = D_p · (1 + K_22 · (t − t_0) · (p − p_0)) + K_23 · (t − t_0) · (p − p_0)   (Equation 8)
D_pt = D_p + (D_p · K_22 + K_23) · (t − t_0) · (p − p_0)   (Equation 9)
D_pt = D_p + K_pt · (t − t_0) · (p − p_0)   (Equation 10)

in which D_pt is the final indicated density corrected for temperature-pressure coupling effects, K_22 and K_23 are temperature-pressure coupling effects coefficient constants, and K_pt = D_p · K_22 + K_23 is the temperature-pressure coupling effects coefficient for a fluid at measurement conditions. K_22 and K_23 are the coefficients to be calibrated and can generally be assumed to be constants, i.e. independent of temperature and pressure.
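The full correction chain D → Dt → Dp → Dpt defined by Equations 4, 5 and 8 can be sketched in Python. In this sketch K_20 and K_21 are treated as plain constants rather than the pressure polynomials of Equations 6 and 7, and all coefficients default to zero; real values come only from calibration:

```python
def corrected_density(d, t, p, t0=20.0, p0=1.0,
                      k18=0.0, k19=0.0, k20=0.0, k21=0.0, k22=0.0, k23=0.0):
    """Apply Equations 4, 5 and 8 in sequence: D -> Dt -> Dp -> Dpt."""
    dt = d * (1 + k18 * (t - t0)) + k19 * (t - t0)                   # Equation 4
    dp = dt * (1 + k20 * (p - p0)) + k21 * (p - p0)                  # Equation 5
    dpt = (dp * (1 + k22 * (t - t0) * (p - p0))
           + k23 * (t - t0) * (p - p0))                              # Equation 8
    return dpt

# With all coefficients zero the chain is the identity, as expected.
print(corrected_density(826.8, 80.0, 100.0))  # 826.8
```

The ordering matters: the coupling term of Equation 8 acts on the already pressure-corrected value D_p, which is why it captures only the residual error left after the separate temperature and pressure corrections.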
In a complex situation K_22 and K_23 can be expressed as functions of temperature and pressure. It has been found that, on a fluid at a given temperature, the temperature-pressure coupling effects correction (D_pt − D_p) is approximately proportional to the pressure difference (p − p_0); and further, that the proportional constant is approximately proportional to the temperature difference (t − t_0). In principle, the temperature-pressure coupling effects coefficient K_pt is fluid density dependent; however it has been found that, within a defined limited density range, e.g. ±100 kg/m³, K_pt can be approximated to a constant, thus simplifying the calibration. K_22 and K_23 can be determined with the densitometer calibrated on two fluids having densities at the opposite ends of the specified range of densities of interest, each fluid being at an additional combined elevated temperature and elevated pressure condition. The following two equations are thereby obtained:

D_pt(1) = D_p(1) · (1 + K_22 · (t(1) − t_0) · (p(1) − p_0)) + K_23 · (t(1) − t_0) · (p(1) − p_0)   (Equation 11)
D_pt(2) = D_p(2) · (1 + K_22 · (t(2) − t_0) · (p(2) − p_0)) + K_23 · (t(2) − t_0) · (p(2) − p_0)   (Equation 12)

Now let

C(1) = (t(1) − t_0) · (p(1) − p_0)   (Equation 13)
C(2) = (t(2) − t_0) · (p(2) − p_0)   (Equation 14)

From Equations 11 and 12, K_22 and K_23 can be derived as

K_22 = [ (D_pt(1)·C(2) − D_pt(2)·C(1)) − (D_p(1)·C(2) − D_p(2)·C(1)) ] / [ (D_p(1) − D_p(2)) · C(1) · C(2) ]   (Equation 15)

K_23 = [ D_pt(1) − D_p(1) · (1 + K_22 · C(1)) ] / C(1)   (Equation 16)

An alternative approach is to derive a single correction factor K_pt. Within a limited density range, K_pt can be approximated to a constant, therefore simplifying the calibration of the temperature-pressure coupling effects.
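Before the single-fluid alternative is detailed, the two-fluid solution of Equations 15 and 16 can be checked numerically with a round trip: generate D_pt(1) and D_pt(2) from chosen coefficients via Equations 11 and 12, then recover K_22 and K_23. All the numbers below are illustrative, not calibration data:

```python
def coupling_coefficients(dpt1, dp1, c1, dpt2, dp2, c2):
    """Equations 15 and 16: K22 and K23 from two-fluid calibration data."""
    k22 = (((dpt1 * c2 - dpt2 * c1) - (dp1 * c2 - dp2 * c1))
           / ((dp1 - dp2) * c1 * c2))
    k23 = (dpt1 - dp1 * (1 + k22 * c1)) / c1
    return k22, k23

# Round-trip check with illustrative values.
k22_true, k23_true = 1e-7, 2e-5
dp1, dp2 = 826.8, 914.0                      # the two base densities cited above
c1 = (80.0 - 20.0) * (100.0 - 1.0)           # C(1), Equation 13
c2 = (60.0 - 20.0) * (70.0 - 1.0)            # C(2), Equation 14
dpt1 = dp1 * (1 + k22_true * c1) + k23_true * c1   # Equation 11
dpt2 = dp2 * (1 + k22_true * c2) + k23_true * c2   # Equation 12
k22, k23 = coupling_coefficients(dpt1, dp1, c1, dpt2, dp2, c2)
print(abs(k22 - k22_true) < 1e-12, abs(k23 - k23_true) < 1e-9)  # True True
```

The algebra works because each of Equations 11 and 12 is linear in K_22 and K_23, so the pair forms a 2×2 linear system, solvable whenever the two fluids have distinct D_p values and both C terms are nonzero.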
K_pt can be determined by calibrating the densitometer using a single fluid of a density in the middle of the specified range of densities of interest, at an additional combined elevated temperature and elevated pressure condition. The following equation is thus obtained:

D_pt = D_p + K_pt · (t − t_0) · (p − p_0)   (Equation 17)

from which K_pt can be derived as

K_pt = (D_pt − D_p) / ((t − t_0) · (p − p_0))   (Equation 18)

With more calibration points of the densitometer at multiple combined elevated temperature and pressure conditions, and on multiple fluids, K_22, K_23 and K_pt can be determined by a least mean squares fit, solving multiple instances of Equations 8, 9 or 10 above. The densitometer is installed on a temperature and pressure controlled rig circulating a first calibration fluid of known base density. The temperature is set to 20° C., the pressure to 0 bar gauge, and the rig is allowed to stabilize. When stabilized, temperature, pressure and densitometer time period readings are recorded. While maintaining the temperature at 20° C., the pressure is stepped up to the maximum pressure. At each pressure step the rig is allowed to stabilize before the same set of readings is taken. Typically readings are taken at five pressure points, for example 0, 30, 50, 70 and 100 bar gauge. Once data has been collected for all pressure readings at 20° C., the system temperature is raised to an elevated temperature, typically 60 or 80° C., and a further set of readings is taken at each pressure point (0, 30, 50, 70, 100 bar gauge). The densitometer is then taken off the rig, cleaned, and installed on a second, identical rig circulating a second calibration fluid of a different but known base density. The same calibration steps as described above are then undertaken and the same range of readings obtained.
Next the densitometer is cleaned and mounted on a third rig circulating a third fluid having a base density which differs from that of the first and second fluids. After stabilization a set of readings is taken at 20° C. and 0 bar gauge. As an alternative to the measurement using this third fluid, a measurement can be taken in air at 20° C. in a temperature controlled area, using barometric pressure to determine the density of air. The density calibration coefficients are normally referenced to 20° C. and 0 bar gauge. In reality the measurements will not be exactly at 20° C. or 0 bar gauge, so it is not possible to calculate the density coefficients, the temperature coefficients and the pressure coefficients independently of each other. As a consequence the calculation routines usually involve looped calculations and several iterations. Broadly speaking, the values of K_0, K_1 and K_2 are calculated from all three calibration fluids (or air in place of the third calibration fluid) at 20° C. and 0 bar; K_18 and K_19 are calculated from the first two calibration fluids at 20° C. and at elevated temperature, at 0 bar; K_20A, K_20B, K_21A, K_21B and K_21C are calculated using the first two calibration fluids at 20° C. and at each pressure point; and K_22 and K_23 are calculated using a combination of all the data. Experimental Results. FIGS. 3 to 7 show comparisons between the current performance with the existing calibration method and the new performance obtained using the alternative methods proposed herein. In all cases density measurement errors are shown at a range of temperature/pressure combinations. The fluid used in the examples shown in FIGS. 3 and 5 has a base density of 826.8 kg/m³ whilst the fluid used in the examples shown in FIGS. 4, 6 and 7 has a base density of 914.0 kg/m³. It can clearly be seen from FIGS.
3 and 4 that, with no correction for temperature-pressure coupling effects, significant density measurement errors arise at combined elevated temperatures and pressures. Referring to FIGS. 5 and 6, by calculating and applying the correction coefficients K_22 and K_23 in the manner described above, the residual density measurement errors due to temperature-pressure coupling effects are substantially corrected. FIG. 7 shows the errors of the densitometer on a fluid of base density 914.0 kg/m³ with a K_pt value calibrated on a fluid of base density 826.8 kg/m³ according to the second, alternative, method described above. It can be seen that the residual density measurement errors due to temperature-pressure coupling effects are also substantially corrected. In relation to the FIG. 7 example, it should be pointed out that, since only two calibration fluids were available, we were able to demonstrate that if the K_pt value is calibrated at a density of 826.8 kg/m³ (i.e. this is adopted as the middle value), the resulting K_pt value obtained is applicable to a fluid of density as high as 914.0 kg/m³, as tested. As can be seen, both methods yield much smaller density measurement errors at a temperature and pressure combination of 80° C. and 101 BarA, compared with the errors of 2.0 kg/m³ and 1.8 kg/m³ which arise with the same fluids of base density 826.8 kg/m³ and 914.0 kg/m³ respectively, at the same temperature/pressure combination, when calibrated according to current practice. Thus, with the method proposed in the invention, densitometer measurement performance at combined elevated temperature and pressure conditions is significantly improved over its current performance. 1.
A vibrating tube densitometer calibrated in accordance with a method, wherein: density correction coefficients are established at a reference temperature and pressure condition and are combined with a time period of oscillation measurement to give an uncorrected density value D; temperature effects correction coefficients are established at a reference pressure condition and are combined with a temperature measurement to modify the uncorrected density value D to obtain a temperature corrected density value Dt; pressure effects correction coefficients are established at a reference temperature condition and are combined with a pressure measurement to modify said temperature corrected density value Dt to obtain a pressure corrected density value Dp; establishing one or more further correction factors to compensate for temperature-pressure coupling effects arising at a combination of elevated temperature and elevated pressure, said one or more further correction factors being combined with said measurements of temperature and pressure to modify said pressure corrected density value Dp to thereby obtain a temperature-pressure coupling effects corrected density value Dpt, wherein said one or more further correction factors are determined by calibrating the densitometer using two fluids of densities at substantially the opposite ends of the range of specified densities to be accommodated, each fluid being at a combined elevated temperature and pressure. 2.
A vibrating tube densitometer calibrated in accordance with a method, wherein: density correction coefficients are established at a reference temperature and pressure condition and are combined with a time period of oscillation measurement to give an uncorrected density value D; temperature effects correction coefficients are established at a reference pressure condition and are combined with a temperature measurement to modify the uncorrected density value D to obtain a temperature corrected density value Dt; pressure effects correction coefficients are established at a reference temperature condition and are combined with a pressure measurement to modify said temperature corrected density value Dt to obtain a pressure corrected density value Dp; establishing one or more further correction factors to compensate for temperature-pressure coupling effects arising at a combination of elevated temperature and elevated pressure, said one or more further correction factors being combined with said measurements of temperature and pressure to modify said pressure corrected density value Dp to thereby obtain a temperature-pressure coupling effects corrected density value Dpt, wherein a single further correction coefficient is derived by calibrating the densitometer using a single fluid of density substantially at the mid-point of the range of specified densities to be accommodated, said single fluid being at a combined elevated temperature and pressure.

References Cited

U.S. Patent Documents
4872351 October 10, 1989 Ruesch
20020184940 December 12, 2002 Storm et al.
20030200816 October 30, 2003 Francisco, Jr.
20040123645 July 1, 2004 Storm, Jr. et al.
20070186684 August 16, 2007 Pham
20080257066 October 23, 2008 Henry et al.

Foreign Patent Documents
1 306 659 May 2003 EP
2 062 865 May 1981 GB
WO 2005/040733 September 2003 WO
2005003690 January 2005 WO
2005010467 February 2005 WO
WO 2006/009548 January 2006 WO

Other references
• Charles S. Oakes et al.
“Apparent Molar Volumes of Aqueous Calcium Chloride to 250° C., 400 bars, and from Molalities of 0.242 to 6.150”, Journal of Solution Chemistry, vol. 24, No. 9, 1995, pp.
• International Search Report from PCT/GB2011/000154, dated Jun. 15, 2011.
• Search Report for GB Application No. GB1001948.7, dated Apr. 22, 2010, 1 page.

Patent History

Patent number: 9322759
Filed: Feb 4, 2011
Date of Patent: Apr 26, 2016
Patent Publication Number: 20120310579
Assignee: Mobrey Limited
Inventors: Tinghu Yan, George Macdonald, David Malcolm Campbell
Primary Examiner: Alexander Satanovsky
Application Number: 13/576,806
Current U.S. Class: 73/32.0A
International Classification: G01N 9/00 (20060101)
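The layered correction scheme described in the claims (D → Dt → Dp → Dpt) can be sketched in code. The functional forms below are illustrative assumptions — the patent text names the coefficient groups (K[0]–K[2], K[18]/K[19], K[20A]–K[21C], K[22]/K[23]) but the exact equations are not reproduced here — so treat this as a sketch of the structure, not the patented formulas.

```python
def corrected_density(tau, t_degC, p_bar, K):
    """Sketch of the correction chain D -> Dt -> Dp -> Dpt.

    tau: time period of oscillation; t_degC: temperature in deg C;
    p_bar: pressure in bar gauge; K: dict of calibration coefficients.
    The functional forms are illustrative assumptions, not the
    patented equations.
    """
    # Uncorrected density from the oscillation period (K0, K1, K2),
    # referenced to 20 deg C and 0 bar gauge.
    D = K["K0"] + K["K1"] * tau + K["K2"] * tau ** 2

    # Temperature correction at reference pressure (K18, K19).
    dt = t_degC - 20.0
    Dt = D * (1.0 + K["K18"] * dt) + K["K19"] * dt

    # Pressure correction at reference temperature; K20 and K21 are
    # themselves polynomials in pressure (K20A/K20B, K21A/K21B/K21C).
    K20 = K["K20A"] + K["K20B"] * p_bar
    K21 = K["K21A"] + K["K21B"] * p_bar + K["K21C"] * p_bar ** 2
    Dp = Dt * (1.0 + K20 * p_bar) + K21 * p_bar

    # Temperature-pressure coupling correction (K22, K23), which only
    # contributes when both temperature and pressure deviate from the
    # reference condition.
    Dpt = Dp * (1.0 + K["K22"] * dt * p_bar) + K["K23"] * dt * p_bar
    return Dpt
```

At the reference condition (20° C., 0 bar gauge) every correction term vanishes and Dpt reduces to D, which is why the calibration routines can iterate over the coefficient groups in turn until the values converge.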
Definitions/Strictly Increasing Mappings

This category contains definitions related to Strictly Increasing Mappings. Related results can be found in Category:Strictly Increasing Mappings.

Let $\struct {S, \preceq_1}$ and $\struct {T, \preceq_2}$ be ordered sets.

Let $\phi: \struct {S, \preceq_1} \to \struct {T, \preceq_2}$ be a mapping.

Then $\phi$ is strictly increasing if and only if:

$\forall x, y \in S: x \prec_1 y \implies \map \phi x \prec_2 \map \phi y$

Note that this definition also holds if $S = T$.
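For comparison, the defining condition can be written as a Lean 4 predicate (assuming Mathlib for `Preorder` and the `Type*` notation; the name `StrictlyIncreasing` is ours — Mathlib's `StrictMono` captures the same notion):

```lean
-- φ is strictly increasing iff it maps strictly smaller elements
-- of S to strictly smaller elements of T.
def StrictlyIncreasing {S T : Type*} [Preorder S] [Preorder T]
    (φ : S → T) : Prop :=
  ∀ ⦃x y : S⦄, x < y → φ x < φ y
```

The case $S = T$ needs no special treatment: the definition mentions only the two orders, which may or may not coincide.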
Let $f: \{0,1\}^n \to \{0, 1\}$ be a boolean function, and let $f_\land (x, y) = f(x \land y)$ denote the AND-function of $f$, where $x \land y$ denotes bit-wise AND. We study the deterministic communication complexity of $f_\land$ and show that, up to a $\log n$ factor, it is bounded by a polynomial in the logarithm of the real rank of the communication matrix of $f_\land$. This comes within a $\log n$ factor of establishing the log-rank conjecture for AND-functions with no assumptions on $f$. Our result stands in contrast with previous results on special cases of the log-rank conjecture, which needed significant restrictions on $f$ such as monotonicity or low $\mathbb{F}_2$-degree. Our techniques can also be used to prove (within a $\log n$ factor) a lifting theorem for AND-functions, stating that the deterministic communication complexity of $f_\land$ is polynomially-related to the AND-decision tree complexity of $f$. The results rely on a new structural result regarding boolean functions $f:\{0, 1\}^n \to \{0, 1\}$ with a sparse polynomial representation, which may be of independent interest. We show that if the polynomial computing $f$ has few monomials then the set system of the monomials has a small hitting set, of size poly-logarithmic in its sparsity. We also establish extensions of this result to multi-linear polynomials $f:\{0,1\}^n \to \mathbb{R}$ with a larger range.

Changes to previous version: Fixed author order.

TR20-155 | 18th October 2020 17:09

Log-rank and lifting for AND-functions
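To make the object of study concrete, here is a small numeric sketch (function names are ours) of the communication matrix of an AND-function: rows and columns are indexed by $x, y \in \{0,1\}^n$, and entry $(x, y)$ is $f(x \land y)$. For $f = \mathrm{AND}_n$ the matrix has a single 1, at $(1^n, 1^n)$, so its real rank is 1.

```python
import itertools
from fractions import Fraction

def and_function_matrix(f, n):
    """Communication matrix of f_AND: entry (x, y) = f(x AND y)."""
    points = list(itertools.product([0, 1], repeat=n))
    return [[f(tuple(a & b for a, b in zip(x, y)))
             for y in points] for x in points]

def real_rank(M):
    """Rank over the rationals via Gaussian elimination (exact arithmetic)."""
    A = [[Fraction(v) for v in row] for row in M]
    rank = 0
    for col in range(len(A[0])):
        pivot = next((r for r in range(rank, len(A)) if A[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        A[rank], A[pivot] = A[pivot], A[rank]
        for r in range(len(A)):
            if r != rank and A[r][col] != 0:
                factor = A[r][col] / A[rank][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

# f = AND of all n bits, so f_AND(x, y) = 1 iff x = y = 1^n.
M = and_function_matrix(lambda z: int(all(z)), 3)
print(real_rank(M))  # -> 1
```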
LanguageFeatures/Set-literals/type_inference_A27_t01.dart - co19 - Git at Google

// Copyright (c) 2019, the Dart project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.

/// @assertion Finally, we define inference on a setOrMapLiteral collection as
/// follows:
/// ...
/// Otherwise, collection is still ambiguous, the downwards context for the
/// elements of collection is ?, and the disambiguation is done using the
/// immediate elements of collection as follows:
/// ...
/// If all elements can be a map, and at least one element must be a map, then e
/// is a map literal with static type Map<K, V> where K is the least upper bound
/// of the key types of the elements and V is the least upper bound of the value
/// types.
/// @description Checks that if all elements can be a map, and at least one
/// element must be a map, then e is a map literal with static type Map<K, V>
/// where K is the least upper bound of the key types of the elements and V is
/// the least upper bound of the value types.
/// @author sgrekhov@unipro.ru

import "../../Utils/expect.dart";

main() {
  dynamic d1 = 1;
  dynamic d2 = 2;
  var m = {d1: d2, 3: 4};
  Expect.isTrue(m is Map<dynamic, dynamic>);
  Expect.isFalse(m is Map<int, int>);
  Expect.runtimeIsType<Map<dynamic, dynamic>>(m);
  Expect.runtimeIsNotType<Map<int, int>>(m);
}
2022 AMC 8 Problems/Problem 15 Laszlo went online to shop for black pepper and found thirty different black pepper options varying in weight and price, shown in the scatter plot below. In ounces, what is the weight of the pepper that offers the lowest price per ounce? $[asy] //diagram by pog size(5.5cm); usepackage("mathptmx"); defaultpen(mediumgray*0.5+gray*0.5+linewidth(0.63)); add(grid(6,6)); label(scale(0.7)*"1", (1,-0.3), black); label(scale(0.7)*"2", (2,-0.3), black); label(scale(0.7)*"3", (3,-0.3), black); label(scale(0.7)*"4", (4,-0.3), black); label(scale(0.7)*"5", (5,-0.3), black); label(scale(0.7)*"1", (-0.3,1), black); label(scale(0.7)*"2", (-0.3,2), black); label(scale(0.7)*"3", (-0.3,3), black); label(scale(0.7)*"4", (-0.3,4), black); label(scale(0.7)*"5", (-0.3,5), black); label(scale(0.8)*rotate(90)*"Price (dollars)", (-1,3.2), black); label(scale(0.8)*"Weight (ounces)", (3.2,-1), black); dot((1,1.2),black); dot((1,1.7),black); dot((1,2),black); dot((1,2.8),black); dot((1.5,2.1),black); dot((1.5,3),black); dot ((1.5,3.3),black); dot((1.5,3.75),black); dot((2,2),black); dot((2,2.9),black); dot((2,3),black); dot((2,4),black); dot((2,4.35),black); dot((2,4.8),black); dot((2.5,2.7),black); dot ((2.5,3.7),black); dot((2.5,4.2),black); dot((2.5,4.4),black); dot((3,2.5),black); dot((3,3.4),black); dot((3,4.2),black); dot((3.5,3.8),black); dot((3.5,4.5),black); dot((3.5,4.8),black); dot ((4,3.9),black); dot((4,5.1),black); dot((4.5,4.75),black); dot((4.5,5),black); dot((5,4.5),black); dot((5,5),black); [/asy]$ $\textbf{(A) }1\qquad\textbf{(B) }2\qquad\textbf{(C) }3\qquad\textbf{(D) }4\qquad\textbf{(E) }5$ $[asy] //diagram by pog size(5.5cm); usepackage("mathptmx"); defaultpen(mediumgray*0.5+gray*0.5+linewidth(0.63)); add(grid(6,6)); label(scale(0.7)*"1", (1,-0.3), black); label(scale(0.7)*"2", (2,-0.3), black); label(scale(0.7)*"3", (3,-0.3), black); label(scale(0.7)*"4", (4,-0.3), black); label(scale(0.7)*"5", (5,-0.3), black); label(scale(0.7)*"1", 
(-0.3,1), black); label(scale(0.7)*"2", (-0.3,2), black); label(scale(0.7)*"3", (-0.3,3), black); label(scale(0.7)*"4", (-0.3,4), black); label(scale(0.7)*"5", (-0.3,5), black); label(scale(0.8)*rotate(90)*"Price (dollars)", (-1,3.2), black); label(scale(0.8)*"Weight (ounces)", (3.2,-1), black); draw((0,0)--(6,5),red); dot((1,1.2),black); dot((1,1.7),black); dot((1,2),black); dot((1,2.8),black); dot((1.5,2.1),black); dot((1.5,3),black); dot((1.5,3.3),black); dot((1.5,3.75),black); dot((2,2),black); dot((2,2.9),black); dot((2,3),black); dot((2,4),black); dot((2,4.35),black); dot((2,4.8),black); dot((2.5,2.7),black); dot((2.5,3.7),black); dot((2.5,4.2),black); dot((2.5,4.4),black); dot((3,2.5),blue); dot((3,3.4),black); dot((3,4.2),black); dot((3.5,3.8),black); dot((3.5,4.5),black); dot((3.5,4.8),black); dot((4,3.9),black); dot((4,5.1),black); dot((4.5,4.75),black); dot((4.5,5),black); dot((5,4.5),black); dot((5,5),black); [/asy]$

We are looking for the point that, when connected to the origin, yields the line of lowest slope; the slope represents the price per ounce. We can visually find that the point with the lowest slope is the blue point, and it is the only one with a price per ounce significantly less than $1$. Finally, we see that the blue point is in the column with a weight of $\boxed{\textbf{(C) } 3}$ ounces.

Solution 2 (Elimination)

By the answer choices, we can disregard the points that do not have integer weights.
As a result, we obtain the following diagram: $[asy] //diagram by pog size(5.5cm); usepackage("mathptmx"); defaultpen(mediumgray*0.5+gray*0.5+linewidth(0.63)); add(grid(6,6)); label(scale(0.7)*"1", (1,-0.3), black); label(scale(0.7)*"2", (2,-0.3), black); label(scale(0.7)*"3", (3,-0.3), black); label(scale(0.7)*"4", (4,-0.3), black); label(scale(0.7)*"5", (5,-0.3), black); label(scale(0.7)*"1", (-0.3,1), black); label(scale(0.7)*"2", (-0.3,2), black); label(scale(0.7)*"3", (-0.3,3), black); label(scale(0.7)*"4", (-0.3,4), black); label(scale(0.7)*"5", (-0.3,5), black); label(scale(0.8)*rotate(90)*"Price (dollars)", (-1,3.2), black); label(scale(0.8)*"Weight (ounces)", (3.2,-1), black); dot((1,1.2),black); dot((1,1.7),black); dot((1,2),black); dot((1,2.8),black); dot((2,2),black); dot((2,2.9),black); dot((2,3),black); dot((2,4),black); dot((2,4.35),black); dot((2,4.8),black); dot((3,2.5),blue); dot((3,3.4),black); dot((3,4.2),black); dot((4,3.9),black); dot((4,5.1),black); dot((5,4.5),black); dot((5,5),black); [/asy]$

We then proceed in the same way as in Solution 1: we find the blue dot whose line through the origin has the lowest slope. Looking at the x-axis (the weight), we see that it is at $\boxed{\textbf{(C) } 3}$ ounces.

~DairyQueenXD (edited by HW73)

Solution 3 (Elimination)

We can find the lowest point in each column ($1$, $2$, $3$, $4$, or $5$ ounces) and compute its price per ounce. (Note that we don't need to consider the points above these, since we are looking for the lowest price per ounce.)
$[asy] //diagram by pog size(5.5cm); usepackage("mathptmx"); defaultpen(mediumgray*0.5+gray*0.5+linewidth(0.63)); add(grid(6,6)); label(scale(0.7)*"1", (1,-0.3), black); label(scale(0.7)*"2", (2,-0.3), black); label(scale(0.7)*"3", (3,-0.3), black); label(scale(0.7)*"4", (4,-0.3), black); label(scale(0.7)*"5", (5,-0.3), black); label(scale(0.7)*"1", (-0.3,1), black); label(scale(0.7)*"2", (-0.3,2), black); label(scale(0.7)*"3", (-0.3,3), black); label(scale(0.7)*"4", (-0.3,4), black); label(scale(0.7)*"5", (-0.3,5), black); label(scale(0.8)*rotate(90)*"Price (dollars)", (-1,3.2), black); label(scale(0.8)*"Weight (ounces)", (3.2,-1), black); dot((1,1.2),red); dot((1,1.7),black); dot((1,2),black); dot((1,2.8),black); dot((2,2),green); dot((2,2.9),black); dot((2,3),black); dot((2,4),black); dot((2,4.35),black); dot((2,4.8),black); dot((3,2.5),blue); dot((3,3.4),black); dot((3,4.2),black); dot((4,3.9),orange); dot((4,5.1),black); dot((5,4.5),purple); dot((5,5),black); [/asy]$

The red dot has a price per ounce larger than $1$. The green dot has a price per ounce of exactly $1$. The blue dot has a price per ounce of about $\frac{2.5}{3}$. The orange dot has a price per ounce that is less than $1$, but very close to it. The purple dot has a price per ounce of $\frac{4.5}{5}$. We see that choices $\textbf{(A)}$, $\textbf{(B)}$, and $\textbf{(D)}$ are eliminated. Also, $\frac{4.5}{5} > \frac{2.5}{3}$, so the answer is $\boxed{\textbf{(C) } 3}$.

The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
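The visual argument can also be checked numerically. Below, the thirty (weight, price) coordinates are transcribed from the Asymptote source and the minimum price per ounce is computed directly (a sketch; values like 4.35 are plot positions read off the diagram):

```python
# (weight in ounces, price in dollars) for all thirty options on the plot.
options = [
    (1, 1.2), (1, 1.7), (1, 2), (1, 2.8),
    (1.5, 2.1), (1.5, 3), (1.5, 3.3), (1.5, 3.75),
    (2, 2), (2, 2.9), (2, 3), (2, 4), (2, 4.35), (2, 4.8),
    (2.5, 2.7), (2.5, 3.7), (2.5, 4.2), (2.5, 4.4),
    (3, 2.5), (3, 3.4), (3, 4.2),
    (3.5, 3.8), (3.5, 4.5), (3.5, 4.8),
    (4, 3.9), (4, 5.1),
    (4.5, 4.75), (4.5, 5),
    (5, 4.5), (5, 5),
]
# Lowest price per ounce = smallest slope of the line from the origin.
best = min(options, key=lambda wp: wp[1] / wp[0])
print(best)  # -> (3, 2.5), i.e. answer (C), 3 ounces
```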
Viewer Mail: Larger Moves on Different Stock Options

I received a great question today from Jeremy:

Today, September in the money call options in $TZA had a greater positive mark % change than out of the money call options. Why is that? My guess: since they are in the money they are worth more today, and since it's a first day move down, option buyers are reluctant to buy far out and are choosing to buy in the money instead. If the downside in the market continues, out of the money should outperform in the money?

Jeremy, welcome to the wonderful world of higher-order greeks. Your guess is close, but needs a little refinement. To best approach this question, we need to consider what a call option looks like at options expiration. This is a p/l graph for a bought call option: we know that at expiration it will be at a full loss as long as it is out of the money, and if it goes in the money (beyond the premium paid) then you can make a profit.

What we want to concern ourselves with here is the "delta." This is the directional exposure per $1 move in the stock. It can also be viewed as the slope of that p/l line. Here's the delta at expiration: we know that at expiration, no matter what, the OTM option will have no directional exposure, because there's no reason to have it at a positive value-- there's no advantage to owning it. We also know that ITM options are all intrinsic value, which means they will behave like stock and have 100 delta.

Now here's where it gets tricky: we know how the delta of call options will behave at expiration. But this delta will be different with time left. Therefore, the delta of an option will change over time, and begin to approach either 0 or 100. This is known as charm, or delta decay. Here's what it looks like without my awesome MSPaint skills:

Back to the original question: the Sep $TZA call options have about 2 weeks left to options expiration.
There will be a higher relative delta decay on those OTM options, and coupled with the time decay on the value of the option, they have just begun to lose their sparkle. So when the market moves higher, the OTM options will continue to lose the potential "kick" that they had, say, 3 days ago. And to top it all off, implied volatility and skew continued to "normalize," which also made ATM calls in $TZA a "less bad" trade than OTM calls.

So it's not that option buyers are hesitant to buy OTM options; it's that the mathematical models are starting to reduce the odds as we get closer to opex and as volatility contracts. That's why it's so important to choose the right strikes when you're hedging, otherwise you'll get burnt on the delta decay.

Now if we do in fact start to run higher in $TZA, then your OTM calls will move ITM, which will, on a percentage basis, be better than if you had held the original ATM options. But as time continues to pass, the odds of that will decrease.

If you understand charm, then check out my intro to color.

Did you have an 'a ha' moment when reading this post? Imagine that, except for 19 hours in video. OptionFu will make you a great options trader. And yes, we talk about charm, too.
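The delta-decay story can be made quantitative with a Black-Scholes sketch (an assumption here — the post doesn't specify a pricing model, and the strikes and volatility below are made-up illustrative numbers): hold the stock price fixed and shrink the time to expiration, and the call's delta drifts toward 0 for an OTM strike and toward 1 (i.e. 100 per 100 shares) for an ITM strike.

```python
from math import erf, log, sqrt

def call_delta(S, K, T, r, sigma):
    """Black-Scholes delta of a European call: N(d1)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    # Standard normal CDF via erf, so no external libraries are needed.
    return 0.5 * (1.0 + erf(d1 / sqrt(2.0)))

S, r, sigma = 100.0, 0.01, 0.30  # illustrative stock price, rate, vol
for T in (0.25, 0.10, 0.02):     # years to expiration, shrinking
    otm = call_delta(S, 110.0, T, r, sigma)  # out-of-the-money strike
    itm = call_delta(S, 90.0, T, r, sigma)   # in-the-money strike
    print(f"T={T:.2f}  OTM delta={otm:.3f}  ITM delta={itm:.3f}")
```

With the underlying unchanged, the OTM delta melts toward 0 while the ITM delta climbs toward 1 — which is why the September OTM calls lag the ITM ones as opex approaches, even before any change in implied volatility.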